{"text": "# Parameter Selection for the Functionally Assembled Terrestrial Ecosystem Simulator\n\n__Summary__ \nNumerical, process-based models that simulate tropical forest ecosystem dynamics, such as the Functionally Assembled Terrestrial Ecosystem Simulator (FATES), have been proposed as a way to improve climate change projections. However, parameterizing these complex models is challenging due to their numerous and interconnected non-linear relationships. Here, we to identify three high-performing parameter sets for use in future FATES simulations by quantitatively evaluating the performance of nearly 300 simulations, each run with a unique parameter set, against observations at a tropical forest test site.\n\n__Motivation__ \nGreenhouse gas emissions from human activities are warming the Earth with potentially catastrophic implications for human health. Projections of twenty-first century warming are critical for climate change mitigation and adaptation efforts. Unfortunately, the numerical models used to project future climate differ in their predictions of warming even for the same greenhouse gas emissions scenario. This variability in projections across models stems largely from uncertainty about how Earth's vegetation will respond to climate change. In particular, predictions of tropical forest responses to climate change must be improved, as these forests exert strong control over global climate. Numerical, process-based models of vegetated ecosystem dynamics, such as the Functionally Assembled Terrestrial Ecosystem Simulator (FATES; Fisher et al., 2015, 2018), have been proposed as a way of improving projections of tropical forests and thus future climate. However, parameterizing these complex models remains challenging due to their numerous parameters and interconnected, non-linear relationships.\n\n__Goal__ \nThe goal of this analysis is to identify model parameter sets that allows the FATES model to best match observations of tropical forest structure and functioning at a test site.\n\n__Methods__ \nThis analysis identifies high-performing parameter sets for use in future experiments by quantitatively evaluating the performance of nearly 300 different FATES parameter sets against observations at a tropical forest test site, Barro Colorado Island, Panama. \n\n_Parameter ensemble simulations_ \nPrior to this analysis, we ran an ensemble of FATES simulations at our test site, Barro Colorado Island, Panama. Each simulation in the ensemble was initialized with a unique parameter set but was otherwise set-up identically to all other simulations. The 287 parameter sets we tested differed in 12 key plant trait parameters, which were sampled from observed distributions when possible (following Koven et al., _in prep_). All simulations were forced with repeating meteorological data for Barro Colorado Island, Panama, from the years 2003 to 2016 (Faybishenko et al., 2018). See Kovenock (2019) for further details of the parameter ensemble simulations.\n\nPlant structure and functioning are sensitive to atmospheric carbon dioxide concentration, which increased over the observational time period. 
We therefore repeat all parameter ensemble simulations and our analysis below for two carbon dioxide concentrations that approximately bookend the observational time period (367 ppm and 400 ppm carbon dioxide).\n\n_Parameter set evaluation and selection_ \nThis code uses the above ensemble of simulations to quantitatively evaluate the performance of each parameter set against observations of six variables at our tropical forest test site. These variables characterize ecosystem structure (leaf area index, above-ground biomass, basal area) and functioning (gross primary productivity, latent heat flux, sensible heat flux). Observations come from the following sources: leaf area index from Detto et al. (2018); above-ground biomass from Meakem et al. (2018), Feeley et al. (2007), and Baraloto et al. (2013); basal area from Condit et al. (2017, 2012), Condit (1998), and Hubbell et al. (1999); and gross primary productivity, latent heat flux, and sensible heat flux from Koven et al. (in prep). As some of these data sets require permission to use or are not yet publicly available, observational data sets are not included for download here.\n\nWe use two performance metrics, error rate and normalized root mean square error, to quantify each parameter set's performance. The error rate measures how frequently the model output falls outside the observed range for each variable. The normalized root mean square error measures the distance between the model output and the observed mean, relative to the observed range for each variable. The expectation is that high-performing parameter sets will result in model output that falls within the range of observations (low error rate) and near the observed mean (low normalized root mean square error). After evaluating a parameter set's performance for each individual variable, we calculate the weighted average performance across all variables for that parameter set. To ensure that our selection of a high-performing parameter set is robust to weighting method, we consider three different weighting approaches: even weighting, weighting favoring structural variables, and weighting accounting for correlation between individual variable performance.\n\nLastly, we identify parameter sets for use in future experiments by assigning an overall rank to each parameter set based on its performance across both performance metrics, three weighted averaging approaches, and two background carbon dioxide concentrations.\n\n__Results__ \nThis analysis identifies three high-performing parameter sets for use in future FATES simulations. We recommend the highest-performing parameter set for use in primary experiments and the next two highest-performing parameter sets for use in parameter sensitivity tests. These high-performing parameter sets are publicly available through the University of Washington ResearchWorks digital repository at http://hdl.handle.net/1773/43779. The performance of these parameter sets is reported in further detail in the [Results](#Results) section below and in Kovenock (2019).\n\n\n## Analysis\n## Step 1: Load libraries\n\n\n```python\nimport netCDF4 as nc4\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport scipy\nfrom scipy import stats\n```\n\n## Step 2: Load and preprocess data\n\nHere we load the data and calculate time series of annual mean values for six ecosystem characteristics for all the simulations and observations. 
This code section returns two multidimensional arrays, one for model output and one for observations, that contain annual mean values for each variable organized by parameter set and background carbon dioxide concentration.\n\nThe six ecosystem characteristics included in these arrays are:\n\n- Leaf area index,\n- Above-ground biomass,\n- Basal area,\n- Gross primary productivity,\n- Latent heat flux, and\n- Sensible heat flux.\n\n### 2.1 Model output\n\nThis part of the code loads the model output for all FATES simulations in our parameter ensemble. Then it calculates time series of annual mean values for the six variables we use to evaluate the performance of each parameter set.\n\n\n```python\ndef annmeants(filepath,var,varfiletype,nyrs,conv_factor):\n ''' Calculate time series of annual means for a model output variable.\n :param filepath (str): file path to data file\n :param var (str): name of variable to call from filename\n :param nyrs (int, float): number of years to analyze\n :param conv_factor (float): conversion factor specific to variable specified by var\n :return: 2-D array containing annual mean time series (ensemble member, nyrs)\n '''\n \n # If model output is stored as monthly average for all tree sizes,\n # need to calculate annual mean. \n if varfiletype == 0:\n \n # Load monthly time series\n # For all cases except latent heat flux (FLH):\n if var != 'FLH':\n mthts_temp = nc4.Dataset(filepath).variables[var][:,:,0]\n \n # For the special case of latent heat flux:\n elif var == 'FLH':\n # Sum of three terms:\n mthts_temp = (nc4.Dataset(filepath).variables['FCTR'][:,:,0] \n + nc4.Dataset(filepath).variables['FGEV'][:,:,0] \n + nc4.Dataset(filepath).variables['FCEV'][:,:,0])\n \n \n # Calculate annual mean time series for nyrs and convert units if necessary\n annmeants = np.nanmean(np.reshape((mthts_temp[:,int(-1*nyrs*12):] * conv_factor),\n (mthts_temp.shape[0],-1,12)),axis=2)\n \n # Else if model output is stored as annual mean but structured by tree size,\n # need to sum across tree sizes.\n elif varfiletype == 1:\n # Calculate annual mean time series for entire ecosystem by summing across tree sizes\n annmeants = np.squeeze(np.nansum((\n nc4.Dataset(filepath).variables[var + '_SCLS'][:,int(-1*nyrs):,:]),\n axis=2))\n \n mthts_temp = None\n \n return annmeants\n```\n\nFirst, we will specify the information required to load and calculate annual mean time series of model output for each simulation, including file paths and names, variables to analyze, and conversion factors.\n\n\n```python\n# Filepath\nmodel_filepath = 'data/'\n\n# Filenames\n# {1} = carbon dioxide concentration specified by CO2level;\n# {2} = variable file type specified by varfiletype.\nmodel_filenames =[\n 'fates_clm5_fullmodel_bci_parameter_ensemble_1pft_slaprofile_{}_v001.I2000Clm50FatesGs.Cdf9b02d-Fb178808.2018-07-27.h{}.ensemble.sofar.nc',\n 'fates_clm5_fullmodel_bci_parameter_ensemble_1pft_slaprofile_{}_v001.I2000Clm50FatesGs.Cdf9b02d-Fb178808.2018-07-27.h{}.ensemble.sofar.nc']\n\n# Background carbon dioxide (CO2) concentration\nCO2levels = ['367ppm', '400ppm']\n\n# Variable list for model output\nvarlist = ['TLAI','AGB','BA','GPP','FLH','FSH']\n\n# Data structure for each variable in varlist:\n# 0 = monthly data for entire ecosystem;\n# 1 = annual data structured by tree size structure.\nvarfiletype = [0,1,1,0,0,0]\n\n# Conversion factor for each variable in varlist:\nvarconv = [1, 1, 1, 86400*365, 1, 1]\n\n# Variable units after applying conversion factor for each variable in 
varlist:\nvarunits = ['$m^2/m^2$','$kgC/m^2$','$m^2/ha$','$gC/m^2/yr$','$W/m^2$','$W/m^2$']\n\n# Number of years of model output to analyze\nnyrs = 50\n\n# Number of parameter sets in ensemble\nnens = nc4.Dataset(model_filepath + model_filenames[0].format(CO2levels[0],varfiletype[0])).variables[varlist[0]].shape[0]\n```\n\nNext, we create a multidimensional array that contains a time series of annual means for each variable, parameter set and background carbon dioxide concentration.\n\n\n```python\n# Return model_data (float): a 4-D array of annual mean values for\n # each variable with dimensions\n # (CO2levels, varlist, nens, nyrs)\n # with the following indexing:\n # CO2levels: \n # 0 = 367ppm CO2; \n # 1 = 400ppm CO2.\n # varlist: \n # 0 = Leaf area index;\n # 1 = Above ground biomass;\n # 2 = Basal area;\n # 3 = Gross primary productivity;\n # 4 = Latent heat flux;\n # 5 = Sensible heat flux.\n # nens: \n # 0:286 = parameter set index.\n\n# Initialize array\nmodel_data = np.zeros([len(CO2levels), len(varlist), nens, nyrs])\n\nfor c in range(len(CO2levels)):\n for v in range(len(varlist)):\n \n filepath = model_filepath + model_filenames[c].format(CO2levels[c],varfiletype[v])\n \n model_data[c, v, :, :] = annmeants(filepath, varlist[v], varfiletype[v], nyrs, varconv[v])\n\n filepath = None\n```\n\n### 2.2 Observations\n\nThis code section loads data for the observations we will use to evaluate the performance of each parameter set in our FATES parameter ensemble. It then calculates annual mean values when necessary.\n\n#### Leaf area index\n\nLeaf area index observations come from Detto et al. (2018) and were made using hemispherical photographs taken approximately monthly from January 2015 to August 2017 at 188 locations at our test site, Barro Colorado Island, Panama. We calculate annual mean values from the monthly means reported by Detto et al. (2018). (Note that monthly data consists of spatial means across photograph locations, rather than temporal means.) In order to use all the data available, we calculate two time series of annual means - one starting in from January and the second starting from September. Data was captured from Detto et al. (2018) Figure 7a using GraphClick software.\n\n\n```python\n# Return obs_data_lai (float): 2-D array of annual mean leaf area index\n# (sample number, years) using the following index coding for\n# sample number: \n# 0 = sample months starting from January; \n# 1 = sample months starting from September.\n\n\n# File path\nfilepath = 'data/LAI_Detto2018Obs.csv'\n\n# Monthly spatial means\nlai_mthts = np.asarray([col[2] for col in (pd.read_csv(filepath)).values])\n\n# Specify start months for observations\nstartmonth_list = np.array([1,9])\n\n# Number of annual means per sample\nnyears_lai = round(len(lai_mthts)/12-0.5)\n\n# Initialize array\nobs_data_lai = np.zeros([len(startmonth_list), nyears_lai])\n\n# Calculate annual means and fill array\nfor x in range(len(startmonth_list)):\n obs_data_lai[x,:] = np.nanmean(np.reshape(lai_mthts[startmonth_list[x]-1:24+startmonth_list[x]-1],(nyears_lai,12)),1)\n```\n\n#### Above-ground carbon biomass\n\nAbove-ground carbon biomass estimates were calculated from a 1995 census survey at our test site, Barro Colorado Island, by Meakem et al. (2018). They estimate above-ground biomass using two different methods (the standard and Chave allometric formulations). We use values from these two methods to represent uncertainty in the observational estimate. 
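\n\nAs a quick check on the unit conversion used in the code below, note that $1\ \mathrm{MgC/ha} = 1000\ \mathrm{kgC} / 10^4\ \mathrm{m^2} = 0.1\ \mathrm{kgC/m^2}$; this is why the conversion multiplies by a factor of 1000 (Mg to kg) and divides by 10,000 (ha to m$^2$).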
\n\nAlternatively, we can approximate above-ground carbon biomass from estimates of total biomass (rather than just carbon biomass) from census survey data reported in Baraloto et al. (2013) and Feeley et al. (2007) for the following years: 1985, 1990, 1995, 2000, and 2005. This alternative method yields similar results and can be implemented by setting use_alt_agb_obs to 1 in the code below.\n\n\n```python\n# Return obs_data_agb (float): vector of above-ground\n# carbon biomass (KgC/m2) indexed by allometric \n# formulation:\n# 0 = standard;\n# 1 = Chave.\n\nuse_alt_agb_obs = 0\n\nfilepath = 'data/BCI_biomass.csv'\n\nif use_alt_agb_obs == 0:\n # Above-ground carbon biomass from Meakem et al. 2018 (MgC/ha) \n cbiomass_obs_Mgha = np.asarray([col[2] for col in (pd.read_csv(filepath)).values])[-2:,]\n # Convert from MgC/ha to KgC/m2\n ha_to_m2 = 1/10000\n Mg_to_kg = 1000\n obs_data_agb = cbiomass_obs_Mgha * ha_to_m2 * Mg_to_kg\n \nelif use_alt_agb_obs == 1:\n # Total aboveground biomass (Mg biomass/ha) from \n # Baraloto et al. (2013) and Feeley et al. (2007)\n agb_biomass_obs = np.asarray([col[1] for col in (pd.read_csv(filepath)).values])[:-2,]\n # Estimate carbon biomass from total biomass using\n # the 0.47 carbon fraction following Meakem et al. 2018\n obs_data_agb_v2 = agb_biomass_obs*0.47\n obs_data_agb = obs_data_agb_v2\n```\n\n#### Basal area\n\nWe use estimates of the median basal area for our test site, Barro Colorado Island, Panama, from census surveys conducted in 1999, 2001, 2006, and 2011 by Condit (1998), Condit et al. (2012, 2017), and Hubbell et al. (1999).\n\n\n```python\n# Return obs_data_ba (float): vector containing basal area (m^2/ha)\n# indexed by census year in chronological order\n\nfilepath = 'data/census_bmks_bci_171208.nc'\n\n# Load basal area median values for the last 5 census dates\n# Data structured as follows:\n# [census number, tree diameter size class,...\n# distribution percentiles (0.05,0.5,0.95)]\nbasalarea_bysize = nc4.Dataset(filepath).variables['basal_area_by_size_census'][-5:,:,1]\n\n# Sum across tree size classes\nobs_data_ba = np.nansum(basalarea_bysize,1)\n```\n\n#### Gross primary productivity, latent heat flux, and sensible heat flux\n\nEstimates of gross primary productivity, latent heat fluxes, and sensible heat fluxes were calculated from fluxtower eddy covariance measurements made from July 2012 to August 2017 at Barro Colorado Island by Koven et al. (in prep). To use all available data in our analysis, we calculate two versions of the annual mean time series, one beginning in July and the second beginning in September.\n\n\n```python\ndef annmeants_fluxobs(mthts,startmth):\n ''' Calculate time series of annual means from monthly fluxtower estimates.\n :param mthts (float): 2-D array containing fluxtower observations (years, months)\n :param startmth (int): number corresponding to start month for this annual mean time series\n (e.g. 
7 = start with July, 9 = start with Sept)\n :return: vector containing annual mean time series of size (nyrs) \n '''\n # Drop leading and trailing months so that only whole years starting from startmth remain\n mthts_dif = np.reshape(mthts,(1,-1))[:,startmth-1:startmth-1-12]\n \n # Calculate annual mean time series\n annmeants = np.nanmean(np.reshape(mthts_dif,(5,12)),axis=1)\n \n return annmeants\n```\n\n\n```python\n# Return obs_data_flux (float): 3-D array containing annual mean values\n# indexed as (sample number, variable, year).\n# sample number: \n# 0 = sample months starting from July; \n# 1 = sample months starting from September.\n# variable:\n# 0 = gross primary productivity;\n# 1 = latent heat flux;\n# 2 = sensible heat flux.\n\n# Load observations\nGPP_data = np.load('data/fluxdata_GPP.npy')\nLH_data = np.load('data/fluxdata_LH.npy')\nSH_data = np.load('data/fluxdata_SH.npy')\nfluxdata_mask = np.load('data/fluxdata_mask.npy')\n\n# Apply mask to arrays\nGPP_monthyear = np.ma.masked_array(GPP_data, mask=fluxdata_mask)\nLH_monthyear = np.ma.masked_array(LH_data, mask=fluxdata_mask)\nSH_monthyear = np.ma.masked_array(SH_data, mask=fluxdata_mask)\n\n# Specify start months for observations\nstartmonth_list = np.array([7,9])\n\n# Number of years\nnyrs_obsflux = len(annmeants_fluxobs(GPP_monthyear,startmonth_list[0]))\n\n# Initialize array\nobs_data_flux = np.zeros([len(startmonth_list), 3, nyrs_obsflux])\n\n# Fill array\nfor x in range(len(startmonth_list)):\n obs_data_flux[x,0,:] = annmeants_fluxobs(GPP_monthyear,startmonth_list[x])\n obs_data_flux[x,1,:] = annmeants_fluxobs(LH_monthyear,startmonth_list[x])\n obs_data_flux[x,2,:] = annmeants_fluxobs(SH_monthyear,startmonth_list[x])\n```\n\n## Step 3: Quantify performance of each parameter set\n\nIn this section we evaluate the model performance against observations for each parameter set and background carbon dioxide level. As we would like to identify parameter sets that robustly perform well regardless of performance metric, we use two metrics to evaluate performance: error rate and normalized root mean square error (NRMSE). We calculate both metrics for each variable. Then, we take a weighted average across all variables for each metric and parameter set.\n\n### Performance Metric #1: Error Rate\n\nThe error rate measures the percent of model annual means that fall outside the observed range for each variable and ensemble member. The observed range is defined as the difference between the maximum and minimum observed values. 
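\n\nIn symbols, a minimal statement of this metric (written here for a single variable and parameter set, and before the widening of the bounds described next and implemented via the degradation factor dg in the error_rate function below) is\n\n$$\text{Error rate} = 100 \times \frac{\#\{k : x_{model,k} < x_{obs,min} \text{ or } x_{model,k} > x_{obs,max}\}}{n}$$\n\nwhere $n$ is the number of model annual means, $x_{model,k}$ is the model annual mean for year $k$, and $x_{obs,min}$ and $x_{obs,max}$ are the minimum and maximum observed values.\n\n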
To account for relatively small sample sizes and potential measurement error within the observations, we extend the observational range by 10% in both directions.\n\n\n```python\ndef error_rate(model_ts,obs_ts,dg):\n '''Function calculates the error rate for each simulation\n as the percentage of model output annual means that fall\n outside the observed range for the variable.\n param model_ts (float): a 2-D array containing the time series of annual means\n for a given variable indexed by (parameter set, years)\n param obs_ts (float): a vector or 2-D array containing the observed time series for\n the given variable indexed as (years) or (sample number, years)\n param dg (float): a scalar specifying the degradation level for observed range\n as a fraction\n return error_rate (float): a vector containing the error rates indexed by parameter set\n (nens)'''\n \n # Number of ensemble members\n nens = model_ts.shape[0]\n \n # Empty array to fill\n error_rate = np.zeros([nens])\n\n # Observed minimum and maximum\n obs_min = np.nanmin(obs_ts)\n obs_max = np.nanmax(obs_ts)\n \n error_rate = 100*np.nansum(np.where((model_ts <= obs_min*(1-dg)) | (model_ts >= obs_max*(1+dg)),1,0),1)/model_ts.shape[1]\n return error_rate\n```\n\n\n```python\n# Return error_rate_array (float): a 3-D array containing error rates\n# indexed by (CO2 level, varlist, parameter set)\n\n# Specify observed data arrays in order corresponding to varlist\nobs_data_list = [obs_data_lai,obs_data_agb,obs_data_ba,\n obs_data_flux[:,0,:],obs_data_flux[:,1,:],\n obs_data_flux[:,2,:]]\n\n# Degradation level for the observational range \n# as fraction, not percent\ndg = np.array([0.10])\n\n# Calculate error rate\nerror_rate_array = np.zeros([len(CO2levels),len(varlist), nens])\nfor i in range(len(CO2levels)):\n for j in range(len(varlist)):\n error_rate_array[i,j,:] = error_rate(model_data[i,j,:,:], obs_data_list[j], dg) \n```\n\n### Performance Metric #2: Normalized root mean square error (NRMSE)\n\nThe normalized root mean square error (NRMSE) measures the distance between the model output for each parameter set and the observed mean value. We normalize the root mean square error by the observed range for each variable so that we can compare NRMSE values across variables. In other words, normalizing the error tells us whether the distance from the observation is large compared to the spread in observations for each variable. This becomes especially useful when we take a weighted average NRMSE across all variables in the next code section. We calculate this metric for each variable, parameter set, and background carbon dioxide level. The NRMSE is calculated as follows:\n\n\begin{equation}\Large\nNRMSE = \frac{ \sqrt{ \sum_{k=1}^n \frac{(x_{model,k} - \bar{X}_{obs})^2}{n}}} {x_{obs,max} - x_{obs,min}}\n\end{equation}\n\nwhere $NRMSE$ is the normalized root mean square error for a single variable (e.g. leaf area index) and parameter set, $n$ is the number of years of model output, $x_{model,k}$ is the model annual mean for year $k$, $\bar{X}_{obs}$ is the overall annual mean of the observed values, and $x_{obs,max}$ and $x_{obs,min}$ are the maximum and minimum observed values, respectively. When multiple annual mean time series were sampled for an observed variable (e.g. 
observations for leaf area index spanned a partial year), we calculate the difference between the observed mean and model output using the time series that minimizes this difference.\n\n\n```python\ndef nrmse(model_ts,obs_ts):\n '''Function calculates the normalized root mean square error (NRMSE)\n for each model ensemble member. When multiple observation time series \n are available, this function calculates the NRMSE for each time series \n and then selects the lowest of those NRMSE values.\n param model_ts (float): a 2-D array containing the time series of annual means\n for a given variable for all model ensemble members \n (ensemble member, years)\n param obs_ts (float): a vector or 2-D array containing the observed time series for\n the given variable indexed as (years) or (sample number, years)\n return nrmse (float): a vector containing the normalized root mean square error \n for each ensemble member indexed by (parameter set)'''\n \n # Number of ensemble members\n nens = model_ts.shape[0]\n\n # If multiple observation time series, \n # take the lowest NRMSE for each ensemble member\n try:\n if obs_ts.shape[1]>0:\n # Number of observation time series\n nobs = obs_ts.shape[0]\n obs_min = np.nanmin(obs_ts,axis=1)\n obs_max = np.nanmax(obs_ts, axis=1)\n obs_mean = np.nanmean(obs_ts,axis=1)\n \n temp_nrmse = np.zeros([nobs,nens])\n \n for obsnum in range(nobs):\n temp_nrmse[obsnum,:] = np.sqrt(np.nansum((model_ts[:,:] - obs_mean[obsnum])**2,axis=1) / model_ts.shape[1]) / (obs_max[obsnum]-obs_min[obsnum])\n \n nrmse = np.nanmin(temp_nrmse,axis=0)\n\n temp_nrmse = None\n \n # Otherwise, simply calculate NRMSE\n except IndexError:\n obs_min = np.nanmin(obs_ts,axis=0)\n obs_max = np.nanmax(obs_ts,axis=0)\n obs_mean = np.nanmean(obs_ts,axis=0) \n \n nrmse = np.sqrt(np.nansum((model_ts[:,:] - obs_mean)**2,axis=1) / model_ts.shape[1]) / (obs_max-obs_min)\n\n return nrmse\n```\n\n\n```python\n# Return nrmse_array (float): a 3-D array containing the NRMSE \n# indexed by [CO2level, varlist, parameter set] \n\nnrmse_array = np.zeros([len(CO2levels),len(varlist), nens])\n\nfor i in range(len(CO2levels)):\n for j in range(len(varlist)):\n nrmse_array[i,j,:] = nrmse(model_data[i,j,:,:], obs_data_list[j])\n```\n\n### Weighted average performance metrics across variables\n\nWe calculate weighted average performance metrics across variables for both the normalized root mean square error and the error rate. We calculate and consider three different weighting approaches to ensure that our selection of high-performing parameter sets is robust to weighting method. The weighting approaches we use are:\n\n1. Even: All variables are evenly weighted.\n\n2. Structure: This weighting favors structural ecosystem properties (leaf area index, above-ground biomass, and basal area). This weighting scheme reflects the likelihood that structural variables at our test site include less measurement uncertainty than flux variables.\n\n3. Correlation: This weighting scheme is informed by correlations between individual variable performance metrics. The ability of a parameter set to match observations of flux variables (gross primary productivity, sensible heat, and latent heat) was correlated with the ability to match observations of leaf area index, as well as other flux variables. As leaf area index observations likely include smaller measurement uncertainty, we choose to give a greater weighting to leaf area index at the expense of flux variables. 
We also reduced the weightings of basal area and above-ground biomass performance to account for their correlation with one another.\n\n#### Weighted average error rate\n\n\n```python\n# Even weighting across all variables\ner_wavg_even = np.nansum(error_rate_array,1) / error_rate_array.shape[1]\n\n# Weighted average favoring structural properties\nw = 0.3\ner_wavg_strct = (w*(error_rate_array[:,0,:]) \n + w*(error_rate_array[:,1,:])\n + w*(error_rate_array[:,2,:])\n + (1-3*w)*((error_rate_array[:,3,:])\n +(error_rate_array[:,4,:])\n +(error_rate_array[:,5,:]))/3)\n\n# Weighting based on correlations between performance metric for each variable\nw1 = 0.4\nw2 = 0.25\nw3 = 0.1\ner_wavg_corr = ( w1*(error_rate_array[:,0,:]) \n + w2*(error_rate_array[:,1,:]) \n + w2*(error_rate_array[:,2,:]) \n + w3*((error_rate_array[:,3,:])\n +(error_rate_array[:,4,:])\n +(error_rate_array[:,5,:]))/3)\n```\n\n#### Weighted average NRMSE\n\nWe quantify the distance of model output from the mean observations in multivariate space by calculating the weighted Euclidean distance as follows:\n\n\\begin{equation}\nNRMSE_{avg} = \\sqrt{ \\sum_{i=1}^m (\\omega_{i} \\cdot NRMSE_{i})^2}\n\\end{equation}\n\nwhere $NRMSE_{avg}$ is the weighted average normalized root mean square error across variables, $m$ is the number of variables we consider, $NRMSE_{i}$ is the normalized root mean square error for each individual variable, and $\\omega_{i}$ is the weighting for each variable. \n\n\n```python\n# Even weighting across all variables\nw = 1/6\nnrmse_wavg_even = np.sqrt(w*(nrmse_array[:,0,:])**2 \n + w*(nrmse_array[:,1,:])**2 \n + w*(nrmse_array[:,2,:])**2 \n + w*(nrmse_array[:,3,:])**2 \n + w*(nrmse_array[:,4,:])**2 \n + w*(nrmse_array[:,5,:])**2)\n\n\n# Weighted average favoring structural properties\nw = 0.3\nnrmse_wavg_strct = np.sqrt(w*(nrmse_array[:,0,:])**2 \n + w*(nrmse_array[:,1,:])**2 \n + w*(nrmse_array[:,2,:])**2 \n + (1-3*w)*((nrmse_array[:,3,:])**2 \n +(nrmse_array[:,4,:])**2 \n +(nrmse_array[:,5,:])**2)/3)\n\n# Weighting based on correlations between performance metric for each variable\nw1 = 0.4\nw2 = 0.25\nw3 = 0.1\nnrmse_wavg_corr = np.sqrt(w1*(nrmse_array[:,0,:])**2 \n + w2*(nrmse_array[:,1,:])**2 \n + w2*(nrmse_array[:,2,:])**2 \n + w3*((nrmse_array[:,3,:])**2 \n +(nrmse_array[:,4,:])**2 \n +(nrmse_array[:,5,:])**2)/3)\n```\n\n## Step 4: Rank parameter sets by performance\nHere we assign an overall rank to each parameter set based on its performance across both performance metrics (error rate and NRMSE), three weighting schemes (even, structure, and correlated), and two cases (low and high atmospheric carbon dioxide concentration). 
The goal of this analysis is to identify parameter sets that robustly perform well at our test site.\n\n\n```python\nall_avg_array = np.stack([er_wavg_even,nrmse_wavg_even,\n er_wavg_strct,nrmse_wavg_strct,\n er_wavg_corr,nrmse_wavg_corr])\n\nrank_array = scipy.stats.mstats.rankdata(all_avg_array,axis=2)\n\n# Sum ranks across cases and ranking methods\nsum_rank_array = np.nansum(np.nansum(rank_array,axis=0),axis=0)\n\n# Sort the index number for each ensemble member by their summed rank (best to worst performance)\nsum_rank_index = np.argsort(sum_rank_array)\n\n# Print the numbers (1-based indices) of the highest-performing parameter sets\nhighperform_num = np.transpose(sum_rank_index)[:10,]+1\nhighperform_indx = np.transpose(sum_rank_index)[:10,]\nprint(\"Indices for the highest-performing parameter sets: \", highperform_num[0:3])\n```\n\n Indices for the highest-performing parameter sets: [ 86 260 151]\n\n\n__Result:__ Parameter sets 86, 260, and 151 resulted in the highest model performance (in descending order) at our test site.\n\n\n## Step 5: Plot performance metrics for each parameter set\n\nIn this section we visualize the model performance for each parameter set to gain insight into how the highest-performing parameter sets performed in individual variables and in comparison to all other parameter sets. More specifically, we plot heat maps of each parameter set's performance by performance metric and background carbon dioxide concentration.\n\n\n```python\ndef heatsubplotfxn(heatdata, CO2indx, minval, maxval, plotnum, metriclabel,\n highperform_indx, heat_var_labels, ens10label, enslabel):\n \n '''Function creates a heatmap for each performance metric\n for high-performing ensemble members and then all ensemble members.\n\n param heatdata (float): 3-D array containing a performance metric (CO2levels, variable, nens)\n param CO2indx (int, float): index for background CO2 level (0 = 367 ppm; 1 = 400ppm)\n param minval (int): minimum value for heatmap colorbar\n param maxval (int): maximum value for heatmap colorbar\n param plotnum (int): subplot number\n param metriclabel (str): label for performance metric\n param highperform_indx (int, float): vector of index numbers for\n high-performing parameter sets\n param heat_var_labels (str): list of variable labels indexed by (variable)\n param ens10label (str): list of high-performing parameter set numbers\n to be used as plot labels\n param enslabel (str): list of parameter set numbers to label in plot of\n all parameter sets' performance metrics\n \n return heatmap subplot for one performance metric\n '''\n \n # Subplot indexing parameter\n i = 2\n \n # Highest Performing Ensemble Members\n ax1 = plt.subplot(3,i,plotnum)\n im1 = ax1.imshow(\n heatdata[CO2indx,:,highperform_indx],\n vmin = minval, vmax = maxval,\n cmap=\"viridis_r\",aspect='auto')\n\n ax1.set_xticks(np.arange(len(heat_var_labels)))\n ax1.xaxis.tick_top()\n ax1.set_xticklabels(heat_var_labels)\n ax1.xaxis.set_label_position('top')\n\n ax1.set_ylabel('High Performing Parameter Sets (#)')\n ax1.set_yticks(np.arange(len(ens10)))\n ax1.set_yticklabels(ens10)\n\n # All Ensemble Members\n ax2 = plt.subplot(3,i,(i+plotnum,i*2+plotnum))\n im2 = ax2.imshow(\n np.transpose(heatdata[CO2indx,:,:]),\n vmin = minval, vmax = maxval,\n cmap=\"viridis_r\",aspect='auto')\n \n # Colorbar\n cbar = ax1.figure.colorbar(\n im2, ax=ax2, orientation=\"horizontal\", \n pad=0.025)\n # Labels\n cbar.ax.set_xlabel(metriclabel, fontsize = 16, \n fontweight ='bold')\n ax2.set_xticks([]) # hide xticks/labels\n ax2.set_ylabel('All 
Parameter Sets (#)')\n ax2.set_yticks(ens)\n ax2.set_yticklabels(enslist)\n```\n\n\n```python\n# Return error_heatdata: a 3-D array containing error rates\n# indexed by (CO2level, variable, nens)\n# Return nrmse_heatdata: a 3-D array containing NRMSE \n# indexed by (CO2level, variable, parameter set)\n# Variable indexing as follows:\n# 0-5 = variables in order of varlist, \n# 6-8 = weighted averages across variables\n# using even,\n# structure, and correlation weights,\n# respectively\n\n# Concatenate data for error rate heatmap\nerror_rate_wavg_array = np.stack([er_wavg_even,er_wavg_strct,er_wavg_corr],axis=1)\nerror_heatdata = np.concatenate([error_rate_array,error_rate_wavg_array],axis=1)\n\n# Concatenate data for NRMSE heatmap\nnrmse_wavg_array = np.stack([nrmse_wavg_even,nrmse_wavg_strct,nrmse_wavg_corr],axis=1)\nnrmse_heatdata = np.concatenate([nrmse_array,nrmse_wavg_array],axis=1)\n```\n\n\n```python\n# Additional information for figures\n\n# Metric/Variable labels\nheat_var_labels = [\"LAI\",\"AGB\",\"BA\",\n \"GPP\",\"LH\",\"SH\",\"Av$_{E}$\",\"Av$_{S}$\",\"Av$_{C}$\"]\n\n# Ensemble member labels\n# 10 highest performing\nens10 = [str(int(x)) for x in highperform_num]\n# All (label every 25th ensemble member)\nens = np.array(range(25,300,25))\nenslist = [str(x) for x in ens]\n```\n\n### Plot heat maps of performance by parameter set, performance metric, and background carbon dioxide concentration\n\n\n```python\nfig1 = plt.figure(figsize=(12,12))\n\n# Set CO2levels index number\ncasenum = 0\n\n# Plot Error Rate\nplotnum = 1\nheatsubplotfxn(error_heatdata, casenum, 0, 100, plotnum, 'A. Error Rate (%)', \n highperform_indx, heat_var_labels, ens10, enslist)\n\n# Plot NRMSE\nplotnum = plotnum+1\nheatsubplotfxn(nrmse_heatdata, casenum, 0, 10, plotnum, 'B. NRMSE', \n highperform_indx, heat_var_labels, ens10, enslist)\n\nplt.tight_layout()\n```\n\n__Figure 1.__ FATES model performance as measured by (A) error rate and (B) normalized root mean square error (NRMSE) for simulations run with 367 ppm atmospheric carbon dioxide. Performance metrics are shown for leaf area index (LAI), above-ground biomass (AGB), basal area (BA), gross primary productivity (GPP), latent heat flux (LH), sensible heat flux (SH) and weighted averages across variables using three weighting approaches: even (Av$_E$), favoring structural variables (Av$_S$), and considering correlations (Av$_C$). Top panels highlight the performance of the top 10 highest-perfoming parameter sets. Bottom panels show the performance of all 287 parameter sets we tested.\n\n\n```python\nfig2 = plt.figure(figsize=(12,12))\n\n# Set CO2levels index number\ncasenum = 1\n\n# Plot Error Rate\nplotnum = 1\nheatsubplotfxn(error_heatdata, casenum, 0, 100, plotnum, 'A. Error Rate (%)',\n highperform_indx, heat_var_labels, ens10, enslist)\n\n# Plot NRMSE\nplotnum = plotnum+1\nheatsubplotfxn(nrmse_heatdata, casenum, 0, 10, plotnum, 'B. NRMSE',\n highperform_indx, heat_var_labels, ens10, enslist)\n\nplt.tight_layout()\n```\n\n__Figure 2.__ FATES model performance as measured by (A) error rate and (B) normalized root mean square error (NRMSE) for simulations run with 400 ppm atmospheric carbon dioxide. Performance metrics are shown for leaf area index (LAI), above-ground biomass (AGB), basal area (BA), gross primary productivity (GPP), latent heat flux (LH), sensible heat flux (SH) and weighted averages across variables using three weighting approaches: even (Av$_E$), favoring structural variables (Av$_S$), and considering correlations (Av$_C$). 
Top panels highlight the performance of the top 10 highest-performing parameter sets. Bottom panels show the performance of all 287 parameter sets we tested.\n\n## Results\nThis analysis identifies three high-performing parameter sets for use in future FATES simulations. We recommend the highest-performing parameter set (parameter set 86; Figures 1 and 2) as the primary parameter set for future experiments. Two other high-performing parameter sets (151 and 260; Figures 1 and 2) may be of interest for testing the sensitivity of simulation results to parameterization. Parameter set 260 is similar in parameter values and performance to the highest-performing parameter set (number 86). Parameter set 151 has the third highest performance but has greater differences in key parameter values and results in a different simulated ecosystem. Specifically, parameter set 151 has higher performance in the above-ground biomass and basal area structural properties but lower performance in leaf area index and gross primary productivity (Figures 1 and 2). These three high-performing parameter sets are publicly available through the University of Washington ResearchWorks digital repository at http://hdl.handle.net/1773/43779.\n\n\n## Further Information\n__Details of the parameter ensemble and analysis herein:__\n\nKovenock, M. (2019). Ecosystem and large-scale climate impacts of plant leaf dynamics (Doctoral dissertation). Chapter 4: \"Within-canopy gradient of specific leaf area improves simulation of tropical forest structure and functioning in a demographic vegetation model.\" http://hdl.handle.net/1773/44061\n\n__Details of the Functionally Assembled Terrestrial Ecosystem Simulator (FATES):__\n\nhttps://github.com/NGEET/fates\n\nFisher, R. A., Koven, C. D., Anderegg, W. R., Christoffersen, B. O., Dietze, M. C., Farrior, C. E., et al. (2018). Vegetation demographics in Earth System Models: A review of progress and priorities. _Global Change Biology, 24_(1), 35\u201354. https://doi.org/10.1111/gcb.13910\n\nFisher, R. A., Muszala, S., Verteinstein, M., Lawrence, P., Xu, C., McDowell, N. G., et al. (2015). Taking off the training wheels: the properties of a dynamic vegetation model without climate envelopes. _Geoscientific Model Development, 8_(4), 3293\u20133357. https://doi.org/10.5194/gmdd-8-3293-2015\n\n\n## References\n\nBaraloto, C., Molto, Q., Rabaud, S., H\u00e9rault, B., Valencia, R., Blanc, L., et al. (2013). Rapid simultaneous estimation of aboveground biomass and tree diversity across Neotropical forests: a comparison of field inventory methods. _Biotropica, 45_(3), 288\u2013298. https://doi.org/10.1111/btp.12006\n\nCondit, R. (1998). Tropical forest census plots. Berlin, Germany, and Georgetown, Texas: Springer-Verlag and R. G. Landes Company.\n\nCondit, R. S., Aguilar, S., Perez, R., Lao, S., Hubbell, S. P., & Foster, R. B. (2017). Barro Colorado 50-ha Plot Taxonomy as of 2017. https://doi.org/10.25570/stri/10088/32990\n\nCondit, R., Lao, S., P\u00e9rez, R., Dolins, S. B., Foster, R., & Hubbell, S. (2012). Barro Colorado forest census plot data (version 2012). Center for Tropical Forest Science Databases. https://doi.org/10.5479/data.bci.20130603\n\nDetto, M., Wright, S. J., Calder\u00f3n, O., & Muller-Landau, H. C. (2018). Resource acquisition and reproductive strategies of tropical forest in response to the El Ni\u00f1o\u2013Southern Oscillation. _Nature Communications, 9_(1), 913. 
https://doi.org/10.1038/s41467-018-03306-9\n\nFaybishenko, B., Paton, S., Powell, T., Knox, R., Pastorello, G., Varadharajan, C., et al. (2018). QA/QC-ed BCI meteorological drivers. United States: Next-Generation Ecosystem Experiments Tropics; STRI; LBNL. https://doi.org/doi:10.15486/ngt/1423307\n\nFeeley, K. J., Davies, S. J., Ashton, P. S., Bunyavejchewin, S., Supardi, M. N., Kassim, A. R., et al. (2007). The role of gap phase processes in the biomass dynamics of tropical forests. _Proceedings of the Royal Society B: Biological Sciences, 274_(1627), 2857\u20132864. https://doi.org/10.1098/rspb.2007.0954\n\nFisher, R. A., Koven, C. D., Anderegg, W. R., Christoffersen, B. O., Dietze, M. C., Farrior, C. E., et al. (2018). Vegetation demographics in Earth System Models: A review of progress and priorities. _Global Change Biology, 24_(1), 35\u201354. https://doi.org/10.1111/gcb.13910\n\nFisher, R. A., Muszala, S., Verteinstein, M., Lawrence, P., Xu, C., McDowell, N. G., et al. (2015). Taking off the training wheels: the properties of a dynamic vegetation model without climate envelopes. _Geoscientific Model Development, 8_(4), 3293\u20133357. https://doi.org/10.5194/gmdd-8-3293-2015\n\nHubbell, S. P., Foster, R. B., O'Brien, S. T., Harms, K. E., Condit, R., Wechsler, B., et al. (1999). Light-gap disturbances, recruitment limitation, and tree diversity in a neotropical forest. _Science, 283_(5401), 554\u2013557. https://doi.org/10.1126/science.283.5401.554 \n\nKoven, C. D., et al. (_in prep_). Benchmarking and parameter sensitivity of physiological and vegetation dynamics using the Functionally Assembled Terrestrial Ecosystem Simulator (FATES) at Barro Colorado Island, Panama.\n\nKovenock, M. (2019). Ecosystem and large-scale climate impacts of plant leaf dynamics (Doctoral dissertation). http://hdl.handle.net/1773/44061\n\nMeakem, V., Tepley, A. J., Gonzalez-Akre, E. B., Herrmann, V., Muller-Landau, H. C., Wright, S. J., et al. (2018). Role of tree size in moist tropical forest carbon cycling and water deficit responses. _New Phytologist, 219_, 947\u2013958. 
https://doi.org/10.1111/nph.14633\n"}
{"text": "# Section IV. DYNAMICS AND CONTROL\n \n# Chapter 13. What are Dynamics and Control?\n\nThe purpose of dynamics is to study how time and force act on a\nmechanism, while the purpose of controls is to study how a system should\nrespond to errors and disturbances. At this point, we have described how\nto reason about the positions of robots and how to generate continuous\npaths. But actually executing those paths requires us to think much more\ncarefully about the physics of robot mechanisms, and the role of time\nand velocity. Even the strongest robots cannot instantaneously change\nvelocities, and driving and flying robots cannot move sideways.\n\nIt is through the use of control that an industrial robot can move to a\nposition with sub-millimeter accuracy, and an aircraft can fly for\nthousands of kilometers but land on an airstrip a few meters wide. It is\nalso a means for understanding locomotion and reflexes in the biological\nsensorimotor system. It is important to note that both dynamics and\ncontrol are deep fields of study that are more than one hundred years\nold, and yet they are still undergoing significant change! Classical\napproaches rely heavily on mathematical analysis, while more modern\napproaches to control rely on computation as a key tool. Due to this\ndepth, to master these fields requires years of specialized\ninvestigation, and this part of the book can only survey the main points\nas they relate to robotics. We will see some of both the historical and\nmodern approaches in the next few chapters.\n\nIn the topic of dynamics we will cover 1) basic terminology of dynamical\nsystems, 2) simple dynamical systems from physics, 3) the dynamics of\narticulated robots, and 4) contact mechanics. In controls will describe\nmethods for 1) analyzing stability of controlled dynamical systems, 2)\ncontrolling articulated robots with high accuracy, and 3) generating\nfeasible and optimal control strategies.\n\nBasic terminology\n-----------------\n\nA *dynamical system* is one in which the state of the system changes\ncontinuously over time. The notion of *state* is similar to that of a\nconfiguration, although it can also include terms like joint velocities.\nIn this section, we let $x \\in \\mathbb{R}^n$ be the quantity defining\nthe *state* of the system. Robots are able to apply forces and otherwise\n*alter the rate of change of the state* using their actuators. We define\nthe *control* (aka control input) as $u \\in \\mathbb{R}^m$, where $m$ is\nthe number of independently chosen variables.\n\nFor example, in a 6-joint industrial robot, the state of the robot is\ntypically considered as $x=(q,v) \\in \\mathbb{R}^{12}$. The inclusion of\na velocity term allows us to express how the robot's momentum affects\nits future movement, and how joint forces affect velocities. The control\nvariable $u$ can take on many forms, depending on how the controller is\ndesigned. For example, if the controller takes desired joint velocities\nas inputs, the control variable is $u=(v_{d1},\\ldots,v_{d6})$ where\n$v_{di}$ indicates the desired velocity of joint $i$. On the other hand,\nif it takes joint torques as inputs, the control variable is\n$u=(\\tau_{1},\\ldots,\\tau_{6})$.\n\nThe standard terminology for modeling a dynamical system is an\nexpression relating the state and control to the derivative of the\nstate. 
In the case that we do not have the ability to control a system,\nwe have an *uncontrolled dynamics equation* of the form\n$$\\dot{x} = f(x).\n\\label{eq:UncontrolledDynamicEquation}$$ If the system can indeed be\ncontrolled by a control $u$, we have a *controlled dynamics equation*:\n$$\\dot{x} = f(x,u)\n\\label{eq:DynamicEquation}$$ where $x$ is the state, $u$ is the control,\nand $\\dot{x}$ is the time derivative of the state $\\frac{dx}{dt}$. The function $f$ is\nknown as the dynamics of the system. These equations are also known as\nthe *equations of motion*.\n\nIt is important to note that $x$ and $u$ are actually *functions of\ntime*. If we need to explicitly represent the dependence on time we\nshall write $x(t)$ and $u(t)$. Hence, the dot notation is simply the\ntime derivative $\\dot{x} = \\frac{d}{dt}x$. (Or more explicitly,\n$\\dot{x}(t) = \\frac{dx}{dt}(t)$). Also note that from this chapter\nonward, all variables except for time will be vector quantities unless\nstated otherwise.\n\nIt should be noted that we have introduced the terms \"dynamical\" and\n\"dynamics\" which should be taken to be *almost* synonyms. Being quite\npedantic, we will say something is dynamic when it changes over time,\nwhile something is dynamical if it *regards* dynamics. When we say\n\"dynamical system\" it means that the system regards a dynamic quantity\n(the state) but the system itself is not changing over time. We shall\nalso sometimes say \"dynamic equation\" which is a synonym with \"dynamics\nequation\" and is chosen according to author preference. But why don't we\ncall it a \"dynamical equation?\" Let's just move on, and let the grammar\nNazis squabble over terminology\\...\n\n### Open-loop and closed-loop control\n\nGiven a dynamics function $f$, our job is to decide upon the control $u$\nin order to accomplish some desired task. There are two primary types of\ncontrols: 1) *open-loop* control, in which case $u \\equiv u(t)$ only\ndepends on time, and 2) closed-loop control, in which case\n$u \\equiv u(x)$ depends on state. (It may also depend on time, in which\ncase we write $u \\equiv u(x,t)$).\n\nThe significance of closed-loop control is that the control function can\n\"observe\" the state of the system and change accordingly in order to\nachieve the desired task. The control function in this case is also\nknown as a *control policy*. This allows a robot to adapt to\ndisturbances to achieve high accuracy and to prevent veering off-course.\nHowever, for purposes of planning, it will often be easier to compute an\nopen-loop trajectory. Later, we shall see how to convert an open loop\nplan into a closed-loop one via the approach of model predictive\ncontrol.\n\n### Discrete-time systems\n\nIn many cases it is convenient to talk about *discrete-time* systems in\nwhich time is no longer a continuous variable but a discrete quantity\n$t=0,1,2,\\ldots$, and the dynamics are specified in the form\n$$x_{t+1} = f(x_t,u_t).\n\\label{eq:DiscreteTimeDynamicEquation}$$ \nHere, the control is allowed to\nchange only at discrete points in time, and the state is only observed\nat discrete points in time. This more accurately characterizes digital\ncontrol systems which operate on a given clock frequency. 
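\n\nAs a minimal sketch of simulating such a system by repeatedly applying ($\ref{eq:DiscreteTimeDynamicEquation}$), the short rollout loop below may help make the notation concrete; the dynamics and policy here are illustrative placeholders, not taken from the text.\n\n```python\n# A minimal sketch of rolling out the discrete-time dynamics x_{t+1} = f(x_t,u_t).\n# The dynamics and policy below are illustrative placeholders, not from the text.\nimport numpy as np\n\ndef f_discrete(x,u):\n #example discrete-time dynamics: a 1D point mass advanced by an internal step of 0.1 s\n p,v = x\n return np.array([p + 0.1*v, v + 0.1*u])\n\ndef policy(x,t):\n #placeholder closed-loop policy: brake in proportion to velocity\n return -0.5*x[1]\n\nx = np.array([0.0,1.0])\nxs = [x]\nfor t in range(50):\n x = f_discrete(x,policy(x,t))\n xs.append(x)\n```\n\n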
However, in\nmany situations the *control frequency* is so high that the\ncontinuous-time\nmodel ($\\ref{eq:DynamicEquation}$) is appropriate.\n\n### Converting higher-order dynamic systems into first-order systems\n\nOften, we shall see systems of the form\n\n$$\\ddot{x} = f(x,\\dot{x},u)\n\\label{eq:SecondOrderSystem}\n$$\n\nwhich relate state and controls to *accelerations* of the state $\\ddot{x} = \\frac{d^2 x}{dt^2}$. This does not seem to satisfy our definition of a dynamic system, since we've never seen a double time derivative. However, we can employ a *stacking trick* to define a first order system, but of twice the dimension. Let us define the stacked state vector\n\n$$y \\equiv \\begin{bmatrix} x \\\\ \\dot{x} \\end{bmatrix}.$$\n\nThen, we can rewrite ($\\ref{eq:SecondOrderSystem}$) in a first-order form as:\n\n$$\\dot{y} = g(y,u)$$\n\nwhere $g(y,u) \\equiv f(x,\\dot{x},u)$ simply \"unstacks\" the state and velocity from $y$. Now all of the machinery of first-order systems can be applied to the second order system. This can also be done for dynamic systems of order 3 and higher, wherein all derivatives are stacked into a single vector.\n\n(Note that to define an initial state $y_0$, we will need to specify the initial position $x_0$ as well as the velocity $\\dot{x}_0$.)\n\nODE integration\n--------------------------\n\nConsider a controlled, continuous time dynamic system $\\dot{x}= f(x,u)$, with $x\\in \\mathbb{R}^n$\nand $u\\in \\mathbb{R}^m$. Suppose we are given an _initial state_ $x_0$ encountered at $t=0$, and a control $u(x,t)$ defined for $t \\geq 0$. Solving for the state trajectory requires solving an **initial value problem** of an **ordinary differential equation** (ODE):\n\n$$\\text{Find }x(t) \\text{ for } t > 0 \\text{ subject to }\\dot{x}(t) = g(x(t),t) \\text{ and } x(0)=x_0. $$\n\nwhere $g(x,t) \\equiv f(x,u(x,t))$ is a time-varying dynamics function. (Note that we have applied the simple trick of pushing the control $u$ inside $g$, which turns the controlled system into an uncontrolled system.)\n\nFor some limited classes of dynamic systems and control trajectories we can solve the ODE analytically. We shall see some of these solutions for the [Dubins car](#Dubins-car) and [linear time invariant systems](#Linear-Time-Invariant-Systems). However, in the general case, we shall need to resort to numerical methods. This problem is known as **ODE integration** (also known as **simulation**).\n\n### Euler's method\n\nThe simplest numerical integration technique is known as **Euler's method**, which divides time into a sequence of small steps of $\\Delta t$ in which the dynamics are assumed constant. Each subsequent movement simply displaces the state by the first-order approximation $\\Delta t g(x(t),t)$. What emerges is a sequence of states $x_0,\\ldots,x_N$ given by:\n\n$$x_1 = x_0 + \\Delta t \\cdot g(x_0,t)$$\n\n$$x_2 = x_1 + \\Delta t \\cdot g(x_1,t)$$\n\n$$...$$\n\n$$x_N = x_{N-1} + \\Delta t \\cdot g(x_{N-1},\\Delta t\\cdot(N-1))$$\n\nThis is a widely-used technique due to its straightforward implementation, and it is also easy to analyze. 
Code for this method is given below.\n\n\n```python\nimport numpy as np\n\ndef integrate_euler(f,x0,N,dt,t0=0):\n \"\"\"Approximates the trajectory resulting from the initial value problem x'=f(x,t)\n using Euler's method.\n \n Arguments:\n - f(x,t): a function of state and time giving the derivative dx\n - x0: the initial state at time t0, x(t0)=x0\n - N: the number of steps to take\n - dt: the time step\n - t0: the initial time\n \n Return value: a trajectory ([t0,t1,...,tN],[x0,x1,...,xN])\n \"\"\"\n t = t0\n x = x0\n ts = [t0]\n xs = [x0]\n for i in range(N):\n dx = f(x,t)\n x = x + dt*dx\n t = t + dt\n ts.append(t)\n xs.append(x)\n return (ts,xs)\n```\n\nThe code below plots the result of Euler's method applied to a simple 2D particle under a gravity field, with the control u giving an external acceleration (here, $u(t)=0$ for all $t$).\n\n\n```python\n# Code for plotting Euler integration of a 2D particle\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport math\n\ng = -9.8 #the gravitational acceleration (m/s^2)\n#the 4D state is [px,py,vx,vy]\ndef zero_control(x,t):\n return np.zeros(2)\n#you might try replacing zero_control with sin_control below and seeing what happens...\ndef sin_control(x,t):\n return np.array([5.0*math.sin(t*15),0])\ndef f_grav(x,t):\n u = zero_control(x,t)\n return np.hstack((x[2:4],u + np.array([0,g])))\n#initial px,py,vx,vy (at origin, with forward velocity 1, upwards velocity 10)\nx0 = np.array([0.0,0.0,1.0,10.0])\n#integrate for total time T\nT = 2.0\n#compare several time steps\ndts = [0.025,0.05,0.1,0.2]\nfor dt in dts:\n N = int(T/dt)\n times,points = integrate_euler(f_grav,x0,N,dt)\n times = np.array(times)\n points = np.array(points)\n plt.plot(points[:,0],points[:,1],label='Euler, dt='+str(dt))\ntimes = np.linspace(0,T,50)\nground_truth = np.vstack((x0[2]*times,x0[3]*times+0.5*g*times**2)).T\nplt.plot(ground_truth[:,0],ground_truth[:,1],label='Exact')\nplt.xlabel('x')\nplt.ylabel('y')\n\nplt.legend()\nplt.show()\n```\n\nNote that the accuracy of the integration depends heavily on the timestep chosen. In general, the smaller\nthe timestep, the more accurate the integration will be. More formally, define the *integration error*\nas $\epsilon(t) = x(t) - x_{\lfloor t/\Delta t\rfloor}$. \nHigher errors result when:\n\n* The spatial variation of the dynamics function is large. More precisely, the error will grow if the Jacobian of f (in either x or u) is large.\n\n* The time $t$ is large (i.e., the error generally gets worse over time).\n\n* $\Delta t$ is large.\n\n### Higher order integrators\n\nA great deal of work has investigated ODE integration techniques that are more accurate than Euler integration. Rather than approximate the dynamics function as a first order Taylor expansion, they may use higher order terms to achieve lower approximation error. A popular class of higher order methods is the **Runge-Kutta methods**, which use multiple evaluations of the dynamics function to achieve far lower error than standard Euler integration.\nMore advanced methods may also use an **adaptive step size**, taking smaller steps where the dynamics function is found to be more highly varying. \n\nMany numerical libraries have a variety of integrators to choose from. 
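\n\nTo make the idea concrete, here is a minimal sketch (not from the text) of the classical fourth-order Runge-Kutta update, which combines four evaluations of the dynamics function per step and can be swapped in for the Euler update above:\n\n```python\n# A minimal sketch (not from the text) of the classical 4th-order Runge-Kutta (RK4) update.\n# It uses four evaluations of the dynamics g per step, versus one for Euler's method.\ndef rk4_step(g,x,t,dt):\n k1 = g(x,t)\n k2 = g(x + 0.5*dt*k1, t + 0.5*dt)\n k3 = g(x + 0.5*dt*k2, t + 0.5*dt)\n k4 = g(x + dt*k3, t + dt)\n return x + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)\n\ndef integrate_rk4(f,x0,N,dt,t0=0):\n #same interface as integrate_euler above, but with the RK4 update\n t,x = t0,x0\n ts,xs = [t0],[x0]\n for i in range(N):\n x = rk4_step(f,x,t,dt)\n t = t + dt\n ts.append(t)\n xs.append(x)\n return (ts,xs)\n```\n\nLibrary implementations package such updates (and adaptive variants) behind a common interface.\n\n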
For example, the plot below uses an integrator from the Scipy library, which is, in fact, exact for this dynamics function.\n\n\n```python\n# Code for the plot using scipy ODE integration\ndef integrate_scipy(f,x0,N,dt,t0=0):\n \"\"\"Same arguments and return type as integrate_euler, but using the integrators in the Scipy library\"\"\"\n from scipy.integrate import ode\n r = ode(lambda t,x:f(x,t)) #need to swap the order of arguments for scipy's ode function\n r.set_integrator('dopri5') #lots of options here... see function documentation\n r.set_initial_value(x0, t0)\n t = t0\n ts = [t0]\n xs = [x0]\n for i in range(N):\n x = r.integrate(t+dt)\n t += dt\n ts.append(t)\n xs.append(x)\n return (ts,xs)\n\ndt = 0.1\ntimes,points = integrate_scipy(f_grav,x0,int(T/dt),dt)\ntimes = np.array(times)\npoints = np.array(points)\nplt.plot(points[:,0],points[:,1],label='Scipy, dt='+str(dt))\nplt.plot(ground_truth[:,0],ground_truth[:,1],label='Exact')\nplt.xlabel('x')\nplt.ylabel('y')\n\nplt.legend()\nplt.show()\n```\n\n### Stability, convergence, and divergence\n\nA dynamic system is said to be:\n\n* **Stable** for some class of initial states if its solution trajectories do not\ngrow without bound,\n\n* **Unstable** (or **divergent**) if the trajectories grow without bound, and\n\n* **Convergent** if the solution trajectories approach a single point.\n\nA *stable point* is a state $x$ such that for some neighborhood\nof $x$, the ODE is convergent toward $x$. A necessary condition for a\npoint to be stable is $f(x) = 0$, and points that satisfy this criterion\nare known as *equilibrium points*. All stable points are equilibria, but\nthe converse is not true.\n\nThe trajectories derived from Euler integration can be divergent even when the underlying system itself is stable or convergent. As an example, consider the damped harmonic oscillator system $$\ddot{x} = -10x - \dot{x}$$.\n\nWith the initial condition $x(0)=1$, $\dot{x}(0)=0$, the solution trajectory is $x(t) = e^{-t/2}\cos(\omega t)$ with $\omega=\sqrt{10-1/2^2}$. But see what happens when this is integrated using Euler's method:\n\n\n\n```python\n# Code for integration of a damped harmonic oscillator with Euler's method\ndef f_harmonic_oscillator(x,t):\n return np.array([x[1],-10*x[0]-x[1]])\n\n#initial x,dx\nx0 = np.array([1.0,0.0])\n#integrate for total time T\nT = 4.0\n#compare several time steps\ndts = [0.025,0.1,0.2]\nfor dt in dts:\n N = int(T/dt)\n times,points = integrate_euler(f_harmonic_oscillator,x0,N,dt)\n #times,points = integrate_scipy(f_harmonic_oscillator,x0,N,dt)\n times = np.array(times)\n points = np.array(points)\n #plt.plot(points[:,0],points[:,1],label='Euler, dt='+str(dt))\n plt.plot(times,points[:,0],label='Euler, dt='+str(dt))\ntimes = np.linspace(0,T,100)\nd = 0.5\nw = math.sqrt(10-d**2)\nground_truth = np.vstack((np.multiply(np.exp(-d*times),np.cos(times*w)),\n -d*np.multiply(np.exp(-d*times),w*np.sin(times*w)))).T\n#plt.plot(ground_truth[:,0],ground_truth[:,1],label='Exact')\nplt.plot(times,ground_truth[:,0],label='Exact')\nplt.xlabel('t')\nplt.ylabel('x')\n\nplt.legend()\nplt.show()\n```\n\nWhen the time step is small, the integrated trajectory does indeed converge toward 0, like the exact solution. However, at $\Delta t=0.1$, the solution is oscillatory between $[-1,1]$ and never converges. At $\Delta t = 0.2$, the solution \"blows up\" toward infinity! 
This is a serious problem for simulation, since we would like to avoid the computational expense of taking tiny steps, but while also integrating accurately.\n\nIn fact there are systems that are stable everywhere for which Euler's\nmethod is unstable everywhere! An example is the oscillator:\n$$\\begin{bmatrix}\\dot{x} \\\\ \\dot{y} \\end{bmatrix} = \\begin{bmatrix}0 & -1 \\\\ 1& 0\\end{bmatrix} \\begin{bmatrix}x \\\\ y \\end{bmatrix}.$$\nHere, the flow vector at a point is always perpendicular and CCW to the\nvector from the origin to that point. The solution trajectories are\ncircles $(r \\cos (t - \\theta), r \\sin (t - \\theta))$, where $(r,\\theta)$\nare the polar coordinates of the initial point. If we were to\napproximate this using Euler integration, each integration step brings the state\nfurther and further from the origin, spiraling outward without bound. Taking\nsmaller time steps helps a little, but cannot completely remedy the problem.\n\n\n```python\n#Code for plotting the phase space of a pure oscillator\ndef f_oscillator(x,t):\n return np.array([-x[1],x[0]])\nX, Y = np.meshgrid(np.arange(-3, 3, .5), np.arange(-3, 3, .5))\nUV = np.array([f_oscillator([x,y],0) for x,y in zip(X,Y)])\nU = UV[:,0]\nV = UV[:,1]\nplt.quiver(X, Y, U, V)\n\n#compare several time steps\nT = 8.0\ndts = [0.025,0.1,0.25]\nx0 = np.array([1,0])\nfor dt in dts:\n N = int(T/dt)\n times,points = integrate_euler(f_oscillator,x0,N,dt)\n #times,points = integrate_scipy(f_harmonic_oscillator,x0,N,dt)\n times = np.array(times)\n points = np.array(points)\n plt.plot(points[:,0],points[:,1],label='Euler, dt='+str(dt))\nplt.legend()\nplt.show()\n```\n\nSimple dynamic systems \n-----------------------------------\n### Basic physics: mass, force, and torque\n\nNewton's laws\n\nF = m a\n\nTorques and moment arms\n\n### Particle driven by forces\nA 1D particle with mass $m$, position $p$ and velocities $v$, controlled by forces $u$, follows Newton's laws under the second-order controlled dynamics:\n$$\\ddot{p} = u / m$$\n\nThis problem can be modeled\nwith a state $x = (p,v) \\in \\mathbb{R}^2$ and control\n$u = f \\in \\mathbb{R}$ with the dynamics equation\n\n\n\n\\begin{equation}\n\\dot{x} \\equiv \\begin{bmatrix} \\dot{p}\\\\ \\dot{v} \\end{bmatrix} = f(x,u) = \\begin{bmatrix}v \\\\ f/m \\end{bmatrix}. \\label{eq:PointMass}\n\\end{equation}\n\nThis function $f$ can be thought of as a *vector\nfield* that maps each 2D point to a 2D vector. If we plot this vector\nfield on the $(p,v)$ plane for various values of $f$, we observe a few\nthings. First, it is invariant to $p$. Second, the value of $f$ varies\nthe length and direction of the vectors in the $v$ direction.\n\nFor any initial state $x_0=(p_0,v_0)$ under a constant forcing\n$u(t) = f$, the velocity of the solution trajectory $x(t)$ can be\ndetermined through simple integration:\n$$v(t) = v_0+\\int_0^t f/m dt = v_0 + t f/m.\n\\label{eq:PointMassVelocity}$$ Continuing with the position, we see that\n$$p(t) = p_0+\\int_0^t v(t) dt = p_0+\\int_0^t (v_0 + t f/m) dt =p_0 + t v_0 + \\frac{1}{2}t^2 f/m.\n\\label{eq:PointMassPosition}$$ This means that the velocity of the\nparticle increases or decreases over time according to a linear function\nwith slope depending on $f$, and its position takes on a parabola\ntrajectory over time. Note that this model generalizes to any point\nparticle in $n$-D space, except that position, velocity, and force\nbecome vector quantities. 
The state is then a $2n$-D vector and the control\nis $n$-D.\n\nNow, let us suppose we wish to drive the particle from some position to\nanother (say, from 0 to 1) while starting and stopping at 0 velocity.\nCan we use a constant force to do so? We start with $x(0)=(0,0)$ and\nwish to achieve $x(T)=(1,0)$ at some future time $T$. Well,\nby (\ref{eq:PointMassPosition}) we would need $\frac{1}{2} T^2 f/m = 1$, but\nby (\ref{eq:PointMassVelocity}), we would need $T f / m = 0$. This is\na contradiction, so we cannot reach the target state via a constant\nforce.\n\nCan we use a linear interpolation instead? If we define $s=t/T$ as the\ninterpolation parameter, such a trajectory would have\n$v(t) = 0\cdot (1-s) + 0\cdot s = 0$ and\n$p(t) = 0\cdot (1-s) + 1\cdot s = t/T$. However, this trajectory does\nnot satisfy the dynamic constraints for any value of $t>0$ and any value of\n$f$!\n\nThere are a couple of ways to solve this problem. One is to make $f$ a\nclosed-loop control, such as the PD controller\n$f(t) \equiv u(x(t)) = -k_P (p-1) - k_D v$. We will show when we discuss [PID control](Control.ipynb) that for certain constants $k_P$ and $k_D$,\nthis choice forces the system to converge toward the\ntarget $(1,0)$. Another is to design a clever open-loop control that\nsatisfies the endpoint constraints and the dynamic constraints, such as\n$T = 2$, $f(t) = 1$ for $t\leq 1$ and $f(t) = -1$ for $1 < t \leq 2$.\nThis control accelerates the system to the point $(p,v)=(0.5,1)$ at\n$t=1$, and then decelerates it to $(1,0)$ at $t=2$. We shall see more\ngeneral ways of designing such control functions using the optimal\ncontrol methods presented in [later chapters](OptimalControl.ipynb).\n\n### Pendulum swing-up\n\nThe pendulum swing-up problem asks us to use an actuator with limited\ntorque to drive a pendulum with progressively larger and larger\nmomentum, until it can reach and stabilize about the vertical\nposition. The pendulum is assumed to be a point mass at the end of a\nbar of length $L$, with the other end fixed to rotate about the origin.\nThe system has a state space of $x=(\theta,\n\omega)$, with $\theta$ the CCW angle of the mass with respect to the\n$x$ axis, and $\omega$ its angular velocity. The start state is\n$x=(3\pi/2,0)$ and the goal state is $x=(\pi/2,0)$.\n\n************\n\n\n\n
Figure 1.\nIllustrating the dynamics of a controlled pendulum moving from the\ndown ($\theta = 3\pi/2 \approx 4.71$) to the up\n($\theta = \pi/2 \approx 1.57$) position. If the motor is strong enough,\nit can proceed almost directly toward the goal state. The legend\ndisplays the torque requirement needed to implement such a controller.\n
\n\n************\n\n\nThe actuator $u$ applies a torque about the origin, and is usually\nassumed bounded $|u|\leq u_{max}$. The force of gravity produces a\ntorque of magnitude $mg L \cos \theta$ about the origin. Since the\nmoment of inertia of the point mass about the origin is $mL^2$, the overall angular\nacceleration of the system is: $$\ddot{\theta} = \frac{g}{L} \cos \theta + \frac{u}{mL^2}.$$ Writing this in\ncanonical form, we have\n$$\dot{x} \equiv \begin{bmatrix}\dot{\theta}\\ \dot{\omega}\end{bmatrix} = f(x,u) = \begin{bmatrix}{\omega}\\{\frac{g}{L} \cos \theta}\end{bmatrix} + u \begin{bmatrix}0 \\ \frac{1}{mL^2} \end{bmatrix}.$$\nThis is a nonlinear equation without an analytical solution.\n\nWith $u_{max}$ sufficiently large ($u_{max} > mgL$), the motor has enough\nstrength to hold the pendulum steady in the horizontal position, and it is possible to\ndrive it monotonically to the goal\n([Fig. 1](#fig:PendulumStrongMotor)). But if the maximum torque is\nlowered below some threshold, the motor can no longer supply enough\ntorque to raise the pendulum without \"pumping,\" like a child on a\nswing, to increase the kinetic energy of the system. As we shall see when we discuss [bang-bang control](OptimalControl.ipynb), the optimal controller will then\nalternate between extreme controls to build up enough kinetic energy to\nreach the goal. This implies that the time evolution of the system will\nswitch between the flow fields shown in\n[Fig. 2](#fig:PendulumWeakMotor).\n\n************\n\n|Max CW|Max CCW|\n|----|----|\n| | |\n\n
Figure 2.\nThe flow fields corresponding to minimum (left) and maximum (right)\ncontrols for a pendulum swing-up problem with unit mass, unit length,\nand torque bounded at\n$|u| \\leq 5$\u2006N$\\cdot$m.\n
\n\n************\n\n### Cart-pole\n\nThe cart-pole problem is a toy underactuated system in which a cart that\ncan translate in the $x$ direction must swing up and/or balance a pole\nattached to it by a pin joint\n([Fig. 3](#fig:Cartpole)). Its control has been studied quite\nextensively, and its dynamics are similar to those of Segway mobility\nscooters.\n\n************\n\n\n\n
Figure 3.\n Illustration of the cart-pole problem.\n
\n\n************\n\n\nIn this problem, the system's configuration has two parameters\n$q=(q_1,q_2)$ which denote the $x$ translation of the cart and the angle\nof the pole, respectively. In the below convention we treat the\ncart-pole as a PR robot, so that $q_2$ is the CCW angle of the pole from\nthe $x$ axis. In the balancing task, we wish to design a controller to\nmaintain the state near the unstable equilibrium point $q_2=\\pi/2$ under\ndisturbances. In the swing-up task, we wish to go from $q_2=-\\pi/2$ to\n$\\pi/2$. (Keep in mind that the topology of $q_2$ is SO(2), so the pole\ncan swing either left or right.)\n\nThis is a highly dynamic system where the cart's motors can apply forces\n$u_1$ in the positive and negative $x$ direction. Optionally, the pole\ncould apply torques $u_2$, but it is typical to enforce $u_2=0$ so that\nthe pole swings passively. The cart and pole have masses $m_1$ and $m_2$\nrespectively, and the pole is assumed to have all of its mass\nconcentrated at a point distance $L$ away from the pin.\n\nIn [Chapter 14](RobotDynamics.ipynb), we shall derive the equations\nof motion for the cart-pole system to be the second-order system of\nequations: $$\\begin{aligned}\n(m_1+m_2) \\ddot{q_1} -\\frac{m_2 L}{2} \\ddot{q}_2 \\sin q_2 - \\frac{m_2 L}{2} \\dot{q}_2^2 \\cos q_2 = u_1 \\\\\n-\\frac{m_2 L}{2} \\ddot{q}_1 \\sin q_2 + \\frac{m_2 L^2}{4} \\ddot{q}_2 + m_2 g \\cos q_2 = u_2\n\\end{aligned}$$ where $g$ is the gravitational constant. Notice here\nthat the accelerations $\\ddot{q}_1$ and $\\ddot{q}_2$ are coupled, in\nthat they appear in both equations. Solving this system of equations, we\nobtain a solution: $$\\begin{bmatrix}{\\ddot{q}_1}\\\\{\\ddot{q}_2}\\end{bmatrix} = \n\\frac{1}{d} \\begin{bmatrix}\n\\frac{m_2 L^2}{4} & \\frac{m_2 L}{2} \\sin q_2 \\\\\n\\frac{m_2 L}{2} \\sin q_2 & m_1+m_2 \n\\end{bmatrix}\n\\begin{bmatrix}{u_1 + \\frac{m_2 L}{2} \\dot{q}_2^2 \\cos q_2}\\\\{u_2-m_2 g \\cos q_2}\\end{bmatrix}$$\nwith $d= \\frac{m_1 m_2 L^2}{4}+\\frac{m_2^2 L^2}{4} \\cos^2 q_2$. For any\ngiven choice of $u_1$ and $u_2$, this can then be integrated to obtain\nsolution trajectories.\n\nThe cart-pole system is highly sensitive to the behavior of the cart.\n[Fig. 4](#fig:CartpoleSpin) displays the behavior of the swing-up\nproblem under 1.5 sinusoidal movements of the cart with amplitude 0.5.\nEach plot shows a slighly different period. In this setup, the pole\nswings over the upright position only for periods in approximately the\nrange $[1.12,1.29]$. There is another range of periods where the pole is\nswung about the upright position in the range $[1.32,1.39]$.\n\n*************\n\n|Period 1.288s | Period 1.5s |\n|----|----|\n| | |\n\n
Figure 4.\n Behavior of the cart-pole problem as a function of time. Slightly\nchanging the period of the cart's movement from 1.288\u2006s to 1.5\u2006s fails\nto swing the pendulum past the upright position. A good swing-up\ncontroller might use a period of 1.288\u2006s and then switch to a stabilizing\ncontroller around\n$t=2$\u2006s.\n
\n\n*************\n\n### Dubins car\n\nA Dubins car model approximates the mobility of a standard 2-axle car\nmoving on a flat surface, ignoring accelerations. In this model,\n$(p_x,p_y)$ is the center of its rear axle, $\theta$ is its heading, and\n$L$ is the distance between the front and rear axles. The control\n$u=(v,\phi)$ specifies the velocity $v$ and the steering angle of the\nfront wheels $\phi$. The dynamics of this system are given as follows:\n$$\dot{x} \equiv \begin{bmatrix}{\dot{p}_x}\\{\dot{p}_y}\\{\dot{\theta}}\end{bmatrix} = f(x,u) = \begin{bmatrix}{v \cos \theta}\\{v \sin \theta}\\{\frac{v}{L}\tan \phi}\end{bmatrix}$$\nNote that the velocity vector is always parallel to the heading\n$(\cos \theta,\sin \theta)$, and the turning rate $\dot{\theta}$ depends\non both the steering angle and the velocity. For constant $u$, the\nposition $(p_x,p_y)$ traces out straight lines (with $\phi=0$) or arcs\n(with $\phi\neq 0$).\n\nTypically, the control is subject to bounds\n$v_{min} \leq v \leq v_{max}$ and $|\phi| \leq \phi_{max}$. With these\nlimits, the vehicle has a minimum turning radius of\n$\frac{L}{\tan \phi_{max}}$. The vehicle cannot move sideways, and\nmust instead perform \"parallel parking\" maneuvers in order to move in\nthe state-space direction $(-\sin \theta,\cos \theta,0)$.\n\nLinear time invariant systems\n-----------------------------\n\n\nIn general, the $f$ function may be nonlinear in its arguments. However,\na widely studied class of dynamical systems is the *linear,\ntime-invariant* (LTI) system. In an LTI system, the dynamics equation\ntakes on the form $$\dot{x} = Ax + Bu$$ where $A$ and $B$ are constant\nmatrices of size $n \times n$ and $n \times m$, respectively. This type\nof system is easily analyzed using results from linear algebra and can\nrepresent a wide range of dynamic behavior.\n\nFor example, the 1D point\nmass system (\ref{eq:PointMass}) can be represented as an LTI system with:\n$$\dot{x} \equiv \begin{bmatrix}\dot{p} \\ \dot{v} \end{bmatrix} = \begin{bmatrix}0&1\\0&0\end{bmatrix} \begin{bmatrix}p \\ v \end{bmatrix} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix}u$$\n\nIn the discrete-time\nform (\ref{eq:DiscreteTimeDynamicEquation}), an LTI system takes the\nform $$x_{t+1} = A x_t + B u_t.$$ A continuous-time LTI system can be\nconverted to an equivalent discrete-time LTI system through integration.\n\nFor example, the point mass system with time step $\Delta t$ and\nconstant control can be represented in discrete time as\n$$\begin{aligned}\nx_{t+1} &\equiv \begin{bmatrix}{p(t+\Delta t)}\\{v(t+\Delta t)}\end{bmatrix} = \begin{bmatrix}{p(t) + \Delta t v(t) + \frac{1}{2} f /m\Delta t^2}\\{v(t)+\Delta t f/m}\end{bmatrix} \\\n& = \begin{bmatrix}1 & \Delta t \\ 0 & 1\end{bmatrix} x_t + \begin{bmatrix}{\frac{1}{2}\Delta t^2 / m}\\{\Delta t/m}\end{bmatrix} u_t\n\end{aligned}$$\n\nMoreover, nonlinear systems can be approximated by an LTI system about\nany equilibrium point in state space using linearization. Consider\nlinearizing a system of the form $\dot{x} = f(x) + g(x)u$ about state\n$x_0$ and control $u_0$. Also assume that $u_0$ applied at $x_0$ leads\nto no derivative (i.e., $f(x_0)+g(x_0) u_0=0$). Perform a change of\nvariables to $(\Delta x, \Delta u)$ such that $x = x_0 + \Delta x$ and\n$u = u_0 + \Delta u$. 
Then $$\\begin{aligned}\n\\dot{x} & = \\dot {\\Delta x} = (f(x_0)+g(x_0) u_0) + \\left(\\frac{\\partial f}{\\partial x}(x_0) + \\frac{\\partial g}{\\partial x}(x_0)u_0\\right) \\Delta x + g(x_0) \\Delta u \\\\\n & = \\left(\\frac{\\partial f}{\\partial x}(x_0) + \\frac{\\partial g}{\\partial x}(x_0)u_0\\right) \\Delta x + g(x_0) \\Delta u \n\\end{aligned}$$ This is LTI in $(\\Delta x,\\Delta u)$ with\n$A=\\frac{\\partial f}{\\partial x}(x_0) + \\frac{\\partial g}{\\partial x}(x_0)$\nand $B=\\frac{\\partial g}{\\partial x}(x_0)$.\n\n\nNoise, uncertainty, disturbances, errors\n----------------------------------------\n\nBesides handling the differential constraint of the dynamics function,\nthe purpose of control is to handle deviations from an idealized state\nor trajectory. These deviations are in various contexts called noise,\nbias, uncertainty, disturbances, or errors. When they do occur, a\nvariety of problems could happen: the robot could fail to reach a goal,\nhit an obstacle, reach an unrecoverable state, or even run into a\nperson! A *robust* planner or controller is designed to produce\nhigh-quality behavior even when such deviations exist. It is important\nto recognize *errors are a fact of life* for all robots outside of\ntightly controlled industrial environments.\n\nGenerally speaking, errors can be characterized as being either *noisy*\nor *systematic*. A noisy error is one obeys no obvious pattern each time\nit is measured. A systematic error is one that does obey a pattern. We\nshall also see that for the purposes of control, these deviations fall\nunder two fundamental classes, which we call *motion uncertainty* and\n*state uncertainty*.\n\n*Disturbances* are a form of motion uncertainty that cause the state to\nbe moved in unexpected ways at future points in time. For example, wind\ngusts are very hard to predict in advance, and can move a drone from a\ndesired path.\n\n*Actuation error* occurs when a desired control is not executed\nfaithfully. An example would be a controller that outputs desired\ntorques for a robot, but where these are not followed exactly by the\nlow-level motor controller. These errors can be treated as motion\nuncertainty.\n\n*Measurement error* is a type of state uncertainty where due to sensor\nnoise the state is observed incorrectly. Understanding measurement error\nis critical for closed-loop controllers which base their behavior on the\nmeasured state.\n\n*Partial observability* means that only certain aspects of the state\n*can possibly be measured* by the available sensors. For example, a\nmobile robot with a GPS sensor can only measure position, whereas it may\nneed to model velocity as part of its state. State estimation techniques,\nsuch as Kalman filtering and particle filtering,\ncan be used to extrapolate the unobserved components of state to provide\nreasonable state estimates. With those estimates, there will be some\nremaining *localization error* that the controller will still need to\nhandle.\n\n*Modeling error*, or *parameter uncertainty* means that the true\ndynamics function differs from what is *known* to the robot. This is\nsometimes considered a third class of uncertainty, but could also be\ntreated as state uncertainty as we shall see below.\n\nMotion uncertainty can be modeled as a disturbance to the dynamics\n$$\\dot{x} = f(x,u) + \\epsilon_d$$ where $\\epsilon_d(t) \\in E_d$ is some\nerror. Here $E_d$ is a set of possible disturbances, or a probability\ndistribution over disturbances. 
Motion uncertainty will cause an\nopen-loop system to \"drift\" from its intended trajectory over time. A\nproperly designed closed-loop controller can regulate the disturbances\nby choosing controls that drive the system back to intended trajectory.\n\nState uncertainty can be modeled as a discrepancy between the estimated\nstate $\\hat{x}$ and the \"true\" state of the system $x$, such that\n$\\hat{x} = x + \\epsilon_x$. This means that in open-loop trajectory\nplanning, we will start a plan from the estimated state $\\hat{x}$. Then,\neven if there was no motion uncertainty and we planned the best control\nsequence possible $u(t)$ starting from $\\hat{x}$, bad things could still\nhappen when it is executed. For closed-loop control, the control policy\n$u(\\hat{x})$ is *always chosen based on an incorrect estimate*. This\nmakes it much more difficult to ensure that it is correcting for true\ndeviations from the intended trajectory, rather than phantom errors\ncaused by uncertainty.\n\nTo design a robust controller, we might try to characterize $E_d$ and\n$E_x$ by observing likely disturbance values. If we observe systematic\nerrors like a constant *bias*, then perhaps we can improve our models to\nbe more accurate and cancel out the systematic error (called\n*calibration*). On the other hand, noisy errors are much harder to\ncancel out. To make any theoretical guarantees about a system's behavior\nin the case of motion uncertainty, it is usually necessary to ensure\nthat noise in $E_x$ and $E_d$ are relatively small.\n\nFinally, let us note that modeling error can often be treated as state\nuncertainty on a different dynamical system on an *augmented state*\nvector. Suppose that we are controlling a 1D point mass, but we do not\nobserve the true mass $m$. Instead, we observe $\\hat{m}$ which is\ndisturbed from the true value by $\\epsilon_m$ such that\n$\\hat{m} = m + \\epsilon_m$. If we construct the augmented state vector\n$(p,v,m)\\in \\mathbb{R}^3$, then the state follows dynamics\n$$\\dot{x} \\equiv \\begin{bmatrix}\\dot{p} \\\\ \\dot{v} \\\\ \\dot{m} \\end{bmatrix} = f(x,u) = \\begin{bmatrix} v \\\\ f/m \\\\ 0 \\end{bmatrix}.$$\nHence, the modeling error is equivalent to the state uncertainty vector\n$$\\epsilon_x = \\begin{bmatrix} 0 \\\\ 0 \\\\ \\hat{m}-m \\end{bmatrix}.$$\n\nTrajectories with timing\n-------------------------------------\n\nIt is important to discuss the difference between trajectories of a\ndynamic system vs. the geometric paths that we worked with in kinematic\nmotion planning. In a dynamic system, the trajectory in state space\n$x(t):[0,T]\\rightarrow \\mathbb{R}^n$ is parameterized by time. The state\nspace of a robotic system typically includes both configuration and\nvelocity components. By contrast, a geometric path moves in\nconfiguration space and has no inherent notion of time.\n\nMoreover, a geometric path can move in any direction as long as it does\nnot touch an obstacle, whereas a valid dynamic trajectory can only move\nin directions that can be generated by feasible controls. Hence we must\nconsider both time and dynamic constraints when representing valid\ntrajectories.\n\n### Trajectory representation\n\nOne basic representation is to store a trajectory as a sequence of\nstates sampled along the trajectory $(x_0,\\ldots,x_n)$ along with the\ninitial time $t_0$ (often assumed to be 0) and the time step $\\Delta t$\nbetween each point. An approximate interpolation between each point can\nbe performed piecewise-linearly or with splines. 
For example, the\npiecewise linear approximation has\n$$x(t) = x_k + \\frac{t-t_0-k\\Delta t}{\\Delta t}(x_{k+1} - x_k)$$ defined\nover $t \\in [t_0,t_0+n\\Delta t]$, where\n$k = \\lfloor \\frac{t-t_0}{\\Delta t} \\rfloor$ is the index of the\ntrajectory segment corresponding to the time $t$.\n\nMore generally, the\ntrajectory could store both states $(x_0,\\ldots,x_n)$ and times\n$(t_0,\\ldots,t_n)$, with a slightly modified interpolation function\n$$x(t) = x_k + \\frac{t-t_k}{t_{k+1}-t_k}(x_{k+1} - x_k)$$ defined over\nthe range $[t_0,t_n]$ and $k$ determined to be the point in time so that\n$t_k \\leq t \\leq t_{k+1}$.\n\nIf we are given an *integrator* (i.e., a *simulator*) for the dynamics\nfunction, trajectories can be encoded in a control-space representation\n$(x_0,u)$, which captures the initial state $x_0$ and an arbitrary control trajectory $u(t)$.\nFrom these items, the integrator *generates* the state trajectory\n$x(t)$. Specifically, we assume the existence of a function\n$Sim(f,x_0,u,t)$ that integrates the dynamics $f$ forward over time $t$,\nstarting from $x_0$ and using the control trajectory $u$. The control\n$u$ can be stored using arbitrary path representations, like\npiecewise-constant functions, piecewise-linear functions, polynomials,\nand splines. Then, we can regenerate the state-space trajectory\n$x(t) \\equiv Sim(f,x_0,u,t)$ as needed.\n\n### Path to trajectory conversion\n\nIt is almost trivial to convert trajectories to paths: simply \nconstruct a state space path and dropping the time component.\nThe converse --- creating a timed, dynamically-feasible trajectory from\na path --- can in some cases be quite challenging or even impossible. The reason is that the speed at which a robot should execute a path requires foresight into future twists and turns, like a race car driver slowing down ahead of a hairpin turn. \n\nIf a piecewise linear path were to be executed at a constant rate, then the timed trajectory would instantaneously change velocity at each milestone. But infinite forces are needed to execute instantaneous changes of velocity, so sending such trajectories to motors would lead to overshooting corners. We will examine\nbetter methods for industrial robots to start and stop smoothly at milestones\nwhen we discuss [motion generation](RobotControl.ipynb#Motion-queues-(motion-generation)). The basic idea is to speed up and slow down gradually, while choosing the point in time when the robot slows so that the robot ends exactly at the next milestone.\n\nThe more general case is known as a *time-scaling* problem. Mathematically, we describe such a problem as being given a geometric path $p(s)$ as input, and we wish to find a timed path $x(t)$ such that:\n\n* The trajectory follows the path: for all $t$, there exists an $s$ such that $x(t) = p(s)$\n* First-order dynamic constraints satisfied: $g(t,\\dot{x}(t)) \\leq 0$ for all $t$\n* Second-order dynamic constraints satisfied: $h(t,\\dot{x}(t),\\ddot{x}(t)) \\leq 0$ for all $t$\n* Possibly higher-order constraints as well...\n\nThis is formulated as finding a smooth, monotonically increasing 1D function $t(s)$ that defines the timing along the path. At one end of the domain, there is a boundary constraint $t(0)=0$. Since $t(s)$ is monotonically increasing, it has a well-defined inverse $s(t)$, so that the trajectory is defined via $x(t) = p(s(t))$. 
and we can define the trajectory velocity, acceleration, and higher order derivatives using the chain rule:\n\n* $\\dot{x}(t) = p^\\prime(s(t)) s^\\prime(t)$ \n* $\\ddot{x}(t) = p^{\\prime\\prime}(s(t)) s^\\prime(t)^2 + p^\\prime(s(t)) s^{\\prime\\prime}(t)$ \n* ...\n\nThen, a dynamic constraint of order $k$ can then be rewritten in terms of $p$ (which is known), $s$, and their derivatives up to order $k$. Choosing $s(t)$ then becomes a constrained trajectory optimization problem, which we will discuss when we visit the topic of [optimal control](OptimalControl.ipynb).\n\n\nSummary\n-------\n\n* Continuous-time dynamic systems are represented by a dynamics equation in the canonical form $\\dot{x}(t) = f(x(t),u(t))$, where $x$ is the state trajectory and $u$ is the control trajectory. Discrete-time systems are represented by the form $x_{t+1} = f(x_t,u_t)$.\n* Integration (or simulation) is needed to determine the trajectory that the state will follow under a given control. Numerical instability can result with a time step that is too large.\n* Dynamic systems can be convergent, stable, or divergent under a given controller.\n* \n\nExercises\n---------\n\n\n```python\n\n```\n", "meta": {"hexsha": "193e673f1c9bd07441cd2e331d76bea71a904272", "size": 190375, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "WhatAreDynamicsAndControl.ipynb", "max_stars_repo_name": "Nitin-Mane/RoboticSystemsBook", "max_stars_repo_head_hexsha": "715e327c42d57e54cf0505ab05f4d34bb91d8899", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "WhatAreDynamicsAndControl.ipynb", "max_issues_repo_name": "Nitin-Mane/RoboticSystemsBook", "max_issues_repo_head_hexsha": "715e327c42d57e54cf0505ab05f4d34bb91d8899", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "WhatAreDynamicsAndControl.ipynb", "max_forks_repo_name": "Nitin-Mane/RoboticSystemsBook", "max_forks_repo_head_hexsha": "715e327c42d57e54cf0505ab05f4d34bb91d8899", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 181.6555343511, "max_line_length": 56604, "alphanum_fraction": 0.8698305975, "converted": true, "num_tokens": 11515, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5926665999540697, "lm_q2_score": 0.5039061705290805, "lm_q1q2_score": 0.2986483567833458}}
{"text": "# Learning disentangled representations in Flatland\n\nThis notebook uses our method to learn either entangled or disentangled representations in the Flatland environment (see Caselles-Dupr\u00e9, Hugo, et al. \"Flatland: a lightweight first-person 2-d environment for reinforcement learning.\" arXiv preprint arXiv:1809.00510 (2018).)\n\n\n```python\nimport os\nimport gym\nimport math\nimport numpy as np\nimport time\nfrom PIL import Image\nimport matplotlib.pyplot as plt\nfrom IPython import display\nimport torch\nimport random\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nos.chdir('src/flatland/flat_game/')\nfrom env import Env\n\n```\n\n pygame 1.9.6\n Hello from the pygame community. https://www.pygame.org/contribute.html\n Loading chipmunk for Linux (64bit) [/home/william/.local/lib/python3.6/site-packages/pymunk/libchipmunk.so]\n\n\n# Flatworld environment\n\nWe start by defining the flatworld environment, which is based on the code available at https://github.com/Caselles/NeurIPS19-SBDRL. This environment returns pixel observations of a ball on a cyclical 2D grid. The available (discrete) actions step the ball by a fixed amount in all four directions.\n\n\n```python\nRADIUS = 15\nPERIOD = 10\n\nclass FlatWorld():\n \n class action_space():\n def __init__(self,n_actions):\n self.n = n_actions\n \n def sample(self, k=1):\n return torch.randint(0,self.n,(k,)) \n\n class observation_space():\n def __init__(self):\n self.shape = [84,84]\n \n def __init__(self, env_parameters, period=10, radius=15):\n\n self.action_space = self.action_space(4)\n self.observation_space = self.observation_space() \n self.period = period\n \n self.step_size = 0.1*(63-2*env_parameters['agent']['radius'])/period\n start_positions_list = [27 + 10*self.step_size*i for i in range(period)]\n self.start_positions = []\n for i in start_positions_list:\n for j in start_positions_list:\n self.start_positions.append((i,j))\n \n env_parameters['agent']['radius'] = radius\n self.env = Env(**env_parameters)\n \n def reset(self, start_position=None):\n if start_position==None:\n obs = self.env.reset(position=random.sample(self.start_positions, 1)[0])\n else:\n obs = self.env.reset(position=start_position)\n return torch.FloatTensor(obs)/255\n \n def step(self, action):\n action_dict = self.create_action_dict(action)\n obs, reward, done, info = self.env.step(action_dict)\n return torch.FloatTensor(obs)/255\n \n def create_action_dict(self, action):\n action_dict = {}\n if action == 0:\n action_dict['longitudinal_velocity'] = 0\n action_dict['lateral_velocity'] = self.step_size\n action_dict['angular_velocity'] = 0\n if action == 1:\n action_dict['longitudinal_velocity'] = 0\n action_dict['lateral_velocity'] = -self.step_size\n action_dict['angular_velocity'] = 0\n if action == 2:\n action_dict['longitudinal_velocity'] = self.step_size\n action_dict['lateral_velocity'] = 0\n action_dict['angular_velocity'] = 0\n if action == 3:\n action_dict['longitudinal_velocity'] = -self.step_size\n action_dict['lateral_velocity'] = 0\n action_dict['angular_velocity'] = 0\n return action_dict\n \n```\n\n\n```python\nagent_parameters = {\n 'radius': 15,\n 'speed': 10,\n 'rotation_speed' : math.pi/8,\n 'living_penalty': 0,\n 'position': (30,30),\n 'angle': 0,\n 'sensors': [\n \n {\n 'nameSensor' : 'proximity_test',\n 'typeSensor': 'proximity',\n 'fovResolution': 64,\n 'fovRange': 300,\n 'fovAngle': math.pi ,\n 'bodyAnchor': 'body',\n 'd_r': 0,\n 'd_theta': 0,\n 'd_relativeOrientation': 0,\n 
'display': False,\n }\n \n \n ],\n 'actions': ['forward', 'turn_left', 'turn_right', 'left', 'right', 'backward'],\n 'measurements': ['health', 'poisons', 'fruits'],\n 'texture': {\n 'type': 'color',\n 'c': (255, 255, 255)\n },\n 'normalize_measurements': False,\n 'normalize_states': False,\n 'normalize_rewards': False\n}\n\nenv_parameters = {\n 'map':False,\n 'n_rooms': 2,\n 'display': False,\n 'horizon': 10001,\n 'shape': (84, 84),\n 'mode': 'time',\n 'poisons': {\n 'number': 0,\n 'positions': 'random',\n 'size': 10,\n 'reward': -10,\n 'respawn': True,\n 'texture': {\n 'type': 'color',\n 'c': (255, 255, 255),\n }\n },\n 'fruits': {\n 'number': 0,\n 'positions': 'random',\n 'size': 10,\n 'reward': 10,\n 'respawn': True,\n 'texture': {\n 'type': 'color',\n 'c': (255, 150, 0),\n }\n },\n 'obstacles': [\n \n ],\n 'walls_texture': {\n 'type': 'color',\n 'c': (1, 1, 1)\n },\n 'agent': agent_parameters\n}\n```\n\n**Now show a few consecutive states from this environment**\n\n\n```python\nimport matplotlib.gridspec as gridspec\n\nenv = FlatWorld(env_parameters, period=PERIOD, radius=RADIUS)\nplt.figure(figsize = (3,3))\ngs1 = gridspec.GridSpec(3, 3)\ngs1.update(wspace=0.02, hspace=0.02)\nplt.grid(None)\nstate = env.reset()\nfor i in range(9):\n ax = plt.subplot(gs1[i])\n ax.axis('off')\n ax.set_aspect('equal')\n ax.imshow(state)\n display.display(plt.gcf())\n time.sleep(0.2)\n display.clear_output(wait=True)\n action = random.sample([0,1,2,3],k=1)[0]\n action = 2\n #print(env.env.agent.body.position)\n state = env.step(action)\n \nplt.savefig(\"env.png\", bbox_inches='tight')\n```\n\n### Latent space\n\n**Encoder/Decoder**\n\nNow we want to learn to represent this environment in some latent space (which we, for now, simply assume to be 4-dimensional). We will require both an encoder and decoder, which will use convolutional neural networks.\n\n\n```python\nclass Encoder(nn.Module):\n\n def __init__(self, n_out=4, n_hid = 64):\n\n super().__init__()\n\n self.conv = nn.Conv2d(1, 5, 10, stride=3)\n self.fc1 = nn.Linear(180, n_hid)\n self.fc2 = nn.Linear(n_hid, n_out)\n\n def forward(self, x):\n x = F.relu(self.conv(x.unsqueeze(0).unsqueeze(1)))\n x = F.max_pool2d(x, 4, 4)\n x = x.view(-1, 180)\n x = F.relu(self.fc1(x))\n return F.normalize(self.fc2(x)).squeeze()\n\nclass Decoder(nn.Module):\n \n def __init__(self, n_in=4, n_hid = 64):\n\n super().__init__()\n \n self.fc1 = nn.Linear(n_in, n_hid)\n self.fc2 = nn.Linear(n_hid, 180)\n self.conv = nn.ConvTranspose2d(5, 1, 34, stride=10)\n\n def forward(self, x):\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = x.view(1,5,6,6)\n x = self.conv(x)\n return torch.sigmoid(x).squeeze()\n```\n\n\n```python\nencoder = Encoder(n_out=4)\ndecoder = Decoder(n_in=4)\nprint(encoder)\nprint(decoder)\n```\n\n Encoder(\n (conv): Conv2d(1, 5, kernel_size=(10, 10), stride=(3, 3))\n (fc1): Linear(in_features=180, out_features=64, bias=True)\n (fc2): Linear(in_features=64, out_features=4, bias=True)\n )\n Decoder(\n (fc1): Linear(in_features=4, out_features=64, bias=True)\n (fc2): Linear(in_features=64, out_features=180, bias=True)\n (conv): ConvTranspose2d(5, 1, kernel_size=(34, 34), stride=(10, 10))\n )\n\n\n**Make sure dimensions match**\n\n\n```python\nobs = torch.FloatTensor(state)\nprint(obs.shape)\nlatent = encoder(obs)\nprint(latent.shape)\nreconstructed = decoder(latent)\nprint(reconstructed.shape)\n```\n\n torch.Size([84, 84])\n torch.Size([4])\n torch.Size([84, 84])\n\n\n**Representation**\n\nThe crux of the matter is learning to 'represent' actions in 
the observation space with actions in latent space. Here, we will do this by assuming every action is a generalized rotation in latent space, which we denote with a series of 2-dimensional rotations.\n\nA 2-d rotation is given by:\n\n\\begin{pmatrix}\n\\cos(\\theta) & \\sin(\\theta) \\\\\n-\\sin(\\theta) & \\cos(\\theta)\n\\end{pmatrix}\n\nand we denote a rotation in dimensions $i$ and $j$ of a higher dimensional space as $R_{i,j}(\\theta)$. For $i=1$, $j=4$, in a 4-dimensional space:\n\n\\begin{equation}\nR_{1,4}(\\theta) = \n\\begin{pmatrix}\n\\cos(\\theta) & 0 & 0 & \\sin(\\theta) \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n-\\sin(\\theta) & 0 & 0 & \\cos(\\theta)\n\\end{pmatrix}\n\\end{equation}\n\nAn arbitrary rotation, denoted $g$ as I am subtly moving towards this being a group action, can then be written as:\n\n\\begin{equation}\n g(\\theta_{1,2},\\theta_{1,3},\\dots,\\theta_{n-1,n}) = \\prod_{i=1}^{n-1} \\prod_{j=1+1}^{n} R_{i,j}(\\theta_{i,j})\n\\end{equation}\n\nwhich has $n(n-1)/2$ free parameters (i.e. $\\theta_{i,j}$'s).\n\n\n```python\nclass Representation():\n\n def __init__(self, dim=4):\n self.dim = dim\n self.params = dim*(dim-1)//2\n self.thetas = torch.autograd.Variable(np.pi*(2*torch.rand(self.params)-1)/dim, requires_grad=True)\n\n self.__matrix = None\n \n def set_thetas(self, thetas):\n self.thetas = thetas\n self.thetas.requires_grad = True\n self.clear_matrix()\n \n def clear_matrix(self):\n self.__matrix = None\n \n def get_matrix(self):\n if self.__matrix is None:\n k = 0\n mats = []\n for i in range(self.dim-1):\n for j in range(self.dim-1-i):\n theta_ij = self.thetas[k]\n k+=1\n c, s = torch.cos(theta_ij), torch.sin(theta_ij)\n\n rotation_i = torch.eye(self.dim, self.dim)\n rotation_i[i, i] = c\n rotation_i[i, i+j+1] = s\n rotation_i[j+i+1, i] = -s\n rotation_i[j+i+1, j+i+1] = c\n\n mats.append(rotation_i)\n\n def chain_mult(l):\n if len(l)>=3:\n return l[0]@l[1]@chain_mult(l[2:])\n elif len(l)==2:\n return l[0]@l[1]\n else:\n return l[0]\n\n self.__matrix = chain_mult(mats)\n \n return self.__matrix\n```\n\n**LatentWorld**\n\nNow, for symmetry's sake, we'll also have a `LatentWorld` which acts as the environment in the latent space.\n\n\n```python\nclass LatentWorld():\n \n class action_space():\n def __init__(self,n_actions):\n self.n = n_actions\n \n def sample(self, k=1):\n return torch.randint(0,self.n,(k,))\n\n class observation_space():\n def __init__(self,n_features):\n self.shape = [n_features]\n \n def __init__(self,\n dim=4,\n n_actions=4,\n action_reps=None):\n\n self.dim = dim\n\n self.action_space = self.action_space(n_actions)\n self.observation_space = self.observation_space(dim)\n \n if action_reps is None:\n self.action_reps = [Representation(dim=self.dim) for _ in range(n_actions)]\n else:\n if len(action_reps)!=n_actions:\n raise Exception(\"Must pass an action representation for every action.\")\n if not all([rep.dim==self.dim]):\n raise Exception(\"Action representations do not act on the dimension of the latent space.\")\n self.action_reps = action_reps\n \n def reset(self, state_init):\n self.state = state_init\n return self.get_observation()\n \n def clear_representations(self):\n for rep in self.action_reps:\n rep.clear_matrix()\n \n def get_representation_params(self):\n params = []\n for rep in self.action_reps:\n params.append(rep.thetas)\n return params\n \n def save_representations(self, path):\n if os.path.splitext(path)[-1] != '.pth':\n path += '.pth'\n rep_thetas = [rep.thetas for rep in self.action_reps]\n return 
torch.save(rep_thetas, path)\n \n def load_reprentations(self, path):\n rep_thetas = torch.load(path)\n for rep in self.action_reps:\n rep.set_thetas(rep_thetas.pop(0))\n \n def get_observation(self):\n return self.state\n \n def step(self,action):\n self.state = torch.mv(self.action_reps[action].get_matrix(), self.state)\n obs = self.get_observation()\n return obs\n```\n\n## 3. Training\n\nSo the basic training loop is pretty straightfoward. We simply play out episodes from random starting configurations, encoded by the `Encoder`, for `ep_steps` time-steps. Each random action is executed in both the `FlatWorld` and the `LatentWorld`, and then the latent state is transformed to the observation space by the `Decoder` where the loss function measures its deviation from the true state.\n\nNote: sometimes training without the disentanglement regularization fails to find toroidal structure, especially when the radius of the ball is very small. \n\n\n```python\ndim = 4\n\nobs_env = FlatWorld(env_parameters, period=PERIOD, radius=RADIUS)\nlat_env = LatentWorld(dim = dim,\n n_actions = obs_env.action_space.n)\ndecoder = Decoder(n_in = dim, n_hid = 64)\nencoder = Encoder(n_out = dim, n_hid = 64)\n\noptimizer_dec = optim.Adam(decoder.parameters(),\n lr=1e-2,\n weight_decay=0)\n\noptimizer_enc = optim.Adam(encoder.parameters(),\n lr=1e-2,\n weight_decay=0)\n\noptimizer_rep = optim.Adam(lat_env.get_representation_params(),\n lr=1e-2,\n weight_decay=0)\n\nlosses = []\n```\n\n\n```python\nn_sgd_steps = 3000\nep_steps = 5\nbatch_eps = 16\n\ni = 0\n\nt_start = time.time()\n\ntemp = 0\n\nwhile i < n_sgd_steps:\n \n loss = torch.zeros(1)\n \n for _ in range(batch_eps):\n t_ep = -1\n while t_ep < ep_steps:\n if t_ep == -1:\n obs_x = obs_env.reset()\n obs_z = lat_env.reset(encoder(obs_x))\n else:\n action = obs_env.action_space.sample().item()\n obs_x = obs_env.step(action)\n obs_z = lat_env.step(action)\n \n t_ep += 1 \n \n obs_x_recon = decoder(obs_z)\n\n loss += F.binary_cross_entropy(obs_x_recon, obs_x)\n \n loss /= (ep_steps*batch_eps)\n \n losses.append(loss.item())\n \n optimizer_dec.zero_grad()\n optimizer_enc.zero_grad()\n optimizer_rep.zero_grad()\n loss.backward()\n optimizer_enc.step()\n optimizer_dec.step()\n optimizer_rep.step()\n \n # Remember to clear the cached action representations after we update the parameters!\n lat_env.clear_representations()\n\n i+=1\n \n if i%10==0:\n print(\"iter {} : loss={:.3f} : last 10 iters in {:.3f}s\".format(i, loss.item(), time.time() - t_start),\n end=\"\\r\" if i%100 else \"\\n\")\n t_start = time.time()\n```\n\n## 4. 
Testing\n\nTesting is easy too, we just play out an episode and see how well the reconstructed image agrees with the ground truth!\n\n\n```python\ndef plot_state(obs, ax):\n ax.imshow(obs)\n ax.set_aspect('equal')\n ax.set_xticks([])\n ax.set_yticks([])\n \n return ax\n \nn_steps = 10\n\nfig, (ax1,ax2) = plt.subplots(1, 2)\n\nax1.set_title(\"Ground truth\")\nax2.set_title(\"Reconstruction\")\n\nfor i in range(n_steps+1):\n \n if i==0:\n action = \"N\\A\"\n obs_x = obs_env.reset()\n obs_z = lat_env.reset(encoder(obs_x))\n else:\n action = obs_env.action_space.sample().item()\n obs_x = obs_env.step(action)\n obs_z = lat_env.step(action)\n \n obs_x_recon = decoder(obs_z)\n \n fig.suptitle('step {} : last action = {}'.format(i, action), fontsize=16)\n \n plot_state(obs_x.detach().numpy(),ax1)\n plot_state(obs_x_recon.detach().numpy(),ax2)\n \n display.clear_output(wait=True)\n display.display(plt.gcf())\n time.sleep(0.5)\n \ndisplay.clear_output(wait=False)\n```\n\nWe will now have a look at the latent space, we will make a 2D projection of the 4D latent space for every possible frame (There are 121 possible frames in this environment). Note that since we use random projections, in some cases the toric structure we find is more obvious than in others.\n\n**Positions in latent space**\n\n\n```python\nfrom sklearn.random_projection import GaussianRandomProjection\n\nlatent_points = []\n\nfor start_position in obs_env.start_positions:\n obs = obs_env.reset(start_position=start_position)\n latent = encoder(obs)\n latent_points.append(latent.detach().tolist())\n\nlatent_map = np.array(latent_points)\n```\n\n\n```python\nperiod = obs_env.period\n\ncolor=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf',\"#2fa36b\"]\nmarks=[\"1\",\"2\",\"3\",\"4\",\"+\",\">\",\"<\",\"^\",\"v\",\"x\",\"d\"]\npca = GaussianRandomProjection(n_components=2)\n\nlatent_2d = pca.fit_transform(latent_map)\n\nfig = plt.figure(figsize=(6,4))\nax = fig.add_subplot(111)#, projection='3d')\ns=[120]*5+[50]*6\nfor i in range (period**2):\n ax.scatter(x=latent_2d.transpose()[0][i],\n y=latent_2d.transpose()[1][i],\n c=color[i//period], \n s=s[i%period],\n marker=marks[i%period])\nplt.title('Representations - Our method',fontsize=16)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14)\nfig.show()\nplt.savefig(\"latent_flatland.png\", bbox_inches='tight')\n```\n\n**2) Plot Action Representations**\n\n\n```python\nwidth=0.5\n\nrep_thetas = [rep.thetas.detach().numpy() for rep in lat_env.action_reps]\n\nfor rep in lat_env.action_reps:\n print(rep.get_matrix())\n print(torch.matrix_power(rep.get_matrix(), 5))\n\nplt_lim = max( 0.12, max([max(t) for t in rep_thetas])/(2*np.pi) )\ntitles = [\"up\", \"down\", \"right\", \"left\"]\n\nwith plt.style.context('seaborn-paper', after_reset=True):\n\n fig, axs = plt.subplots(1, len(rep_thetas), figsize=(15, 3), gridspec_kw={\"wspace\":0.4})\n \n for i, thetas in enumerate(rep_thetas):\n x = np.arange(len(thetas))\n axs[i].bar(x - width/2, thetas/(2*np.pi), width, label='Rep {}'.format(i))\n axs[i].hlines((0.2,-0.2), -2., 7., linestyles=\"dashed\")\n axs[i].hlines(0., -2., 7.)\n axs[i].set_xticks(x-0.25)\n axs[i].set_xticklabels([\"12\",\"13\",\"14\",\"23\",\"24\",\"34\"], fontsize = 15)\n axs[i].set_xlabel(\"$ij$\", fontsize = 15)\n \n axs[i].set_ylim(-plt_lim,plt_lim)\n axs[i].set_xlim(-.75, 5.75)\n axs[i].set_title(titles[i], fontsize = 15)\n \n axs[i].tick_params(labelsize=15)\n\n axs[0].set_ylabel(r\"$\\theta / 2\\pi$\", fontsize = 
15)\n plt.savefig(\"action_rep_entangled.png\", bbox_inches='tight')\n \n```\n\n## 5. Disentanglement\n\n***Some jargon***\n\nIt's nice that it works, but the real point here is to try and learn a *disentangled* representation of the actions.\n\nBefore considering how best to do this, we want to define a metric of 'disentanglement'. We consider the evolution of an observable (latent) vector, $x \\in X$ ($z \\in Z$), under the element $g \\in G$ of the group of symmetries generating transformations of the object. Then we are looking for a representation, $\\rho:G \\rightarrow GL(V)$, such that the transformation is linear in the latent space, i.e.\n\\begin{equation}\n z^{\\prime} = \\rho(g) \\cdot z.\n\\end{equation}\nNote, in our case, the representations are the rotation matrices we learn.\n\nFor this representation to be disentangled, it means that if there exists a subgroup decomposition of $G$\n\\begin{equation}\n G = G_1 \\times G_2 \\times \\dots \\times G_n,\n\\end{equation}\nthen we equivalently decompose the representation, $(\\rho, G)$, into subrepresentations:\n\\begin{equation}\n V = V_1 \\oplus V_2 \\oplus \\dots \\oplus V_n\n\\end{equation}\nsuch that the restricted subrepresentations $(\\rho_{\\vert G_i}, V_i)_i$ are non-trivial, and the restricted subrepresentations $(\\rho_{\\vert G_i}, V_j)_{j \\neq i}$ are trivial.\n\nIn our context, a GridWorld with 5 points in each dimension is represented by $G = C_5 \\times C_5$ (where $C_5$ is the cyclic group). This is a subgroup of $\\mathrm{SO}(2) \\times \\mathrm{SO}(2)$, therefore we hope to find the disentangled representation of the actions (up, down, left, right) that corresponds to this.\n\n***Some practicalities***\n\nOur intuition is that the disentangled representation acts as the identity on as many dimensions as possible. We could attempt to enforce this with some regularization during training. Normal weight decay won't cut it, as that tries to reduce all weights, where as what we really want to do is have all *but one* of our thetas (which corresponds to the rotation/coupling of two dimensions) to be zero.\n\n**1. 
Entanglement regularisation**\n\nSo for $m$ parameters, ${\\theta_1, \\dots, \\theta_m}$, we want to regularise with\n\\begin{equation}\n \\sum_{i \\neq j} \\vert\\theta_i\\vert^2, \\mathrm{where\\ } \\theta_j {=} \\mathrm{max_k}({\\vert\\theta_k\\vert}).\n\\end{equation}\nWe will also use this term as our metric of 'entanglement'.\n\n\n```python\ndef calc_entanglement(params):\n params = params.abs().pow(2)\n return params.sum() - params.max()\n\nparams = torch.FloatTensor([1,1,0.5,0,0])\ncalc_entanglement(params)\n```\n\n\n\n\n tensor(1.2500)\n\n\n\n### Training with regularization\n\n\n```python\ndim = 4\n\nobs_env = FlatWorld(env_parameters, period=PERIOD, radius=RADIUS)\nlat_env = LatentWorld(dim = dim,\n n_actions = obs_env.action_space.n)\ndecoder = Decoder(n_in = dim, n_hid = 64)\nencoder = Encoder(n_out = dim, n_hid = 64)\n\noptimizer_dec = optim.Adam(decoder.parameters(),\n lr=1e-2,\n weight_decay=0)\n\noptimizer_enc = optim.Adam(encoder.parameters(),\n lr=1e-2,\n weight_decay=0)\n\noptimizer_rep = optim.Adam(lat_env.get_representation_params(),\n lr=1e-2,\n weight_decay=0)\n\nlosses = []\nentanglement = []\n```\n\n\n```python\nn_sgd_steps = 3000\nep_steps = 5\nbatch_eps = 16\nentanglement_target = 0\n\ni = 0\n\nt_start = time.time()\n\ntemp = 0\n\nwhile i < n_sgd_steps:\n \n loss = torch.zeros(1)\n \n for _ in range(batch_eps):\n t_ep = -1\n while t_ep < ep_steps:\n if t_ep == -1:\n obs_x = obs_env.reset()\n obs_z = lat_env.reset(encoder(obs_x))\n else:\n action = obs_env.action_space.sample().item()\n obs_x = obs_env.step(action)\n obs_z = lat_env.step(action)\n \n t_ep += 1 \n \n obs_x_recon = decoder(obs_z)\n\n loss += F.binary_cross_entropy(obs_x_recon, obs_x)\n \n loss /= (ep_steps * batch_eps)\n raw_loss = loss.item()\n \n reg_loss = sum([calc_entanglement(r.thetas) for r in lat_env.action_reps])/4\n \n loss += (reg_loss-entanglement_target).abs() * 1e-2\n \n losses.append(raw_loss)\n entanglement.append(reg_loss.item())\n \n optimizer_dec.zero_grad()\n optimizer_enc.zero_grad()\n optimizer_rep.zero_grad()\n loss.backward()\n optimizer_enc.step()\n optimizer_dec.step()\n optimizer_rep.step()\n \n # Remember to clear the cached action representations after we update the parameters!\n lat_env.clear_representations()\n\n i+=1\n \n if i%10==0:\n print(\"iter {} : loss={:.3f} : entanglement={:.2e} : last 10 iters in {:.3f}s\".format(\n i, raw_loss, reg_loss.item(), time.time() - t_start\n ), end=\"\\r\" if i%100 else \"\\n\")\n t_start = time.time()\n```\n\n iter 100 : loss=0.196 : entanglement=1.28e-01 : last 10 iters in 2.261s\n iter 200 : loss=0.110 : entanglement=1.64e-01 : last 10 iters in 2.717s\n iter 300 : loss=0.079 : entanglement=1.37e-01 : last 10 iters in 2.428s\n iter 400 : loss=0.068 : entanglement=1.21e-01 : last 10 iters in 2.503s\n iter 500 : loss=0.058 : entanglement=9.97e-02 : last 10 iters in 2.390s\n iter 600 : loss=0.052 : entanglement=7.73e-02 : last 10 iters in 2.173s\n iter 700 : loss=0.047 : entanglement=6.18e-02 : last 10 iters in 2.457s\n iter 720 : loss=0.047 : entanglement=5.78e-02 : last 10 iters in 2.508s\r\n\n /home/william/Bureau/Python/gantime/rep-learning/paris/src/flatland/flat_game/sensors/proximity_sensor.py:42: RuntimeWarning: invalid value encountered in not_equal\n mask = resized_img != 0\n\n\n iter 800 : loss=0.045 : entanglement=4.67e-02 : last 10 iters in 2.678s\n iter 900 : loss=0.045 : entanglement=3.69e-02 : last 10 iters in 3.733s\n iter 1000 : loss=0.041 : entanglement=2.65e-02 : last 10 iters in 2.215s\n iter 1100 : loss=0.042 
: entanglement=2.10e-02 : last 10 iters in 2.264s\n iter 1200 : loss=0.040 : entanglement=1.57e-02 : last 10 iters in 2.423s\n iter 1300 : loss=0.040 : entanglement=1.22e-02 : last 10 iters in 2.254s\n iter 1400 : loss=0.039 : entanglement=8.43e-03 : last 10 iters in 3.703s\n iter 1500 : loss=0.040 : entanglement=7.17e-03 : last 10 iters in 3.049s\n iter 1600 : loss=0.039 : entanglement=5.31e-03 : last 10 iters in 2.675s\n iter 1700 : loss=0.037 : entanglement=4.04e-03 : last 10 iters in 2.599s\n iter 1800 : loss=0.039 : entanglement=3.47e-03 : last 10 iters in 2.330s\n iter 1900 : loss=0.038 : entanglement=2.71e-03 : last 10 iters in 2.874s\n iter 2000 : loss=0.038 : entanglement=2.13e-03 : last 10 iters in 2.481s\n iter 2100 : loss=0.042 : entanglement=1.38e-03 : last 10 iters in 2.716s\n iter 2200 : loss=0.037 : entanglement=1.36e-03 : last 10 iters in 2.523s\n iter 2300 : loss=0.036 : entanglement=9.73e-04 : last 10 iters in 3.021s\n iter 2400 : loss=0.036 : entanglement=8.62e-04 : last 10 iters in 2.309s\n iter 2500 : loss=0.036 : entanglement=5.97e-04 : last 10 iters in 2.594s\n iter 2600 : loss=0.037 : entanglement=5.08e-04 : last 10 iters in 2.333s\n iter 2700 : loss=0.036 : entanglement=5.65e-04 : last 10 iters in 2.319s\n iter 2800 : loss=0.037 : entanglement=6.81e-04 : last 10 iters in 2.877s\n iter 2900 : loss=0.037 : entanglement=2.65e-04 : last 10 iters in 2.333s\n iter 3000 : loss=0.035 : entanglement=1.87e-04 : last 10 iters in 2.304s\n\n\n### Testing: action representations\n\n\n```python\nwidth=0.5\n\nrep_thetas = [rep.thetas.detach().numpy() for rep in lat_env.action_reps]\n\nfor rep in lat_env.action_reps:\n print(rep.get_matrix())\n print(torch.matrix_power(rep.get_matrix(), 5))\n\nplt_lim = max( 0.12, max([max(t) for t in rep_thetas])/(2*np.pi) )\ntitles = [\"up\", \"down\", \"right\", \"left\"]\n\nwith plt.style.context('seaborn-paper', after_reset=True):\n\n fig, axs = plt.subplots(1, len(rep_thetas), figsize=(20, 3), gridspec_kw={\"wspace\":0.4})\n \n for i, thetas in enumerate(rep_thetas):\n x = np.arange(len(thetas))\n axs[i].bar(x - width/2, thetas/(2*np.pi), width, label='Rep {}'.format(i))\n axs[i].hlines((0.2,-0.2), -2., 6., linestyles=\"dashed\")\n axs[i].hlines(0., -2., 6.)\n axs[i].set_xticks(x-0.25)\n axs[i].set_xticklabels([\"12\",\"13\",\"14\",\"23\",\"24\",\"34\"], fontsize = 15)\n axs[i].set_xlabel(\"$ij$\", fontsize = 15)\n \n axs[i].set_ylim(-plt_lim,plt_lim)\n axs[i].set_xlim(-.75, 5.5)\n axs[i].set_title(titles[i], fontsize = 15)\n \n axs[i].tick_params(labelsize=15)\n\n axs[0].set_ylabel(r\"$\\theta / 2\\pi$\", fontsize = 15)\n plt.savefig(\"action_rep_entangled.png\", bbox_inches='tight')\n \n```\n\n**Show predictions made by trained network with disentangled representations**\n\n\n```python\ndef plot_state(obs, ax):\n ax.imshow(obs)\n ax.set_aspect('equal')\n ax.set_xticks([])\n ax.set_yticks([])\n \n return ax\n\nn_steps = 10\n\nfig, (ax1,ax2) = plt.subplots(1, 2)\n\nax1.set_title(\"Ground truth\")\nax2.set_title(\"Reconstruction\")\n\nfor i in range(n_steps+1):\n \n if i==0:\n action = \"N\\A\"\n obs_x = obs_env.reset()\n obs_z = lat_env.reset(encoder(obs_x))\n else:\n action = obs_env.action_space.sample().item()\n obs_x = obs_env.step(action)\n obs_z = lat_env.step(action)\n \n obs_x_recon = decoder(obs_z)\n \n fig.suptitle('step {} : last action = {}'.format(i, action), fontsize=16)\n \n plot_state(obs_x.detach().numpy(),ax1)\n plot_state(obs_x_recon.detach().numpy(),ax2)\n \n display.clear_output(wait=True)\n display.display(plt.gcf())\n 
time.sleep(0.5)\n \ndisplay.clear_output(wait=False)\n```\n\n**Show 2D projections of learned representations in latent space**\n\n\n```python\nfrom sklearn.random_projection import GaussianRandomProjection\n\nlatent_points = []\n\nfor start_position in obs_env.start_positions:\n obs = obs_env.reset(start_position=start_position)\n latent = encoder(obs)\n latent_points.append(latent.detach().tolist())\n\nlatent_map = np.array(latent_points)\n```\n\n\n```python\nperiod = obs_env.period\n\ncolor=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf',\"#2fa36b\",'#2f14fb','#ada854','#c353cc','#1c1392','#8eeeb0']\nmarks=[\"1\",\"2\",\"3\",\"4\",\"+\",\">\",\"<\",\"^\",\"v\",\"x\",\"d\",\"p\",\"P\",\"X\",'_','|']\npca = GaussianRandomProjection(n_components=2)\n\nlatent_2d = pca.fit_transform(latent_map)\n\nfig = plt.figure(figsize=(6,4))\nax = fig.add_subplot(111)#, projection='3d')\ns=[120]*5+[50]*6\nfor i in range (period**2):\n ax.scatter(x=latent_2d.transpose()[0][i],\n y=latent_2d.transpose()[1][i],\n # zs=latent_2d.transpose()[2][i],\n c=color[i//period],\n #s=s[i%period],\n marker=marks[i%period])\n # ax.set_xlim(-.6/1.4,.6/1.4)\n # ax.set_ylim(-.8/1.4,.8/1.4)\n # ax.set_zlim(-1./1.6,1./1.6)\n #ax.view_init(elev=45, azim=45)\nplt.title('Representations - Our method',fontsize=16)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14)\nfig.show()\nplt.savefig(\"latent_flatland.png\", bbox_inches='tight')\n```\n", "meta": {"hexsha": "76a2921e4741f2b1fae2b8bd7a9cd242851f7651", "size": 114092, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "fig3_flatland.ipynb", "max_stars_repo_name": "luis-armando-perez-rey/learning-group-structure", "max_stars_repo_head_hexsha": "e238308de73a29506d9281e1b55cdd2de2795ebb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2020-02-16T10:34:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-20T00:27:19.000Z", "max_issues_repo_path": "fig3_flatland.ipynb", "max_issues_repo_name": "luis-armando-perez-rey/learning-group-structure", "max_issues_repo_head_hexsha": "e238308de73a29506d9281e1b55cdd2de2795ebb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2021-06-08T22:32:50.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:49:42.000Z", "max_forks_repo_path": "fig3_flatland.ipynb", "max_forks_repo_name": "luis-armando-perez-rey/learning-group-structure", "max_forks_repo_head_hexsha": "e238308de73a29506d9281e1b55cdd2de2795ebb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-04-03T08:24:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-16T02:02:10.000Z", "avg_line_length": 87.830638953, "max_line_length": 26600, "alphanum_fraction": 0.7791606773, "converted": true, "num_tokens": 8452, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5698526514141572, "lm_q2_score": 0.523420348936324, "lm_q1q2_score": 0.2982724736454876}}
{"text": "```python\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = (12, 9)\nplt.rcParams[\"font.size\"] = 18\n```\n\n# Reactor Kinetics\n\nReactor kinetics focuses on the relationship between criticality, power, and time. Delayed neutrons and reactor control are at the heart of reactor kinetics. As changing conditions in the core impact neutron multiplication, control rods, burnable poisons, and chemical shim may be introduced to keep $k_{eff}$ near 1. \n\nTime-dependent fluctuations in neutron population, fluid flow, and heat transfer are essential to understanding the performance and safety of a reactor. Such transients include normal reactor startup and shutdown as well as abnormal scenarios including Beyond Design Basis Events (BDBEs) such as Accident Transients Without Scram (ATWS). \n\n## Learning Objectives\n\nAt the end of this lesson, you will be equipped to:\n\n- State the relationship between criticality and reactivity\n- State the Point Reactor Kinetics Equations\n- Describe temperature feedback of reactivity\n- Apply the Point Reactor Kinetics Equations\n\n## Fission and Delayed Neutrons\n\n\n\n\n$$\\sigma(E,\\vec{r},\\hat{\\Omega},T,t,x,i)$$\n \n
_(Figures omitted. Labels: $k_{eff}$, precursor group index $i$, delayed neutron fraction $\beta_i$.)_
\n\n \\begin{align}\n\t\t k &= \\mbox{\"neutron multiplication factor\"}\\\\\n\t\t &= \\frac{\\mbox{neutrons causing fission}}{\\mbox{neutrons produced by fission}}\\\\\n\t\t \\rho &= \\frac{k-1}{k}\\\\\n\t\t \\rho &= \\mbox{reactivity}\\\\\n\\end{align}\n\n\n\n\\begin{align} \n\\rho(t) = \\rho_0 + \\rho_f(t) + \\rho_{ext}\n\\end{align}\n\nwhere\n \n\\begin{align}\n \\rho(t) &= \\mbox{total reactivity}\\\\\n \\rho_f(t) &= \\mbox{reactivity from feedback}\\\\\n \\rho_{ext}(t) &= \\mbox{external reactivity insertion}\\\\\n \\rho_f(t) &= \\sum_i \\alpha_i\\delta T_i\\\\\n T_i &= \\mbox{temperature of component i}\\\\\n \\alpha_i &= \\mbox{temperature reactivity coefficient of i}.\n\\end{align}\n\n## Point Reactor Kinetics Equations\n\nSimplifying assumptions: \n\n- The reactor is a point\n- Assume all delayed neutron precursors have the same decay constant, $\\lambda$.\n- Let C(t) be the total number of delayed neutron precursors at time t.\n\nIn each neutron cycle, $k_{eff}n(t)$ is the number of new neutrons eventually produced, and is a combination of both prompt and delayed neutrons.\n\n\n\\begin{align}\n(1-\\beta)k_{eff}n(t) &= \\mbox{ prompt neutrons at the end of cycle}\\\\\n\\beta k_{eff}n(t) &= \\mbox{ delayed neutron precursors produced in the cycle}\\\\\n\\mathscr{l}' &= \\mbox{ cycle length of each cycle}\\\\\n\\frac{\\beta k_{eff}n(t)}{\\mathscr{l}'} &= \\mbox{ delayed neutron precursors per unit time}\\\\\n\\lambda C(t) &= \\mbox{ rate of decay by precursors}\n\\end{align}\n\nThus, the net rate of increase in the number of precursors is :\n\n\\begin{align}\n\\frac{dC(t)}{dt} &= \\frac{\\beta}{\\mathscr{l}'}n(t) - \\lambda C(t)\\\\\n\\end{align}\n\n### In each cycle:\n\n\\begin{align}\nn(t) &= \\mbox{ neutrons disappear}\\\\\n(1-\\beta)k_{eff}n(t) &= \\mbox{ prompt neutrons are produced by fission}\\\\\n\\frac{\\left[(1-\\beta)k_{eff} - 1\\right]n(t)}{\\mathscr{l}'} &=\\mbox{ net rate of increase by prompt neutrons}\\\\\n\\lambda C(t) &= \\mbox{ rate of neutron production by delayed neutron precursors}\\\\\nS(t) &= \\mbox{ rate of neutron production by non-fission sources}\n\\end{align}\n\nThus, the rate of increase in neutron population is the sum of production mechanisms:\n\n\n\\begin{align}\n\\frac{dn(t)}{dt} &= \\frac{1}{\\mathscr{l}'}\\left[(1-\\beta)k_{eff} - 1\\right]n(t) + \\lambda C(t) + S(t)\\\\\n &= \\frac{k_{eff}}{\\mathscr{l}'}\\left[\\frac{k_{eff} -1}{k_{eff}} - \\beta \\right]n(t) + \\lambda C(t) + S(t)\\\\\n\\end{align}\n\nWe can define the effective neutron lifetime as $\\Lambda = \\mathscr{l} = \\frac{\\mathscr{l}'}{k_{eff}}$ to simplify this equation:\n\n\n\\begin{align}\n\\frac{dn(t)}{dt} &= \\frac{\\rho - \\beta}{\\mathscr{l}}n(t) + \\lambda C(t) + S(t)\\\\\n\\frac{dC(t)}{dt} &= \\frac{\\beta}{\\mathscr{l}}n(t) - \\lambda C(t) \\\\\n\\end{align}\n\nThis can be solved by assuming a solution of the form:\n\n\\begin{align}\nn(t) = \\phi(t) = Ae^{\\omega t}\\\\\nC(t) = C_0 e^{\\omega t}\n\\end{align}\n\n\n### Multiple Delayed Neutron Precursor Groups\n\nIn reality, the delayed neutron precursors have very different halflives. 
As there are dozens of delayed neutron precursors, a typical strategy is to group these precurors into a small number of groups with similar halflives, as in the table below.\n\n\n|Group\t| Half-Life (s)\t| Decay Constant (s\u22121)\t| Energy (keV) |\tYield\t| Fraction |\n|-------------|-------------|-------------|-------------|-------------|-------------|\n|1 |\t55.72 |\t0.0124 |\t250 |\t0.00052 |\t0.000215 |\n|2 |\t22.72 |\t0.0305 |\t560 |\t0.00346 |\t0.001424 |\n|3 |\t6.22 |\t0.111 |\t405 |\t0.00310 |\t0.001274 |\n|4 |\t2.30 |\t0.301 |\t450 |\t0.00624 |\t0.002568 |\n|5 |\t0.610 |\t1.14 |\t- |\t0.00182 |\t0.000748 |\n|6 |\t0.230 |\t3.01 |\t- |\t0.00066 |\t0.000273 |\n\n\n\n\\begin{align}\n\\frac{dn(t)}{dt} &= \\frac{\\rho - \\beta}{\\mathscr{l}}n(t) + \\sum_{i=1}^G\\lambda_iC_i(t) + S(t)\\\\\n\\frac{dC_i(t)}{dt} &= \\frac{\\beta_i}{\\mathscr{l}}n(t) - \\lambda_iC_i(t)\\\\\n\\beta &= \\sum_i^G \\beta_i\\\\\ni&= 1,...,G\\\\\n\\end{align}\n\n## Feedback\n\nPutting all of this together, the point reactor kinetics equations, with feedback, result in a \"stiff\" set of PDEs:\n\n\\begin{align}\n\\frac{d}{dt}\\left[\n \\begin{array}{c}\n p\\\\\n \\zeta_1\\\\\n .\\\\\n \\zeta_j\\\\\n .\\\\\n \\zeta_J\\\\\n \\omega_1\\\\\n .\\\\\n \\omega_k\\\\\n .\\\\\n \\omega_K\\\\\n T_{i}\\\\\n .\\\\\n T_{I}\\\\\n \\end{array}\n \\right]\n =\n \\left[\n \\begin{array}{ c }\n \\frac{\\rho(t,T_{i},\\cdots)-\\beta}{\\Lambda}p +\n \\displaystyle\\sum^{j=J}_{j=1}\\lambda_{d,j}\\zeta_j\\\\\n \\frac{\\beta_1}{\\Lambda} p - \\lambda_{d,1}\\zeta_1\\\\\n .\\\\\n \\frac{\\beta_j}{\\Lambda}p-\\lambda_{d,j}\\zeta_j\\\\\n .\\\\\n \\frac{\\beta_J}{\\Lambda}p-\\lambda_{d,J}\\zeta_J\\\\\n \\kappa_1p - \\lambda_{FP,1}\\omega_1\\\\\n .\\\\\n \\kappa_kp - \\lambda_{FP,k}\\omega_k\\\\\n .\\\\\n \\kappa_{k p} - \\lambda_{FP,k}\\omega_{k}\\\\\n f_{i}(p, C_{p,i}, T_{i}, \\cdots)\\\\\n .\\\\\n f_{I}(p, C_{p,I}, T_{I}, \\cdots)\\\\\n \\end{array}\n \\right]\n \\end{align}\n\\begin{align}\n p &= \\mbox{ reactor power }\\\\\n \\rho(t,&T_{fuel},T_{cool},T_{mod}, T_{refl}) = \\mbox{ reactivity}\\\\\n \\beta &= \\mbox{ fraction of neutrons that are delayed}\\\\\n \\beta_j &= \\mbox{ fraction of delayed neutrons from precursor group j}\\\\\n \\zeta_j &= \\mbox{ concentration of precursors of group j}\\\\\n \\lambda_{d,j} &= \\mbox{ decay constant of precursor group j}\\\\\n \\Lambda &= \\mbox{ mean generation time }\\\\\n \\omega_k &= \\mbox{ decay heat from FP group k}\\\\\n \\kappa_k &= \\mbox{ heat per fission for decay FP group k}\\\\\n \\lambda_{FP,k} &= \\mbox{ decay constant for decay FP group k}\\\\\n T_i &= \\mbox{ temperature of component i}\n\\end{align}\n\n## Units of Reactivity\n\n### Delayed neutron fraction\n\nIn all of this, recall that the **delayed neutron fraction** is thus:\n\n\\begin{align}\n\\beta &= \\mbox{Delayed neutron fraction}\\\\\n&=\\frac{\\mbox{precursor atoms}}{\\mbox{prompt neutrons }+\\mbox{ precursor atoms}}\\\\\n&= \\frac{\\mbox{delayed neutrons}}{\\mbox{prompt neutrons }+\\mbox{ delayed neutrons}}\n\\end{align}\n\nThe delayed neutron fraction **$\\beta$ varies by fission isotope.**\n- For a reactor where all fissions are from $^{235}U$, this fraction is $\\beta_{235U} = 0.0064$.\n- In $^{238}U$, the fraction is lower, $\\beta_{238U} = 0.00157$.\n- And $^{239}Pu$ generates even fewer delayed neutrons per fission $\\beta_{239Pu} = 0.002$.\n\n\n#### Think-pair-share\n**Can you think of a physical reason that $\\beta$ should vary by fission isotope?**\n\n\nThe **effective delayed neutron fraction** 
($\\beta_{eff}$) varies dramatically by reactor because:\n\n- in most reactors, not all fissions are from $^{235}U$\n- breeding and burning occurs over time, $\\beta$ changes with burnup \n\n\n### Dollar\n\nWe define one dollar of reactivity in a particular reactor as $\\frac{\\rho}{\\beta}$.\n\n### Cent\n\nA cent is $\\frac{1}{100^{th}}$ of a dollar of reactivity, so it's $\\frac{\\rho}{100\\beta}$.\n\n### Per Cent Mille (pcm)\n\nA cent is $\\frac{1}{100^{th}}$ and one mille is $\\frac{1}{1000^{th}}$ of a dollar.\nThus, one per cent mille (pcm) of a dollar is $10^{-5}\\frac{\\rho}{\\beta}$.\n\n## The Delayed Neutron Fraction and Criticality\n\n### Subcriticality \n$k<1$ is a subcritical reactor. This is immediate and all subcriticality is effectively prompt, though delayed neutrons have a slight impact.\n\n### Supercriticality\n$k>1$ is a supercritical reactor.\n\n### Prompt\nIf the reactor is supercritical on prompt neutrons alone, then it is *prompt supercritical*. This is **bad** because control rods take more than $10^{-14}s$ to drop. Prompt supercriticality occurs when reactivity is equal to one dollar :\n\n\\begin{align}\n\\rho \\ge \\beta_{eff}\n\\end{align}\n\n### Delayed\n\nIf the reactor is supercritical only if delayed neutrons are included, then it is just supercritical, or *delayed supercritical*. Delayed, controllable supercriticality occurs when reactivity is positive but below one dollar :\n\n\\begin{align}\n0 < \\rho < \\beta_{eff}\n\\end{align}\n\n## Example\n\nLet's imagine a reactor with $\\beta_{eff} = 0.006$\n\\begin{align}\nk_{eff} = 0.99 \\\\\n\\rho &= \\frac{k_{eff} - 1}{k_{eff}} \\\\\n&= -0.01\n\\end{align}\n\nThis reactivity, in units of dollars, for this reactor, is:\n\\begin{align}\n\\frac{\\rho}{\\beta} &= \\frac{-0.01}{0.006}\\\\\n&= -1.67 [$]\\\\\n&= -167 [cents]\\\\\n\\end{align}\n\nThe same reactivity in a reactor with $\\beta_{eff} = 0.005$: \n\n\\begin{align}\n\\frac{\\rho}{\\beta} &= \\frac{-0.01}{0.005}\\\\\n&= -2.00 [$]\\\\\n&= -200 [cents]\\\\\n\\end{align}\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "aa34086b254cb050f6087fdf845f0a4475f76dd3", "size": 16589, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "reactor-physics/reactor-kinetics.ipynb", "max_stars_repo_name": "katyhuff/npr247", "max_stars_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-12-17T06:07:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T17:14:51.000Z", "max_issues_repo_path": "reactor-physics/reactor-kinetics.ipynb", "max_issues_repo_name": "katyhuff/npr247", "max_issues_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-08-29T17:27:24.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-29T17:46:50.000Z", "max_forks_repo_path": "reactor-physics/reactor-kinetics.ipynb", "max_forks_repo_name": "katyhuff/npr247", "max_forks_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-08-25T20:00:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-14T03:05:26.000Z", "avg_line_length": 38.850117096, "max_line_length": 347, "alphanum_fraction": 0.5049128941, "converted": true, "num_tokens": 3265, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5078118791767282, "lm_q2_score": 0.5851011542032312, "lm_q1q2_score": 0.29712131662441543}}
{"text": "# Extended Rant Version\n\n> This version needs significant refactoring. It is fairly complete tough. \n\n## What happened\n\nI do not normally write blog posts.\n\nToday is not a normal day tough. \n\nI believe that I have found a missing link in my understanding of bayesian modelling theory.\n\nThis is not so much about the bayes belief update equation. It's good. The problem is that the notation is confusing, making it difficult to distinguish and source(compute, obtain) the various terms in the equation, in connection to the real-world phenomena. And then, when solving a real world problem, this is not even where you put most of the time-effort in. \n\nYou might think that bayes update rule:\n\n$p( A | B) = \\frac{p(B|A)p(A)}{p(B)}$\n\nis all that it takes.\n\nThis form is nice for proving that the bayes' theorem is correct. \n\nHowever, what do these terms mean? How does one apply this? \n\n\n\n\n\n\nYou might have heard that $A$ is a model parameter, and $B$ is observation. And that you can derive the model parameter $A$ from observations. \n\nStill, it is not at all obvious to someone that tries to figure it out for himself. Sources of confusion include:\n\n* `probability` and `belief` are not the same thing, yet they are denoted with the same letter\n* $p(B|A)$ does not need to sum to 1.0 while $p(A|B)$ does ? \n* values of $p(B|A)$ need to be between 0 and 1, while values of $p(A|B)$ do not ? \n* The name and function body for $p(A)$ and $p(B)$ is **completely** different, even tough they use the same symbols?\n* Some symbols evaluate to scalars, and others evaluate to vector? then, some places that used to be scalar can be vectors sometimes?\n* Data (observations) seems to have the same type as probability (`float`) , even tough the data can be `nominal`, like `T`/`H`, distribution, or vector what then?\n* `Discrete events`, `datapoints`, `sets` and `distributions` are very different things, yet they are denoted by the same symbol and *location*\n* `latent model parameters` versus `observable data` -- and where they come from, and where they go into? these are conceptually very different, yet they are denoted by location in a composite symbol as if they were interchangeable. Granted, the interchangeability is proven and even exploited in deriving the Bayes update rule: $p(A|B) p(B) = p(B|A)p(A)$ for any meaning of $A$ and $B$. However, this neat algebraic trick does no favours to the conceptual understanding and practical applications of this equation. \n* Most books go straight to continuous distributions and multidimensional variables. These are beautiful exhibits of maths' notations' brevity and generalism, but also are absolutely redundant. Discrete events are fine. Scalars are fine. The additional hoop created by going straight to things like fractional dimensions is a serious barrier for the children AND their teachers.\n* `class of function`, `instance of function`, `function name`, `function body` and `value of function` are all separate concepts in computer programming, but they are used interchangeably in mathematics books. \n* The update equation looks simple, but all practical problems have more than 2 dimensions that we are interested in exploring. Even for the most simple model $m$, we will want a (1) distribution of posterior beliefs for (2) possible $A$. That's already a 2D plot. but then, the $A$ can be 1,2,n dimensional, we can have different observation scenarios or multidimensional observations, and a variety of prior beliefs to worry about. 
And then, there is the variety of possible $models$. How do you visualize that? \n* It seems that, in paper-and-pencil maths, symbols are often aliased and shortened to save on hand movements. Modern practical computer science has long demonstrated that such savings are counter-productive. \n\nAnd the most confusing of all -- how does one build the `likelihood function`? Where does it come from? This is not specified in the bayes update equation at all, yet when doing exercises, one is expected to just magically come up with a correct one. \n\nFor the longest time I thought that this is something that needs to be derived from the Bayes' equation itself. This is how you solve all the other problems in the school, right? The teacher told you that $F=m*a$ so that's all that you should need. . . . actually, no. Bayes' theory tells you a true nothing, zero, nil, about the model $M$ of the world. This you have to invent separately. Only after you are done inventing, Bayes can tell you something about how good your model $M$ is. This is a true stunner for a high-school children who, up to this moment, were lead to believe that there exists only one correct solution to all problems. \n\nAmazingly, despite all the talk about probabilities, coin tossing and chances of getting cured by a new medicine -- the likelihood function, and the value of the likelihood function, as well as prior and posterior beliefs, have no random chance in them at all. In bayesian thinking, some things are hidden, but no things are random! \n\nThe only place where randomness is allowed is the plant. This randomness can be modelled in the model. \n\n--- \n\nOK, so ranting over, let me try to clarify some of these things.\n\n> Warning: Over the course of this section, I will rewrite the classic equations a couple of times. Do not call me out on the blatant fact that I use different symbols for the same thing across the first part of this article. This is necessary to make my point.\n\n**1. Reality check.** \n\nTo ground the concepts in some tangible reality, let's consider that we have a **plant**. \n\nPlant is just a name for some object, or process, that has inputs and outputs. It does not need to be a manufacturing plant, or like a greenhouse plant. All that it does is that it exists, has inputs, and outputs:\n\n\n>>> Drawing here <<<\n\nFor now, we will only be concerned with the plant's outputs.\n\nThis plant produces data $\\{ D \\}$, which we can observe. The process of producing instances of $D$ can be approximated and described by a $model$ $M$. The model $M$ describes the plant somehow, but at this point we should be clear that there can be more than one good model for that plant (even tough most models will be useless). \n\nLet's suppose that we have a model $M$ of the plant, that takes a hidden(latent) parameter $\\phi$. It could be then said that the data $D = M(\\phi)$. \n\nWe'd like to know what the true $\\phi$ is, but we cannot be sure. We cannot measure $\\phi$ directly. The $M$ can have a probabilistic nature to it, in sense that depending on $\\phi$ it can produce given $D$ more or less often. \n\nStill, we are allowed to have suspicions and beliefs about what the true $\\phi$ is. We can observe the events $D$ that happen with $M$'s and $\\phi$'s contribution.\n\n**2. 
Use the reasoning to figure out $\\phi$**\n\n\nFor a start, let's replace $A$ and $B$ with model parameter $\\phi$ and data(observation) $D$:\n\n$$p( \\phi | D) = \\frac{p(D|\\phi)p(\\phi)}{p(D)}$$\n\nNext, let's rewrite the bayes' update equation *slightly* differently (this is not the end of the re-writing):\n\n$$p( \\phi | D) = \\frac{p(D|\\phi)}{p(D)}\\times p(\\phi)$$\n\nCompared to the original form, this already gives you a better hint: you can get posterior belief, $p( \\phi | D)$, from the prior $p(\\phi)$, modified by an \"updater term\", $\\frac{p(D|\\phi)}{p(D)}$.\n\nOk, so what about this updater term seems to be so difficult?\n\nWait. Let me clarify the concepts one by one. And there is quite a bit to untangle, before the road becomes straightforward for us.\n\n\n**Untangle the concepts of Belief, Probability, and Sample**\n\n\nLet's take in a new concept: that `belief` is not the same as `probability`, and both are distinct from `sample`.\n\nHere are some hints as to how to separate the two:\n\n* `Belief` is something that you hold in your head. `Probability` is a property of the world. Since there exists an impenetrable epistemological barrier between your \"inner\" and \"outer\" world, these things are already distinct.\n* Things do not happen because of `belief`, nor because of their `probability`. \n* Things happen inside the `plant` according to the plant's model and parameters. The plant's model is $M$ and it's parameter is $\\phi$. We can observe data $D=f(M, \\phi)$.\n* `Belief` is something that we can change ourselves. `Probability` is not something that we can change.\n* `Probability` we can estimate from frequency of observation. It follows that for things that are never observed, we cannot talk about their probability. \n* `Belief` we can assume, virtually out of nothing. We can then either hold oto that belief, or update it. We can update it, for example, arbitrarily(without any reasoning), or using some rules. For example, using Bayes' update function (maximum amount of reason).\n* Estimate of the value of `probability` can be a function of assumptions, including `beliefs`. Updates of `belief` can be a function of `data`. \n* `Probability` and `belief` seem to have the same unit, \"fraction of $\\Omega$\" -- where $\\Omega$ is \"all that there is, all possibilities\". Maybe this is the reason why historically they both have been noted as $p()$. This is a poor reason. I propose that we give them a units. \n* I propose that, to make it easier to keep a distinction between `probability` and `belief`, they carry distinct units.\n\nHere, I propose the following units:\n\nI will be using unit of `Rey`, or R for belief, that certain statement is true.\n\nI will be using the symbol $\\Omega$ for probability on that a certain parameter, event, or statement, or data point $E$ will happen in the future, or is happening while we do not see it happening. Note again, that this is very different from having a completely certain data set that shows that \"$E$ has happened 70% of the time\". \n\nWhile we are at it, we can also add an unit for that observed dataset, $\\{ D \\}$. The reason for a separate unit here is that the set of all possibilities, $\\Omega$ is infinite, while the dataset $\\{ D \\}$ is a finite **sample** from $\\Omega$. Hence, it is **not true** that if we have a dataset that shows \"$D$ has, so far, happened 70% of the time\" is equal to \"$D$ is 70% of $\\Omega$\". 
Instead, let's define a new unit, [$S$] to describe the prevalence of $E$ in dataset $\\{ D \\}$.\n\nTo summarize:\n\nGiven that, for us, $E$ could mean either $\\phi$ or $D$, we have:\n\n0.7[R]=700[mR] means \"My belief is such that I am 70% sure that a specific value of $\\phi$ is true. I leave the 30% to beliefs that some other value of $\\phi$ can be true.\"\n\n0.8[$\\Omega$]=800[m$\\Omega$] means \"In this chance process, If I sample forever, I will get $D$ 80% of the time. I will get something else 30% of the time\".\n\n0.9[$S$]=900[m$S$] means \"In this dataset, which is a sample of $\\Omega$, event $D$ occurred 90% of the time\".\n\nAgain, although [R], [$\\Omega$], and [$S$] seem to be mathematically interchangeable and can be expressed in percent, or fraction of a whole -- semantically they are distinct, as they refer to different concepts.\n\n\n\nHaving these insights, I propose a following notation.\n\n## The stage star $b(\\phi)$\n\nLet's use function symbols, $f(something)$ : \n\n* $b$ for belief, both prior and posterior,\n* $l$ is for likelihood, \n* $m$ for marginal probability. \n\nLet's use the variable symbols, $something$: \n* $\\phi$ for a certain model parameter that belongs to a set of considered model parameters $\\{ \\phi \\}$ \n* $D$ for a datapoint that already occurred, and we have it in a dataset $\\{ D \\}$; the dataset $\\{ D \\}$ has been sampled from a population $\\Omega$\n\nIt follows that, \n\nThe **new belief** about parameter $\\phi$, that takes into account the **new information** from observing a datapoint $D$, is noted as $b(\\phi|D)$, \n\nAnd that $b(\\phi|D)$ **can be calculated** as coming from old(prior) belief about $\\phi$, noted as $b(\\phi)$ \n\nAnd that this calculation involves evaluating a Bayesian modifier term, $\\frac{l(D|\\phi)}{m(D)}$:\n\n$$b(\\phi|D) \\leftarrow \\frac{l(D|\\phi)}{m(D)} \\times b(\\phi)$$\n\n\n## The ugly duck $m(D)$\n\nThe marginal probability $m(D)$ is also sometimes called \"evidence\". It is calculated by summing (or integrating) the likelihood over all considered values of $\\phi$, weighted by the prior belief in $\\phi$. \n\n$$m(D) = \\sum_{\\phi}{l(D|\\phi)b(\\phi)}$$\n\nBut wait!!! In the above equation, the $\\phi$ is not the same $\\phi$ as in the previous equation! If you didn't know that, I do not blame you. \n\nInstead, here $\\phi$ is to say \"for all $\\phi$s\". Because of this, one is not allowed to substitute this expression for $m(D)$ into the bayes' update equation from the previous section -- at least, not using regular algebraic rules as learned in high school : that would create a notation conflict!\n\nIn my school times, changing the meaning of notation between different classes was the single most confusing obstacle to quick assimilation of new concepts. If I learned something in the chemistry class, I had to forget about it before going to the Physics class, or else I was in for trouble!\n\nBe advised that for a child's mind, like for the early computers, all symbols and variables are \"global\". Have you seen that viral video where a 3-year old girl says that a black man ate the cookies? She did not learn to remap the concepts for political correctness when being filmed. She merely rehashed the concepts she heard from the people that surround her. \n\nFor adults, switching the context is possible, but still taxing. \n\nChanging the meaning of notation in a middle of doing a school problem is a good recipe for catastrophe. 
This is in no small part due to that the children are plain not afforded the time and slowdowns that it takes to create correct concept remapping in their heads. They are expected to solve a problem in under 5 minutes, or fail. You tell me what happens to most of us.\n\nLet's make things easier to learn, by using a clearer, non ambiguous notation.\n\nIn order to note that the $\\phi$ in the following equation has a \"set of $\\phi$\" meaning, rather than just \"single scalar $\\phi$\" meaning, let's note it as $\\{\\phi\\}$. The symbols $\\{, \\}$ are used in secondary schools to denote closed sets. That hints the student that he must consider the entire set of $\\phi$s and not just a single $\\phi$. \n\nMoreover, we can prepare the same person for using list comprehension semantic from Python (and hence, make Python easier for this person), by using notation like \"for $\\alpha$ in $\\{\\phi\\}$\" or \"for $\\alpha \\in \\left<\\phi\\right>$\":\n\n$$m(D) = \\sum_{\\alpha \\ \\in \\ \\{\\phi\\}}{l(D|\\alpha)b(\\alpha)}$$\n\nNow we are safe to write that\n\n$$b(\\phi|D) \\leftarrow b(\\phi) \\times \\frac{l(D|\\phi)}{ \\sum_{\\alpha \\ \\in \\ \\{\\phi\\}}{(l(D|\\alpha)b(\\alpha))} }$$\n\n\nThe veterans will notice how now, no symbol aliasing occurs. Every symbol has unique semantic meaning.\n\nMoreover, we can be much more clear about where the specific numbers needed are to come from. Only now, having this unambiguous expression, one is equipped to attempt to solve practical problems.\n\n\n\n> Warning: In the following part of the article, I will not be changing the notation any more, so feel free to call me out on any errors.\n\n---\n\n## Prior $b(\\phi)$\n\n$b(\\phi)$ is the initial, or prior **belief** about one of the possible $\\phi$s. \n\n$b(\\{\\phi\\})$ is the initial, or prior **belief** about all of the possible values for $\\phi$s, and is a vector, or a set: that is, there is a new value $b(...)$ for each $\\phi$ from the set of $\\{ \\phi \\}$s\n\nValues for $b(\\{\\phi\\})$ come truly from outside of the dataset and model. In other words, one has to start with some beliefs, something based on external knowledge of the problem at hand. \n\nAt this point, most books go into depths about the challenges of holding an \"informative\" or \"uninformative\" prior, and give (often unclear) experimental examples on how do they affect the result.\n\nHere I hope to make the examples clearer, by the virtue of using the unambiguous notation developed in the previous section.\n\nFor example, we can have, for $\\{\\phi\\}$ = [0.1, 0.5, 0.9], $b(\\{\\phi\\})$ = [0.1, 0.8, 0.1]. This means that our belief is that $\\phi$ is most likely 0.5, but we also allow for a total doubt of 20% that it can give way to $\\phi$ being 0.1 or 0.9. \n\nImportantly, this distribution does sum up to 1, that is $\\sum_{\\alpha \\ \\in \\ \\{\\phi\\}}b(\\alpha) \\ \\ = 1$\n\nOne can start either with an \"uninformative\" prior or an \"informative\" prior. \"uninformative\" priors are fine if there is enough experimental data to come by; however, if the data is scarce, and there is something that we already know about the problem (e.g., someone told us, or we made a related but not identical experiment), then would be a mistake not to incorporate it into our thought process.\n\n---\n\n## Posterior $b(\\phi | D)$\n\n$b(\\phi | D)$ is the posterior **belief** after update. \n\nFor simple problems, we often talk not about different $D$s, but rather just one discrete $D$. 
Obviously, data sets $\\{ D\\}$ will also enter the fray as we become more proficient.\n\nIt can be important to note that there is a difference between:\n* $b(\\phi | D)$ -> Scalar\n* $b(\\{\\phi\\} | D)$ -> vector\n* $b(phi | \\{ D\\})$ -> vector\n\nAs all of these involve a vector computation or a loop of some kind. \n\nHaving that, it is even more clear to see that $b(\\{ \\phi \\} | \\{ D\\})$ denotes and evaluates to a 2D array of numbers.\n\n\nAlternatively, we could talk about updating our belief for one specific $\\phi$ depending on what $D$ we get. In other words, building a closed-form function. Although this view can be useful, this is not what most examples in the literature are about. $D$ is most often a given constant.\n\n$b(\\{\\phi\\} | D)$ over a range of $\\phi$, evaluates to a 1D numeric vector, and this is what you typically want to get to: a description of the final belief about what the latent $\\phi$ could be. In other words, an indication of \"what should you believe about the different possible $\\phi$\". This distribution does sum up to 1.\n\nFor example, let's say, that after having our prior beliefs $b(\\{\\phi\\})$ = [0.1, 0.8, 0.1] of what the latent value of $\\phi$ could be, we observe new data point, $D$. Having this new data point, we update (it doesn't yet matter how) our belief for values of $\\{\\phi\\}$=[0.1, 0.5, 0.9] to be new $b(\\{\\phi\\} | D)$=[0.45, 0.45, 0.1]. Meaning, that we still believe that $\\phi$ value of 0.9 is incredible, but now give equal credibility to values $\\phi$=0.1 and $\\phi$=0.5.\n\nTo summarize, using the $\"|D\"$ part of the notation is there to symbolize that this is about the updated belief, given the data point $D$. Not that we have a sweep of datapoints, and not that we distribute our belief across different $\\phi$. \n\n---\n\n## The Likelihood function $l(D|\\phi)$\n\n$l(D|\\phi)$ is the \"likelihood function\". \n\n\"Likelihood\" itself, is a word that typically doesn't really tell you much, because the meaning used here is quite different and distinct from the common-language synonyms of \"likelihood\". Here, by \"likelihood\" we do not mean any of \"frequency\", \"chance\", \"odds\", \"feasibility\", \"plausibility\" e.t.c. We mean something quite specific here.\n\nLet's attach concrete meaning to the word \"likelihood\".\n\nHere, \"likelihood\" it means \"the function, and the values of probabilities that single point(or single batch) of data $D$ has been generated by the $model$ $M$ with a given parameter $\\phi$\".\n\nLikelihood values carry the unit of probability, $[\\Omega]$\n\n\n\nNotably, \n\n* A value array $l(D|\\{\\phi\\})$ does not need to sum to 1.0 as in the case of belief distribution. \n* the individual values of $l(D|\\phi)$ for any discrete $\\phi$ must still be in the range of (0, 1), as it should be with probability.\n\n\nFor a contrived example, we can have $l(D|\\{ \\phi \\})$ = [1.0, 0.1, 0.5] for $\\{ \\phi \\}$ = [0.1, 0.5, 0.9] and data $D$ = [0]. \n\nThis means that: \n* The data value \"0\" is reliably always generated by the $model$ $M$ when the $model$'s $\\phi$==0.1. \n* if the $\\phi$==1, then this data would only be generated rarely, approx. 10% of cases. 
\n* if the $model$'s $\\phi$=0.9, then the data value \"0\" is still generated approximately half of the time.\n\nWhen sampling infinitely from the model $M$ having the parameter $\\phi$, we cannot get a $D$ population fraction bigger than 1.0$[\\Omega]$ or smaller than 0.0$[\\Omega]$.\n\nHowever, the sum of all $l(D|\\{ \\phi \\})$, or $\\sum_{\\alpha \\in \\phi}{p(D|\\alpha)}$ can be more or less than 1.00 \n\nMoreover, if we observe a single data point $D$=[0], then we still cannot be sure if the true value of model's $\\phi$ was 0.1, 0.5, or 0.9. All possible values of $\\phi$ are consistent with getting $D$=[0] sometimes.\n\nNote that I have used the word `probability that` rather than `belief that`. This is because for likelihood, there is no guessing of any kind involved, and there is nothing latent(hidden). Instead, given a datapoint, and given one (or a list) of model parameter values, we calculate the already-happened chance that the data, as seen, has happened. We can calculate this chance for all possible hidden parameter values, irrespectively of our belief in them. This is kind of like a grid search, or grid view. Us doing this calculation does not favour any $\\phi$ out of the set of $\\{ \\phi \\}$ (yet) and the result does not mean that any specific $\\phi$ is the right one. \n\nObviously, in order to perform this calculation, we need a model of the world, and a kind of that uses these hidden parameters and is conductive to our calculation. Here much of the trouble and effort of the user of bayes' rule comes in. Many treatises on how awesome the bayes rule is will not help you with this.\n\nIt is up to the user to construct a model, make sure that the model is representative of the reality, and that it is conductive to the likelihood function value evaluation. \n\nLet's take a look at two most common, basic models, and how is the likelihood function constructed for them. The examples listed below are NOT to say that this is the only way the likelihood function can be constructed! \n\n\n\n## Example model of coin toss, and constructing it's likelihood function.\n\n\nThe classic coin-toss toy model is a good one, and widely applicable and extensible, if explained correctly. \n\n### Model description\n\nHere it is so that you don't have to go back to the book.\n\nLet's say that the result of the coin toss can be 0 or 1. Our model approximates the real coin by ignoring the possibility of the real coin landing on the edge.\n\n---\n\nProposition: biased coin gives \"1s\" more often. \n\n---\n\nLet's say that the coin could be biased(that is, unfair) and we describe this bias with a parameter $\\phi$. We do not know, and cannot know directly what the true $\\phi$ is. What we do know is that the $\\phi$ can be anywhere from 0.0 to 1.0, with a 'fair' coin being at $\\phi \\equiv$ 0.50. \n\n* 0.10 means that the coin is very unfair towards only giving zeroes, \n* 0.90 would mean that it is very unfair towards only giving ones. \n* 0.50 means that it gives 0 in 50% times, and 1 in 50% of times, that is, the coin is fair\n\n\n### The Likelihood function for the model\n\nTo compile this description into something computable, we can say that our model for probability of getting a 1 is $l(D \\equiv 1 | \\phi)= \\phi$. Symmetrically, the model of probability of getting a zero, is $l(D \\equiv 0 | \\phi)=(1-\\phi)$\n\nThis likelihood function is not something that comes out of the world. It does not come from the bayes rule. 
It is a new construct that we have created to link our suspected coin bias $\\phi$ with the probability of getting an observation $D$. We have created a $model M$. We are using the model $M$ to approximate and describe the real coin. \n\nNote that there is no belief involved here, only assumptions; we assume that $model M$ is an approximation of a real coin.\n\n### Prior belief\n\nWe can give our prior belief that the coin is fair, and our disbelief that it is unfair. We can do it by setting, for a possible values of $\\{\\phi\\}$ = [0.1, 0.5, 0.9], values of initial belief as $b(\\{\\phi\\})$ = b([0.1, 0.5, 0.9]) = [0.1, 0.8, 0.1]\n\n### Calculations\n\nPutting in some concrete numbers, let's say that we believe that $\\phi=0.5$ (perfectly fair) and then we toss the coin, and get $D\\equiv1$. What was the likelihood to get such a result? $l(D \\equiv1 | \\phi \\equiv 0.5) = 0.5$. \n\nAt this point, we can end our lame discussion -- the coin is fair, the chance of getting a one was 50%, so everything is fine, right?\n\nThat's correct. However, what if we do not fully believe that the coin is fair? what if we suspect that the coin is actually biased towards zero?\n\nLet's compute what was the chance of getting a 1, if the latent parameter $\\phi$ was 0.1: $l(D \\equiv 1 | \\phi \\equiv 0.1) = 0.1$.\n\nSo, **If** the coin was heavily biased towards zero, then we still could get a 1, 10% of the time, and getting a result of \"1\" in one coin toss is not very surprising. \n\nThere is seemingly nothing to worry about, except for that we did not make any progress on discovering the true value of $\\phi$, even tough we have made an indirect observation of it, through observing the $D$.\n\nAll that we know so far is that the likelihood of observing the result, depends on the parameter in a model $M$ of the plant $P$ that generated that result. To say this, is to say nearly quite nothing.\n\n### No result?\n\nWhat we really care to know, is what to believe about $\\{ \\phi \\}$ -- is the real value closer to 0.5 or to 0.1? So far we have nothing from reason to go by to believe either of these things. Or do we?\n\nHere is the first time when the bayesian update comes in. And, this is what the books on bayesian reasoning fail the hardest at. They start with discussing prior belief, posterior belief, and worry about how the prior affects the posterior, and why having a prior belief is or is not a good idea. And how to convince other people to your priors. They droll about informative or uninformative priors. Prior this, prior that. \n\nHowever, all this fails because the Bayesian update is truly useless unless you explain and understand the likelihood function first.\n\nAgain: Be aware that the likelihood function DOES NOT COME FROM THE BAYES EQUATION. It comes from the model of the plant!\n\nBefore we get to the bayesian belief update, please muster your patience, and take a minute and think about other possible models of likelihood for coin toss-- even if they are implausible. Or for any other model of other world phenomena that you care about. Another classic problem in this category is looking for disease in population, using an imperfect test. I am sure you have heard about it.\n\n### Why bother?\n\nIn the previous section, we have created and used a \"forward model\" of the world. 
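
To make this concrete, here is a minimal sketch of the coin forward model and its likelihood function in plain Python/NumPy (the function and variable names are mine, chosen only for this illustration):

```python
import numpy as np

def sample_coin(phi, rng):
    """The plant: one toss of a coin with hidden bias phi; returns 1 with probability phi."""
    return int(rng.random() < phi)

def likelihood(D, phi):
    """l(D | phi) for this model: phi if D == 1, and (1 - phi) if D == 0."""
    return phi if D == 1 else 1.0 - phi

rng = np.random.default_rng(0)
phis = [0.1, 0.5, 0.9]                    # candidate hidden biases
print(sample_coin(0.5, rng))              # one observation D produced by the plant
print([likelihood(1, p) for p in phis])   # [0.1, 0.5, 0.9], i.e. l(D=1 | phi) = phi
```
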
\n\nThe reason we do it this way,\n\nis that the \"forward models\" that is, casual models (where the principle of action-reaction is held) \n\nof this kind are relatively easy to make for a very wide variety of real-world phenomena. \n\n\n## Example model with linear regression, and constructing its likelihood function.\n\nI bet that you are anxious to hear if bayesian reasoning can be used to any more than coin tosses, or figuring out how if taking a coronavirus test makes sense or not.\n\nYes, it can. Here is an example on how would you treat a linear regression problem, of the kind:\n\n\nProposition: Taller people are heavier. \n\nQuestion: What is the proportionality coefficient?\n\nClarification: In Bayesian belief terms, what should we believe about various propositions for the value of proportionality coefficient?\n\n### Model description\n\n$\\mapsto$ model input: height $h$\n\n$\\mapsto$ model output: weight $w$\n\n$\\mapsto$ model parameters: proportionality coefficient $\\phi_{prop}$, and the uncertainty descriptor $\\phi_{\\sigma}$. In other words, we have two latent parameters, not one.\n\n$\\mapsto$ Plant's model of uncertainty: random variable $N(0,\\sigma )$\n\n$\\mapsto$ The complete model for predicting the weight for a person of height $h$ is $\\begin{equation}w = h * \\phi_{prop} + N(0,\\phi_{\\sigma} )\\end{equation}$\n\n$\\mapsto$ nose-free version of the model -- that is, if we set noise to zero, we get weight a model $w = h * \\phi_{prop}$ \n\n### Likelihood function for linear regression\n\nHere is the critical bit. We construct the equation for the likelihood function of data point $D_{weight}$ happening, given the pair of ($\\phi_{prop}$, $\\phi_{\\sigma}$). \n\nLet's say that the likelihood, or probability of this data point happening is inversely proportional to the distance of the data point value from the model's predicted noiseless value. Hence we want something like\n\n$ l( D_{weight} | (\\phi_{p}, \\phi_{\\sigma})) \\ \\ \\propto \\ \\ 1/distance$\n\nHere are some proposals on how this could look like.\n\nDistance, is simply the difference between the model's predicted, noiseless $w$ and the observed $D_{w}$. For clarity of notation, let's use symbol delta defined as $\\Delta = D_{w} - h * \\phi_{prop}$\n\nOne possible distance metric is :\n\n$_{proposed}distance = abs ( \\Delta ) $\n\nWe can also use a square of difference, which has the nice property that it weighs large distance more. (It also has a couple of other nice properties):\n\n$_{proposed}distance = \\Delta^{2} $\n\nWe also want to weight the distance by the amount of uncertainty.\n* If the model certainty is high ($\\sigma$ is small), then the \"improbability\" due to distance will be higher.\n* If the model certainty is low ($\\sigma$ is large), then the probability is higher even at a distance.\n\nWe account for this by \"weighing\" the distance by $\\sigma$:\n\n$_{proposed}weightedDistance = (\\frac{\\Delta}{\\phi_{std}})^{2}$ \n\nOne more property that we need, before we can get to the likelihood, is that the likelihood for any prospective parameter must evaluate to between 0..1. For that, we need to get a bit creative . . . 
or \"inspired\".\n\n\nHere's a proposed likelihood function that has all the properties required above, sourced from https://en.wikipedia.org/wiki/Bayesian_linear_regression#Model_setup\n\n$ l( D_{weight} | (\\phi_{prop}, \\phi_{\\sigma})) = \\sigma * \\exp(-\\frac{1}{2\\sigma^{2}} \\Delta^{2}) $\n\nThis function has this nice property that the maximum value is 1.0, quickly decays towards zero as the $\\Delta$ increases, and works nice with $\\Delta$ weighted by $\\sigma$.\n\nThis specific function also has the property that it integrates to 1.00 for $\\Delta$ of $(-\\infty , +\\infty )$ -- but we really do not require this property to be there in the likelihood function. As long as you choose a function that outputs something between 0 and 1, it's fine. To see that this is the case, recall that we are talking about the probability of data given the model parameter. There can be more than one more model parameter that will often (or always) produce a given datapoint value. That's fine. We will make the probabilities in the likelihood compatible with beliefs using a normalizing term - \"Evidence\".\n\nAgain, let me be clear that the above presented likelihood function, is merely an example function that happens to have desirable properties. It is by far not the only function that exhibits this properties! For example, you can have a process that does not exhibit gaussian uncertainty like $\\propto \\exp(-\\frac{1}{x^{2}})$. That's fine for Bayesian reasoning!.\n\nNow, that you see that it is possible to build a likelihood function, suitable to describe the problem of \"what should I believe...\" in bayesian terms for a regression problem, you can begin to believe that it is possible to construct such a function for a very wide variety of problems!\n\nWe are not done yet. There is one more hoop to clear before we can get to the posteriority: The Evidence function.\n\n\n## Evidence, $m()$\n\nThe marginal probability $m(D)$ is also sometimes called \"evidence\".\n\n\nWhy marginal? this word is not to be confused with things like \"unimportant\", \"extreme\", or \"bad\". The word \"marginal\" is just a code-word that came from the depths of history. One used to write the results of an experiment (when taking the data) in a table (on paper!). Then, came the time to analise the data. Since copying the numbers by hand is time consuming, one would use the margin of the page to squeeze in additional scribblings. Hence, these computed numbers are \"marginal\". \n\n\nWhy \"evidence\"? I am not sure, but what I do understand is that again, the name-word, code-word \"evidence\" does not have anything to do with common-sense meaning of that word. It does not mean \"certainty\", \"proof\" , \"confirmation\", \"verification\", \"display\", \"demonstration\", e.t.c. Here we are only using this word as a name for a certain operation.\n\nAs hinted above, the value of the marginal probability, $m(D)$ is calculated by summing (or integrating) the likelihood function of $D$ for all of the considered values of $\\phi$, weighted by the prior belief in $\\phi$. \n\n$$m(D) = \\sum_{\\alpha\\in\\{\\phi\\}}{l(D|\\alpha)b(\\alpha})$$\n\nWhat we get in effect, is a scaling, or normalization term that makes the units and scale of belief and likelihood match. \n\nIn the original formula, the same thing is denoted as $p(D)$. Which is triple as confusing because (a) requires an integral or summation over all $\\phi$, (b) it integrates to more than 1.0 and hence (c) it has none of the properties of the other $p()$'s. Uhh. 
(?!?!?). \n\nIf you are confused, I do not blame you.\n\nInstead, I propose that it would make much more sense never to write $p(D)$ nor $p(B)$, but rather, teach what the marginal probability function really is, right away: that $m() = m(D,l(),\\{\\phi\\}, b(\\{\\phi\\}))$. Sounds complicated? It's tedious to write, but it is not complicated. (That's why people `in the know` shorten it). Let's see an example:\n\n\nHere is an example how to perform this calculation. For our coin toss example, we have:\n\n$m(D \\equiv 1, l(),\\{\\phi\\}, b(\\{\\phi\\}) ) = \\ldots$\n\n$\\ldots \\sum_{\\alpha\\in\\{\\phi\\}}{l(D \\equiv 1|\\alpha)b(\\alpha}) = \\ldots $\n\n$ \\ldots l(D \\equiv 1 | \\phi \\equiv 0.1)b(\\phi \\equiv 0.1) \\ldots$\n\n$ \\ldots + l(D \\equiv 1 | \\phi \\equiv 0.5)b(\\phi \\equiv 0.5) \\ldots$\n\n$ \\ldots + l(D \\equiv 1 | \\phi \\equiv 0.9)b(\\phi \\equiv 0.9) \\ldots$\n\n$ = \\ldots \\\\ $\n\n$ \\ldots 0.1*0.1+0.5*0.8+0.9*0.1 = 0.01 + 0.4 + 0.09 \\ldots \\\\ $\n\n$ \\ldots \\large{= 0.5}$\n\nSo, the marginal probability of the \"1\" happening, under current belief system, is 0.5.\n\nKind of anticlimactic?\n\nWait until you see what happens when the prior beliefs were different (unbalanced prior). \n\nFor example, for $b(\\{\\phi\\})$ = [0.8, 0.1, 0.1] we get $m(D \\equiv 1, l(),\\{\\phi\\}, b(\\{\\phi\\}) )$ = 0.22 . In other words, if our prior belief was that the coin is biased towards zero, then the \"marginal probability\" (Evidence!) of getting $D \\equiv 1$ is lower!\n\n\nAnd then, for $b(\\{\\phi\\})$ = [0.1, 0.1, 0.8], $m(D \\equiv 1, l(),\\{\\phi\\}, b(\\{\\phi\\}) )$ = 0.78\n\nFor \"uninformative\" prior belief of $b(\\{\\phi\\})$ = [0.33, 0.33, 0.33], $m(D \\equiv 1, l(),\\{\\phi\\}, b(\\{\\phi\\}) )$ = 0.50 again. \n\nHow come?\n\n\nIf this surprises you, you are in a good company. The surprise comes from the historical fact that the \"marginal probability function\" has the word \"probability\" in it. It would seem that the probability of something happening depends on our belief about it???\n\nAlas, this is not the case. \"Marginal Probability\" or \"evidence\" is merely a scaling factor that we need to apply to the likelihood, in our full equation for update of the prior belief:\n\n$$b_{\\phi, updated} \\ = \\ b(\\phi|D) \\leftarrow b_{\\phi, prior} \\times \\frac{l()}{m()} \\ = \\ b(\\phi) \\times \\frac{l(D|\\phi)}{ m(D,l(),\\{\\phi\\}, b(\\{\\phi\\})) } \\ = \\ b(\\phi) \\times \\frac{l(D|\\phi)}{ \\sum_{\\alpha \\ \\in \\ \\{\\phi\\}}{(l(D|\\alpha)b(\\alpha))} }$$\n\nHence, it is much more enlightening to say that the \"Bayes Factor\" -- the ratio $\\frac{l()}{m()}$ -- tells us how we should modify our prior belief, given the data point $D$. \n\nThis factor can be less than one, or more than one. It could be close to zero if $D$ is unlikely, or it could be tending to infinity if our prior belief was very very low. The \"Bayes Factor\" is composed of the interplay between the likelihood function and our prior belief about all possible $\\{ \\phi \\}$. Seeing it this way dispels any magic about it. 
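
Before moving on, here is a minimal numerical sketch (plain NumPy; the numbers are the ones from the coin example above, and the variable names are mine) of the evidence term, the Bayes factor, and the resulting belief update:

```python
import numpy as np

phis  = np.array([0.1, 0.5, 0.9])   # candidate hidden biases {phi}
prior = np.array([0.1, 0.8, 0.1])   # prior belief b({phi})
D = 1                               # the observed data point

lik = phis if D == 1 else 1.0 - phis    # l(D | phi) = phi when D == 1

evidence = np.sum(lik * prior)          # m(D) = 0.1*0.1 + 0.5*0.8 + 0.9*0.1 = 0.5
bayes_factor = lik / evidence           # l() / m(), one value per candidate phi
posterior = prior * bayes_factor        # b(phi | D)

print(round(evidence, 3))               # 0.5
print(posterior.round(3))               # [0.02 0.8  0.18], sums to 1
```
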
\n\n\n\n## Finally, the Bayesian update\n\nPhew.\n\nAfter all this introduction -- which is not really introduction, it is THE meat that should be taught in school in the first place -- we can get to the bayesian method for updating beliefs:\n\n$$b(\\phi|D) \\leftarrow b(\\phi) \\frac{l(D|\\phi)}{ \\sum_{\\alpha \\ \\in \\ \\{\\phi\\}}{(l(D|\\alpha)b(\\alpha))} }$$\n\nSee the next chapter for a demonstration of this method in action.\n\n\n\n```\n#hide\n```\n\n\n```\n\n```\n", "meta": {"hexsha": "b065c68dbf946e156e289c625796c24aef501aa8", "size": 44411, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "10-gr-extended-rant-version.ipynb", "max_stars_repo_name": "jerzydziewierz/bayesian_reasoning_v0", "max_stars_repo_head_hexsha": "cdb76d4a7f2c406f32e1d5fad25b07abb8af772c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "10-gr-extended-rant-version.ipynb", "max_issues_repo_name": "jerzydziewierz/bayesian_reasoning_v0", "max_issues_repo_head_hexsha": "cdb76d4a7f2c406f32e1d5fad25b07abb8af772c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "10-gr-extended-rant-version.ipynb", "max_forks_repo_name": "jerzydziewierz/bayesian_reasoning_v0", "max_forks_repo_head_hexsha": "cdb76d4a7f2c406f32e1d5fad25b07abb8af772c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.940027894, "max_line_length": 684, "alphanum_fraction": 0.639323591, "converted": true, "num_tokens": 9432, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.6334102498375401, "lm_q2_score": 0.46879062662624377, "lm_q1q2_score": 0.2969367879328261}}
{"text": "```python\nimport pandas as pd\nimport xarray as xr\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%load_ext autoreload\n%autoreload 2\nfrom ar6_ch6_rcmipfigs.constants import INPUT_DATA_DIR_BADC, BASE_DIR\n```\n\n The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n\n\n\n```python\nfrom ar6_ch6_rcmipfigs.utils.badc_csv import write_badc_header\n```\n\n\n```python\nfrom ar6_ch6_rcmipfigs.utils.plot import get_cmap_dic\nfrom ar6_ch6_rcmipfigs.constants import OUTPUT_DATA_DIR, RESULTS_DIR\n```\n\n# Code + figures\n\n\n```python\noutput_name = 'fig_em_based_ERF_GSAT_period2010-2019_1850-1900'\n```\n\n### Path input data\n\n\n```python\n\n# PATH_DATASET = OUTPUT_DATA_DIR / 'ERF_data.nc'\nPATH_DATASET = OUTPUT_DATA_DIR / 'historic_delta_GSAT/dT_data_hist_recommendation.nc'\n\nfn_ERF_2019 = OUTPUT_DATA_DIR / 'historic_delta_GSAT/2019_ERF_est.csv'\n# fn_output_decomposition = OUTPUT_DATA_DIR / 'historic_delta_GSAT/hist_ERF_est_decomp.csv'\n\nfn_ERF_timeseries = OUTPUT_DATA_DIR / 'historic_delta_GSAT/hist_ERF_est.csv'\n\nfp_collins_sd = RESULTS_DIR / 'tables_historic_attribution/table_std_smb_orignames.csv'\n\nfn_TAB2_THORNHILL = INPUT_DATA_DIR_BADC / 'table2_thornhill2020.csv'\n```\n\n### Path output data\n\n\n```python\nPATH_DF_OUTPUT = OUTPUT_DATA_DIR / 'historic_delta_GSAT/dT_data_hist_recommendation.csv'\n\nprint(PATH_DF_OUTPUT)\n```\n\n /Users/sarablichner/science/PHD/IPCC/public/AR6_CH6_RCMIPFIGSv2/ar6_ch6_rcmipfigs/data_out/historic_delta_GSAT/dT_data_hist_recommendation.csv\n\n\n### various definitions\n\n**Set reference year for temperature change:**\n\n\n```python\n\nref_period = [1850, 1900]\npd_period = [2010, 2019]\n```\n\n\n```python\n# variables to plot:\nvariables_erf_comp = [\n 'CO2', 'N2O', 'CH4', 'HC', 'NOx', 'SO2', 'BC', 'OC', 'NH3', 'VOC'\n]\n# total ERFs for anthropogenic and total:\nvariables_erf_tot = []\nvariables_all = variables_erf_comp + variables_erf_tot\n# Scenarios to plot:\nscenarios_fl = []\n```\n\n\n```python\nvarn = ['co2', 'N2O', 'HC', 'HFCs', 'ch4', 'o3', 'H2O_strat', 'ari', 'aci']\nvar_dir = ['CO2', 'N2O', 'HC', 'HFCs', 'CH4_lifetime', 'O3', 'Strat_H2O', 'Aerosol', 'Cloud']\n```\n\nNames for labeling:\n\n\n```python\nrename_dic_cat = {\n 'CO2': 'Carbon dioxide (CO$_2$)',\n 'GHG': 'WMGHG',\n 'CH4_lifetime': 'Methane (CH$_4$)',\n 'O3': 'Ozone (O$_3$)',\n 'Strat_H2O': 'H$_2$O (strat)',\n 'Aerosol': 'Aerosol-radiation',\n 'Cloud': 'Aerosol-cloud',\n 'N2O': 'N$_2$O',\n 'HC': 'CFC + HCFC',\n 'HFCs': 'HFC'\n\n}\nrename_dic_cols = {\n 'co2': 'CO$_2$',\n 'CO2': 'CO$_2$',\n 'CH4': 'CH$_4$',\n 'ch4': 'CH$_4$',\n 'N2O': 'N$_2$O',\n 'n2o': 'N$_2$O',\n 'HC': 'CFC + HCFC + HFC',\n 'HFCs': 'HFC',\n 'NOx': 'NO$_x$',\n 'VOC': 'NMVOC + CO',\n 'SO2': 'SO$_2$',\n 'OC': 'Organic carbon',\n 'BC': 'Black carbon',\n 'NH3': 'Ammonia'\n}\n```\n\n\n```python\nrn_dic_cat_o = {}\nfor key in rename_dic_cat.keys():\n rn_dic_cat_o[rename_dic_cat[key]]=key\nrn_dic_cols_o = {}\nfor key in rename_dic_cols.keys():\n rn_dic_cols_o[rename_dic_cols[key]]=key\n```\n\n### Open ERF dataset:\n\n\n```python\nds = xr.open_dataset(PATH_DATASET)\nds # ['Delta T']\n```\n\n\n\n\n
\n\n\n\n### Overview plots\n\n\n```python\ncols = get_cmap_dic(ds['variable'].values)\n```\n\n (0.9568627450980393, 0.796078431372549, 0.21176470588235294)\n (0.8274509803921568, 0.0, 0.1568627450980392)\n (1.0, 0.4196078431372549, 0.07450980392156863)\n (0.26666666666666666, 0.0, 0.5254901960784314)\n (0.3764705882352941, 0.5725490196078431, 0.796078431372549)\n (0.5411764705882353, 0.2235294117647059, 0.0)\n (0.4745098039215686, 0.792156862745098, 0.9333333333333333)\n (0.0, 0.6901960784313725, 0.6039215686274509)\n (0.0, 0.5019607843137255, 0.23137254901960785)\n (0.47843137254901963, 0.5058823529411764, 0.5058823529411764)\n\n\n\n```python\nfig, axs = plt.subplots(2, sharex=True, figsize=[6, 6])\n\nax_erf = axs[0]\nax_dT = axs[1]\nfor v in ds['variable'].values:\n ds.sel(variable=v)['Delta T'].plot(ax=ax_dT, label=v, c=cols[v])\n ds.sel(variable=v)['ERF'].plot(ax=ax_erf, c=cols[v])\nds.sum('variable')['Delta T'].plot(ax=ax_dT, label='Sum', c='k', linewidth=2)\nds.sum('variable')['ERF'].plot(ax=ax_erf, c='k', linewidth=2)\n\nax_dT.set_title('Temperature change')\nax_erf.set_title('ERF')\nax_erf.set_ylabel('ERF [W m$^{-2}$]')\nax_dT.set_ylabel('$\\Delta$ GSAT [$^{\\circ}$C]')\nax_erf.set_xlabel('')\nax_dT.legend(ncol=4, loc='upper left', frameon=False)\nplt.tight_layout()\nfig.savefig('hist_timeseries_ERF_dT.png', dpi=300)\n```\n\n\n```python\ndf_deltaT = ds['Delta T'].squeeze().drop('percentile').to_dataframe().unstack('variable')['Delta T']\n\ncol_list = [cols[c] for c in df_deltaT.columns]\n\ndf_deltaT = ds['Delta T'].squeeze().drop('percentile').to_dataframe().unstack('variable')['Delta T']\n\nfig, ax = plt.subplots(figsize=[10, 5])\nax.hlines(0, 1740, 2028, linestyle='solid', alpha=0.9, color='k',\n linewidth=0.5) # .sum(axis=1).plot(linestyle='dashed', color='k', linewidth=3)\n\ndf_deltaT.plot.area(color=col_list, ax=ax)\ndf_deltaT.sum(axis=1).plot(linestyle='dashed', color='k', linewidth=3, label='Sum')\nplt.legend(loc='upper left', ncol=3, frameon=False)\nplt.ylabel('$\\Delta$ GSAT ($^\\circ$ C)')\nax.set_xlim([1740, 2028])\nsns.despine()\n```\n\n\n```python\n\n```\n\n# Split up ERF/warming into sources by using data from Thornhill\n\nWe use the original split up in ERF from Thornhill/Bill Collin's plot \n\nOpen dataset from Bill Collin's script:\n\n\n```python\n\n```\n\n\n```python\ndf_collins = pd.read_csv(fn_ERF_2019, index_col=0)\ndf_collins.index = df_collins.index.rename('emission_experiment')\ndf_collins_sd = pd.read_csv(fp_collins_sd, index_col=0)\ndf_collins\n```\n\n\n\n\n
2019 emission-based ERF decomposition (`df_collins`, W m$^{-2}$); rows are emitted species (emission_experiment), columns are forcing agents:

| emission_experiment | CO2 | CH4_lifetime | Strat_H2O | Aerosol | Cloud | O3 | HC | N2O | HFCs |
|---|---|---|---|---|---|---|---|---|---|
| CO2 | 2.057554 | 0.000000 | 0.00 | 0.000000 | 0.000000 | 0.000000 | 0.00 | 0.00 | 0.000000 |
| CH4 | 0.017549 | 0.844457 | 0.05 | -0.002653 | 0.018421 | 0.266736 | 0.00 | 0.00 | 0.000000 |
| N2O | 0.000000 | -0.035967 | 0.00 | -0.002090 | 0.042503 | 0.026124 | 0.00 | 0.21 | 0.000000 |
| HC | 0.000053 | -0.050927 | 0.00 | -0.008080 | -0.017419 | -0.162033 | 0.41 | 0.00 | 0.039772 |
| NOx | 0.000000 | -0.380025 | 0.00 | -0.009166 | -0.014458 | 0.137102 | 0.00 | 0.00 | 0.000000 |
| VOC | 0.069491 | 0.162462 | 0.00 | -0.002573 | 0.008884 | 0.202071 | 0.00 | 0.00 | 0.000000 |
| SO2 | 0.000000 | 0.000000 | 0.00 | -0.234228 | -0.703784 | 0.000000 | 0.00 | 0.00 | 0.000000 |
| OC | 0.000000 | 0.000000 | 0.00 | -0.072143 | -0.136919 | 0.000000 | 0.00 | 0.00 | 0.000000 |
| BC | 0.000000 | 0.000000 | 0.00 | 0.144702 | -0.037227 | 0.000000 | 0.00 | 0.00 | 0.000000 |
| NH3 | 0.000000 | 0.000000 | 0.00 | -0.033769 | 0.000000 | 0.000000 | 0.00 | 0.00 | 0.000000 |
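As a quick sanity check on the table above (a small sketch that uses only `df_collins` as loaded in the previous cell), each row should sum to the total 2019 emission-based ERF attributed to that emitted species; this row total is also what the normalization further below divides by.

```python
# Total 2019 ERF attributed to each emission experiment (row sums of the table above).
# With the values shown, e.g. the CH4 row sums to roughly 1.19 W m-2.
total_ERF_2019 = df_collins.sum(axis=1)
print(total_ERF_2019)
```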
\n\n\n\n\n```python\nwidth = 0.7\nkwargs = {'linewidth': .1, 'edgecolor': 'k'}\n```\n\n## Decompose GSAT signal as the ERF signal\n\n### GSAT\n\nGet period mean difference for GSAT:\n\n\n```python\ndf_deltaT = ds['Delta T'].squeeze().drop('percentile').to_dataframe().unstack('variable')['Delta T']\nmean_PD = df_deltaT.loc[pd_period[0]:pd_period[1]].mean()\nmean_PD\n\nmean_PI = df_deltaT.loc[ref_period[0]:ref_period[1]].mean()\n\ndT_period_diff = pd.DataFrame(mean_PD - mean_PI, columns=['diff']) # df_deltaT.loc[2019])\ndT_period_diff.index = dT_period_diff.index.rename('emission_experiment')\n```\n\nMake normalized decomposition of different components from emission based ERF. \n\n\n```python\ndf_coll_t = df_collins.transpose()\nif 'Total' in df_coll_t.index:\n df_coll_t = df_coll_t.drop('Total')\n# scale by total:\nscale = df_coll_t.sum()\n# normalized ERF: \ndf_col_normalized = df_coll_t / scale\n#\ndf_col_normalized.transpose().plot.barh(stacked=True)\n```\n\nWe multiply the change in GSAT in 2010-2019 vs 1850-1900 by the matrix describing the source distribution from the ERF:\n\n\n```python\ndT_period_diff['diff']\n```\n\n\n\n\n emission_experiment\n CO2 0.788254\n N2O 0.095969\n CH4 0.515223\n NOx -0.138916\n SO2 -0.597658\n BC 0.051592\n OC -0.083104\n NH3 -0.014523\n VOC 0.224660\n HC 0.096768\n Name: diff, dtype: float64\n\n\n\n\n```python\ndf_dt_sep = dT_period_diff['diff'] * df_col_normalized\n\ndf_dt_sep = df_dt_sep.transpose()\ndf_dt_sep\n```\n\n\n\n\n
`df_dt_sep`: GSAT change (2010-2019 relative to 1850-1900, $^{\circ}$C) attributed to each emitted species (rows) and forcing agent (columns):

| emission_experiment | CO2 | CH4_lifetime | Strat_H2O | Aerosol | Cloud | O3 | HC | N2O | HFCs |
|---|---|---|---|---|---|---|---|---|---|
| BC | 0.000000 | 0.000000 | 0.000000 | 0.069463 | -0.017871 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| CH4 | 0.007569 | 0.364236 | 0.021566 | -0.001144 | 0.007945 | 0.115050 | 0.000000 | 0.000000 | 0.000000 |
| CO2 | 0.788254 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| HC | 0.000024 | -0.023315 | 0.000000 | -0.003699 | -0.007975 | -0.074182 | 0.187706 | 0.000000 | 0.018209 |
| N2O | 0.000000 | -0.014348 | 0.000000 | -0.000834 | 0.016956 | 0.010421 | 0.000000 | 0.083775 | 0.000000 |
| NH3 | 0.000000 | 0.000000 | 0.000000 | -0.014523 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| NOx | 0.000000 | -0.198057 | 0.000000 | -0.004777 | -0.007535 | 0.071454 | 0.000000 | 0.000000 | 0.000000 |
| OC | 0.000000 | 0.000000 | 0.000000 | -0.028678 | -0.054427 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| SO2 | 0.000000 | 0.000000 | 0.000000 | -0.149239 | -0.448419 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| VOC | 0.035454 | 0.082889 | 0.000000 | -0.001313 | 0.004533 | 0.103097 | 0.000000 | 0.000000 | 0.000000 |
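As a small consistency check (a sketch that reuses only the DataFrames defined above), one entry of this table can be reproduced by hand: the warming attributed to CH4 emissions through the methane-lifetime channel is the total CH4-induced GSAT change multiplied by the normalized CH4_lifetime share of the CH4 emission-based ERF.

```python
# Reproduce the CH4 / CH4_lifetime entry of df_dt_sep by hand:
# total GSAT change from CH4 emissions (~0.515 degC) times the normalized
# CH4_lifetime fraction of the CH4 emission-based ERF.
check = dT_period_diff['diff']['CH4'] * df_col_normalized['CH4']['CH4_lifetime']
print(check)  # ~0.364, matching the table above
```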
\n\n\n\n\n```python\ndf_dt_sep.plot.bar(stacked=True)\ndT_period_diff['diff'].reindex(df_dt_sep.index).plot()\n```\n\n### ERF\n\nGet period mean difference for ERF:\n\n\n```python\ndf_ERF = ds['ERF'].squeeze().to_dataframe().unstack('variable')['ERF']\nmean_ERF_PD = df_ERF.loc[pd_period[0]:pd_period[1]].mean()\n\nmean_ERF_PI = df_ERF.loc[ref_period[0]:ref_period[1]].mean()\n```\n\n\n```python\nERF_period_diff = pd.DataFrame(mean_ERF_PD - mean_ERF_PI, columns=['diff']) # df_deltaT.loc[2019])\nERF_period_diff.index = ERF_period_diff.index.rename('emission_experiment')\n```\n\n\nWe multiply the change in ERF in 2010-2019 vs 1850-1900 by the matrix describing the source distribution from the ERF:\n\n\n```python\ndf_erf_sep = ERF_period_diff['diff'] * df_col_normalized\ndf_erf_sep = df_erf_sep.transpose()\n```\n\n\n```python\nERF_period_diff\n```\n\n\n\n\n
`ERF_period_diff`: period-mean ERF difference, 2010-2019 minus 1850-1900 (W m$^{-2}$):

| emission_experiment | diff |
|---|---|
| CO2 | 1.703144 |
| N2O | 0.202558 |
| CH4 | 1.018111 |
| NOx | -0.274165 |
| SO2 | -0.979315 |
| BC | 0.097273 |
| OC | -0.157024 |
| NH3 | -0.030328 |
| VOC | 0.402956 |
| HC | 0.205605 |
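For reference, summing the `diff` column gives the total anthropogenic ERF change between the two periods implied by these numbers (a one-line sketch; with the values shown it comes to roughly 2.2 W m$^{-2}$).

```python
# Total period-mean ERF change (2010-2019 minus 1850-1900) summed over all
# emission experiments; roughly 2.2 W m-2 with the values tabulated above.
print(ERF_period_diff['diff'].sum())
```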
\n\n\n\n\n```python\ndf_erf_sep.plot.bar(stacked=True)\nERF_period_diff['diff'].reindex(df_erf_sep.index).plot.line()\nplt.show()\n```\n\n# Accounting for non-linearities in ERFaci, we scale down the GSAT change from aci contribution to fit with chapter 7 \n\nThe GSAT change from aerosol cloud interactions in 2019 vs 1750 is estimated to -0.38 degrees by chapter 7, which accounts for non-linearities in ERFaci. When considering the 1750-2019 change in GSAT, we therefore scaled the GSAT change by aerosol cloud interactions to fit this total. This constituted a 25% reduction. \nFor the GSAT averaged over the period 2010-2019 vs 1850-1900 we thus reduce by 25%. \n\nFurthermore, ERFaci over the same period (2010-2019 vs 1850-1900) is also sligtly overestimated due to higher emissions in the period versus 2019. To scale this, we use the ratio between ERFaci and ERFari as estimated by CHRIS [INSERT PROPER REFS] in the two periods respectively. The logic is that these both originate from the same emissions, so their ratio reflects the dampening of ERFaci with increased emissions. \n\nLet $\\alpha$ be the ratio \n\n\\begin{equation}\n\\frac{ERF_{aci}}{ERF_{ari}} = \\alpha \n\\end{equation}\n\nFrom the data (from FaIR, Chris) $\\alpha_{period}=3.42$ for 1850-1900 vs 2010-2019, while for the standard period from 1750 to 2019, it is $\\alpha_{standard} = 3.91$. \n\nThus, the ratio is \n\\begin{equation}\n\\frac{\\alpha_{period}}{\\alpha_{stanard}} = 0.874\n\\end{equation}\n\nThis results in a scaling down of approximately 12.5% of ERFaci. \n\n\n\n```python\nscale_down_by = 0.25\naci_tot = df_dt_sep.sum()['Cloud']\naci_tot\ndf_dt_sep['Cloud'] = df_dt_sep['Cloud'] * (1 - scale_down_by) # scale_by\ndf_dt_sep.sum()\n```\n\n\n\n\n CO2 0.831302\n CH4_lifetime 0.211404\n Strat_H2O 0.021566\n Aerosol -0.134744\n Cloud -0.380095\n O3 0.225841\n HC 0.187706\n N2O 0.083775\n HFCs 0.018209\n dtype: float64\n\n\n\n\n```python\ndf_erf_sep.sum()\n```\n\n\n\n\n CO2 1.781745\n CH4_lifetime 0.397714\n Strat_H2O 0.042616\n Aerosol -0.221752\n Cloud -0.843504\n O3 0.417665\n HC 0.398825\n N2O 0.176819\n HFCs 0.038688\n dtype: float64\n\n\n\n\n```python\nscale_down_by = 0.125\naci_tot = df_erf_sep.sum()['Cloud']\naci_tot\ndf_erf_sep['Cloud'] = df_erf_sep['Cloud'] * (1 - scale_down_by) # scale_by\ndf_erf_sep.sum()\n```\n\n\n\n\n CO2 1.781745\n CH4_lifetime 0.397714\n Strat_H2O 0.042616\n Aerosol -0.221752\n Cloud -0.738066\n O3 0.417665\n HC 0.398825\n N2O 0.176819\n HFCs 0.038688\n dtype: float64\n\n\n\n# Uncertainties\n\n\n```python\nfrom ar6_ch6_rcmipfigs.utils.badc_csv import read_csv_badc\n\nnum_mod_lab = 'Number of models (Thornhill 2020)'\nthornhill = read_csv_badc(fn_TAB2_THORNHILL, index_col=0)\nthornhill.index = thornhill.index.rename('Species')\nthornhill\n\n# ratio between standard deviation and 5-95th percentile.\nstd_2_95th = 1.645\n\nsd_tot = df_collins_sd['Total_sd']\ndf_err = pd.DataFrame(sd_tot.rename('std'))\ndf_err['SE'] = df_err\n\ndf_err['SE'] = df_err['std'] / np.sqrt(thornhill[num_mod_lab])\ndf_err['95-50_SE'] = df_err['SE'] * std_2_95th\ndf_err.loc['CO2', '95-50_SE'] = df_err.loc['CO2', 'std']\ndf_err\n\ndf_err['95-50'] = df_err['std'] * std_2_95th\n# CO2 is already 95-50 percentile: \ndf_err.loc['CO2', '95-50'] = df_err.loc['CO2', 'std']\ndf_err\n```\n\n\n\n\n
`df_err`: uncertainty per species (W m$^{-2}$): standard deviation, standard error, and the corresponding 95th-50th percentile ranges:

| Species | std | SE | 95-50_SE | 95-50 |
|---|---|---|---|---|
| CO2 | 0.246907 | NaN | 0.246907 | 0.246907 |
| CH4 | 0.236538 | 0.083629 | 0.137569 | 0.389105 |
| N2O | 0.061736 | 0.027609 | 0.045417 | 0.101555 |
| HC | 0.116583 | 0.047595 | 0.078293 | 0.191779 |
| NOx | 0.170036 | 0.076043 | 0.125090 | 0.279710 |
| VOC | 0.136683 | 0.061127 | 0.100553 | 0.224844 |
| SO2 | 0.419710 | 0.171346 | 0.281864 | 0.690423 |
| OC | 0.139932 | 0.057127 | 0.093974 | 0.230188 |
| BC | 0.187990 | 0.071053 | 0.116883 | 0.309243 |
| NH3 | 0.004824 | 0.003411 | 0.005611 | 0.007936 |
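As a short check of the relations used to build this table (a sketch reusing `df_err`, `std_2_95th`, `thornhill` and `num_mod_lab` from the cell above, and assuming the Thornhill table is indexed by the same species labels, as that cell implies):

```python
# Verify the uncertainty relations for one species (CH4):
# the 95th-50th percentile range is 1.645 times the standard deviation,
# and the standard error is std / sqrt(number of models in Thornhill 2020).
print(df_err.loc['CH4', 'std'] * std_2_95th)   # ~0.389, the '95-50' column
print(df_err.loc['CH4', 'std'] / np.sqrt(thornhill.loc['CH4', num_mod_lab]))  # ~0.084, the 'SE' column
```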
\n\n\n\n### Uncertainty on the period-mean ERF is scaled from the uncertainty in 2019: \n\n\n\n```python\nERF_2019_tot = df_collins.sum(axis=1).reindex(df_err.index)\nERF_period_diff_tot = df_erf_sep.sum(axis=1).reindex(df_err.index)\n```\n\nScale the 2019 uncertainty by the ratio of the period-mean ERF difference to the original 1750-2019 ERF. \n\n\n```python\ndf_err['95-50_period'] = df_err['95-50'] * np.abs(ERF_period_diff_tot / ERF_2019_tot)\n```\n\n\n```python\ndf_err\n```\n\n\n\n\n
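In equation form, the scaling applied in the cell above is (the notation here is mine, chosen to match the code):

\begin{equation}
\sigma^{\mathrm{period}}_{95-50} = \sigma^{2019}_{95-50} \cdot \left| \frac{\Delta \mathrm{ERF}_{\mathrm{period}}}{\mathrm{ERF}_{2019}} \right|
\end{equation}

where $\sigma_{95-50}$ is the 95th-to-50th percentile half-range, $\Delta \mathrm{ERF}_{\mathrm{period}}$ is the 2010-2019 minus 1850-1900 ERF difference (`ERF_period_diff_tot`), and $\mathrm{ERF}_{2019}$ is the total 2019 emission-based ERF (`ERF_2019_tot`).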
\n\n\n\n\n```python\n# Check if there are any NaNs. \ncopied_df.isnull().values.any()\n```\n\n\n\n\n False\n\n\n\n\n```python\nbatch_size = 1\n\ntarget = copied_df.iloc[:, -3 * number:]\ndf = copied_df.iloc[:, :-3 * number]\n\n# target = df.pop(\"target\")\ndataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))\n\ntrain_dataset = dataset.cache().shuffle(len(df)).batch(batch_size)\ntrain_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)\n```\n\n\n```python\nDense = tf.keras.layers.Dense\n\ntf.keras.backend.clear_session()\n\nmodel = tf.keras.Sequential([\n Dense(3 * number, input_shape=(len(df.columns), ), activation=tf.nn.relu),\n Dense(1024, activation=tf.nn.relu),\n Dense(256, activation=tf.nn.relu),\n Dense(3 * number, activation=tf.nn.sigmoid)\n])\n\nmodel.compile(optimizer=tf.optimizers.Adam(learning_rate=0.1), loss=\"mean_squared_error\")\n\nmodel.fit(train_dataset, epochs=10,)\n```\n\n Epoch 1/10\n 2048/2048 [==============================] - 4s 2ms/step - loss: 1.4998\n Epoch 2/10\n 2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995\n Epoch 3/10\n 2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995\n Epoch 4/10\n 2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995\n Epoch 5/10\n 2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995\n Epoch 6/10\n 2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995\n Epoch 7/10\n 2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995\n Epoch 8/10\n 2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995\n Epoch 9/10\n 2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995\n Epoch 10/10\n 2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995\n\n\n\n\n\n \n\n\n\n\n```python\nvalue_this = np.rint(np.random.dirichlet(np.ones(16), size=1) * shots)\n# value_this = np.reshape([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, shots], (1, 16))\nnew_val = (value_this - train_mean[:-3 * number].values) / train_std[:-3 * number].values\n```\n\n\n```python\npred = model.predict(new_val)\npred = (pred * train_std[-3 * number:].values) + train_mean[-3 * number:].values\npred = np.squeeze(pred)\n```\n\n\n```python\nqc = QuantumCircuit(number, number)\n\nfor i in range(number):\n qc.rx(pred[0 * (i + 1)], i)\n qc.ry(pred[0 * (i + 2)], i)\n qc.rz(pred[0 * (i + 3)], i)\n\nqc.measure(range(number), range(number))\ncirc = transpile(qc, backend)\n\nqobj = assemble(circ, shots=shots)\n\n# Run and get counts\nresult = backend.run(qobj).result()\ncounts = result.get_counts()\nplot_histogram(counts)\n```\n\n\n```python\npred\n```\n\n\n\n\n array([2.49347505, 2.47280132, 2.51305636, 1.53896315, 2.53338271,\n 1.57226614, 2.45784733, 1.52983895, 1.5794965 , 1.55878848,\n 2.47334759, 1.55470769])\n\n\n\n\n```python\nnames = ['{0:04b}'.format(i) for i in range(2 ** number)]\npred_df = pd.DataFrame(value_this, columns = names)\n```\n\n\n```python\nplot_histogram(pred_df.to_dict(\"records\"))\n```\n\n\n```python\ndiff = 0\nvalue_this = np.squeeze(value_this)\nfor key, value in counts.items():\n diff += abs(value_this[int(key, 2)] - value)\n\nprint(diff)\n```\n\n 3548.0\n\n\nHowever as you can see, the result are not very beautiful. In fact, it's bad using this method of approximating results. 
\n\n\n```python\n\n```\n", "meta": {"hexsha": "93be7da1bf094ab2bbabcc8a311fb576375c8ed6", "size": 82642, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Supervised_method.ipynb", "max_stars_repo_name": "Wabinab/hackqthon_team27", "max_stars_repo_head_hexsha": "9beff2114be271b2d04b2674379972dd2ef8946c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Supervised_method.ipynb", "max_issues_repo_name": "Wabinab/hackqthon_team27", "max_issues_repo_head_hexsha": "9beff2114be271b2d04b2674379972dd2ef8946c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Supervised_method.ipynb", "max_forks_repo_name": "Wabinab/hackqthon_team27", "max_forks_repo_head_hexsha": "9beff2114be271b2d04b2674379972dd2ef8946c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 101.6506765068, "max_line_length": 24978, "alphanum_fraction": 0.7504658648, "converted": true, "num_tokens": 8335, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5156199157230156, "lm_q2_score": 0.5736784074525096, "lm_q1q2_score": 0.2958000121027768}}
{"text": "\n\n# Machine learning for material science: using CrystalFeatures to predict the bandgap\n\n*By: Sherif Abdulkader Tawfik Abbas, https://scholar.google.com/citations?user=NhT7vZIAAAAJ*, https://questionid.org/r/sherif_tawfik_abbas\n\nThis tutorials covers the basics of using machine learning for material science. I will walk you through a few Python scripts that will enable you to classify materials and predict their properties.\n\nThe tutorial targets Masters/PhD students and scientists who have some experience with crystal structure (just the basics, like lattice constants and atomic positions). I also expect you to have basic experience with programming (in any language; Python, C, C++, C#, Java, JavaScript, etc).\n\nTopics we will cover:\n- Part 1: The basics of machine learning for material science: Using Python to access and process crystal structures\n - Bird's eye view on machine learning for materials\n - Google Colab\n - The MaterialsProject database\n - The PyMatGen python library\n - Structure file formats\n - Querying structures using PyMatGen\n- Part 2: Doing machine learning: Predicting the band gap of materials\n - Descriptors\n - Building a simple descriptor vector for crystals\n - Building a data set\n - Machine learning\n\n#Part 1\n\n## Bird's eye view on machine learning for materials\n\n### What is machine learning used for here\nSo what to we use machine learning for in material science? The main purpose of machine learning here is the prediction of properties of a given material. This property can either be a class (the classification problem) or a quantity (the regression problem).\n\n### Example problems/papers\n\nExample problems that machine learning solve:\n- Is a given material metallic or semiconducting? An example paper: https://doi.org/10.1038/ncomms15679.\n- What is the specific heat capacity of a material? An example paper, one of mine: https://doi.org/10.1002/adts.201900208. I wrote a blog, that includes code, on this: http://www.sheriftawfikabbas.com/blog/ai/ml/machine-learning-to-predict-the-specific-heat-capacity-of-crystals/\n\n\n### The machine learning workflow\n\nSo how is this all done? Generally, we can think of machine learning as a 3-step process:\n- Step A: First, find numerical/categoricall **descriptors** that can describe your material. That is: every material in your dataset should be uniquely represented by an array of numbers/categories.\n- Step B: Then, apply your procedure to your entire dataset of structures to form a sheet of material descriptors vs. **target properties**.\n- Step C: Use machine learning to predict the target properties based on the descriptors.\n\nA schematic diagram for these steps is shown below.\n\n\n\nBefore we go deeper into the details, we will need to learn a few things today:\n- The programming language (python)\n- The database of materials, from which we will get our data and create the dataset\n\nLet's start!\n\n## Google Colab\n\nIf you wish to learn to program in python and you don't have python installed on your computer, or you don't wish to struggle with the headache of setting it up, then you can use the Google Colab. This is a website that was developed by Google and is publicly available for anyone to use. It is pretty much an online python compiler, where you can write python code and run it, all on your web browser.\n\nThis tutorial, as you can see, is already running on Google Colab (let's just call it Colab from now on). 
On Colab, you can create python **notebooks**, which are known as Jupyter notebooks. Jupyter is a programming environment for python that allows the programmer to write documents, just like this one, where there is both text and code. To demonstrate the placement of code here, check out the below section. This is some python code that prints out the text `\"Hello material science!\"`. Press the play button, and it will run that code, and actually print out `\"Hello material science!\"`.\n\n\n```\nprint(\"Hello material science! Happy you're here!\")\n```\n\n Hello material science! Happy you're here!\n\n\nNow, let's have a quick overview on python.\n\n## Basic python: variables, operations, if statement, for loop\n\nPython is quite a nice and easy language to learn and understand. In fact, it is one of the easiest languages to get you hit the road running. This is because you can virtually run your python code, or script, anywhere: on your laptop or even on your phone. This is thanks to the many online python servers that will allow you to run python code online. \n\nLet me give you an example of how simple it is to run python code.\n\n### The print statement\nLet's start with the simplest thing you can do in a program: to ask the computer to print out something, such as `\"Like coffee?\"`\n\nHow do we get the computer to print out this question on the screen? For that, we use the print statement. This statement orders the computer to print *something* to the screen. That something is called a *string*, which we will deal with shortly.\n\nLet's write that code, which will be our first ever line of python code in this course:\n\n\n```\nprint(\"Like coffee?\")\n```\n\n Like coffee?\n\n\nDo you notice something in th above line? It looks different from normal text on this page. It is actually an *executable* line of code! This means that, when you press the play button to the left of the print statement, the program wil actually **run**! That's without needing to install any software on any computer. This is the power of using Google' Colab: it lets you execute python code on the fly.\n\nSo what just happened? We wrote a print statement, in which we called the print function. It is a function because it receives something, the \"Like coffee?\" text, and then performs some task with the text.\n\nLet's try something else. Let's print two sentences:\n\n\n```\nprint(\"Like coffee?\",\"Yes, with milk please!\")\n```\n\n Like coffee? Yes, with milk please!\n\n\nYou notice what happened here? The print statement received two, not one, strings. They are separated by a comma. The print function then prints both sentences one after the other, leaving a space in between.\n\nNow let's try to understand how do we write strings like `\"Like coffee?\"`\n\n### Strings\n\nA string is any collection of characters, numbers, dots, symbols, or anything else you can access via the keyboard (or beyond!), enclosed between two quotes, or between two double quotes. So, for example, typing `\"a\"` is a valid string in python:\n\n\n```\n\"a\"\n```\n\n\n\n\n 'a'\n\n\n\nHere we just typed the string `\"a\"`, and python only repeated the value of that string, which again is `'a'`. Did you spot the difference?\n\nA string in python is composed of three things: an opening quotation mark, a set of characters, and a closing quotation mark. Python accepts two types of quotation marks: single, and double. 
So we can also type `'a'` up there, and we will get the same result.\n\nThe string is an example of a python data type: it is something that represents a value.\n\n### Python data types: strings, numbers and boolean values\n\nPython has a number of data types that enable you to perform various types of operations on data. Here we discuss the fundamental three data types of python: strings, which we just discussed above, numbers and logical, or boolean values.\n\nNumbers are just numbers! Type a number in the python interpreter, say `5`, and you will just get `5` back.\n\n### Variables\n\nA variable in python, as well as in programming language, enables you to store a value in memory. Let us create a variable that will store the string value \"I love coffee\" in memory.\n\n\n\n```\ns = \"I love coffee\"\nprint(s)\n```\n\n I love coffee\n\n\n\n\n\n### Arithmetic operations: `+`, `-`, `*`, `/`, `%`, `//`\n\nBack to operators. You can work with the standard maths operators in python, which are `+`, `-`, `*`, `/` and `%`. The first four are obvious, but `%` might need an introduction. The `//` operator is related to the `/`: it lets you get the quotient of the division, and therefore it is called the floor operator `//`. That is, it just removes the decimal part of the division result. For example, `9//4=2`.\n\n`%` is the modulus operator. It is related to the `//` operator in that: while `a//b` gives you the integer *quotient*, `%` gives the *remainder*. For example, in the division `9/4=2+1/4`, the quotient is `2` and the remainder is `1`. Then in python, `9%4=1`.\n\n### Comparison operations: `==`, `!=`, `<`, `>`, `<=`, `>=`\n\nNow that we know how to use operators on numbers to create numbers, the above operators create boolean values. This is because they ask questions about the variables they operate on.\n\n- `a == b` means: is `a` exactly equal to `b`?\n- `a != b` means: is `a` **not** equal to `b`?\n- `a < b` means: is `a` less than `b`?\n- `a > b` means: is `a` greater than `b`?\n- `a <= b` means: is `a` less than **or equal to** `b`?\n- `a >= b` means: is `a` greater than **or equal to** `b`?\n\nLet's evaluate an expression baesd on these operators:\n\n\n\n```\n#This code demonstrates operations\na = 3\nb = 5\nc = a == b\nprint(c)\nd = a > b\nprint(d)\ne = a <= b\nprint(e)\n```\n\n False\n False\n True\n\n\n\n### Lists\nA `list` in python is exactly what its name suggests, a list of things. Like a list of numbers, names, or even a mix of both. To create a list, we have to follow a simple syntax rule: enclose the things in the list between two *square brackets*, like those `[` and `]`, and separate between the list elements using commas, `,`. So for example, here is a list of numbers: `a = [4,6,7,1,0]`, a list of strings: `a = [\"a\",\"?\",\"neptune is a planet\"]`, a list of both: `a = [3,0,\"Where is my car?\"]`.\n\nWell, you can also create a list of lists in python! And you can *nest* as many lists as you want. Here is an example: `a = [[1,2],[3,4],[5,6]]`. This is a list of three elements, each element being itself a list of two elements.\n\n#### Accessing and changing list elements\n\nTo access a list element, we apply this syntax rule: find out what the *order* of that element is, and then access it using the square brackets. The order of an element in a list is an integer. The order of the first element is always `0`. 
Here is an example.\n\n\n\n\n```\na = [\"I\",\"want\",\"to\",\"order\",\"the\",4,\"dollars\",\"mocha\"]\nprint(a[0])\n```\n\n I\n\n\nThe string `'I'` is the first element of list `a`, and therefore its index is `0` and can be retrieved by typing `a[0]`.\n\nWe can change an element in a list by just assigning it a new value.\n\n\n\n\n```\na = [\"I\",\"want\",\"to\",\"order\",\"the\",4,\"dollars\",\"mocha\"]\na[5] = \"yes you can!\"\nprint(a)\n```\n\n ['I', 'want', 'to', 'order', 'the', 'yes you can!', 'dollars', 'mocha']\n\n\n\n\n#### Checking if a value exists in a list\n\nTo find if some value belongs to a given list, we use the `in` keyword in python. For example, given the list `a = [1,2,3]`, the boolean expression `2 in a` will return `True` because that's a correct statement.\n\n#### Adding elements to a list\n\nThere are three ways to add elements to a list:\n- By using the `+` operator e.g. `[1,2,3]+[4]` gives `[1,2,3,4]`.\n- By using the `append()` function e.g. `a = [1,2,3];a.append(4);print(a)` gives `[1,2,3,4]`.\n- By using the `insert()` function if you want to add an item at a specified index e.g. `a = [1,2,3];a.insert(1,4);print(a)` gives `[1, 4, 2, 3]`.\n\n#### Adding a list to a list\n\nLet's say you have two lists `a=[1,2]` and `b=[3,4]`, and you wish to create the list `[[1,2],[3,4]]`. Let's try to use `+` and `append()` and see if they will give us what we are after.\n\n### Tuples\n\nA tuple, like a list, is a collection of things, but the things are enclosed between `(` and `)`. But there is an even more important difference: once you group things in a tuple, you cannot change them. That is, a tuple is *immutable*.\n\nFor example, let's create a tuple and attempt to change the value of one of the elements.\n\n\n\n\n```\na = (3,4,5)\n#a[0] = 2\n#The above line gives an error!\n```\n\n### Dictionaries\n\nWe learned with lists and tuples that the elements are indexed. The index is an integer that starts from `0`. A dictionary extends the indexing concept: a dictionary is a collection of indexed objects, where the indices themselves can be anything *immutable*: numbers, floats, strings and tuples (and frozensets, but we won't discuss that one today).\n\nThe syntax for creating a dictionary is as follows: `{key:value}`, where `key` is the index, `value` is any data type. For example,\n\n\n```\na = {'apple':3.5, 'pear': 2.5, 'banana':4}\nprint(a['apple'])\nb = {'a':\"lists\",'b':\"tuples\",'c':\"sets\",'d':\"dictionaries\"}\nprint(b['c'])\n```\n\n 3.5\n sets\n\n\n### The conditional statement\n\nSo far, we have been dealing with simple python statements. Each statement could be written in a single line of code, and they instructed the computer to perform a single task. For example, `a = 4` instructs the computer to put `4` into `a` and that's it.\n\nSerious programming starts when we let the computer make decisions after it tests certain conditions. Instead of just printing a name, how about we get the computer to print the name only **if** it starts with letter `A`?\n\nI just said **if**, which means: some condition should be tested before the print statement is executed. Now let me introduce the `if` statement in python. This statement has the following syntax:\n\n```\nif boolean_expression:\n some_statements\n```\n\nThe `boolean_expression` evaluates to either True or False. The `if` statement will only execute the statements if `boolean_expression` evaluates to `True`. 
Otherwise, these statements will not be executed.\n\nSo, to solve the above problem, here is the code:\n\n\n\n```\ns = 'Ahmed'\nif s[0] == 'A':\n print(s)\n```\n\n Ahmed\n\n\n#### The `elif` clause\n\nSometimes the condition we are testing might evaluate to more than two possible outcomes. For a simple demonstration: I will decide to put on a jacket if it's very cold outside. But if it's fair, maybe just a jumper. Otherwise, a t-shirt. So here we have three possible outcomes for the condition testing. The testing will check the temperature, and decide accordingly.\n\n\n```\ntemperature = 15\nif temperature < 15:\n print('I am wearing a jacket')\nelif temperature < 20:\n print('I am wearing a jumper')\nelse:\n print('I am wearing a t-shirt')\n```\n\n I am wearing a jumper\n\n\n### The loop statements\n\nPython has two loop statements: the `for` and the `while` loop statements. The loop is a very important programming construct. It enables you to repetitively run a block of statements as long as a given condition is correct. Loops let us start write complex code that can solve complex problems; it is actually the starting point for doing serious programming!\n\n#### The `for` loop\n\nThe syntax of the `for` loop is:\n\n```\nfor x in collection:\n statement1\n statement2\n ...\n statementN\n```\n\nHere `collection` could be any of the four collection types in python that we covered in Class 3. Note the `in` operator here.\n\nIn the `for` statement, `x` is called the *index* of the loop.\n\nFor example, the following loop will print out the elements from a list:\n\n\n\n```\nfor k in [1,2,3]:\n print(k)\n```\n\n 1\n 2\n 3\n\n\n\n#### The `while` loop\n\nThe syntax of the `while` loop is:\n\n```\nwhile x:\n statement1\n statement2\n ...\n statementN\n```\n\nThe `while` loop keeps running the statement block as long as `x` is true. So `x` here is a boolean expression. For example:\n\n\n\n```\na = 10\nwhile a > 0:\n print(a)\n a-=1\n```\n\n 10\n 9\n 8\n 7\n 6\n 5\n 4\n 3\n 2\n 1\n\n\n### Python libraries\n\nOne of the most powerful features of python is its libraries. A library is a python script that was written by someone, and that can perform a set of tasks. You can make use of a python library by just using the `import` command. For example, when you want to calculate the logarithm, the `log()` function you would look for exists in the `numpy` library.\n\n\n\n```\nimport numpy as np\nprint(np.log(11))\n```\n\n 2.3978952727983707\n\n\n\n## The MaterialsProject database\n\nThe MaterialsProject (MP) database is a massive amount of material science data that was generated using density functional theory (DFT). Have a look at the database here: https://materialsproject.org. Check the statistics at the bottom of the page. There are 124,000 inorganic crystals in the database, along with the DFT-calculated properties of these materials. There are also 530,000 nanoporous materials, as well as other stuff. It's a huge amount of material data.\n\n### Signing up and the API key\n\nYou will need to **sign up** in materialsproject.org to be able to access the database. Signing up there is free.\n\nOnce you sign up, you can obtain an **API key** that will enable you to access the database using python. Will discuss this further shortly.\n\n### A look at MaterialsProject\n\nLet's have a look at the 124,000 inorganic crystals data. Each one of these crystals is a 3D crystal that includes: a number of elements, arranged in a lattice structure. 
Check, for example, the MP page for diamond: https://materialsproject.org/materials/mp-66/.\n\n\n\nNote that each material on MP is identified by an ID that goes like `mp-X` where `X` is a number. The ID of diamond is `mp-66`. People use these identifiers when referring to MP materials in papers, and we will use them soon when we start querying materials from MP using python.\n\nThere you will find the crystal structure, the lattice parameters, the basic properties (in a column to the right of the figure that displays the crystal), and then a range of DFT-calculated properties.\n\n### The DFT properties\n\nThese are quantities that are calculated for each crystal in MP. In fact, every thing you see on the MP page for diamond was calculated using DFT. \n\n- For a given elemental composition, the lattice parameters and the positions of the atoms within the lattice are all obtained using DFT.\n- For the obtained crystal structure, the `Final Magnetic Moment`, `Formation Energy / Atom`, `Energy Above Hull / Atom`, `Band Gap` are calculated. The `Density` is derived from the obtained crystal structure.\n- Further DFT calculations are performed to obtain the band structure as well as other properties that you can find as you scroll down the structure page on MP.\n\nSome of the crystals on MP correspond to crystals that exist in nature, and some are purely hypothetical. The hypothetical crystals have been generated by some algorithm that uses artificial intelligence, or probably by simple elemental substitution.\n\n### Why is MP great for machine learning?\n\nBecause of the huge amount of materials (dataset) and DFT-calculated properties (target properties). That much data can be utilised using a range of machine learning methods to make predictions with various levels of accuracy.\n\n## The PyMatGen python library\n\nTo be able to query the MP database, the MP team provided the community with a python library that you can install on your computer (or in Colab as I will show you).\n\nRemember: to be able to run the codes in this section, you must obtain an API key from the MP website: https://materialsproject.org/docs/api#API_keys.\n\nThe first thing we do here is to install PyMatGen in Colab.\n\n\n\n```\n!pip3 install pymatgen\n```\n\n Requirement already satisfied: pymatgen in /usr/local/lib/python3.7/dist-packages (2022.0.8)\n Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from pymatgen) (2.23.0)\n Requirement already satisfied: matplotlib>=1.5 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (3.2.2)\n Requirement already satisfied: spglib>=1.9.9.44 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (1.16.1)\n Requirement already satisfied: ruamel.yaml>=0.15.6 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (0.17.7)\n Requirement already satisfied: palettable>=3.1.1 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (3.3.0)\n Requirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from pymatgen) (0.8.9)\n Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from pymatgen) (1.1.5)\n Requirement already satisfied: networkx>=2.2 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (2.5.1)\n Requirement already satisfied: typing-extensions>=3.7.4.3; python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from pymatgen) (3.7.4.3)\n Requirement already satisfied: numpy>=1.20.1 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (1.20.3)\n Requirement 
already satisfied: scipy>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (1.6.3)\n Requirement already satisfied: uncertainties>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (3.1.5)\n Requirement already satisfied: monty>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (2021.5.9)\n Requirement already satisfied: plotly>=4.5.0 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (4.14.3)\n Requirement already satisfied: sympy in /usr/local/lib/python3.7/dist-packages (from pymatgen) (1.7.1)\n Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen) (1.24.3)\n Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen) (2.10)\n Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen) (3.0.4)\n Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen) (2020.12.5)\n Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen) (1.3.1)\n Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen) (0.10.0)\n Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen) (2.8.1)\n Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen) (2.4.7)\n Requirement already satisfied: ruamel.yaml.clib>=0.1.2; platform_python_implementation == \"CPython\" and python_version < \"3.10\" in /usr/local/lib/python3.7/dist-packages (from ruamel.yaml>=0.15.6->pymatgen) (0.2.2)\n Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->pymatgen) (2018.9)\n Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.2->pymatgen) (4.4.2)\n Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from uncertainties>=3.1.4->pymatgen) (0.16.0)\n Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from plotly>=4.5.0->pymatgen) (1.15.0)\n Requirement already satisfied: retrying>=1.3.3 in /usr/local/lib/python3.7/dist-packages (from plotly>=4.5.0->pymatgen) (1.3.3)\n Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.7/dist-packages (from sympy->pymatgen) (1.2.1)\n\n\nThis will download PyMatGen into your Colab environment. Now, we are going to use PyMatGen to do two things: open a CIF crystal file to view its content, and query MP for crystals that satisfy certain properties.\n\nBy the way, I used the `pip3` command to install PyMatGen. On your computer, if you have python installed, you can install PyMatGen by typing the same command without the `!`:\n\n```\npip3 install pymatgen\n```\n\n## Structure file formats\n\nOne of the most common file formats that describe crystal structures is the CIF format (Crystallographic Information File). The official definition of this format is here: https://www.iucr.org/resources/cif.\n\n\nBut we are not going to learn the details of the format. We will just learn how to open a CIF with python. 
Here is how we can do this.\n\n\n\n```\nfrom pymatgen.io.cif import CifParser\nfrom urllib.request import urlopen\n\nrequest = urlopen(\"https://raw.githubusercontent.com/sheriftawfikabbas/crystalfeatures/master/Li10Ge(PS6)2_mp-696128_conventional_standard.cif\")\ncifFile = request.read().decode('utf-8')\nparser = CifParser.from_string(cifFile)\n```\n\nIn the above code, we imported a **class** from the PyMatGen library: the `CifParser` class. It allows us to create a new CIF file **object**. This object will then represent the CIF structure, and can be used to access its information.\n\nNext, let's extract some information from the `CifParser` object.\n\n\n\n```\n\nstructure = parser.get_structures()\n# Returns a list of Structure objects\n# #http://pymatgen.org/_modules/pymatgen/core/structure.html\n# Let's print the first (and only) Structure object\nprint(structure[0])\n\n```\n\n Full Formula (Li20 Ge2 P4 S24)\n Reduced Formula: Li10Ge(PS6)2\n abc : 8.787600 8.787600 12.657500\n angles: 90.000000 90.000000 90.000000\n Sites (50)\n # SP a b c\n --- ---- ------ ------ ------\n 0 Li 0.2287 0.273 0.2946\n 1 Li 0.7713 0.727 0.2946\n 2 Li 0.273 0.7713 0.7946\n 3 Li 0.727 0.2287 0.7946\n 4 Li 0.2287 0.727 0.2946\n 5 Li 0.7713 0.273 0.2946\n 6 Li 0.273 0.2287 0.7946\n 7 Li 0.727 0.7713 0.7946\n 8 Li 0 0 0.9397\n 9 Li 0 0 0.4397\n 10 Li 0.5 0.5 0.548\n 11 Li 0.5 0.5 0.048\n 12 Li 0.2563 0.7248 0.0367\n 13 Li 0.7437 0.2752 0.0367\n 14 Li 0.2752 0.2563 0.5367\n 15 Li 0.7248 0.7437 0.5367\n 16 Li 0.2752 0.7437 0.5367\n 17 Li 0.7248 0.2563 0.5367\n 18 Li 0.2563 0.2752 0.0367\n 19 Li 0.7437 0.7248 0.0367\n 20 Ge 0.5 0.5 0.801\n 21 Ge 0.5 0.5 0.301\n 22 P 0 0 0.6861\n 23 P 0 0 0.1861\n 24 P 0 0.5 0.5041\n 25 P 0.5 0 0.0041\n 26 S 0 0.6944 0.4121\n 27 S 0 0.3056 0.4121\n 28 S 0.3056 0 0.9121\n 29 S 0.6944 0 0.9121\n 30 S 0.5 0.1898 0.0971\n 31 S 0.5 0.8102 0.0971\n 32 S 0.1898 0.5 0.5971\n 33 S 0.8102 0.5 0.5971\n 34 S 0 0.8047 0.0941\n 35 S 0 0.1953 0.0941\n 36 S 0.1953 0 0.5941\n 37 S 0.8047 0 0.5941\n 38 S 0.5 0.29 0.4032\n 39 S 0.5 0.71 0.4032\n 40 S 0.29 0.5 0.9032\n 41 S 0.71 0.5 0.9032\n 42 S 0 0.192 0.7771\n 43 S 0 0.8081 0.7771\n 44 S 0.8081 0 0.2771\n 45 S 0.192 0 0.2771\n 46 S 0.5 0.7074 0.6982\n 47 S 0.5 0.2926 0.6982\n 48 S 0.7074 0.5 0.1982\n 49 S 0.2926 0.5 0.1982\n\n\n /usr/local/lib/python3.7/dist-packages/pymatgen/io/cif.py:709: UserWarning: No _symmetry_equiv_pos_as_xyz type key found. Spacegroup from _symmetry_space_group_name_H-M used.\n warnings.warn(msg)\n /usr/local/lib/python3.7/dist-packages/pymatgen/io/cif.py:1121: UserWarning: Issues encountered while parsing CIF: No _symmetry_equiv_pos_as_xyz type key found. 
Spacegroup from _symmetry_space_group_name_H-M used.\n warnings.warn(\"Issues encountered while parsing CIF: %s\" % \"\\n\".join(self.warnings))\n\n\nHere we have the details of the CIF structure in a human-readable format, which include the formula, the lattice parameters and the positions of the atoms in the crystal.\n\n\n\n```\n\nstructure = structure[0]\n\nprint(structure.lattice)\nprint(structure.species)\nprint(structure.charge)\nprint(structure.cart_coords)\nprint(structure.atomic_numbers)\nprint(structure.distance_matrix)\n```\n\n 8.787600 0.000000 0.000000\n 0.000000 8.787600 0.000000\n 0.000000 0.000000 12.657500\n [Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Ge, Element Ge, Element P, Element P, Element P, Element P, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S]\n 0\n [[2.00972412e+00 2.39901480e+00 3.72889950e+00]\n [6.77787588e+00 6.38858520e+00 3.72889950e+00]\n [2.39901480e+00 6.77787588e+00 1.00576495e+01]\n [6.38858520e+00 2.00972412e+00 1.00576495e+01]\n [2.00972412e+00 6.38858520e+00 3.72889950e+00]\n [6.77787588e+00 2.39901480e+00 3.72889950e+00]\n [2.39901480e+00 2.00972412e+00 1.00576495e+01]\n [6.38858520e+00 6.77787588e+00 1.00576495e+01]\n [0.00000000e+00 0.00000000e+00 1.18942527e+01]\n [0.00000000e+00 0.00000000e+00 5.56550275e+00]\n [4.39380000e+00 4.39380000e+00 6.93631000e+00]\n [4.39380000e+00 4.39380000e+00 6.07560000e-01]\n [2.25226188e+00 6.36925248e+00 4.64530250e-01]\n [6.53533812e+00 2.41834752e+00 4.64530250e-01]\n [2.41834752e+00 2.25226188e+00 6.79328025e+00]\n [6.36925248e+00 6.53533812e+00 6.79328025e+00]\n [2.41834752e+00 6.53533812e+00 6.79328025e+00]\n [6.36925248e+00 2.25226188e+00 6.79328025e+00]\n [2.25226188e+00 2.41834752e+00 4.64530250e-01]\n [6.53533812e+00 6.36925248e+00 4.64530250e-01]\n [4.39380000e+00 4.39380000e+00 1.01386575e+01]\n [4.39380000e+00 4.39380000e+00 3.80990750e+00]\n [0.00000000e+00 0.00000000e+00 8.68431075e+00]\n [0.00000000e+00 0.00000000e+00 2.35556075e+00]\n [7.06576930e-16 4.39380000e+00 6.38064575e+00]\n [4.39380000e+00 0.00000000e+00 5.18957500e-02]\n [9.81294040e-16 6.10210944e+00 5.21615575e+00]\n [4.31859820e-16 2.68549056e+00 5.21615575e+00]\n [2.68549056e+00 0.00000000e+00 1.15449058e+01]\n [6.10210944e+00 0.00000000e+00 1.15449058e+01]\n [4.39380000e+00 1.66788648e+00 1.22904325e+00]\n [4.39380000e+00 7.11971352e+00 1.22904325e+00]\n [1.66788648e+00 4.39380000e+00 7.55779325e+00]\n [7.11971352e+00 4.39380000e+00 7.55779325e+00]\n [1.13716491e-15 7.07138172e+00 1.19107075e+00]\n [2.75988949e-16 1.71621828e+00 1.19107075e+00]\n [1.71621828e+00 0.00000000e+00 7.51982075e+00]\n [7.07138172e+00 0.00000000e+00 7.51982075e+00]\n [4.39380000e+00 2.54840400e+00 5.10350400e+00]\n [4.39380000e+00 6.23919600e+00 5.10350400e+00]\n [2.54840400e+00 4.39380000e+00 1.14322540e+01]\n [6.23919600e+00 4.39380000e+00 1.14322540e+01]\n [2.71325541e-16 1.68721920e+00 9.83614325e+00]\n [1.14196963e-15 7.10125956e+00 9.83614325e+00]\n [7.10125956e+00 0.00000000e+00 3.50739325e+00]\n [1.68721920e+00 0.00000000e+00 3.50739325e+00]\n [4.39380000e+00 6.21634824e+00 8.83746650e+00]\n [4.39380000e+00 2.57125176e+00 
8.83746650e+00]\n [6.21634824e+00 4.39380000e+00 2.50871650e+00]\n [2.57125176e+00 4.39380000e+00 2.50871650e+00]]\n (3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 32, 32, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16)\n [[0. 5.6632708 7.70578018 ... 5.64011881 4.81286828 2.40485506]\n [5.6632708 0. 7.70578018 ... 6.80832646 2.40485506 4.81286828]\n [7.70578018 7.70578018 0. ... 4.81286828 6.80832646 5.64011881]\n ...\n [5.64011881 6.80832646 4.81286828 ... 0. 6.8334794 6.8334794 ]\n [4.81286828 2.40485506 6.80832646 ... 6.8334794 0. 3.64509648]\n [2.40485506 4.81286828 5.64011881 ... 6.8334794 3.64509648 0. ]]\n\n\n## Querying structures using PyMatGen\n\nNow let's use PyMatGen to query structures from MP. To be able to do that, we need first to create a `MPRester` with the API key that we receive from MP.\n\n\n```\nfrom pymatgen.ext.matproj import MPRester\nfrom pymatgen.ext.matproj import MPRestError\n\nm = MPRester(\"Ua7LfrKkn9yTWA3t\")\n```\n\n**Note: I am hiding my API key here. I am not allowed to share it, sorry!**\n\nThen we can use the object variable, `m`, to access MP. For today, I willl just show you a simplle query: getting MP IDs for all materials with bandgap larger than 1:\n\n\n```\nresults=m.query({\"band_gap\": {\"$gt\": 6}},properties=[\"material_id\"])\n```\n\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 918/918 [00:00<00:00, 2412.54it/s]\n\n\nYou can notice that this operation takes time: there are 918 such materials.\n\nFinally, let's have a look at the MP IDs we got. This code shows the total count, and the first 10 MP IDs.\n\n\n```\nprint(len(results))\nresults[0:10]\n```\n\n 918\n\n\n\n\n\n [{'material_id': 'mp-642824'},\n {'material_id': 'mp-1210031'},\n {'material_id': 'mp-1196163'},\n {'material_id': 'mp-1206658'},\n {'material_id': 'mp-1208862'},\n {'material_id': 'mp-1199215'},\n {'material_id': 'mp-1196584'},\n {'material_id': 'mp-1205912'},\n {'material_id': 'mp-1106386'},\n {'material_id': 'mp-28289'}]\n\n\n\n\nOne last thing: let's query a crystal from MP using its MP ID.\n\n\n\n\n```\nresults=m.query({\"material_id\": 'mp-1207450'},properties=[\"cif\"])\ncifFile = results[0]['cif']\nparser = CifParser.from_string(cifFile)\n\nstructure = parser.get_structures()\nprint(structure[0])\n```\n\n Full Formula (Zn18 Fe8)\n Reduced Formula: Zn9Fe4\n abc : 7.773250 7.773250 7.773250\n angles: 109.471221 109.471221 109.471221\n Sites (26)\n # SP a b c\n --- ---- -------- -------- --------\n 0 Zn 0.356237 0 0.751283\n 1 Zn 0.751283 0 0.356237\n 2 Zn 0.248717 0.604954 0.248717\n 3 Zn 0.604954 0.248717 0.248717\n 4 Zn 0.643763 0.395046 0.643763\n 5 Zn 0.395046 0.643763 0.643763\n 6 Zn 1 0.751283 0.356237\n 7 Zn 0 0.356237 0.751283\n 8 Zn 0.356237 0.751283 0\n 9 Zn 0.643763 0.643763 0.395046\n 10 Zn 0.248717 0.248717 0.604954\n 11 Zn 0.751283 0.356237 0\n 12 Zn 0 0.643607 0.643607\n 13 Zn 0 0.356393 0.356393\n 14 Zn 0.356393 0.356393 0\n 15 Zn 0.643607 0.643607 0\n 16 Zn 0.356393 0 0.356393\n 17 Zn 0.643607 0 0.643607\n 18 Fe 0.206521 0 0\n 19 Fe 0 1 0.206521\n 20 Fe 1 0.206521 1\n 21 Fe 0.793479 0.793479 0.793479\n 22 Fe 0.683339 0 0\n 23 Fe 0 0 0.683339\n 24 Fe 1 0.683339 1\n 25 Fe 0.316661 0.316661 0.316661\n\n\n#Part 2\n\n## Descriptors\n\nBefore we start ML, let\u2019s address a very important question that lies at the centre of the field of ML-driven material discovery: **how do we apply ML to predict crystal properties?**\n\nFirst, **what are crystal 
properties?** These are quantities that are measured or calculated for crystals. By crystal, I mean a structure that is endowed with **periodicity**. Let\u2019s take the example of **diamond**. This material has just one atom, carbon (C). If you could zoom-in very close into diamond such that you can see the C atoms (people sort of do that using advanced experimental techniques by the way), you will see something similar to the following figure.\n\n\n\nThe figure illustrates how a diamond crystal is created from the **diamond unit cell**, which is the little molecule on the left with 4 C atoms. The crystal has **infinitely many diamond unit cells* along the three directions, so it is a 3D pattern.\n\nBefore we discuss crystals further, consider molecules for a minute. Molecules are **fundamentally different** from crystals because of the pattern (periodicity) bit: a molecule is just that one single molecule, sitting on its own, in isolation, whereas a crystal is really composed of an infinite number of molecules. How would the pattern in the crystal make ML for molecules different from ML for crystals?\n\nTo predict the properties of the molecules one can derive a set of descriptors for the molecules in the data set that are based on the **positions of the atoms within the molecule**. One can derive descriptors based on the relative positions, in order to ensure that the descriptors are **invariant to transformations**: rotation and translation.\n\nThe key thing here in the molecular descriptors is that they are **based on the atomic positions**. In crystals, **we can't really use atomic positions** like we did with molecules to obtain descriptors. Why?\n\nThink about the diamond crystal pattern above. Because it is a pattern, it is **symmetric in all three directions**. Now let's say we are going to calculate the Coulomb matrix for diamond, so we start with the positions of the atoms in the unit cell of diamond, that figure on the left. But wait: even though these four atoms do form a valid unit cell for diamond, we can also come up with another valid unit cell, as shown below.\n\n\n\nThe figure above is also valid. How to obtain it? Look at the diamond crystal in Figure 1, and take a different repeating unit from it. As long as this repeating unit can also form the same pattern, it is a valid unit cell!\n\nSo, there are many different ways we can represent the unit cell of a crystal. Therefore, we cannot use the atomic coordinates to derive descriptors for crystals, otherwise the derived descriptors, such as the eigenvalues of the Coulomb matrix in the case of molecules, will **change dramatically for the same crystal**. That is, **the descriptor vector is not invariant with respect to translation of the unit cell**. What do we do then?\n\n## Building a simple descriptor vector for crystals\n\nA possible solution to this problem is to use some statistics of atomic properties as the descriptor vector. For example:\n\n- Average of the atomic numbers of all the elements in the crystal. For example, in silicon carbide, SiC, the average value would be the average of 14 (for Si) and 6 (for C), which is (14 + 6)/2 = 10. So that's now one number in the descriptor vector.\n- The average of the ionization potential of the atoms\n- The average of the electron affinity of the atoms\n- And more averages\n\nSo we can keep adding averages of properties to this list, to expand the descriptor vector. 
This vector will not suffer from the lack of invariance issue pointed out above, because these are average values of quantities that do not depend on the geometry of the crystal.\n\nAverage is just one statistic. We can also add other statistics, such as the standard deviation and the variance. Adding those will triple the number of elements in the descriptor vector above.\n\n\n\n\n```\nimport numpy as np\nstructure = structure[0]\nmean_atomic_number=np.mean(structure.atomic_numbers)\nmax_atomic_number=np.max(structure.atomic_numbers)\nmin_atomic_number=np.min(structure.atomic_numbers)\nstd_atomic_number=np.std(structure.atomic_numbers)\n\nprint(mean_atomic_number,max_atomic_number,min_atomic_number,std_atomic_number)\n```\n\n 11.36 32 3 7.509354166637767\n\n\nHowever, there is a **problem**. A lot of materials exist in **various phases**. That is, for the same atomic composition, let's say SiC, there are several possible structures. Right now, there are 27 possible structures for SiC on MaterialsProject.org.\n\nSo, the above descriptors won't work. For example, for the case of SiC, all of the 27 SiC phases in MP will have the same values for the statistics above.\n\nTo solve this problem, we have to add descriptors **based on the geometrical arrangement of atoms**. A simple such descriptor is to average the bond lengths (a bond is formed between two atoms).\n\n\n```\nmean_distance_matrix = np.mean(structure.distance_matrix)\nmax_distance_matrix = np.max(structure.distance_matrix)\nmin_distance_matrix = np.min(structure.distance_matrix)\nstd_distance_matrix = np.std(structure.distance_matrix)\n\nprint(mean_distance_matrix, max_distance_matrix,\n min_distance_matrix, std_distance_matrix)\n```\n\n 4.959684299990455 8.869274685254709 0.0 1.6490128596408165\n\n\n## Building a data set\n\nNow it's time to build our dataset, before we can do machine learning on it. We will do this in two steps:\n\n- Step 1: collecting the structures\n- Step 2: pre-processing the data\n\n### Step 1: Collecting the structures\n\nWe want to predict the bandgaps of structures, so we need to collect the structures (dataset) along with their corresponding bandgaps (target vector).\n\nFor this exercise, let's focus on stoichiometric perovskites: these are materials of the form ABC3. 
The followiing query will collect the CIFs and bandgaps for these materials from MP.\n\n\n\n```\nresults = m.query({\"formula_anonymous\": \"ABC3\"}, properties=[\"cif\", \"band_gap\"])\n\n```\n\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4608/4608 [00:02<00:00, 1654.43it/s]\n\n\n### Step 2: Pre-processing the data\n\nHere we will extract the data we need from the structures, put them in a pandas DataFrame and then apply normalization.\n\n\n```\n\nfrom pymatgen.io.cif import CifParser\nfrom urllib.request import urlopen\nimport pandas as pd\nfrom pymatgen.ext.matproj import MPRester\nfrom pymatgen.ext.matproj import MPRestError\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\ndef descriptors(cif):\n\n atomic_numbers = []\n\n distance_matrix = []\n van_der_waals_radius = []\n electrical_resistivity = []\n velocity_of_sound = []\n reflectivity = []\n poissons_ratio = []\n molar_volume = []\n thermal_conductivity = []\n melting_point = []\n critical_temperature = []\n superconduction_temperature = []\n liquid_range = []\n bulk_modulus = []\n youngs_modulus = []\n brinell_hardness = []\n rigidity_modulus = []\n # mineral_hardness = []\n vickers_hardness = []\n density_of_solid = []\n coefficient_of_linear_thermal_expansion = []\n average_ionic_radius = []\n average_cationic_radius = []\n average_anionic_radius = []\n\n parser = CifParser.from_string(cif)\n\n structure = parser.get_structures()\n structure = structure[0]\n\n numElements = len(structure.atomic_numbers)\n\n num_metals = 0\n for e in structure.species:\n if e.Z in range(3, 4+1) or e.Z in range(11, 12+1) or e.Z in range(19, 30+1) or e.Z in range(37, 48+1) or e.Z in range(55, 80 + 1) or e.Z in range(87, 112+1):\n num_metals += 1\n metals_fraction = num_metals/numElements\n\n spg = structure.get_space_group_info()\n\n spacegroup_numbers = {}\n for i in range(1, 231):\n spacegroup_numbers[i] = 0\n\n spacegroup_numbers[spg[1]] = 1\n\n spacegroup_numbers_list = []\n for i in range(1, 231):\n spacegroup_numbers_list += [spacegroup_numbers[i]]\n\n atomic_numbers = [np.mean(structure.atomic_numbers), np.max(structure.atomic_numbers), np.min(\n structure.atomic_numbers), np.std(structure.atomic_numbers)]\n\n # Lattice parameters:\n a_parameters = structure.lattice.abc[0]\n b_parameters = structure.lattice.abc[1]\n c_parameters = structure.lattice.abc[2]\n alpha_parameters = structure.lattice.angles[0]\n beta_parameters = structure.lattice.angles[1]\n gamma_parameters = structure.lattice.angles[2]\n\n distance_matrix += [np.mean(structure.distance_matrix), np.max(structure.distance_matrix),\n np.min(structure.distance_matrix), np.std(structure.distance_matrix)]\n\n e1, e2, e3, e4, e5, e6, e7, e8, e9, e10, e11, e12, e13, e14, e15, e16, e17, e18, e19, e20, e21, e22, e23 = [\n ], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []\n for e in structure.species:\n e1 += [e.van_der_waals_radius]\n e2 += [e.electrical_resistivity]\n e3 += [e.velocity_of_sound]\n e4 += [e.reflectivity]\n e6 += [e.poissons_ratio]\n e7 += [e.molar_volume]\n e8 += [e.thermal_conductivity]\n e9 += [e.melting_point]\n e10 += [e.critical_temperature]\n e11 += [e.superconduction_temperature]\n e12 += [e.liquid_range]\n e13 += [e.bulk_modulus]\n e14 += [e.youngs_modulus]\n e15 += [e.brinell_hardness]\n e16 += [e.rigidity_modulus]\n # e17 +=[e.mineral_hardness ]\n e18 += [e.vickers_hardness]\n e19 += [e.density_of_solid]\n e20 += [e.coefficient_of_linear_thermal_expansion]\n e21 += 
[e.average_ionic_radius]\n e22 += [e.average_cationic_radius]\n e23 += [e.average_anionic_radius]\n\n e1 = [0 if v is None else v for v in e1]\n e2 = [0 if v is None else v for v in e2]\n e3 = [0 if v is None else v for v in e3]\n e4 = [0 if v is None else v for v in e4]\n # e5=[0 if v is None else v for v in e5]\n e6 = [0 if v is None else v for v in e6]\n e7 = [0 if v is None else v for v in e7]\n e8 = [0 if v is None else v for v in e8]\n e9 = [0 if v is None else v for v in e9]\n e10 = [0 if v is None else v for v in e10]\n e11 = [0 if v is None else v for v in e11]\n e12 = [0 if v is None else v for v in e12]\n e13 = [0 if v is None else v for v in e13]\n e14 = [0 if v is None else v for v in e14]\n e15 = [0 if v is None else v for v in e15]\n e16 = [0 if v is None else v for v in e16]\n # e17=[0 if v is None else v for v in e17]\n e18 = [0 if v is None else v for v in e18]\n e19 = [0 if v is None else v for v in e19]\n e20 = [0 if v is None else v for v in e20]\n e21 = [0 if v is None else v for v in e21]\n e22 = [0 if v is None else v for v in e22]\n e23 = [0 if v is None else v for v in e23]\n\n van_der_waals_radius = [np.mean(e1), np.max(e1), np.min(e1), np.std(e1)]\n electrical_resistivity = [np.mean(e2), np.max(e2), np.min(e2), np.std(e2)]\n velocity_of_sound = [np.mean(e3), np.max(e3), np.min(e3), np.std(e3)]\n reflectivity = [np.mean(e4), np.max(e4), np.min(e4), np.std(e4)]\n poissons_ratio = [np.mean(e6), np.max(e6), np.min(e6), np.std(e6)]\n molar_volume = [np.mean(e7), np.max(e7), np.min(e7), np.std(e7)]\n thermal_conductivity = [np.mean(e8), np.max(e8), np.min(e8), np.std(e8)]\n melting_point = [np.mean(e9), np.max(e9), np.min(e9), np.std(e9)]\n critical_temperature = [np.mean(e10), np.max(\n e10), np.min(e10), np.std(e10)]\n superconduction_temperature = [\n np.mean(e11), np.max(e11), np.min(e11), np.std(e11)]\n liquid_range = [np.mean(e12), np.max(e12), np.min(e12), np.std(e12)]\n bulk_modulus = [np.mean(e13), np.max(e13), np.min(e13), np.std(e13)]\n youngs_modulus = [np.mean(e14), np.max(e14), np.min(e14), np.std(e14)]\n brinell_hardness = [np.mean(e15), np.max(e15), np.min(e15), np.std(e15)]\n rigidity_modulus = [np.mean(e16), np.max(e16), np.min(e16), np.std(e16)]\n vickers_hardness = [np.mean(e18), np.max(e18), np.min(e18), np.std(e18)]\n density_of_solid = [np.mean(e19), np.max(e19), np.min(e19), np.std(e19)]\n coefficient_of_linear_thermal_expansion = [\n np.mean(e20), np.max(e20), np.min(e20), np.std(e20)]\n average_ionic_radius = [np.mean(e21), np.max(\n e21), np.min(e21), np.std(e21)]\n average_cationic_radius = [\n np.mean(e22), np.max(e22), np.min(e22), np.std(e22)]\n average_anionic_radius = [\n np.mean(e23), np.max(e23), np.min(e23), np.std(e23)]\n\n V = a_parameters*b_parameters*c_parameters\n Density = V / numElements\n\n descriptors_list = atomic_numbers +\\\n [Density] +\\\n [alpha_parameters] +\\\n [beta_parameters] +\\\n [gamma_parameters] +\\\n [metals_fraction] +\\\n distance_matrix +\\\n van_der_waals_radius +\\\n electrical_resistivity +\\\n velocity_of_sound +\\\n reflectivity +\\\n poissons_ratio +\\\n molar_volume +\\\n thermal_conductivity +\\\n melting_point +\\\n critical_temperature +\\\n superconduction_temperature +\\\n liquid_range +\\\n bulk_modulus +\\\n youngs_modulus +\\\n brinell_hardness +\\\n rigidity_modulus +\\\n vickers_hardness +\\\n density_of_solid +\\\n coefficient_of_linear_thermal_expansion +\\\n average_ionic_radius +\\\n average_cationic_radius +\\\n average_anionic_radius +\\\n spacegroup_numbers_list\n return 
descriptors_list\n\n\ndescriptors(cifFile)\n\n\n```\n\n /usr/local/lib/python3.7/dist-packages/pymatgen/io/cif.py:709: UserWarning: No _symmetry_equiv_pos_as_xyz type key found. Spacegroup from _symmetry_space_group_name_H-M used.\n warnings.warn(msg)\n /usr/local/lib/python3.7/dist-packages/pymatgen/io/cif.py:1121: UserWarning: Issues encountered while parsing CIF: No _symmetry_equiv_pos_as_xyz type key found. Spacegroup from _symmetry_space_group_name_H-M used.\n warnings.warn(\"Issues encountered while parsing CIF: %s\" % \"\\n\".join(self.warnings))\n\n\n\n\n\n [11.36,\n 32,\n 3,\n 7.509354166637767,\n 19.548727468344,\n 90.0,\n 90.0,\n 90.0,\n 0.4,\n 4.959684299990455,\n 8.869274685254709,\n 0.0,\n 1.6490128596408165,\n ...\n 0]\n\n\n\nNow let's iterate through the list of results and compute the descriptor vector and bandgap for each structure. 
This will take a few minutes.\n\n\n```\n\n\nband_gaps = []\ndataset = []\n\ncounter =0\nfor r in results:\n cif = r['cif']\n bg = r['band_gap']\n parser = CifParser.from_string(cif)\n\n structure = parser.get_structures()\n structure = structure[0]\n\n dataset += [descriptors(cif)]\n\n band_gaps += [bg]\n print(counter)\n counter +=1\n\ndataset_df = pd.DataFrame(dataset)\n\n```\n\n 0\n 1\n 2\n 3\n ...\n 48\n\n\n /usr/local/lib/python3.7/dist-packages/pymatgen/io/cif.py:1121: UserWarning: Issues encountered while parsing CIF: Some fractional co-ordinates rounded to ideal values to avoid issues with finite precision.\n warnings.warn(\"Issues encountered while parsing CIF: %s\" % \"\\n\".join(self.warnings))\n\n\n 49\n 50\n ...\n 4607\n
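Printing every loop index makes for a very long output cell. A lighter-weight alternative is a progress bar; the sketch below assumes the `tqdm` package is available, and it drops the unused intermediate `parser`/`structure` variables, since `descriptors` parses the CIF itself:\n\n\n```\nfrom tqdm import tqdm\n\nband_gaps = []\ndataset = []\nfor r in tqdm(results):\n dataset += [descriptors(r['cif'])]\n band_gaps += [r['band_gap']]\n\ndataset_df = pd.DataFrame(dataset)\n```\n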
Now that we have created our dataset, we need a bird's-eye view of the data. 
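A quick first check is the size of the descriptor table and the summary statistics of the target. This sketch only uses the `dataset_df` DataFrame and the `band_gaps` list built above:\n\n\n```\nimport pandas as pd\n\n# (number of structures, number of descriptors)\nprint(dataset_df.shape)\n\n# Summary statistics of the target (bandgaps in eV)\nprint(pd.Series(band_gaps).describe())\n```\n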
For now, let's have a look at how the bandgap values are distributed.\n\n\n```\n\nimport matplotlib.pyplot as plt\n\nplt.rcParams.update({'font.size': 20})\n\nplt.figure(figsize=(10, 10))\nplt.hist(band_gaps, bins=100)\nplt.xlabel('Bandgap (eV)')\nplt.ylabel('Count')\nplt.savefig('Histogram_PDF', bbox_inches='tight')\n```\n\nThis plot shows that almost half of our structures are metals (zero bandgap). The bandgaps around 7 eV could be outliers, but we can deal with those in a later lecture. \n\nWe can also plot the sorted bandgap values:\n\n\n```\nband_gaps_sorted=sorted(band_gaps)\n\n# Sorted bandgap values\nplt.figure(figsize=(10,10))\nplt.plot(band_gaps_sorted)\nplt.ylabel('Bandgap (eV)')\nplt.xlabel('Structure index (sorted)')\nplt.savefig('ScatterPlot', bbox_inches='tight')\n\n```\n\nNext, we split the dataset into training and test sets, using an 80/20 split ratio.\n\n\n```\nfrom sklearn.model_selection import train_test_split\n\n# random_state=None gives a different split on every run; set an integer for reproducibility\nX_train, X_test, y_train, y_test = train_test_split(\n dataset_df, band_gaps, test_size=.2, random_state=None)\n```\n\nThen we standardize the features, fitting the scaler on the training set only and applying the same transformation to both sets.\n\n\n```\n\nimport pandas as pd\nfrom sklearn.preprocessing import StandardScaler\n\n# Define the scaler, fitted on the training set only\nscaler = StandardScaler().fit(X_train)\n\n# Scale the training and test sets\nX_train_scaled = scaler.transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n\n```\n\n## The machine learning task\n\nNow it's time to actually do machine learning. We will try two machine learning models: random forests and XGBoost. We will quantify the prediction accuracy using two measures: the goodness of fit (R$^2$) and the mean absolute error (MAE).\n\n\n```\n\n\nfrom sklearn.metrics import mean_absolute_error, r2_score\nfrom xgboost import XGBRegressor\nfrom sklearn.ensemble import RandomForestRegressor\n\nregr = RandomForestRegressor(n_estimators=400, max_depth=400, random_state=0)\nregr.fit(X_train_scaled, y_train)\ny_predicted = regr.predict(X_test_scaled)\n\nprint('RF MAE\\t'+str(mean_absolute_error(y_test, y_predicted))+'\\n')\nprint('RF R2\\t'+str(r2_score(y_test, y_predicted))+'\\n')\n\nxPlot=y_test\nyPlot=y_predicted\nplt.figure(figsize=(10,10))\nplt.plot(xPlot,yPlot,'ro')\nplt.plot(xPlot,xPlot)\nplt.ylabel('RF')\nplt.xlabel('DFT')\nplt.savefig('RF_Correlation_Test', bbox_inches='tight')\n\n\nregr = XGBRegressor(objective='reg:squarederror', max_depth=10, n_estimators=400)\nregr.fit(X_train_scaled, y_train)\ny_predicted = regr.predict(X_test_scaled)\n\nprint('XGBOOST MAE\\t'+str(mean_absolute_error(y_test, y_predicted))+'\\n')\nprint('XGBOOST R2\\t'+str(r2_score(y_test, y_predicted))+'\\n')\n\n\nxPlot=y_test\nyPlot=y_predicted\nplt.figure(figsize=(10,10))\nplt.plot(xPlot,yPlot,'ro')\nplt.plot(xPlot,xPlot)\nplt.ylabel('XGBOOST')\nplt.xlabel('DFT')\nplt.savefig('XGBOOST_Correlation_Test', bbox_inches='tight')\n\n\n```\n\nAchieving R$^2$ > 0.71 and MAE < 0.5 is certainly good, but more can be done. 
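One place to look for further gains is which descriptors the models actually rely on. The sketch below refits a random forest on the scaled training data from above (the `regr` variable now holds the XGBoost model) and prints the ten most important descriptor columns, identified by their column index in `dataset_df`:\n\n\n```\nimport numpy as np\nfrom sklearn.ensemble import RandomForestRegressor\n\nrf = RandomForestRegressor(n_estimators=400, random_state=0)\nrf.fit(X_train_scaled, y_train)\n\n# Ten most important descriptor columns (indices refer to columns of dataset_df)\ntop = np.argsort(rf.feature_importances_)[::-1][:10]\nfor idx in top:\n print(idx, rf.feature_importances_[idx])\n```\n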
That is why we have developed the CrystalFeatures suite with many more features to improve the prediction accuracy of the bandgap, as well as other materials properties.\n\nThat's it for now!\n", "meta": {"hexsha": "8e3f109ceb31b0c3d18ef68148fb662f1c73d6d6", "size": 310325, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "CrystalFeatures_AtomicFeatures.ipynb", "max_stars_repo_name": "jepaur5/2021_07_08_iqtcub_md", "max_stars_repo_head_hexsha": "ddad1f08bd640e930e926fbbb02cf6910a6f3f1f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CrystalFeatures_AtomicFeatures.ipynb", "max_issues_repo_name": "jepaur5/2021_07_08_iqtcub_md", "max_issues_repo_head_hexsha": "ddad1f08bd640e930e926fbbb02cf6910a6f3f1f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CrystalFeatures_AtomicFeatures.ipynb", "max_forks_repo_name": "jepaur5/2021_07_08_iqtcub_md", "max_forks_repo_head_hexsha": "ddad1f08bd640e930e926fbbb02cf6910a6f3f1f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.6189791517, "max_line_length": 42806, "alphanum_fraction": 0.5835881737, "converted": true, "num_tokens": 39325, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5544704796847396, "lm_q2_score": 0.5312093733737562, "lm_q1q2_score": 0.2945399160675765}}
{"text": "```python\nimport local_models.local_models\nimport local_models.algorithms\nimport local_models.utils\nimport local_models.linear_projections\nimport local_models.loggin\nimport local_models.TLS_models\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn.linear_model\nimport sklearn.cluster\nfrom importlib import reload\nfrom ml_battery.utils import cmap\nimport matplotlib as mpl\nimport sklearn.datasets\nimport sklearn.decomposition\nimport logging\nimport ml_battery.log\nimport time\nimport os\nimport mayavi\nimport mayavi.mlab\nimport string\nimport subprocess\nimport functools\nimport cv2\nimport itertools\n\n#on headless systems, tmux: \"Xvfb :1 -screen 0 1280x1024x24 -auth localhost\", then \"export DISPLAY=:1\" in the jupyter tmux\nmayavi.mlab.options.offscreen = True\n\n\n\nlogger = logging.getLogger(__name__)\n\n#reload(local_models.local_models)\n#reload(lm)\n#reload(local_models.loggin)\n#reload(local_models.TLS_models)\nnp.warnings.filterwarnings('ignore')\n\n```\n\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n\n\n\n```python\nimport importlib; importlib.reload(quadrics_utils)\n```\n\n\n\n\n \n\n\n\n\n```python\ndef collect_best(expr, measure=sympy.count_ops):\n best = expr\n best_score = measure(expr)\n perms = itertools.permutations(expr.free_symbols)\n permlen = np.math.factorial(len(expr.free_symbols))\n print(permlen)\n for i, perm in enumerate(perms):\n if (permlen > 1000) and not (i%int(permlen/100)):\n print(i)\n collected = sympy.collect(expr, perm)\n if measure(collected) < best_score:\n best_score = measure(collected)\n best = collected\n else:\n factored = sympy.factor(expr)\n if measure(factored) < best_score:\n best_score = measure(factored)\n best = factored\n return best\n 
\ndef product(args):\n arg = next(args)\n try:\n return arg*product(args)\n except:\n return arg\n \ndef rcollect_best(expr, measure=sympy.count_ops):\n best = collect_best(expr, measure)\n best_score = measure(best)\n if expr == best:\n return best\n if isinstance(best, sympy.Mul):\n return product(map(rcollect_best, best.args))\n if isinstance(best, sympy.Add):\n return sum(map(rcollect_best, best.args))\n```\n\n\n```python\ndef derive_quadratic_orthogonal_projection_1D_polynomial(n):\n import sympy\n \n Q_sym = sympy.symarray(\"q\", (n+1, n+1))\n Q = sympy.Matrix(np.zeros((n+1,n+1), dtype=int))\n for i, j in itertools.product(range(n+1), range(n+1)):\n if i == n or j == n or i == j:\n Q[i,j] = Q_sym[max(i,j),min(i,j)]\n print(Q)\n\n x_sym = sympy.symarray(\"x\", n+1)\n X = sympy.Matrix(np.ones((n+1, 1), dtype=int))\n for i in range(n):\n X[i] = x_sym[i]\n \n P = sympy.Matrix(np.zeros((n-1, n+1), dtype=int))\n for i in range(n-1):\n P[i,0] = X[i+1]\n P[i,i+1] = -X[0]\n \n QXP = P*Q*X\n \n other_dims_as_x0 = [sympy.solve(QXP[i], X[i+1])[0] for i in range(n-1)] \n \n XQX = sympy.expand((X.T*Q*X)[0])\n XQX_as_x0 = XQX.subs({X[i+1]:other_dims_as_x0[i] for i in range(n-1)})\n for sub in other_dims_as_x0:\n XQX_as_x0 *= sympy.fraction(sub)[1]**2\n XQX_as_x0 = sympy.cancel(XQX_as_x0)\n XQX_as_x0 = sympy.simplify(XQX_as_x0)\n XQX_as_x0 = sympy.poly(XQX_as_x0, X[0])\n \n return (X, Q, XQX_as_x0, other_dims_as_x0)\n \n```\n\n\n```python\ndef collectify_polynomial_coefficients(poly):\n return [rcollect_best(formula) for formula in poly.all_coeffs()]\n```\n\n\n```python\nsorted(map(lambda x: x**2, range(4)))\n```\n\n\n\n\n [0, 1, 4, 9]\n\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\ndef funcify(signature, returnable):\n return \"def {}: return {}\\n\".format(signature, returnable)\n\ndef librarify_quadric_equations():\n import sympy\n from sympy.abc import a,b,c,d,e,f,g,x,y,z\n\n Q = a*x**2 + b*y**2 + c*z**2 + 2*e*x + 2*f*y + 2*g*z + d\n y_as_x_num = f*x\n y_as_x_den = e-(b-a)*x\n y_as_x = y_as_x_num/y_as_x_den\n z_as_x_num = g*x\n z_as_x_den = e-(c-a)*x\n z_as_x = z_as_x_num/z_as_x_den\n Q_as_x = Q.subs({\n y: y_as_x,\n z: z_as_x,\n })\n\n bigQ = sympy.expand(sympy.simplify(Q_as_x*y_as_x_den**2*z_as_x_den**2))\n\n coeffs = list(map(sympy.factor, sympy.poly(bigQ,x).all_coeffs()))\n\n collected = []\n for coeff in coeffs:\n collected.append(rcollect_best(coeff))\n\n k_mat = sympy.Matrix(collected)\n k_jac = k_mat.jacobian([a,b,c,d,e,f,g])\n\n with open(\"quadrics_utils.py\", \"w\") as f:\n Q_func = funcify(\"Q(a,b,c,d,e,f,g,x,y,z)\",str(Q))\n y_as_x_func = funcify(\"y_as_x(a,b,c,d,e,f,g,x)\", str(y_as_x))\n z_as_x_func = funcify(\"z_as_x(a,b,c,d,e,f,g,x)\", str(z_as_x))\n Q_as_x_func = funcify(\"Q_as_x(a,b,c,d,e,f,g,x)\", str(Q_as_x))\n k_mat_func = funcify(\"k_mat(a,b,c,d,e,f,g)\", str(k_mat.transpose().tolist()[0]))\n k_jac_func = funcify(\"k_jac(a,b,c,d,e,f,g)\", str(k_jac.tolist()))\n individual_k_funcs = [funcify(\"k{:01d}(a,b,c,d,e,f,g)\".format(i), str(eq)) for i,eq in enumerate(collected[::-1])]\n list(map(f.write, [Q_func, y_as_x_func, z_as_x_func, Q_as_x_func, k_mat_func, k_jac_func] + individual_k_funcs))\n```\n", "meta": {"hexsha": "d5dd8d4e96a5b1159ebb67354c40f8acd8b7d5cb", "size": 10305, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/quadrics_solvers.ipynb", "max_stars_repo_name": "csbrown/pylomo", "max_stars_repo_head_hexsha": "377aa386427a32da8b42fe53aacbe3281fbf2bf6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, 
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/quadrics_solvers.ipynb", "max_issues_repo_name": "csbrown/pylomo", "max_issues_repo_head_hexsha": "377aa386427a32da8b42fe53aacbe3281fbf2bf6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-01T17:41:36.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-01T17:41:36.000Z", "max_forks_repo_path": "examples/quadrics_solvers.ipynb", "max_forks_repo_name": "csbrown/pylomo", "max_forks_repo_head_hexsha": "377aa386427a32da8b42fe53aacbe3281fbf2bf6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.6574394464, "max_line_length": 261, "alphanum_fraction": 0.548568656, "converted": true, "num_tokens": 2149, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5621765155565326, "lm_q2_score": 0.523420348936324, "lm_q1q2_score": 0.2942546279364071}}
{"text": "\n# PHY321: Introduction to Classical Mechanics and plans for Spring 2021\n\n \n**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway \n\n **Scott Pratt**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA\n\nDate: **Jan 20, 2021**\n\nCopyright 1999-2021, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license\n\n\n\n\n## Aims and Overview\n\nThe first week starts on Monday January 11. This week is dedicated to a\nreview of learning material and reminder on programming aspects,\nuseful tools, where to find information and much more. There are no lectures during this week and the material here on vectors and Python programming will also be discussed during the week which begins with January 18. Feel free to look at the notes here before our first lecture on the 20th.\n\n* Introduction to the course and reminder on vectors, space, time and motion.\n\n* Python programming reminder, elements from [CMSE 201 INTRODUCTION TO COMPUTATIONAL MODELING](https://cmse.msu.edu/academics/undergraduate-program/undergraduate-courses/cmse-201-introduction-to-computational-modeling/) and how they are used in this course. Installing software (anaconda). . \n\n* Introduction to Git and GitHub. [Overview video on Git and GitHub](https://mediaspace.msu.edu/media/t/1_8mgx3cyf).\n\n**Recommended reading**: John R. Taylor, Classical Mechanics (Univ. Sci. Books 2005), , see also . Chapters 1.2 and 1.3 of Taylor.\n\n\n\n## Introduction to the course and where to find material\n\n[Overview video](https://mediaspace.msu.edu/media/t/1_zzl90pfu)\n\n\n\n\n## Classical mechanics\n\nClassical mechanics is a topic which has been taught intensively over\nseveral centuries. It is, with its many variants and ways of\npresenting the educational material, normally the first **real** physics\ncourse many of us meet and it lays the foundation for further physics\nstudies. Many of the equations and ways of reasoning about the\nunderlying laws of motion and pertinent forces, shape our approaches and understanding\nof the scientific method and discourse, as well as the way we develop our insights\nand deeper understanding about physical systems. \n\n## From Continuous to Discretized Approaches\n\nThere is a wealth of\nwell-tested (from both a physics point of view and a pedagogical\nstandpoint) exercises and problems which can be solved\nanalytically. However, many of these problems represent idealized and\nless realistic situations. The large majority of these problems are\nsolved by paper and pencil and are traditionally aimed\nat what we normally refer to as continuous models from which we may find an analytical solution. As a consequence,\nwhen teaching mechanics, it implies that we can seldomly venture beyond an idealized case\nin order to develop our understandings and insights about the\nunderlying forces and laws of motion.\n\nWe aim at changing this here by introducing throughout the course what\nwe will call a **computational path**, where with computations we mean\nsolving scientific problems with all possible tools and means, from\nplain paper an pencil exercises, via symbolic calculations to writing\na code and running a program to solve a specific\nproblem. 
Mathematically this normally means that we move from a\ncontinuous problem to a discretized one. This appproach enables us to\nsolve a much broader class of problems.\nIn mechanics this means, since we often rephrase the physical problems in terms of differential equations, that we can in most settings reuse the same program with some minimal changes. \n\n\n\n\n\n\n## Space, Time, Motion, Reference Frames and Reminder on vectors and other mathematical quantities\n\nOur studies will start with the motion of different types of objects\nsuch as a falling ball, a runner, a bicycle etc etc. It means that an\nobject's position in space varies with time.\nIn order to study such systems we need to define\n\n* choice of origin\n\n* choice of the direction of the axes\n\n* choice of positive direction (left-handed or right-handed system of reference)\n\n* choice of units and dimensions\n\nThese choices lead to some important questions such as\n\n* is the physics of a system independent of the origin of the axes?\n\n* is the physics independent of the directions of the axes, that is are there privileged axes?\n\n* is the physics independent of the orientation of system?\n\n* is the physics independent of the scale of the length?\n\n## Dimension, units and labels\n\nThroughout this course we will use the standardized SI units. The standard unit for length is thus one meter 1m, for mass\none kilogram 1kg, for time one second 1s, for force one Newton 1kgm/s$^2$ and for energy 1 Joule 1kgm$^2$s$^{-2}$.\n\nWe will use the following notations for various variables (vectors are always boldfaced in these lecture notes):\n* position $\\boldsymbol{r}$, in one dimention we will normally just use $x$,\n\n* mass $m$,\n\n* time $t$,\n\n* velocity $\\boldsymbol{v}$ or just $v$ in one dimension,\n\n* acceleration $\\boldsymbol{a}$ or just $a$ in one dimension,\n\n* momentum $\\boldsymbol{p}$ or just $p$ in one dimension,\n\n* kinetic energy $K$,\n\n* potential energy $V$ and\n\n* frequency $\\omega$.\n\nMore variables will be defined as we need them.\n\n## Dimensions and Units\n\nIt is also important to keep track of dimensionalities. Don't mix this\nup with a chosen unit for a given variable. We mark the dimensionality\nin these lectures as $[a]$, where $a$ is the quantity we are\ninterested in. Thus\n\n* $[\\boldsymbol{r}]=$ length\n\n* $[m]=$ mass\n\n* $[K]=$ energy\n\n* $[t]=$ time\n\n* $[\\boldsymbol{v}]=$ length over time\n\n* $[\\boldsymbol{a}]=$ length over time squared\n\n* $[\\boldsymbol{p}]=$ mass times length over time\n\n* $[\\omega]=$ 1/time\n\n## Scalars, Vectors and Matrices\n\nA scalar is something with a value that is independent of coordinate\nsystem. Examples are mass, or the relative time between events. A\nvector has magnitude and direction. Under rotation, the magnitude\nstays the same but the direction changes. Scalars have no spatial\nindex, whereas a three-dimensional vector has 3 indices, e.g. the\nposition $\\boldsymbol{r}$ has components $r_1,r_2,r_3$, which are often\nreferred to as $x,y,z$.\n\nThere are several categories of changes of coordinate system. The\nobserver can translate the origin, might move with a different\nvelocity, or might rotate her/his coordinate axes. For instance, a\nparticle's position vector changes when the origin is translated, but\nits velocity does not. 
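We can check this numerically in a few lines of Python. The sketch below uses arbitrary numbers: two positions a small time step apart define a finite-difference velocity, and shifting the origin leaves that velocity unchanged.\n\n\n```python\nimport numpy as np\n\ndt = 0.1\nr1 = np.array([1.0, 2.0, 0.5]) # position at time t\nr2 = np.array([1.3, 2.1, 0.4]) # position at time t + dt\nshift = np.array([5.0, -2.0, 7.0]) # translation of the origin\n\nv = (r2 - r1)/dt # velocity estimate in the original frame\nv_shifted = ((r2 + shift) - (r1 + shift))/dt # same estimate with the origin translated\n\nprint(v, v_shifted) # identical: the constant shift cancels in the difference\n```\n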
When you study relativity you will find that\nquantities you thought of as scalars, such as time or an electric\npotential, are actually parts of four-dimensional vectors and that\nchanges of the velocity of the reference frame act in a similar way to\nrotations.\n\nIn addition to vectors and scalars, there are matrices, which have two\nindices. One also has objects with 3 or four indices. These are called\ntensors of rank $n$, where $n$ is the number of indices. A matrix is a\nrank-two tensor. The Levi-Civita symbol, $\\epsilon_{ijk}$ used for\ncross products of vectors, is a tensor of rank three.\n\n\n## Definitions of Vectors\n\n\nIn these lectures we will use boldfaced lower-case letters to label a\nvector. A vector $\\boldsymbol{a}$ in three dimensions is thus defined as\n\n$$\n\\boldsymbol{a} =(a_x,a_y, a_z),\n$$\n\nand using the unit vectors (see below) in a cartesian system we have\n\n$$\n\\boldsymbol{a} = a_x\\boldsymbol{e}_1+a_y\\boldsymbol{e}_2+a_z\\boldsymbol{e}_3,\n$$\n\nwhere the unit vectors have magnitude $\\vert\\boldsymbol{e}_i\\vert = 1$ with\n$i=1=x$, $i=2=y$ and $i=3=z$. Some authors use letters\n$\\boldsymbol{i}=\\boldsymbol{e}_1$, $\\boldsymbol{j}=\\boldsymbol{e}_2$ and $\\boldsymbol{k}=\\boldsymbol{e}_3$.\n\n\n## Other ways to define a Vector\n\nAlternatively, you may also encounter the above vector as\n\n$$\n\\boldsymbol{a} = a_1\\boldsymbol{e}_1+a_2\\boldsymbol{e}_2+a_3\\boldsymbol{e}_3.\n$$\n\nHere we have used that $a_1=a_x$, $a_2=a_y$ and $a_3=a_z$. Such a\nnotation is sometimes more convenient if we wish to represent vector\noperations in a mathematically more compact way, see below here. We may also find this useful if we want the different\ncomponents to represent other coordinate systems that the Cartesian one. A typical example would be going from a Cartesian representation to a spherical basis. We will encounter such cases many times in this course. \n\nWe use lower-case letters for vectors and upper-case letters for matrices. Vectors and matrices are always boldfaced.\n\n\n## Polar Coordinates\n\nAs an example, consider a two-dimensional Cartesian system with a vector $\\boldsymbol{r}=(x,y)$.\nOur vector is then written as\n\n$$\n\\boldsymbol{r} = x\\boldsymbol{e}_1+y\\boldsymbol{e}_2.\n$$\n\nTransforming to polar coordinates with the radius $\\rho\\in [0,\\infty)$\nand the angle $\\phi \\in [0,2\\pi]$ we have the familiar transformations\n\n$$\nx = \\rho \\cos{\\phi} \\hspace{0.5cm} y = \\rho \\sin{\\phi},\n$$\n\nand the inverse relations\n\n$$\n\\rho =\\sqrt{x^2+y^2} \\hspace{0.5cm} \\phi = \\mathrm{arctan}(\\frac{y}{x}).\n$$\n\nWe can rewrite the vector $\\boldsymbol{a}$ in terms of $\\rho$ and $\\phi$ as\n\n$$\n\\boldsymbol{a} = \\rho \\cos{\\phi}\\boldsymbol{e}_1+\\rho \\sin{\\phi}\\boldsymbol{e}_2,\n$$\n\nand we define the new unit vectors as $\\boldsymbol{e}'_1=\\cos{\\phi}\\boldsymbol{e}_1$ and $\\boldsymbol{e}'_2=\\sin{\\phi}\\boldsymbol{e}_2$, we have\n\n$$\n\\boldsymbol{a}' = \\rho\\boldsymbol{e}'_1+\\rho \\boldsymbol{e}'_2.\n$$\n\nBelow we will show that the norms of this vector in a Cartesian basis and a Polar basis are equal.\n\n\n\n## Unit Vectors\n\nAlso known as basis vectors, unit vectors point in the direction of\nthe coordinate axes, have unit norm, and are orthogonal to one\nanother. 
Sometimes this is referred to as an orthonormal basis,\n\n\n\n\n$$\n\\begin{equation}\n\\boldsymbol{e}_i\\cdot\\boldsymbol{e}_j=\\delta_{ij}=\\begin{bmatrix}\n1 & 0 & 0\\\\\n0& 1 & 0\\\\\n0 & 0 & 1\n\\end{bmatrix}.\n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\nHere, $\\delta_{ij}$ is unity when $i=j$ and is zero otherwise. This is\ncalled the unit matrix, because you can multiply it with any other\nmatrix and not change the matrix. The **dot** denotes the dot product,\n$\\boldsymbol{a}\\cdot\\boldsymbol{b}=a_1b_1+a_2b_2+a_3b_3=|a||b|\\cos\\theta_{ab}$. Sometimes\nthe unit vectors are called $\\hat{x}$, $\\hat{y}$ and\n$\\hat{z}$.\n\n\n## Our definition of unit vectors\n\nVectors can be decomposed in terms of unit vectors,\n\n\n\n\n$$\n\\begin{equation}\n\\boldsymbol{r}=r_1\\hat{e}_1+r_2\\hat{e}_2+r_3\\hat{e}_3.\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nThe vector components $r_1$, $r_2$ and $r_3$ might be\ncalled $x$, $y$ and $z$ for a displacement. Another way to write this is to define the vector $\\boldsymbol{r}=(x,y,z)$.\n\nSimilarly, for the velocity we will use in this course the components $\\boldsymbol{v}=(v_x, v_y,v_z$. The accelaration is then given by $\\boldsymbol{a}=(a_x,a_y,a_z)$.\n\n\n## More definitions, repeated indices\n\nAs mentioned above, repeated indices infer sums.\nThis means that when you encounter an expression like the one on the left-hand side here, it stands actually for a sum (right-hand side)\n\n$$\nx_iy_i=\\sum_i x_iy_i=\\boldsymbol{x}\\cdot\\boldsymbol{y}.\n$$\n\nWe will in our lectures seldom use this notation and rather spell out the summations. This inferred summation over indices is normally called [Einstein summation convention](https://en.wikipedia.org/wiki/Einstein_notation).\n\n## Vector Operations, Scalar Product (or dot product)\n\nFor two vectors $\\boldsymbol{a}$ and $\\boldsymbol{b}$ we have\n\n$$\n\\begin{eqnarray*}\n\\boldsymbol{a}\\cdot\\boldsymbol{b}&=&\\sum_ia_ib_i=|a||b|\\cos\\theta_{ab},\\\\\n|a|&\\equiv& \\sqrt{\\boldsymbol{a}\\cdot\\boldsymbol{a}},\n\\end{eqnarray*}\n$$\n\nor with a norm-2 notation\n\n$$\n|a|\\equiv \\vert\\vert \\boldsymbol{a}\\vert\\vert_2=\\sqrt{\\sum_i a_i^2}.\n$$\n\nNot of all of you are familiar with linear algebra. Numerically we will always deal with arrays and the dot product vector is given by the product of the transposed vector multiplied with the other vector, that is we have\n\n$$\n\\boldsymbol{a}^T\\boldsymbol{b}=\\sum_i a_ib_i=|a||b|\\cos\\theta_{ab}.\n$$\n\nThe superscript $T$ represents the transposition operations. \n\n\n## Digression, Linear Algebra Notation for Vectors\n\nAs an example, consider a three-dimensional velocity defined by a vector $\\boldsymbol{v}=(v_x,v_y,v_z)$. 
For those of you familiar with linear algebra, we would write this quantity as\n\n$$\n\\boldsymbol{v}=\\begin{bmatrix} v_x\\\\ v_y \\\\ v_z \\end{bmatrix},\n$$\n\nand the transpose as\n\n$$\n\\boldsymbol{v}^T=\\begin{bmatrix} v_x & v_y &v_z \\end{bmatrix}.\n$$\n\nThe norm is\n\n$$\n\\boldsymbol{v}^T\\boldsymbol{v}=v_x^2+v_y^2+v_z^2,\n$$\n\nas it should.\n\nSince we will use Python as a programming language throughout this course, the above vector, using the package **numpy** (see discussions below), can be written as\n\n\n```python\nimport numpy as np\n# Define the values of vx, vy and vz\nvx = 0.0\nvy = 1.0\nvz = 0.0\nv = np.array([vx, vy, vz])\nprint(v)\n# Then print the transpose of v (for a one-dimensional array this is the array itself)\nprint(v.T)\n```\n\nTry to figure out how to calculate the norm with **numpy**.\nWe will come back to **numpy** in the examples below.\n\n\n\n## Norm of a transformed Vector\n\nAs an example, consider our transformation of a two-dimensional Cartesian vector $\\boldsymbol{r}$ to polar coordinates.\nWe had\n\n$$\n\\boldsymbol{r} = x\\boldsymbol{e}_1+y\\boldsymbol{e}_2.\n$$\n\nTransforming to polar coordinates with the radius $\\rho\\in [0,\\infty)$\nand the angle $\\phi \\in [0,2\\pi]$ we have\n\n$$\nx = \\rho \\cos{\\phi} \\hspace{0.5cm} y = \\rho \\sin{\\phi}.\n$$\n\nWe can write this\n\n$$\n\\boldsymbol{r} = \\begin{bmatrix} x \\\\ y \\end{bmatrix}= \\begin{bmatrix} \\rho \\cos{\\phi} \\\\ \\rho \\sin{\\phi} \\end{bmatrix}.\n$$\n\nThe norm in Cartesian coordinates is $\\boldsymbol{r}\\cdot\\boldsymbol{r}=x^2+y^2$ and\nusing Polar coordinates we have\n$\\rho^2(\\cos{\\phi})^2+\\rho^2(\\sin{\\phi})^2=\\rho^2$, which shows that\nthe norm is conserved since we have $\\rho = \\sqrt{x^2+y^2}$. A\ntransformation to a new basis should not change the norm.\n\n\n\n\n## Vector Product (or cross product) of vectors $\\boldsymbol{a}$ and $\\boldsymbol{b}$\n\n$$\n\\begin{eqnarray*}\n\\boldsymbol{c}&=&\\boldsymbol{a}\\times\\boldsymbol{b},\\\\\nc_i&=&\\epsilon_{ijk}a_jb_k.\n\\end{eqnarray*}\n$$\n\nHere $\\epsilon$ is the third-rank anti-symmetric tensor, also known as\nthe Levi-Civita symbol. It is $\\pm 1$ only if all three indices are\ndifferent, and is zero otherwise. The choice of $\\pm 1$ depends on\nwhether the indices are an even or odd permutation of the original\nsymbols. The permutation $xyz$ or $123$ is considered to be $+1$. Its elements are\n\n$$\n\\begin{eqnarray}\n\\epsilon_{ijk}&=&-\\epsilon_{ikj}=-\\epsilon_{jik}=-\\epsilon_{kji}\\\\\n\\nonumber\n\\epsilon_{123}&=&\\epsilon_{231}=\\epsilon_{312}=1,\\\\\n\\nonumber\n\\epsilon_{213}&=&\\epsilon_{132}=\\epsilon_{321}=-1,\\\\\n\\nonumber\n\\epsilon_{iij}&=&\\epsilon_{iji}=\\epsilon_{jii}=0.\n\\end{eqnarray}\n$$\n\n## More on cross-products\n\nYou may have met cross-products when studying magnetic\nfields. Because the Levi-Civita symbol is anti-symmetric, switching the $x$ and\n$y$ axes (or any two axes) flips the sign. The coordinate system is\nright-handed if the $xyz$ axes satisfy\n$\\hat{x}\\times\\hat{y}=\\hat{z}$: you can point along the $x$ axis\nwith your extended right index finger, the $y$ axis with your\ncontracted middle finger and the $z$ axis with your extended\nthumb. Switching to a left-handed system flips the sign of the vector\n$\\boldsymbol{c}=\\boldsymbol{a}\\times\\boldsymbol{b}$.\n\nNote that\n$\\boldsymbol{a}\\times\\boldsymbol{b}=-\\boldsymbol{b}\\times\\boldsymbol{a}$. 
The vector $\\boldsymbol{c}$ is\nperpendicular to both $\\boldsymbol{a}$ and $\\boldsymbol{b}$ and the magnitude of\n$\\boldsymbol{c}$ is given by\n\n$$\n|c|=|a||b|\\sin{\\theta_{ab}}.\n$$\n\n## Pseudo-vectors\n\nVectors obtained by the cross product of two real vectors are called\npseudo-vectors because the assignment of their direction can be\narbitrarily flipped by defining the Levi-Civita symbol to be based on\nleft-handed rules. Examples are the magnetic field and angular\nmomentum. If the direction of a real vector prefers the right-handed\nover the left-handed direction, that constitutes a violation of\nparity. For instance, one can polarize the spins (angular momentum) of\nnuclei with a magnetic field so that the spins preferentially point\nalong the direction of the magnetic field. This does not violate\nparity because both are pseudo-vectors. Now assume these polarized\nnuclei decay and that electrons are one of the products. If these\nelectrons prefer to exit the decay parallel vs. antiparallel to the\npolarizing magnetic field, this constitutes parity violation because\nthe direction of the outgoing electron momenta are a real vector. This\nis precisely what is observed in weak decays.\n\n## Differentiation of a vector with respect to a scalar\n\nFor example, the\nacceleration $\\boldsymbol{a}$ is given by the change in velocity per unit time, $\\boldsymbol{a}=d\\boldsymbol{v}/dt$\nwith components\n\n$$\na_i = (d\\boldsymbol{v}/dt)_i=\\frac{dv_i}{dt}.\n$$\n\nHere $i=x,y,z$ or $i=1,2,3$ if we are in three dimensions.\n\n## Gradient operator $\\nabla$\n\nThis represents the derivatives $\\partial/\\partial\nx$, $\\partial/\\partial y$ and $\\partial/\\partial z$. An often used shorthand is $\\partial_x=\\partial/\\partial_x$.\n\nThe gradient of a scalar function of position and time\n$\\Phi(x,y,z)=\\Phi(\\boldsymbol{r},t)$ is given by\n\n$$\n\\boldsymbol{\\nabla}~\\Phi,\n$$\n\nwith components $i$\n\n$$\n(\\nabla\\Phi(x,y,z,t))_i=\\partial/\\partial r_i\\Phi(\\boldsymbol{r},t)=\\partial_i\\Phi(\\boldsymbol{r},t).\n$$\n\nNote that the gradient is a vector.\n\nTaking the dot product of the gradient with a vector, normally called the divergence,\nwe have\n\n$$\n\\mathrm{div} \\boldsymbol{a}, \\nabla\\cdot\\boldsymbol{a}=\\sum_i \\partial_i a_i.\n$$\n\nNote that the divergence is a scalar. \n\n## The curl\n\nThe **curl** of a vector is defined as\n$\\nabla\\times\\boldsymbol{a}$,\n\n$$\n{\\rm\\bf curl}~\\boldsymbol{a},\n$$\n\nwith components\n\n$$\n(\\boldsymbol{\\nabla}\\times\\boldsymbol{a})_i=\\epsilon_{ijk}\\partial_j a_k(\\boldsymbol{r},t).\n$$\n\n## The Laplacian\n\nThe Laplacian is referred to as $\\nabla^2$ and is defined as\n\n$$\n\\boldsymbol{\\nabla}^2=\\boldsymbol{\\nabla}\\cdot\\boldsymbol{\\nabla}=\\frac{\\partial^2}{\\partial x^2}+\\frac{\\partial^2}{\\partial y^2}+\\frac{\\partial^2}{\\partial z^2}.\n$$\n\nQuestion: is the Laplacian a scalar or a vector?\n\n\n## Some identities\n\nHere we simply state these, but you may wish to prove a few. 
They are useful for this class and will be essential when you study electromagnetism.\n\n$$\n\\begin{eqnarray}\n\\boldsymbol{a}\\cdot(\\boldsymbol{b}\\times\\boldsymbol{c})&=&\\boldsymbol{b}\\cdot(\\boldsymbol{c}\\times\\boldsymbol{a})=\\boldsymbol{c}\\cdot(\\boldsymbol{a}\\times\\boldsymbol{b})\\\\\n\\nonumber\n\\boldsymbol{a}\\times(\\boldsymbol{b}\\times\\boldsymbol{c})&=&(\\boldsymbol{a}\\cdot\\boldsymbol{c})\\boldsymbol{b}-(\\boldsymbol{a}\\cdot\\boldsymbol{b})\\boldsymbol{c}\\\\\n\\nonumber\n(\\boldsymbol{a}\\times\\boldsymbol{b})\\cdot(\\boldsymbol{c}\\times\\boldsymbol{d})&=&(\\boldsymbol{a}\\cdot\\boldsymbol{c})(\\boldsymbol{b}\\cdot\\boldsymbol{d})\n-(\\boldsymbol{a}\\cdot\\boldsymbol{d})(\\boldsymbol{b}\\cdot\\boldsymbol{c})\n\\end{eqnarray}\n$$\n\n## More useful relations\n\nUsing the fact that multiplication of reals is distributive, we can show that the dot product is distributive,\n\n$$\n\\boldsymbol{a}\\cdot(\\boldsymbol{b}+\\boldsymbol{c})=\\boldsymbol{a}\\cdot\\boldsymbol{b}+\\boldsymbol{a}\\cdot\\boldsymbol{c}.\n$$\n\nSimilarly, we can also show (using the product rule for differentiating reals) that\n\n$$\n\\frac{d}{dt}(\\boldsymbol{a}\\cdot\\boldsymbol{b})=\\boldsymbol{a}\\cdot\\frac{d\\boldsymbol{b}}{dt}+\\boldsymbol{b}\\cdot\\frac{d\\boldsymbol{a}}{dt}.\n$$\n\nWe can repeat these operations for the cross product and show that it is distributive as well,\n\n$$\n\\boldsymbol{a}\\times(\\boldsymbol{b}+\\boldsymbol{c})=\\boldsymbol{a}\\times\\boldsymbol{b}+\\boldsymbol{a}\\times\\boldsymbol{c}.\n$$\n\nWe have also that\n\n$$\n\\frac{d}{dt}(\\boldsymbol{a}\\times\\boldsymbol{b})=\\frac{d\\boldsymbol{a}}{dt}\\times\\boldsymbol{b}+\\boldsymbol{a}\\times\\frac{d\\boldsymbol{b}}{dt}.\n$$\n\n## Gauss's Theorem\n\nFor an integral over a volume $V$ confined by a surface $S$, Gauss's theorem gives\n\n$$\n\\int_V dv~\\nabla\\cdot\\boldsymbol{A}=\\int_Sd\\boldsymbol{S}\\cdot\\boldsymbol{A}.\n$$\n\nFor a closed path $C$ which carves out some area $S$,\n\n$$\n\\int_C d\\boldsymbol{\\ell}\\cdot\\boldsymbol{A}=\\int_Sd\\boldsymbol{s} \\cdot(\\nabla\\times\\boldsymbol{A}).\n$$\n\n## and Stokes's Theorem\n\nStokes's theorem can be understood by considering a small rectangle in the $xy$ plane, $-\\Delta x \\le x \\le \\Delta x$ and $-\\Delta y \\le y \\le \\Delta y$: Taylor expanding $\\boldsymbol{A}$ around the center of the rectangle shows that the line integral of $\\boldsymbol{A}$ around the boundary equals the flux of $\\nabla\\times\\boldsymbol{A}$ through the enclosed area, which is the content of the relation above.\n\n
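Although the analytical proofs are left to you, the identities listed above are easy to spot-check numerically. The following small sketch (an addition for illustration, not one of the original exercises) verifies the triple-product, "BAC-CAB" and cross-product-of-cross-products relations with **numpy** for randomly drawn vectors:

```python
import numpy as np

# draw four random vectors
a, b, c, d = (np.random.normal(size=3) for _ in range(4))

# scalar triple product: a.(b x c) = b.(c x a) = c.(a x b)
print(np.dot(a, np.cross(b, c)), np.dot(b, np.cross(c, a)), np.dot(c, np.cross(a, b)))

# "BAC-CAB" rule: a x (b x c) = (a.c) b - (a.b) c
print(np.allclose(np.cross(a, np.cross(b, c)), np.dot(a, c)*b - np.dot(a, b)*c))

# (a x b).(c x d) = (a.c)(b.d) - (a.d)(b.c)
print(np.isclose(np.dot(np.cross(a, b), np.cross(c, d)),
                 np.dot(a, c)*np.dot(b, d) - np.dot(a, d)*np.dot(b, c)))
```

If the printed numbers agree (up to round-off) and both checks return `True`, the identities hold for that particular draw; rerunning the cell gives new test vectors.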
\n\n\n\n\n## Some famous Matrices\n\n * Diagonal if $a_{ij}=0$ for $i\\ne j$\n\n * Upper triangular if $a_{ij}=0$ for $i > j$\n\n * Lower triangular if $a_{ij}=0$ for $i < j$\n\n * Upper Hessenberg if $a_{ij}=0$ for $i > j+1$\n\n * Lower Hessenberg if $a_{ij}=0$ for $i < j+1$\n\n * Tridiagonal if $a_{ij}=0$ for $|i -j| > 1$\n\n * Lower banded with bandwidth $p$: $a_{ij}=0$ for $i > j+p$\n\n * Upper banded with bandwidth $p$: $a_{ij}=0$ for $i < j+p$\n\n * Banded, block upper triangular, block lower triangular....\n\n## More Basic Matrix Features\n\n**Some Equivalent Statements.**\n\nFor an $N\\times N$ matrix $\\mathbf{A}$ the following properties are all equivalent\n\n * If the inverse of $\\mathbf{A}$ exists, $\\mathbf{A}$ is nonsingular.\n\n * The equation $\\mathbf{Ax}=0$ implies $\\mathbf{x}=0$.\n\n * The rows of $\\mathbf{A}$ form a basis of $R^N$.\n\n * The columns of $\\mathbf{A}$ form a basis of $R^N$.\n\n * $\\mathbf{A}$ is a product of elementary matrices.\n\n * $0$ is not eigenvalue of $\\mathbf{A}$.\n\n\n\n\n\n## Rotations\n\n\nHere, we use rotations as an example of matrices and their operations. One can consider a different orthonormal basis $\\hat{e}'_1$, $\\hat{e}'_2$ and $\\hat{e}'_3$. The same vector $\\boldsymbol{r}$ mentioned above can also be expressed in the new basis,\n\n\n\n\n$$\n\\begin{equation}\n\\boldsymbol{r}=r'_1\\hat{e}'_1+r'_2\\hat{e}'_2+r'_3\\hat{e}'_3.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\nEven though it is the same vector, the components have changed. Each\nnew unit vector $\\hat{e}'_i$ can be expressed as a linear sum of the\nprevious vectors,\n\n\n\n\n$$\n\\begin{equation}\n\\hat{e}'_i=\\sum_j U_{ij}\\hat{e}_j,\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\nand the matrix $U$ can be found by taking the dot product of both sides with $\\hat{e}_k$,\n\n\n\n\n$$\n\\begin{eqnarray}\n\\nonumber\n\\hat{e}_k\\cdot\\hat{e}'_i&=&\\sum_jU_{ij}\\hat{e}_k\\cdot\\hat{e}_j\\\\\n\\label{eq:lambda_angles} \\tag{5}\n\\hat{e}_k\\cdot\\hat{e}'_i&=&\\sum_jU_{ij}\\delta_{jk}=U_{ik}.\n\\end{eqnarray}\n$$\n\n## More on the matrix $U$\n\nThus, the matrix lambda has components $U_{ij}$ that are equal to the\ncosine of the angle between new unit vector $\\hat{e}'_i$ and the old\nunit vector $\\hat{e}_j$.\n\n\n\n\n$$\n\\begin{equation}\nU = \\begin{bmatrix}\n\\hat{e}'_1\\cdot\\hat{e}_1& \\hat{e}'_1\\cdot\\hat{e}_2& \\hat{e}'_1\\cdot\\hat{e}_3\\\\\n\\hat{e}'_2\\cdot\\hat{e}_1& \\hat{e}'_2\\cdot\\hat{e}_2& \\hat{e}'_2\\cdot\\hat{e}_3\\\\\n\\hat{e}'_3\\cdot\\hat{e}_1& \\hat{e}'_3\\cdot\\hat{e}_2& \\hat{e}'_3\\cdot\\hat{e}_3\n\\end{bmatrix},~~~~~U_{ij}=\\hat{e}'_i\\cdot\\hat{e}_j=\\cos\\theta_{ij}.\n\\label{_auto5} \\tag{6}\n\\end{equation}\n$$\n\n## Properties of the matrix $U$\n\nNote that the matrix is not symmetric, $U_{ij}\\ne U_{ji}$. One can also look at the inverse transformation, by switching the primed and unprimed coordinates,\n\n\n\n\n$$\n\\begin{eqnarray}\n\\label{eq:inverseU} \\tag{7}\n\\hat{e}_i&=&\\sum_jU^{-1}_{ij}\\hat{e}'_j,\\\\\n\\nonumber\nU^{-1}_{ij}&=&\\hat{e}_i\\cdot\\hat{e}'_j=U_{ji}.\n\\end{eqnarray}\n$$\n\nThe definition of transpose of a matrix, $M^{t}_{ij}=M_{ji}$, allows one to state this as\n\n\n\n\n$$\n\\begin{eqnarray}\n\\label{eq:transposedef} \\tag{8}\nU^{-1}&=&U^{t}.\n\\end{eqnarray}\n$$\n\n## Tensors\n\nA tensor obeying Eq. ([8](#eq:transposedef)) defines what is known as\na unitary, or orthogonal, transformation.\n\nThe matrix $U$ can be used to transform any vector to the new basis. 
Consider a vector\n\n$$\n\\begin{eqnarray}\n\\boldsymbol{r}&=&r_1\\hat{e}_1+r_2\\hat{e}_2+r_3\\hat{e}_3\\\\\n\\nonumber\n&=&r'_1\\hat{e}'_1+r'_2\\hat{e}'_2+r'_3\\hat{e}'_3.\n\\end{eqnarray}\n$$\n\nThis is the same vector expressed as a sum over two different sets of\nbasis vectors. The coefficients $r_i$ and $r'_i$ represent components\nof the same vector. The relation between them can be found by taking\nthe dot product of each side with one of the unit vectors,\n$\\hat{e}_i$, which gives\n\n$$\n\\begin{eqnarray}\nr_i&=&\\sum_j \\hat{e}_i\\cdot\\hat{e}'_j~r'_j.\n\\end{eqnarray}\n$$\n\nUsing Eq. ([7](#eq:inverseU)) one can see that the transformation of $r$ can be also written in terms of $U$,\n\n\n\n\n$$\n\\begin{eqnarray}\n\\label{eq:rotateR} \\tag{9}\nr_i&=&\\sum_jU^{-1}_{ij}~r'_j.\n\\end{eqnarray}\n$$\n\nThus, the matrix that transforms the coordinates of the unit vectors,\nEq. ([7](#eq:inverseU)) is the same one that transforms the\ncoordinates of a vector, Eq. ([9](#eq:rotateR)).\n\n\n## Rotation matrix\n\nAs a small exercise, find the rotation matrix $U$ for finding the\ncomponents in the primed coordinate system given from those in the\nunprimed system, given that the unit vectors in the new system are\nfound by rotating the coordinate system by and angle $\\phi$ about the\n$z$ axis.\n\nIn this case\n\n$$\n\\begin{eqnarray*}\n\\hat{e}'_1&=&\\cos\\phi \\hat{e}_1-\\sin\\phi\\hat{e}_2,\\\\\n\\hat{e}'_2&=&\\sin\\phi\\hat{e}_1+\\cos\\phi\\hat{e}_2,\\\\\n\\hat{e}'_3&=&\\hat{e}_3.\n\\end{eqnarray*}\n$$\n\nBy inspecting Eq. ([5](#eq:lambda_angles)), we get\n\n$$\nU=\\left(\\begin{array}{ccc}\n\\cos\\phi&-\\sin\\phi&0\\\\\n\\sin\\phi&\\cos\\phi&0\\\\\n0&0&1\\end{array}\\right).\n$$\n\n## Unitary Transformations\n\nUnder a unitary transformation $U$ (or basis transformation) scalars\nare unchanged, whereas vectors $\\boldsymbol{r}$ and matrices $M$ change as\n\n$$\n\\begin{eqnarray}\nr'_i&=&U_{ij}~ r_j, ~~({\\rm sum~inferred})\\\\\n\\nonumber\nM'_{ij}&=&U_{ik}M_{km}U^{-1}_{mj}.\n\\end{eqnarray}\n$$\n\nPhysical quantities with no spatial indices are scalars (or\npseudoscalars if they depend on right-handed vs. left-handed\ncoordinate systems), and are unchanged by unitary\ntransformations. This includes quantities like the trace of a matrix,\nthe matrix itself had indices but none remain after performing the\ntrace.\n\n$$\n\\begin{eqnarray}\n{\\rm Tr} M&\\equiv& M_{ii}.\n\\end{eqnarray}\n$$\n\nBecause there are no remaining indices, one expects it to be a scalar. Indeed one can see this,\n\n$$\n\\begin{eqnarray}\n{\\rm Tr} M'&=&U_{ij}M_{jm}U^{-1}_{mi}\\\\\n\\nonumber\n&=&M_{jm}U^{-1}_{mi}U_{ij}\\\\\n\\nonumber\n&=&M_{jm}\\delta_{mj}\\\\\n\\nonumber\n&=&M_{jj}={\\rm Tr} M.\n\\end{eqnarray}\n$$\n\nA similar example is the determinant of a matrix, which is also a scalar.\n\n\n\n## Numerical Elements\n\nNumerical algorithms call for approximate discrete models and much of\nthe development of methods for continuous models are nowadays being\nreplaced by methods for discrete models in science and industry,\nsimply because **much larger classes of problems can be addressed** with\ndiscrete models, often by simpler and more generic methodologies.\n\nAs we will see throughout this course, when properly scaling the equations at hand,\ndiscrete models open up for more advanced abstractions and the possibility to\nstudy real life systems, with the added bonus that we can explore and\ndeepen our basic understanding of various physical systems\n\nAnalytical solutions are as important as before. 
In addition, such\nsolutions provide us with invaluable benchmarks and tests for our\ndiscrete models. Such benchmarks, as we will see below, allow us \nto discuss possible sources of errors and their behaviors. And\nfinally, since most of our models are based on various algorithms from\nnumerical mathematics, we have a unique oppotunity to gain a deeper\nunderstanding of the mathematical approaches we are using.\n\n\n\nWith computing and data science as important elements in essentially\nall aspects of a modern society, we could then try to define Computing as\n**solving scientific problems using all possible tools, including\nsymbolic computing, computers and numerical algorithms, and analytical\npaper and pencil solutions**. \nComputing provides us with the tools to develope our own understanding of the scientific method by enhancing algorithmic thinking.\n\n## Computations and the Scientific Method\n\nThe way we will teach this course reflects this definition of\ncomputing. The course contains both classical paper and pencil\nexercises as well as computational projects and exercises. The hope is\nthat this will allow you to explore the physics of systems governed by\nthe degrees of freedom of classical mechanics at a deeper level, and\nthat these insights about the scientific method will help you to\ndevelop a better understanding of how the underlying forces and\nequations of motion and how they impact a given system.\n\nFurthermore,\nby introducing various numerical methods via computational projects\nand exercises, we aim at developing your competences and skills about\nthese topics.\n\n\n## Computational Competences\n\nThese competences will enable you to\n\n* understand how algorithms are used to solve mathematical problems,\n\n* derive, verify, and implement algorithms,\n\n* understand what can go wrong with algorithms,\n\n* use these algorithms to construct reproducible scientific outcomes and to engage in science in ethical ways, and\n\n* think algorithmically for the purposes of gaining deeper insights about scientific problems.\n\nAll these elements are central for maturing and gaining a better understanding of the modern scientific process *per se*.\n\nThe power of the scientific method lies in identifying a given problem\nas a special case of an abstract class of problems, identifying\ngeneral solution methods for this class of problems, and applying a\ngeneral method to the specific problem (applying means, in the case of\ncomputing, calculations by pen and paper, symbolic computing, or\nnumerical computing by ready-made and/or self-written software). This\ngeneric view on problems and methods is particularly important for\nunderstanding how to apply available, generic software to solve a\nparticular problem.\n\n*However, verification of algorithms and understanding their limitations requires much of the classical knowledge about continuous models.*\n\n\n## A well-known example to illustrate many of the above concepts\n\nBefore we venture into a reminder on Python and mechanics relevant applications, let us briefly outline some of the\nabovementioned topics using an example many of you may have seen before in for example CMSE201. \nA simple algorithm for integration is the Trapezoidal rule. 
\nIntegration of a function $f(x)$ by the Trapezoidal Rule is given by the following algorithm for an interval $x \\in [a,b]$\n\n$$\n\\int_a^b f(x) dx = \\frac{h}{2}\\left [f(a)+2f(a+h)+\\dots+2f(b-h)+f(b)\\right] +O(h^2),\n$$\n\nwhere $h$ is the so-called stepsize defined by the number of integration points $N$ as $h=(b-a)/N$.\nPython offers an extremely versatile programming environment, allowing for\nthe inclusion of analytical studies in a numerical program. Here we show an\nexample code with the **trapezoidal rule**. We also use **SymPy** to evaluate the exact value of the integral and compute the relative error\nof the numerically evaluated result for the integral\n$\\int_0^1 x^2 dx = 1/3$.\nThe following code for the trapezoidal rule allows you to plot the relative error by comparing with the exact result. By increasing the number of integration points towards $10^7$-$10^8$ one arrives at a region where numerical round-off errors start to accumulate.\n\n\n```python\n%matplotlib inline\n\nfrom math import log10\nimport numpy as np\nfrom sympy import Symbol, integrate\nimport matplotlib.pyplot as plt\n# function for the trapezoidal rule\ndef Trapez(a,b,f,n):\n    h = (b-a)/float(n)\n    s = 0\n    x = a\n    for i in range(1,n,1):\n        x = x+h\n        s = s + f(x)\n    s = 0.5*(f(a)+f(b)) + s\n    return h*s\n# the function to integrate\ndef function(x):\n    return x*x\n# define integration limits\na = 0.0; b = 1.0;\n# find the exact result with sympy\n# define x as a symbol to be used by sympy\nx = Symbol('x')\nexact = integrate(function(x), (x, a, b))\n# set up the arrays for plotting the relative error\nn = np.zeros(7); y = np.zeros(7);\n# find the relative error as function of the number of integration points\nfor i in range(1, 8, 1):\n    npts = 10**i\n    result = Trapez(a,b,function,npts)\n    RelativeError = abs((exact-result)/exact)\n    n[i-1] = log10(npts); y[i-1] = log10(RelativeError);\nplt.plot(n,y, 'ro')\nplt.xlabel('n')\nplt.ylabel('Relative error')\nplt.show()\n```\n\n## Analyzing the above example\n\n\nThis example shows the potential of combining numerical algorithms\nwith symbolic calculations, allowing us to\n\n* Validate and verify our algorithms. \n\n* Include concepts like unit testing, which gives us the possibility to test several or all parts of the code.\n\n* Validation and verification are then included *naturally*, and one can develop a better attitude to what is meant by an ethically sound scientific approach.\n\n* The above example allows the student to also test the mathematical error of the algorithm for the trapezoidal rule by changing the number of integration points. The students get **trained from day one to think error analysis**. \n\n* With a Jupyter notebook you can keep exploring similar examples and turn them in as your own notebooks. \n\n## Python practicalities, Software and needed installations\n\nWe will make extensive use of Python as programming language and its\nmyriad of available libraries. You will find\nJupyter notebooks invaluable in your work. \n\nIf you have Python installed (we strongly recommend Python3) and you feel\npretty familiar with installing different packages, we recommend that\nyou install the following Python packages via **pip** as \n\n1. pip install numpy scipy matplotlib ipython scikit-learn mglearn sympy pandas pillow \n\nFor Python3, replace **pip** with **pip3**.\n\nFor OSX users we recommend, after having installed Xcode, to\ninstall **brew**. Brew allows for a seamless installation of additional\nsoftware via for example \n\n1. 
brew install python3\n\nFor Linux users, with its variety of distributions like for example the widely popular Ubuntu distribution,\nyou can use **pip** as well and simply install Python as \n\n1. sudo apt-get install python3 (or python for pyhton2.7)\n\netc etc. \n\n\n\n## Python installers\n\nIf you don't want to perform these operations separately and venture\ninto the hassle of exploring how to set up dependencies and paths, we\nrecommend two widely used distrubutions which set up all relevant\ndependencies for Python, namely \n\n* [Anaconda](https://docs.anaconda.com/), \n\nwhich is an open source\ndistribution of the Python and R programming languages for large-scale\ndata processing, predictive analytics, and scientific computing, that\naims to simplify package management and deployment. Package versions\nare managed by the package management system **conda**. \n\n* [Enthought canopy](https://www.enthought.com/product/canopy/) \n\nis a Python\ndistribution for scientific and analytic computing distribution and\nanalysis environment, available for free and under a commercial\nlicense.\n\nFurthermore, [Google's Colab](https://colab.research.google.com/notebooks/welcome.ipynb) is a free Jupyter notebook environment that requires \nno setup and runs entirely in the cloud. Try it out!\n\n## Useful Python libraries\nHere we list several useful Python libraries we strongly recommend (if you use anaconda many of these are already there)\n\n* [NumPy](https://www.numpy.org/) is a highly popular library for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays\n\n* [The pandas](https://pandas.pydata.org/) library provides high-performance, easy-to-use data structures and data analysis tools \n\n* [Xarray](http://xarray.pydata.org/en/stable/) is a Python package that makes working with labelled multi-dimensional arrays simple, efficient, and fun!\n\n* [Scipy](https://www.scipy.org/) (pronounced \u201cSigh Pie\u201d) is a Python-based ecosystem of open-source software for mathematics, science, and engineering. \n\n* [Matplotlib](https://matplotlib.org/) is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.\n\n* [Autograd](https://github.com/HIPS/autograd) can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives\n\n* [SymPy](https://www.sympy.org/en/index.html) is a Python library for symbolic mathematics. \n\n* [scikit-learn](https://scikit-learn.org/stable/) has simple and efficient tools for machine learning, data mining and data analysis\n\n* [TensorFlow](https://www.tensorflow.org/) is a Python library for fast numerical computing created and released by Google\n\n* [Keras](https://keras.io/) is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano\n\n* And many more such as [pytorch](https://pytorch.org/), [Theano](https://pypi.org/project/Theano/) etc \n\nYour jupyter notebook can easily be\nconverted into a nicely rendered **PDF** file or a Latex file for\nfurther processing. 
For example, convert to latex as\n\n        jupyter nbconvert filename.ipynb --to latex \n\n\nAnd to add more versatility, the Python package [SymPy](http://www.sympy.org/en/index.html) is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) and is entirely written in Python. \n\n\n## Numpy examples and Important Matrix and vector handling packages\n\nThere are several central software libraries for linear algebra and eigenvalue problems. Several of the more\npopular ones have been wrapped into other software packages, like those from the widely used text **Numerical Recipes**. The original source codes in many of the available packages are often taken from the widely used\nsoftware package LAPACK, which follows two other popular packages\ndeveloped in the 1970s, namely EISPACK and LINPACK. We describe them briefly here.\n\n * LINPACK: package for linear equations and least square problems.\n\n * LAPACK: package for solving symmetric, unsymmetric and generalized eigenvalue problems. From LAPACK's website it is possible to download for free all source codes from this library. Both C/C++ and Fortran versions are available.\n\n * BLAS (I, II and III): (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. BLAS I covers vector operations, II vector-matrix operations and III matrix-matrix operations. These are highly parallelized and efficient codes, all freely available for download.\n\n## Numpy and arrays\n\n[Numpy](http://www.numpy.org/) provides an easy way to handle arrays in Python. The standard way to import this library is as\n\n\n```python\nimport numpy as np\n```\n\nHere follows a simple example where we set up an array of ten elements, all determined by random numbers drawn according to the normal distribution,\n\n\n```python\nn = 10\nx = np.random.normal(size=n)\nprint(x)\n```\n\n [ 0.0894565 1.94971919 -1.00217137 1.04531722 1.65047376 -0.39492747\n 0.07211386 -0.43060307 -0.44633089 0.74168775]\n\n\nWe defined a vector $x$ with $n=10$ elements with its values given by the Normal distribution $N(0,1)$.\nAnother alternative is to declare a vector as follows\n\n\n```python\nimport numpy as np\nvx = 0.0\nvy = 1.0\nvz = 0.0\nv = np.array([vx, vy, vz])\nprint(v)\n```\n\n [0. 1. 0.]\n\n\nHere we have defined a vector with three elements, with $v_0=0$, $v_1=1$ and $v_2=0$. Note that both Python and C++\nstart numbering array elements from $0$. This means that a vector with $n$ elements has a sequence of entities $x_0, x_1, x_2, \\dots, x_{n-1}$. We could also let Numpy (recommended) compute the logarithms of a specific array as\n\n\n```python\nimport numpy as np\nx = np.log(np.array([4, 7, 8]))\nprint(x)\n```\n\n [1.38629436 1.94591015 2.07944154]\n\n\nIn the last example we used Numpy's unary function **np.log**. This function is\nhighly tuned to compute array elements since the code is vectorized\nand does not require explicit looping on our side. We normally recommend that you use the\nNumpy intrinsic functions instead of the corresponding **log** function\nfrom Python's **math** module. The looping over the array elements is done implicitly, in compiled code, by the\n**np.log** function. 
The alternative, and slower way to compute the\nlogarithms of a vector would be to write\n\n\n```python\nimport numpy as np\nfrom math import log\nx = np.array([4, 7, 8])\nfor i in range(0, len(x)):\n    x[i] = log(x[i])\nprint(x)\n```\n\nWe note that our code is much longer already and we need to import the **log** function from the **math** module. \nThe attentive reader will also notice that the output is $[1, 1, 2]$. Python automagically interprets our numbers as integers (much like type deduction via the **auto** keyword in C++), and assigning back into the integer array truncates the results. To change this we could define our array elements to be double precision numbers as\n\n\n```python\nimport numpy as np\nx = np.log(np.array([4, 7, 8], dtype = np.float64))\nprint(x)\n```\n\nor simply write them as double precision numbers (Python uses 64 bits as default for floating point type variables), that is\n\n\n```python\nimport numpy as np\nx = np.log(np.array([4.0, 7.0, 8.0]))\nprint(x)\n```\n\nTo check the number of bytes per element (remember that one byte contains eight bits, so a double precision number occupies eight bytes), you can simply use the **itemsize** functionality (the array $x$ is actually an object which inherits the functionalities defined in Numpy) as\n\n\n```python\nimport numpy as np\nx = np.log(np.array([4.0, 7.0, 8.0]))\nprint(x.itemsize)\n```\n\n## Matrices in Python\n\nHaving defined vectors, we are now ready to try out matrices. We can\ndefine a $3 \\times 3 $ real matrix $\\hat{A}$ as (recall that we use\nlowercase letters for vectors and uppercase letters for matrices)\n\n\n```python\nimport numpy as np\nA = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))\nprint(A)\n```\n\nIf we use the **shape** function we would get $(3, 3)$ as output, verifying that our matrix is a $3\\times 3$ matrix. We can slice the matrix and print for example the first column (Numpy stores matrix elements in row-major order, see below) as\n\n\n```python\nimport numpy as np\nA = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))\n# print the first column; row-major order and elements start with 0\nprint(A[:,0])\n```\n\nWe can continue this way by printing out other columns or rows. The example here prints out the second row\n\n\n```python\nimport numpy as np\nA = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))\n# print the second row; row-major order and elements start with 0\nprint(A[1,:])\n```\n\nNumpy contains many other functionalities that allow us to slice and subdivide arrays. We strongly recommend that you look up the [Numpy website for more details](http://www.numpy.org/). 
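To connect slicing back to the dot product and the norm discussed earlier, here is a brief illustrative sketch (an addition using only standard **numpy** functions) that combines slicing with a matrix-vector product and the norm-2:

```python
import numpy as np

A = np.array([[4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0]])
v = np.array([0.0, 1.0, 0.0])

# matrix-vector product (two equivalent ways)
print(A @ v)
print(np.dot(A, v))

# the dot product v^T v and the corresponding norm-2
print(np.dot(v, v))
print(np.linalg.norm(v))

# the norm of a single row of A, obtained by slicing
print(np.linalg.norm(A[1, :]))
```
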
Useful functions when defining a matrix are the **np.zeros** function which declares a matrix of a given dimension and sets all elements to zero\n\n\n```python\nimport numpy as np\nn = 10\n# define a matrix of dimension 10 x 10 and set all elements to zero\nA = np.zeros( (n, n) )\nprint(A)\n```\n\nor initializing all elements to\n\n\n```python\nimport numpy as np\nn = 10\n# define a matrix of dimension 10 x 10 and set all elements to one\nA = np.ones( (n, n) )\nprint(A)\n```\n\nor as unitarily distributed random numbers (see the material on random number generators in the statistics part)\n\n\n```python\nimport numpy as np\nn = 10\n# define a matrix of dimension 10 x 10 and set all elements to random numbers with x \\in [0, 1]\nA = np.random.rand(n, n)\nprint(A)\n```\n\n## Meet the Pandas\n\n\n\n\n\n\n\n\n\n\n\nAnother useful Python package is\n[pandas](https://pandas.pydata.org/), which is an open source library\nproviding high-performance, easy-to-use data structures and data\nanalysis tools for Python. **pandas** stands for panel data, a term borrowed from econometrics and is an efficient library for data analysis with an emphasis on tabular data.\n\n**pandas** has two major classes, the **DataFrame** class with\ntwo-dimensional data objects and tabular data organized in columns and\nthe class **Series** with a focus on one-dimensional data objects. Both\nclasses allow you to index data easily as we will see in the examples\nbelow. **pandas** allows you also to perform mathematical operations on\nthe data, spanning from simple reshapings of vectors and matrices to\nstatistical operations.\n\nThe following simple example shows how we can, in an easy way make\ntables of our data. Here we define a data set which includes names,\nplace of birth and date of birth, and displays the data in an easy to\nread way. We will see repeated use of **pandas**, in particular in\nconnection with classification of data.\n\n\n```python\nimport pandas as pd\nfrom IPython.display import display\ndata = {'First Name': [\"Frodo\", \"Bilbo\", \"Aragorn II\", \"Samwise\"],\n 'Last Name': [\"Baggins\", \"Baggins\",\"Elessar\",\"Gamgee\"],\n 'Place of birth': [\"Shire\", \"Shire\", \"Eriador\", \"Shire\"],\n 'Date of Birth T.A.': [2968, 2890, 2931, 2980]\n }\ndata_pandas = pd.DataFrame(data)\ndisplay(data_pandas)\n```\n\n\n
| | First Name | Last Name | Place of birth | Date of Birth T.A. |
|---|---|---|---|---|
| 0 | Frodo | Baggins | Shire | 2968 |
| 1 | Bilbo | Baggins | Shire | 2890 |
| 2 | Aragorn II | Elessar | Eriador | 2931 |
| 3 | Samwise | Gamgee | Shire | 2980 |
\n\n\nIn the above we have imported **pandas** with the shorthand **pd**, the latter has become the standard way we import **pandas**. We make then a list of various variables\nand reorganize the above lists into a **DataFrame** and then print out a neat table with specific column labels as *Name*, *place of birth* and *date of birth*.\nDisplaying these results, we see that the indices are given by the default numbers from zero to three.\n**pandas** is extremely flexible and we can easily change the above indices by defining a new type of indexing as\n\n\n```python\ndata_pandas = pd.DataFrame(data,index=['Frodo','Bilbo','Aragorn','Sam'])\ndisplay(data_pandas)\n```\n\n\n
| | First Name | Last Name | Place of birth | Date of Birth T.A. |
|---|---|---|---|---|
| Frodo | Frodo | Baggins | Shire | 2968 |
| Bilbo | Bilbo | Baggins | Shire | 2890 |
| Aragorn | Aragorn II | Elessar | Eriador | 2931 |
| Sam | Samwise | Gamgee | Shire | 2980 |
\n\n\nThereafter we display the content of the row which begins with the index **Aragorn**\n\n\n```python\ndisplay(data_pandas.loc['Aragorn'])\n```\n\n\n First Name Aragorn II\n Last Name Elessar\n Place of birth Eriador\n Date of Birth T.A. 2931\n Name: Aragorn, dtype: object\n\n\nWe can easily append data to this, for example\n\n\n```python\nnew_hobbit = {'First Name': [\"Peregrin\"],\n 'Last Name': [\"Took\"],\n 'Place of birth': [\"Shire\"],\n 'Date of Birth T.A.': [2990]\n }\ndata_pandas=data_pandas.append(pd.DataFrame(new_hobbit, index=['Pippin']))\ndisplay(data_pandas)\n```\n\n\n
| | First Name | Last Name | Place of birth | Date of Birth T.A. |
|---|---|---|---|---|
| Frodo | Frodo | Baggins | Shire | 2968 |
| Bilbo | Bilbo | Baggins | Shire | 2890 |
| Aragorn | Aragorn II | Elessar | Eriador | 2931 |
| Sam | Samwise | Gamgee | Shire | 2980 |
| Pippin | Peregrin | Took | Shire | 2990 |
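As a small aside (not part of the original example), rows of a **DataFrame** can also be selected with boolean masks, something we will use repeatedly when filtering data. The one-liner below reuses the `data_pandas` object and the `display` function from the cells above:

```python
# select all rows whose 'Place of birth' column equals 'Shire' using a boolean mask
display(data_pandas[data_pandas['Place of birth'] == 'Shire'])
```
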
\n\n\nHere are other examples where we use the **DataFrame** functionality to handle arrays, now with more interesting features for us, namely numbers. We set up a matrix \nof dimensionality $10\\times 5$ and compute the mean value and standard deviation of each column. Similarly, we can perform mathematial operations like squaring the matrix elements and many other operations.\n\n\n```python\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display\nnp.random.seed(100)\n# setting up a 10 x 5 matrix\nrows = 10\ncols = 5\na = np.random.randn(rows,cols)\ndf = pd.DataFrame(a)\ndisplay(df)\nprint(df.mean())\nprint(df.std())\ndisplay(df**2)\n```\n\n\n
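Because a **DataFrame** wraps a **numpy** array, element-wise **numpy** functions and simple broadcasting work directly on it. The short sketch below is an added illustration and reuses the `df`, `np` and `display` objects from the previous cell:

```python
# numpy ufuncs act element-wise on a DataFrame
display(np.sqrt(df**2))
# subtract the column means (broadcasting over the rows)
display(df - df.mean())
# column-wise range
print(df.max() - df.min())
```
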
\n\n\nThereafter we can select specific columns only and plot final results\n\n\n```python\ndf.columns = ['First', 'Second', 'Third', 'Fourth', 'Fifth']\ndf.index = np.arange(10)\n\ndisplay(df)\nprint(df['Second'].mean() )\nprint(df.info())\nprint(df.describe())\n\nfrom pylab import plt, mpl\nplt.style.use('seaborn')\nmpl.rcParams['font.family'] = 'serif'\n\ndf.cumsum().plot(lw=2.0, figsize=(10,6))\nplt.show()\n\n\ndf.plot.bar(figsize=(10,6), rot=15)\nplt.show()\n```\n\nWe can produce a $4\\times 4$ matrix\n\n\n```python\nb = np.arange(16).reshape((4,4))\nprint(b)\ndf1 = pd.DataFrame(b)\nprint(df1)\n```\n\nand many other operations. \n\n\nThe **Series** class is another important class included in\n**pandas**. You can view it as a specialization of **DataFrame** but where\nwe have just a single column of data. It shares many of the same\nfeatures as **DataFrame**. As with **DataFrame**, most operations are\nvectorized, achieving thereby a high performance when dealing with\ncomputations of arrays, in particular labeled arrays. As we will see\nbelow it leads also to a very concice code close to the mathematical\noperations we may be interested in. For multidimensional arrays, we\nrecommend strongly\n[xarray](http://xarray.pydata.org/en/stable/). **xarray** has much of\nthe same flexibility as **pandas**, but allows for the extension to\nhigher dimensions than two.\n\n\n\n\n## Introduction to Git and GitHub/GitLab and similar\n\n[Git](https://git-scm.com/) is a distributed version-control system\nfor tracking changes in any set of files, originally designed for\ncoordinating work among programmers cooperating on source code during\nsoftware development.\n\nThe [reference document and videos here](https://git-scm.com/doc)\ngive you an excellent introduction to the **git**.\n\nWe believe you will find version-control software very useful in your work. \n\n\n## GitHub, GitLab and many other\n\n[GitHub](https://github.com/), [GitLab](https://about.gitlab.com/), [Bitbucket](https://bitbucket.org/product?&aceid=&adposition=&adgroup=92266806717&campaign=1407243017&creative=414608923671&device=c&keyword=bitbucket&matchtype=e&network=g&placement=&ds_kids=p51241248597&ds_e=GOOGLE&ds_eid=700000001551985&ds_e1=GOOGLE&gclid=Cj0KCQiA6Or_BRC_ARIsAPzuer_yrxzs-R8KDVdF0-DduJR9hTBYcjdE8L9_CkA9eyz8XT7-3bFGOpQaAqe2EALw_wcB&gclsrc=aw.ds) and other are code hosting platforms for\nversion control and collaboration. They let you and others work\ntogether on projects from anywhere.\n\n\nAll teaching material related to this course is open and freely\navailable via the GitHub site of the course. The video here gives a\nshort intro to\n[GitHub](https://www.youtube.com/watch/w3jLJU7DT5E?reload=9).\n\nSee also the [overview video on Git and GitHub](https://mediaspace.msu.edu/media/t/1_8mgx3cyf).\n\n\n## Useful Git and GitHub links\n\nThese are a couple references that we have found useful (git commands, markdown, GitPages):\n* \n\n* \n\n* \n\n## Useful IDEs and text editors\n\nWhen dealing with homeworks, at some point you would need to use an\neditor, or an integrated development envinroment (IDE). As an IDE, we\nwould like to recommend **anaconda** since we end up using\njupyter-notebooks. 
**anaconda** runs on all known operating systems.\n\n\nIf you prefer editing **Python** codes, there are several excellent cross-platform editors.\nIf you are in a Windows environment, **word** is the classical text editor.\n\nThere is however a wealth of text editors and/ord IDEs that run on all operating\nsystems and functions well with Python. Some of the more popular ones are\n\n* [Atom](https://atom.io/)\n\n* [Sublime](https://www.sublimetext.com/)\n", "meta": {"hexsha": "46800be7371ca29fe65cc8563da2609454ecd853", "size": 96088, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/pub/week2/ipynb/week2.ipynb", "max_stars_repo_name": "schwartznicholas/Physics321", "max_stars_repo_head_hexsha": "b507f0411c2c92a669c85b8c47c502b9e7fa0c8f", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2020-01-09T17:41:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T00:48:58.000Z", "max_issues_repo_path": "doc/pub/week2/ipynb/week2.ipynb", "max_issues_repo_name": "schwartznicholas/Physics321", "max_issues_repo_head_hexsha": "b507f0411c2c92a669c85b8c47c502b9e7fa0c8f", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-01-08T03:47:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-15T15:02:57.000Z", "max_forks_repo_path": "doc/pub/week2/ipynb/week2.ipynb", "max_forks_repo_name": "schwartznicholas/Physics321", "max_forks_repo_head_hexsha": "b507f0411c2c92a669c85b8c47c502b9e7fa0c8f", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 33, "max_forks_repo_forks_event_min_datetime": "2020-01-10T20:40:55.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T20:28:41.000Z", "avg_line_length": 33.5737246681, "max_line_length": 483, "alphanum_fraction": 0.5390995754, "converted": true, "num_tokens": 18155, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. YES", "lm_q1_score": 0.4921881357207956, "lm_q2_score": 0.5964331462646255, "lm_q1q2_score": 0.29355731834207466}}
{"text": "\n\n# Customer Segmentation: Estimate Individualized Responses to Incentives\n\nNowadays, business decision makers rely on estimating the causal effect of interventions to answer what-if questions about shifts in strategy, such as promoting specific product with discount, adding new features to a website or increasing investment from a sales team. However, rather than learning whether to take action for a specific intervention for all users, people are increasingly interested in understanding the different responses from different users to the two alternatives. Identifying the characteristics of users having the strongest response for the intervention could help make rules to segment the future users into different groups. This can help optimize the policy to use the least resources and get the most profit.\n\nIn this case study, we will use a personalized pricing example to explain how the [EconML](https://aka.ms/econml) and [DoWhy](https://github.com/microsoft/dowhy) libraries could fit into this problem and provide robust and reliable causal solutions.\n\n### Summary\n\n1. [Background](#background)\n2. [Data](#data)\n3. [Create Causal Model and Identify Causal Effect with DoWhy](#identify)\n4. [Get Causal Effects with EconML](#estimate)\n5. [Test Estimate Robustness with DoWhy](#robustness)\n 1. [Add Random Common Cause](#random-common-cause)\n 2. [Add Unobserved Common Cause](#unobserved-common-cause)\n 3. [Replace Treatment with a Random (Placebo) Variable](#placebo-variable)\n 4. [Remove a Random Subset of the Data](#subset)\n6. [Understand Treatment Effects with EconML](#interpret)\n7. [Make Policy Decisions with EconML](#policy)\n8. [Conclusions](#conclusion)\n\n\n\n\n# Background \n\n\n\nThe global online media market is growing fast over the years. Media companies are always interested in attracting more users into the market and encouraging them to buy more songs or become members. In this example, we'll consider a scenario where one experiment a media company is running is to give small discount (10%, 20% or 0) to their current users based on their income level in order to boost the likelihood of their purchase. The goal is to understand the **heterogeneous price elasticity of demand** for people with different income level, learning which users would respond most strongly to a small discount. Furthermore, their end goal is to make sure that despite decreasing the price for some consumers, the demand is raised enough to boost the overall revenue.\n\nThe EconML and DoWhy libraries complement each other in implementing this solution. On one hand, the DoWhy library can help [build a causal model, indentify the causal effect](#identify) and [test causal assumptions](#robustness). On the other hand, EconML\u2019s `DML` based estimators can be used to take the discount variation in existing data, along with a rich set of user features, to [estimate heterogeneous price sensitivities](#estimate) that vary with multiple customer features. 
Then, the `SingleTreeCateInterpreter` provides a [presentation-ready summary](#interpret) of the key features that explain the biggest differences in responsiveness to a discount, and the `SingleTreePolicyInterpreter` recommends a [policy](#policy) on who should receive a discount in order to increase revenue (not only demand), which could help the company to set an optimal price for those users in the future.\n\n\n```python\n# Some imports to get us started\r\nimport warnings\r\nwarnings.simplefilter('ignore')\r\n\r\n# Utilities\r\nimport os\r\nimport urllib.request\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom networkx.drawing.nx_pydot import to_pydot\r\nfrom IPython.display import Image, display\r\n\r\n# Generic ML imports\r\nfrom sklearn.preprocessing import PolynomialFeatures\r\nfrom sklearn.ensemble import GradientBoostingRegressor\r\n\r\n# EconML imports\r\nfrom econml.dml import LinearDML, CausalForestDML\r\nfrom econml.cate_interpreter import SingleTreeCateInterpreter, SingleTreePolicyInterpreter\r\n\r\nimport matplotlib.pyplot as plt\r\n\r\n%matplotlib inline\n```\n\n# Data \n\n\nThe dataset* has ~10,000 observations and includes 9 continuous and categorical variables that represent user's characteristics and online behaviour history such as age, log income, previous purchase, previous online time per week, etc. \n\nWe define the following variables:\n\nFeature Name|Type|Details \n:--- |:---|:--- \n**account_age** |W| user's account age\n**age** |W|user's age\n**avg_hours** |W| the average hours user was online per week in the past\n**days_visited** |W| the average number of days user visited the website per week in the past\n**friend_count** |W| number of friends user connected in the account \n**has_membership** |W| whether the user had membership\n**is_US** |W| whether the user accesses the website from the US \n**songs_purchased** |W| the average songs user purchased per week in the past\n**income** |X| user's income\n**price** |T| the price user was exposed during the discount season (baseline price * small discount)\n**demand** |Y| songs user purchased during the discount season\n\n**To protect the privacy of the company, we use the simulated data as an example here. The data is synthetically generated and the feature distributions don't correspond to real distributions. However, the feature names have preserved their names and meaning.*\n\n\nThe treatment and outcome are generated using the following functions:\n$$\nT = \n\\begin{cases}\n 1 & \\text{with } p=0.2, \\\\\n 0.9 & \\text{with }p=0.3, & \\text{if income}<1 \\\\\n 0.8 & \\text{with }p=0.5, \\\\\n \\\\\n 1 & \\text{with }p=0.7, \\\\\n 0.9 & \\text{with }p=0.2, & \\text{if income}\\ge1 \\\\\n 0.8 & \\text{with }p=0.1, \\\\\n\\end{cases}\n$$\n\n\n\\begin{align}\n\\gamma(X) & = -3 - 14 \\cdot \\{\\text{income}<1\\} \\\\\n\\beta(X,W) & = 20 + 0.5 \\cdot \\text{avg_hours} + 5 \\cdot \\{\\text{days_visited}>4\\} \\\\\nY &= \\gamma(X) \\cdot T + \\beta(X,W)\n\\end{align}\n\n\n\n\n```python\n# Import the sample pricing data\r\nfile_url = \"https://msalicedatapublic.blob.core.windows.net/datasets/Pricing/pricing_sample.csv\"\r\ntrain_data = pd.read_csv(file_url)\n```\n\n\n```python\n# Data sample\r\ntrain_data.head()\n```\n\n\n\n\n
| | account_age | age | avg_hours | days_visited | friends_count | has_membership | is_US | songs_purchased | income | price | demand |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 3 | 53 | 1.834234 | 2 | 8 | 1 | 1 | 4.903237 | 0.960863 | 1.0 | 3.917117 |
| 1 | 5 | 54 | 7.171411 | 7 | 9 | 0 | 1 | 3.330161 | 0.732487 | 1.0 | 11.585706 |
| 2 | 3 | 33 | 5.351920 | 6 | 9 | 0 | 1 | 3.036203 | 1.130937 | 1.0 | 24.675960 |
| 3 | 2 | 34 | 6.723551 | 0 | 8 | 0 | 1 | 7.911926 | 0.929197 | 1.0 | 6.361776 |
| 4 | 4 | 30 | 2.448247 | 5 | 8 | 1 | 0 | 7.148967 | 0.533527 | 0.8 | 12.624123 |
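Before defining the estimator inputs, it can be helpful to confirm that the discount assignment in the loaded data matches the mechanism described above, namely that low-income users receive a discount more often. The quick check below is an added, illustrative step only and reuses `train_data`, `pd` and `display` from the cells above:

```python
# share of each observed price level within the two income groups (income < 1 vs income >= 1)
display(pd.crosstab(train_data["income"] < 1, train_data["price"], normalize="index"))

# average demand at each observed price level
print(train_data.groupby("price")["demand"].mean())
```
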
\n\n\n\n\n```python\n# Define estimator inputs\r\ntrain_data[\"log_demand\"] = np.log(train_data[\"demand\"])\r\ntrain_data[\"log_price\"] = np.log(train_data[\"price\"])\r\n\r\nY = train_data[\"log_demand\"].values\r\nT = train_data[\"log_price\"].values\r\nX = train_data[[\"income\"]].values # features\r\nconfounder_names = [\"account_age\", \"age\", \"avg_hours\", \"days_visited\", \"friends_count\", \"has_membership\", \"is_US\", \"songs_purchased\"]\r\nW = train_data[confounder_names].values\n```\n\n\n```python\n# Get test data\r\nX_test = np.linspace(0, 5, 100).reshape(-1, 1)\r\nX_test_data = pd.DataFrame(X_test, columns=[\"income\"])\n```\n\n# Create Causal Model and Identify Causal Effect with DoWhy \n\nWe define the causal assumptions with DoWhy. For example, we can include features we believe as confounders and features we think will influence the heterogeneity of the effect. With these assumptions defined, DoWhy can generate a causal graph for us, and use that graph to first identify the causal effect.\n\n\n\n\n```python\n# initiate an EconML cate estimator\r\nest = LinearDML(model_y=GradientBoostingRegressor(), model_t=GradientBoostingRegressor(),\r\n featurizer=PolynomialFeatures(degree=2, include_bias=False))\n```\n\n\n```python\n# fit through dowhy\r\nest_dw = est.dowhy.fit(Y, T, X=X, W=W, outcome_names=[\"log_demand\"], treatment_names=[\"log_price\"], feature_names=[\"income\"],\r\n confounder_names=confounder_names, inference=\"statsmodels\")\n```\n\n WARNING:dowhy.causal_model:Causal Graph not provided. DoWhy will construct a graph based on data inputs.\n INFO:dowhy.causal_graph:If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named \"Unobserved Confounders\" to reflect this.\n INFO:dowhy.causal_model:Model to find the causal effect of treatment ['log_price'] on outcome ['log_demand']\n WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly.\n INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True.\n INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[]\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n\n\n\n```python\n# Visualize causal graph\r\ntry:\r\n # Try pretty printing the graph. 
Requires pydot and pygraphviz\r\n display(\r\n Image(to_pydot(est_dw._graph._graph).create_png())\r\n )\r\nexcept:\r\n # Fall back on default graph view\r\n est_dw.view_model() \n```\n\n\n```python\nidentified_estimand = est_dw.identified_estimand_\r\nprint(identified_estimand)\n```\n\n Estimand type: nonparametric-ate\n \n ### Estimand : 1\n Estimand name: backdoor1 (Default)\n Estimand expression:\n d \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500(Expectation(log_demand|is_US,has_membership,days_visited,age,inco\n d[log_price] \n \n \n me,account_age,avg_hours,songs_purchased,friends_count))\n \n Estimand assumption 1, Unconfoundedness: If U\u2192{log_price} and U\u2192log_demand then P(log_demand|log_price,is_US,has_membership,days_visited,age,income,account_age,avg_hours,songs_purchased,friends_count,U) = P(log_demand|log_price,is_US,has_membership,days_visited,age,income,account_age,avg_hours,songs_purchased,friends_count)\n \n ### Estimand : 2\n Estimand name: iv\n No such variable found!\n \n\n\n# Get Causal Effects with EconML \n\nBased on the identified causal effect above, we fit the model as follows using EconML:\n\n\n\\begin{align}\nlog(Y) & = \\theta(X) \\cdot log(T) + f(X,W) + \\epsilon \\\\\nlog(T) & = g(X,W) + \\eta\n\\end{align}\n\n\nwhere $\\epsilon, \\eta$ are uncorrelated error terms. \n\n\nThe models we fit here aren't an exact match for the data generation function above, but if they are a good approximation, they will allow us to create a good discount policy. Although the model is misspecified, we hope to see that our `DML` based estimators can still capture the right trend of $\\theta(X)$ and that the recommended policy beats other baseline policies (such as always giving a discount) on revenue. Because of the mismatch between the data generating process and the model we're fitting, there isn't a single true $\\theta(X)$ (the true elasticity varies with not only X but also T and W), but given how we generate the data above, we can still calculate the range of true $\\theta(X)$ to compare against.\n\n\n```python\n# Define underlying treatment effect function given DGP\r\ndef gamma_fn(X):\r\n return -3 - 14 * (X[\"income\"] < 1)\r\n\r\ndef beta_fn(X):\r\n return 20 + 0.5 * (X[\"avg_hours\"]) + 5 * (X[\"days_visited\"] > 4)\r\n\r\ndef demand_fn(data, T):\r\n Y = gamma_fn(data) * T + beta_fn(data)\r\n return Y\r\n\r\ndef true_te(x, n, stats):\r\n if x < 1:\r\n subdata = train_data[train_data[\"income\"] < 1].sample(n=n, replace=True)\r\n else:\r\n subdata = train_data[train_data[\"income\"] >= 1].sample(n=n, replace=True)\r\n te_array = subdata[\"price\"] * gamma_fn(subdata) / (subdata[\"demand\"])\r\n if stats == \"mean\":\r\n return np.mean(te_array)\r\n elif stats == \"median\":\r\n return np.median(te_array)\r\n elif isinstance(stats, int):\r\n return np.percentile(te_array, stats)\n```\n\n\n```python\n# Get the estimate and range of true treatment effect\r\ntruth_te_estimate = np.apply_along_axis(true_te, 1, X_test, 1000, \"mean\") # estimate\r\ntruth_te_upper = np.apply_along_axis(true_te, 1, X_test, 1000, 95) # upper level\r\ntruth_te_lower = np.apply_along_axis(true_te, 1, X_test, 1000, 5) # lower level\n```\n\n## Parametric heterogeneity\nFirst of all, we can try to learn a **linear projection of the treatment effect** assuming a polynomial form of $\\theta(X)$. We use the `LinearDML` estimator. 
Since we don't have any priors on these models, we use a generic gradient boosting tree estimators to learn the expected price and demand from the data.\n\n\n```python\nlineardml_estimate = est_dw.estimate_\r\nprint(lineardml_estimate)\n```\n\n *** Causal Estimate ***\n \n ## Identified estimand\n Estimand type: nonparametric-ate\n \n ## Realized estimand\n b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n Target units: ate\n \n ## Estimate\n Mean value: -0.9956103906192235\n \n\n\n\n```python\n# Get treatment effect and its confidence interval\r\nte_pred = est_dw.effect(X_test).flatten()\r\nte_pred_interval = est_dw.effect_interval(X_test)\n```\n\n\n```python\n# Compare the estimate and the truth\r\nplt.figure(figsize=(10, 6))\r\nplt.plot(X_test.flatten(), te_pred, label=\"Sales Elasticity Prediction\")\r\nplt.plot(X_test.flatten(), truth_te_estimate, \"--\", label=\"True Elasticity\")\r\nplt.fill_between(\r\n X_test.flatten(),\r\n te_pred_interval[0].flatten(),\r\n te_pred_interval[1].flatten(),\r\n alpha=0.2,\r\n label=\"95% Confidence Interval\",\r\n)\r\nplt.fill_between(\r\n X_test.flatten(),\r\n truth_te_lower,\r\n truth_te_upper,\r\n alpha=0.2,\r\n label=\"True Elasticity Range\",\r\n)\r\nplt.xlabel(\"Income\")\r\nplt.ylabel(\"Songs Sales Elasticity\")\r\nplt.title(\"Songs Sales Elasticity vs Income\")\r\nplt.legend(loc=\"lower right\")\n```\n\nFrom the plot above, it's clear to see that the true treatment effect is a **nonlinear** function of income, with elasticity around -1.75 when income is smaller than 1 and a small negative value when income is larger than 1. The model fits a quadratic treatment effect, which is not a great fit. But it still captures the overall trend: the elasticity is negative and people are less sensitive to the price change if they have higher income.\n\n\n```python\n# Get the final coefficient and intercept summary\r\nest_dw.summary(feature_names=[\"income\"])\n```\n\n\n\n\n
**Coefficient Results**

|   | point_estimate | stderr | zstat | pvalue | ci_lower | ci_upper |
|:--|:--|:--|:--|:--|:--|:--|
| income | 2.386 | 0.081 | 29.485 | 0.0 | 2.227 | 2.545 |
| income^2 | -0.42 | 0.028 | -15.185 | 0.0 | -0.474 | -0.366 |

**CATE Intercept Results**

|   | point_estimate | stderr | zstat | pvalue | ci_lower | ci_upper |
|:--|:--|:--|:--|:--|:--|:--|
| cate_intercept | -3.003 | 0.049 | -60.738 | 0.0 | -3.1 | -2.906 |
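To make the table concrete, here is a small sketch (not part of the original notebook) that plugs the point estimates above into the quadratic CATE implied by the degree-2 featurizer; the helper name `fitted_elasticity` is ours.

```python
# Point estimates copied from the summary above:
# theta(income) = coef_income * income + coef_income_sq * income^2 + cate_intercept
coef_income, coef_income_sq, cate_intercept = 2.386, -0.42, -3.003

def fitted_elasticity(income):
    """Elasticity implied by the LinearDML coefficient table above."""
    return coef_income * income + coef_income_sq * income**2 + cate_intercept

for inc in (0.5, 1.0, 2.0):
    print(f"income={inc:.1f} -> fitted elasticity ~ {fitted_elasticity(inc):+.2f}")
```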
A linear parametric conditional average treatment effect (CATE) model was fitted: $Y = \\Theta(X)\\cdot T + g(X, W) + \\epsilon$ where for every outcome $i$ and treatment $j$ the CATE $\\Theta_{ij}(X)$ has the form: $\\Theta_{ij}(X) = \\phi(X)' coef_{ij} + cate\\_intercept_{ij}$ where $\\phi(X)$ is the output of the `featurizer` or $X$ if `featurizer`=None. Coefficient Results table portrays the $coef_{ij}$ parameter vector for each outcome $i$ and treatment $j$. Intercept Results table portrays the $cate\\_intercept_{ij}$ parameter.\n\n\n\n`LinearDML` estimator can also return the summary of the coefficients and intercept for the final model, including point estimates, p-values and confidence intervals. From the table above, we notice that $income$ has positive effect and ${income}^2$ has negative effect, and both of them are statistically significant.\n\n## Nonparametric Heterogeneity\nSince we already know the true treatment effect function is nonlinear, let us fit another model using `CausalForestDML`, which assumes a fully **nonparametric estimation of the treatment effect**.\n\n\n```python\n# initiate an EconML cate estimator\r\nest_nonparam = CausalForestDML(model_y=GradientBoostingRegressor(), model_t=GradientBoostingRegressor())\r\n# fit through dowhy\r\nest_nonparam_dw = est_nonparam.dowhy.fit(Y, T, X=X, W=W, outcome_names=[\"log_demand\"], treatment_names=[\"log_price\"],\r\n feature_names=[\"income\"], confounder_names=confounder_names, inference=\"blb\")\n```\n\n WARNING:dowhy.causal_model:Causal Graph not provided. DoWhy will construct a graph based on data inputs.\n INFO:dowhy.causal_graph:If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named \"Unobserved Confounders\" to reflect this.\n INFO:dowhy.causal_model:Model to find the causal effect of treatment ['log_price'] on outcome ['log_demand']\n WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. 
Causal effect cannot be identified perfectly.\n INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True.\n INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[]\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n\n\n\n```python\n# Get treatment effect and its confidence interval\r\nte_pred = est_nonparam_dw.effect(X_test).flatten()\r\nte_pred_interval = est_nonparam_dw.effect_interval(X_test)\n```\n\n\n```python\n# Compare the estimate and the truth\r\nplt.figure(figsize=(10, 6))\r\nplt.plot(X_test.flatten(), te_pred, label=\"Sales Elasticity Prediction\")\r\nplt.plot(X_test.flatten(), truth_te_estimate, \"--\", label=\"True Elasticity\")\r\nplt.fill_between(\r\n X_test.flatten(),\r\n te_pred_interval[0].flatten(),\r\n te_pred_interval[1].flatten(),\r\n alpha=0.2,\r\n label=\"95% Confidence Interval\",\r\n)\r\nplt.fill_between(\r\n X_test.flatten(),\r\n truth_te_lower,\r\n truth_te_upper,\r\n alpha=0.2,\r\n label=\"True Elasticity Range\",\r\n)\r\nplt.xlabel(\"Income\")\r\nplt.ylabel(\"Songs Sales Elasticity\")\r\nplt.title(\"Songs Sales Elasticity vs Income\")\r\nplt.legend(loc=\"lower right\")\n```\n\nWe notice that this model fits much better than the `LinearDML`, the 95% confidence interval correctly covers the true treatment effect estimate and captures the variation when income is around 1. Overall, the model shows that people with low income are much more sensitive to the price changes than higher income people.\n\n# Test Estimate Robustness with DoWhy \n\n### Add Random Common Cause \n\nHow robust are our estimates to adding another confounder? We use DoWhy to test this!\n\n\n```python\nres_random = est_nonparam_dw.refute_estimate(method_name=\"random_common_cause\", num_simulations=5)\r\nprint(res_random)\n```\n\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count+w_random | income\n\n\n Refute: Add a Random Common Cause\n Estimated effect:-0.9594204479199662\n New effect:-0.9574777656374094\n \n\n\n### Add Unobserved Common Cause \n\nHow robust are our estimates to unobserved confounders? Since we assume the model is under unconfoundedness, adding an unobserved confounder might bias the estimates. We use DoWhy to test this!\n\n\n```python\nres_unobserved = est_nonparam_dw.refute_estimate(\r\n method_name=\"add_unobserved_common_cause\",\r\n confounders_effect_on_treatment=\"linear\",\r\n confounders_effect_on_outcome=\"linear\",\r\n effect_strength_on_treatment=0.1,\r\n effect_strength_on_outcome=0.1,\r\n)\r\nprint(res_unobserved)\n```\n\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n\n\n Refute: Add an Unobserved Common Cause\n Estimated effect:-0.9594204479199662\n New effect:0.20029340691678463\n \n\n\n### Replace Treatment with a Random (Placebo) Variable \n\nWhat happens our estimates if we replace the treatment variable with noise? Ideally, the average effect would be wildly different than our original estimate. 
We use DoWhy to investigate!\n\n\n```python\nres_placebo = est_nonparam_dw.refute_estimate(\r\n method_name=\"placebo_treatment_refuter\", placebo_type=\"permute\", \r\n num_simulations=3\r\n)\r\nprint(res_placebo)\n```\n\n INFO:dowhy.causal_refuters.placebo_treatment_refuter:Refutation over 3 simulated datasets of permute treatment\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~placebo+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~placebo+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~placebo+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n WARNING:dowhy.causal_refuters.placebo_treatment_refuter:We assume a Normal Distribution as the sample has less than 100 examples.\n Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size.\n\n\n Refute: Use a Placebo Treatment\n Estimated effect:-0.9594204479199662\n New effect:-0.0009044538846515711\n p value:0.4246571154416484\n \n\n\n### Remove a Random Subset of the Data \n\nDo we recover similar estimates on subsets of the data? This speaks to the ability of our chosen estimator to generalize well. We use DoWhy to investigate this!\n\n\n```python\nres_subset = est_nonparam_dw.refute_estimate(\r\n method_name=\"data_subset_refuter\", subset_fraction=0.8, \r\n num_simulations=3)\r\nprint(res_subset)\n```\n\n INFO:dowhy.causal_refuters.data_subset_refuter:Refutation over 0.8 simulated datasets of size 8000.0 each\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n WARNING:dowhy.causal_refuters.data_subset_refuter:We assume a Normal Distribution as the sample has less than 100 examples.\n Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size.\n\n\n Refute: Use a subset of data\n Estimated effect:-0.9594204479199662\n New effect:-0.9571011772201145\n p value:0.19397906736405435\n \n\n\n# Understand Treatment Effects with EconML \nEconML includes interpretability tools to better understand treatment effects. Treatment effects can be complex, but oftentimes we are interested in simple rules that can differentiate between users who respond positively, users who remain neutral and users who respond negatively to the proposed changes.\n\nThe EconML `SingleTreeCateInterpreter` provides interperetability by training a single decision tree on the treatment effects outputted by the any of the EconML estimators. 
In the figure below, we can see that users in dark red respond strongly to the discount, while users in white respond only lightly to the discount.\n\n\n```python\nintrp = SingleTreeCateInterpreter(include_model_uncertainty=True, max_depth=2, min_samples_leaf=10)\r\nintrp.interpret(est_nonparam_dw, X_test)\r\nplt.figure(figsize=(25, 5))\r\nintrp.plot(feature_names=[\"income\"], fontsize=12)\n```\n\n# Make Policy Decision with EconML \nWe want to make policy decisions that maximize the **revenue** instead of the demand. In this scenario,\n\n\n\\begin{align}\nRev & = Y \\cdot T \\\\\n & = e^{\\log(Y)} \\cdot T\\\\\n & = e^{\\theta(X) \\cdot \\log(T) + f(X,W) + \\epsilon} \\cdot T \\\\\n & = e^{f(X,W) + \\epsilon} \\cdot T^{\\theta(X)+1}\n\\end{align}\n\n\nAs the price decreases, revenue will increase only if $\\theta(X)+1<0$. Thus, we set `sample_treatment_costs=-1` here to learn **which customers should receive a small discount to maximize the revenue**.\n\nThe EconML library includes policy interpretability tools such as `SingleTreePolicyInterpreter` that take in a treatment cost and the treatment effects to learn simple rules about which customers to target profitably. In the figure below, we can see that the model recommends giving a discount to people with income less than $0.985$ and keeping the original price for the others.\n\n\n```python\nintrp = SingleTreePolicyInterpreter(risk_level=0.05, max_depth=2, min_samples_leaf=1, min_impurity_decrease=0.001)\r\nintrp.interpret(est_nonparam_dw, X_test, sample_treatment_costs=-1)\r\nplt.figure(figsize=(25, 5))\r\nintrp.plot(feature_names=[\"income\"], treatment_names=[\"Discount\", \"No-Discount\"], fontsize=12)\n```\n\nNow, let us compare our policy with other baseline policies! Our model says which customers should get a small discount, and for this experiment, we will set a discount level of 10% for those users. Because the model is misspecified, we would not expect good results with large discounts.
Here, because we know the ground truth, we can evaluate the value of this policy.\n\n\n```python\n# define function to compute revenue\r\ndef revenue_fn(data, discount_level1, discount_level2, baseline_T, policy):\r\n policy_price = baseline_T * (1 - discount_level1) * policy + baseline_T * (1 - discount_level2) * (1 - policy)\r\n demand = demand_fn(data, policy_price)\r\n rev = demand * policy_price\r\n return rev\n```\n\n\n```python\npolicy_dic = {}\r\n# our policy above\r\npolicy = intrp.treat(X)\r\npolicy_dic[\"Our Policy\"] = np.mean(revenue_fn(train_data, 0, 0.1, 1, policy))\r\n\r\n## previous strategy\r\npolicy_dic[\"Previous Strategy\"] = np.mean(train_data[\"price\"] * train_data[\"demand\"])\r\n\r\n## give everyone discount\r\npolicy_dic[\"Give Everyone Discount\"] = np.mean(revenue_fn(train_data, 0.1, 0, 1, np.ones(len(X))))\r\n\r\n## don't give discount\r\npolicy_dic[\"Give No One Discount\"] = np.mean(revenue_fn(train_data, 0, 0.1, 1, np.ones(len(X))))\r\n\r\n## follow our policy, but give -10% discount for the group doesn't recommend to give discount\r\npolicy_dic[\"Our Policy + Give Negative Discount for No-Discount Group\"] = np.mean(revenue_fn(train_data, -0.1, 0.1, 1, policy))\r\n\r\n## give everyone -10% discount\r\npolicy_dic[\"Give Everyone Negative Discount\"] = np.mean(revenue_fn(train_data, -0.1, 0, 1, np.ones(len(X))))\n```\n\n\n```python\n# get policy summary table\r\nres = pd.DataFrame.from_dict(policy_dic, orient=\"index\", columns=[\"Revenue\"])\r\nres[\"Rank\"] = res[\"Revenue\"].rank(ascending=False)\r\nres\n```\n\n\n\n\n
|   | Revenue | Rank |
|:--|:--|:--|
| Our Policy | 14.686241 | 2.0 |
| Previous Strategy | 14.349342 | 4.0 |
| Give Everyone Discount | 13.774469 | 6.0 |
| Give No One Discount | 14.294606 | 5.0 |
| Our Policy + Give Negative Discount for No-Discount Group | 15.564411 | 1.0 |
| Give Everyone Negative Discount | 14.612670 | 3.0 |
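As a quick way to read the ranking, here is a tiny sketch (revenue numbers copied from the table above, not computed by the original notebook) that expresses the learned policy's revenue as a relative lift over two of the baselines:

```python
# Revenues copied from the ranking table above
revenues = {
    "Our Policy": 14.686241,
    "Previous Strategy": 14.349342,
    "Give Everyone Discount": 13.774469,
}

for name in ("Previous Strategy", "Give Everyone Discount"):
    lift = 100 * (revenues["Our Policy"] / revenues[name] - 1)
    print(f"Lift over '{name}': {lift:.2f}%")
```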
\n\n\n\n**We beat the baseline policies!** Our policy achieves the highest revenue except for the variant that also raises the price for the No-Discount group. That means our current baseline price is low, but the way we segment the users does help increase revenue!\n\n# Conclusions \n\nIn this notebook, we have demonstrated the power of using EconML and DoWhy to:\n\n* Estimate the treatment effect correctly even when the model is misspecified\n* Test causal assumptions and investigate the robustness of the resulting estimates\n* Interpret the resulting individual-level treatment effects\n* Make a policy decision that beats the previous and baseline policies\n\nTo learn more about what EconML can do for you, visit our [website](https://aka.ms/econml), our [GitHub page](https://github.com/microsoft/EconML) or our [documentation](https://econml.azurewebsites.net/). \n\nTo learn more about what DoWhy can do for you, visit the [GitHub page](https://github.com/microsoft/dowhy) or [documentation](https://microsoft.github.io/dowhy/index.html).\n\n", "meta": {"hexsha": "387d737104715969369c5a00019764c187fad05e", "size": 385971, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/CustomerScenarios/Case Study - Customer Segmentation at An Online Media Company - EconML + DoWhy.ipynb", "max_stars_repo_name": "RomaKoks/EconML", "max_stars_repo_head_hexsha": "b953f51e4e2965514ce1a88d1d717b1065a33d86", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 259, "max_stars_repo_stars_event_min_datetime": "2018-07-15T08:17:18.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-06T20:41:42.000Z", "max_issues_repo_path": "notebooks/CustomerScenarios/Case Study - Customer Segmentation at An Online Media Company - EconML + DoWhy.ipynb", "max_issues_repo_name": "RomaKoks/EconML", "max_issues_repo_head_hexsha": "b953f51e4e2965514ce1a88d1d717b1065a33d86", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 33, "max_issues_repo_issues_event_min_datetime": "2019-01-30T22:11:52.000Z", "max_issues_repo_issues_event_max_datetime": "2019-05-04T19:53:17.000Z", "max_forks_repo_path": "notebooks/CustomerScenarios/Case Study - Customer Segmentation at An Online Media Company - EconML + DoWhy.ipynb", "max_forks_repo_name": "RomaKoks/EconML", "max_forks_repo_head_hexsha": "b953f51e4e2965514ce1a88d1d717b1065a33d86", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 32, "max_forks_repo_forks_event_min_datetime": "2018-06-12T11:22:10.000Z", "max_forks_repo_forks_event_max_datetime": "2019-05-03T18:51:25.000Z", "avg_line_length": 315.5936222404, "max_line_length": 132233, "alphanum_fraction": 0.91596778, "converted": true, "num_tokens": 8372, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5964331319177487, "lm_q2_score": 0.4921881357207956, "lm_q1q2_score": 0.2935573112807121}}
{"text": "```python\nimport os\nos.environ['TRKXINPUTDIR']=\"/global/cfs/cdirs/m3443/data/trackml-kaggle/train_10evts\"\ninput_data_path = \"/global/cfs/projectdirs/m3443/usr/caditi97/iml2021/run_dir\" \nos.environ['TRKXOUTPUTDIR']= input_data_path\n```\n\n\n```python\nimport pkg_resources\nimport yaml\nimport pprint\nimport random\nrandom.seed(1234)\nimport numpy as np\n# import pandas as pd\nimport itertools\nimport matplotlib.pyplot as plt\nimport tqdm\nfrom os import listdir\nfrom os.path import isfile, join\nimport matplotlib.cm as cm\nimport sys\nimport tqdm\nfrom tqdm import tqdm\nimport tqdm.notebook as tq\nimport pandas as pd\n# import sympy\n# from sympy import S, symbols, printing\n# %matplotlib widget\n\nsys.path.append('/global/homes/c/caditi97/exatrkx-iml2020/exatrkx/src/')\n\n# 3rd party\nimport torch\nimport torch.nn.functional as F\nfrom torch_geometric.data import Data\nfrom trackml.dataset import load_event\nfrom pytorch_lightning import Trainer\nfrom pytorch_lightning.callbacks import ModelCheckpoint\n\n\n# local import\nfrom exatrkx import config_dict # for accessing predefined configuration files\nfrom exatrkx import outdir_dict # for accessing predefined output directories\nfrom exatrkx.src import utils_dir\nfrom exatrkx.src import utils_robust\nfrom utils_robust import *\n\n\n# for preprocessing\nfrom exatrkx import FeatureStore\nfrom exatrkx.src import utils_torch\n\n# for embedding\nfrom exatrkx import LayerlessEmbedding\nfrom exatrkx.src import utils_torch\nfrom torch_cluster import radius_graph\nfrom utils_torch import build_edges\nfrom embedding.embedding_base import *\n\n# for filtering\nfrom exatrkx import VanillaFilter\n\n# for GNN\nimport tensorflow as tf\nfrom graph_nets import utils_tf\nfrom exatrkx import SegmentClassifier\nimport sonnet as snt\n\n# for labeling\nfrom exatrkx.scripts.tracks_from_gnn import prepare as prepare_labeling\nfrom exatrkx.scripts.tracks_from_gnn import clustering as dbscan_clustering\n\n# track efficiency\nfrom trackml.score import _analyze_tracks\nfrom exatrkx.scripts.eval_reco_trkx import make_cmp_plot, pt_configs, eta_configs\nfrom functools import partial\n\n\n```\n\n\n```python\nplt.rcParams.update({'axes.titlesize' : 16, 'axes.labelsize' : 16, 'lines.linewidth' : 2, 'lines.markersize' : 10,\n 'xtick.labelsize' : 14, 'xtick.major.width' : 2,\n 'ytick.labelsize' : 14, 'ytick.major.width' : 2,\n 'grid.alpha' : 0.5, \"legend.frameon\" : False, 'legend.fontsize' : 16})\n\n```\n\n\n```python\nfile_path = \"/global/cfs/projectdirs/m3443/usr/caditi97/iml2021/run_dir/feature_store/\"\nevent_id = '1025'\nproc_path = os.path.join(file_path, event_id)\n```\n\n\n```python\ndata = torch.load(proc_path)\ndata\n```\n\n\n\n\n Data(event_file=\"WmuHNL15GeV_NoPileUp_Generic/event000001025\", hid=[202], layerless_true_edges=[2, 181], layers=[202], pid=[202], x=[202, 3])\n\n\n\n\n```python\nevent_path = f\"/global/cfs/projectdirs/m3443/usr/caditi97/iml2021/particles/event00000{event_id}-particles.csv\"\n```\n\n\n```python\npids = data.pid.numpy()\nunq_pids = np.unique(pids)\n```\n\n\n```python\nparticles = pd.read_csv(event_path)\n```\n\n\n```python\nparticles.loc[particles.index[particles['particle_id'] == 4503599644147712]]\n```\n\n\n\n\n
|   | particle_id | particle_type | process | vx | vy | vz | vt | px | py | pz | m | q |
|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|
| 0 | 4503599644147712 | 2212 | 0 | -0.143223 | -1.994651 | -27.815966 | -4.814418 | 0.088711 | 0.05035 | -314.095673 | 0.93827 | 1 |
\n\n\n\n\n```python\ndef get_r(particles,pid):\n pid_idx = particles.index[particles['particle_id'] == pid]\n vx = particles.loc[pid_idx]['vx']\n vy = particles.loc[pid_idx]['vy']\n vz = particles.loc[pid_idx]['vz']\n return pid_idx, vx, vy, vz\n```\n\n\n```python\nfig,ax = plt.subplots(figsize=(10,10))\nfor pid in pids:\n pid_idx, vx, vy, vz = get_r(particles,pid)\n hids = data.x.numpy()[pids == pid]\n for i in range(0,len(hids)):\n x = hids[i,][0]\n y = hids[i,][1]\n z = hids[i,][2]\n r = np.sqrt((x - vx)**2 + (y - vy)**2 + (z - vz)**2)\n if z.size != r.size:\n continue\n ax.scatter(z,r, color='gray', s=3)\nax.set_xlabel('z')\nax.set_ylabel('r')\n\n# true_edges = data.layerless_true_edges\n\n# for idx in range(true_edges.shape[1]):\n# edge1 = true_edges[0][idx]\n# # edge2 = true_edges[1][idx]\n# pid1 = data.pid[edge1]\n# # pid2 = data.pid[edge2]\n# _, vx1, vy1, vz1 = get_r(particles,pid1)\n# # _, vx2, vy2, vz2 = get_r(particles,pid2)\n# r1 = np.sqrt((data.x[edge1][0] - vx1)**2 + (data.x[edge1][1] - vy1)**2 + (data.x[edge1][2] - vz1)**2)\n# ax.scatter(data.x[edge1][2],data.x[edge1][1], color='red',s=1)\n \n\n```\n\n\n```python\n# def e_model(data, embed_ckpt_dir):\n# device = 'cuda' if torch.cuda.is_available() else 'cpu'\n# e_ckpt = torch.load(embed_ckpt_dir, map_location=device)\n# e_config = e_ckpt['hyper_parameters']\n# e_config['clustering'] = 'build_edges'\n# e_config['knn_val'] = 500\n# e_config['r_val'] = 1.7\n# e_model = LayerlessEmbedding(e_config).to(device)\n# e_model.load_state_dict(e_ckpt[\"state_dict\"])\n# e_model.eval()\n# return e_model\n\n# def embedding_hits(e_model,data):\n# with torch.no_grad():\n# # had to move everything to device\n# spatial = e_model(data.x.to(device))\n# e_spatial_build = utils_torch.build_edges(spatial.to(device), e_model.hparams['r_val'], e_model.hparams['knn_val'])\n \n# R_dist = torch.sqrt(data.x[:,0]**2 + data.x[:,2]**2) # distance away from origin...\n# e_spatial = e_spatial_build[:, (R_dist[e_spatial_build[0]] <= R_dist[e_spatial_build[1]])]\n# return e_spatial\n\n# def doublet_metrics(data, e_spatial):\n# e_bidir = torch.cat([data.layerless_true_edges,torch.stack([data.layerless_true_edges[1],\n# data.layerless_true_edges[0]], axis=1).T], axis=-1)\n# # did not have to convert e_spatail to tensor??\n# e_spatial_n, y_cluster = graph_intersection(e_spatial, e_bidir)\n# cluster_true = len(data.layerless_true_edges[0])\n# cluster_true_positive = y_cluster.sum()\n# cluster_positive = len(e_spatial_n[0])\n# pur = cluster_true_positive/cluster_positive\n# eff = cluster_true_positive/cluster_true \n# return pur,eff\n```\n\n\n```python\n# embed_ckpt_dir = '/global/cfs/projectdirs/m3443/usr/caditi97/iml2021/run_dir/embedding_output/ckpt-epoch=11-val_loss=0.78.ckpt'\n\n```\n\n\n```python\n# e_model = e_model(data, embed_ckpt_dir)\n# e_spatial = embedding_hits(e_model,data)\n# # e_spatial_np = e_spatial.cpu().detach().numpy()\n# # e_spatial_np_t = e_spatial_np.T\n# pur,eff = doublet_metrics(data,e_spatial)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "f2b90a5364cb056177952dafc7bb52898b708778", "size": 26345, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/visualize_pileup_tracks.ipynb", "max_stars_repo_name": "caditi97/exatrkx-iml2020", "max_stars_repo_head_hexsha": "f4b1e4438cda7db2d40c8e572b1b682c12781e6c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-24T18:54:55.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-24T18:54:55.000Z", 
"max_issues_repo_path": "notebooks/visualize_pileup_tracks.ipynb", "max_issues_repo_name": "caditi97/exatrkx-iml2020", "max_issues_repo_head_hexsha": "f4b1e4438cda7db2d40c8e572b1b682c12781e6c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/visualize_pileup_tracks.ipynb", "max_forks_repo_name": "caditi97/exatrkx-iml2020", "max_forks_repo_head_hexsha": "f4b1e4438cda7db2d40c8e572b1b682c12781e6c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.4243119266, "max_line_length": 13148, "alphanum_fraction": 0.7510343519, "converted": true, "num_tokens": 2196, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5078118642792044, "lm_q2_score": 0.5774953651858118, "lm_q1q2_score": 0.2932589980076071}}
{"text": "```python\n# %load /Users/facai/Study/book_notes/preconfig.py\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(color_codes=True)\nsns.set(font='SimHei')\nplt.rcParams['axes.grid'] = False\n\nfrom IPython.display import SVG\n\ndef show_image(filename, figsize=None):\n if figsize:\n plt.figure(figsize=figsize)\n\n plt.imshow(plt.imread(filename))\n```\n\nxgboost\u7684\u57fa\u672c\u539f\u7406\u4e0e\u5b9e\u73b0\u7b80\u4ecb\n========================\n\nxgboost\u662f\u4e2a\u5f88\u68d2\u7684\u5de5\u5177\uff0c\u5176\u4f18\u70b9\u5f88\u591a\uff1a\u8fd0\u884c\u901f\u5ea6\uff0c\u652f\u6301\u6b63\u5219\uff0c\u76f4\u63a5\u7528\u635f\u5931\u51fd\u6570\u6307\u5bfc\u6811\u7684\u751f\u6210\u7b49\u7b49\uff0c\u4e0d\u4e00\u800c\u8db3\u3002\u76f8\u8f83spark\u81ea\u5e26\u7684gbdt\uff0cxgboost\u66f4\u9002\u5408\u5de5\u7a0b\u5e94\u7528\u3002\n\n\u672c\u6587\u53c2\u7167\u8bba\u6587 Tianqi Chen and Carlos Guestrin. XGBoost: A Scalable Tree Boosting System\uff0c\u4ecb\u7ecd\u4e0bxgboost\u5bf9\u539f\u59cbgbdt\u7684\u6539\u8fdb\u5730\u65b9\uff0c\u5e76\u8bf4\u660e\u4e0b\u5177\u4f53\u7684\u5b9e\u73b0\u7ec6\u8282\u3002\n\n\u4ee3\u7801\u7248\u672c\u4fe1\u606f\uff1a\n\n```bash\n~/W/xgboost \u276f\u276f\u276f git log -n 1\ncommit 74db1e8867eb9ecfacf07311131c2724dbc3fdbd\nAuthor: Nan Zhu \nDate: Sun Aug 28 21:25:49 2016 -0400\n\n [jvm-packages] remove APIs with DMatrix from xgboost-spark (#1519)\n\n * test consistency of prediction functions between DMatrix and RDD\n\n * remove APIs with DMatrix from xgboost-spark\n\n * fix compilation error in xgboost4j-example\n\n * fix test cases\n```\n\n### 0. \u5927\u7eb2\n\nxgboost\u7684\u8d21\u732e\uff0c\u65e2\u6709\u7406\u8bba\u4e0a\u7684\u62d3\u5c55\uff0c\u4e5f\u6709\u5de5\u7a0b\u4e0a\u7684\u6027\u80fd\u4f18\u5316\u3002\u672c\u6587\u53ea\u5173\u5fc3\u5728\u5176\u5728\u7406\u8bba\u4e0a\u7684\u6539\u8fdb\uff0c\u4e3b\u8981\u662f\u5c06\u6b63\u5219\uff08Regularized\uff09\u5f15\u5165\u635f\u5931\u51fd\u6570\uff0c\u5e76\u4e14\u7528\u635f\u5931\u51fd\u6570\u76f4\u63a5\u6307\u5bfc\u6811\u7684\u751f\u6210\u3002\u6211\u4eec\u7528\u4f20\u7edfGBDT\u505a\u5f15\uff0c\u4e00\u6b65\u6b65\u8d70\u5230xgboost\u3002\n\n\u8fd9\u91cc\u5047\u8bbe\u8bfb\u8005\u5df2\u7ecf\u4e86\u89e3\u51b3\u7b56\u6811\u548cGBDT\u7684\u539f\u7406\uff0c\u4e0d\u4f1a\u8fc7\u591a\u94fa\u57ab\u3002\n\n### 1. 
GBDT\u6846\u67b6\n\n\u4e3a\u4e86\u4fbf\u4e8e\u540e\u7eed\u8bb2\u89e3\uff0c\u9996\u5148\u4f1a\u5f15\u5165\u516c\u5f0f\u63cf\u8ff0\u51b3\u7b56\u6811\u548cGBDT\u6a21\u578b\uff0c\u6d89\u53ca\u7684\u6570\u5b66\u77e5\u8bc6\u662f\u6bd4\u8f83\u7b80\u6613\u7684\uff0c\u4e0d\u8981\u6050\u614c\u3002\n\n\u5bf9\u4e8e\u4e00\u4e2a$J$\u9897\u53f6\u5b50\u7684\u51b3\u7b56\u6811\uff08CART\uff09\uff0c\u53ef\u4ee5\u8868\u793a\u4e3a\u52a0\u6cd5\u6a21\u578b[1]\uff1a\n\n\\begin{equation}\n f(x) = h \\left (x; \\{b_j, R_j\\}^J_1 \\right ) = \\displaystyle \\sum_{j=1}^J b_j \\mathbf{1}(x \\in R_j)\n\\end{equation}\n\n\u5176\u4e2d$R_j$\u662f\u53f6\u5b50\uff0c$b_j$\u662f\u53f6\u5b50\u5bf9\u5e94\u7684\u503c\u3002\n\n\u5bf9\u4e8e\u4e00\u4e2atree ensemble model\uff0c\u5c31\u662f$K$\u9897\u6811\u7684\u7ed3\u679c\u53e0\u52a0\uff1a\n\n\\begin{equation}\n \\hat{y}_i = \\phi(x_i) = \\displaystyle \\sum_{k=1}^K f_k(x_i)\n\\end{equation}\n\n\u5177\u4f53\u5230\u4f20\u7edf\u7684GBDT\uff0c\u5176\u53ef\u63cf\u8ff0\u6210\u6700\u4f18\u95ee\u9898\uff1a\n\n\\begin{align}\n f_m &= \\displaystyle \\operatorname{arg \\, min}_f \\sum_{i=1}^n L \\left ( y_i, \\hat{y}_i + f(x_i) \\right ) \\\\\n &= \\operatorname{arg \\, min} \\mathcal{L}(f)\n\\end{align}\n\n\u4f20\u7edf\u7684\u601d\u8def\uff0c\u4fbf\u662f\u501f\u7528\u6700\u901f\u4e0b\u964d\u7684\u60f3\u6cd5\uff0c\u8ba4\u5b9a\u635f\u5931\u51fd\u6570\u4e2d$f(x_i)$\u662f\u5173\u4e8e\u68af\u5ea6\u7684\u4e00\u4e2a\u51fd\u6570\u3002\u800cxgboost\u6b63\u662f\u5728\u8fd9\u91cc\u505a\u4e86\u65b0\u7684\u6587\u7ae0\uff0c\u5f15\u5165\u6b63\u5219\uff0c\u6cf0\u52d2\u5c55\u5f00\uff0c\u518d\u63c9\u8fdb\u6811\u6a21\u578b\u91cc\u3002\n\n[1]: Friedman - Greedy function approximation: A gradient boosting machine\n\n### 2. xgboost\n\n#### 2.0 \u6b63\u5219\n\nxgboost\u5c06\u6b63\u5219 $\\Omega(f)$ \u5f15\u5165\u8fdb\u635f\u5931\u51fd\u6570 $\\mathcal{L}(f)$\uff0c\u63a7\u5236\u4e86\u6811\u7684\u53f6\u5b50\u6570$\\|R_j\\|$\u548c\u53f6\u5b50\u503c$b_j$\uff0c\u8868\u793a\u5982\u4e0b\uff1a \n\n\\begin{align}\n \\mathcal{L}(f) &= \\displaystyle \\sum_{i=1}^{n} L(y_i, \\hat{y}_i + f(x_i)) + \\Omega(f) \\\\\n \\Omega(f) &= \\gamma \\|R_j\\| + \\frac{1}{2} \\lambda \\|b_j\\|^2\n\\end{align}\n\n\u4f46\u8fd9\u6837\u7eaf\u7406\u8bba\u7684\u516c\u5f0f\u662f\u65e0\u6cd5\u5e94\u7528\u5230\u5de5\u7a0b\u4e0a\u7684\uff0c\u8981\u7528\u5b83\u6307\u5bfc\u6811\u6a21\u578b\u7684\u751f\u6210\uff0c\u5fc5\u987b\u5c06\u5b83\u7684\u8fd9\u4e24\u4e2a\u52a0\u6cd5\u9879$\\sum L(f)$\u548c$\\Omega(f)$\u6574\u5408\uff0c\u624d\u53ef\u4ee5\u76f4\u63a5\u6307\u5bfc\u6a21\u578b\u751f\u6210\u3002\u8981\u6574\u5408\uff0c\u91cd\u70b9\u6709\u4e24\u4e2a\uff0c\u4e00\u662f\u6253\u5f00$\\sum L(f)$\uff0c\u653e\u51fa$f$\uff1b\u4e8c\u662f\u5c06$\\mathcal{L}(f)$\u8f6c\u6362\u6210\u6307\u5bfc\u6811\u751f\u957f\u7684\u51fd\u6570\u3002\n\n#### 2.1 \u6cf0\u52d2\u5c55\u5f00\n\n\u5bf9\u4e8e\u7b2c\u4e00\u4e2a\u95ee\u9898\uff0c\u6253\u5f00 $\\sum L(f)$\uff0cxgboost\u7684\u65b9\u6cd5\u662f\u5229\u7528[\u6cf0\u52d2\u5c55\u5f00](https://zh.wikipedia.org/wiki/\u6cf0\u52d2\u516c\u5f0f)\u6210\u4e8c\u9636\u591a\u9879\u5f0f:\n\n\\begin{align}\n \\mathcal{L} &\\approx \\sum_{i=1}^n \\left [ L(y_i, \\hat{y}_i) + g_i f(x_i) + \\frac{1}{2} h_i f^2(x_i) \\right ] + \\Omega(f) \\\\\n g_i &= \\frac{\\partial \\, L(y_i, \\hat{y}_i)}{\\partial \\hat{y}_i} \\\\\n h_i &= \\frac{\\partial^2 \\, L(y_i, \\hat{y}_i)}{\\partial \\hat{y}^2_i} 
\\\\\n\\end{align}\n\n\u4e3a\u4e86\u4fbf\u4e8e\u7406\u89e3\uff0c\u6211\u5c06\u63a8\u5bfc\u8fc7\u7a0b\u7ec6\u81f4\u8bf4\u4e0b\uff1a\n\n\u5229\u7528\u6cf0\u52d2\u516c\u5f0f\uff0c\u6211\u4eec\u5728\u5355\u70b9\u53ef\u5c06\u4e00\u4e2a\u51fd\u6570$f(x)$\u5c55\u5f00\u4e3a\u9ad8\u9636\u5bfc\u6570\u7684\u53e0\u52a0\uff1a \n\\begin{equation}\n \\sum _{n=0}^{\\infty }{\\frac {f^{(n)}(a)}{n!}}(x-a)^{n}\n\\end{equation}\n\n\u5177\u4f53\u5230\u4e8c\u9636\u5bfc\u6570\u4e3a\n\\begin{equation}\nf(x) \\approx f(a)+{\\frac {f'(a)}{1!}}(x-a)+{\\frac {f''(a)}{2!}}(x-a)^{2}\n\\end{equation}\n\n\u6211\u4eec\u505a\u70b9\u53d8\u5f62\u4ee5\u5229\u4e8e\u7406\u89e3\uff0c\u4ee4$t = \\hat{y}_i + f(x_i)$\uff0c\u5219\n\\begin{align}\n L(y_i, \\hat{y}_i + f(x_i)) &= L(y_i, t)\\\\\n &= L(t) \\quad \\text{\u7ed9\u5b9a$x_i$\uff0c\u5219$y_i$\u662f\u5b9a\u503c\uff0c\u5373\u5e38\u6570}\n\\end{align}\n\n\u63a5\u7740\u5229\u7528\u6cf0\u52d2\u5c06$L(t)$\u5728$\\hat{y}_i$\u70b9\u5c55\u5f00\uff1a\n\n\\begin{align}\nL(t) &\\approx L(\\hat{y}_i) + {\\frac {L'(\\hat{y}_i)}{1!}}(t-\\hat{y}_i)+{\\frac {f''(\\hat{y}_i)}{2!}}(t-\\hat{y}_i)^{2} \\\\\n &= L(\\hat{y}_i) + L'(\\hat{y}_i) f(x_i) + \\frac{1}{2}f''(\\hat{y}_i) f^{2}(x_i) \\\\\n &= L(y_i, \\hat{y}_i) + L'(\\hat{y}_i) f(x_i) + \\frac{1}{2}f''(\\hat{y}_i) f^{2}(x_i) \\\\\n &= L(y_i, \\hat{y}_i) + g_i f(x_i) + \\frac{1}{2} h_i f^2(x_i) \n\\end{align}\n\n#### 2.2 \u5f15\u5165\u6811\u6a21\u578b\n\n\u6cf0\u52d2\u89e3\u51b3\u4e86\u7b2c\u4e00\u4e2a\u95ee\u9898\uff0c\u6253\u5f00$L(f)$\uff0c\u5f97\u5230\uff1a\n\n\\begin{equation}\n \\mathcal{L} \\approx \\sum_{i=1}^n \\left [ L(y_i, \\hat{y}_i) + g_i f(x_i) + \\frac{1}{2} h_i f^2(x_i) \\right ] + \\Omega(f)\n\\end{equation}\n\n\u63a5\u7740\u9700\u8981\u56de\u7b54\u7b2c\u4e8c\u4e2a\u95ee\u9898\uff0c\u5728\u8fd9\u4e4b\u524d\uff0c\u6211\u4eec\u5148\u505a\u70b9\u9884\u5904\u7406\u3002\n\n\u5bf9\u4e8e\u635f\u5931\u51fd\u6570$\\mathcal{L}$\uff0c\u5e38\u6570\u9879$L(y_i, \\hat{y}_i)$\u662f\u53ef\u4ee5\u76f4\u63a5\u820d\u6389\u7684\uff0c\u8bb0\u4e3a\uff1a \n\n\\begin{equation}\n \\mathcal{L} = \\sum_{i=1}^n \\left [ g_i f(x_i) + \\frac{1}{2} h_i f^2(x_i) \\right ] + \\Omega(f)\n\\end{equation}\n\n\u5982\u4f55\u7528\u8fd9\u4e2a\u5916\u90e8\u7684\u635f\u5931\u51fd\u6570\u6307\u5bfc\u6811\u6a21\u578b\u5462\uff1f\n\n\u5728\u8bba\u6587Friedman - Greedy function approximation: A gradient boosting machine\u91cc\u63a8\u5bfcTreeBoost\u65b9\u6cd5\u65f6\u7ed9\u4e86\u4e2a\u5957\u8def\uff1a\n\n1. \u5c06\u51b3\u7b56\u6811\u7684\u6570\u5b66\u6a21\u578b$f(x)$\u4ee3\u5165\u5916\u90e8\u635f\u5931\u51fd\u6570$\\mathcal{L}$\u3002\n2. \u56fa\u5b9a$J$\uff0c\u5f97\u5230\u6700\u4f18\u7684$b_j$\u89e3\u6790\u5f0f\u3002\n3. 
\u53cd\u4ee3$b_j$\u56de$\\mathcal{L}$\uff0c\u5c06\u5b83\u4f5c\u4e3a\u6811\u751f\u6210\u7684\u8bc4\u4ef7\u51fd\u6570\uff08\u5185\u90e8\u635f\u5931\u51fd\u6570\uff09\u3002\n\n\u8fd9\u4e2a\u5957\u8def\u7684\u601d\u60f3\u662f\uff0c\u5bf9\u4e8e\u6811\u6a21\u578b\u6765\u8bf4\uff0c\u4e3b\u8981\u6709\u4e24\u4e2a\u53c2\u6570\uff1a\u53f6\u5b50\u6570\u3001\u53f6\u5b50\u503c\u3002\u901a\u8fc7\u56fa\u5b9a\u53f6\u5b50\u6570\uff0c\u5c31\u53ef\u4ee5\u5229\u7528\u5916\u90e8\u635f\u5931\u51fd\u6570$\\mathcal{L}(f)$\u5bfb\u4f18\u5230\u6700\u4f73\u7684\u53f6\u5b50\u503c\u8ba1\u7b97\u89e3\u6790\u5f0f\u3002\u5c06\u53c2\u6570\u53cd\u4ee3\uff0c\u5916\u90e8\u635f\u5931\u51fd\u6570\u5c31\u53d8\u6210\u53ea\u548c$x$\u76f8\u5173$\\mathcal{L}(x)$\uff0c\u8fd9\u65f6\u5b83\u5c31\u53ef\u4ee5\u4f5c\u4e3a\u6811\u751f\u957f\u7684\u8bc4\u4ef7\u51fd\u6570\u3002\u8fd9\u4e2a\u601d\u8def\u633a\u5de7\u5999\u7684\uff0c\u53ef\u80fd\u6709\u70b9\u7ed5\u3002\n\n\n##### 2.2.0 \u4ee3\u5165\u6811\u6a21\u578b\n\n\u6211\u4eec\u5c31\u5c06\u6811\u6a21\u578b$f(x)$\u7684\u6570\u5b66\u4ee3\u5165\uff0c\n\n\\begin{align}\n \\mathcal{L} &= \\sum_{i=1}^n \\left [ g_i f(x_i) + \\frac{1}{2} h_i f^2(x_i) \\right ] + \\Omega(f) \\\\\n &= \\sum_{i=1}^n \\left [ g_i \\sum_{j=1}^J b_j \\mathbf{1}(x_i \\in R_j) + \\frac{1}{2} h_i (\\sum_{j=1}^J \\color{red}{b_j} \\mathbf{1}(x_i \\in R_j))^{\\color{red}{2}} \\right ] + \\Omega(f) \\\\ \n &= \\sum_{i=1}^n \\left [ g_i \\sum_{j=1}^J b_j \\mathbf{1}(x_i \\in R_j) + \\frac{1}{2} h_i \\sum_{j=1}^J \\color{red}{b_j^2} \\mathbf{1}(x_i \\in R_j) \\right ] + \\Omega(f) \\quad \\text{\u56e0\u4e3a$x_i$\u53ea\u5c5e\u4e8e\u4e00\u4e2a$R_j$} \\\\\n &= \\sum_{j=1}^J \\color{blue}{\\sum_{i=1}^n \\mathbf{1}(x_i \\in R_j)} g_i b_j + \\frac{1}{2} \\color{blue}{\\sum_{j=1}^J \\sum_{i=1}^n \\mathbf{1}(x_i \\in R_j)} h_i b_j^2 + \\Omega(f) \\quad \\text{\u4e58\u6cd5\u4ea4\u6362} \\\\\n &\\text{\u4ee4} I_j = \\{ i \\, | \\, x_i \\in R_j \\} \\quad \\text{\u5373\u5c5e\u4e8e$R_j$\u7684\u5168\u90e8\u4e0b\u6807$i$} \\\\\n &= \\sum_{j=1}^J \\color{blue}{\\sum_{i \\in I_j}} g_i b_j + \\frac{1}{2} \\sum_{j=1}^J \\color{blue}{\\sum_{i \\in I_j}} h_i b_j^2 + \\Omega(f) \\\\\n &= \\sum_{j=1}^J ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} \\sum_{j=1}^J (\\sum_{i \\in I_j} h_i) b_j^2 + \\Omega(f) \\quad \\text{\u4e58\u6cd5\u5206\u914d\u5f8b}\\\\\n &= \\sum_{j=1}^J ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} \\sum_{j=1}^J (\\sum_{i \\in I_j} h_i) b_j^2 + \\gamma \\|R_j\\| + \\frac{1}{2} \\lambda \\|b_j\\|^2 \\quad \\text{\u4ee3\u5165\u6b63\u5219$\\Omega(f)$}\\\\\n &= \\sum_{j=1}^J ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} \\sum_{j=1}^J (\\sum_{i \\in I_j} h_i) b_j^2 + \\gamma \\|R_j\\| + \\frac{1}{2} \\lambda \\sum_{j=1}^J b_j^2 \\\\\n &= \\sum_{j=1}^J \\left ( ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} (\\sum_{i \\in I_j} h_i) b_j^2 + \\frac{1}{2} \\lambda b_j^2 \\right ) + \\gamma \\|R_j\\| \\\\\n &= \\sum_{j=1}^J \\left ( ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} (\\sum_{i \\in I_j} h_i + \\lambda) b_j^2 \\right ) + \\gamma \\|R_j\\| \\\\\n\\end{align}\n\n##### 2.2.1 $b_j$\u6700\u4f18\u89e3\u6790\u5f0f\n\n\\begin{align}\n b_j &= \\operatorname{arg \\, min}_{b_j} \\mathcal{L} \\\\\n &= \\operatorname{arg \\, min}_{b_j} \\sum_{j=1}^J \\left ( ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} (\\sum_{i \\in I_j} h_i + \\lambda) b_j^2 \\right ) + \\gamma \\|R_j\\| \\\\\n &\\approx \\sum_{j=1}^J \\operatorname{arg \\, min}_{b_j} \\left ( ( \\sum_{i \\in I_j} g_i ) \\color{red}{b_j} + \\frac{1}{2} (\\sum_{i \\in I_j} h_i + \\lambda) 
\\color{red}{b_j^2} \\right ) + \\gamma \\|R_j\\| \n\\end{align}\n\n\u5168\u5c40\u6700\u4f18\u4e0d\u597d\u6c42\uff0c\u6211\u4eec\u8f6c\u800c\u6c42\u5c40\u90e8\u6700\u4f18\u3002\u6ce8\u610f\u5230\u6bcf\u4e2a\u5c40\u90e8\u9879\u662f\u4e00\u4e2a\u4e8c\u9879\u5f0f\uff0c\u5b83\u7684\u6700\u5c0f\u70b9\u53ef\u7528\u516c\u5f0f\u76f4\u63a5\u5957\u51fa\uff1a\n\n\\begin{align}\n b^*_j &= - \\frac{\\sum_{i \\in I_j} g_i}{\\sum_{i \\in I_j} h_i + \\lambda} \\\\\n\\end{align}\n\n##### 2.2.2 \u6811\u751f\u957f\u7684\u8bc4\u4ef7\u51fd\u6570\n\n\u5c06$b_j$\u91cd\u4ee3\u56de$\\mathcal{L}$\u5f97\u5230\u8bc4\u4ef7\u51fd\u6570\uff1a\n\n\\begin{align}\n \\mathcal{L} &= - \\frac{1}{2} \\sum_{j=1}^J \\frac{(\\sum_{i \\in I_j} g_i)^2}{\\sum_{i \\in I_j} h_i + \\lambda} + \\gamma \\|R_j\\| \\\\\n &= - \\frac{1}{2} H + \\gamma T\n\\end{align}\n\n\u4e8e\u662f\u5f97\u5230\u6811\u751f\u6210\u7684\u5206\u5272\u4f9d\u636e\uff1a\n\n\\begin{align}\n \\mathcal{L}_{\\text{split}} &= \\mathcal{L} - \\mathcal{L}_L - \\mathcal{L}_R \\\\\n &= \\frac{1}{2} (H_L + H_R - H) + \\gamma (T - T_L - T_R) \\\\\n &= \\frac{1}{2} (H_L + H_R - H) - \\gamma \\\\\n\\end{align}\n\n#### 2.3 \u5c0f\u7ed3\n\n\u81f3\u6b64\uff0cxgboost\u5bf9GBDT\u6846\u67b6\u7684\u4e3b\u8981\u6539\u8fdb\u5c31\u8bf4\u660e\u5b8c\u4e86\u3002\u6211\u4eec\u68b3\u7406\u4e0b\uff0c\u5bf9\u4e8e\u4e00\u4e2a\u635f\u5931\u51fd\u6570 $L$\uff0c\u7528\u5b83\u89e3\u51fa\u8001\u6a21\u578b\u7684\u8f93\u51fa $\\hat{y}_i$ \u7684\u4e00\u9636\u5bfc $g_i$ \u548c\u4e8c\u9636\u5bfc $h_i$\u3002\u6709\u4e86\u8fd9\u4e24\u9636\u5bfc\u6570\uff0c\u5c31\u53ef\u4ee5\u505a\u4e3a\u8bc4\u4ef7\u51fd\u6570\u76f4\u63a5\u6307\u5bfc\u51b3\u7b56\u6811\u7684\u751f\u957f\uff0c\u540c\u65f6\u7b97\u51fa\u53f6\u5b50\u7684\u503c\uff0c\u4e8e\u662f\u5c31\u5f97\u5230\u4e86\u65b0\u6811\uff0c\u518d\u52a0\u56de\u8001\u6a21\u578b\u5f97\u5230\u65b0\u6a21\u578b\u3002\n\n\u6211\u4eec\u53ef\u4ee5\u770b\u5230\uff0c\u539f\u59cb\u7684GBDT\u53ea\u662f\u5c06Gradient Boost\u6846\u67b6\u4e2d\u7684\u5b66\u4e60\u6a21\u578b\u6307\u5b9a\u662f\u51b3\u7b56\u6811\uff0c\u8fd9\u65f6\u8fd8\u662f\u4e00\u4e2a\u901a\u7528\u6846\u67b6\u3002\u800cTreeBoost\u548cxbgoost\uff0c\u5219\u66f4\u8fdb\u4e00\u6b65\uff0c\u5c06\u51b3\u7b56\u6811\u7684\u6570\u5b66\u6a21\u578b\u4ee3\u5165Gradeint Boost\uff0c\u4ece\u800c\u89e3\u51fa\u76f4\u63a5\u6307\u5bfc\u51b3\u7b56\u6811\u751f\u957f\u7684\u89e3\u6790\u5f0f\u3002\u4e5f\u5c31\u8bf4\uff0c\u628a\u5916\u90e8\u7684\u635f\u5931\u51fd\u6570\uff0c\u5f15\u5165\u5230\u4e86\u51b3\u7b56\u6811\u7684\u5efa\u7acb\u8fc7\u7a0b\u4e2d\uff0c\u8fd9\u5176\u5b9e\u5c31\u662f\u4ece\u901a\u7528\u5230\u5b9a\u5236\u7684\u8fc7\u7a0b\u3002\u6240\u4ee5\uff0c\u53ef\u4ee5\u8bf4\uff0cTreeBoost\u548cxgboost\u7684\u672c\u8d28\uff0c\u5c31\u662f\u5bf9\u6811\u6a21\u578b\u8fdb\u884c\u5b9a\u5236\u4f18\u5316\u7684Gradient Boost\u3002\n\n### 3 \u5de5\u7a0b\u5b9e\u73b0\n\n#### 3.0 \u8bad\u7ec3\n\n\u6b63\u5982\u524d\u9762\u5c0f\u7ed3\u6240\u8a00\uff0cxgboost\u7684\u8bad\u7ec3\u4e3b\u8981\u662f\u4e09\u6b65\uff1a\n\n1. \u8001\u6a21\u578b\u7684\u9884\u6d4b\u503c$\\hat{y}_i$\uff1b\n2. \u8ba1\u7b97\u51fa\u4e24\u9636\u5bfc\u6570$g_i$\u548c$h_i$\uff1b\n3. 
\u5c06\u5bfc\u6570\u4fe1\u606f\u4f20\u5165\uff0c\u6307\u5bfc\u51b3\u7b56\u6811\u751f\u957f\u3002\n\n\u4e3b\u4f53\u4ee3\u7801\u4f4d\u4e8e `src/learner.cc`\uff0c\u5982\u4e0b\uff1a\n\n```C++\n288 void UpdateOneIter(int iter, DMatrix* train) override {\n289 CHECK(ModelInitialized())\n290 << \"Always call InitModel or LoadModel before update\";\n291 if (tparam.seed_per_iteration || rabit::IsDistributed()) {\n292 common::GlobalRandom().seed(tparam.seed * kRandSeedMagic + iter);\n293 }\n294 this->LazyInitDMatrix(train);\n295 this->PredictRaw(train, &preds_);\n296 obj_->GetGradient(preds_, train->info(), iter, &gpair_);\n297 gbm_->DoBoost(train, this->FindBufferOffset(train), &gpair_);\n298 }\n```\n\n\u7406\u8bba\u6bd4\u5de5\u7a0b\u597d\u8bb2\uff0c\u5de5\u7a0b\u5b9e\u73b0\u4f1a\u6709\u5927\u91cf\u7ec6\u8282\uff0c\u76f8\u5f53\u7e41\u7410\u3002\u73b0\u5728\u65f6\u95f4\u4e0d\u591a\uff0c\u6ca1\u6709\u5fc3\u529b\u9762\u9762\u4ff1\u5230\uff0c\u4e0d\u518d\u51c6\u5907\u7ec6\u5199\u4e86\u3002\n\n\u6211\u753b\u4e86\u7b80\u7248\u7684UML\u56fe\uff0c\u53ef\u80fd\u4e0d\u51c6\u786e\uff0c\u611f\u5174\u8da3\u7684\u670b\u53cb\u53ef\u4ee5\u53c2\u8003\u5b83\uff0c\u81ea\u884c\u4ece\u8fd9\u4e2a\u5165\u53e3\u53bb\u8ffd\u4e00\u904d\u3002\n\n\n```python\nSVG(\"./res/Main.svg\")\n```\n\n\n\n\n \n\n \n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "14d8584e854584f3755a0f4a6d4a1a97d5fc1d23", "size": 61365, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "machine_learning/tree/gbdt/xgboost/intro.ipynb", "max_stars_repo_name": "ningchi/book_notes", "max_stars_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-12-31T12:10:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T15:49:34.000Z", "max_issues_repo_path": "machine_learning/tree/gbdt/xgboost/intro.ipynb", "max_issues_repo_name": "ningchi/book_notes", "max_issues_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-12-05T13:04:14.000Z", "max_issues_repo_issues_event_max_datetime": "2017-12-07T16:24:50.000Z", "max_forks_repo_path": "machine_learning/tree/gbdt/xgboost/intro.ipynb", "max_forks_repo_name": "ningchi/book_notes", "max_forks_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-06-27T07:19:28.000Z", "max_forks_repo_forks_event_max_datetime": "2017-11-19T08:57:35.000Z", "avg_line_length": 165.4043126685, "max_line_length": 48123, "alphanum_fraction": 0.6119449197, "converted": true, "num_tokens": 4715, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5350984286266115, "lm_q2_score": 0.5467381519846138, "lm_q1q2_score": 0.29255872599718435}}
{"text": "```python\nimport sys\nimport numpy as np\n```\n\n\n```python\nsys.path.insert(0, '../')\nsys.path.insert(0, '../detmodel/')\n```\n\n\n```python\nimport elements\n```\n\n\n```python\nimport si_mu_late\n```\n\n\n```python\ndet = si_mu_late.setup_detector('../cards/atlas_mm.yml')\n```\n\n\n```python\ndet\n```\n\n\n\n\n \n\n\n\n\n```python\nclass conf():\n def __init__(self, nevs,ismu,muxmin,muxmax,muymin,muymax,muamin,muamax,bkgr):\n self.nevs=nevs\n self.ismu=ismu\n self.muxmin=muxmin\n self.muxmax=muxmax\n self.muymin=muymin\n self.muymax=muymax\n self.muamin=muamin\n self.muamax=muamax\n self.bkgr=bkgr\n```\n\n\n```python\nmyconf = conf(1,1,0,0,0,0,0,0,100000)\n```\n\n\n```python\nsignals = si_mu_late.event(det,myconf)\n```\n\n 1\n 1\n 1\n 1\n 1\n 1\n 1\n 1\n\n\n\n```python\nsignals\n```\n\n\n```python\nimport sympy\n```\n\n\n```python\nl1 = sympy.Line3D( sympy.Point3D(-1/2, 0, 1), sympy.Point3D(-1/2, 5, 1) )\nl2 = sympy.Line3D(sympy.Point3D(5, -5, 1), sympy.Point3D(5, 5, 1))\n```\n\n\n```python\nl1.intersection(l2)\n```\n\n\n```python\ndet.get_signals()\n```\n\n\n```python\ndet.planes[1].z\n```\n\n\n```python\nmu = elements.Muon(0,0,0,0)\n```\n\n\n```python\ndet.planes[3].pass_muon(mu)\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "dd291176e741e05aaa0791c79b3b25a20cf3e44e", "size": 4354, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/.ipynb_checkpoints/R_and_D-checkpoint.ipynb", "max_stars_repo_name": "rateixei/si-mu-lator", "max_stars_repo_head_hexsha": "53505146c3a9098138d52999079cd45b92903bf9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-10-06T16:09:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-02T14:12:45.000Z", "max_issues_repo_path": "notebooks/R_and_D.ipynb", "max_issues_repo_name": "rateixei/si-mu-lator", "max_issues_repo_head_hexsha": "53505146c3a9098138d52999079cd45b92903bf9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2021-11-17T19:23:03.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-15T15:08:46.000Z", "max_forks_repo_path": "notebooks/R_and_D.ipynb", "max_forks_repo_name": "rateixei/si-mu-lator", "max_forks_repo_head_hexsha": "53505146c3a9098138d52999079cd45b92903bf9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.6991869919, "max_line_length": 90, "alphanum_fraction": 0.4735875057, "converted": true, "num_tokens": 485, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.6187804337438501, "lm_q2_score": 0.47268347662043286, "lm_q1q2_score": 0.29248728668674245}}
{"text": "# snapReactors\n\nCopyright (c) Dan Kotlyar and CoRE group\n\n# Materials Container\n\n* This container stores materials information such as isotopic definition, abundances, uncertainties and material properties.\n* The container also stores, modifies or creates new h5 files for data storage.\n\n## Code: \n\n\n```python\nimport numpy as np\nimport sympy as sp\nfrom snapReactors.containers.materials import Material\nfrom snapReactors.containers.property import Constant, Table, Correlation\n```\n\n## Defining a new material\n\n### Define with Materials container\n\n1. Give name of material\n2. Give type of uncertainty, must be in ``Enum.UTYPE`` which are:\n - ABSOLUTE \n - RELATIVE\n - PERCENTAGE\n - NONE\n3. Give the composition type, must be in ``Enum.CTYPE`` which are:\n - RELATIVE\n - WEIGHT\n - ATOMIC\n4. Give the isotopes that define a material as a np.array\n5. Give the abundances for each isotope as a np.array\n6. The uncertainty value, reference, description, and properties are left as optional parameters.\n 1. Note that properties must be under ``ALLOWED_PROPERTIES``\n\n\n\n```python\nmat1 = Material(\"mat1\", 'NONE', 'WEIGHT', np.array([]), np.array([]), None, reference=None, description='This is an example', _properties=None)\nprint(mat1)\n```\n\n {'matName': ['mat1'], 'utype': [], 'ctype': [], 'abundances': [array([], dtype=float64)], 'isotopes': [array([], dtype=float64)], 'unc': [None], 'reference': [None], 'description': ['This is an example'], '_properties': [None]}\n\n\n#### Properties may be inputted as a list of properties or a single property\n\n\n```python\np1 = Constant(id='cv', value=1, unit= \"J/kg/K\", unc=None, ref=None, description=None)\nmat2 = Material(\"mat2\", 'NONE', 'WEIGHT', np.array([]), np.array([]), None, reference=None, description='This is an example', _properties=p1)\nprint(mat2)\n\np2 = Table('h', np.array([1, 2, 3, 4]), 'W/K/m^2', np.array([100, 200, 300, 400]), 'K', unc = np.array([.01, .01, .01, .01]), dependency2=None, dependencyUnit2=None, ref=None, description=None)\ncorr1 = \"T**2 + P + 1/2\"\nsyms1 = \"T, P\"\np3 = Correlation('h', corr1, syms1, 'W/K/m^2', np.array([300, 600]), 'K', np.array([10, 20]), 'Pa', unc=None, ref=None, description=None)\nproperties = [p2, p3]\nmat3 = Material(\"mat3\", 'NONE', 'WEIGHT', np.array([]), np.array([]), None, reference=None, description='This is an example', _properties=properties)\nprint(mat3)\n```\n\n {'matName': ['mat2'], 'utype': [], 'ctype': [], 'abundances': [array([], dtype=float64)], 'isotopes': [array([], dtype=float64)], 'unc': [None], 'reference': [None], 'description': ['This is an example'], '_properties': []}\n {'matName': ['mat3'], 'utype': [], 'ctype': [], 'abundances': [array([], dtype=float64)], 'isotopes': [array([], dtype=float64)], 'unc': [None], 'reference': [None], 'description': ['This is an example'], '_properties': [, ]}\n\n\n### Reading in isotopic defintion through a text file\n\nThe ``Materials.readData(filename)`` method is used to read in all relevant materials information through a text file. There are several rules to keep in mind for the structure of the text file:\n1. The ``Material Name`` must be indicated at the beginning of every material data section with the following format:\n - Material Name: Example Name\n2. The ``ctype``, ``utype``, and ``Number of isotopes`` must be indicated before the beginning of ``Isotopic Definition`` although their order doesn't matter. \n3. 
The ``Isotopic Definition`` must have a line between itself and where the definition begins. In the example below a dashed line is used to indicate this seperation.\n4. For each input a colon is used to seperate the keyword and input, for example:\n - utype: NONE\n5. The location of ``Reference`` and ``Description`` for a specific material must be placed before the beginning of the next ``Material Name`` if present. \n\nMaterial Property data can be read in by adding a ``Properties`` section. \n1. The location of ``Properties`` must be before the beginning of the next ``Material Name`` and is indicated with curly brackets:\n ```\n Properties: {\n \n }\n ```\n2. The formatting for the ``Properties`` information only requires there to be a colon in between the keyword and value, and that each keyword be on its own line\n ```\n type = const, table, corr\n id = property id\n unit = SI or imperial\n must have a \":\" between keyword and value i.e \"keyword: value\"\n each keyword must on its own line i.e \n keyword1: val1 \n keyword: val2\n\n array type inputs are denoted using \"[]\" i.e [1, 2] or [1 2] \n multidimensional arrays can be denoted using the \";\" matlab style i.e\n [1 2; 3 4] or [1, 2;\n 3, 4]\n or by using a newline i.e\n [1 2\n 3 4] \n Supports comments by preceeding a line with \"%\"\n Examples are included below\n }\n ```\n3. Structure of ``Properties`` input is outlined in Property Container documentation.\n\nOptional parameters such as reference or uncertainty values can be left out, however, warnings will be highlighted to the user. Two examples for the Material Property data are shown below.\n\n#### Example text file shown below\n\n\n```python\ntext_file = open('test.txt')\nfile_content = text_file.read()\nprint(file_content)\ntext_file.close()\n```\n\n Material Name: hasteC\n ctype: RELATIVE\n utype: NONE\n Number of isotopes: 33\n Isotopic Definition:\n -------------------\n 6000.03c 0.0007 \n 27059.03c 0.0125 \n 24050.03c 0.006952\n 24052.03c 0.1340624\n 24053.03c 0.0152016\n 24054.03c 0.003784\n 42092.03c 0.0249033\n 42094.03c 0.0156179\n 42095.03c 0.0269841\n 42096.03c 0.0283441\n 42097.03c 0.0162894\n 42098.03c 0.0412964\n 42100.03c 0.0165648\n 23050.03c 0.0000075\n 23051.03c 0.0029925\n 74180.03c 0.00048\n 74182.03c 0.106\n 74183.03c 0.05724\n 74184.03c 0.12256\n 74186.03c 0.11372\n 26054.03c 0.003360875\n 26056.03c 0.05275855\n 26057.03c 0.001218425\n 26058.03c 0.00016215\n 25055.03c 0.01 \n 14028.03c 0.00645561\n 14029.03c 0.00032795\n 14030.03c 0.00032795\n 28058.03c 0.1220600887\n 28060.03c 0.0470180183\n 28061.03c 0.0020438407\n 28062.03c 0.0065166585\n 28064.03c 0.0016596008\n \n Properties: {\n %property values for material\n %type = const, table, corr\n %id = property id\n %unit = SI or imperial\n %must have a \":\" between keyword and value i.e \"keyword: value\"\n %each keyword must on its own line i.e \n % keyword1: val1 \n % keyword: val2\n %array type inputs are denoted using \"[]\" i.e [1, 2] or [1 2] \n %multidimensional arrays can be denoted using the \";\" matlab style i.e\n % [1 2; 3 4] or [1, 2;\n % 3, 4]\n % or by using a newline i.e\n % [1 2\n 3 4] \n %Supports comments by preceeding a line with \"%\"\n %Examples are included below\n \n type:const\n id:cp\n unit:SI \n value:[1]\n unc:[.01]\n \n type:table \n id:h \n unit:imperial \n ref:NAA-SR-6160 \n dep1unit:K \n dep1values: [1 2]\n dep2unit:Pa \n dep2values: [.1 .2]\n value: [1.1 2.1\n 3.1 4.1]\n unc: [1 1\n 1 1]\n \n type:corr\n id:r \n unit:SI \n ref:NAA-SR-3120\n corr:T+P**2\n deps:T,P\n 
    dep1unit:K 
    dep2unit:Pa
    dep1range: [300,900] 
    dep2range: [16,48]
    }
    Reference: NA-Examples
    Description: This is an example input file
    
    Material Name: hasteB
    ctype: RELATIVE
    utype: NONE
    Number of isotopes: 33
    Isotopic Definition:
    --------------------
    6000.03c 0.0007 
    27059.03c 0.0125 
    24050.03c 0.006952
    24052.03c 0.1340624
    24053.03c 0.0152016
    24054.03c 0.003784
    42092.03c 0.0249033
    42094.03c 0.0156179
    42095.03c 0.0269841
    42096.03c 0.0283441
    42097.03c 0.0162894
    42098.03c 0.0412964
    42100.03c 0.0165648
    23050.03c 0.0000075
    23051.03c 0.0029925
    74180.03c 0.00048
    74182.03c 0.106
    74183.03c 0.05724
    74184.03c 0.12256
    74186.03c 0.11372
    26054.03c 0.003360875
    26056.03c 0.05275855
    26057.03c 0.001218425
    26058.03c 0.00016215
    25055.03c 0.01 
    14028.03c 0.00645561
    14029.03c 0.00032795
    14030.03c 0.00032795
    28058.03c 0.1220600887
    28060.03c 0.0470180183
    28061.03c 0.0020438407
    28062.03c 0.0065166585
    28064.03c 0.0016596008
    
    Properties: {
    %property values for material
    %type = const, table, corr
    %id = property id
    %unit = SI or imperial
    %must have a ":" between keyword and value i.e "keyword: value"
    %each keyword must on its own line i.e 
    % keyword1: val1 
    % keyword: val2
    %array type inputs are denoted using "[]" i.e [1, 2] or [1 2] 
    %multidimensional arrays can be denoted using the ";" matlab style i.e
    % [1 2; 3 4] or [1, 2;
    % 3, 4]
    % or by using a newline i.e
    % [1 2
    3 4] 
    %Supports comments by preceeding a line with "%"
    %Examples are included below
    
    type:const
    id:cp
    unit:SI 
    value:[1]
    unc:[.01]
    
    type:table 
    id:h 
    unit:imperial 
    ref:NAA-SR-6160 
    dep1unit:K 
    dep1values: [1 2]
    dep2unit:Pa 
    dep2values: [.1 .2]
    value: [1.1 2.1
    3.1 4.1]
    unc: [1 1
    1 1]
    
    type:corr
    id:r 
    unit:SI 
    ref:NAA-SR-3120
    corr:T+P**2
    deps:T,P
    dep1unit:K 
    dep2unit:Pa
    dep1range: [300,900] 
    dep2range: [16,48]
    }
    Reference: NA-Examples
    Description: This is an example input file


#### Materials definition returned by readData


```python
mats = Material.readData('test.txt')
print(mats)
```

    [, ]


    c:\Users\iaguirre6\Documents\GitHub\docTEST\snapReactors\containers\property.py:669: InputFileSyntaxWarning: reference not given for cp const property @ line: 19
      warnings.warn("reference not given for {} {} property @"
    c:\Users\iaguirre6\Documents\GitHub\docTEST\snapReactors\containers\property.py:868: InputFileSyntaxWarning: uncertainty not given for r corr property @ line: 38
      warnings.warn("uncertainty not given for {} {} property @"


## Adding properties to materials

1. The properties must be from the following list: ['cp', 'cv', 'g', 'h', 'my', 'pr', 'r', 's', 'tc', 'v']


```python
p4 = Constant(id='cv', value=1, unit="J/kg/K", unc=None, ref=None, description=None)
print(p4)

mat3.addproperty([p4])
print(mat3)
```

    {'id': 'cv', 'dtype': , 'vtype': , 'value': array([1]), 'valueUnit': 'J/kg/K', 'unc': None, 'dependents': None, 'dependentsUnit': None, 'description': None, 'ref': None}
    {'matName': ['mat3'], 'utype': [], 'ctype': [], 'abundances': [array([], dtype=float64)], 'isotopes': [array([], dtype=float64)], 'unc': [None], 'reference': [None], 'description': ['This is an example'], '_properties': [, , ]}
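As a worked illustration of the ``Properties`` formatting rules above, the short sketch below walks the "keyword: value" lines of a single property entry, skips "%" comment lines, and converts bracketed values (with ";" row separators) into NumPy arrays. This is only a minimal, hypothetical helper written for this note; it is not the snapReactors parser behind ``Material.readData``, and it does not handle the newline-style multidimensional arrays.

```python
# Minimal sketch (NOT the snapReactors reader): parse "keyword: value" lines
# of one Properties entry, honoring "%" comments and "[...]" array values.
import numpy as np

def parse_property_block(lines):
    """Collect keyword/value pairs from the text lines of one property entry."""
    entry = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('%'):      # skip blanks and "%" comments
            continue
        key, _, value = line.partition(':')       # "keyword: value"
        key, value = key.strip(), value.strip()
        if value.startswith('['):                 # array input, e.g. [1 2; 3 4]
            rows = value.strip('[]').split(';')
            entry[key] = np.array(
                [[float(x) for x in row.replace(',', ' ').split()] for row in rows])
        else:
            entry[key] = value                    # plain keyword, kept as string
    return entry

example = ["%constant heat capacity", "type:const", "id:cp",
           "unit:SI", "value:[1]", "unc:[.01]"]
print(parse_property_block(example))
```

Running the snippet prints a plain dictionary for the constant ``cp`` entry; the real reader additionally validates the keywords and emits the ``InputFileSyntaxWarning`` messages shown above when optional fields such as ``ref`` or ``unc`` are missing.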
{"text": "# SageMaker Factorization Machine(FM) \ubc0f KNN\uc73c\ub85c \ucd94\ucc9c \uc2dc\uc2a4\ud15c \uad6c\ucd95\ud558\uae30\n\n\n**\ubcf8 \ub178\ud2b8\ubd81\uc740 \uae40\ub300\uadfc\ub2d8\uc758 \ub178\ud2b8\ubd81 \ub0b4\uc6a9\uc744 \ub9ce\uc774 \uac00\uc9c0\uace0 \uc654\uc2b5\ub2c8\ub2e4.** \n\uae30\uc874 movielens \ub370\uc774\ud0c0\ub97c \uc5d0\uc5b4\ub77c\uc778 \ub9ac\ubdf0 \ub370\uc774\ud130\ub85c \uad50\uccb4\ud558\uace0 \uc774\ub97c \uac00\uacf5\ud558\ub294 \ubd80\ubd84\uc744 \ucd94\uac00 \ud558\uc600\uc2b5\ub2c8\ub2e4.\n\uc54c\uace0\ub9ac\uc998 \uc124\uba85 \ubc0f \ucf54\ub4dc\ub294 \uac70\uc758 \uc720\uc0ac\ud569\ub2c8\ub2e4.\n- https://github.com/daekeun-ml/recommendation-workshop/blob/master/0.Recommendation-System-FM-KNN.ipynb\n\n*\ubcf8 \ub178\ud2b8\ubd81 \uc608\uc81c\ub294 AWS \uba38\uc2e0 \ub7ec\ub2dd \ube14\ub85c\uadf8\uc5d0 \uae30\uace0\ub41c \uae00\ub4e4\uc5d0 \uae30\ubc18\ud558\uc5ec SageMaker\uc758 Factorization Machine(FM)\uc73c\ub85c \ucd94\ucc9c \uc2dc\uc2a4\ud15c\uc744 \uad6c\ucd95\ud558\ub294 \ubc29\ubc95\uc744 \uc124\uba85\ud569\ub2c8\ub2e4.*\n\nReferences\n- [Build a movie recommender with factorization machines on Amazon SageMaker](https://aws.amazon.com/ko/blogs/machine-learning/build-a-movie-recommender-with-factorization-machines-on-amazon-sagemaker/)\n- [Amazon SageMaker Factorization Machines \uc54c\uace0\ub9ac\uc998\uc744 \ud655\uc7a5\ud558\uc5ec \ucd94\ucc9c \uc2dc\uc2a4\ud15c \uad6c\ud604\ud558\uae30](https://aws.amazon.com/ko/blogs/korea/extending-amazon-sagemaker-factorization-machines-algorithm-to-predict-top-x-recommendations/)\n- [Factorization Machine \ub17c\ubb38](https://www.csie.ntu.edu.tw/~b97053/paper/Rendle2010FM.pdf)\n\n#### \uc774 \ub178\ud2b8\ubd81\uc740 \uc544\ub798\uc640 \uac19\uc740 \uc791\uc5c5\uc744 \ud569\ub2c8\ub2e4.\n1. FM \uc54c\uace0\ub9ac\uc998 \ud559\uc2b5, \ubc30\ud3ec \ubc0f \ucd94\ub860\n- \uc5d0\uc5b4\ub77c\uc778 \ub9ac\ubdf0 \ub370\uc774\ud130 \ub2e4\uc6b4\ub85c\ub4dc\n- \ub370\uc774\ud130 \uc804\ucc98\ub9ac\n - \ud544\uc694\ud55c \uceec\ub7fc\uc744 \ucd94\ucd9c\ud558\uc5ec interaction data set \ud615\ud0dc\ub85c \ub9cc\ub4e4\uae30 (timestamp, user_id, item_id, rating)\n - \uc0c1\ud638\uc791\uc6a9\uc774 5\uac1c \uc774\uc0c1\uc778 \uc720\uc800\ub9cc \ucd94\ucd9c\n - \uc720\uc800, \uc544\uc774\ud15c\uc744 \ubb38\uc790\uc5f4\uc5d0\uc11c \uc22b\uc790\ub85c \ubcc0\ud658\n - \ub370\uc774\ud130\ub97c \uc720\uc800 \uae30\uc900\uc73c\ub85c \uc2dc\uac04\uc21c\uc73c\ub85c \uc815\ub82c\ud55c \ud6c4\uc5d0 \ud559\uc2b5, \uac80\uc99d\uc73c\ub85c 8:2 \ub85c \ubd84\ub9ac\n - \ud559\uc2b5\uc758 \uc778\uc2a4\ud134\uc2a4 * \uceec\ub7fc\uc758 \uac2f\uc218 \ub300\ube44 \uc2e4\uc81c \uc0ac\uc6a9\ud55c \ub370\uc774\ud130 \uc601\uc5ed\uc744 \uc54c\uc544 \ubd05\ub2c8\ub2e4. 
      (i.e., the sparsity)
    - Convert to a sparse matrix with one-hot encoding
    - Create the y label: 1 if the rating is 8 or higher, 0 otherwise
    - Convert to protobuf format and store in S3
- Train the FM model (as a binary classification problem)
    - Updated for the SageMaker Python SDK 2.0
- Deploy the model endpoint
- Implement a custom serializer (newly implemented for the SageMaker Python SDK 2.0)

2. Train and deploy a KNN model using the FM model parameters, and run batch inference
- Download the FM model artifact
- Extract the model parameters ($w_{0}, \mathbf{w}, \mathbf{V}$)
- Rebuild the training/inference data as follows
    - Item latent matrix, used to train the k-NN model: $a_i = concat(V, \; w)$
    - User latent matrix, used for inference: $a_u = concat(V, \; 1)$
- Train the KNN model
- Run batch inference
- Produce Top-K airline recommendations


# 1. Training and deploying an FM model on the airline dataset
---

In the MovieLens data, user_id and item_id are already numeric and can be used for training directly.
The airline data, by contrast, is given entirely as strings, so here we convert these strings to integers with a label encoder before use.
This example keeps only users who left at least five reviews, which leaves 748 users and 293 airlines, each rated on a scale of 1 to 10.

### Factorization Machine
---

### Overview

Typical recommendation problems take a matrix with users as rows, items as columns, and ratings as values, and apply matrix factorization to it; however, it is difficult to incorporate the many metadata features found in real-world data directly into that formulation.
The Factorization Machine (FM) algorithm extends the idea of matrix factorization so that metadata features are taken into account as well, and it automatically models the interactions between features at linear computational complexity, which greatly reduces the feature-engineering effort.

### Description

To account for the various metadata features, we can one-hot encode the users and items, concatenate any additional features, and solve a linear regression problem of the form `f(user, item, additional features) = rating`.

However, plain linear regression cannot capture interactions between features, so a term that models these pairwise interactions has to be added, turning the problem into a polynomial regression:

$$
\begin{align} \hat{y}(\mathbf{x}) = w_{0} + \sum_{i=1}^{d} w_{i} x_{i} + \sum_{i=1}^d \sum_{j=i+1}^d x_{i} x_{j} w_{ij}, \;\; \mathbf{x} \in \mathbb{R}^d \tag{1}
\end{align}
$$
Here $d$ is the number of features and $\mathbf{x} \in \mathbb{R}^d$ is the feature vector of a single sample.

However, most recommendation datasets are sparse, which leads to cold-start problems, and the computation becomes very expensive as the number of additional features grows (e.g., with 60,000 users, 5,000 items, and 5,000 additional features, a 70,000 × 70,000 interaction weight matrix would have to be estimated).

FM addresses these issues with matrix factorization: the interaction between each feature pair (e.g., user and item) is expressed as a dot product of latent vectors, and the formula is rearranged to reduce the computational complexity from $O(kd^2)$ to $O(kd)$. (Starting from equation (2), a few additional algebraic steps reduce the complexity to linear; see the paper for details, and the short NumPy sketch further below for a numerical check.)
$$
\begin{align}
\hat{y}(\mathbf{x}) = w_{0} + \sum_{i=1}^{d} w_i x_i + \sum_{i=1}^d\sum_{j=i+1}^d x_{i} x_{j} \langle\mathbf{v}_i, \mathbf{v}_j\rangle \tag{2}
\end{align}
$$

$$
\begin{align}
\langle \mathbf{v}_i , \mathbf{v}_{j} \rangle = \sum_{f=1}^k v_{i,f} v_{j,f},\; k: \text{dimension of the latent features} \tag{3}
\end{align}
$$

The model above is called a 2-way (degree = 2) FM. A generalized d-way FM also exists, but the 2-way variant is the one most commonly used, and SageMaker's FM is likewise a 2-way FM.

The parameter tuple trained by FM is ($w_{0}, \mathbf{w}, \mathbf{V}$), where
- $w_{0} \in \mathbb{R}$: the global bias
- $\mathbf{w} \in \mathbb{R}^d$: the weights of the feature vector $\mathbf{x}$
- $\mathbf{V} \in \mathbb{R}^{d \times k}$: the feature embedding matrix, whose i-th row is $\mathbf{v}_i$

As the equations above show, FM has a closed form and linear time complexity, which makes it well suited to recommendation problems with many users, items, and metadata features.
Typical training methods are gradient descent, ALS (Alternating Least Squares), and MCMC (Markov Chain Monte Carlo); AWS trains FM with gradient descent on a deep-learning architecture built with the MXNet framework.


```python
import sagemaker
import sagemaker.amazon.common as smac
from sagemaker import get_execution_role
# from sagemaker.predictor import json_deserializer
from sagemaker.deserializers import JSONDeserializer
# from sagemaker.amazon.amazon_estimator import get_image_uri
import numpy as np
from scipy.sparse import lil_matrix
import pandas as pd
import boto3, io, os, csv, json
```

## Downloading the airline review data

The dataset contains user reviews of airlines.
Each review rates comfort, cleanliness, drinks, food, lavatories, staff service, and an overall score; here we use the "overall" rating.
- Skytrax User Reviews Dataset (August 2nd, 2015)
    - https://github.com/quankiquanki/skytrax-reviews-dataset


```python
import os
data_dir = "airlines_data"
os.makedirs(data_dir, exist_ok=True)
!cd $data_dir && wget https://raw.githubusercontent.com/quankiquanki/skytrax-reviews-dataset/master/data/airline.csv
```

    --2020-10-03 09:21:10--  https://raw.githubusercontent.com/quankiquanki/skytrax-reviews-dataset/master/data/airline.csv
    Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.228.133
    Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.228.133|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 34752262 (33M) [text/plain]
    Saving to: ‘airline.csv.3’
    
    airline.csv.3       100%[===================>]  33.14M  16.9MB/s    in 2.0s    
    
    2020-10-03 09:21:15 (16.9 MB/s) - ‘airline.csv.3’ saved [34752262/34752262]



```python
import pandas as pd
airline_df = pd.read_csv(data_dir + '/airline.csv', parse_dates=['date'])
print("airline_df: ", airline_df.shape)
airline_df.head()
```

    airline_df:  (41396, 20)
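To make the complexity reduction behind equations (2) and (3) concrete, the sketch below evaluates a 2-way FM prediction on toy data, once with the naive $O(kd^2)$ double loop and once with the rearranged $O(kd)$ form, and checks that the two agree. The sizes, random parameters, and the `fm_predict` helper are illustrative assumptions for this note only; this is not SageMaker's internal FM implementation.

```python
# Illustration only: 2-way FM prediction from equations (1)-(3), computed in
# O(kd) via 0.5 * sum_f [ (V^T x)_f^2 - ((V**2)^T x**2)_f ] and cross-checked
# against the naive pairwise sum.
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 4                     # d features, k latent dimensions (toy sizes)
w0 = 0.1                         # global bias
w = rng.normal(size=d)           # linear weights
V = rng.normal(size=(d, k))      # feature embedding matrix (one row per feature)
x = rng.random(size=d)           # a single dense feature vector

def fm_predict(x, w0, w, V):
    """2-way FM prediction using the linear-time reformulation."""
    linear = w0 + w @ x
    interactions = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return linear + interactions

# Naive O(k d^2) double loop over feature pairs, used only as a check.
naive = w0 + w @ x + sum(
    x[i] * x[j] * (V[i] @ V[j]) for i in range(d) for j in range(i + 1, d))

print(fm_predict(x, w0, w, V), naive)
print(np.isclose(fm_predict(x, w0, w, V), naive))  # True
```

The same identity is what makes FM practical on the sparse one-hot matrices built later in this notebook: only the non-zero entries of $\mathbf{x}$ contribute to the sums.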