{"text": "# Parameter Selection for the Functionally Assembled Terrestrial Ecosystem Simulator\n\n__Summary__

Numerical, process-based models that simulate tropical forest ecosystem dynamics, such as the Functionally Assembled Terrestrial Ecosystem Simulator (FATES), have been proposed as a way to improve climate change projections. However, parameterizing these complex models is challenging due to their numerous parameters and interconnected, non-linear relationships. Here, we identify three high-performing parameter sets for use in future FATES simulations by quantitatively evaluating the performance of nearly 300 simulations, each run with a unique parameter set, against observations at a tropical forest test site.

__Motivation__
\nGreenhouse gas emissions from human activities are warming the Earth with potentially catastrophic implications for human health. Projections of twenty-first century warming are critical for climate change mitigation and adaptation efforts. Unfortunately, the numerical models used to project future climate differ in their predictions of warming even for the same greenhouse gas emissions scenario. This variability in projections across models stems largely from uncertainty about how Earth's vegetation will respond to climate change. In particular, predictions of tropical forest responses to climate change must be improved, as these forests exert strong control over global climate. Numerical, process-based models of vegetated ecosystem dynamics, such as the Functionally Assembled Terrestrial Ecosystem Simulator (FATES; Fisher et al., 2015, 2018), have been proposed as a way of improving projections of tropical forests and thus future climate. However, parameterizing these complex models remains challenging due to their numerous parameters and interconnected, non-linear relationships.\n\n__Goal__

The goal of this analysis is to identify model parameter sets that allow the FATES model to best match observations of tropical forest structure and functioning at a test site.

__Methods__
\nThis analysis identifies high-performing parameter sets for use in future experiments by quantitatively evaluating the performance of nearly 300 different FATES parameter sets against observations at a tropical forest test site, Barro Colorado Island, Panama. \n\n_Parameter ensemble simulations_
\nPrior to this analysis, we ran an ensemble of FATES simulations at our test site, Barro Colorado Island, Panama. Each simulation in the ensemble was initialized with a unique parameter set but was otherwise set-up identically to all other simulations. The 287 parameter sets we tested differed in 12 key plant trait parameters, which were sampled from observed distributions when possible (following Koven et al., _in prep_). All simulations were forced with repeating meteorological data for Barro Colorado Island, Panama, from the years 2003 to 2016 (Faybishenko et al., 2018). See Kovenock (2019) for further details of the parameter ensemble simulations.\n\nPlant structure and functioning are sensitive to atmospheric carbon dioxide concentration, which increased over the observational time period. We therefore repeat all parameter ensemble simulations and our analysis below for two carbon dioxide concentrations that approximately bookend the observational time period (367ppm and 400ppm carbon dioxide).\n\n_Parameter set evaluation and selection_

This code uses the above ensemble of simulations to quantitatively evaluate the performance of each parameter set against observations of six variables at our tropical forest test site. These variables characterize ecosystem structure (leaf area index, above-ground biomass, basal area) and functioning (gross primary productivity, latent heat flux, sensible heat flux). Observations come from the following sources: leaf area index from Detto et al. (2018); above-ground biomass from Meakem et al. (2018), Feeley et al. (2007), and Baraloto et al. (2013); basal area from Condit et al. (2017, 2012), Condit (1998), and Hubbell et al. (1999); and gross primary productivity, latent heat flux, and sensible heat flux from Koven et al. (in prep). As some of these data sets require permission to use or are not yet publicly available, observational data sets are not included for download here.

We use two performance metrics $-$ error rate and normalized root mean square error $-$ to quantify each parameter set's performance. The error rate measures how frequently the model output falls outside the observed range for each variable. The normalized root mean square error measures the distance between the model output and the observed mean, relative to the observed range for each variable. The expectation is that high-performing parameter sets will result in model output that falls within the range of observations (low error rate) and near the observed mean (low normalized root mean square error). After evaluating a parameter set's performance for each individual variable, we calculate the weighted average performance across all variables for that parameter set. To ensure that our selection of a high-performing parameter set is robust to weighting method, we consider three different weighting approaches: even weighting, weighting favoring structural variables, and weighting accounting for correlation between individual variable performance.

Lastly, we identify parameter sets for use in future experiments by assigning an overall rank to each parameter set based on its performance across both performance metrics, three weighted averaging approaches, and two background carbon dioxide concentrations.

__Results__
\nThis analysis identifies three high-performing parameter sets for use in future FATES simulations. We recommend the highest-performing parameter set for use in primary experiments and the next two highest-performing parameter sets for use in parameter sensitivity tests. These high-performing parameter sets are publicly available through the University of Washington ResearchWorks digital repository at http://hdl.handle.net/1773/43779. The performance of these parameter sets is reported in further detail in the [Results](#Results) section below and in Kovenock (2019).\n\n\n## Analysis\n## Step 1: Load libraries\n\n\n```python\nimport netCDF4 as nc4\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport scipy\nfrom scipy import stats\n```\n\n## Step 2: Load and preprocess data\n\nHere we load the data and calculate time series of annual mean values for six ecosystem characteristics for all the simulations and observations. This code section returns two multidimensional arrays, one for model output and one for observations, that contain annual mean values for each variable organized by parameter set and background carbon dioxide concentration.\n\nThe six ecosystem characteristics included in these arrays are:\n\n- Leaf area index,\n- Above-ground biomass,\n- Basal area,\n- Gross primary productivity,\n- Latent heat flux, and\n- Sensible heat flux.\n\n### 2.1 Model output\n\nThis part of the code loads the model output for all FATES simulations in our parameter ensemble. Then it calculates time series of annual mean values for the six variables we use to evaluate the performance of each parameter set.\n\n\n```python\ndef annmeants(filepath,var,varfiletype,nyrs,conv_factor):\n ''' Calculate time series of annual means for a model output variable.\n :param filepath (str): file path to data file\n :param var (str): name of variable to call from filename\n :param nyrs (int, float): number of years to analyze\n :param conv_factor (float): conversion factor specific to variable specified by var\n :return: 2-D array containing annual mean time series (ensemble member, nyrs)\n '''\n \n # If model output is stored as monthly average for all tree sizes,\n # need to calculate annual mean. 
\n if varfiletype == 0:\n \n # Load monthly time series\n # For all cases except latent heat flux (FLH):\n if var != 'FLH':\n mthts_temp = nc4.Dataset(filepath).variables[var][:,:,0]\n \n # For the special case of latent heat flux:\n elif var == 'FLH':\n # Sum of three terms:\n mthts_temp = (nc4.Dataset(filepath).variables['FCTR'][:,:,0] \n + nc4.Dataset(filepath).variables['FGEV'][:,:,0] \n + nc4.Dataset(filepath).variables['FCEV'][:,:,0])\n \n \n # Calculate annual mean time series for nyrs and convert units if necessary\n annmeants = np.nanmean(np.reshape((mthts_temp[:,int(-1*nyrs*12):] * conv_factor),\n (mthts_temp.shape[0],-1,12)),axis=2)\n \n # Else if model output is stored as annual mean but structured by tree size,\n # need to sum across tree sizes.\n elif varfiletype == 1:\n # Calculate annual mean time series for entire ecosystem by summing across tree sizes\n annmeants = np.squeeze(np.nansum((\n nc4.Dataset(filepath).variables[var + '_SCLS'][:,int(-1*nyrs):,:]),\n axis=2))\n \n mthts_temp = None\n \n return annmeants\n```\n\nFirst, we will specify the information required to load and calculate annual mean time series of model output for each simulation, including file paths and names, variables to analyze, and conversion factors.\n\n\n```python\n# Filepath\nmodel_filepath = 'data/'\n\n# Filenames\n# {1} = carbon dioxide concentration specified by CO2level;\n# {2} = variable file type specified by varfiletype.\nmodel_filenames =[\n 'fates_clm5_fullmodel_bci_parameter_ensemble_1pft_slaprofile_{}_v001.I2000Clm50FatesGs.Cdf9b02d-Fb178808.2018-07-27.h{}.ensemble.sofar.nc',\n 'fates_clm5_fullmodel_bci_parameter_ensemble_1pft_slaprofile_{}_v001.I2000Clm50FatesGs.Cdf9b02d-Fb178808.2018-07-27.h{}.ensemble.sofar.nc']\n\n# Background carbon dioxide (CO2) concentration\nCO2levels = ['367ppm', '400ppm']\n\n# Variable list for model output\nvarlist = ['TLAI','AGB','BA','GPP','FLH','FSH']\n\n# Data structure for each variable in varlist:\n# 0 = monthly data for entire ecosystem;\n# 1 = annual data structured by tree size structure.\nvarfiletype = [0,1,1,0,0,0]\n\n# Conversion factor for each variable in varlist:\nvarconv = [1, 1, 1, 86400*365, 1, 1]\n\n# Variable units after applying conversion factor for each variable in varlist:\nvarunits = ['$m^2/m^2$','$kgC/m^2$','$m^2/ha$','$gC/m^2/yr$','$W/m^2$','$W/m^2$']\n\n# Number of years of model output to analyze\nnyrs = 50\n\n# Number of parameter sets in ensemble\nnens = nc4.Dataset(model_filepath + model_filenames[0].format(CO2levels[0],varfiletype[0])).variables[varlist[0]].shape[0]\n```\n\nNext, we create a multidimensional array that contains a time series of annual means for each variable, parameter set and background carbon dioxide concentration.\n\n\n```python\n# Return model_data (float): a 4-D array of annual mean values for\n # each variable with dimensions\n # (CO2levels, varlist, nens, nyrs)\n # with the following indexing:\n # CO2levels: \n # 0 = 367ppm CO2; \n # 1 = 400ppm CO2.\n # varlist: \n # 0 = Leaf area index;\n # 1 = Above ground biomass;\n # 2 = Basal area;\n # 3 = Gross primary productivity;\n # 4 = Latent heat flux;\n # 5 = Sensible heat flux.\n # nens: \n # 0:286 = parameter set index.\n\n# Initialize array\nmodel_data = np.zeros([len(CO2levels), len(varlist), nens, nyrs])\n\nfor c in range(len(CO2levels)):\n for v in range(len(varlist)):\n \n filepath = model_filepath + model_filenames[c].format(CO2levels[c],varfiletype[v])\n \n model_data[c, v, :, :] = annmeants(filepath, varlist[v], varfiletype[v], nyrs, 
varconv[v])\n\n filepath = None\n```\n\n### 2.2 Observations\n\nThis code section loads data for the observations we will use to evaluate the performance of each parameter set in our FATES parameter ensemble. It then calculates annual mean values when necessary.\n\n#### Leaf area index\n\nLeaf area index observations come from Detto et al. (2018) and were made using hemispherical photographs taken approximately monthly from January 2015 to August 2017 at 188 locations at our test site, Barro Colorado Island, Panama. We calculate annual mean values from the monthly means reported by Detto et al. (2018). (Note that monthly data consists of spatial means across photograph locations, rather than temporal means.) In order to use all the data available, we calculate two time series of annual means - one starting in from January and the second starting from September. Data was captured from Detto et al. (2018) Figure 7a using GraphClick software.\n\n\n```python\n# Return obs_data_lai (float): 2-D array of annual mean leaf area index\n# (sample number, years) using the following index coding for\n# sample number: \n# 0 = sample months starting from January; \n# 1 = sample months starting from September.\n\n\n# File path\nfilepath = 'data/LAI_Detto2018Obs.csv'\n\n# Monthly spatial means\nlai_mthts = np.asarray([col[2] for col in (pd.read_csv(filepath)).values])\n\n# Specify start months for observations\nstartmonth_list = np.array([1,9])\n\n# Number of annual means per sample\nnyears_lai = round(len(lai_mthts)/12-0.5)\n\n# Initialize array\nobs_data_lai = np.zeros([len(startmonth_list), nyears_lai])\n\n# Calculate annual means and fill array\nfor x in range(len(startmonth_list)):\n obs_data_lai[x,:] = np.nanmean(np.reshape(lai_mthts[startmonth_list[x]-1:24+startmonth_list[x]-1],(nyears_lai,12)),1)\n```\n\n#### Above-ground carbon biomass\n\nAbove-ground carbon biomass estimates were calculated from a 1995 census survey at our test site, Barro Colorado Island, by Meakem et al. (2018). They estimate above-ground biomass using two different methods (the standard and Chave allometric formulations). We use values from these two methods to represent uncertainty in the observational estimate. \n\nAlternatively, we can approximate above-ground carbon biomass from estimates of total biomass (rather than just carbon biomass) from census survey data reproted in Baraloto et al. (2013) and Feeley et al. (2007) for the following years: 1985, 1990,1995, 2000, and 2005. This alternative method yields similar results and can be implemented by setting use_alt_agb_obs to 1 in the code below.\n\n\n```python\n# Return obs_data_agb (float): vector of above-ground\n# carbon biomass (KgC/m2) indexed by allometric \n# formulation:\n# 0 = standard;\n# 1 = Chave.\n\nuse_alt_agb_obs = 0;\n\nfilepath = 'data/BCI_biomass.csv'\n\nif use_alt_agb_obs == 0:\n # Above-ground carbon biomass from Meakem et al. 2018 (MgC/ha) \n cbiomass_obs_Mgha = np.asarray([col[2] for col in (pd.read_csv(filepath)).values])[-2:,]\n # Convert from MgC/ha to KgC/m2\n ha_to_m2 = 1/10000\n Mg_to_kg = 1000\n obs_data_agb = cbiomass_obs_Mgha * ha_to_m2 * Mg_to_kg\n \nelif use_alt_agb_obs == 1:\n # Total aboveground biomass (Mg biomass/ha) from \n # Baraloto et al. (2013) and Feeley et al. (2007)\n agb_biomass_obs = np.asarray([col[1] for col in (pd.read_csv(filepath)).values])[:-2,]\n # Estimate of carbon biomass from total biomass using\n # following Meakem et al. 
2018\n obs_data_agb_v2 = agb_biomass_obs*0.47\n obs_data_agb = obs_data_agb_v2\n```\n\n#### Basal area\n\nWe use estimates of the median basal area for our test site Barro Colorado Island, Panama, from census surveys conducted in 1999, 2001, 2006, and 2011 by Condit (1998), Condit et al. (2012, 2017), and Hubbell et al. (1999).\n\n\n```python\n# Return obs_data_ba (float): vector containing basal area (m^2/ha)\n# indexed by census year in chronological order\n\nfilepath = 'data/census_bmks_bci_171208.nc'\n\n# Load basal area median values for the last 5 census dates\n# Data structured as follows:\n# [census number, tree diameter size class,...\n# distribution percentiles (0.05,0.5,0.95)]\nbasalarea_bysize = nc4.Dataset(filepath).variables['basal_area_by_size_census'][-5:,:,1]\n\n# Sum across tree size classes\nobs_data_ba = np.nansum(basalarea_bysize,1)\n```\n\n#### Gross primary productivity, latent heat flux, and sensible heat flux\n\nEstimates of gross primary productivity, latent heat fluxes, and sensible heat fluxes were calculated from fluxtower eddy covariance measurements made from July 2012 to August 2017 at Barro Colorado Islana by Koven et al. (in prep). To use all available data in our analysis, we calculate two versions of the annual means time series, one beginning in July and the second beginning in September.\n\n\n```python\ndef annmeants_fluxobs(mthts,startmth):\n ''' Calculate time series of annual means from monthly fluxtower estimates.\n :param mthts (float): 2-D array containing fluxtower observations (years, months)\n :param startmth (int): number corresponding to start month for this annual mean time series\n (e.g. 7 = start with July, 9 = start with Sept)\n :return: vector containing annual mean time series of size (nyrs) \n '''\n # Discard number of months specified by dif\n mthts_dif = np.reshape(mthts,(1,-1))[:,startmth-1:startmth-1-12]\n \n # Calculate annual mean time series\n annmeants = np.nanmean(np.reshape(mthts_dif,(5,12)),axis=1)\n \n return annmeants\n```\n\n\n```python\n# Return obs_data_flux (float): 3-D array containing annual mean values\n# indexed as (sample number, variable, year).\n# sample number: \n# 0 = sample months starting from July; \n# 1 = sample months starting from September.\n# variable:\n# 0 = gross primary productivity;\n# 1 = latent heat flux;\n# 2 = sensible heat flux.\n\n# Load observations\nGPP_data = np.load('data/fluxdata_GPP.npy')\nLH_data = np.load('data/fluxdata_LH.npy')\nSH_data = np.load('data/fluxdata_SH.npy')\nfluxdata_mask= np.load('data/fluxdata_mask.npy')\n\n# Apply mask to arrays\nGPP_monthyear = np.ma.masked_array(GPP_data, mask=fluxdata_mask)\nLH_monthyear = np.ma.masked_array(LH_data, mask=fluxdata_mask)\nSH_monthyear = np.ma.masked_array(SH_data, mask=fluxdata_mask)\n\n# Specify start months for observations\nstartmonth_list = np.array([7,9])\n\n# Number of years\nnyrs_obsflux = len(annmeants_fluxobs(GPP_monthyear,startmonth_list[0]))\n\n# Initialize array\nobs_data_flux = np.zeros([len(startmonth_list), 3, nyrs_obsflux])\n\n# Fill array\nfor x in range(len(startmonth_list)):\n obs_data_flux[x,0,:] = annmeants_fluxobs(GPP_monthyear,startmonth_list[x])\n obs_data_flux[x,1,:] = annmeants_fluxobs(LH_monthyear,startmonth_list[x])\n obs_data_flux[x,2,:] = annmeants_fluxobs(SH_monthyear,startmonth_list[x])\n```\n\n## Step 3: Quanitfy performance of each parameter set\n\nIn this section we evaluate the model performance against observations for each parameter set and background carbon dioxide level. 
As we would like to identify parameter sets that robustly perform well regardless of performance metric, we use two metrics to evaluate performance: error rate and normalized root mean square error (NRMSE). We calculate both metrics for each variable. Then, we take a weighted average across all variables for each metric and parameter set.

### Performance Metric #1: Error Rate

The error rate measures the percent of model annual means that fall outside the observed range for each variable and ensemble member. The observed range is defined as the difference between the maximum and minimum observed values. To account for relatively small sample sizes and potential measurement error within the observations, we extend the observational range by 10% in both directions.


```python
def error_rate(model_ts,obs_ts,dg):
    '''Function calculates the error rate for each simulation
    as the percentage of model output annual means that fall
    outside the observed range for the variable.
    param model_ts (float): a 2-D array containing the time series of annual means
    for a given variable indexed by (parameter set, years)
    param obs_ts (float): a vector or 2-D array containing the observed time series for
    the given variable indexed as (years) or (sample number, years)
    param dg (float): a scalar specifying the degradation level for observed range
    as a fraction
    return error_rate (float): a vector containing the error rates indexed by parameter set
    (nens)'''
    
    # Number of ensemble members
    nens = model_ts.shape[0]
    
    # Empty array to fill
    error_rate = np.zeros([nens])

    # Observed minimum and maximum
    obs_min = np.nanmin(obs_ts)
    obs_max = np.nanmax(obs_ts)
    
    error_rate = 100*np.nansum(np.where((model_ts <= obs_min*(1-dg)) | (model_ts >= obs_max*(1+dg)),1,0),1)/model_ts.shape[1]
    return error_rate
```


```python
# Return error_rate_array (float): a 3-D array containing error rates
# indexed by (CO2 level, varlist, parameter set)

# Specify observed data arrays in order corresponding to varlist
obs_data_list = [obs_data_lai,obs_data_agb,obs_data_ba,
                 obs_data_flux[:,0,:],obs_data_flux[:,1,:],
                 obs_data_flux[:,2,:]]

# Degradation level for the observational range
# as fraction, not percent
dg = np.array([0.10])

# Calculate error rate
error_rate_array = np.zeros([len(CO2levels),len(varlist), nens])
for i in range(len(CO2levels)):
    for j in range(len(varlist)):
        error_rate_array[i,j,:] = error_rate(model_data[i,j,:,:], obs_data_list[j], dg)
```

### Performance Metric #2: Normalized root mean square error (NRMSE)

The normalized root mean square error (NRMSE) measures the distance between the model output for each parameter set and the observed mean value. We normalize the root mean square error by the observed range for each variable so that we can compare NRMSE values across variables. In other words, normalizing the error tells us whether the distance from the observation is large compared to the spread in observations for each variable. This becomes especially useful when we take a weighted average NRMSE across all variables in the next code section. We calculate this metric for each variable, parameter set, and background carbon dioxide level. The NRMSE is calculated as follows:

\begin{equation}\Large
NRMSE = \frac{ \sqrt{ \sum_{k=1}^n \frac{(x_{model,k} - \bar{X}_{obs})^2}{n}}} {x_{obs,max} - x_{obs,min}}
\end{equation}

where $NRMSE$ is the normalized root mean square error for a single variable (e.g.
leaf area index) and parameter set, $n$ is the number of years of model output, $x_{model,k}$ is the model annual mean for year $k$, $\\bar{X}_{obs}$ is the overall annual mean of the observed values, and $x_{obs,max}$ and $x_{obs,min}$ are the maximum and minimum observed values, respectively. When mulitple annual mean time series were sampled for an observed variable (e.g. observations for leaf area index spanned a partial year), we calculate the difference between the observed mean and model output using the time series that minimizes this difference.\n\n\n```python\ndef nrmse(model_ts,obs_ts):\n '''Function calculates the normalized root mean square error(NRMSE)\n for each model ensemble member. When multiple observation time series \n are available, this function calculates the NRMSE for each time series \n and then selects the lowest of those NRMSE values.\n param model_ts (float): a 2-D array containing the time series of annual means\n for a given variable for all model ensemble members \n (ensemble member, years)\n param obs_ts (float): a vecotr or 2-D array containing the observed time series for\n the given variable indexed as (years) or (sample number, years)\n return nrmse (float): a vector containing the normalized root mean square error \n for each ensemble member indexed by (parameter set)'''\n \n # Number of ensemble members\n nens = model_ts.shape[0]\n\n # If multiple observation time series, \n # take the lowest NRMSE for each ensemble member\n try:\n if obs_ts.shape[1]>0:\n # Number of observation time series\n nobs = obs_ts.shape[0]\n obs_min = np.nanmin(obs_ts,axis=1)\n obs_max = np.nanmax(obs_ts, axis=1)\n obs_mean = np.nanmean(obs_ts,axis=1)\n \n temp_nrmse = np.zeros([nobs,nens])\n \n for obsnum in range(nobs):\n temp_nrmse[obsnum,:] = np.sqrt(np.nansum((model_ts[:,:] - obs_mean[obsnum])**2,axis=1) / model_ts.shape[1]) / (obs_max[obsnum]-obs_min[obsnum])\n \n nrmse = np.nanmin(temp_nrmse,axis=0)\n\n temp_nrmse = None\n \n # Otherwise, simply calculate NRMSE\n except IndexError:\n obs_min = np.nanmin(obs_ts,axis=0)\n obs_max = np.nanmax(obs_ts,axis=0)\n obs_mean = np.nanmean(obs_ts,axis=0) \n \n nrmse = np.sqrt(np.nansum((model_ts[:,:] - obs_mean)**2,axis=1) / model_ts.shape[1]) / (obs_max-obs_min)\n\n return nrmse\n```\n\n\n```python\n# Return nrsmse_array (float): a 3-D array containing the NRMSE \n# indexed by [CO2level, varlist, parameer set] \n\nnrmse_array = np.zeros([len(CO2levels),len(varlist), nens])\n\nfor i in range(len(CO2levels)):\n for j in range(len(varlist)):\n nrmse_array[i,j,:] = nrmse(model_data[i,j,:,:], obs_data_list[j])\n```\n\n### Weighted average permformance metrics across variables\n\nWe calculate weighted average performance metrics across variables for both the normalized root mean square error and the error rate. We calculate and consider three different weighting approaches to ensure that our selection of high-performing parameter sets is robust to weighting method. The weighting approaches we use are:\n\n1. Even: All variables are evenly weighted.\n\n2. Structure: This weighting favors structural ecosystem properties (leaf area index, above-ground biomass, and basal area). This weighting scheme reflects the likelihood that structural variables at our test site include less measurment uncertainty than flux variables.\n\n3. Correlation: This weighting scheme is informed by correlations between individual variable performance metrics. 
The ability of a parameter set to mach observations of flux variables (gross primary productivity, sensible heat, and latent heat) was correlated with ability to match observations of leaf area index, as well as other flux variables. As leaf area index observations likely include smaller measurement uncertainty, we choose to give a greater weighting to leaf area index at the expense of flux variables. We also reduced the weightings of basal area and above-ground biomass performance to account for their correlation with one another.\n\n#### Weighted average error rate\n\n\n```python\n# Even weighting across all variables\ner_wavg_even = np.nansum(error_rate_array,1) / error_rate_array.shape[1]\n\n# Weighted average favoring structural properties\nw = 0.3\ner_wavg_strct = (w*(error_rate_array[:,0,:]) \n + w*(error_rate_array[:,1,:])\n + w*(error_rate_array[:,2,:])\n + (1-3*w)*((error_rate_array[:,3,:])\n +(error_rate_array[:,4,:])\n +(error_rate_array[:,5,:]))/3)\n\n# Weighting based on correlations between performance metric for each variable\nw1 = 0.4\nw2 = 0.25\nw3 = 0.1\ner_wavg_corr = ( w1*(error_rate_array[:,0,:]) \n + w2*(error_rate_array[:,1,:]) \n + w2*(error_rate_array[:,2,:]) \n + w3*((error_rate_array[:,3,:])\n +(error_rate_array[:,4,:])\n +(error_rate_array[:,5,:]))/3)\n```\n\n#### Weighted average NRMSE\n\nWe quantify the distance of model output from the mean observations in multivariate space by calculating the weighted Euclidean distance as follows:\n\n\\begin{equation}\nNRMSE_{avg} = \\sqrt{ \\sum_{i=1}^m (\\omega_{i} \\cdot NRMSE_{i})^2}\n\\end{equation}\n\nwhere $NRMSE_{avg}$ is the weighted average normalized root mean square error across variables, $m$ is the number of variables we consider, $NRMSE_{i}$ is the normalized root mean square error for each individual variable, and $\\omega_{i}$ is the weighting for each variable. \n\n\n```python\n# Even weighting across all variables\nw = 1/6\nnrmse_wavg_even = np.sqrt(w*(nrmse_array[:,0,:])**2 \n + w*(nrmse_array[:,1,:])**2 \n + w*(nrmse_array[:,2,:])**2 \n + w*(nrmse_array[:,3,:])**2 \n + w*(nrmse_array[:,4,:])**2 \n + w*(nrmse_array[:,5,:])**2)\n\n\n# Weighted average favoring structural properties\nw = 0.3\nnrmse_wavg_strct = np.sqrt(w*(nrmse_array[:,0,:])**2 \n + w*(nrmse_array[:,1,:])**2 \n + w*(nrmse_array[:,2,:])**2 \n + (1-3*w)*((nrmse_array[:,3,:])**2 \n +(nrmse_array[:,4,:])**2 \n +(nrmse_array[:,5,:])**2)/3)\n\n# Weighting based on correlations between performance metric for each variable\nw1 = 0.4\nw2 = 0.25\nw3 = 0.1\nnrmse_wavg_corr = np.sqrt(w1*(nrmse_array[:,0,:])**2 \n + w2*(nrmse_array[:,1,:])**2 \n + w2*(nrmse_array[:,2,:])**2 \n + w3*((nrmse_array[:,3,:])**2 \n +(nrmse_array[:,4,:])**2 \n +(nrmse_array[:,5,:])**2)/3)\n```\n\n## Step 4: Rank parameter sets by performance\nHere we assign an overall rank to each parameter set based on its performance across both performance metrics (error rate and NRMSE), three weighting schemes (even, structure, and correlated), and two cases (low and high atmospheric carbon dioxide concentration). 
The goal of this analysis is to identify parameter sets that robsutly perform well at our test site.\n\n\n```python\nall_avg_array = np.stack([er_wavg_even,nrmse_wavg_even,\n er_wavg_strct,nrmse_wavg_strct,\n er_wavg_corr,nrmse_wavg_corr])\n\nrank_array = scipy.stats.mstats.rankdata(all_avg_array,axis=2)\n\n# Sum ranks across cases and ranking methods\nsum_rank_array = np.nansum(np.nansum(rank_array,axis=0),axis=0)\n\n# Sort the index number for each ensemble member by their summed rank (best to worst performance)\nsum_rank_index = np.argsort(sum_rank_array)\n\n#Print Index # for highest-performing parameter sets\nhighperform_num = np.transpose(sum_rank_index)[:10,]+1\nhighperform_indx = np.transpose(sum_rank_index)[:10,]\nprint(\"Indices for the highest-performing parameter sets: \", highperform_num[0:3])\n```\n\n Indices for the highest-performing parameter sets: [ 86 260 151]\n\n\n__Result:__ Parameter sets 86, 260, and 151 resulted in the highest model performance (in descending order) at our test site.\n\n\n## Step 5: Plot performance metrics for each parameter set\n\nIn this section we visualize the model performance for each parameter set to gain insight into how the highest-performing parameter sets performed in individual variables and in comparison to all other parameter sets. More specifically, we plot heat maps of each parameter set's performance by performance metric and background carbon dioxide concentration.\n\n\n```python\ndef heatsubplotfxn(heatdata, CO2indx, minval, maxval, plotnum, metriclabel,\n highperform_indx, heat_var_labels, ens10label, enslabel):\n \n '''Function creates a heatmap for each performance metric\n for high-performing ensemble members and then all ensemble members.\n\n param heatdata (float): 3-D array containing a performance metric (CO2levels, variable, nens)\n param CO2indx(int, float): index for background CO2 level (0 = 367 ppm; 1 = 400ppm)\n param minval (int): minimum value for heatmap colorbar\n param maxval (int): maximum value for heatmap colorbar\n param plotnum (int): subplot number\n param metriclabel (str): label for performance metric\n param highperform_indx (int, float): vector of index numbers for\n high-performing parameter sets\n param heat_var_labels (str): list of variable labels indexed by (variable)\n param ens10label (str): list of high-performing parameter set numbers\n to be used as plot labels\n param enslabel (str): list parameter set numbers to label in plot of\n all parameter sets' performance metrics\n \n return heatmap subplot for one performance metric\n '''\n \n #Subplot indexing paramter\n i = 2\n \n # Highest Performing Ensemble Members\n ax1 = plt.subplot(3,i,plotnum)\n im1 = ax1.imshow(\n heatdata[CO2indx,:,highperform_indx],\n vmin = minval, vmax = maxval,\n cmap=\"viridis_r\",aspect='auto')\n\n ax1.set_xticks(np.arange(len(heat_var_labels)))\n ax1.xaxis.tick_top()\n ax1.set_xticklabels(heat_var_labels)\n ax1.xaxis.set_label_position('top')\n\n ax1.set_ylabel('High Performing Parameter Sets (#)')\n ax1.set_yticks(np.arange(len(ens10)))\n ax1.set_yticklabels(ens10)\n\n # All Ensemble Members\n ax2 = plt.subplot(3,i,(i+plotnum,i*2+plotnum))\n im2 = ax2.imshow(\n np.transpose(heatdata[CO2indx,:,:]),\n vmin = minval, vmax = maxval,\n cmap=\"viridis_r\",aspect='auto')\n \n # Colorbar\n cbar = ax1.figure.colorbar(\n im2, ax=ax2, orientation=\"horizontal\", \n pad=0.025)\n # Labels\n cbar.ax.set_xlabel(metriclabel, fontsize = 16, \n fontweight ='bold')\n ax2.set_xticks([]) # hide xticks/labels\n ax2.set_ylabel('All 
Parameter Sets (#)')\n ax2.set_yticks(ens)\n ax2.set_yticklabels(enslist)\n```\n\n\n```python\n# Return error_heatdata: a 3-D array containing error rates\n# indexed by (CO2level, variable, nens)\n# Return nrmse_heatdata: a 3-D array containing NRMSE \n# indexed by (CO2level, variable, parameter set)\n# Variable indexing as follows:\n# 0-5 = variables in order of varlist, \n# 6-8 = weighted averages across variables\n# using even,\n# structure, and correlation weights,\n# respectively\n\n# Concatenate data for error rate heatmap\nerror_rate_wavg_array = np.stack([er_wavg_even,er_wavg_strct,er_wavg_corr],axis=1)\nerror_heatdata = np.concatenate([error_rate_array,error_rate_wavg_array],axis=1)\n\n# Concatenate data for NRMSE heatmap\nnrmse_wavg_array = np.stack([nrmse_wavg_even,nrmse_wavg_strct,nrmse_wavg_corr],axis=1)\nnrmse_heatdata = np.concatenate([nrmse_array,nrmse_wavg_array],axis=1)\n```\n\n\n```python\n# Additional information for figures\n\n# Metric/Variable labels\nheat_var_labels = [\"LAI\",\"AGB\",\"BA\",\n \"GPP\",\"LH\",\"SH\",\"Av$_{E}$\",\"Av$_{S}$\",\"Av$_{C}$\"]\n\n# Ensemble member labels\n# 10 highest performing\nens10 = [str(int(x)) for x in highperform_num]\n# All (label every 25th ensemble member)\nens = np.array(range(25,300,25))\nenslist = [str(x) for x in ens]\n```\n\n### Plot heat maps of performance by parameter set, performance metric, and background carbon dioxide concentration\n\n\n```python\nfig1 = plt.figure(figsize=(12,12))\n\n# Set CO2levels index number\ncasenum = 0\n\n# Plot Error Rate\nplotnum = 1\nheatsubplotfxn(error_heatdata, casenum, 0, 100, plotnum, 'A. Error Rate (%)', \n highperform_indx, heat_var_labels, ens10, enslist)\n\n# Plot NRMSE\nplotnum = plotnum+1\nheatsubplotfxn(nrmse_heatdata, casenum, 0, 10, plotnum, 'B. NRMSE', \n highperform_indx, heat_var_labels, ens10, enslist)\n\nplt.tight_layout()\n```\n\n__Figure 1.__ FATES model performance as measured by (A) error rate and (B) normalized root mean square error (NRMSE) for simulations run with 367 ppm atmospheric carbon dioxide. Performance metrics are shown for leaf area index (LAI), above-ground biomass (AGB), basal area (BA), gross primary productivity (GPP), latent heat flux (LH), sensible heat flux (SH) and weighted averages across variables using three weighting approaches: even (Av$_E$), favoring structural variables (Av$_S$), and considering correlations (Av$_C$). Top panels highlight the performance of the top 10 highest-perfoming parameter sets. Bottom panels show the performance of all 287 parameter sets we tested.\n\n\n```python\nfig2 = plt.figure(figsize=(12,12))\n\n# Set CO2levels index number\ncasenum = 1\n\n# Plot Error Rate\nplotnum = 1\nheatsubplotfxn(error_heatdata, casenum, 0, 100, plotnum, 'A. Error Rate (%)',\n highperform_indx, heat_var_labels, ens10, enslist)\n\n# Plot NRMSE\nplotnum = plotnum+1\nheatsubplotfxn(nrmse_heatdata, casenum, 0, 10, plotnum, 'B. NRMSE',\n highperform_indx, heat_var_labels, ens10, enslist)\n\nplt.tight_layout()\n```\n\n__Figure 2.__ FATES model performance as measured by (A) error rate and (B) normalized root mean square error (NRMSE) for simulations run with 400 ppm atmospheric carbon dioxide. Performance metrics are shown for leaf area index (LAI), above-ground biomass (AGB), basal area (BA), gross primary productivity (GPP), latent heat flux (LH), sensible heat flux (SH) and weighted averages across variables using three weighting approaches: even (Av$_E$), favoring structural variables (Av$_S$), and considering correlations (Av$_C$). 
Top panels highlight the performance of the top 10 highest-perfoming parameter sets. Bottom panels show the performance of all 287 parameter sets we tested.\n\n## Results\nThis analysis identifies three high-performing parameter sets for use in future FATES simulations. We recommend the highest-performing parameter set (parameter set 86; Figures 1 and 2) as the primary parameter set for future experiments. Two other high-performing parameter sets (151 and 260; Figures 1 and 2) may be of interest for testing the sensitivity of simulation results to parameterization. Parameter set 260 is similar in parameter values and performance to the highest-performing parameter set (number 86). Parameter set 151 has the third highest performance but, has greater differences in key parameter values and results in a different simulated ecosystem. Specifically, parameter set 151 has higher performance in the above-ground biomass and basal area structural properties but, lower performance in leaf area index and gross primary productivity (Figures 1 and 2). These three high-performing parameter sets are publicly available through the University of Washington ResearchWorks digital repository at http://hdl.handle.net/1773/43779.\n\n\n## Further Information\n__Details of the parameter ensemble and analysis herein:__\n\nKovenock, M. (2019). Ecosystem and large-scale climate impacts of plant leaf dynamics (Doctoral dissertation). Chapter 4: \"Within-canopy gradient of specific leaf area improves simulation of tropical forest structure and functioning in a demographic vegetation model.\" http://hdl.handle.net/1773/44061\n\n__Details of the Functionally Assembled Ecosystem Simulator (FATES):__\n\nhttps://github.com/NGEET/fates\n\nFisher, R. A., Koven, C. D., Anderegg, W. R., Christoffersen, B. O., Dietze, M. C., Farrior, C. E., et al. (2018). Vegetation demographics in Earth System Models: A review of progress and priorities. Global Change Biology, 24(1), 35\u201354. https://doi.org/10.1111/gcb.13910\n\nFisher, R. A., Muszala, S., Verteinstein, M., Lawrence, P., Xu, C., McDowell, N. G., et al. (2015). Taking off the training wheels: the properties of a dynamic vegetation model without climate envelopes. _Geoscientific Model Development, 8_(4), 3293\u20133357. https://doi.org/10.5194/gmdd-8-3293-2015\n\n\n## References\n\nBaraloto, C., Molto, Q., Rabaud, S., H\u00e9rault, B., Valencia, R., Blanc, L., et al. (2013). Rapid simultaneous estimation of aboveground biomass and tree diversity across Neotropical forests: a comparison of field inventory methods. _Biotropica, 45_(3), 288\u2013298. https://doi.org/10.1111/btp.12006\n\nCondit, R. (1998). Tropical forest census plots. Berlin, Germany, and Georgetown, Texas: Springer-Verlag and R. G. Landes Company.\n\nCondit, R. S., Aguilar, S., Perez, R., Lao, S., Hubbell, S. P., & Foster, R. B. (2017). Barro Colorado 50-ha Plot Taxonomy as of 2017. https://doi.org/10.25570/stri/10088/32990\n\nCondit, R., Lao, S., P\u00e9rez, R., Dolins, S. B., Foster, R., & Hubbell, S. (2012). Barro Colorado forest census plot data (version 2012). Center for Tropical Forest Science Databases. https://doi.org/10.5479/data.bci.20130603\n\nDetto, M., Wright, S. J., Calder\u00f3n, O., & Muller-Landau, H. C. (2018). Resource acquisition and reproductive strategies of tropical forest in response to the El Ni\u00f1o$-$Southern Oscillation. _Nature Communications, 9_(1), 913. 
https://doi.org/10.1038/s41467-018-03306-9\n\nFaybishenko, B., Paton, S., Powell, T., Knox, R., Pastorello, G., Varadharajan, C., et al. (2018). QA/QC-ed BCI meteorological drivers. United States: Next-Generation Ecosystem Experiments Tropics; STRI; LBNL. https://doi.org/doi:10.15486/ngt/1423307\n\nFeeley, K. J., Davies, S. J., Ashton, P. S., Bunyavejchewin, S., Supardi, M. N., Kassim, A. R., et al. (2007). The role of gap phase processes in the biomass dynamics of tropical forests. _Proceedings of the Royal Society B: Biological Sciences, 274_(1627), 2857\u20132864. https://doi.org/10.1098/rspb.2007.0954\n\nFisher, R. A., Koven, C. D., Anderegg, W. R., Christoffersen, B. O., Dietze, M. C., Farrior, C. E., et al. (2018). Vegetation demographics in Earth System Models: A review of progress and priorities. _Global Change Biology, 24_(1), 35\u201354. https://doi.org/10.1111/gcb.13910\n\nFisher, R. A., Muszala, S., Verteinstein, M., Lawrence, P., Xu, C., McDowell, N. G., et al. (2015). Taking off the training wheels: the properties of a dynamic vegetation model without climate envelopes. _Geoscientific Model Development, 8_(4), 3293\u20133357. https://doi.org/10.5194/gmdd-8-3293-2015\n\nHubbell, S. P., Foster, R. B., O'Brien, S. T., Harms, K. E., Condit, R., Wechsler, B., et al. (1999). Light-gap disturbances, recruitment limitation, and tree diversity in a neotropical forest. _Science, 283_(5401), 554\u2013557. https://doi.org/10.1126/science.283.5401.554 \n\nKoven, C. D., et al. (_in prep_). Benchmarking and parameter sensitivity of physiological and vegetation dynamics using the Functionally Assembled Terrestrial Ecosystem Simulator (FATES) at Barro Colorado Island, Panama.\n\nKovenock, M. (2019). Ecosystem and large-scale climate impacts of plant leaf dynamics (Doctoral dissertation). http://hdl.handle.net/1773/44061\n\nMeakem, V., Tepley, A. J., Gonzalez-Akre, E. B., Herrmann, V., Muller-Landau, H. C., Wright, S. J., et al. (2018). Role of tree size in moist tropical forest carbon cycling and water deficit responses. _New Phytologist, 219_, 947\u2013958. 
https://doi.org/10.1111/nph.14633

# Section IV. DYNAMICS AND CONTROL

# Chapter 13. What are Dynamics and Control?

The purpose of dynamics is to study how time and force act on a mechanism, while the purpose of controls is to study how a system should respond to errors and disturbances. At this point, we have described how to reason about the positions of robots and how to generate continuous paths. But actually executing those paths requires us to think much more carefully about the physics of robot mechanisms, and the role of time and velocity. Even the strongest robots cannot instantaneously change velocities, and driving and flying robots cannot move sideways.

It is through the use of control that an industrial robot can move to a position with sub-millimeter accuracy, and an aircraft can fly for thousands of kilometers but land on an airstrip a few meters wide. It is also a means for understanding locomotion and reflexes in the biological sensorimotor system. It is important to note that both dynamics and control are deep fields of study that are more than one hundred years old, and yet they are still undergoing significant change! Classical approaches rely heavily on mathematical analysis, while more modern approaches to control rely on computation as a key tool. Due to this depth, to master these fields requires years of specialized investigation, and this part of the book can only survey the main points as they relate to robotics. We will see some of both the historical and modern approaches in the next few chapters.

In the topic of dynamics we will cover 1) basic terminology of dynamical systems, 2) simple dynamical systems from physics, 3) the dynamics of articulated robots, and 4) contact mechanics.
In controls, we will describe methods for 1) analyzing stability of controlled dynamical systems, 2) controlling articulated robots with high accuracy, and 3) generating feasible and optimal control strategies.

Basic terminology
-----------------

A *dynamical system* is one in which the state of the system changes continuously over time. The notion of *state* is similar to that of a configuration, although it can also include terms like joint velocities. In this section, we let $x \in \mathbb{R}^n$ be the quantity defining the *state* of the system. Robots are able to apply forces and otherwise *alter the rate of change of the state* using their actuators. We define the *control* (aka control input) as $u \in \mathbb{R}^m$, where $m$ is the number of independently chosen variables.

For example, in a 6-joint industrial robot, the state of the robot is typically considered as $x=(q,v) \in \mathbb{R}^{12}$. The inclusion of a velocity term allows us to express how the robot's momentum affects its future movement, and how joint forces affect velocities. The control variable $u$ can take on many forms, depending on how the controller is designed. For example, if the controller takes desired joint velocities as inputs, the control variable is $u=(v_{d1},\ldots,v_{d6})$ where $v_{di}$ indicates the desired velocity of joint $i$. On the other hand, if it takes joint torques as inputs, the control variable is $u=(\tau_{1},\ldots,\tau_{6})$.

The standard terminology for modeling a dynamical system is an expression relating the state and control to the derivative of the state. In the case that we do not have the ability to control a system, we have an *uncontrolled dynamics equation* of the form
$$\dot{x} = f(x).
\label{eq:UncontrolledDynamicEquation}$$
If the system can indeed be controlled by a control $u$, we have a *controlled dynamics equation*:
$$\dot{x} = f(x,u)
\label{eq:DynamicEquation}$$
where $x$ is the state, $u$ is the control, and $\dot{x}$ is the time derivative of the state $\frac{dx}{dt}$. The function $f$ is known as the dynamics of the system. These equations are also known as the *equations of motion*.

It is important to note that $x$ and $u$ are actually *functions of time*. If we need to explicitly represent the dependence on time we shall write $x(t)$ and $u(t)$. Hence, the dot notation is simply the time derivative $\dot{x} = \frac{d}{dt}x$. (Or more explicitly, $\dot{x}(t) = \frac{dx}{dt}(t)$.) Also note that from this chapter onward, all variables except for time will be vector quantities unless stated otherwise.

It should be noted that we have introduced the terms "dynamical" and "dynamics" which should be taken to be *almost* synonyms. Being quite pedantic, we will say something is dynamic when it changes over time, while something is dynamical if it *regards* dynamics. When we say "dynamical system" it means that the system regards a dynamic quantity (the state) but the system itself is not changing over time. We shall also sometimes say "dynamic equation" which is a synonym for "dynamics equation" and is chosen according to author preference. But why don't we call it a "dynamical equation?" Let's just move on, and let the grammar Nazis squabble over terminology...

### Open-loop and closed-loop control

Given a dynamics function $f$, our job is to decide upon the control $u$ in order to accomplish some desired task.
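Before distinguishing the main types of controllers, it helps to have a concrete dynamics function in hand. The following is only a sketch: the single-joint pendulum model, its constants, and the names used are assumptions made for illustration, not part of any particular robot. It shows one way the controlled dynamics $\dot{x} = f(x,u)$ might be written in code, with the state $x = (\theta, \omega)$ stacking a joint angle and its velocity, and the control $u$ a motor torque.

```python
import numpy as np

# Hypothetical single-joint (pendulum) example, for illustration only.
# State x = (theta, omega); control u = motor torque. Constants are made up.
m, L, g0, b = 1.0, 1.0, 9.8, 0.1   # mass, link length, gravity, viscous damping

def f(x, u):
    """Controlled dynamics x' = f(x,u) for a torque-driven pendulum."""
    theta, omega = x
    alpha = (u - b*omega - m*g0*L*np.sin(theta)) / (m*L**2)  # angular acceleration
    return np.array([omega, alpha])

# Evaluating f at a state and control gives the state derivative dx/dt
xdot = f(np.array([np.pi/4, 0.0]), 0.5)
```

The same pattern applies to the 6-joint robot above, only with larger state and control vectors.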
There are two primary types of\ncontrols: 1) *open-loop* control, in which case $u \\equiv u(t)$ only\ndepends on time, and 2) closed-loop control, in which case\n$u \\equiv u(x)$ depends on state. (It may also depend on time, in which\ncase we write $u \\equiv u(x,t)$).\n\nThe significance of closed-loop control is that the control function can\n\"observe\" the state of the system and change accordingly in order to\nachieve the desired task. The control function in this case is also\nknown as a *control policy*. This allows a robot to adapt to\ndisturbances to achieve high accuracy and to prevent veering off-course.\nHowever, for purposes of planning, it will often be easier to compute an\nopen-loop trajectory. Later, we shall see how to convert an open loop\nplan into a closed-loop one via the approach of model predictive\ncontrol.\n\n### Discrete-time systems\n\nIn many cases it is convenient to talk about *discrete-time* systems in\nwhich time is no longer a continuous variable but a discrete quantity\n$t=0,1,2,\\ldots$, and the dynamics are specified in the form\n$$x_{t+1} = f(x_t,u_t).\n\\label{eq:DiscreteTimeDynamicEquation}$$ \nHere, the control is allowed to\nchange only at discrete points in time, and the state is only observed\nat discrete points in time. This more accurately characterizes digital\ncontrol systems which operate on a given clock frequency. However, in\nmany situations the *control frequency* is so high that the\ncontinuous-time\nmodel ($\\ref{eq:DynamicEquation}$) is appropriate.\n\n### Converting higher-order dynamic systems into first-order systems\n\nOften, we shall see systems of the form\n\n$$\\ddot{x} = f(x,\\dot{x},u)\n\\label{eq:SecondOrderSystem}\n$$\n\nwhich relate state and controls to *accelerations* of the state $\\ddot{x} = \\frac{d^2 x}{dt^2}$. This does not seem to satisfy our definition of a dynamic system, since we've never seen a double time derivative. However, we can employ a *stacking trick* to define a first order system, but of twice the dimension. Let us define the stacked state vector\n\n$$y \\equiv \\begin{bmatrix} x \\\\ \\dot{x} \\end{bmatrix}.$$\n\nThen, we can rewrite ($\\ref{eq:SecondOrderSystem}$) in a first-order form as:\n\n$$\\dot{y} = g(y,u)$$\n\nwhere $g(y,u) \\equiv f(x,\\dot{x},u)$ simply \"unstacks\" the state and velocity from $y$. Now all of the machinery of first-order systems can be applied to the second order system. This can also be done for dynamic systems of order 3 and higher, wherein all derivatives are stacked into a single vector.\n\n(Note that to define an initial state $y_0$, we will need to specify the initial position $x_0$ as well as the velocity $\\dot{x}_0$.)\n\nODE integration\n--------------------------\n\nConsider a controlled, continuous time dynamic system $\\dot{x}= f(x,u)$, with $x\\in \\mathbb{R}^n$\nand $u\\in \\mathbb{R}^m$. Suppose we are given an _initial state_ $x_0$ encountered at $t=0$, and a control $u(x,t)$ defined for $t \\geq 0$. Solving for the state trajectory requires solving an **initial value problem** of an **ordinary differential equation** (ODE):\n\n$$\\text{Find }x(t) \\text{ for } t > 0 \\text{ subject to }\\dot{x}(t) = g(x(t),t) \\text{ and } x(0)=x_0. $$\n\nwhere $g(x,t) \\equiv f(x,u(x,t))$ is a time-varying dynamics function. 
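In code, this composition is simply a function that closes over the chosen policy. Below is a minimal sketch, reusing the illustrative pendulum $f$ from above together with a made-up proportional-derivative policy (again, the gains and names are assumptions for this example).

```python
def u_policy(x, t):
    """A hypothetical closed-loop policy: drive the joint angle toward zero."""
    theta, omega = x
    return -5.0*theta - 1.0*omega   # PD gains chosen arbitrarily for illustration

def make_g(f, u):
    """Wrap a controlled system f(x,u) and a control u(x,t) into g(x,t)."""
    return lambda x, t: f(x, u(x, t))

g = make_g(f, u_policy)   # dx/dt = g(x,t) is now an ordinary (uncontrolled) ODE
```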
(Note that we have applied the simple trick of pushing the control $u$ inside $g$, which turns the controlled system into an uncontrolled system.)\n\nFor some limited classes of dynamic systems and control trajectories we can solve the ODE analytically. We shall see some of these solutions for the [Dubins car](#Dubins-car) and [linear time invariant systems](#Linear-Time-Invariant-Systems). However, in the general case, we shall need to resort to numerical methods. This problem is known as **ODE integration** (also known as **simulation**).\n\n### Euler's method\n\nThe simplest numerical integration technique is known as **Euler's method**, which divides time into a sequence of small steps of $\\Delta t$ in which the dynamics are assumed constant. Each subsequent movement simply displaces the state by the first-order approximation $\\Delta t g(x(t),t)$. What emerges is a sequence of states $x_0,\\ldots,x_N$ given by:\n\n$$x_1 = x_0 + \\Delta t \\cdot g(x_0,t)$$\n\n$$x_2 = x_1 + \\Delta t \\cdot g(x_1,t)$$\n\n$$...$$\n\n$$x_N = x_{N-1} + \\Delta t \\cdot g(x_{N-1},\\Delta t\\cdot(N-1))$$\n\nThis is a widely-used technique due to its straightforward implementation, and it is also easy to analyze. Code for this method is given below.\n\n\n```python\nimport numpy as np\n\ndef integrate_euler(f,x0,N,dt,t0=0):\n \"\"\"Approximates the trajectory resulting from the initial value problem x'=f(x,t)\n using euler's method.\n \n Arguments:\n - f(x,t): a function of state and time giving the derivative dx\n - x0: the initial state at time t0, x(t0)=x0\n - N: the number of steps to take\n - t0: the initial time\n \n Return value: a trajectory ([t0,t1,...,tN],[x0,x1,...,xN])\n \"\"\"\n t = t0\n x = x0\n ts = [t0]\n xs = [x0]\n for i in range(N):\n dx = f(x,t)\n x = x + dt*dx\n t = t + dt\n ts.append(t)\n xs.append(x)\n return (ts,xs)\n```\n\nThe below code plots the result of Euler's method applied to a simple 2D particle under a gravity field, with the control u giving an external acceleration (here, $u(t)=0$ for all $t$).\n\n\n```python\n# Code for plotting Euler integration of a 2D particle\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport math\n\ng = -9.8 #the gravitational constant\n#the 4D state is [px,py,vx,vy]\ndef zero_control(x,t):\n return np.zeros(2)\n#you might try replacing zero_control with sin_control below and seeing what happens...\ndef sin_control(x,t):\n return np.array([5.0*math.sin(t*15),0])\ndef f_grav(x,t):\n u = zero_control(x,t)\n return np.hstack((x[2:4],u + np.array([0,g])))\n#initial px,py,vx,vy (at origin, with forward velocity 1, upwards velocity 10)\nx0 = np.array([0.0,0.0,1.0,10.0])\n#integrate for total time T\nT = 2.0\n#compare several time steps\ndts = [0.025,0.05,0.1,0.2]\nfor dt in dts:\n N = int(T/dt)\n times,points = integrate_euler(f_grav,x0,N,dt)\n times = np.array(times)\n points = np.array(points)\n plt.plot(points[:,0],points[:,1],label='Euler, dt='+str(dt))\ntimes = np.linspace(0,T,50)\nground_truth = np.vstack((x0[2]*times,x0[3]*times+0.5*g*times**2)).T\nplt.plot(ground_truth[:,0],ground_truth[:,1],label='Exact')\nplt.xlabel('x')\nplt.ylabel('y')\n\nplt.legend()\nplt.show()\n```\n\nNote that the accuracy of the integration depends heavily on the timestep chosen. In general, the smaller\nthe timestep, the more accurate the integration will be. More formally, define the *integration error*\nas $\\epsilon(t) = x(t) - x_{\\lfloor t/Delta t\\rfloor}$. \nHigher errors result as\n\n* The spatial variation of the dynamics function is large. 
More precisely, the error will grow if the Jacobian of f (in either x or u) are large.\n\n* The time $t$ is large (i.e., the error generally gets worse over time.)\n\n* $\\Delta t$ is large.\n\n### Higher order integrators\n\nA great deal of work has investigated ODE integration techniques that are more accurate than Euler integration. Rather than approximate the dynamics function as a first order Taylor expansion, they may use higher order terms to achieve lower approximation error. A popular class of higher order methods are the **Runge-Kutta methods**, which use multiple evaluations of the dynamics function to achieve far lower error than standard Euler integration.\nMore advanced methods may also use **adaptive step size**, which take smaller steps where the dynamics function is found to be more highly varying. \n\nMany numerical libraries have a variety of integrators to choose from. For example, the below plot shows an integrator used in Scipy library, which is, in fact, exact for this dynamics function.\n\n\n```python\n# Code for the plot using scipy ODE integration\ndef integrate_scipy(f,x0,N,dt,t0=0):\n \"\"\"Same arguments and return type as euler, but using the integrators in the Scipy library\"\"\"\n from scipy.integrate import ode\n r = ode(lambda t,x:f(x,t)) #need to swap the order of arguments for scipy's ode function\n r.set_integrator('dopri5') #lots of options here... see function documentation\n r.set_initial_value(x0, t0)\n t = t0\n ts = [t0]\n xs = [x0]\n for i in range(N):\n x = r.integrate(t+dt)\n t += dt\n ts.append(t)\n xs.append(x)\n return (ts,xs)\n\ndt = 0.1\ntimes,points = integrate_scipy(f_grav,x0,int(T/dt),dt)\ntimes = np.array(times)\npoints = np.array(points)\nplt.plot(points[:,0],points[:,1],label='Scipy, dt='+str(dt))\nplt.plot(ground_truth[:,0],ground_truth[:,1],label='Exact')\nplt.xlabel('x')\nplt.ylabel('y')\n\nplt.legend()\nplt.show()\n```\n\n### Stability, convergence, and divergence\n\nA dynamic system is said to be:\n\n* **Stable** for some class of initial states if its solution trajectories do not\ngrow without bound,\n\n* **Unstable** (or **divergent**) if the trajectories grow without bound, and\n\n* **Convergent** if the solution trajectories approach a single point.\n\nA *stable point* is a state $x$ such that for some neighborhood\nof $x$, the ODE is convergent toward $x$. A necessary condition for a\npoint to be stable is $f(x) = 0$, and points that satisfy this criteria\nare known as *equilibrium points*. All stable points are equilibria, but\nthe converse is not true.\n\nThe trajectories derived from Euler integration can be divergent even when the underlying system itself is stable or convergent. As an example, consider the damped harmonic oscillator system $$\\ddot{x} = -10x - \\dot{x}$$.\n\nWith the initial condition $x(0)=1$, $\\dot{x}(0)=0$, the solution trajectory is $x(t) = e^{-t/2}\\cos(\\omega t)$ with $\\omega=\\sqrt{10-1/2^2}$. 
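To see where this closed form comes from, substitute $x(t) = e^{st}$ into the equation and solve the resulting characteristic polynomial:

$$s^2 + s + 10 = 0 \quad \Rightarrow \quad s = -\frac{1}{2} \pm i\omega, \qquad \omega = \sqrt{10 - \frac{1}{4}},$$

so every solution decays like $e^{-t/2}$ while oscillating at angular frequency $\omega$ (the cosine-only expression above drops the small sine term that would make $\dot{x}(0)$ exactly zero).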
But see what happens when this is integrated using Euler's method:\n\n\n\n```python\n# Code for integration of a damped harmonic oscillator with Euler's method\ndef f_harmonic_oscillator(x,t):\n return np.array([x[1],-10*x[0]-x[1]])\n\n#initial x,dx\nx0 = np.array([1.0,0.0])\n#integrate for total time T\nT = 4.0\n#compare several time steps\ndts = [0.025,0.1,0.2]\nfor dt in dts:\n N = int(T/dt)\n times,points = integrate_euler(f_harmonic_oscillator,x0,N,dt)\n #times,points = integrate_scipy(f_harmonic_oscillator,x0,N,dt)\n times = np.array(times)\n points = np.array(points)\n #plt.plot(points[:,0],points[:,1],label='Euler, dt='+str(dt))\n plt.plot(times,points[:,0],label='Euler, dt='+str(dt))\ntimes = np.linspace(0,T,100)\nd = 0.5\nw = math.sqrt(10-d**2)\nground_truth = np.vstack((np.multiply(np.exp(-d*times),np.cos(times*w)),\n -d*np.multiply(np.exp(-d*times),w*np.sin(times*w)))).T\n#plt.plot(ground_truth[:,0],ground_truth[:,1],label='Exact')\nplt.plot(times,ground_truth[:,0],label='Exact')\nplt.xlabel('t')\nplt.ylabel('x')\n\nplt.legend()\nplt.show()\n```\n\nWhen the time step is small, the integrated trajectory does indeed converge torward 0, like the exact solution. However, at $\\Delta t=0.1$, the solution is oscillatory between $[-1,1]$ and never converges. At $\\Delta t = 0.2$, the solution \"blows up\" toward infinity! This is a serious problem for simulation, since we would like to avoid the computational expense of taking tiny steps, but while also integrating accurately.\n\nIn fact there are systems that are stable everywhere for which Euler's\nmethod is unstable everywhere! An example is the oscillator:\n$$\\begin{bmatrix}\\dot{x} \\\\ \\dot{y} \\end{bmatrix} = \\begin{bmatrix}0 & -1 \\\\ 1& 0\\end{bmatrix} \\begin{bmatrix}x \\\\ y \\end{bmatrix}.$$\nHere, the flow vector at a point is always perpendicular and CCW to the\nvector from the origin to that point. The solution trajectories are\ncircles $(r \\cos (t - \\theta), r \\sin (t - \\theta))$, where $(r,\\theta)$\nare the polar coordinates of the initial point. If we were to\napproximate this using Euler integration, each integration step brings the state\nfurther and further from the origin, spiraling outward without bound. 
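To see why no step size can fix this, note that one Euler step maps $x_{k+1} = (I + \Delta t A)\,x_k$ with $A = \begin{bmatrix}0 & -1\\ 1 & 0\end{bmatrix}$. Because $x_k^T A x_k = 0$ and $\|A x_k\| = \|x_k\|$,

$$\|x_{k+1}\|^2 = \|x_k\|^2 + 2\Delta t\, x_k^T A x_k + \Delta t^2 \|A x_k\|^2 = (1+\Delta t^2)\,\|x_k\|^2,$$

so the norm of the state is multiplied by $\sqrt{1+\Delta t^2} > 1$ on every step, regardless of how small $\Delta t$ is.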
Taking\nsmaller time steps helps a little, but cannot completely remedy the problem.\n\n\n```python\n#Code for plotting the phase space of a pure oscillator\ndef f_oscillator(x,t):\n return np.array([-x[1],x[0]])\nX, Y = np.meshgrid(np.arange(-3, 3, .5), np.arange(-3, 3, .5))\nUV = np.array([f_oscillator([x,y],0) for x,y in zip(X,Y)])\nU = UV[:,0]\nV = UV[:,1]\nplt.quiver(X, Y, U, V)\n\n#compare several time steps\nT = 8.0\ndts = [0.025,0.1,0.25]\nx0 = np.array([1,0])\nfor dt in dts:\n N = int(T/dt)\n times,points = integrate_euler(f_oscillator,x0,N,dt)\n #times,points = integrate_scipy(f_harmonic_oscillator,x0,N,dt)\n times = np.array(times)\n points = np.array(points)\n plt.plot(points[:,0],points[:,1],label='Euler, dt='+str(dt))\nplt.legend()\nplt.show()\n```\n\nSimple dynamic systems \n-----------------------------------\n### Basic physics: mass, force, and torque\n\nNewton's laws\n\nF = m a\n\nTorques and moment arms\n\n### Particle driven by forces\nA 1D particle with mass $m$, position $p$ and velocities $v$, controlled by forces $u$, follows Newton's laws under the second-order controlled dynamics:\n$$\\ddot{p} = u / m$$\n\nThis problem can be modeled\nwith a state $x = (p,v) \\in \\mathbb{R}^2$ and control\n$u = f \\in \\mathbb{R}$ with the dynamics equation\n\n\n\n\\begin{equation}\n\\dot{x} \\equiv \\begin{bmatrix} \\dot{p}\\\\ \\dot{v} \\end{bmatrix} = f(x,u) = \\begin{bmatrix}v \\\\ f/m \\end{bmatrix}. \\label{eq:PointMass}\n\\end{equation}\n\nThis function $f$ can be thought of as a *vector\nfield* that maps each 2D point to a 2D vector. If we plot this vector\nfield on the $(p,v)$ plane for various values of $f$, we observe a few\nthings. First, it is invariant to $p$. Second, the value of $f$ varies\nthe length and direction of the vectors in the $v$ direction.\n\nFor any initial state $x_0=(p_0,v_0)$ under a constant forcing\n$u(t) = f$, the velocity of the solution trajectory $x(t)$ can be\ndetermined through simple integration:\n$$v(t) = v_0+\\int_0^t f/m dt = v_0 + t f/m.\n\\label{eq:PointMassVelocity}$$ Continuing with the position, we see that\n$$p(t) = p_0+\\int_0^t v(t) dt = p_0+\\int_0^t (v_0 + t f/m) dt =p_0 + t v_0 + \\frac{1}{2}t^2 f/m.\n\\label{eq:PointMassPosition}$$ This means that the velocity of the\nparticle increases or decreases over time according to a linear function\nwith slope depending on $f$, and its position takes on a parabola\ntrajectory over time. Note that this model generalizes to any point\nparticle in $n$-D space, except that position, velocity, and force\nbecome vector quantities. The state is then a $2n$-D vector and control\nis $n$-D.\n\nNow, let us suppose we wish to drive the particle from some position to\nanother (say, from 0 to 1) while starting and stopping at 0 velocity.\nCan we use a constant force to do so? We start with $x(0)=(0,0)$ and\nwish to achieve $x(T)=(1,0)$ at some future time $T$. Well,\nby (\\ref{eq:PointMassPosition}) we would need $T^2 f/m = 1$, but\nby (\\ref{eq:PointMassVelocity}), we would need $T f / m = 0$. This is\na contradiction, so we could not reach this other state via a constant\nforce.\n\nCan we use a linear interpolation instead? If we define $u=t/T$ as the\ninterpolation parameter, such a trajectory would have\n$v(t) = 0\\cdot (1-u) + 0\\cdot u = 0$ and\n$p(t) = 0\\cdot (1-u) + 1\\cdot u = t/T$. However, this trajectory does\nnot satisfy dynamic constraints for any value of $t>0$ and any value of\n$f$!\n\nThere are a couple ways to solve this problem. 
One is to make $f$ a closed-loop control, such as the PD controller
$f(t) \equiv u(x(t)) = -k_P (p-1) - k_D v$. We will show when we discuss [PID control](Control.ipynb) that, for suitable constants $k_P$ and $k_D$,
this choice forces the system to converge toward the target $(1,0)$. Another is to design a clever open-loop control that
satisfies the endpoint constraints and the dynamic constraints, such as
$T = 2$, $f(t) = 1$ for $t\leq 1$ and $f(t) = -1$ for $1 < t \leq 2$.
This control accelerates the system to the point $(p,v)=(0.5,1)$ at
$t=1$, and then decelerates to $(1,0)$ at $t=2$. We shall see more
general ways of designing such control functions using the optimal
control methods presented in [later chapters](OptimalControl.ipynb).

### Pendulum swing-up

The pendulum swing-up problem asks us to use an actuator with limited
torque to drive a pendulum with progressively larger and larger
momentum, until it can reach, and then stabilize about, the vertical
position. The pendulum is assumed to be a point mass $m$ at the end of a
massless bar of length $L$, with the other end fixed to rotate about the origin.
The system has a state space of $x=(\theta,\omega)$, with $\theta$ the CCW angle of the mass with respect to the
$x$ axis, and $\omega$ its angular velocity. The start state is
$x=(3\pi/2,0)$ and the goal state is $x=(\pi/2,0)$.

************


Figure 1.
Illustrating the dynamics of a controlled pendulum moving from the
down ($\theta = 3\pi/2 \approx 4.71$) to the up
($\theta = \pi/2 \approx 1.57$) position. If the motor is strong enough,
it can proceed almost directly toward the goal state. The legend
displays the torque required to implement such a controller.

************


The actuator $u$ applies a torque about the origin, and is usually
assumed bounded $|u|\leq u_{max}$. The force of gravity produces a
torque of magnitude $mg L \cos \theta$ about the origin. Since the
moment of inertia of the point mass about the origin is $mL^2$, the overall angular
acceleration of the system is: $$\ddot{\theta} = -\frac{g}{L} \cos \theta + \frac{u}{mL^2}.$$ Writing this in
canonical form, we have
$$\dot{x} \equiv \begin{bmatrix}\dot{\theta}\\\dot{\omega}\end{bmatrix} = f(x,u) = \begin{bmatrix}{\omega}\\{-\frac{g}{L} \cos \theta}\end{bmatrix} + u \begin{bmatrix}0 \\ \frac{1}{mL^2} \end{bmatrix}.$$
This is a nonlinear equation without an analytical solution.

With $u_{max}$ sufficiently large ($u_{max} > mLg$) the motor has enough
strength to hold the pendulum steady horizontally, and it is possible to
drive it monotonically to the goal
([Fig. 1](#fig:PendulumStrongMotor)). But if the maximum torque is
lowered below some threshold, the motor can no longer supply enough
torque to raise the pendulum directly; it must "pump," like a child on a
swing, to increase the kinetic energy of the system. As we shall see when we discuss [bang-bang control](OptimalControl.ipynb), the optimal controller will then
alternate between extreme controls to build up enough kinetic energy to
reach the goal. This implies that the time evolution of the system will
switch between the flow fields shown in
[Fig. 2](#fig:PendulumWeakMotor).

************

|Max CW|Max CCW|
|----|----|
| | |

Figure 2.\nThe flow fields corresponding to minimum (left) and maximum (right)\ncontrols for a pendulum swing-up problem with unit mass, unit length,\nand torque bounded at\n$|u| \\leq 5$\u2006N$\\cdot$m.\n
\n\n************\n\n### Cart-pole\n\nThe cart-pole problem is a toy underactuated system in which a cart that\ncan translate in $x$ direction needs to swing up and/or balance a pole\nattached to it with a pin joint\n([Fig. 3](#fig:Cartpole)). Its control has been studied quite\nextensively, and it has similar dynamics to the Segway mobility\nscooters.\n\n************\n\n\n\n
Figure 3.\n Illustration of the cart-pole problem.\n
\n\n************\n\n\nIn this problem, the system's configuration has two parameters\n$q=(q_1,q_2)$ which denote the $x$ translation of the cart and the angle\nof the pole, respectively. In the below convention we treat the\ncart-pole as a PR robot, so that $q_2$ is the CCW angle of the pole from\nthe $x$ axis. In the balancing task, we wish to design a controller to\nmaintain the state near the unstable equilibrium point $q_2=\\pi/2$ under\ndisturbances. In the swing-up task, we wish to go from $q_2=-\\pi/2$ to\n$\\pi/2$. (Keep in mind that the topology of $q_2$ is SO(2), so the pole\ncan swing either left or right.)\n\nThis is a highly dynamic system where the cart's motors can apply forces\n$u_1$ in the positive and negative $x$ direction. Optionally, the pole\ncould apply torques $u_2$, but it is typical to enforce $u_2=0$ so that\nthe pole swings passively. The cart and pole have masses $m_1$ and $m_2$\nrespectively, and the pole is assumed to have all of its mass\nconcentrated at a point distance $L$ away from the pin.\n\nIn [Chapter 14](RobotDynamics.ipynb), we shall derive the equations\nof motion for the cart-pole system to be the second-order system of\nequations: $$\\begin{aligned}\n(m_1+m_2) \\ddot{q_1} -\\frac{m_2 L}{2} \\ddot{q}_2 \\sin q_2 - \\frac{m_2 L}{2} \\dot{q}_2^2 \\cos q_2 = u_1 \\\\\n-\\frac{m_2 L}{2} \\ddot{q}_1 \\sin q_2 + \\frac{m_2 L^2}{4} \\ddot{q}_2 + m_2 g \\cos q_2 = u_2\n\\end{aligned}$$ where $g$ is the gravitational constant. Notice here\nthat the accelerations $\\ddot{q}_1$ and $\\ddot{q}_2$ are coupled, in\nthat they appear in both equations. Solving this system of equations, we\nobtain a solution: $$\\begin{bmatrix}{\\ddot{q}_1}\\\\{\\ddot{q}_2}\\end{bmatrix} = \n\\frac{1}{d} \\begin{bmatrix}\n\\frac{m_2 L^2}{4} & \\frac{m_2 L}{2} \\sin q_2 \\\\\n\\frac{m_2 L}{2} \\sin q_2 & m_1+m_2 \n\\end{bmatrix}\n\\begin{bmatrix}{u_1 + \\frac{m_2 L}{2} \\dot{q}_2^2 \\cos q_2}\\\\{u_2-m_2 g \\cos q_2}\\end{bmatrix}$$\nwith $d= \\frac{m_1 m_2 L^2}{4}+\\frac{m_2^2 L^2}{4} \\cos^2 q_2$. For any\ngiven choice of $u_1$ and $u_2$, this can then be integrated to obtain\nsolution trajectories.\n\nThe cart-pole system is highly sensitive to the behavior of the cart.\n[Fig. 4](#fig:CartpoleSpin) displays the behavior of the swing-up\nproblem under 1.5 sinusoidal movements of the cart with amplitude 0.5.\nEach plot shows a slighly different period. In this setup, the pole\nswings over the upright position only for periods in approximately the\nrange $[1.12,1.29]$. There is another range of periods where the pole is\nswung about the upright position in the range $[1.32,1.39]$.\n\n*************\n\n|Period 1.288s | Period 1.5s |\n|----|----|\n| | |\n\n
Figure 4.
Behavior of the cart-pole problem as a function of time. Slightly
changing the period of the cart's movement from 1.288 s to 1.5 s fails
to swing the pendulum past the upright position. A good swing-up
controller might use a period of 1.288 s and then switch to a stabilizing
controller around $t=2$ s.

*************

### Dubins car

A Dubins car model approximates the mobility of a standard 2-axle car
moving on a flat surface, ignoring accelerations. In this model,
$(p_x,p_y)$ is the center of its rear axle, $\theta$ is its heading, and
$L$ is the distance between the front and rear axles. The control
$u=(v,\phi)$ specifies the velocity $v$ and the steering angle of the
front wheels $\phi$. The dynamics of this system are given as follows:
$$\dot{x} \equiv \begin{bmatrix}{\dot{p}_x}\\{\dot{p}_y}\\{\dot{\theta}}\end{bmatrix} = f(x,u) = \begin{bmatrix}{v \cos \theta}\\{v \sin \theta}\\{\frac{v}{L}\tan \phi}\end{bmatrix}$$
Note that the velocity vector is always parallel to the heading
$(\cos \theta,\sin \theta)$, and the turning rate $\dot{\theta}$ depends
on both the steering angle and the velocity. For constant $u$, the
position $(p_x,p_y)$ traces out straight lines (with $\phi=0$) or arcs
(with $\phi\neq 0$).

Typically, the control is subject to bounds
$v_{min} \leq v \leq v_{max}$ and $|\phi| \leq \phi_{max}$. With these
limits, the vehicle has a minimum turning radius of
$\frac{L}{\tan \phi_{max}}$. The vehicle cannot move sideways, and
must instead perform "parallel parking" maneuvers in order to move in
the state-space direction $(-\sin \theta,\cos \theta,0)$.

Linear time invariant systems
-----------------------------

In general, the $f$ function may be nonlinear in its arguments. However,
a widely studied class of dynamical systems is the *linear,
time-invariant* (LTI) system. In an LTI system, the dynamics equation
takes on the form $$\dot{x} = Ax + Bu$$ where $A$ and $B$ are constant
matrices of size $n \times n$ and $n \times m$, respectively. This type
of system is easily analyzed using results from linear algebra and can
represent a wide range of dynamic behavior.

For example, the 1D point
mass system (\ref{eq:PointMass}) can be represented as an LTI system with:
$$\dot{x} \equiv \begin{bmatrix}\dot{p} \\ \dot{v} \end{bmatrix} = \begin{bmatrix}0&1\\0&0\end{bmatrix} \begin{bmatrix}p \\ v \end{bmatrix} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix}u$$

In the discrete-time
form (\ref{eq:DiscreteTimeDynamicEquation}), an LTI system takes the
form $$x_{t+1} = A x_t + B u_t.$$ A continuous-time LTI system can be
converted to an equivalent discrete-time LTI system through integration.

For example, the point mass system with time step $\Delta t$ and
constant control can be represented in discrete time as
$$\begin{aligned}
x_{t+1} &\equiv \begin{bmatrix}{p(t+\Delta t)}\\{v(t+\Delta t)}\end{bmatrix} = \begin{bmatrix}{p(t) + \Delta t v(t) + \frac{1}{2} \Delta t^2 f/m}\\{v(t)+\Delta t f/m}\end{bmatrix} \\
& = \begin{bmatrix}1 & \Delta t \\ 0 & 1\end{bmatrix} x_t + \begin{bmatrix}{\frac{1}{2}\Delta t^2 / m}\\{\Delta t/m}\end{bmatrix} u_t
\end{aligned}$$

Moreover, nonlinear systems can be approximated by an LTI system about
any equilibrium point in state space using linearization. Consider
linearizing a system of the form $\dot{x} = f(x) + g(x)u$ about state
$x_0$ and control $u_0$. Also assume that $u_0$ applied at $x_0$ leads
to no derivative (i.e., $f(x_0)+g(x_0) u_0=0$). Perform a change of
variables to $(\Delta x, \Delta u)$ such that $x = x_0 + \Delta x$ and
$u = u_0 + \Delta u$.
Then $$\\begin{aligned}\n\\dot{x} & = \\dot {\\Delta x} = (f(x_0)+g(x_0) u_0) + \\left(\\frac{\\partial f}{\\partial x}(x_0) + \\frac{\\partial g}{\\partial x}(x_0)u_0\\right) \\Delta x + g(x_0) \\Delta u \\\\\n & = \\left(\\frac{\\partial f}{\\partial x}(x_0) + \\frac{\\partial g}{\\partial x}(x_0)u_0\\right) \\Delta x + g(x_0) \\Delta u \n\\end{aligned}$$ This is LTI in $(\\Delta x,\\Delta u)$ with\n$A=\\frac{\\partial f}{\\partial x}(x_0) + \\frac{\\partial g}{\\partial x}(x_0)$\nand $B=\\frac{\\partial g}{\\partial x}(x_0)$.\n\n\nNoise, uncertainty, disturbances, errors\n----------------------------------------\n\nBesides handling the differential constraint of the dynamics function,\nthe purpose of control is to handle deviations from an idealized state\nor trajectory. These deviations are in various contexts called noise,\nbias, uncertainty, disturbances, or errors. When they do occur, a\nvariety of problems could happen: the robot could fail to reach a goal,\nhit an obstacle, reach an unrecoverable state, or even run into a\nperson! A *robust* planner or controller is designed to produce\nhigh-quality behavior even when such deviations exist. It is important\nto recognize *errors are a fact of life* for all robots outside of\ntightly controlled industrial environments.\n\nGenerally speaking, errors can be characterized as being either *noisy*\nor *systematic*. A noisy error is one obeys no obvious pattern each time\nit is measured. A systematic error is one that does obey a pattern. We\nshall also see that for the purposes of control, these deviations fall\nunder two fundamental classes, which we call *motion uncertainty* and\n*state uncertainty*.\n\n*Disturbances* are a form of motion uncertainty that cause the state to\nbe moved in unexpected ways at future points in time. For example, wind\ngusts are very hard to predict in advance, and can move a drone from a\ndesired path.\n\n*Actuation error* occurs when a desired control is not executed\nfaithfully. An example would be a controller that outputs desired\ntorques for a robot, but where these are not followed exactly by the\nlow-level motor controller. These errors can be treated as motion\nuncertainty.\n\n*Measurement error* is a type of state uncertainty where due to sensor\nnoise the state is observed incorrectly. Understanding measurement error\nis critical for closed-loop controllers which base their behavior on the\nmeasured state.\n\n*Partial observability* means that only certain aspects of the state\n*can possibly be measured* by the available sensors. For example, a\nmobile robot with a GPS sensor can only measure position, whereas it may\nneed to model velocity as part of its state. State estimation techniques,\nsuch as Kalman filtering and particle filtering,\ncan be used to extrapolate the unobserved components of state to provide\nreasonable state estimates. With those estimates, there will be some\nremaining *localization error* that the controller will still need to\nhandle.\n\n*Modeling error*, or *parameter uncertainty* means that the true\ndynamics function differs from what is *known* to the robot. This is\nsometimes considered a third class of uncertainty, but could also be\ntreated as state uncertainty as we shall see below.\n\nMotion uncertainty can be modeled as a disturbance to the dynamics\n$$\\dot{x} = f(x,u) + \\epsilon_d$$ where $\\epsilon_d(t) \\in E_d$ is some\nerror. Here $E_d$ is a set of possible disturbances, or a probability\ndistribution over disturbances. 
Motion uncertainty will cause an\nopen-loop system to \"drift\" from its intended trajectory over time. A\nproperly designed closed-loop controller can regulate the disturbances\nby choosing controls that drive the system back to intended trajectory.\n\nState uncertainty can be modeled as a discrepancy between the estimated\nstate $\\hat{x}$ and the \"true\" state of the system $x$, such that\n$\\hat{x} = x + \\epsilon_x$. This means that in open-loop trajectory\nplanning, we will start a plan from the estimated state $\\hat{x}$. Then,\neven if there was no motion uncertainty and we planned the best control\nsequence possible $u(t)$ starting from $\\hat{x}$, bad things could still\nhappen when it is executed. For closed-loop control, the control policy\n$u(\\hat{x})$ is *always chosen based on an incorrect estimate*. This\nmakes it much more difficult to ensure that it is correcting for true\ndeviations from the intended trajectory, rather than phantom errors\ncaused by uncertainty.\n\nTo design a robust controller, we might try to characterize $E_d$ and\n$E_x$ by observing likely disturbance values. If we observe systematic\nerrors like a constant *bias*, then perhaps we can improve our models to\nbe more accurate and cancel out the systematic error (called\n*calibration*). On the other hand, noisy errors are much harder to\ncancel out. To make any theoretical guarantees about a system's behavior\nin the case of motion uncertainty, it is usually necessary to ensure\nthat noise in $E_x$ and $E_d$ are relatively small.\n\nFinally, let us note that modeling error can often be treated as state\nuncertainty on a different dynamical system on an *augmented state*\nvector. Suppose that we are controlling a 1D point mass, but we do not\nobserve the true mass $m$. Instead, we observe $\\hat{m}$ which is\ndisturbed from the true value by $\\epsilon_m$ such that\n$\\hat{m} = m + \\epsilon_m$. If we construct the augmented state vector\n$(p,v,m)\\in \\mathbb{R}^3$, then the state follows dynamics\n$$\\dot{x} \\equiv \\begin{bmatrix}\\dot{p} \\\\ \\dot{v} \\\\ \\dot{m} \\end{bmatrix} = f(x,u) = \\begin{bmatrix} v \\\\ f/m \\\\ 0 \\end{bmatrix}.$$\nHence, the modeling error is equivalent to the state uncertainty vector\n$$\\epsilon_x = \\begin{bmatrix} 0 \\\\ 0 \\\\ \\hat{m}-m \\end{bmatrix}.$$\n\nTrajectories with timing\n-------------------------------------\n\nIt is important to discuss the difference between trajectories of a\ndynamic system vs. the geometric paths that we worked with in kinematic\nmotion planning. In a dynamic system, the trajectory in state space\n$x(t):[0,T]\\rightarrow \\mathbb{R}^n$ is parameterized by time. The state\nspace of a robotic system typically includes both configuration and\nvelocity components. By contrast, a geometric path moves in\nconfiguration space and has no inherent notion of time.\n\nMoreover, a geometric path can move in any direction as long as it does\nnot touch an obstacle, whereas a valid dynamic trajectory can only move\nin directions that can be generated by feasible controls. Hence we must\nconsider both time and dynamic constraints when representing valid\ntrajectories.\n\n### Trajectory representation\n\nOne basic representation is to store a trajectory as a sequence of\nstates sampled along the trajectory $(x_0,\\ldots,x_n)$ along with the\ninitial time $t_0$ (often assumed to be 0) and the time step $\\Delta t$\nbetween each point. An approximate interpolation between each point can\nbe performed piecewise-linearly or with splines. 
For example, the\npiecewise linear approximation has\n$$x(t) = x_k + \\frac{t-t_0-k\\Delta t}{\\Delta t}(x_{k+1} - x_k)$$ defined\nover $t \\in [t_0,t_0+n\\Delta t]$, where\n$k = \\lfloor \\frac{t-t_0}{\\Delta t} \\rfloor$ is the index of the\ntrajectory segment corresponding to the time $t$.\n\nMore generally, the\ntrajectory could store both states $(x_0,\\ldots,x_n)$ and times\n$(t_0,\\ldots,t_n)$, with a slightly modified interpolation function\n$$x(t) = x_k + \\frac{t-t_k}{t_{k+1}-t_k}(x_{k+1} - x_k)$$ defined over\nthe range $[t_0,t_n]$ and $k$ determined to be the point in time so that\n$t_k \\leq t \\leq t_{k+1}$.\n\nIf we are given an *integrator* (i.e., a *simulator*) for the dynamics\nfunction, trajectories can be encoded in a control-space representation\n$(x_0,u)$, which captures the initial state $x_0$ and an arbitrary control trajectory $u(t)$.\nFrom these items, the integrator *generates* the state trajectory\n$x(t)$. Specifically, we assume the existence of a function\n$Sim(f,x_0,u,t)$ that integrates the dynamics $f$ forward over time $t$,\nstarting from $x_0$ and using the control trajectory $u$. The control\n$u$ can be stored using arbitrary path representations, like\npiecewise-constant functions, piecewise-linear functions, polynomials,\nand splines. Then, we can regenerate the state-space trajectory\n$x(t) \\equiv Sim(f,x_0,u,t)$ as needed.\n\n### Path to trajectory conversion\n\nIt is almost trivial to convert trajectories to paths: simply \nconstruct a state space path and dropping the time component.\nThe converse --- creating a timed, dynamically-feasible trajectory from\na path --- can in some cases be quite challenging or even impossible. The reason is that the speed at which a robot should execute a path requires foresight into future twists and turns, like a race car driver slowing down ahead of a hairpin turn. \n\nIf a piecewise linear path were to be executed at a constant rate, then the timed trajectory would instantaneously change velocity at each milestone. But infinite forces are needed to execute instantaneous changes of velocity, so sending such trajectories to motors would lead to overshooting corners. We will examine\nbetter methods for industrial robots to start and stop smoothly at milestones\nwhen we discuss [motion generation](RobotControl.ipynb#Motion-queues-(motion-generation)). The basic idea is to speed up and slow down gradually, while choosing the point in time when the robot slows so that the robot ends exactly at the next milestone.\n\nThe more general case is known as a *time-scaling* problem. Mathematically, we describe such a problem as being given a geometric path $p(s)$ as input, and we wish to find a timed path $x(t)$ such that:\n\n* The trajectory follows the path: for all $t$, there exists an $s$ such that $x(t) = p(s)$\n* First-order dynamic constraints satisfied: $g(t,\\dot{x}(t)) \\leq 0$ for all $t$\n* Second-order dynamic constraints satisfied: $h(t,\\dot{x}(t),\\ddot{x}(t)) \\leq 0$ for all $t$\n* Possibly higher-order constraints as well...\n\nThis is formulated as finding a smooth, monotonically increasing 1D function $t(s)$ that defines the timing along the path. At one end of the domain, there is a boundary constraint $t(0)=0$. Since $t(s)$ is monotonically increasing, it has a well-defined inverse $s(t)$, so that the trajectory is defined via $x(t) = p(s(t))$. 
We can then define the trajectory velocity, acceleration, and higher-order derivatives using the chain rule:

* $\dot{x}(t) = p^\prime(s(t)) s^\prime(t)$ 
* $\ddot{x}(t) = p^{\prime\prime}(s(t)) s^\prime(t)^2 + p^\prime(s(t)) s^{\prime\prime}(t)$ 
* ...

A dynamic constraint of order $k$ can then be rewritten in terms of $p$ (which is known), $s$, and their derivatives up to order $k$. Choosing $s(t)$ then becomes a constrained trajectory optimization problem, which we will discuss when we visit the topic of [optimal control](OptimalControl.ipynb).


Summary
-------

* Continuous-time dynamic systems are represented by a dynamics equation in the canonical form $\dot{x}(t) = f(x(t),u(t))$, where $x$ is the state trajectory and $u$ is the control trajectory. Discrete-time systems are represented by the form $x_{t+1} = f(x_t,u_t)$.
* Integration (or simulation) is needed to determine the trajectory that the state will follow under a given control. Numerical instability can result with a time step that is too large.
* Dynamic systems can be convergent, stable, or divergent under a given controller.
* A timed, dynamically-feasible trajectory can be recovered from a geometric path by solving a time-scaling problem.

Exercises
---------


```python

```

# Learning disentangled representations in Flatland

This notebook uses our method to learn either entangled or disentangled representations in the Flatland environment (see Caselles-Dupré, Hugo, et al. "Flatland: a lightweight first-person 2-d environment for reinforcement learning." arXiv preprint arXiv:1809.00510 (2018).)

```python
import os
import gym
import math
import numpy as np
import time
from PIL import Image
import matplotlib.pyplot as plt
from IPython import display
import torch
import random
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

os.chdir('src/flatland/flat_game/')
from env import Env

```

    pygame 1.9.6
    Hello from the pygame community.
https://www.pygame.org/contribute.html\n Loading chipmunk for Linux (64bit) [/home/william/.local/lib/python3.6/site-packages/pymunk/libchipmunk.so]\n\n\n# Flatworld environment\n\nWe start by defining the flatworld environment, which is based on the code available at https://github.com/Caselles/NeurIPS19-SBDRL. This environment returns pixel observations of a ball on a cyclical 2D grid. The available (discrete) actions step the ball by a fixed amount in all four directions.\n\n\n```python\nRADIUS = 15\nPERIOD = 10\n\nclass FlatWorld():\n \n class action_space():\n def __init__(self,n_actions):\n self.n = n_actions\n \n def sample(self, k=1):\n return torch.randint(0,self.n,(k,)) \n\n class observation_space():\n def __init__(self):\n self.shape = [84,84]\n \n def __init__(self, env_parameters, period=10, radius=15):\n\n self.action_space = self.action_space(4)\n self.observation_space = self.observation_space() \n self.period = period\n \n self.step_size = 0.1*(63-2*env_parameters['agent']['radius'])/period\n start_positions_list = [27 + 10*self.step_size*i for i in range(period)]\n self.start_positions = []\n for i in start_positions_list:\n for j in start_positions_list:\n self.start_positions.append((i,j))\n \n env_parameters['agent']['radius'] = radius\n self.env = Env(**env_parameters)\n \n def reset(self, start_position=None):\n if start_position==None:\n obs = self.env.reset(position=random.sample(self.start_positions, 1)[0])\n else:\n obs = self.env.reset(position=start_position)\n return torch.FloatTensor(obs)/255\n \n def step(self, action):\n action_dict = self.create_action_dict(action)\n obs, reward, done, info = self.env.step(action_dict)\n return torch.FloatTensor(obs)/255\n \n def create_action_dict(self, action):\n action_dict = {}\n if action == 0:\n action_dict['longitudinal_velocity'] = 0\n action_dict['lateral_velocity'] = self.step_size\n action_dict['angular_velocity'] = 0\n if action == 1:\n action_dict['longitudinal_velocity'] = 0\n action_dict['lateral_velocity'] = -self.step_size\n action_dict['angular_velocity'] = 0\n if action == 2:\n action_dict['longitudinal_velocity'] = self.step_size\n action_dict['lateral_velocity'] = 0\n action_dict['angular_velocity'] = 0\n if action == 3:\n action_dict['longitudinal_velocity'] = -self.step_size\n action_dict['lateral_velocity'] = 0\n action_dict['angular_velocity'] = 0\n return action_dict\n \n```\n\n\n```python\nagent_parameters = {\n 'radius': 15,\n 'speed': 10,\n 'rotation_speed' : math.pi/8,\n 'living_penalty': 0,\n 'position': (30,30),\n 'angle': 0,\n 'sensors': [\n \n {\n 'nameSensor' : 'proximity_test',\n 'typeSensor': 'proximity',\n 'fovResolution': 64,\n 'fovRange': 300,\n 'fovAngle': math.pi ,\n 'bodyAnchor': 'body',\n 'd_r': 0,\n 'd_theta': 0,\n 'd_relativeOrientation': 0,\n 'display': False,\n }\n \n \n ],\n 'actions': ['forward', 'turn_left', 'turn_right', 'left', 'right', 'backward'],\n 'measurements': ['health', 'poisons', 'fruits'],\n 'texture': {\n 'type': 'color',\n 'c': (255, 255, 255)\n },\n 'normalize_measurements': False,\n 'normalize_states': False,\n 'normalize_rewards': False\n}\n\nenv_parameters = {\n 'map':False,\n 'n_rooms': 2,\n 'display': False,\n 'horizon': 10001,\n 'shape': (84, 84),\n 'mode': 'time',\n 'poisons': {\n 'number': 0,\n 'positions': 'random',\n 'size': 10,\n 'reward': -10,\n 'respawn': True,\n 'texture': {\n 'type': 'color',\n 'c': (255, 255, 255),\n }\n },\n 'fruits': {\n 'number': 0,\n 'positions': 'random',\n 'size': 10,\n 'reward': 10,\n 'respawn': True,\n 'texture': {\n 
'type': 'color',\n 'c': (255, 150, 0),\n }\n },\n 'obstacles': [\n \n ],\n 'walls_texture': {\n 'type': 'color',\n 'c': (1, 1, 1)\n },\n 'agent': agent_parameters\n}\n```\n\n**Now show a few consecutive states from this environment**\n\n\n```python\nimport matplotlib.gridspec as gridspec\n\nenv = FlatWorld(env_parameters, period=PERIOD, radius=RADIUS)\nplt.figure(figsize = (3,3))\ngs1 = gridspec.GridSpec(3, 3)\ngs1.update(wspace=0.02, hspace=0.02)\nplt.grid(None)\nstate = env.reset()\nfor i in range(9):\n ax = plt.subplot(gs1[i])\n ax.axis('off')\n ax.set_aspect('equal')\n ax.imshow(state)\n display.display(plt.gcf())\n time.sleep(0.2)\n display.clear_output(wait=True)\n action = random.sample([0,1,2,3],k=1)[0]\n action = 2\n #print(env.env.agent.body.position)\n state = env.step(action)\n \nplt.savefig(\"env.png\", bbox_inches='tight')\n```\n\n### Latent space\n\n**Encoder/Decoder**\n\nNow we want to learn to represent this environment in some latent space (which we, for now, simply assume to be 4-dimensional). We will require both an encoder and decoder, which will use convolutional neural networks.\n\n\n```python\nclass Encoder(nn.Module):\n\n def __init__(self, n_out=4, n_hid = 64):\n\n super().__init__()\n\n self.conv = nn.Conv2d(1, 5, 10, stride=3)\n self.fc1 = nn.Linear(180, n_hid)\n self.fc2 = nn.Linear(n_hid, n_out)\n\n def forward(self, x):\n x = F.relu(self.conv(x.unsqueeze(0).unsqueeze(1)))\n x = F.max_pool2d(x, 4, 4)\n x = x.view(-1, 180)\n x = F.relu(self.fc1(x))\n return F.normalize(self.fc2(x)).squeeze()\n\nclass Decoder(nn.Module):\n \n def __init__(self, n_in=4, n_hid = 64):\n\n super().__init__()\n \n self.fc1 = nn.Linear(n_in, n_hid)\n self.fc2 = nn.Linear(n_hid, 180)\n self.conv = nn.ConvTranspose2d(5, 1, 34, stride=10)\n\n def forward(self, x):\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = x.view(1,5,6,6)\n x = self.conv(x)\n return torch.sigmoid(x).squeeze()\n```\n\n\n```python\nencoder = Encoder(n_out=4)\ndecoder = Decoder(n_in=4)\nprint(encoder)\nprint(decoder)\n```\n\n Encoder(\n (conv): Conv2d(1, 5, kernel_size=(10, 10), stride=(3, 3))\n (fc1): Linear(in_features=180, out_features=64, bias=True)\n (fc2): Linear(in_features=64, out_features=4, bias=True)\n )\n Decoder(\n (fc1): Linear(in_features=4, out_features=64, bias=True)\n (fc2): Linear(in_features=64, out_features=180, bias=True)\n (conv): ConvTranspose2d(5, 1, kernel_size=(34, 34), stride=(10, 10))\n )\n\n\n**Make sure dimensions match**\n\n\n```python\nobs = torch.FloatTensor(state)\nprint(obs.shape)\nlatent = encoder(obs)\nprint(latent.shape)\nreconstructed = decoder(latent)\nprint(reconstructed.shape)\n```\n\n torch.Size([84, 84])\n torch.Size([4])\n torch.Size([84, 84])\n\n\n**Representation**\n\nThe crux of the matter is learning to 'represent' actions in the observation space with actions in latent space. Here, we will do this by assuming every action is a generalized rotation in latent space, which we denote with a series of 2-dimensional rotations.\n\nA 2-d rotation is given by:\n\n\\begin{pmatrix}\n\\cos(\\theta) & \\sin(\\theta) \\\\\n-\\sin(\\theta) & \\cos(\\theta)\n\\end{pmatrix}\n\nand we denote a rotation in dimensions $i$ and $j$ of a higher dimensional space as $R_{i,j}(\\theta)$. 
For $i=1$, $j=4$, in a 4-dimensional space:\n\n\\begin{equation}\nR_{1,4}(\\theta) = \n\\begin{pmatrix}\n\\cos(\\theta) & 0 & 0 & \\sin(\\theta) \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n-\\sin(\\theta) & 0 & 0 & \\cos(\\theta)\n\\end{pmatrix}\n\\end{equation}\n\nAn arbitrary rotation, denoted $g$ as I am subtly moving towards this being a group action, can then be written as:\n\n\\begin{equation}\n g(\\theta_{1,2},\\theta_{1,3},\\dots,\\theta_{n-1,n}) = \\prod_{i=1}^{n-1} \\prod_{j=1+1}^{n} R_{i,j}(\\theta_{i,j})\n\\end{equation}\n\nwhich has $n(n-1)/2$ free parameters (i.e. $\\theta_{i,j}$'s).\n\n\n```python\nclass Representation():\n\n def __init__(self, dim=4):\n self.dim = dim\n self.params = dim*(dim-1)//2\n self.thetas = torch.autograd.Variable(np.pi*(2*torch.rand(self.params)-1)/dim, requires_grad=True)\n\n self.__matrix = None\n \n def set_thetas(self, thetas):\n self.thetas = thetas\n self.thetas.requires_grad = True\n self.clear_matrix()\n \n def clear_matrix(self):\n self.__matrix = None\n \n def get_matrix(self):\n if self.__matrix is None:\n k = 0\n mats = []\n for i in range(self.dim-1):\n for j in range(self.dim-1-i):\n theta_ij = self.thetas[k]\n k+=1\n c, s = torch.cos(theta_ij), torch.sin(theta_ij)\n\n rotation_i = torch.eye(self.dim, self.dim)\n rotation_i[i, i] = c\n rotation_i[i, i+j+1] = s\n rotation_i[j+i+1, i] = -s\n rotation_i[j+i+1, j+i+1] = c\n\n mats.append(rotation_i)\n\n def chain_mult(l):\n if len(l)>=3:\n return l[0]@l[1]@chain_mult(l[2:])\n elif len(l)==2:\n return l[0]@l[1]\n else:\n return l[0]\n\n self.__matrix = chain_mult(mats)\n \n return self.__matrix\n```\n\n**LatentWorld**\n\nNow, for symmetry's sake, we'll also have a `LatentWorld` which acts as the environment in the latent space.\n\n\n```python\nclass LatentWorld():\n \n class action_space():\n def __init__(self,n_actions):\n self.n = n_actions\n \n def sample(self, k=1):\n return torch.randint(0,self.n,(k,))\n\n class observation_space():\n def __init__(self,n_features):\n self.shape = [n_features]\n \n def __init__(self,\n dim=4,\n n_actions=4,\n action_reps=None):\n\n self.dim = dim\n\n self.action_space = self.action_space(n_actions)\n self.observation_space = self.observation_space(dim)\n \n if action_reps is None:\n self.action_reps = [Representation(dim=self.dim) for _ in range(n_actions)]\n else:\n if len(action_reps)!=n_actions:\n raise Exception(\"Must pass an action representation for every action.\")\n if not all([rep.dim==self.dim]):\n raise Exception(\"Action representations do not act on the dimension of the latent space.\")\n self.action_reps = action_reps\n \n def reset(self, state_init):\n self.state = state_init\n return self.get_observation()\n \n def clear_representations(self):\n for rep in self.action_reps:\n rep.clear_matrix()\n \n def get_representation_params(self):\n params = []\n for rep in self.action_reps:\n params.append(rep.thetas)\n return params\n \n def save_representations(self, path):\n if os.path.splitext(path)[-1] != '.pth':\n path += '.pth'\n rep_thetas = [rep.thetas for rep in self.action_reps]\n return torch.save(rep_thetas, path)\n \n def load_reprentations(self, path):\n rep_thetas = torch.load(path)\n for rep in self.action_reps:\n rep.set_thetas(rep_thetas.pop(0))\n \n def get_observation(self):\n return self.state\n \n def step(self,action):\n self.state = torch.mv(self.action_reps[action].get_matrix(), self.state)\n obs = self.get_observation()\n return obs\n```\n\n## 3. Training\n\nSo the basic training loop is pretty straightfoward. 
We simply play out episodes from random starting configurations, encoded by the `Encoder`, for `ep_steps` time-steps. Each random action is executed in both the `FlatWorld` and the `LatentWorld`, and then the latent state is transformed to the observation space by the `Decoder` where the loss function measures its deviation from the true state.\n\nNote: sometimes training without the disentanglement regularization fails to find toroidal structure, especially when the radius of the ball is very small. \n\n\n```python\ndim = 4\n\nobs_env = FlatWorld(env_parameters, period=PERIOD, radius=RADIUS)\nlat_env = LatentWorld(dim = dim,\n n_actions = obs_env.action_space.n)\ndecoder = Decoder(n_in = dim, n_hid = 64)\nencoder = Encoder(n_out = dim, n_hid = 64)\n\noptimizer_dec = optim.Adam(decoder.parameters(),\n lr=1e-2,\n weight_decay=0)\n\noptimizer_enc = optim.Adam(encoder.parameters(),\n lr=1e-2,\n weight_decay=0)\n\noptimizer_rep = optim.Adam(lat_env.get_representation_params(),\n lr=1e-2,\n weight_decay=0)\n\nlosses = []\n```\n\n\n```python\nn_sgd_steps = 3000\nep_steps = 5\nbatch_eps = 16\n\ni = 0\n\nt_start = time.time()\n\ntemp = 0\n\nwhile i < n_sgd_steps:\n \n loss = torch.zeros(1)\n \n for _ in range(batch_eps):\n t_ep = -1\n while t_ep < ep_steps:\n if t_ep == -1:\n obs_x = obs_env.reset()\n obs_z = lat_env.reset(encoder(obs_x))\n else:\n action = obs_env.action_space.sample().item()\n obs_x = obs_env.step(action)\n obs_z = lat_env.step(action)\n \n t_ep += 1 \n \n obs_x_recon = decoder(obs_z)\n\n loss += F.binary_cross_entropy(obs_x_recon, obs_x)\n \n loss /= (ep_steps*batch_eps)\n \n losses.append(loss.item())\n \n optimizer_dec.zero_grad()\n optimizer_enc.zero_grad()\n optimizer_rep.zero_grad()\n loss.backward()\n optimizer_enc.step()\n optimizer_dec.step()\n optimizer_rep.step()\n \n # Remember to clear the cached action representations after we update the parameters!\n lat_env.clear_representations()\n\n i+=1\n \n if i%10==0:\n print(\"iter {} : loss={:.3f} : last 10 iters in {:.3f}s\".format(i, loss.item(), time.time() - t_start),\n end=\"\\r\" if i%100 else \"\\n\")\n t_start = time.time()\n```\n\n## 4. Testing\n\nTesting is easy too, we just play out an episode and see how well the reconstructed image agrees with the ground truth!\n\n\n```python\ndef plot_state(obs, ax):\n ax.imshow(obs)\n ax.set_aspect('equal')\n ax.set_xticks([])\n ax.set_yticks([])\n \n return ax\n \nn_steps = 10\n\nfig, (ax1,ax2) = plt.subplots(1, 2)\n\nax1.set_title(\"Ground truth\")\nax2.set_title(\"Reconstruction\")\n\nfor i in range(n_steps+1):\n \n if i==0:\n action = \"N\\A\"\n obs_x = obs_env.reset()\n obs_z = lat_env.reset(encoder(obs_x))\n else:\n action = obs_env.action_space.sample().item()\n obs_x = obs_env.step(action)\n obs_z = lat_env.step(action)\n \n obs_x_recon = decoder(obs_z)\n \n fig.suptitle('step {} : last action = {}'.format(i, action), fontsize=16)\n \n plot_state(obs_x.detach().numpy(),ax1)\n plot_state(obs_x_recon.detach().numpy(),ax2)\n \n display.clear_output(wait=True)\n display.display(plt.gcf())\n time.sleep(0.5)\n \ndisplay.clear_output(wait=False)\n```\n\nWe will now have a look at the latent space, we will make a 2D projection of the 4D latent space for every possible frame (There are 121 possible frames in this environment). 
Note that since we use random projections, in some cases the toric structure we find is more obvious than in others.\n\n**Positions in latent space**\n\n\n```python\nfrom sklearn.random_projection import GaussianRandomProjection\n\nlatent_points = []\n\nfor start_position in obs_env.start_positions:\n obs = obs_env.reset(start_position=start_position)\n latent = encoder(obs)\n latent_points.append(latent.detach().tolist())\n\nlatent_map = np.array(latent_points)\n```\n\n\n```python\nperiod = obs_env.period\n\ncolor=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf',\"#2fa36b\"]\nmarks=[\"1\",\"2\",\"3\",\"4\",\"+\",\">\",\"<\",\"^\",\"v\",\"x\",\"d\"]\npca = GaussianRandomProjection(n_components=2)\n\nlatent_2d = pca.fit_transform(latent_map)\n\nfig = plt.figure(figsize=(6,4))\nax = fig.add_subplot(111)#, projection='3d')\ns=[120]*5+[50]*6\nfor i in range (period**2):\n ax.scatter(x=latent_2d.transpose()[0][i],\n y=latent_2d.transpose()[1][i],\n c=color[i//period], \n s=s[i%period],\n marker=marks[i%period])\nplt.title('Representations - Our method',fontsize=16)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14)\nfig.show()\nplt.savefig(\"latent_flatland.png\", bbox_inches='tight')\n```\n\n**2) Plot Action Representations**\n\n\n```python\nwidth=0.5\n\nrep_thetas = [rep.thetas.detach().numpy() for rep in lat_env.action_reps]\n\nfor rep in lat_env.action_reps:\n print(rep.get_matrix())\n print(torch.matrix_power(rep.get_matrix(), 5))\n\nplt_lim = max( 0.12, max([max(t) for t in rep_thetas])/(2*np.pi) )\ntitles = [\"up\", \"down\", \"right\", \"left\"]\n\nwith plt.style.context('seaborn-paper', after_reset=True):\n\n fig, axs = plt.subplots(1, len(rep_thetas), figsize=(15, 3), gridspec_kw={\"wspace\":0.4})\n \n for i, thetas in enumerate(rep_thetas):\n x = np.arange(len(thetas))\n axs[i].bar(x - width/2, thetas/(2*np.pi), width, label='Rep {}'.format(i))\n axs[i].hlines((0.2,-0.2), -2., 7., linestyles=\"dashed\")\n axs[i].hlines(0., -2., 7.)\n axs[i].set_xticks(x-0.25)\n axs[i].set_xticklabels([\"12\",\"13\",\"14\",\"23\",\"24\",\"34\"], fontsize = 15)\n axs[i].set_xlabel(\"$ij$\", fontsize = 15)\n \n axs[i].set_ylim(-plt_lim,plt_lim)\n axs[i].set_xlim(-.75, 5.75)\n axs[i].set_title(titles[i], fontsize = 15)\n \n axs[i].tick_params(labelsize=15)\n\n axs[0].set_ylabel(r\"$\\theta / 2\\pi$\", fontsize = 15)\n plt.savefig(\"action_rep_entangled.png\", bbox_inches='tight')\n \n```\n\n## 5. Disentanglement\n\n***Some jargon***\n\nIt's nice that it works, but the real point here is to try and learn a *disentangled* representation of the actions.\n\nBefore considering how best to do this, we want to define a metric of 'disentanglement'. We consider the evolution of an observable (latent) vector, $x \\in X$ ($z \\in Z$), under the element $g \\in G$ of the group of symmetries generating transformations of the object. 
Then we are looking for a representation, $\\rho:G \\rightarrow GL(V)$, such that the transformation is linear in the latent space, i.e.\n\\begin{equation}\n z^{\\prime} = \\rho(g) \\cdot z.\n\\end{equation}\nNote, in our case, the representations are the rotation matrices we learn.\n\nFor this representation to be disentangled, it means that if there exists a subgroup decomposition of $G$\n\\begin{equation}\n G = G_1 \\times G_2 \\times \\dots \\times G_n,\n\\end{equation}\nthen we equivalently decompose the representation, $(\\rho, G)$, into subrepresentations:\n\\begin{equation}\n V = V_1 \\oplus V_2 \\oplus \\dots \\oplus V_n\n\\end{equation}\nsuch that the restricted subrepresentations $(\\rho_{\\vert G_i}, V_i)_i$ are non-trivial, and the restricted subrepresentations $(\\rho_{\\vert G_i}, V_j)_{j \\neq i}$ are trivial.\n\nIn our context, a GridWorld with 5 points in each dimension is represented by $G = C_5 \\times C_5$ (where $C_5$ is the cyclic group). This is a subgroup of $\\mathrm{SO}(2) \\times \\mathrm{SO}(2)$, therefore we hope to find the disentangled representation of the actions (up, down, left, right) that corresponds to this.\n\n***Some practicalities***\n\nOur intuition is that the disentangled representation acts as the identity on as many dimensions as possible. We could attempt to enforce this with some regularization during training. Normal weight decay won't cut it, as that tries to reduce all weights, where as what we really want to do is have all *but one* of our thetas (which corresponds to the rotation/coupling of two dimensions) to be zero.\n\n**1. Entanglement regularisation**\n\nSo for $m$ parameters, ${\\theta_1, \\dots, \\theta_m}$, we want to regularise with\n\\begin{equation}\n \\sum_{i \\neq j} \\vert\\theta_i\\vert^2, \\mathrm{where\\ } \\theta_j {=} \\mathrm{max_k}({\\vert\\theta_k\\vert}).\n\\end{equation}\nWe will also use this term as our metric of 'entanglement'.\n\n\n```python\ndef calc_entanglement(params):\n params = params.abs().pow(2)\n return params.sum() - params.max()\n\nparams = torch.FloatTensor([1,1,0.5,0,0])\ncalc_entanglement(params)\n```\n\n\n\n\n tensor(1.2500)\n\n\n\n### Training with regularization\n\n\n```python\ndim = 4\n\nobs_env = FlatWorld(env_parameters, period=PERIOD, radius=RADIUS)\nlat_env = LatentWorld(dim = dim,\n n_actions = obs_env.action_space.n)\ndecoder = Decoder(n_in = dim, n_hid = 64)\nencoder = Encoder(n_out = dim, n_hid = 64)\n\noptimizer_dec = optim.Adam(decoder.parameters(),\n lr=1e-2,\n weight_decay=0)\n\noptimizer_enc = optim.Adam(encoder.parameters(),\n lr=1e-2,\n weight_decay=0)\n\noptimizer_rep = optim.Adam(lat_env.get_representation_params(),\n lr=1e-2,\n weight_decay=0)\n\nlosses = []\nentanglement = []\n```\n\n\n```python\nn_sgd_steps = 3000\nep_steps = 5\nbatch_eps = 16\nentanglement_target = 0\n\ni = 0\n\nt_start = time.time()\n\ntemp = 0\n\nwhile i < n_sgd_steps:\n \n loss = torch.zeros(1)\n \n for _ in range(batch_eps):\n t_ep = -1\n while t_ep < ep_steps:\n if t_ep == -1:\n obs_x = obs_env.reset()\n obs_z = lat_env.reset(encoder(obs_x))\n else:\n action = obs_env.action_space.sample().item()\n obs_x = obs_env.step(action)\n obs_z = lat_env.step(action)\n \n t_ep += 1 \n \n obs_x_recon = decoder(obs_z)\n\n loss += F.binary_cross_entropy(obs_x_recon, obs_x)\n \n loss /= (ep_steps * batch_eps)\n raw_loss = loss.item()\n \n reg_loss = sum([calc_entanglement(r.thetas) for r in lat_env.action_reps])/4\n \n loss += (reg_loss-entanglement_target).abs() * 1e-2\n \n losses.append(raw_loss)\n 
entanglement.append(reg_loss.item())\n \n optimizer_dec.zero_grad()\n optimizer_enc.zero_grad()\n optimizer_rep.zero_grad()\n loss.backward()\n optimizer_enc.step()\n optimizer_dec.step()\n optimizer_rep.step()\n \n # Remember to clear the cached action representations after we update the parameters!\n lat_env.clear_representations()\n\n i+=1\n \n if i%10==0:\n print(\"iter {} : loss={:.3f} : entanglement={:.2e} : last 10 iters in {:.3f}s\".format(\n i, raw_loss, reg_loss.item(), time.time() - t_start\n ), end=\"\\r\" if i%100 else \"\\n\")\n t_start = time.time()\n```\n\n iter 100 : loss=0.196 : entanglement=1.28e-01 : last 10 iters in 2.261s\n iter 200 : loss=0.110 : entanglement=1.64e-01 : last 10 iters in 2.717s\n iter 300 : loss=0.079 : entanglement=1.37e-01 : last 10 iters in 2.428s\n iter 400 : loss=0.068 : entanglement=1.21e-01 : last 10 iters in 2.503s\n iter 500 : loss=0.058 : entanglement=9.97e-02 : last 10 iters in 2.390s\n iter 600 : loss=0.052 : entanglement=7.73e-02 : last 10 iters in 2.173s\n iter 700 : loss=0.047 : entanglement=6.18e-02 : last 10 iters in 2.457s\n iter 720 : loss=0.047 : entanglement=5.78e-02 : last 10 iters in 2.508s\r\n\n /home/william/Bureau/Python/gantime/rep-learning/paris/src/flatland/flat_game/sensors/proximity_sensor.py:42: RuntimeWarning: invalid value encountered in not_equal\n mask = resized_img != 0\n\n\n iter 800 : loss=0.045 : entanglement=4.67e-02 : last 10 iters in 2.678s\n iter 900 : loss=0.045 : entanglement=3.69e-02 : last 10 iters in 3.733s\n iter 1000 : loss=0.041 : entanglement=2.65e-02 : last 10 iters in 2.215s\n iter 1100 : loss=0.042 : entanglement=2.10e-02 : last 10 iters in 2.264s\n iter 1200 : loss=0.040 : entanglement=1.57e-02 : last 10 iters in 2.423s\n iter 1300 : loss=0.040 : entanglement=1.22e-02 : last 10 iters in 2.254s\n iter 1400 : loss=0.039 : entanglement=8.43e-03 : last 10 iters in 3.703s\n iter 1500 : loss=0.040 : entanglement=7.17e-03 : last 10 iters in 3.049s\n iter 1600 : loss=0.039 : entanglement=5.31e-03 : last 10 iters in 2.675s\n iter 1700 : loss=0.037 : entanglement=4.04e-03 : last 10 iters in 2.599s\n iter 1800 : loss=0.039 : entanglement=3.47e-03 : last 10 iters in 2.330s\n iter 1900 : loss=0.038 : entanglement=2.71e-03 : last 10 iters in 2.874s\n iter 2000 : loss=0.038 : entanglement=2.13e-03 : last 10 iters in 2.481s\n iter 2100 : loss=0.042 : entanglement=1.38e-03 : last 10 iters in 2.716s\n iter 2200 : loss=0.037 : entanglement=1.36e-03 : last 10 iters in 2.523s\n iter 2300 : loss=0.036 : entanglement=9.73e-04 : last 10 iters in 3.021s\n iter 2400 : loss=0.036 : entanglement=8.62e-04 : last 10 iters in 2.309s\n iter 2500 : loss=0.036 : entanglement=5.97e-04 : last 10 iters in 2.594s\n iter 2600 : loss=0.037 : entanglement=5.08e-04 : last 10 iters in 2.333s\n iter 2700 : loss=0.036 : entanglement=5.65e-04 : last 10 iters in 2.319s\n iter 2800 : loss=0.037 : entanglement=6.81e-04 : last 10 iters in 2.877s\n iter 2900 : loss=0.037 : entanglement=2.65e-04 : last 10 iters in 2.333s\n iter 3000 : loss=0.035 : entanglement=1.87e-04 : last 10 iters in 2.304s\n\n\n### Testing: action representations\n\n\n```python\nwidth=0.5\n\nrep_thetas = [rep.thetas.detach().numpy() for rep in lat_env.action_reps]\n\nfor rep in lat_env.action_reps:\n print(rep.get_matrix())\n print(torch.matrix_power(rep.get_matrix(), 5))\n\nplt_lim = max( 0.12, max([max(t) for t in rep_thetas])/(2*np.pi) )\ntitles = [\"up\", \"down\", \"right\", \"left\"]\n\nwith plt.style.context('seaborn-paper', after_reset=True):\n\n fig, axs = 
plt.subplots(1, len(rep_thetas), figsize=(20, 3), gridspec_kw={\"wspace\":0.4})\n \n for i, thetas in enumerate(rep_thetas):\n x = np.arange(len(thetas))\n axs[i].bar(x - width/2, thetas/(2*np.pi), width, label='Rep {}'.format(i))\n axs[i].hlines((0.2,-0.2), -2., 6., linestyles=\"dashed\")\n axs[i].hlines(0., -2., 6.)\n axs[i].set_xticks(x-0.25)\n axs[i].set_xticklabels([\"12\",\"13\",\"14\",\"23\",\"24\",\"34\"], fontsize = 15)\n axs[i].set_xlabel(\"$ij$\", fontsize = 15)\n \n axs[i].set_ylim(-plt_lim,plt_lim)\n axs[i].set_xlim(-.75, 5.5)\n axs[i].set_title(titles[i], fontsize = 15)\n \n axs[i].tick_params(labelsize=15)\n\n axs[0].set_ylabel(r\"$\\theta / 2\\pi$\", fontsize = 15)\n plt.savefig(\"action_rep_entangled.png\", bbox_inches='tight')\n \n```\n\n**Show predictions made by trained network with disentangled representations**\n\n\n```python\ndef plot_state(obs, ax):\n ax.imshow(obs)\n ax.set_aspect('equal')\n ax.set_xticks([])\n ax.set_yticks([])\n \n return ax\n\nn_steps = 10\n\nfig, (ax1,ax2) = plt.subplots(1, 2)\n\nax1.set_title(\"Ground truth\")\nax2.set_title(\"Reconstruction\")\n\nfor i in range(n_steps+1):\n \n if i==0:\n action = \"N\\A\"\n obs_x = obs_env.reset()\n obs_z = lat_env.reset(encoder(obs_x))\n else:\n action = obs_env.action_space.sample().item()\n obs_x = obs_env.step(action)\n obs_z = lat_env.step(action)\n \n obs_x_recon = decoder(obs_z)\n \n fig.suptitle('step {} : last action = {}'.format(i, action), fontsize=16)\n \n plot_state(obs_x.detach().numpy(),ax1)\n plot_state(obs_x_recon.detach().numpy(),ax2)\n \n display.clear_output(wait=True)\n display.display(plt.gcf())\n time.sleep(0.5)\n \ndisplay.clear_output(wait=False)\n```\n\n**Show 2D projections of learned representations in latent space**\n\n\n```python\nfrom sklearn.random_projection import GaussianRandomProjection\n\nlatent_points = []\n\nfor start_position in obs_env.start_positions:\n obs = obs_env.reset(start_position=start_position)\n latent = encoder(obs)\n latent_points.append(latent.detach().tolist())\n\nlatent_map = np.array(latent_points)\n```\n\n\n```python\nperiod = obs_env.period\n\ncolor=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf',\"#2fa36b\",'#2f14fb','#ada854','#c353cc','#1c1392','#8eeeb0']\nmarks=[\"1\",\"2\",\"3\",\"4\",\"+\",\">\",\"<\",\"^\",\"v\",\"x\",\"d\",\"p\",\"P\",\"X\",'_','|']\npca = GaussianRandomProjection(n_components=2)\n\nlatent_2d = pca.fit_transform(latent_map)\n\nfig = plt.figure(figsize=(6,4))\nax = fig.add_subplot(111)#, projection='3d')\ns=[120]*5+[50]*6\nfor i in range (period**2):\n ax.scatter(x=latent_2d.transpose()[0][i],\n y=latent_2d.transpose()[1][i],\n # zs=latent_2d.transpose()[2][i],\n c=color[i//period],\n #s=s[i%period],\n marker=marks[i%period])\n # ax.set_xlim(-.6/1.4,.6/1.4)\n # ax.set_ylim(-.8/1.4,.8/1.4)\n # ax.set_zlim(-1./1.6,1./1.6)\n #ax.view_init(elev=45, azim=45)\nplt.title('Representations - Our method',fontsize=16)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14)\nfig.show()\nplt.savefig(\"latent_flatland.png\", bbox_inches='tight')\n```\n", "meta": {"hexsha": "76a2921e4741f2b1fae2b8bd7a9cd242851f7651", "size": 114092, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "fig3_flatland.ipynb", "max_stars_repo_name": "luis-armando-perez-rey/learning-group-structure", "max_stars_repo_head_hexsha": "e238308de73a29506d9281e1b55cdd2de2795ebb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": 
"2020-02-16T10:34:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-20T00:27:19.000Z", "max_issues_repo_path": "fig3_flatland.ipynb", "max_issues_repo_name": "luis-armando-perez-rey/learning-group-structure", "max_issues_repo_head_hexsha": "e238308de73a29506d9281e1b55cdd2de2795ebb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2021-06-08T22:32:50.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:49:42.000Z", "max_forks_repo_path": "fig3_flatland.ipynb", "max_forks_repo_name": "luis-armando-perez-rey/learning-group-structure", "max_forks_repo_head_hexsha": "e238308de73a29506d9281e1b55cdd2de2795ebb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-04-03T08:24:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-16T02:02:10.000Z", "avg_line_length": 87.830638953, "max_line_length": 26600, "alphanum_fraction": 0.7791606773, "converted": true, "num_tokens": 8452, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5698526514141572, "lm_q2_score": 0.523420348936324, "lm_q1q2_score": 0.2982724736454876}} {"text": "```python\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = (12, 9)\nplt.rcParams[\"font.size\"] = 18\n```\n\n# Reactor Kinetics\n\nReactor kinetics focuses on the relationship between criticality, power, and time. Delayed neutrons and reactor control are at the heart of reactor kinetics. As changing conditions in the core impact neutron multiplication, control rods, burnable poisons, and chemical shim may be introduced to keep $k_{eff}$ near 1. \n\nTime-dependent fluctuations in neutron population, fluid flow, and heat transfer are essential to understanding the performance and safety of a reactor. Such transients include normal reactor startup and shutdown as well as abnormal scenarios including Beyond Design Basis Events (BDBEs) such as Accident Transients Without Scram (ATWS). \n\n## Learning Objectives\n\nAt the end of this lesson, you will be equipped to:\n\n- State the relationship between criticality and reactivity\n- State the Point Reactor Kinetics Equations\n- Describe temperature feedback of reactivity\n- Apply the Point Reactor Kinetics Equations\n\n## Fission and Delayed Neutrons\n\n\n\n\n$$\\sigma(E,\\vec{r},\\hat{\\Omega},T,t,x,i)$$\n \n
\n\n \\begin{align}\n\t\t k &= \\mbox{\"neutron multiplication factor\"}\\\\\n\t\t &= \\frac{\\mbox{neutrons produced by fission in one generation}}{\\mbox{neutrons produced by fission in the preceding generation}}\\\\\n\t\t \\rho &= \\mbox{\"reactivity\"}\\\\\n\t\t &= \\frac{k-1}{k}\\\\\n\\end{align}\n\n\n\n\\begin{align} \n\\rho(t) = \\rho_0 + \\rho_f(t) + \\rho_{ext}(t)\n\\end{align}\n\nwhere $\\rho_0$ is the initial (time-zero) reactivity and\n
\n\\begin{align}\n \\rho(t) &= \\mbox{total reactivity}\\\\\n \\rho_f(t) &= \\mbox{reactivity from feedback}\\\\\n \\rho_{ext}(t) &= \\mbox{external reactivity insertion}\\\\\n \\rho_f(t) &= \\sum_i \\alpha_i\\delta T_i\\\\\n T_i &= \\mbox{temperature of component i}\\\\\n \\alpha_i &= \\mbox{temperature reactivity coefficient of i}.\n\\end{align}\n\n## Point Reactor Kinetics Equations\n\nSimplifying assumptions: \n\n- The reactor is a point\n- Assume all delayed neutron precursors have the same decay constant, $\\lambda$.\n- Let C(t) be the total number of delayed neutron precursors at time t.\n\nIn each neutron cycle, $k_{eff}n(t)$ is the number of new neutrons eventually produced, and is a combination of both prompt and delayed neutrons.\n\n\n\\begin{align}\n(1-\\beta)k_{eff}n(t) &= \\mbox{ prompt neutrons at the end of cycle}\\\\\n\\beta k_{eff}n(t) &= \\mbox{ delayed neutron precursors produced in the cycle}\\\\\n\\mathscr{l}' &= \\mbox{ cycle length of each cycle}\\\\\n\\frac{\\beta k_{eff}n(t)}{\\mathscr{l}'} &= \\mbox{ delayed neutron precursors per unit time}\\\\\n\\lambda C(t) &= \\mbox{ rate of decay by precursors}\n\\end{align}\n\nThus, the net rate of increase in the number of precursors is :\n\n\\begin{align}\n\\frac{dC(t)}{dt} &= \\frac{\\beta}{\\mathscr{l}'}n(t) - \\lambda C(t)\\\\\n\\end{align}\n\n### In each cycle:\n\n\\begin{align}\nn(t) &= \\mbox{ neutrons disappear}\\\\\n(1-\\beta)k_{eff}n(t) &= \\mbox{ prompt neutrons are produced by fission}\\\\\n\\frac{\\left[(1-\\beta)k_{eff} - 1\\right]n(t)}{\\mathscr{l}'} &=\\mbox{ net rate of increase by prompt neutrons}\\\\\n\\lambda C(t) &= \\mbox{ rate of neutron production by delayed neutron precursors}\\\\\nS(t) &= \\mbox{ rate of neutron production by non-fission sources}\n\\end{align}\n\nThus, the rate of increase in neutron population is the sum of production mechanisms:\n\n\n\\begin{align}\n\\frac{dn(t)}{dt} &= \\frac{1}{\\mathscr{l}'}\\left[(1-\\beta)k_{eff} - 1\\right]n(t) + \\lambda C(t) + S(t)\\\\\n &= \\frac{k_{eff}}{\\mathscr{l}'}\\left[\\frac{k_{eff} -1}{k_{eff}} - \\beta \\right]n(t) + \\lambda C(t) + S(t)\\\\\n\\end{align}\n\nWe can define the effective neutron lifetime as $\\Lambda = \\mathscr{l} = \\frac{\\mathscr{l}'}{k_{eff}}$ to simplify this equation:\n\n\n\\begin{align}\n\\frac{dn(t)}{dt} &= \\frac{\\rho - \\beta}{\\mathscr{l}}n(t) + \\lambda C(t) + S(t)\\\\\n\\frac{dC(t)}{dt} &= \\frac{\\beta}{\\mathscr{l}}n(t) - \\lambda C(t) \\\\\n\\end{align}\n\nThis can be solved by assuming a solution of the form:\n\n\\begin{align}\nn(t) = \\phi(t) = Ae^{\\omega t}\\\\\nC(t) = C_0 e^{\\omega t}\n\\end{align}\n\n\n### Multiple Delayed Neutron Precursor Groups\n\nIn reality, the delayed neutron precursors have very different halflives. 
As there are dozens of delayed neutron precursors, a typical strategy is to group these precurors into a small number of groups with similar halflives, as in the table below.\n\n\n|Group\t| Half-Life (s)\t| Decay Constant (s\u22121)\t| Energy (keV) |\tYield\t| Fraction |\n|-------------|-------------|-------------|-------------|-------------|-------------|\n|1 |\t55.72 |\t0.0124 |\t250 |\t0.00052 |\t0.000215 |\n|2 |\t22.72 |\t0.0305 |\t560 |\t0.00346 |\t0.001424 |\n|3 |\t6.22 |\t0.111 |\t405 |\t0.00310 |\t0.001274 |\n|4 |\t2.30 |\t0.301 |\t450 |\t0.00624 |\t0.002568 |\n|5 |\t0.610 |\t1.14 |\t- |\t0.00182 |\t0.000748 |\n|6 |\t0.230 |\t3.01 |\t- |\t0.00066 |\t0.000273 |\n\n\n\n\\begin{align}\n\\frac{dn(t)}{dt} &= \\frac{\\rho - \\beta}{\\mathscr{l}}n(t) + \\sum_{i=1}^G\\lambda_iC_i(t) + S(t)\\\\\n\\frac{dC_i(t)}{dt} &= \\frac{\\beta_i}{\\mathscr{l}}n(t) - \\lambda_iC_i(t)\\\\\n\\beta &= \\sum_i^G \\beta_i\\\\\ni&= 1,...,G\\\\\n\\end{align}\n\n## Feedback\n\nPutting all of this together, the point reactor kinetics equations, with feedback, result in a \"stiff\" set of PDEs:\n\n\\begin{align}\n\\frac{d}{dt}\\left[\n \\begin{array}{c}\n p\\\\\n \\zeta_1\\\\\n .\\\\\n \\zeta_j\\\\\n .\\\\\n \\zeta_J\\\\\n \\omega_1\\\\\n .\\\\\n \\omega_k\\\\\n .\\\\\n \\omega_K\\\\\n T_{i}\\\\\n .\\\\\n T_{I}\\\\\n \\end{array}\n \\right]\n =\n \\left[\n \\begin{array}{ c }\n \\frac{\\rho(t,T_{i},\\cdots)-\\beta}{\\Lambda}p +\n \\displaystyle\\sum^{j=J}_{j=1}\\lambda_{d,j}\\zeta_j\\\\\n \\frac{\\beta_1}{\\Lambda} p - \\lambda_{d,1}\\zeta_1\\\\\n .\\\\\n \\frac{\\beta_j}{\\Lambda}p-\\lambda_{d,j}\\zeta_j\\\\\n .\\\\\n \\frac{\\beta_J}{\\Lambda}p-\\lambda_{d,J}\\zeta_J\\\\\n \\kappa_1p - \\lambda_{FP,1}\\omega_1\\\\\n .\\\\\n \\kappa_kp - \\lambda_{FP,k}\\omega_k\\\\\n .\\\\\n \\kappa_{k p} - \\lambda_{FP,k}\\omega_{k}\\\\\n f_{i}(p, C_{p,i}, T_{i}, \\cdots)\\\\\n .\\\\\n f_{I}(p, C_{p,I}, T_{I}, \\cdots)\\\\\n \\end{array}\n \\right]\n \\end{align}\n\\begin{align}\n p &= \\mbox{ reactor power }\\\\\n \\rho(t,&T_{fuel},T_{cool},T_{mod}, T_{refl}) = \\mbox{ reactivity}\\\\\n \\beta &= \\mbox{ fraction of neutrons that are delayed}\\\\\n \\beta_j &= \\mbox{ fraction of delayed neutrons from precursor group j}\\\\\n \\zeta_j &= \\mbox{ concentration of precursors of group j}\\\\\n \\lambda_{d,j} &= \\mbox{ decay constant of precursor group j}\\\\\n \\Lambda &= \\mbox{ mean generation time }\\\\\n \\omega_k &= \\mbox{ decay heat from FP group k}\\\\\n \\kappa_k &= \\mbox{ heat per fission for decay FP group k}\\\\\n \\lambda_{FP,k} &= \\mbox{ decay constant for decay FP group k}\\\\\n T_i &= \\mbox{ temperature of component i}\n\\end{align}\n\n## Units of Reactivity\n\n### Delayed neutron fraction\n\nIn all of this, recall that the **delayed neutron fraction** is thus:\n\n\\begin{align}\n\\beta &= \\mbox{Delayed neutron fraction}\\\\\n&=\\frac{\\mbox{precursor atoms}}{\\mbox{prompt neutrons }+\\mbox{ precursor atoms}}\\\\\n&= \\frac{\\mbox{delayed neutrons}}{\\mbox{prompt neutrons }+\\mbox{ delayed neutrons}}\n\\end{align}\n\nThe delayed neutron fraction **$\\beta$ varies by fission isotope.**\n- For a reactor where all fissions are from $^{235}U$, this fraction is $\\beta_{235U} = 0.0064$.\n- In $^{238}U$, the fraction is lower, $\\beta_{238U} = 0.00157$.\n- And $^{239}Pu$ generates even fewer delayed neutrons per fission $\\beta_{239Pu} = 0.002$.\n\n\n#### Think-pair-share\n**Can you think of a physical reason that $\\beta$ should vary by fission isotope?**\n\n\nThe **effective delayed neutron fraction** 
($\\beta_{eff}$) varies dramatically by reactor because:\n\n- in most reactors, not all fissions are from $^{235}U$\n- breeding and burning occurs over time, $\\beta$ changes with burnup \n\n\n### Dollar\n\nWe define one dollar of reactivity in a particular reactor as $\\frac{\\rho}{\\beta}$.\n\n### Cent\n\nA cent is $\\frac{1}{100^{th}}$ of a dollar of reactivity, so it's $\\frac{\\rho}{100\\beta}$.\n\n### Per Cent Mille (pcm)\n\nA cent is $\\frac{1}{100^{th}}$ and one mille is $\\frac{1}{1000^{th}}$ of a dollar.\nThus, one per cent mille (pcm) of a dollar is $10^{-5}\\frac{\\rho}{\\beta}$.\n\n## The Delayed Neutron Fraction and Criticality\n\n### Subcriticality \n$k<1$ is a subcritical reactor. This is immediate and all subcriticality is effectively prompt, though delayed neutrons have a slight impact.\n\n### Supercriticality\n$k>1$ is a supercritical reactor.\n\n### Prompt\nIf the reactor is supercritical on prompt neutrons alone, then it is *prompt supercritical*. This is **bad** because control rods take more than $10^{-14}s$ to drop. Prompt supercriticality occurs when reactivity is equal to one dollar :\n\n\\begin{align}\n\\rho \\ge \\beta_{eff}\n\\end{align}\n\n### Delayed\n\nIf the reactor is supercritical only if delayed neutrons are included, then it is just supercritical, or *delayed supercritical*. Delayed, controllable supercriticality occurs when reactivity is positive but below one dollar :\n\n\\begin{align}\n0 < \\rho < \\beta_{eff}\n\\end{align}\n\n## Example\n\nLet's imagine a reactor with $\\beta_{eff} = 0.006$\n\\begin{align}\nk_{eff} = 0.99 \\\\\n\\rho &= \\frac{k_{eff} - 1}{k_{eff}} \\\\\n&= -0.01\n\\end{align}\n\nThis reactivity, in units of dollars, for this reactor, is:\n\\begin{align}\n\\frac{\\rho}{\\beta} &= \\frac{-0.01}{0.006}\\\\\n&= -1.67 [$]\\\\\n&= -167 [cents]\\\\\n\\end{align}\n\nThe same reactivity in a reactor with $\\beta_{eff} = 0.005$: \n\n\\begin{align}\n\\frac{\\rho}{\\beta} &= \\frac{-0.01}{0.005}\\\\\n&= -2.00 [$]\\\\\n&= -200 [cents]\\\\\n\\end{align}\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "aa34086b254cb050f6087fdf845f0a4475f76dd3", "size": 16589, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "reactor-physics/reactor-kinetics.ipynb", "max_stars_repo_name": "katyhuff/npr247", "max_stars_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-12-17T06:07:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T17:14:51.000Z", "max_issues_repo_path": "reactor-physics/reactor-kinetics.ipynb", "max_issues_repo_name": "katyhuff/npr247", "max_issues_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-08-29T17:27:24.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-29T17:46:50.000Z", "max_forks_repo_path": "reactor-physics/reactor-kinetics.ipynb", "max_forks_repo_name": "katyhuff/npr247", "max_forks_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-08-25T20:00:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-14T03:05:26.000Z", "avg_line_length": 38.850117096, "max_line_length": 347, "alphanum_fraction": 0.5049128941, "converted": true, "num_tokens": 3265, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5078118791767282, "lm_q2_score": 0.5851011542032312, "lm_q1q2_score": 0.29712131662441543}} {"text": "# Extended Rant Version\n\n> This version needs significant refactoring. It is fairly complete tough. \n\n## What happened\n\nI do not normally write blog posts.\n\nToday is not a normal day tough. \n\nI believe that I have found a missing link in my understanding of bayesian modelling theory.\n\nThis is not so much about the bayes belief update equation. It's good. The problem is that the notation is confusing, making it difficult to distinguish and source(compute, obtain) the various terms in the equation, in connection to the real-world phenomena. And then, when solving a real world problem, this is not even where you put most of the time-effort in. \n\nYou might think that bayes update rule:\n\n$p( A | B) = \\frac{p(B|A)p(A)}{p(B)}$\n\nis all that it takes.\n\nThis form is nice for proving that the bayes' theorem is correct. \n\nHowever, what do these terms mean? How does one apply this? \n\n\n\n\n\n\nYou might have heard that $A$ is a model parameter, and $B$ is observation. And that you can derive the model parameter $A$ from observations. \n\nStill, it is not at all obvious to someone that tries to figure it out for himself. Sources of confusion include:\n\n* `probability` and `belief` are not the same thing, yet they are denoted with the same letter\n* $p(B|A)$ does not need to sum to 1.0 while $p(A|B)$ does ? \n* values of $p(B|A)$ need to be between 0 and 1, while values of $p(A|B)$ do not ? \n* The name and function body for $p(A)$ and $p(B)$ is **completely** different, even tough they use the same symbols?\n* Some symbols evaluate to scalars, and others evaluate to vector? then, some places that used to be scalar can be vectors sometimes?\n* Data (observations) seems to have the same type as probability (`float`) , even tough the data can be `nominal`, like `T`/`H`, distribution, or vector what then?\n* `Discrete events`, `datapoints`, `sets` and `distributions` are very different things, yet they are denoted by the same symbol and *location*\n* `latent model parameters` versus `observable data` -- and where they come from, and where they go into? these are conceptually very different, yet they are denoted by location in a composite symbol as if they were interchangeable. Granted, the interchangeability is proven and even exploited in deriving the Bayes update rule: $p(A|B) p(B) = p(B|A)p(A)$ for any meaning of $A$ and $B$. However, this neat algebraic trick does no favours to the conceptual understanding and practical applications of this equation. \n* Most books go straight to continuous distributions and multidimensional variables. These are beautiful exhibits of maths' notations' brevity and generalism, but also are absolutely redundant. Discrete events are fine. Scalars are fine. The additional hoop created by going straight to things like fractional dimensions is a serious barrier for the children AND their teachers.\n* `class of function`, `instance of function`, `function name`, `function body` and `value of function` are all separate concepts in computer programming, but they are used interchangeably in mathematics books. \n* The update equation looks simple, but all practical problems have more than 2 dimensions that we are interested in exploring. Even for the most simple model $m$, we will want a (1) distribution of posterior beliefs for (2) possible $A$. That's already a 2D plot. 
but then, the $A$ can be 1,2,n dimensional, we can have different observation scenarios or multidimensional observations, and a variety of prior beliefs to worry about. And then, there is the variety of possible $models$. How do you visualize that? \n* It seems that, in paper-and-pencil maths, symbols are often aliased and shortened to save on hand movements. Modern practical computer science has long demonstrated that such savings are counter-productive. \n\nAnd the most confusing of all -- how does one build the `likelihood function`? Where does it come from? This is not specified in the bayes update equation at all, yet when doing exercises, one is expected to just magically come up with a correct one. \n\nFor the longest time I thought that this is something that needs to be derived from the Bayes' equation itself. This is how you solve all the other problems in the school, right? The teacher told you that $F=m*a$ so that's all that you should need. . . . actually, no. Bayes' theory tells you a true nothing, zero, nil, about the model $M$ of the world. This you have to invent separately. Only after you are done inventing, Bayes can tell you something about how good your model $M$ is. This is a true stunner for a high-school children who, up to this moment, were lead to believe that there exists only one correct solution to all problems. \n\nAmazingly, despite all the talk about probabilities, coin tossing and chances of getting cured by a new medicine -- the likelihood function, and the value of the likelihood function, as well as prior and posterior beliefs, have no random chance in them at all. In bayesian thinking, some things are hidden, but no things are random! \n\nThe only place where randomness is allowed is the plant. This randomness can be modelled in the model. \n\n--- \n\nOK, so ranting over, let me try to clarify some of these things.\n\n> Warning: Over the course of this section, I will rewrite the classic equations a couple of times. Do not call me out on the blatant fact that I use different symbols for the same thing across the first part of this article. This is necessary to make my point.\n\n**1. Reality check.** \n\nTo ground the concepts in some tangible reality, let's consider that we have a **plant**. \n\nPlant is just a name for some object, or process, that has inputs and outputs. It does not need to be a manufacturing plant, or like a greenhouse plant. All that it does is that it exists, has inputs, and outputs:\n\n\n>>> Drawing here <<<\n\nFor now, we will only be concerned with the plant's outputs.\n\nThis plant produces data $\\{ D \\}$, which we can observe. The process of producing instances of $D$ can be approximated and described by a $model$ $M$. The model $M$ describes the plant somehow, but at this point we should be clear that there can be more than one good model for that plant (even tough most models will be useless). \n\nLet's suppose that we have a model $M$ of the plant, that takes a hidden(latent) parameter $\\phi$. It could be then said that the data $D = M(\\phi)$. \n\nWe'd like to know what the true $\\phi$ is, but we cannot be sure. We cannot measure $\\phi$ directly. The $M$ can have a probabilistic nature to it, in sense that depending on $\\phi$ it can produce given $D$ more or less often. \n\nStill, we are allowed to have suspicions and beliefs about what the true $\\phi$ is. We can observe the events $D$ that happen with $M$'s and $\\phi$'s contribution.\n\n**2. 
Use the reasoning to figure out $\\phi$**\n\n\nFor a start, let's replace $A$ and $B$ with model parameter $\\phi$ and data(observation) $D$:\n\n$$p( \\phi | D) = \\frac{p(D|\\phi)p(\\phi)}{p(D)}$$\n\nNext, let's rewrite the bayes' update equation *slightly* differently (this is not the end of the re-writing):\n\n$$p( \\phi | D) = \\frac{p(D|\\phi)}{p(D)}\\times p(\\phi)$$\n\nCompared to the original form, this already gives you a better hint: you can get posterior belief, $p( \\phi | D)$, from the prior $p(\\phi)$, modified by an \"updater term\", $\\frac{p(D|\\phi)}{p(D)}$.\n\nOk, so what about this updater term seems to be so difficult?\n\nWait. Let me clarify the concepts one by one. And there is quite a bit to untangle, before the road becomes straightforward for us.\n\n\n**Untangle the concepts of Belief, Probability, and Sample**\n\n\nLet's take in a new concept: that `belief` is not the same as `probability`, and both are distinct from `sample`.\n\nHere are some hints as to how to separate the two:\n\n* `Belief` is something that you hold in your head. `Probability` is a property of the world. Since there exists an impenetrable epistemological barrier between your \"inner\" and \"outer\" world, these things are already distinct.\n* Things do not happen because of `belief`, nor because of their `probability`. \n* Things happen inside the `plant` according to the plant's model and parameters. The plant's model is $M$ and it's parameter is $\\phi$. We can observe data $D=f(M, \\phi)$.\n* `Belief` is something that we can change ourselves. `Probability` is not something that we can change.\n* `Probability` we can estimate from frequency of observation. It follows that for things that are never observed, we cannot talk about their probability. \n* `Belief` we can assume, virtually out of nothing. We can then either hold oto that belief, or update it. We can update it, for example, arbitrarily(without any reasoning), or using some rules. For example, using Bayes' update function (maximum amount of reason).\n* Estimate of the value of `probability` can be a function of assumptions, including `beliefs`. Updates of `belief` can be a function of `data`. \n* `Probability` and `belief` seem to have the same unit, \"fraction of $\\Omega$\" -- where $\\Omega$ is \"all that there is, all possibilities\". Maybe this is the reason why historically they both have been noted as $p()$. This is a poor reason. I propose that we give them a units. \n* I propose that, to make it easier to keep a distinction between `probability` and `belief`, they carry distinct units.\n\nHere, I propose the following units:\n\nI will be using unit of `Rey`, or R for belief, that certain statement is true.\n\nI will be using the symbol $\\Omega$ for probability on that a certain parameter, event, or statement, or data point $E$ will happen in the future, or is happening while we do not see it happening. Note again, that this is very different from having a completely certain data set that shows that \"$E$ has happened 70% of the time\". \n\nWhile we are at it, we can also add an unit for that observed dataset, $\\{ D \\}$. The reason for a separate unit here is that the set of all possibilities, $\\Omega$ is infinite, while the dataset $\\{ D \\}$ is a finite **sample** from $\\Omega$. Hence, it is **not true** that if we have a dataset that shows \"$D$ has, so far, happened 70% of the time\" is equal to \"$D$ is 70% of $\\Omega$\". 
Instead, let's define a new unit, [$S$] to describe the prevalence of $E$ in dataset $\\{ D \\}$.\n\nTo summarize:\n\nGiven that, for us, $E$ could mean either $\\phi$ or $D$, we have:\n\n0.7[R]=700[mR] means \"My belief is such that I am 70% sure that a specific value of $\\phi$ is true. I leave the 30% to beliefs that some other value of $\\phi$ can be true.\"\n\n0.8[$\\Omega$]=800[m$\\Omega$] means \"In this chance process, If I sample forever, I will get $D$ 80% of the time. I will get something else 30% of the time\".\n\n0.9[$S$]=900[m$S$] means \"In this dataset, which is a sample of $\\Omega$, event $D$ occurred 90% of the time\".\n\nAgain, although [R], [$\\Omega$], and [$S$] seem to be mathematically interchangeable and can be expressed in percent, or fraction of a whole -- semantically they are distinct, as they refer to different concepts.\n\n\n\nHaving these insights, I propose a following notation.\n\n## The stage star $b(\\phi)$\n\nLet's use function symbols, $f(something)$ : \n\n* $b$ for belief, both prior and posterior,\n* $l$ is for likelihood, \n* $m$ for marginal probability. \n\nLet's use the variable symbols, $something$: \n* $\\phi$ for a certain model parameter that belongs to a set of considered model parameters $\\{ \\phi \\}$ \n* $D$ for a datapoint that already occurred, and we have it in a dataset $\\{ D \\}$; the dataset $\\{ D \\}$ has been sampled from a population $\\Omega$\n\nIt follows that, \n\nThe **new belief** about parameter $\\phi$, that takes into account the **new information** from observing a datapoint $D$, is noted as $b(\\phi|D)$, \n\nAnd that $b(\\phi|D)$ **can be calculated** as coming from old(prior) belief about $\\phi$, noted as $b(\\phi)$ \n\nAnd that this calculation involves evaluating a Bayesian modifier term, $\\frac{l(D|\\phi)}{m(D)}$:\n\n$$b(\\phi|D) \\leftarrow \\frac{l(D|\\phi)}{m(D)} \\times b(\\phi)$$\n\n\n## The ugly duck $m(D)$\n\nThe marginal probability $m(D)$ is also sometimes called \"evidence\". It is calculated by summing (or integrating) the likelihood over all considered values of $\\phi$, weighted by the prior belief in $\\phi$. \n\n$$m(D) = \\sum_{\\phi}{l(D|\\phi)b(\\phi)}$$\n\nBut wait!!! In the above equation, the $\\phi$ is not the same $\\phi$ as in the previous equation! If you didn't know that, I do not blame you. \n\nInstead, here $\\phi$ is to say \"for all $\\phi$s\". Because of this, one is not allowed to substitute this expression for $m(D)$ into the bayes' update equation from the previous section -- at least, not using regular algebraic rules as learned in high school : that would create a notation conflict!\n\nIn my school times, changing the meaning of notation between different classes was the single most confusing obstacle to quick assimilation of new concepts. If I learned something in the chemistry class, I had to forget about it before going to the Physics class, or else I was in for trouble!\n\nBe advised that for a child's mind, like for the early computers, all symbols and variables are \"global\". Have you seen that viral video where a 3-year old girl says that a black man ate the cookies? She did not learn to remap the concepts for political correctness when being filmed. She merely rehashed the concepts she heard from the people that surround her. \n\nFor adults, switching the context is possible, but still taxing. \n\nChanging the meaning of notation in a middle of doing a school problem is a good recipe for catastrophe. 
This is in no small part due to that the children are plain not afforded the time and slowdowns that it takes to create correct concept remapping in their heads. They are expected to solve a problem in under 5 minutes, or fail. You tell me what happens to most of us.\n\nLet's make things easier to learn, by using a clearer, non ambiguous notation.\n\nIn order to note that the $\\phi$ in the following equation has a \"set of $\\phi$\" meaning, rather than just \"single scalar $\\phi$\" meaning, let's note it as $\\{\\phi\\}$. The symbols $\\{, \\}$ are used in secondary schools to denote closed sets. That hints the student that he must consider the entire set of $\\phi$s and not just a single $\\phi$. \n\nMoreover, we can prepare the same person for using list comprehension semantic from Python (and hence, make Python easier for this person), by using notation like \"for $\\alpha$ in $\\{\\phi\\}$\" or \"for $\\alpha \\in \\left<\\phi\\right>$\":\n\n$$m(D) = \\sum_{\\alpha \\ \\in \\ \\{\\phi\\}}{l(D|\\alpha)b(\\alpha)}$$\n\nNow we are safe to write that\n\n$$b(\\phi|D) \\leftarrow b(\\phi) \\times \\frac{l(D|\\phi)}{ \\sum_{\\alpha \\ \\in \\ \\{\\phi\\}}{(l(D|\\alpha)b(\\alpha))} }$$\n\n\nThe veterans will notice how now, no symbol aliasing occurs. Every symbol has unique semantic meaning.\n\nMoreover, we can be much more clear about where the specific numbers needed are to come from. Only now, having this unambiguous expression, one is equipped to attempt to solve practical problems.\n\n\n\n> Warning: In the following part of the article, I will not be changing the notation any more, so feel free to call me out on any errors.\n\n---\n\n## Prior $b(\\phi)$\n\n$b(\\phi)$ is the initial, or prior **belief** about one of the possible $\\phi$s. \n\n$b(\\{\\phi\\})$ is the initial, or prior **belief** about all of the possible values for $\\phi$s, and is a vector, or a set: that is, there is a new value $b(...)$ for each $\\phi$ from the set of $\\{ \\phi \\}$s\n\nValues for $b(\\{\\phi\\})$ come truly from outside of the dataset and model. In other words, one has to start with some beliefs, something based on external knowledge of the problem at hand. \n\nAt this point, most books go into depths about the challenges of holding an \"informative\" or \"uninformative\" prior, and give (often unclear) experimental examples on how do they affect the result.\n\nHere I hope to make the examples clearer, by the virtue of using the unambiguous notation developed in the previous section.\n\nFor example, we can have, for $\\{\\phi\\}$ = [0.1, 0.5, 0.9], $b(\\{\\phi\\})$ = [0.1, 0.8, 0.1]. This means that our belief is that $\\phi$ is most likely 0.5, but we also allow for a total doubt of 20% that it can give way to $\\phi$ being 0.1 or 0.9. \n\nImportantly, this distribution does sum up to 1, that is $\\sum_{\\alpha \\ \\in \\ \\{\\phi\\}}b(\\alpha) \\ \\ = 1$\n\nOne can start either with an \"uninformative\" prior or an \"informative\" prior. \"uninformative\" priors are fine if there is enough experimental data to come by; however, if the data is scarce, and there is something that we already know about the problem (e.g., someone told us, or we made a related but not identical experiment), then would be a mistake not to incorporate it into our thought process.\n\n---\n\n## Posterior $b(\\phi | D)$\n\n$b(\\phi | D)$ is the posterior **belief** after update. \n\nFor simple problems, we often talk not about different $D$s, but rather just one discrete $D$. 
Obviously, data sets $\\{ D\\}$ will also enter the fray as we become more proficient.\n\nIt can be important to note that there is a difference between:\n* $b(\\phi | D)$ -> Scalar\n* $b(\\{\\phi\\} | D)$ -> vector\n* $b(\\phi | \\{ D\\})$ -> vector\n\nAll of these involve a vector computation or a loop of some kind. \n\nHaving that, it is even more clear to see that $b(\\{ \\phi \\} | \\{ D\\})$ denotes and evaluates to a 2D array of numbers.\n\n\nAlternatively, we could talk about updating our belief for one specific $\\phi$ depending on what $D$ we get. In other words, building a closed-form function. Although this view can be useful, this is not what most examples in the literature are about. $D$ is most often a given constant.\n\n$b(\\{\\phi\\} | D)$ over a range of $\\phi$ evaluates to a 1D numeric vector, and this is what you typically want to get to: a description of the final belief about what the latent $\\phi$ could be. In other words, an indication of \"what should you believe about the different possible $\\phi$\". This distribution does sum up to 1.\n\nFor example, let's say that after having our prior beliefs $b(\\{\\phi\\})$ = [0.1, 0.8, 0.1] of what the latent value of $\\phi$ could be, we observe a new data point, $D$. Having this new data point, we update (it doesn't yet matter how) our belief for the values of $\\{\\phi\\}$=[0.1, 0.5, 0.9] to a new $b(\\{\\phi\\} | D)$=[0.45, 0.45, 0.1]. Meaning that we still believe that the $\\phi$ value of 0.9 is not credible, but we now give equal credibility to the values $\\phi$=0.1 and $\\phi$=0.5.\n\nTo summarize, the \"$|D$\" part of the notation is there to symbolize that this is about the updated belief, given the data point $D$. Not that we have a sweep of datapoints, and not that we distribute our belief across different $\\phi$. \n\n---\n\n## The Likelihood function $l(D|\\phi)$\n\n$l(D|\\phi)$ is the \"likelihood function\". \n\n\"Likelihood\" itself is a word that typically doesn't really tell you much, because the meaning used here is quite different and distinct from the common-language synonyms of \"likelihood\". Here, by \"likelihood\" we do not mean any of \"frequency\", \"chance\", \"odds\", \"feasibility\", \"plausibility\", etc. We mean something quite specific here.\n\nLet's attach a concrete meaning to the word \"likelihood\".\n\nHere, \"likelihood\" means \"the function, and the values of probabilities, that a single point (or single batch) of data $D$ has been generated by the $model$ $M$ with a given parameter $\\phi$\".\n\nLikelihood values carry the unit of probability, $[\\Omega]$.\n\n\n\nNotably, \n\n* A value array $l(D|\\{\\phi\\})$ does not need to sum to 1.0, unlike a belief distribution. \n* the individual values of $l(D|\\phi)$ for any discrete $\\phi$ must still be in the range [0, 1], as it should be with a probability.\n\n\nFor a contrived example, we can have $l(D|\\{ \\phi \\})$ = [1.0, 0.1, 0.5] for $\\{ \\phi \\}$ = [0.1, 0.5, 0.9] and data $D$ = [0]. \n\nThis means that: \n* The data value \"0\" is reliably always generated by the $model$ $M$ when the $model$'s $\\phi$==0.1. \n* if the $\\phi$==0.5, then this data would only be generated rarely, approx. 10% of cases. 
\n* if the $model$'s $\\phi$=0.9, then the data value \"0\" is still generated approximately half of the time.\n\nWhen sampling infinitely from the model $M$ having the parameter $\\phi$, we cannot get a $D$ population fraction bigger than 1.0$[\\Omega]$ or smaller than 0.0$[\\Omega]$.\n\nHowever, the sum of all $l(D|\\{ \\phi \\})$, or $\\sum_{\\alpha \\in \\phi}{p(D|\\alpha)}$ can be more or less than 1.00 \n\nMoreover, if we observe a single data point $D$=[0], then we still cannot be sure if the true value of model's $\\phi$ was 0.1, 0.5, or 0.9. All possible values of $\\phi$ are consistent with getting $D$=[0] sometimes.\n\nNote that I have used the word `probability that` rather than `belief that`. This is because for likelihood, there is no guessing of any kind involved, and there is nothing latent(hidden). Instead, given a datapoint, and given one (or a list) of model parameter values, we calculate the already-happened chance that the data, as seen, has happened. We can calculate this chance for all possible hidden parameter values, irrespectively of our belief in them. This is kind of like a grid search, or grid view. Us doing this calculation does not favour any $\\phi$ out of the set of $\\{ \\phi \\}$ (yet) and the result does not mean that any specific $\\phi$ is the right one. \n\nObviously, in order to perform this calculation, we need a model of the world, and a kind of that uses these hidden parameters and is conductive to our calculation. Here much of the trouble and effort of the user of bayes' rule comes in. Many treatises on how awesome the bayes rule is will not help you with this.\n\nIt is up to the user to construct a model, make sure that the model is representative of the reality, and that it is conductive to the likelihood function value evaluation. \n\nLet's take a look at two most common, basic models, and how is the likelihood function constructed for them. The examples listed below are NOT to say that this is the only way the likelihood function can be constructed! \n\n\n\n## Example model of coin toss, and constructing it's likelihood function.\n\n\nThe classic coin-toss toy model is a good one, and widely applicable and extensible, if explained correctly. \n\n### Model description\n\nHere it is so that you don't have to go back to the book.\n\nLet's say that the result of the coin toss can be 0 or 1. Our model approximates the real coin by ignoring the possibility of the real coin landing on the edge.\n\n---\n\nProposition: biased coin gives \"1s\" more often. \n\n---\n\nLet's say that the coin could be biased(that is, unfair) and we describe this bias with a parameter $\\phi$. We do not know, and cannot know directly what the true $\\phi$ is. What we do know is that the $\\phi$ can be anywhere from 0.0 to 1.0, with a 'fair' coin being at $\\phi \\equiv$ 0.50. \n\n* 0.10 means that the coin is very unfair towards only giving zeroes, \n* 0.90 would mean that it is very unfair towards only giving ones. \n* 0.50 means that it gives 0 in 50% times, and 1 in 50% of times, that is, the coin is fair\n\n\n### The Likelihood function for the model\n\nTo compile this description into something computable, we can say that our model for probability of getting a 1 is $l(D \\equiv 1 | \\phi)= \\phi$. Symmetrically, the model of probability of getting a zero, is $l(D \\equiv 0 | \\phi)=(1-\\phi)$\n\nThis likelihood function is not something that comes out of the world. It does not come from the bayes rule. 
It is a new construct that we have created to link our suspected coin bias $\\phi$ with the probability of getting an observation $D$. We have created a $model M$. We are using the model $M$ to approximate and describe the real coin. \n\nNote that there is no belief involved here, only assumptions; we assume that $model M$ is an approximation of a real coin.\n\n### Prior belief\n\nWe can give our prior belief that the coin is fair, and our disbelief that it is unfair. We can do it by setting, for a possible values of $\\{\\phi\\}$ = [0.1, 0.5, 0.9], values of initial belief as $b(\\{\\phi\\})$ = b([0.1, 0.5, 0.9]) = [0.1, 0.8, 0.1]\n\n### Calculations\n\nPutting in some concrete numbers, let's say that we believe that $\\phi=0.5$ (perfectly fair) and then we toss the coin, and get $D\\equiv1$. What was the likelihood to get such a result? $l(D \\equiv1 | \\phi \\equiv 0.5) = 0.5$. \n\nAt this point, we can end our lame discussion -- the coin is fair, the chance of getting a one was 50%, so everything is fine, right?\n\nThat's correct. However, what if we do not fully believe that the coin is fair? what if we suspect that the coin is actually biased towards zero?\n\nLet's compute what was the chance of getting a 1, if the latent parameter $\\phi$ was 0.1: $l(D \\equiv 1 | \\phi \\equiv 0.1) = 0.1$.\n\nSo, **If** the coin was heavily biased towards zero, then we still could get a 1, 10% of the time, and getting a result of \"1\" in one coin toss is not very surprising. \n\nThere is seemingly nothing to worry about, except for that we did not make any progress on discovering the true value of $\\phi$, even tough we have made an indirect observation of it, through observing the $D$.\n\nAll that we know so far is that the likelihood of observing the result, depends on the parameter in a model $M$ of the plant $P$ that generated that result. To say this, is to say nearly quite nothing.\n\n### No result?\n\nWhat we really care to know, is what to believe about $\\{ \\phi \\}$ -- is the real value closer to 0.5 or to 0.1? So far we have nothing from reason to go by to believe either of these things. Or do we?\n\nHere is the first time when the bayesian update comes in. And, this is what the books on bayesian reasoning fail the hardest at. They start with discussing prior belief, posterior belief, and worry about how the prior affects the posterior, and why having a prior belief is or is not a good idea. And how to convince other people to your priors. They droll about informative or uninformative priors. Prior this, prior that. \n\nHowever, all this fails because the Bayesian update is truly useless unless you explain and understand the likelihood function first.\n\nAgain: Be aware that the likelihood function DOES NOT COME FROM THE BAYES EQUATION. It comes from the model of the plant!\n\nBefore we get to the bayesian belief update, please muster your patience, and take a minute and think about other possible models of likelihood for coin toss-- even if they are implausible. Or for any other model of other world phenomena that you care about. Another classic problem in this category is looking for disease in population, using an imperfect test. I am sure you have heard about it.\n\n### Why bother?\n\nIn the previous section, we have created and used a \"forward model\" of the world. 
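\n\nTo make the \"forward model\" idea concrete, here is a minimal sketch in Python (not part of the original text; the names `likelihood` and `sample_coin` are made up for illustration) of the coin model $M$ and its likelihood function described above:\n\n```python\nimport numpy as np\n\nphis = np.array([0.1, 0.5, 0.9])      # candidate values of the latent bias phi\n\ndef likelihood(D, phi):\n    # l(D|phi): probability that the coin model M with bias phi produces the observation D\n    return phi if D == 1 else 1.0 - phi\n\n# likelihood of observing D=1 under each candidate phi: [0.1, 0.5, 0.9]\nprint([likelihood(1, p) for p in phis])\n\n# the same model can also be run \"forward\" to generate synthetic observations\nrng = np.random.default_rng(0)\ndef sample_coin(phi, n):\n    # n tosses of a coin with bias phi, returned as 0s and 1s\n    return (rng.random(n) < phi).astype(int)\n\nprint(sample_coin(0.9, 10))\n```\n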
\n\nThe reason we do it this way is that \"forward models\" of this kind, that is, causal models (where the principle of action-reaction is held), are relatively easy to make for a very wide variety of real-world phenomena. \n\n\n## Example model with linear regression, and constructing its likelihood function.\n\nI bet that you are anxious to hear if bayesian reasoning can be used for any more than coin tosses, or figuring out whether taking a coronavirus test makes sense or not.\n\nYes, it can. Here is an example of how you would treat a linear regression problem, of the kind:\n\n\nProposition: Taller people are heavier. \n\nQuestion: What is the proportionality coefficient?\n\nClarification: In Bayesian belief terms, what should we believe about various propositions for the value of the proportionality coefficient?\n\n### Model description\n\n$\\mapsto$ model input: height $h$\n\n$\\mapsto$ model output: weight $w$\n\n$\\mapsto$ model parameters: proportionality coefficient $\\phi_{prop}$, and the uncertainty descriptor $\\phi_{\\sigma}$. In other words, we have two latent parameters, not one.\n\n$\\mapsto$ Plant's model of uncertainty: random variable $N(0,\\sigma )$\n\n$\\mapsto$ The complete model for predicting the weight for a person of height $h$ is $\\begin{equation}w = h * \\phi_{prop} + N(0,\\phi_{\\sigma} )\\end{equation}$\n\n$\\mapsto$ noise-free version of the model -- that is, if we set the noise to zero, we get the weight model $w = h * \\phi_{prop}$ \n\n### Likelihood function for linear regression\n\nHere is the critical bit. We construct the equation for the likelihood function of data point $D_{weight}$ happening, given the pair of ($\\phi_{prop}$, $\\phi_{\\sigma}$). \n\nLet's say that the likelihood, or probability, of this data point happening is inversely proportional to the distance of the data point value from the model's predicted noiseless value. Hence we want something like\n\n$ l( D_{weight} | (\\phi_{prop}, \\phi_{\\sigma})) \\ \\ \\propto \\ \\ 1/distance$\n\nHere are some proposals for what this could look like.\n\nDistance is simply the difference between the model's predicted, noiseless $w$ and the observed $D_{w}$. For clarity of notation, let's use the symbol delta, defined as $\\Delta = D_{w} - h * \\phi_{prop}$\n\nOne possible distance metric is:\n\n$_{proposed}distance = abs ( \\Delta ) $\n\nWe can also use the square of the difference, which has the nice property that it weighs large distances more. (It also has a couple of other nice properties):\n\n$_{proposed}distance = \\Delta^{2} $\n\nWe also want to weight the distance by the amount of uncertainty.\n* If the model certainty is high ($\\sigma$ is small), then the \"improbability\" due to distance will be higher.\n* If the model certainty is low ($\\sigma$ is large), then the probability is higher even at a distance.\n\nWe account for this by \"weighing\" the distance by $\\sigma$:\n\n$_{proposed}weightedDistance = (\\frac{\\Delta}{\\phi_{\\sigma}})^{2}$ \n\nOne more property that we need, before we can get to the likelihood, is that the likelihood for any prospective parameter must evaluate to a value between 0 and 1. For that, we need to get a bit creative . . . 
or \"inspired\".\n\n\nHere's a proposed likelihood function that has all the properties required above, sourced from https://en.wikipedia.org/wiki/Bayesian_linear_regression#Model_setup\n\n$ l( D_{weight} | (\\phi_{prop}, \\phi_{\\sigma})) = \\sigma * \\exp(-\\frac{1}{2\\sigma^{2}} \\Delta^{2}) $\n\nThis function has this nice property that the maximum value is 1.0, quickly decays towards zero as the $\\Delta$ increases, and works nice with $\\Delta$ weighted by $\\sigma$.\n\nThis specific function also has the property that it integrates to 1.00 for $\\Delta$ of $(-\\infty , +\\infty )$ -- but we really do not require this property to be there in the likelihood function. As long as you choose a function that outputs something between 0 and 1, it's fine. To see that this is the case, recall that we are talking about the probability of data given the model parameter. There can be more than one more model parameter that will often (or always) produce a given datapoint value. That's fine. We will make the probabilities in the likelihood compatible with beliefs using a normalizing term - \"Evidence\".\n\nAgain, let me be clear that the above presented likelihood function, is merely an example function that happens to have desirable properties. It is by far not the only function that exhibits this properties! For example, you can have a process that does not exhibit gaussian uncertainty like $\\propto \\exp(-\\frac{1}{x^{2}})$. That's fine for Bayesian reasoning!.\n\nNow, that you see that it is possible to build a likelihood function, suitable to describe the problem of \"what should I believe...\" in bayesian terms for a regression problem, you can begin to believe that it is possible to construct such a function for a very wide variety of problems!\n\nWe are not done yet. There is one more hoop to clear before we can get to the posteriority: The Evidence function.\n\n\n## Evidence, $m()$\n\nThe marginal probability $m(D)$ is also sometimes called \"evidence\".\n\n\nWhy marginal? this word is not to be confused with things like \"unimportant\", \"extreme\", or \"bad\". The word \"marginal\" is just a code-word that came from the depths of history. One used to write the results of an experiment (when taking the data) in a table (on paper!). Then, came the time to analise the data. Since copying the numbers by hand is time consuming, one would use the margin of the page to squeeze in additional scribblings. Hence, these computed numbers are \"marginal\". \n\n\nWhy \"evidence\"? I am not sure, but what I do understand is that again, the name-word, code-word \"evidence\" does not have anything to do with common-sense meaning of that word. It does not mean \"certainty\", \"proof\" , \"confirmation\", \"verification\", \"display\", \"demonstration\", e.t.c. Here we are only using this word as a name for a certain operation.\n\nAs hinted above, the value of the marginal probability, $m(D)$ is calculated by summing (or integrating) the likelihood function of $D$ for all of the considered values of $\\phi$, weighted by the prior belief in $\\phi$. \n\n$$m(D) = \\sum_{\\alpha\\in\\{\\phi\\}}{l(D|\\alpha)b(\\alpha})$$\n\nWhat we get in effect, is a scaling, or normalization term that makes the units and scale of belief and likelihood match. \n\nIn the original formula, the same thing is denoted as $p(D)$. Which is triple as confusing because (a) requires an integral or summation over all $\\phi$, (b) it integrates to more than 1.0 and hence (c) it has none of the properties of the other $p()$'s. Uhh. 
(?!?!?). \n\nIf you are confused, I do not blame you.\n\nInstead, I propose that it would make much more sense never to write $p(D)$ nor $p(B)$, but rather, teach what the marginal probability function really is, right away: that $m() = m(D,l(),\\{\\phi\\}, b(\\{\\phi\\}))$. Sounds complicated? It's tedious to write, but it is not complicated. (That's why people `in the know` shorten it). Let's see an example:\n\n\nHere is an example how to perform this calculation. For our coin toss example, we have:\n\n$m(D \\equiv 1, l(),\\{\\phi\\}, b(\\{\\phi\\}) ) = \\ldots$\n\n$\\ldots \\sum_{\\alpha\\in\\{\\phi\\}}{l(D \\equiv 1|\\alpha)b(\\alpha}) = \\ldots $\n\n$ \\ldots l(D \\equiv 1 | \\phi \\equiv 0.1)b(\\phi \\equiv 0.1) \\ldots$\n\n$ \\ldots + l(D \\equiv 1 | \\phi \\equiv 0.5)b(\\phi \\equiv 0.5) \\ldots$\n\n$ \\ldots + l(D \\equiv 1 | \\phi \\equiv 0.9)b(\\phi \\equiv 0.9) \\ldots$\n\n$ = \\ldots \\\\ $\n\n$ \\ldots 0.1*0.1+0.5*0.8+0.9*0.1 = 0.01 + 0.4 + 0.09 \\ldots \\\\ $\n\n$ \\ldots \\large{= 0.5}$\n\nSo, the marginal probability of the \"1\" happening, under current belief system, is 0.5.\n\nKind of anticlimactic?\n\nWait until you see what happens when the prior beliefs were different (unbalanced prior). \n\nFor example, for $b(\\{\\phi\\})$ = [0.8, 0.1, 0.1] we get $m(D \\equiv 1, l(),\\{\\phi\\}, b(\\{\\phi\\}) )$ = 0.22 . In other words, if our prior belief was that the coin is biased towards zero, then the \"marginal probability\" (Evidence!) of getting $D \\equiv 1$ is lower!\n\n\nAnd then, for $b(\\{\\phi\\})$ = [0.1, 0.1, 0.8], $m(D \\equiv 1, l(),\\{\\phi\\}, b(\\{\\phi\\}) )$ = 0.78\n\nFor \"uninformative\" prior belief of $b(\\{\\phi\\})$ = [0.33, 0.33, 0.33], $m(D \\equiv 1, l(),\\{\\phi\\}, b(\\{\\phi\\}) )$ = 0.50 again. \n\nHow come?\n\n\nIf this surprises you, you are in a good company. The surprise comes from the historical fact that the \"marginal probability function\" has the word \"probability\" in it. It would seem that the probability of something happening depends on our belief about it???\n\nAlas, this is not the case. \"Marginal Probability\" or \"evidence\" is merely a scaling factor that we need to apply to the likelihood, in our full equation for update of the prior belief:\n\n$$b_{\\phi, updated} \\ = \\ b(\\phi|D) \\leftarrow b_{\\phi, prior} \\times \\frac{l()}{m()} \\ = \\ b(\\phi) \\times \\frac{l(D|\\phi)}{ m(D,l(),\\{\\phi\\}, b(\\{\\phi\\})) } \\ = \\ b(\\phi) \\times \\frac{l(D|\\phi)}{ \\sum_{\\alpha \\ \\in \\ \\{\\phi\\}}{(l(D|\\alpha)b(\\alpha))} }$$\n\nHence, it is much more enlightening to say that the \"Bayes Factor\" -- the ratio $\\frac{l()}{m()}$ -- tells us how we should modify our prior belief, given the data point $D$. \n\nThis factor can be less than one, or more than one. It could be close to zero if $D$ is unlikely, or it could be tending to infinity if our prior belief was very very low. The \"Bayes Factor\" is composed of the interplay between the likelihood function and our prior belief about all possible $\\{ \\phi \\}$. Seeing it this way dispels any magic about it. 
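\n\nAs a small numerical cross-check (a sketch added here, not in the original; the function name `evidence` is ad-hoc), the values above can be reproduced in a few lines of Python:\n\n```python\nimport numpy as np\n\nphis = np.array([0.1, 0.5, 0.9])        # candidate phi values\nlik_D1 = phis                           # l(D=1 | phi) = phi for the coin model\n\ndef evidence(prior):\n    # m(D=1) = sum over all candidate phi of l(D=1|phi) * b(phi)\n    return float(np.sum(lik_D1 * np.asarray(prior)))\n\nprint(evidence([0.1, 0.8, 0.1]))        # 0.50\nprint(evidence([0.8, 0.1, 0.1]))        # 0.22\nprint(evidence([0.1, 0.1, 0.8]))        # 0.78\nprint(evidence([1/3, 1/3, 1/3]))        # ~0.50\n\n# the \"Bayes Factor\" l()/m() that rescales the prior belief for each candidate phi\nprior = np.array([0.1, 0.8, 0.1])\nprint(lik_D1 / evidence(prior))         # [0.2, 1.0, 1.8]\n```\n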
\n\n\n\n## Finally, the Bayesian update\n\nPhew.\n\nAfter all this introduction -- which is not really introduction, it is THE meat that should be taught in school in the first place -- we can get to the bayesian method for updating beliefs:\n\n$$b(\\phi|D) \\leftarrow b(\\phi) \\frac{l(D|\\phi)}{ \\sum_{\\alpha \\ \\in \\ \\{\\phi\\}}{(l(D|\\alpha)b(\\alpha))} }$$\n\nSee the next chapter for a demonstration of this method in action.\n\n\n\n```\n#hide\n```\n\n\n```\n\n```\n", "meta": {"hexsha": "b065c68dbf946e156e289c625796c24aef501aa8", "size": 44411, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "10-gr-extended-rant-version.ipynb", "max_stars_repo_name": "jerzydziewierz/bayesian_reasoning_v0", "max_stars_repo_head_hexsha": "cdb76d4a7f2c406f32e1d5fad25b07abb8af772c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "10-gr-extended-rant-version.ipynb", "max_issues_repo_name": "jerzydziewierz/bayesian_reasoning_v0", "max_issues_repo_head_hexsha": "cdb76d4a7f2c406f32e1d5fad25b07abb8af772c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "10-gr-extended-rant-version.ipynb", "max_forks_repo_name": "jerzydziewierz/bayesian_reasoning_v0", "max_forks_repo_head_hexsha": "cdb76d4a7f2c406f32e1d5fad25b07abb8af772c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.940027894, "max_line_length": 684, "alphanum_fraction": 0.639323591, "converted": true, "num_tokens": 9432, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.6334102498375401, "lm_q2_score": 0.46879062662624377, "lm_q1q2_score": 0.2969367879328261}} {"text": "```python\nimport pandas as pd\nimport xarray as xr\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%load_ext autoreload\n%autoreload 2\nfrom ar6_ch6_rcmipfigs.constants import INPUT_DATA_DIR_BADC, BASE_DIR\n```\n\n The autoreload extension is already loaded. 
To reload it, use:\n %reload_ext autoreload\n\n\n\n```python\nfrom ar6_ch6_rcmipfigs.utils.badc_csv import write_badc_header\n```\n\n\n```python\nfrom ar6_ch6_rcmipfigs.utils.plot import get_cmap_dic\nfrom ar6_ch6_rcmipfigs.constants import OUTPUT_DATA_DIR, RESULTS_DIR\n```\n\n# Code + figures\n\n\n```python\noutput_name = 'fig_em_based_ERF_GSAT_period2010-2019_1850-1900'\n```\n\n### Path input data\n\n\n```python\n\n# PATH_DATASET = OUTPUT_DATA_DIR / 'ERF_data.nc'\nPATH_DATASET = OUTPUT_DATA_DIR / 'historic_delta_GSAT/dT_data_hist_recommendation.nc'\n\nfn_ERF_2019 = OUTPUT_DATA_DIR / 'historic_delta_GSAT/2019_ERF_est.csv'\n# fn_output_decomposition = OUTPUT_DATA_DIR / 'historic_delta_GSAT/hist_ERF_est_decomp.csv'\n\nfn_ERF_timeseries = OUTPUT_DATA_DIR / 'historic_delta_GSAT/hist_ERF_est.csv'\n\nfp_collins_sd = RESULTS_DIR / 'tables_historic_attribution/table_std_smb_orignames.csv'\n\nfn_TAB2_THORNHILL = INPUT_DATA_DIR_BADC / 'table2_thornhill2020.csv'\n```\n\n### Path output data\n\n\n```python\nPATH_DF_OUTPUT = OUTPUT_DATA_DIR / 'historic_delta_GSAT/dT_data_hist_recommendation.csv'\n\nprint(PATH_DF_OUTPUT)\n```\n\n /Users/sarablichner/science/PHD/IPCC/public/AR6_CH6_RCMIPFIGSv2/ar6_ch6_rcmipfigs/data_out/historic_delta_GSAT/dT_data_hist_recommendation.csv\n\n\n### various definitions\n\n**Set reference year for temperature change:**\n\n\n```python\n\nref_period = [1850, 1900]\npd_period = [2010, 2019]\n```\n\n\n```python\n# variables to plot:\nvariables_erf_comp = [\n 'CO2', 'N2O', 'CH4', 'HC', 'NOx', 'SO2', 'BC', 'OC', 'NH3', 'VOC'\n]\n# total ERFs for anthropogenic and total:\nvariables_erf_tot = []\nvariables_all = variables_erf_comp + variables_erf_tot\n# Scenarios to plot:\nscenarios_fl = []\n```\n\n\n```python\nvarn = ['co2', 'N2O', 'HC', 'HFCs', 'ch4', 'o3', 'H2O_strat', 'ari', 'aci']\nvar_dir = ['CO2', 'N2O', 'HC', 'HFCs', 'CH4_lifetime', 'O3', 'Strat_H2O', 'Aerosol', 'Cloud']\n```\n\nNames for labeling:\n\n\n```python\nrename_dic_cat = {\n 'CO2': 'Carbon dioxide (CO$_2$)',\n 'GHG': 'WMGHG',\n 'CH4_lifetime': 'Methane (CH$_4$)',\n 'O3': 'Ozone (O$_3$)',\n 'Strat_H2O': 'H$_2$O (strat)',\n 'Aerosol': 'Aerosol-radiation',\n 'Cloud': 'Aerosol-cloud',\n 'N2O': 'N$_2$O',\n 'HC': 'CFC + HCFC',\n 'HFCs': 'HFC'\n\n}\nrename_dic_cols = {\n 'co2': 'CO$_2$',\n 'CO2': 'CO$_2$',\n 'CH4': 'CH$_4$',\n 'ch4': 'CH$_4$',\n 'N2O': 'N$_2$O',\n 'n2o': 'N$_2$O',\n 'HC': 'CFC + HCFC + HFC',\n 'HFCs': 'HFC',\n 'NOx': 'NO$_x$',\n 'VOC': 'NMVOC + CO',\n 'SO2': 'SO$_2$',\n 'OC': 'Organic carbon',\n 'BC': 'Black carbon',\n 'NH3': 'Ammonia'\n}\n```\n\n\n```python\nrn_dic_cat_o = {}\nfor key in rename_dic_cat.keys():\n rn_dic_cat_o[rename_dic_cat[key]]=key\nrn_dic_cols_o = {}\nfor key in rename_dic_cols.keys():\n rn_dic_cols_o[rename_dic_cols[key]]=key\n```\n\n### Open ERF dataset:\n\n\n```python\nds = xr.open_dataset(PATH_DATASET)\nds # ['Delta T']\n```\n\n\n\n\n
<xarray.Dataset>\nDimensions:     (year: 270, variable: 10, percentile: 1)\nCoordinates:\n  * year        (year) int64 1750 1751 1752 1753 1754 ... 2016 2017 2018 2019\n  * variable    (variable) object 'CO2' 'N2O' 'CH4' 'NOx' ... 'NH3' 'VOC' 'HC'\n  * percentile  (percentile) object 'recommendation'\nData variables:\n    ERF         (variable, year) float64 0.0 0.001126 0.002252 ... 0.21 0.2114\n    time        (year) datetime64[ns] 1750-01-01 1751-01-01 ... 2019-01-01\n    delta_t     (year) float64 1.0 1.0 1.0 1.0 1.0 1.0 ... 1.0 1.0 1.0 1.0 1.0\n    Delta T     (percentile, variable, year) float64 0.0 0.0 ... 0.09901 0.09978
\n\n\n\n### Overview plots\n\n\n```python\ncols = get_cmap_dic(ds['variable'].values)\n```\n\n (0.9568627450980393, 0.796078431372549, 0.21176470588235294)\n (0.8274509803921568, 0.0, 0.1568627450980392)\n (1.0, 0.4196078431372549, 0.07450980392156863)\n (0.26666666666666666, 0.0, 0.5254901960784314)\n (0.3764705882352941, 0.5725490196078431, 0.796078431372549)\n (0.5411764705882353, 0.2235294117647059, 0.0)\n (0.4745098039215686, 0.792156862745098, 0.9333333333333333)\n (0.0, 0.6901960784313725, 0.6039215686274509)\n (0.0, 0.5019607843137255, 0.23137254901960785)\n (0.47843137254901963, 0.5058823529411764, 0.5058823529411764)\n\n\n\n```python\nfig, axs = plt.subplots(2, sharex=True, figsize=[6, 6])\n\nax_erf = axs[0]\nax_dT = axs[1]\nfor v in ds['variable'].values:\n ds.sel(variable=v)['Delta T'].plot(ax=ax_dT, label=v, c=cols[v])\n ds.sel(variable=v)['ERF'].plot(ax=ax_erf, c=cols[v])\nds.sum('variable')['Delta T'].plot(ax=ax_dT, label='Sum', c='k', linewidth=2)\nds.sum('variable')['ERF'].plot(ax=ax_erf, c='k', linewidth=2)\n\nax_dT.set_title('Temperature change')\nax_erf.set_title('ERF')\nax_erf.set_ylabel('ERF [W m$^{-2}$]')\nax_dT.set_ylabel('$\\Delta$ GSAT [$^{\\circ}$C]')\nax_erf.set_xlabel('')\nax_dT.legend(ncol=4, loc='upper left', frameon=False)\nplt.tight_layout()\nfig.savefig('hist_timeseries_ERF_dT.png', dpi=300)\n```\n\n\n```python\ndf_deltaT = ds['Delta T'].squeeze().drop('percentile').to_dataframe().unstack('variable')['Delta T']\n\ncol_list = [cols[c] for c in df_deltaT.columns]\n\ndf_deltaT = ds['Delta T'].squeeze().drop('percentile').to_dataframe().unstack('variable')['Delta T']\n\nfig, ax = plt.subplots(figsize=[10, 5])\nax.hlines(0, 1740, 2028, linestyle='solid', alpha=0.9, color='k',\n linewidth=0.5) # .sum(axis=1).plot(linestyle='dashed', color='k', linewidth=3)\n\ndf_deltaT.plot.area(color=col_list, ax=ax)\ndf_deltaT.sum(axis=1).plot(linestyle='dashed', color='k', linewidth=3, label='Sum')\nplt.legend(loc='upper left', ncol=3, frameon=False)\nplt.ylabel('$\\Delta$ GSAT ($^\\circ$ C)')\nax.set_xlim([1740, 2028])\nsns.despine()\n```\n\n\n```python\n\n```\n\n# Split up ERF/warming into sources by using data from Thornhill\n\nWe use the original split up in ERF from Thornhill/Bill Collin's plot \n\nOpen dataset from Bill Collin's script:\n\n\n```python\n\n```\n\n\n```python\ndf_collins = pd.read_csv(fn_ERF_2019, index_col=0)\ndf_collins.index = df_collins.index.rename('emission_experiment')\ndf_collins_sd = pd.read_csv(fp_collins_sd, index_col=0)\ndf_collins\n```\n\n\n\n\n
| emission_experiment | CO2 | CH4_lifetime | Strat_H2O | Aerosol | Cloud | O3 | HC | N2O | HFCs |
|---|---|---|---|---|---|---|---|---|---|
| CO2 | 2.057554 | 0.000000 | 0.00 | 0.000000 | 0.000000 | 0.000000 | 0.00 | 0.00 | 0.000000 |
| CH4 | 0.017549 | 0.844457 | 0.05 | -0.002653 | 0.018421 | 0.266736 | 0.00 | 0.00 | 0.000000 |
| N2O | 0.000000 | -0.035967 | 0.00 | -0.002090 | 0.042503 | 0.026124 | 0.00 | 0.21 | 0.000000 |
| HC | 0.000053 | -0.050927 | 0.00 | -0.008080 | -0.017419 | -0.162033 | 0.41 | 0.00 | 0.039772 |
| NOx | 0.000000 | -0.380025 | 0.00 | -0.009166 | -0.014458 | 0.137102 | 0.00 | 0.00 | 0.000000 |
| VOC | 0.069491 | 0.162462 | 0.00 | -0.002573 | 0.008884 | 0.202071 | 0.00 | 0.00 | 0.000000 |
| SO2 | 0.000000 | 0.000000 | 0.00 | -0.234228 | -0.703784 | 0.000000 | 0.00 | 0.00 | 0.000000 |
| OC | 0.000000 | 0.000000 | 0.00 | -0.072143 | -0.136919 | 0.000000 | 0.00 | 0.00 | 0.000000 |
| BC | 0.000000 | 0.000000 | 0.00 | 0.144702 | -0.037227 | 0.000000 | 0.00 | 0.00 | 0.000000 |
| NH3 | 0.000000 | 0.000000 | 0.00 | -0.033769 | 0.000000 | 0.000000 | 0.00 | 0.00 | 0.000000 |
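Each row of `df_collins` is an emission experiment and each column a forcing agent, so the row sums give the total 2019 ERF attributed to each emitted species (the quantity used later as `ERF_2019_tot`). A minimal check, added here and not part of the original notebook:

```python
# Total 2019 ERF per emission experiment: sum across the forcing-agent columns.
df_collins.sum(axis=1)
```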
\n\n\n\n\n```python\nwidth = 0.7\nkwargs = {'linewidth': .1, 'edgecolor': 'k'}\n```\n\n## Decompose GSAT signal as the ERF signal\n\n### GSAT\n\nGet period mean difference for GSAT:\n\n\n```python\ndf_deltaT = ds['Delta T'].squeeze().drop('percentile').to_dataframe().unstack('variable')['Delta T']\nmean_PD = df_deltaT.loc[pd_period[0]:pd_period[1]].mean()\nmean_PD\n\nmean_PI = df_deltaT.loc[ref_period[0]:ref_period[1]].mean()\n\ndT_period_diff = pd.DataFrame(mean_PD - mean_PI, columns=['diff']) # df_deltaT.loc[2019])\ndT_period_diff.index = dT_period_diff.index.rename('emission_experiment')\n```\n\nMake normalized decomposition of different components from emission based ERF. \n\n\n```python\ndf_coll_t = df_collins.transpose()\nif 'Total' in df_coll_t.index:\n df_coll_t = df_coll_t.drop('Total')\n# scale by total:\nscale = df_coll_t.sum()\n# normalized ERF: \ndf_col_normalized = df_coll_t / scale\n#\ndf_col_normalized.transpose().plot.barh(stacked=True)\n```\n\nWe multiply the change in GSAT in 2010-2019 vs 1850-1900 by the matrix describing the source distribution from the ERF:\n\n\n```python\ndT_period_diff['diff']\n```\n\n\n\n\n emission_experiment\n CO2 0.788254\n N2O 0.095969\n CH4 0.515223\n NOx -0.138916\n SO2 -0.597658\n BC 0.051592\n OC -0.083104\n NH3 -0.014523\n VOC 0.224660\n HC 0.096768\n Name: diff, dtype: float64\n\n\n\n\n```python\ndf_dt_sep = dT_period_diff['diff'] * df_col_normalized\n\ndf_dt_sep = df_dt_sep.transpose()\ndf_dt_sep\n```\n\n\n\n\n
| emission_experiment | CO2 | CH4_lifetime | Strat_H2O | Aerosol | Cloud | O3 | HC | N2O | HFCs |
|---|---|---|---|---|---|---|---|---|---|
| BC | 0.000000 | 0.000000 | 0.000000 | 0.069463 | -0.017871 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| CH4 | 0.007569 | 0.364236 | 0.021566 | -0.001144 | 0.007945 | 0.115050 | 0.000000 | 0.000000 | 0.000000 |
| CO2 | 0.788254 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| HC | 0.000024 | -0.023315 | 0.000000 | -0.003699 | -0.007975 | -0.074182 | 0.187706 | 0.000000 | 0.018209 |
| N2O | 0.000000 | -0.014348 | 0.000000 | -0.000834 | 0.016956 | 0.010421 | 0.000000 | 0.083775 | 0.000000 |
| NH3 | 0.000000 | 0.000000 | 0.000000 | -0.014523 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| NOx | 0.000000 | -0.198057 | 0.000000 | -0.004777 | -0.007535 | 0.071454 | 0.000000 | 0.000000 | 0.000000 |
| OC | 0.000000 | 0.000000 | 0.000000 | -0.028678 | -0.054427 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| SO2 | 0.000000 | 0.000000 | 0.000000 | -0.149239 | -0.448419 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| VOC | 0.035454 | 0.082889 | 0.000000 | -0.001313 | 0.004533 | 0.103097 | 0.000000 | 0.000000 | 0.000000 |
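Because each emission experiment's normalized shares sum to one, the rows of `df_dt_sep` should add back up to the per-species GSAT change (e.g. the CO2 row sums to 0.788254). A small sanity check, added here and not in the original notebook:

```python
# Row sums of the decomposed GSAT change should reproduce dT_period_diff['diff'].
(df_dt_sep.sum(axis=1) - dT_period_diff['diff']).abs().max()
```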
\n\n\n\n\n```python\ndf_dt_sep.plot.bar(stacked=True)\ndT_period_diff['diff'].reindex(df_dt_sep.index).plot()\n```\n\n### ERF\n\nGet period mean difference for ERF:\n\n\n```python\ndf_ERF = ds['ERF'].squeeze().to_dataframe().unstack('variable')['ERF']\nmean_ERF_PD = df_ERF.loc[pd_period[0]:pd_period[1]].mean()\n\nmean_ERF_PI = df_ERF.loc[ref_period[0]:ref_period[1]].mean()\n```\n\n\n```python\nERF_period_diff = pd.DataFrame(mean_ERF_PD - mean_ERF_PI, columns=['diff']) # df_deltaT.loc[2019])\nERF_period_diff.index = ERF_period_diff.index.rename('emission_experiment')\n```\n\n\nWe multiply the change in ERF in 2010-2019 vs 1850-1900 by the matrix describing the source distribution from the ERF:\n\n\n```python\ndf_erf_sep = ERF_period_diff['diff'] * df_col_normalized\ndf_erf_sep = df_erf_sep.transpose()\n```\n\n\n```python\nERF_period_diff\n```\n\n\n\n\n
| emission_experiment | diff |
|---|---|
| CO2 | 1.703144 |
| N2O | 0.202558 |
| CH4 | 1.018111 |
| NOx | -0.274165 |
| SO2 | -0.979315 |
| BC | 0.097273 |
| OC | -0.157024 |
| NH3 | -0.030328 |
| VOC | 0.402956 |
| HC | 0.205605 |
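To make the decomposition explicit with numbers already shown above (a hand check of how `df_erf_sep` is built, using illustrative notation): the CH4 emission experiment changed the ERF by 1.018111 W m$^{-2}$ over the period, and the CH4_lifetime agent carries 0.844457 of the 1.194509 W m$^{-2}$ total 2019 CH4 ERF, so

\begin{equation}
\Delta ERF_{CH4 \rightarrow CH4\_lifetime} = 1.018111 \times \frac{0.844457}{1.194509} \approx 0.72 \; \mathrm{W\,m^{-2}}
\end{equation}

is assigned to the methane agent itself.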
\n\n\n\n\n```python\ndf_erf_sep.plot.bar(stacked=True)\nERF_period_diff['diff'].reindex(df_erf_sep.index).plot.line()\nplt.show()\n```\n\n# Accounting for non-linearities in ERFaci, we scale down the GSAT change from aci contribution to fit with chapter 7 \n\nThe GSAT change from aerosol cloud interactions in 2019 vs 1750 is estimated to -0.38 degrees by chapter 7, which accounts for non-linearities in ERFaci. When considering the 1750-2019 change in GSAT, we therefore scaled the GSAT change by aerosol cloud interactions to fit this total. This constituted a 25% reduction. \nFor the GSAT averaged over the period 2010-2019 vs 1850-1900 we thus reduce by 25%. \n\nFurthermore, ERFaci over the same period (2010-2019 vs 1850-1900) is also sligtly overestimated due to higher emissions in the period versus 2019. To scale this, we use the ratio between ERFaci and ERFari as estimated by CHRIS [INSERT PROPER REFS] in the two periods respectively. The logic is that these both originate from the same emissions, so their ratio reflects the dampening of ERFaci with increased emissions. \n\nLet $\\alpha$ be the ratio \n\n\\begin{equation}\n\\frac{ERF_{aci}}{ERF_{ari}} = \\alpha \n\\end{equation}\n\nFrom the data (from FaIR, Chris) $\\alpha_{period}=3.42$ for 1850-1900 vs 2010-2019, while for the standard period from 1750 to 2019, it is $\\alpha_{standard} = 3.91$. \n\nThus, the ratio is \n\\begin{equation}\n\\frac{\\alpha_{period}}{\\alpha_{stanard}} = 0.874\n\\end{equation}\n\nThis results in a scaling down of approximately 12.5% of ERFaci. \n\n\n\n```python\nscale_down_by = 0.25\naci_tot = df_dt_sep.sum()['Cloud']\naci_tot\ndf_dt_sep['Cloud'] = df_dt_sep['Cloud'] * (1 - scale_down_by) # scale_by\ndf_dt_sep.sum()\n```\n\n\n\n\n CO2 0.831302\n CH4_lifetime 0.211404\n Strat_H2O 0.021566\n Aerosol -0.134744\n Cloud -0.380095\n O3 0.225841\n HC 0.187706\n N2O 0.083775\n HFCs 0.018209\n dtype: float64\n\n\n\n\n```python\ndf_erf_sep.sum()\n```\n\n\n\n\n CO2 1.781745\n CH4_lifetime 0.397714\n Strat_H2O 0.042616\n Aerosol -0.221752\n Cloud -0.843504\n O3 0.417665\n HC 0.398825\n N2O 0.176819\n HFCs 0.038688\n dtype: float64\n\n\n\n\n```python\nscale_down_by = 0.125\naci_tot = df_erf_sep.sum()['Cloud']\naci_tot\ndf_erf_sep['Cloud'] = df_erf_sep['Cloud'] * (1 - scale_down_by) # scale_by\ndf_erf_sep.sum()\n```\n\n\n\n\n CO2 1.781745\n CH4_lifetime 0.397714\n Strat_H2O 0.042616\n Aerosol -0.221752\n Cloud -0.738066\n O3 0.417665\n HC 0.398825\n N2O 0.176819\n HFCs 0.038688\n dtype: float64\n\n\n\n# Uncertainties\n\n\n```python\nfrom ar6_ch6_rcmipfigs.utils.badc_csv import read_csv_badc\n\nnum_mod_lab = 'Number of models (Thornhill 2020)'\nthornhill = read_csv_badc(fn_TAB2_THORNHILL, index_col=0)\nthornhill.index = thornhill.index.rename('Species')\nthornhill\n\n# ratio between standard deviation and 5-95th percentile.\nstd_2_95th = 1.645\n\nsd_tot = df_collins_sd['Total_sd']\ndf_err = pd.DataFrame(sd_tot.rename('std'))\ndf_err['SE'] = df_err\n\ndf_err['SE'] = df_err['std'] / np.sqrt(thornhill[num_mod_lab])\ndf_err['95-50_SE'] = df_err['SE'] * std_2_95th\ndf_err.loc['CO2', '95-50_SE'] = df_err.loc['CO2', 'std']\ndf_err\n\ndf_err['95-50'] = df_err['std'] * std_2_95th\n# CO2 is already 95-50 percentile: \ndf_err.loc['CO2', '95-50'] = df_err.loc['CO2', 'std']\ndf_err\n```\n\n\n\n\n
| Species | std | SE | 95-50_SE | 95-50 |
|---|---|---|---|---|
| CO2 | 0.246907 | NaN | 0.246907 | 0.246907 |
| CH4 | 0.236538 | 0.083629 | 0.137569 | 0.389105 |
| N2O | 0.061736 | 0.027609 | 0.045417 | 0.101555 |
| HC | 0.116583 | 0.047595 | 0.078293 | 0.191779 |
| NOx | 0.170036 | 0.076043 | 0.125090 | 0.279710 |
| VOC | 0.136683 | 0.061127 | 0.100553 | 0.224844 |
| SO2 | 0.419710 | 0.171346 | 0.281864 | 0.690423 |
| OC | 0.139932 | 0.057127 | 0.093974 | 0.230188 |
| BC | 0.187990 | 0.071053 | 0.116883 | 0.309243 |
| NH3 | 0.004824 | 0.003411 | 0.005611 | 0.007936 |
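Restating what the code above does: the per-species standard deviation from Thornhill et al. (2020) is converted to a standard error using the number of contributing models, and 5-95 % half-ranges use the 1.645 factor (CO2 is special-cased because its spread is already given as a 95-50 range):

\begin{align*}
SE = \frac{\sigma}{\sqrt{N_{models}}}, \qquad (95\text{-}50) = 1.645\,\sigma, \qquad (95\text{-}50)_{SE} = 1.645\,SE
\end{align*}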
\n\n\n\n### Uncertainty on period mean ERF is scaled from uncertainty in 2019: \n\n\n\n```python\nERF_2019_tot = df_collins.sum(axis=1).reindex(df_err.index)\nERF_period_diff_tot = df_erf_sep.sum(axis=1).reindex(df_err.index)\n```\n\nScale by the period mean to the original 1750-2019 difference. \n\n\n```python\ndf_err['95-50_period'] = df_err['95-50'] * np.abs(ERF_period_diff_tot / ERF_2019_tot)\n```\n\n\n```python\ndf_err\n```\n\n\n\n\n
| Species | std | SE | 95-50_SE | 95-50 | 95-50_period |
|---|---|---|---|---|---|
| CO2 | 0.246907 | NaN | 0.246907 | 0.246907 | 0.204377 |
| CH4 | 0.236538 | 0.083629 | 0.137569 | 0.389105 | 0.331005 |
| N2O | 0.061736 | 0.027609 | 0.045417 | 0.101555 | 0.083621 |
| HC | 0.116583 | 0.047595 | 0.078293 | 0.191779 | 0.188474 |
| NOx | 0.170036 | 0.076043 | 0.125090 | 0.279710 | 0.285754 |
| VOC | 0.136683 | 0.061127 | 0.100553 | 0.224844 | 0.205239 |
| SO2 | 0.419710 | 0.171346 | 0.281864 | 0.690423 | 0.653220 |
| OC | 0.139932 | 0.057127 | 0.093974 | 0.230188 | 0.158738 |
| BC | 0.187990 | 0.071053 | 0.116883 | 0.309243 | 0.292008 |
| NH3 | 0.004824 | 0.003411 | 0.005611 | 0.007936 | 0.007127 |
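A quick arithmetic check of the period scaling, using the CO2 row and values printed elsewhere in this notebook (period ERF change 1.703144, 2019 ERF 2.057554):

\begin{equation}
0.246907 \times \left|\frac{1.703144}{2.057554}\right| \approx 0.204,
\end{equation}

which matches the 95-50_period entry in the table above.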
\n\n\n\n### Uncertainties $\\Delta$ GSAT\n\n\n\\begin{align*} \n\\Delta T (t) &= \\int_0^t ERF(t') IRF(t-t') dt' \\\\\n\\end{align*}\n\nmost of the uncertainty in the IRF derives from the uncertainty in the climate sensitivity which is said 3 (2.5-4), i.e. relative std 0.5/3 for the lower and 1/3 for the higher. If we treat this as two independent normally distributed variables multiplied together, $X$ and $Y$ and $X \\cdot Y$, we may propagate the uncertainty: \n\n\\begin{align*} \n\\frac{\\sigma_{XY}^2}{(XY)^2} = \\Big[(\\frac{\\sigma_X}{X})^2 + (\\frac{\\sigma_Y}{Y})^2 \\Big]\n\\end{align*}\n\n\n```python\nERF_2019_tot\n```\n\n\n\n\n Species\n CO2 2.057554\n CH4 1.194509\n N2O 0.240569\n HC 0.211366\n NOx -0.266546\n VOC 0.440334\n SO2 -0.938012\n OC -0.209062\n BC 0.107475\n NH3 -0.033769\n dtype: float64\n\n\n\n\n```python\nstd_ERF = df_err['std']\nstd_ECS_lw_rl = 0.5 / 3\nstd_ECS_hg_rl = 1 / 3\n\ntot_ERF = ERF_2019_tot # df_collins.loc[::-1,var_dir].reindex(std_ERF.index).sum(axis=1)#tab_plt_ERF.sum(axis=1)\nstd_erf_rl = np.abs(std_ERF / tot_ERF)\nstd_erf_rl # .rename(rename_dic_cols)\n```\n\n\n\n\n Species\n CO2 0.120000\n CH4 0.198021\n N2O 0.256624\n HC 0.551568\n NOx 0.637925\n VOC 0.310408\n SO2 0.447446\n OC 0.669331\n BC 1.749148\n NH3 0.142857\n dtype: float64\n\n\n\n\n```python\n\n```\n\n\n```python\ndef rel_sigma_prod(rel_sigmaX, rel_sigmaY):\n var_prod_rel = (rel_sigmaX ** 2 + rel_sigmaY ** 2)\n rel_sigma_product = np.sqrt(var_prod_rel)\n return rel_sigma_product\n\n\nrel_sig_lw = rel_sigma_prod(std_erf_rl, std_ECS_lw_rl)\nrel_sig_hg = rel_sigma_prod(std_erf_rl, std_ECS_hg_rl)\n```\n\n\n```python\ntot_dT = df_dt_sep.sum(axis=1).reindex(std_ERF.index)\n\nneg_v = (tot_dT < 0) # .squeeze()\n```\n\n\n```python\nstd_2_95th\n```\n\n\n\n\n 1.645\n\n\n\n\n```python\nrel_sig_hg\n```\n\n\n\n\n Species\n CO2 0.354275\n CH4 0.387716\n N2O 0.420674\n HC 0.644468\n NOx 0.719763\n VOC 0.455482\n SO2 0.557960\n OC 0.747740\n BC 1.780626\n NH3 0.362656\n dtype: float64\n\n\n\n\n```python\nerr_dT = pd.DataFrame(index=tot_dT.index)\nerr_dT['min 1 sigma'] = np.abs(tot_dT * rel_sig_lw) # *tot_dT\nerr_dT['plus 1 sigma'] = np.abs(tot_dT * rel_sig_hg)\nerr_dT['plus 1 sigma'][neg_v] = np.abs(tot_dT * rel_sig_lw)[neg_v] # .iloc[neg_v].iloc[neg_v].iloc[neg_v]\nerr_dT['min 1 sigma'][neg_v] = np.abs(tot_dT * rel_sig_hg)[neg_v] # .iloc[neg_v].iloc[neg_v].iloc[neg_v]\n# err_dT['min 1 sigma'].iloc[neg_v] =np.abs(tot_dT*rel_sig_hg).iloc[neg_v]\n# err_dT['plus 1 sigma'][neg_v] = np.abs(tot_dT*rel_sig_lw)[neg_v]\n# err_dT['min 1 sigma'][neg_v] = np.abs(tot_dT*rel_sig_hg)[neg_v]\n# [::-1]\nerr_dT['p50-05'] = err_dT['min 1 sigma'] * std_2_95th\nerr_dT['p95-50'] = err_dT['plus 1 sigma'] * std_2_95th\nerr_dT\nerr_dT = err_dT.rename(rename_dic_cat, axis=1).rename(rename_dic_cols, axis=0)\n# var_nn_dir = [rename_dic_cols[v] for v in varn]\n```\n\n\n```python\ndf_err = df_err.rename(rename_dic_cols, axis=0)\n```\n\n# Reorder and rename\n\n\n```python\nexps_ls = ['CO2', 'CH4', 'N2O', 'HC', 'NOx', 'VOC', 'SO2', 'OC', 'BC', 'NH3']\n```\n\n\n```python\ntab_plt_dT = df_dt_sep.loc[::-1, var_dir] # .rename(rename_dic_cat, axis=1).rename(rename_dic_cols, axis=0)\ntab_plt_dT = tab_plt_dT.loc[exps_ls]\ntab_plt_dT = tab_plt_dT.rename(rename_dic_cat, axis=1).rename(rename_dic_cols, axis=0)\n```\n\n\n```python\ntab_plt_erf = df_erf_sep.loc[::-1, var_dir] # .rename(rename_dic_cat, axis=1).rename(rename_dic_cols, axis=0)\ntab_plt_erf = tab_plt_erf.loc[exps_ls]\ntab_plt_erf = tab_plt_erf.rename(rename_dic_cat, 
axis=1).rename(rename_dic_cols, axis=0)\ntab_plt_erf = tab_plt_erf # .T\n```\n\n\n```python\ncmap = get_cmap_dic(var_dir)\ncol_ls = [cmap[c] for c in cmap.keys()]\n```\n\n (0.9568627450980393, 0.796078431372549, 0.21176470588235294)\n (0.8274509803921568, 0.0, 0.1568627450980392)\n (0.47843137254901963, 0.5058823529411764, 0.5058823529411764)\n (0.21568627450980393, 0.49411764705882355, 0.7215686274509804)\n (1.0, 0.4196078431372549, 0.07450980392156863)\n (0.5254901960784314, 0.7803921568627451, 0.29411764705882354)\n (0.47843137254901963, 0.5058823529411764, 0.5058823529411764)\n (0.792156862745098, 0.6980392156862745, 0.8392156862745098)\n (0.5607843137254902, 0.0, 0.6470588235294118)\n\n\n\n```python\n\n```\n\n\n```python\nybar = np.arange(len(tab_plt_erf.T) + 1) # , -1)\n```\n\n\n```python\nindex_order = tab_plt_dT[::-1].index\nindex_order\n```\n\n\n\n\n Index(['Ammonia', 'Black carbon', 'Organic carbon', 'SO$_2$', 'NMVOC + CO',\n 'NO$_x$', 'CFC + HCFC + HFC', 'N$_2$O', 'CH$_4$', 'CO$_2$'],\n dtype='object', name='emission_experiment')\n\n\n\n# Plot\n\n\n```python\nsns.set_style()\nfig, axs = plt.subplots(1, 2, dpi=300, figsize=[10, 4]) # , dpi=150)\nwidth = .8\nkws = {\n 'width': .8,\n 'linewidth': .1,\n 'edgecolor': 'k',\n\n}\n\nax = axs[0]\nax.axvline(x=0., color='k', linewidth=0.25)\n\ntab_plt_erf.reindex(index_order).plot.barh(stacked=True, color=col_ls, ax=ax, **kws)\n# tot = table['Total'][::-1]\ntot = tab_plt_erf.reindex(index_order).sum(axis=1) # tab_plt\nxerr = df_err['95-50_period'].reindex(index_order)\ny = np.arange(len(tot))\nax.errorbar(tot, y, xerr=xerr, marker='d', linestyle='None', color='k', label='Sum', )\n# ax.legend(frameon=False)\nax.set_ylabel('')\n\nfor lab, y in zip(index_order, ybar):\n # plt.text(-1.55, ybar[i], species[i], ha='left')#, va='left')\n ax.text(-1.9, y - 0.1, lab, ha='left') # , va='left')\nax.set_title('Effective radiative forcing, 1850-1900 to 2010-2019')\nax.set_xlabel(r'(W m$^{-2}$)')\n# ax.set_xlim(-1.5, 2.6)\n# plt.xlim(-1.6, 2.0)\n# sns.despine(fig, left=True, trim=True)\nax.legend(loc='lower right', frameon=False)\nax.set_yticks([])\n\nax.get_legend().remove()\n\nax.set_xticks(np.arange(-1.5, 2.1, .5))\nax.set_xticks(np.arange(-1.5, 2, .1), minor=True)\n\nax = axs[1]\nax.axvline(x=0., color='k', linewidth=0.25)\n\ntab_plt_dT.reindex(index_order).plot.barh(stacked=True, color=col_ls, ax=ax, **kws)\ntot = tab_plt_dT.reindex(index_order).sum(axis=1)\n# xerr =0# df_err['95-50'][::-1]\ny = np.arange(len(tot))\nxerr_dT = err_dT[['p50-05', 'p95-50']].reindex(index_order).transpose().values\nax.errorbar(tot, y,\n xerr=xerr_dT,\n # xerr=err_dT[['min 1 sigma','plus 1 sigma']].loc[tot.index].transpose().values,\n marker='d', linestyle='None', color='k', label='Sum', )\n# ax.legend(frameon=False)\nax.set_ylabel('')\n\nax.set_title('Change in GSAT, 1850-1900 to 2010-2019')\nax.set_xlabel(r'($^{\\circ}$C)')\nax.set_xlim(-1.3, 1.8)\n\nsns.despine(fig, left=True, trim=True)\nax.spines['bottom'].set_bounds(-1., 1.5)\nax.legend(loc='lower right', frameon=False)\n\nax.set_xticks(np.arange(-1, 2.1, .5))\n# ax.xaxis.set_major_locator(MultipleLocator(.5))\n\nax.set_xticks(np.arange(-1, 1.6, .5))\nax.set_xticks(np.arange(-1, 1.5, .1), minor=True)\n\nfn = output_name + '.png'\nfp = RESULTS_DIR / 'figures_historic_attribution_DT' / fn\nfp.parent.mkdir(parents=True, exist_ok=True)\nax.set_yticks([])\nfig.tight_layout()\nplt.savefig(fp, dpi=300, bbox_inches='tight')\nplt.savefig(fp.with_suffix('.pdf'), dpi=300, 
bbox_inches='tight')\nplt.savefig(fp.with_suffix('.png'), dpi=300, bbox_inches='tight')\nplt.show()\n```\n\n\n```python\n\n```\n\n\n```python\ntab_plt_erf.T.sum(axis=0)\n```\n\n\n\n\n emission_experiment\n CO$_2$ 1.703144\n CH$_4$ 1.016149\n N$_2$O 0.198085\n CFC + HCFC + HFC 0.207723\n NO$_x$ -0.272306\n NMVOC + CO 0.401940\n SO$_2$ -0.887468\n Organic carbon -0.144169\n Black carbon 0.101485\n Ammonia -0.030328\n dtype: float64\n\n\n\n\n```python\ntab_plt_dT.sum(axis=1)\n```\n\n\n\n\n emission_experiment\n CO$_2$ 0.788254\n CH$_4$ 0.513237\n N$_2$O 0.091731\n CFC + HCFC + HFC 0.098762\n NO$_x$ -0.137032\n NMVOC + CO 0.223527\n SO$_2$ -0.485553\n Organic carbon -0.069498\n Black carbon 0.056060\n Ammonia -0.014523\n dtype: float64\n\n\n\n\n```python\ntab_plt_dT.sum()\n```\n\n\n\n\n Carbon dioxide (CO$_2$) 0.831302\n N$_2$O 0.083775\n CFC + HCFC 0.187706\n HFC 0.018209\n Methane (CH$_4$) 0.211404\n Ozone (O$_3$) 0.225841\n H$_2$O (strat) 0.021566\n Aerosol-radiation -0.134744\n Aerosol-cloud -0.380095\n dtype: float64\n\n\n\n# Write vales to csv\n\nfn = output_name + '_values_ERF.csv'\nfp = RESULTS_DIR / 'figures_historic_attribution_DT' / fn\ntab_plt_erf.to_csv(fp)\n\nfn = output_name + '_values_ERF_uncertainty.csv'\nfp = RESULTS_DIR / 'figures_historic_attribution_DT' / fn\ndf_err.to_csv(fp)\n\nfn = output_name + '_values_dT.csv'\nfp = RESULTS_DIR / 'figures_historic_attribution_DT' / fn\ntab_plt_dT.to_csv(fp)\n\nfn = output_name + '_values_dT_uncertainty.csv'\nfp = RESULTS_DIR / 'figures_historic_attribution_DT' / fn\nerr_dT.to_csv(fp)\n\nfrom ar6_ch6_rcmipfigs.utils.badc_csv import write_badc_header\n\n### Write plotted data to file\n\n\n```python\ndic_head = dict(\n title='Data for Figure 6.12, emission based ERF and warming for the historical period',\n last_revised_date='2021-06-29',\n location='global',\n reference='https://github.com/sarambl/AR6_CH6_RCMIPFIGS/',\n source='IPCC AR6 output',\n creator='Sara Marie Blichner (s.m.blichner@geo.uio.no)',\n\n)\nadd_global_comments = [\n ['comments', 'G', 'This data is based on various input datasets,'],\n ['comments', 'G', 'please see https://github.com/sarambl/AR6_CH6_RCMIPFIGS for methods'],\n]\n\n\ndef get_add_global_from_dic(_dic_head):\n add_global = [[key, 'G', _dic_head[key]] for key in _dic_head.keys()]\n add_global = add_global + add_global_comments\n return add_global\n\n\npath_header_def = BASE_DIR / 'misc/header_empty.csv'\npath_header_def.exists()\n\n\ndef to_csv_w_header(df, var_name, perc, _ref_year, end_year, fn, \n unit):\n fn_out = RESULTS_DIR / fn\n df_out = df.rename(rn_dic_cat_o, axis=1)\n df_out = df_out.rename(rn_dic_cols_o)\n df_out.to_csv(fn_out)\n\n dic_head['title'] = get_title(perc, var_name)\n\n add_global = get_add_global_from_dic(dic_head)\n\n write_badc_header(fn_out, fn_out, add_global, default_unit=unit,\n fp_global_default=path_header_def,\n fp_var_default=path_header_def)\n\ndef get_title(perc,var):\n if perc == 'mean':\n txt = f'Data for Figure 6.12, emission based {var} for the historical period'\n else:\n txt = f'Data for Figure 6.12, uncertainty in emission based {var} and warming for the historical period'\n \n return txt\n\n\n```\n\n\n```python\nfn = output_name + '_values_ERF.csv'\nfp = RESULTS_DIR / 'figures_historic_attribution_DT' / fn\n\nto_csv_w_header(tab_plt_erf, 'ERF', 'mean', '', '', fp, \n 'W/m2')\n\nfn = output_name + '_values_ERF_uncertainty.csv'\nfp = RESULTS_DIR / 'figures_historic_attribution_DT' / fn\ndf_err.to_csv(fp)\nto_csv_w_header(df_err, 'ERF', 'uncertainty', '', '', fp, \n 
'W/m2')\n\nfn = output_name + '_values_dT.csv'\nfp = RESULTS_DIR / 'figures_historic_attribution_DT' / fn\nto_csv_w_header(tab_plt_dT, 'warming', 'mean', '', '', fp, \n 'degrees C')\n\nfn = output_name + '_values_dT_uncertainty.csv'\nfp = RESULTS_DIR / 'figures_historic_attribution_DT' / fn\nto_csv_w_header(err_dT, 'warming', 'uncertainty', '', '', fp, \n 'degrees C')\n```\n", "meta": {"hexsha": "0afb133dddc1e38726dd8d9feed5f9f142028fd8", "size": 474016, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ar6_ch6_rcmipfigs/notebooks/GSAT_change_hist_attribution/04_01_plot-period.ipynb", "max_stars_repo_name": "sarambl/AR6_CH6_RCMIPFIGS", "max_stars_repo_head_hexsha": "3033c595c4a9c05ad3ea0fe01886d10fdb117141", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2020-05-27T09:23:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T14:45:56.000Z", "max_issues_repo_path": "ar6_ch6_rcmipfigs/notebooks/GSAT_change_hist_attribution/04_01_plot-period.ipynb", "max_issues_repo_name": "sarambl/AR6_CH6_RCMIPFIGS", "max_issues_repo_head_hexsha": "3033c595c4a9c05ad3ea0fe01886d10fdb117141", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-02-10T09:50:00.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-06T16:24:53.000Z", "max_forks_repo_path": "ar6_ch6_rcmipfigs/notebooks/GSAT_change_hist_attribution/04_01_plot-period.ipynb", "max_forks_repo_name": "sarambl/AR6_CH6_RCMIPFIGS", "max_forks_repo_head_hexsha": "3033c595c4a9c05ad3ea0fe01886d10fdb117141", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-02-09T08:04:55.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-09T08:04:55.000Z", "avg_line_length": 172.1816200509, "max_line_length": 217244, "alphanum_fraction": 0.8711013974, "converted": true, "num_tokens": 19402, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5506073655352404, "lm_q2_score": 0.5389832206876841, "lm_q1q2_score": 0.2967681312105448}} {"text": "# \u935b\u934a\u4f60\u7684\u8cc7\u6599\u5206\u6790\u529b | Python \u8cc7\u6599\u5206\u6790\n\n> \u8cc7\u6599\u5206\u6790\u5957\u4ef6\u5feb\u901f\u5165\u9580\n\n\u90ed\u8000\u4ec1 from [DATAINPOINT](https://www.datainpoint.com/)\n\n## \u5927\u7db1\n\n- \u8cc7\u6599\u5206\u6790\u8207\u7ba1\u7dda\u6d41\n- NumPy \u5feb\u901f\u5165\u9580\n- Pandas \u5feb\u901f\u5165\u9580\n- Matplotlib \u5feb\u901f\u5165\u9580\n\n## \u8cc7\u6599\u5206\u6790\u8207\u7ba1\u7dda\u6d41\n\n## \u8cc7\u6599\u5206\u6790\u7684\u4e00\u5207\u90fd\u8207\u7ba1\u7dda\u6d41\u606f\u606f\u76f8\u95dc\n\n\n\n\u5716\u7247\u4f86\u6e90: \n\n## \u6bcf\u500b\u74b0\u7bc0\u7684\u610f\u6db5\n\n- Import\uff1a\u5f9e\u5e38\u898b\u4f86\u6e90\u5c07\u8cc7\u6599\u8f09\u5165\u5206\u6790\u74b0\u5883\n- Tidy/Transform\uff1a\u4ee5\u9069\u7576\u7684\u8cc7\u6599\u7d50\u69cb\u6574\u4f75\u5167\u5bb9\u8207\u8f49\u63db\u6a23\u5f0f\n- Visualize\uff1a\u63a2\u7d22\u5206\u6790\u8cc7\u6599\u7684\u5f62\u72c0\u3001\u76f8\u95dc\u3001\u7d44\u6210\u8207\u8da8\u52e2\n- Model\uff1a\u9810\u6e2c\u6216\u6316\u6398\u8cc7\u6599\u7684\u96b1\u542b\u7279\u5fb5\n- Communicate\uff1a\u5411\u7522\u54c1\u3001\u884c\u92b7\u8207\u7ba1\u7406\u5718\u968a\u7cbe\u6e96\u4e14\u6709\u6548\u5730\u50b3\u9054\u5206\u6790\u6d1e\u5bdf\n- Program\uff1a\u5229\u7528\u7a0b\u5f0f\u8a9e\u8a00\u638c\u63e1\u7ba1\u7dda\u6d41\n\n## \u6bcf\u500b\u74b0\u7bc0\u90fd\u6709\u5c0d\u61c9\u5957\u4ef6\u652f\u63f4\n\n- [NumPy](https://numpy.org/) \u53ef\u4ee5\u652f\u63f4 Import\u3001Tidy/Transform \u8207 Model\n- [Pandas](https://pandas.pydata.org/) \u53ef\u4ee5\u652f\u63f4 Import\u3001Tidy/Transform \u8207 Visualise\n- [Matplotlib](https://matplotlib.org/) \u53ef\u4ee5\u652f\u63f4 Visualise\u3001Model \u8207 Communicate\n\n## \u78ba\u8a8d\u5206\u6790\u74b0\u5883\u80fd\u5920\u4f7f\u7528\u9019\u4e09\u500b\u5957\u4ef6\n\n\n```python\nimport numpy as np\nimport pandas as pd\nimport matplotlib as mpl\n```\n\n## \u6aa2\u8996\u9019\u4e9b\u7b2c\u4e09\u65b9\u5957\u4ef6\u7684\u7248\u672c\u8cc7\u8a0a\n\n\n```python\nprint(np.__version__)\nprint(pd.__version__)\nprint(mpl.__version__)\n```\n\n 1.18.4\n 1.0.3\n 3.2.1\n\n\n## NumPy \u5feb\u901f\u5165\u9580\n\n## \u7c21\u4ecb NumPy \u5957\u4ef6\n\n> NumPy\uff0cNumerical Python \u7684\u7c21\u7a31\uff0c\u662f\u4f7f\u7528 Python \u9032\u884c\u79d1\u5b78\u8a08\u7b97\u7684\u7b2c\u4e09\u65b9\u5957\u4ef6\uff0c\u5275\u9020\u4e86\u4e00\u500b\u7a31\u70ba N \u7dad\u9663\u5217\uff08ndarray\uff09\u7684\u985e\u5225\uff0c\u900f\u904e N \u7dad\u9663\u5217\uff0c\u53ef\u4ee5\u5c07 Python \u5f9e\u4e00\u500b\u6cdb\u7528\uff08general purposed\uff09\u7a0b\u5f0f\u8a9e\u8a00\u8f49\u8b8a\u6210\u4e00\u500b\u79d1\u5b78\u8a08\u7b97\uff08scientific computing\uff09\u7a0b\u5f0f\u8a9e\u8a00\uff0c\u4e26\u4e14\u6709\u8c50\u5bcc\u7684\u7d71\u8a08\u3001\u7dda\u6027\u4ee3\u6578\u8207\u96a8\u6a5f\u7684\u51fd\u5f0f\u3002\n\n## \u7bc4\u4f8b\u8cc7\u6599\uff1aCOVID-19 \u6bcf\u65e5\u5831\u544a\n\n\u8cc7\u6599\u4f86\u6e90\uff1a\n\n\n```python\nimport datetime\n\ndef get_latest_daily_report():\n \"\"\"\n This function returns the latest global daily report from https://github.com/CSSEGISandData/COVID-19 and its file date.\n \"\"\"\n latest_date = datetime.date.today()\n day_delta = datetime.timedelta(days=1)\n fmt = '%m-%d-%Y'\n while True:\n try:\n latest_date_fmt = latest_date.strftime(fmt)\n csv_url = 
\"https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/{}.csv\".format(latest_date_fmt)\n daily_report = pd.read_csv(csv_url)\n print(\"\u8f09\u5165\u4e86 {} \u7684\u6bcf\u65e5\u5831\u544a\u3002\".format(latest_date_fmt))\n break\n except:\n latest_date_fmt = latest_date.strftime(fmt)\n print(\"\u5c1a\u672a\u6709 {} \u7684\u6bcf\u65e5\u5831\u544a\u3002\".format(latest_date_fmt))\n latest_date -= day_delta\n return latest_date, daily_report\n```\n\n\n```python\nlatest_date, daily_report = get_latest_daily_report()\n```\n\n \u5c1a\u672a\u6709 09-13-2020 \u7684\u6bcf\u65e5\u5831\u544a\u3002\n \u8f09\u5165\u4e86 09-12-2020 \u7684\u6bcf\u65e5\u5831\u544a\u3002\n\n\n## N \u7dad\u9663\u5217\u53ef\u7531\u8cc7\u6599\u6846\u53d6\u51fa\n\n\n```python\nconfirmed = daily_report['Confirmed'].values\ntype(confirmed)\n```\n\n\n\n\n numpy.ndarray\n\n\n\n## N \u7dad\u9663\u5217\u63d0\u4f9b\u5169\u500b\u597d\u7528\u7684\u529f\u80fd\n\n- \u5411\u91cf\u5316\uff08Vectorization\uff09\u904b\u7b97\u3001\u6709\u6642\u4e5f\u7a31\u70ba\u5143\u7d20\u7d1a\u5225\uff08Element-wise\uff09\u904b\u7b97\n- \u5e03\u6797\u7d22\u5f15\uff08Boolean indexing\uff09\n\n## \u5411\u91cf\u5316\u904b\u7b97\n\n\n```python\ndeaths = daily_report['Deaths'].values\nrecovered = daily_report['Recovered'].values\nactive = confirmed - deaths - recovered # Vectorization\n```\n\n## \u5e03\u6797\u7d22\u5f15\n\n\n```python\nprint(confirmed.size)\nprint(confirmed > 100000) # Creating a boolean array\nprint(confirmed[confirmed > 100000].size) # Boolean indexing\n```\n\n 3954\n [False False False ... False False False]\n 65\n\n\n## \u8a66\u8457\u4f7f\u7528\u5167\u5efa\u8cc7\u6599\u7d50\u69cb `list` \u5b8c\u6210\u524d\u5169\u500b\u4efb\u52d9\uff08\u85c9\u6b64\u9ad4\u9a57 N \u7dad\u9663\u5217\u7684\u4fbf\u5229\u6027\uff09\n\n1. \u8a08\u7b97\u6cbb\u7642\u4e2d\u500b\u6848\u6578\uff1b\n2. 
\u6311\u9078\u78ba\u8a3a\u6578\u8d85\u904e 10 \u842c\u7684\u89c0\u6e2c\u503c\u3002\n\n\n```python\n# \u8a08\u7b97\u6cbb\u7642\u4e2d\u500b\u6848\u6578\nconfirmed = list(daily_report['Confirmed'].values)\ndeaths = list(daily_report['Deaths'].values)\nrecovered = list(daily_report['Recovered'].values)\n```\n\n\n```python\n# \u6311\u9078\u78ba\u8a3a\u6578\u8d85\u904e 10 \u842c\u7684\u89c0\u6e2c\u503c\nconfirmed = list(daily_report['Confirmed'].values)\n```\n\n## \u9664\u4e86 N \u7dad\u9663\u5217\uff0cNumPy \u9084\u9644\u5e36\u8c50\u5bcc\u7684\u51fd\u5f0f\n\n- \u901a\u7528\u51fd\u5f0f\n- \u805a\u5408\u51fd\u5f0f\n- \u96a8\u6a5f\u51fd\u5f0f\n- \u7dda\u6027\u4ee3\u6578\u51fd\u5f0f\n- ...etc.\n\n## \u66b8\u89e3\u66f4\u591a NumPy \u63d0\u4f9b\u7684\u529f\u80fd\n\n[NumPy User Guide](https://www.numpy.org/devdocs/user/index.html)\n\n## Pandas \u5feb\u901f\u5165\u9580\n\n## \u7c21\u4ecb Pandas \u5957\u4ef6\n\n> Pandas\uff0cPanel DataFrame Series \u7684\u7c21\u7a31\uff0c\u662f\u5728 Python \u4e2d\u5206\u6790\u8868\u683c\u8cc7\u6599\u7684\u7b2c\u4e09\u65b9\u5957\u4ef6\uff0c\u5275\u9020\u4e86\u7a31\u70ba\u7d22\u5f15\uff08Index\uff09\u3001\u5e8f\u5217\uff08Series\uff09\u8207\u8cc7\u6599\u6846\uff08DataFrame\uff09\u7684\u985e\u5225\uff0c\u900f\u904e\u9019\u4e9b\u985e\u5225\uff0c\u53ef\u4ee5\u8b93 Python \u5728\u9762\u5c0d\u6587\u5b57\u6a94\u6848\u3001Excel \u8a66\u7b97\u8868\u8207\u95dc\u806f\u5f0f\u8cc7\u6599\u5eab\u6642\u80fd\u5920\u4f7f\u7528\u66f4\u76f4\u89ba\u7684\u89c0\u5ff5\u64cd\u4f5c\u3002\n\n## \u5c07 CSV \u6587\u5b57\u6a94\u6848\u8b80\u5165\u6210\u70ba\u8cc7\u6599\u6846\n\n- \u7bc4\u4f8b\u8cc7\u6599\uff1aCOVID-19 \u6bcf\u65e5\u5831\u544a\n- \u8cc7\u6599\u4f86\u6e90\uff1a\n\n\n```python\nlatest_date, daily_report = get_latest_daily_report()\ntype(daily_report)\n```\n\n \u5c1a\u672a\u6709 09-13-2020 \u7684\u6bcf\u65e5\u5831\u544a\u3002\n \u8f09\u5165\u4e86 09-12-2020 \u7684\u6bcf\u65e5\u5831\u544a\u3002\n\n\n\n\n\n pandas.core.frame.DataFrame\n\n\n\n## Pandas \u63d0\u4f9b\u4e09\u7a2e\u65b0\u7684\u8cc7\u6599\u7d50\u69cb\n\n- `DataFrame`\n- `Series`\n- `Index`\n\n\n```python\nprint(type(daily_report)) # DataFrame\nprint(type(daily_report['Confirmed'])) # Series\nprint(type(daily_report.columns)) # Index\nprint(type(daily_report.index)) # Index\n```\n\n \n \n \n \n\n\n## \u4f7f\u7528\u66f4\u76f4\u89ba\u7684\u89c0\u5ff5\u64cd\u4f5c\u8cc7\u6599\n\n- \u5982\u4f55\u5b9a\u7fa9\u300c\u66f4\u76f4\u89ba\u300d\uff1f\n - Spreadsheet-like\uff1b\n - SQL-like\n\n## \u57fa\u790e\u7684\u8cc7\u6599\u6846\u64cd\u4f5c\n\n- \u884d\u751f\uff08mutate\uff09\uff0c\u65b0\u589e\u6b04\u4f4d\u5230\u8cc7\u6599\u6846\u4e2d\uff0c\u7279\u5225\u662f\u65b0\u6b04\u4f4d\u8207\u65e2\u6709\u6b04\u4f4d\u5177\u6709\u51fd\u5f0f\u7684\u8f38\u51fa\u4ee5\u53ca\u8f38\u5165\u95dc\u4fc2\uff1b\n- \u9078\u64c7\uff08select\uff09\uff0c\u5f9e\u8cc7\u6599\u6846\u4e2d\u4f9d\u64da\u540d\u7a31\u6311\u51fa\u55ae\u500b\u6216\u591a\u500b\u6b04\u4f4d\uff1b\n- \u7be9\u9078\uff08filter\uff09\uff0c\u4f9d\u64da\u5224\u65b7\u689d\u4ef6\uff08\u5e03\u6797\u503c\uff09\u5f9e\u8cc7\u6599\u6846\u4e2d\u6311\u51fa\u7b26\u5408\uff08\u5e03\u6797\u503c\u70ba True\uff09\u7684\u89c0\u6e2c\u503c\uff1b\n- \u6458\u8981\uff08summarise\uff09\uff0c\u5c0d\u6b04\u4f4d\u9032\u884c\u805a\u5408\uff08Aggregate\uff09\u7684\u904b\u7b97\u5c07\u591a\u7b46\u89c0\u6e2c\u503c\u7e3d\u7d50\uff1b\n- 
\u6392\u5e8f\uff08arrange\uff09\uff0c\u5c0d\u89c0\u6e2c\u503c\u7531\u5c0f\u5230\u5927\uff08\u905e\u589e\uff09\u6216\u8005\u7531\u5927\u5230\u5c0f\uff08\u905e\u6e1b\uff09\u8b8a\u52d5\u6392\u5217\u9806\u5e8f\uff1b\n- \u5206\u7d44\uff08group by\uff09\uff0c\u5c07\u6b04\u4f4d\u4f9d\u7167\u7368\u4e00\u985e\u5225\u9032\u884c\u6458\u8981\u3002\n\n## \u884d\u751f\uff08mutate\uff09\n\n\u6cbb\u7642\u4e2d\u6848\u4f8b\u6578\u53ef\u4ee5\u7528\u65e2\u6709\u7684\u6b04\u4f4d\u8a08\u7b97\u800c\u5f97\uff0c\u5e38\u898b\u7684\u5b9a\u7fa9\u70ba\uff1a\u78ba\u8a3a\u6578\u6e1b\u53bb\u6b7b\u4ea1\u6578\u518d\u6e1b\u75ca\u7652\u6578\u3002\n\n\\begin{equation}\n\\text{Active} = \\text{Confirmed} - \\text{Deaths} - \\text{Recovered}\n\\end{equation}\n\n\n```python\nactive = daily_report[\"Confirmed\"] - daily_report[\"Deaths\"] - daily_report[\"Recovered\"]\nactive\n```\n\n\n\n\n 0 5987\n 1 4361\n 2 12527\n 3 348\n 4 1914\n ... \n 3949 9717\n 3950 1\n 3951 216\n 3952 1147\n 3953 1609\n Length: 3954, dtype: int64\n\n\n\n## \u9078\u64c7\uff08select\uff09\n\n\u5728\u4e2d\u62ec\u865f\u88e1\u982d\u8f38\u5165\u6b04\u4f4d\u540d\u7a31\u53ef\u4ee5\u5c07\u8cc7\u6599\u4ee5 `Series` \u5916\u578b\u5f9e\u8cc7\u6599\u6846\u4e2d\u53d6\u51fa\u3002\n\n\n```python\ndaily_report[\"Country_Region\"]\n```\n\n\n\n\n 0 Afghanistan\n 1 Albania\n 2 Algeria\n 3 Andorra\n 4 Angola\n ... \n 3949 West Bank and Gaza\n 3950 Western Sahara\n 3951 Yemen\n 3952 Zambia\n 3953 Zimbabwe\n Name: Country_Region, Length: 3954, dtype: object\n\n\n\n## \u9078\u64c7\uff08select\uff09\n\n\u82e5\u60f3\u8981\u9078\u64c7\u591a\u500b\u6b04\u4f4d\uff0c\u5c07\u591a\u500b\u6b04\u4f4d\u540d\u7a31\u4ee5 `list` \u50b3\u5165\u4e2d\u62ec\u865f\u3002\n\n\n```python\nmultiple_columns = [\"Country_Region\", \"Confirmed\", \"Deaths\", \"Recovered\"]\ndaily_report[multiple_columns]\n```\n\n\n\n\n
| | Country_Region | Confirmed | Deaths | Recovered |
|---|---|---|---|---|
| 0 | Afghanistan | 38641 | 1420 | 31234 |
| 1 | Albania | 11185 | 330 | 6494 |
| 2 | Algeria | 48007 | 1605 | 33875 |
| 3 | Andorra | 1344 | 53 | 943 |
| 4 | Angola | 3335 | 132 | 1289 |
| ... | ... | ... | ... | ... |
| 3949 | West Bank and Gaza | 29906 | 210 | 19979 |
| 3950 | Western Sahara | 10 | 1 | 8 |
| 3951 | Yemen | 2009 | 582 | 1211 |
| 3952 | Zambia | 13466 | 312 | 12007 |
| 3953 | Zimbabwe | 7508 | 224 | 5675 |

3954 rows × 4 columns
\n\n\n\n## \u7be9\u9078\uff08filter\uff09\n\n\u5728\u4e2d\u62ec\u865f\u88e1\u982d\u8f38\u5165\u5224\u65b7\u689d\u4ef6\u6240\u7372\u5f97\u7684\u5e03\u6797\u503c `Series` \u53ef\u4ee5\u7372\u5f97\u6307\u5b9a\u7684\u89c0\u6e2c\u503c\u3002\n\n\n```python\nis_tw = daily_report['Country_Region'] == 'Taiwan*' # \u53f0\u7063\u5728\u54ea\u88e1\nis_tw\n```\n\n\n\n\n 0 False\n 1 False\n 2 False\n 3 False\n 4 False\n ... \n 3949 False\n 3950 False\n 3951 False\n 3952 False\n 3953 False\n Name: Country_Region, Length: 3954, dtype: bool\n\n\n\n\n```python\ndaily_report[is_tw] # is_tw \u662f\u4e00\u500b\u5e03\u6797\u503c Series\n```\n\n\n\n\n
| | FIPS | Admin2 | Province_State | Country_Region | Last_Update | Lat | Long_ | Confirmed | Deaths | Recovered | Active | Combined_Key | Incidence_Rate | Case-Fatality_Ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 622 | NaN | NaN | NaN | Taiwan* | 2020-09-13 04:30:52 | 23.7 | 121.0 | 498 | 7 | 475 | 16.0 | Taiwan* | 2.090963 | 1.405622 |
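The same boolean-mask idea shown earlier for N-dimensional arrays carries over to data frames. As an extra illustration (reusing the 100,000-case threshold from the NumPy example above; this cell is an addition, not part of the original notebook):

```python
# Build the condition directly inside the brackets and keep only matching rows.
over_100k = daily_report[daily_report['Confirmed'] > 100000]
over_100k[['Country_Region', 'Confirmed']].head()
```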
\n\n\n\n## \u6458\u8981\uff08summarize\uff09\n\n\u5c0d `Series` \u547c\u53eb\u805a\u5408\u65b9\u6cd5\uff0c\u4f8b\u5982\u52a0\u7e3d `.sum()`\u3002\n\n\n```python\ndaily_report['Confirmed'].sum()\n```\n\n\n\n\n 28759036\n\n\n\n## \u5206\u7d44\uff08group by\uff09\u8207\u6458\u8981\uff08summarise\uff09\n\n\u4ee5\u570b\u5bb6\u70ba\u5206\u7d44\u5c64\u7d1a\uff0c\u6240\u4ee5\u5728 `groupby()` \u65b9\u6cd5\u4e2d\u50b3\u5165 Country_Region \u6b04\u4f4d\uff0c\u7372\u5f97\u4e00\u500b `DataFrameGroupBy` \u985e\u5225\u3002\n\n\n```python\ndaily_report.groupby(\"Country_Region\")\n```\n\n\n\n\n \n\n\n\n## \u5206\u7d44\uff08group by\uff09\u8207\u6458\u8981\uff08summarise\uff09 \n\n\u6307\u5b9a `DataFrameGroupBy` \u985e\u5225\u7684\u6b04\u4f4d\u8207\u805a\u5408\u51fd\u5f0f\uff0c\u7372\u5f97\u5206\u7d44\u6458\u8981\u7684\u7d50\u679c\uff0c\u662f\u4e00\u500b `Series`\u3002\n\n\n```python\ndaily_report.groupby(\"Country_Region\")['Confirmed'].sum()\n```\n\n\n\n\n Country_Region\n Afghanistan 38641\n Albania 11185\n Algeria 48007\n Andorra 1344\n Angola 3335\n ... \n West Bank and Gaza 29906\n Western Sahara 10\n Yemen 2009\n Zambia 13466\n Zimbabwe 7508\n Name: Confirmed, Length: 188, dtype: int64\n\n\n\n## \u6392\u5e8f\uff08arrange\uff09\n\n\u547c\u53eb `Series` \u7684 `sort_values(ascending=False)` \u65b9\u6cd5\u5c07\u6458\u8981\u7d50\u679c\u7531\u5927\u5230\u5c0f\u905e\u6e1b\u6392\u5e8f\u3002\n\n\n```python\nconfirmed_by_country = daily_report.groupby(\"Country_Region\")['Confirmed'].sum()\nconfirmed_by_country.sort_values(ascending=False)[:10] # \u986f\u793a\u78ba\u8a3a\u4eba\u6578\u524d 10 \u9ad8\u7684\u570b\u5bb6\n```\n\n\n\n\n Country_Region\n US 6485214\n India 4754356\n Brazil 4315687\n Russia 1053663\n Peru 716670\n Colombia 708964\n Mexico 663973\n South Africa 648214\n Spain 566326\n Argentina 546481\n Name: Confirmed, dtype: int64\n\n\n\n## \u9664\u4e86\u57fa\u790e\u64cd\u4f5c\uff0cPandas \u9084\u80fd\u5920\u9032\u884c\n\n- \u7db2\u9801\u8868\u683c\u5167\u5bb9\u8f09\u5165\n- \u95dc\u806f\u5f0f\u8cc7\u6599\u5eab\u64cd\u4f5c\n- \u8996\u89ba\u5316\n- ...etc.\n\n## \u66b8\u89e3\u66f4\u591a Pandas \u63d0\u4f9b\u7684\u529f\u80fd\n\n[pandas: powerful Python data analysis toolkit](http://pandas.pydata.org/pandas-docs/stable/)\n\n## Matplotlib \u5feb\u901f\u5165\u9580\n\n## \u7c21\u4ecb Matplotlib \u5957\u4ef6\n\n> Matplotlib\uff0cMatlab Plotting Library \u7684\u7c21\u7a31\uff0c\u662f\u5728 Python \u4e2d\u5c07\u8cc7\u6599\u8996\u89ba\u5316\u7684\u7b2c\u4e09\u65b9\u5957\u4ef6\u3002\n\n## pyplot \u6a21\u7d44\n\n> \u96b8\u5c6c\u65bc Matplotlib \u7684\u7e6a\u5716\u5de5\u5177\uff0c\u7528\u985e\u4f3c Matlab \u7684\u8a9e\u6cd5\u5efa\u7acb\u8996\u89ba\u5316\u3002\n\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n## \u4f7f\u7528\u6d41\u7a0b\n\n- \u5c07\u8cc7\u6599\u6574\u7406\u70ba `ndarray` \u6216 `Series` \u683c\u5f0f\n- \u547c\u53eb `plt.figure()` \u5c55\u958b\u300c\u756b\u5e03\u7269\u4ef6\u300d\u3001\u547c\u53eb `plt.axes()` \u5c55\u958b\u300c\u8ef8\u7269\u4ef6\u300d\n- \u4f9d\u7167\u63a2\u7d22\u9700\u6c42\u547c\u53eb\u300c\u8ef8\u7269\u4ef6\u300d\u7684\u4f5c\u5716\u65b9\u6cd5\n- \u4f9d\u7167\u8a2d\u8a08\u9700\u6c42\u6dfb\u52a0\u300c\u8ef8\u7269\u4ef6\u300d\u7684\u5143\u7d20\n- \u547c\u53eb `plt.show()` \u986f\u793a\u5716\u5f62\uff08\u6216\u8005 `plt.savefig()` \u5132\u5b58\u5716\u5f62\uff09\n\n## \u5c07 CSV \u6587\u5b57\u6a94\u6848\u8b80\u5165\u6210\u70ba\u8cc7\u6599\u6846\n\n- \u7bc4\u4f8b\u8cc7\u6599\uff1aCOVID-19 \u6642\u9593\u5e8f\u5217\n- \u8cc7\u6599\u4f86\u6e90\uff1a\n\n\n```python\nrequest_url = 
\"https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv\"\ntime_series = pd.read_csv(request_url)\ntype(time_series)\n```\n\n\n\n\n pandas.core.frame.DataFrame\n\n\n\n\n```python\ntime_series.head()\n```\n\n\n\n\n
| | Province/State | Country/Region | Lat | Long | 1/22/20 | 1/23/20 | 1/24/20 | 1/25/20 | 1/26/20 | 1/27/20 | ... | 9/3/20 | 9/4/20 | 9/5/20 | 9/6/20 | 9/7/20 | 9/8/20 | 9/9/20 | 9/10/20 | 9/11/20 | 9/12/20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | NaN | Afghanistan | 33.93911 | 67.709953 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 38288 | 38304 | 38324 | 38398 | 38494 | 38520 | 38544 | 38572 | 38606 | 38641 |
| 1 | NaN | Albania | 41.15330 | 20.168300 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 9844 | 9967 | 10102 | 10255 | 10406 | 10553 | 10704 | 10860 | 11021 | 11185 |
| 2 | NaN | Algeria | 28.03390 | 1.659600 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 45469 | 45773 | 46071 | 46364 | 46653 | 46938 | 47216 | 47488 | 47752 | 48007 |
| 3 | NaN | Andorra | 42.50630 | 1.521800 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 1199 | 1215 | 1215 | 1215 | 1261 | 1261 | 1301 | 1301 | 1344 | 1344 |
| 4 | NaN | Angola | -11.20270 | 17.873900 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 2805 | 2876 | 2935 | 2965 | 2981 | 3033 | 3092 | 3217 | 3279 | 3335 |

5 rows × 239 columns
\n\n\n\n## \u8cc7\u6599\u6574\u7406\uff1a\u8f49\u7f6e\n\n\n```python\nid_cols = time_series.columns[:4]\ntime_series_long = pd.melt(time_series, id_vars=id_cols, var_name='Date', value_name='Confirmed')\ntime_series_long.head()\n```\n\n\n\n\n
| | Province/State | Country/Region | Lat | Long | Date | Confirmed |
|---|---|---|---|---|---|---|
| 0 | NaN | Afghanistan | 33.93911 | 67.709953 | 1/22/20 | 0 |
| 1 | NaN | Albania | 41.15330 | 20.168300 | 1/22/20 | 0 |
| 2 | NaN | Algeria | 28.03390 | 1.659600 | 1/22/20 | 0 |
| 3 | NaN | Andorra | 42.50630 | 1.521800 | 1/22/20 | 0 |
| 4 | NaN | Angola | -11.20270 | 17.873900 | 1/22/20 | 0 |
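As a consistency check on the melt: the wide table has 239 columns, 4 of which are identifiers, so each location contributes 235 dated observations, and 235 × 266 = 62,510 matches the row count reported by `.info()` further below (the 266 locations are inferred from that row count rather than shown explicitly above).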
\n\n\n\n## \u8cc7\u6599\u6574\u7406\uff1a\u8f49\u63db `Date` \u578b\u5225\n\n\n```python\ndate = pd.to_datetime(time_series_long['Date'])\ntime_series_long = time_series_long.drop('Date', axis=1)\ntime_series_long.insert(4, 'Date', date)\ntime_series_long.info()\n```\n\n \n RangeIndex: 62510 entries, 0 to 62509\n Data columns (total 6 columns):\n # Column Non-Null Count Dtype \n --- ------ -------------- ----- \n 0 Province/State 19035 non-null object \n 1 Country/Region 62510 non-null object \n 2 Lat 62510 non-null float64 \n 3 Long 62510 non-null float64 \n 4 Date 62510 non-null datetime64[ns]\n 5 Confirmed 62510 non-null int64 \n dtypes: datetime64[ns](1), float64(2), int64(1), object(2)\n memory usage: 2.9+ MB\n\n\n## \u7e6a\u88fd\u5169\u500b\u8996\u89ba\u5316\n\n1. \u53f0\u7063\u7d2f\u8a08\u78ba\u8a3a\u4eba\u6578\u8da8\u52e2\u5716\uff1b\n2. \u53f0\u7063\u6bcf\u65e5\u65b0\u589e\u78ba\u8a3a\u4eba\u6578\u9577\u689d\u5716\u3002\n\n## \u7e6a\u88fd\u53f0\u7063\u7d2f\u8a08\u78ba\u8a3a\u4eba\u6578\u8da8\u52e2\u5716\n\n## \u5c07\u8cc7\u6599\u6574\u7406\u70ba `ndarray` \u6216 `Series` \u683c\u5f0f\n\n\n```python\ntw = time_series_long[time_series_long['Country/Region'] == 'Taiwan*']\nx = tw['Date'].values\ny = tw['Confirmed'].values\n```\n\n## \u547c\u53eb `plt.figure()` \u5c55\u958b\u300c\u756b\u5e03\u7269\u4ef6\u300d\u3001\u547c\u53eb `plt.axes()` \u5c55\u958b\u300c\u8ef8\u7269\u4ef6\u300d\n\n\n```python\nfig = plt.figure()\nax = plt.axes()\n```\n\n## \u4f9d\u7167\u63a2\u7d22\u9700\u6c42\u547c\u53eb\u300c\u8ef8\u7269\u4ef6\u300d\u7684\u4f5c\u5716\u65b9\u6cd5\n\n\n```python\nfig = plt.figure()\nax = plt.axes()\nax.plot(x, y)\n```\n\n## \u4f9d\u7167\u8a2d\u8a08\u9700\u6c42\u6dfb\u52a0\u300c\u8ef8\u7269\u4ef6\u300d\u7684\u5143\u7d20\n\n\n```python\nfig = plt.figure()\nax = plt.axes()\nax.plot(x, y, label=\"Taiwan\")\nax.set_xlabel('Date')\nax.set_title('Cumulative COVID-19 confirmed cases')\nax.legend(loc='upper left')\n```\n\n## \u547c\u53eb plt.show() \u986f\u793a\u5716\u5f62\n\n\u6216\u8005 `plt.savefig()` \u5132\u5b58\u5716\u5f62\u3002\n\n\n```python\nfig = plt.figure()\nax = plt.axes()\nax.plot(x, y, label=\"Taiwan\")\nax.set_xlabel('Date')\nax.set_title('Cumulative COVID-19 confirmed cases')\nax.legend(loc='upper left')\nplt.show() # plt.savefig('tw_covid19_time_series.png')\n```\n\n## \u7e6a\u88fd\u53f0\u7063\u6bcf\u65e5\u65b0\u589e\u78ba\u8a3a\u4eba\u6578\u9577\u689d\u5716\n\n## \u5c07\u8cc7\u6599\u6574\u7406\u70ba `ndarray` \u6216 `Series` \u683c\u5f0f\n\n\n```python\ndaily_increase = np.diff(tw['Confirmed'].values, n=1)\n```\n\n## \u547c\u53eb `plt.figure()` \u5c55\u958b\u300c\u756b\u5e03\u7269\u4ef6\u300d\u3001\u547c\u53eb `plt.axes()` \u5c55\u958b\u300c\u8ef8\u7269\u4ef6\u300d\n\n\n```python\nfig = plt.figure()\nax = plt.axes()\n```\n\n## \u4f9d\u7167\u63a2\u7d22\u9700\u6c42\u547c\u53eb\u300c\u8ef8\u7269\u4ef6\u300d\u7684\u4f5c\u5716\u65b9\u6cd5\n\n\n```python\nfig = plt.figure()\nax = plt.axes()\nax.bar(x[1:], daily_increase)\n```\n\n## \u4f9d\u7167\u8a2d\u8a08\u9700\u6c42\u6dfb\u52a0\u300c\u8ef8\u7269\u4ef6\u300d\u7684\u5143\u7d20\n\n\n```python\nfig = plt.figure()\nax = plt.axes()\nax.bar(x[1:], daily_increase, label='Taiwan')\nax.set_title('Daily COVID-19 confirmed cases')\nax.set_xlabel('Date')\nax.set_ylabel('Daily Increase')\nax.set_ylim(0, None)\nax.legend(loc='upper right')\n```\n\n## \u547c\u53eb plt.show() \u986f\u793a\u5716\u5f62\n\n\u6216\u8005 `plt.savefig()` \u5132\u5b58\u5716\u5f62\u3002\n\n\n```python\nfig = plt.figure()\nax = plt.axes()\nax.bar(x[1:], 
daily_increase, label='Taiwan')\nax.set_title('Daily COVID-19 confirmed cases')\nax.set_xlabel('Date')\nax.set_ylabel('Daily Increase')\nax.set_ylim(0, None)\nax.legend(loc='upper right')\nplt.show() # plt.savefig('tw_covid19_daily_increase.png')\n```\n", "meta": {"hexsha": "359a0aabcb819ed3ea5aa64bea3f19610fe9bbdd", "size": 127070, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/10-python-data-analysis-glimpse.ipynb", "max_stars_repo_name": "datainpoint/classroom-introduction-to-python", "max_stars_repo_head_hexsha": "a5d4036829eda3a0ed1a0a0af752f541e4e015e7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/10-python-data-analysis-glimpse.ipynb", "max_issues_repo_name": "datainpoint/classroom-introduction-to-python", "max_issues_repo_head_hexsha": "a5d4036829eda3a0ed1a0a0af752f541e4e015e7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/10-python-data-analysis-glimpse.ipynb", "max_forks_repo_name": "datainpoint/classroom-introduction-to-python", "max_forks_repo_head_hexsha": "a5d4036829eda3a0ed1a0a0af752f541e4e015e7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.5351089588, "max_line_length": 15572, "alphanum_fraction": 0.7620130637, "converted": true, "num_tokens": 8254, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5312093733737562, "lm_q2_score": 0.5583269943353744, "lm_q1q2_score": 0.2965885327985469}} {"text": "\n\n\n```python\n!pip install qiskit\n```\n\n Collecting qiskit\n Downloading https://files.pythonhosted.org/packages/6f/61/cb7506e17a2566dc8a31a3e1924d91ac0bdd8ff07c71ec698c06647b6306/qiskit-0.26.2.tar.gz\n Collecting qiskit-terra==0.17.4\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/b3/0c/3c7a8dd451dae0907263e9de9e3e34909e15e18c88a589b44581972c8511/qiskit_terra-0.17.4-cp37-cp37m-manylinux2010_x86_64.whl (6.0MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6.0MB 7.3MB/s \n \u001b[?25hCollecting qiskit-aer==0.8.2\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/c2/d2/6ff15c370b5465b32529b528bf3f4ce1e01f74498be16203aa1c04b67022/qiskit_aer-0.8.2-cp37-cp37m-manylinux2010_x86_64.whl (18.0MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 18.0MB 187kB/s \n \u001b[?25hCollecting qiskit-ibmq-provider==0.13.1\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/be/99/74bbb901f88603a7d850d4889abc06d81ba702e4227151f4a5b66f2631fe/qiskit_ibmq_provider-0.13.1-py3-none-any.whl (228kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 235kB 52.0MB/s \n \u001b[?25hCollecting qiskit-ignis==0.6.0\n \u001b[?25l Downloading 
    Successfully built qiskit python-constraint docplex yfinance dlx
    Installing collected packages: fastjsonschema, retworkx, python-constraint, ply, qiskit-terra, pybind11, qiskit-aer, ntlm-auth, cryptography, requests-ntlm, websockets, qiskit-ibmq-provider, qiskit-ignis, docplex, lxml, yfinance, dlx, inflection, quandl, qiskit-aqua, qiskit
    Successfully installed cryptography-3.4.7 dlx-1.0.4 docplex-2.20.204 fastjsonschema-2.15.1 inflection-0.5.1 lxml-4.6.3 ntlm-auth-1.5.0 ply-3.11 pybind11-2.6.2 python-constraint-1.4.0 qiskit-0.26.2 qiskit-aer-0.8.2 qiskit-aqua-0.9.1 qiskit-ibmq-provider-0.13.1 qiskit-ignis-0.6.0 qiskit-terra-0.17.4 quandl-3.6.0 requests-ntlm-1.1.0 retworkx-0.8.0 websockets-9.1 yfinance-0.1.55


```python
import qiskit
from qiskit import *
from qiskit.compiler import assemble
from qiskit.visualization import plot_histogram

import tensorflow as tf
from tensorflow.keras import layers

import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
from tqdm import tqdm
```


```python
shots = 4000      # measurement shots per circuit
number = 4        # number of qubits
equality = shots / np.power(2, number)  # expected counts per outcome for a uniform distribution
```


```python
Dataset_size = 2048
backend = Aer.get_backend('qasm_simulator')

df = pd.DataFrame()

for sample in tqdm(range(Dataset_size)):

    qc = QuantumCircuit(number, number)

    rng = np.random.default_rng()

    random_value = []

    # Apply random Rx, Ry, Rz rotations to every qubit and record the angles as targets
    for qubit in range(number):
        inside_random = [rng.random() * np.pi for _ in range(3)]
        qc.rx(inside_random[0], qubit)
        qc.ry(inside_random[1], qubit)
        qc.rz(inside_random[2], qubit)

        random_value = np.concatenate((random_value, inside_random))

    qc.measure(range(number), range(number))
    circ = transpile(qc, backend)

    qobj = assemble(circ, shots=shots)

    # Run and get counts
    result = backend.run(qobj).result()
    counts = result.get_counts()

    df_temp = pd.DataFrame(counts, index=[sample])

    # One column per rotation angle: target_0 ... target_11
    for t in range(number * 3):
        df_temp[f"target_{t}"] = [random_value[t]]

    df = df.append(df_temp)
```

    100%|██████████| 2048/2048 [00:54<00:00, 37.51it/s]


```python
copied_df = df.copy()

# copied_df = (copied_df - copied_df.min()) / (copied_df.max() - copied_df.min())
train_mean = copied_df.mean()
train_std = copied_df.std()

# Standardize every column (counts and targets) to zero mean and unit variance
copied_df = (copied_df - train_mean) / train_std

# Replace all nan with 0
copied_df = copied_df.fillna(0)

copied_df.head()
```
    (output: the first five rows of `copied_df`, with 16 standardized count columns, one per 4-bit measurement outcome `0000` ... `1111`, followed by the 12 standardized angle columns `target_0` ... `target_11`)
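For clarity (this aside is an addition to the notebook): the transformation applied above is a plain per-column z-score, and the saved `train_mean` and `train_std` are exactly what is used further down to map the network's predictions back to rotation angles. A minimal sketch of that round trip on a toy array:

```python
import numpy as np

x = np.array([3.0, 1.0, 2.0])   # toy values, standing in for one DataFrame column
mean, std = x.mean(), x.std()

z = (x - mean) / std            # forward: what (copied_df - train_mean) / train_std does
x_back = z * std + mean         # inverse: what pred * train_std + train_mean does later

print(np.allclose(x, x_back))   # True: the round trip recovers the original values
```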
```python
# Check if there are any NaNs.
copied_df.isnull().values.any()
```

    False


```python
batch_size = 1

# Split the standardized table into inputs (the 16 count columns) and targets (the 12 angles)
target = copied_df.iloc[:, -3 * number:]
df = copied_df.iloc[:, :-3 * number]

# target = df.pop("target")
dataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))

train_dataset = dataset.cache().shuffle(len(df)).batch(batch_size)
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)
```


```python
Dense = tf.keras.layers.Dense

tf.keras.backend.clear_session()

model = tf.keras.Sequential([
    Dense(3 * number, input_shape=(len(df.columns), ), activation=tf.nn.relu),
    Dense(1024, activation=tf.nn.relu),
    Dense(256, activation=tf.nn.relu),
    Dense(3 * number, activation=tf.nn.sigmoid)  # one output per rotation angle
])

model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.1), loss="mean_squared_error")

model.fit(train_dataset, epochs=10,)
```

    Epoch 1/10
    2048/2048 [==============================] - 4s 2ms/step - loss: 1.4998
    Epoch 2/10
    2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995
    Epoch 3/10
    2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995
    Epoch 4/10
    2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995
    Epoch 5/10
    2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995
    Epoch 6/10
    2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995
    Epoch 7/10
    2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995
    Epoch 8/10
    2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995
    Epoch 9/10
    2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995
    Epoch 10/10
    2048/2048 [==============================] - 4s 2ms/step - loss: 1.4995


```python
# A random target distribution of counts over the 16 measurement outcomes
value_this = np.rint(np.random.dirichlet(np.ones(16), size=1) * shots)
# value_this = np.reshape([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, shots], (1, 16))
new_val = (value_this - train_mean[:-3 * number].values) / train_std[:-3 * number].values
```


```python
pred = model.predict(new_val)
# Undo the standardization of the targets to recover rotation angles
pred = (pred * train_std[-3 * number:].values) + train_mean[-3 * number:].values
pred = np.squeeze(pred)
```


```python
qc = QuantumCircuit(number, number)

# pred holds the 12 predicted angles in the order rx, ry, rz for each qubit
for i in range(number):
    qc.rx(pred[3 * i], i)
    qc.ry(pred[3 * i + 1], i)
    qc.rz(pred[3 * i + 2], i)

qc.measure(range(number), range(number))
circ = transpile(qc, backend)

qobj = assemble(circ, shots=shots)

# Run and get counts
result = backend.run(qobj).result()
counts = result.get_counts()
plot_histogram(counts)
```


```python
pred
```

    array([2.49347505, 2.47280132, 2.51305636, 1.53896315, 2.53338271,
           1.57226614, 2.45784733, 1.52983895, 1.5794965 , 1.55878848,
           2.47334759, 1.55470769])


```python
names = ['{0:04b}'.format(i) for i in range(2 ** number)]
pred_df = pd.DataFrame(value_this, columns = names)
```


```python
plot_histogram(pred_df.to_dict("records"))
```


```python
# Total absolute difference between the target counts and the counts measured
# from the circuit built with the predicted angles
diff = 0
value_this = np.squeeze(value_this)
for key, value in counts.items():
    diff += abs(value_this[int(key, 2)] - value)

print(diff)
```

    3548.0


However, as you can see, the results are not very good. This method of approximating the rotation angles from the measured counts performs poorly.
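To put the 3548.0 figure in perspective (the following cell is an added aside, not part of the original run): both histograms distribute `shots = 4000` counts over the 16 outcomes, so dividing `diff` by `2 * shots` gives a rough total-variation-style distance between the target and the reconstructed distributions, on a 0-to-1 scale.

```python
# Added aside: 0 would mean the two count histograms coincide, 1 that they do not overlap at all
tv_distance = diff / (2 * shots)
print(tv_distance)   # about 0.44 for diff = 3548.0 and shots = 4000
```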
# Machine learning for material science: using CrystalFeatures to predict the bandgap

*By: Sherif Abdulkader Tawfik Abbas*, https://scholar.google.com/citations?user=NhT7vZIAAAAJ, https://questionid.org/r/sherif_tawfik_abbas

This tutorial covers the basics of using machine learning for material science. I will walk you through a few Python scripts that will enable you to classify materials and predict their properties.

The tutorial targets Masters/PhD students and scientists who have some experience with crystal structure (just the basics, like lattice constants and atomic positions). I also expect you to have basic experience with programming (in any language; Python, C, C++, C#, Java, JavaScript, etc).

Topics we will cover:
- Part 1: The basics of machine learning for material science: Using Python to access and process crystal structures
 - Bird's eye view on machine learning for materials
 - Google Colab
 - The MaterialsProject database
 - The PyMatGen python library
 - Structure file formats
 - Querying structures using PyMatGen
- Part 2: Doing machine learning: Predicting the band gap of materials
 - Descriptors
 - Building a simple descriptor vector for crystals
 - Building a data set
 - Machine learning

# Part 1

## Bird's eye view on machine learning for materials

### What is machine learning used for here?
So what do we use machine learning for in material science? The main purpose of machine learning here is the prediction of properties of a given material. This property can either be a class (the classification problem) or a quantity (the regression problem).

### Example problems/papers

Example problems that machine learning can solve:
- Is a given material metallic or semiconducting? An example paper: https://doi.org/10.1038/ncomms15679.
- What is the specific heat capacity of a material? An example paper, one of mine: https://doi.org/10.1002/adts.201900208.
I wrote a blog post, which includes code, on this: http://www.sheriftawfikabbas.com/blog/ai/ml/machine-learning-to-predict-the-specific-heat-capacity-of-crystals/


### The machine learning workflow

So how is this all done? Generally, we can think of machine learning as a 3-step process:
- Step A: First, find numerical/categorical **descriptors** that can describe your material. That is: every material in your dataset should be uniquely represented by an array of numbers/categories.
- Step B: Then, apply your procedure to your entire dataset of structures to form a sheet of material descriptors vs. **target properties**.
- Step C: Use machine learning to predict the target properties based on the descriptors.

A schematic diagram for these steps is shown below.

Before we go deeper into the details, we will need to learn a few things today:
- The programming language (python)
- The database of materials, from which we will get our data and create the dataset

Let's start!

## Google Colab

If you wish to learn to program in python and you don't have python installed on your computer, or you don't wish to struggle with the headache of setting it up, then you can use Google Colab. This is a website that was developed by Google and is publicly available for anyone to use. It is pretty much an online python compiler, where you can write python code and run it, all in your web browser.

This tutorial, as you can see, is already running on Google Colab (let's just call it Colab from now on). On Colab, you can create python **notebooks**, which are known as Jupyter notebooks. Jupyter is a programming environment for python that allows the programmer to write documents, just like this one, where there is both text and code. To demonstrate the placement of code here, check out the section below. This is some python code that prints out the text `"Hello material science! Happy you're here!"`. Press the play button, and it will run that code and actually print out that text.


```
print("Hello material science! Happy you're here!")
```

    Hello material science! Happy you're here!


Now, let's have a quick overview of python.

## Basic python: variables, operations, if statement, for loop

Python is quite a nice and easy language to learn and understand. In fact, it is one of the easiest languages for hitting the ground running. This is because you can virtually run your python code, or script, anywhere: on your laptop or even on your phone. This is thanks to the many online python servers that will allow you to run python code online.

Let me give you an example of how simple it is to run python code.

### The print statement
Let's start with the simplest thing you can do in a program: to ask the computer to print out something, such as `"Like coffee?"`

How do we get the computer to print out this question on the screen? For that, we use the print statement. This statement orders the computer to print *something* to the screen. That something is called a *string*, which we will deal with shortly.

Let's write that code, which will be our first ever line of python code in this tutorial:


```
print("Like coffee?")
```

    Like coffee?


Do you notice something in the above line? It looks different from normal text on this page. It is actually an *executable* line of code! This means that, when you press the play button to the left of the print statement, the program will actually **run**!
That's without needing to install any software on any computer. This is the power of using Google's Colab: it lets you execute python code on the fly.

So what just happened? We wrote a print statement, in which we called the print function. It is a function because it receives something, the "Like coffee?" text, and then performs some task with the text.

Let's try something else. Let's print two sentences:


```
print("Like coffee?","Yes, with milk please!")
```

    Like coffee? Yes, with milk please!


Did you notice what happened here? The print statement received two strings, not one. They are separated by a comma. The print function then prints both sentences one after the other, leaving a space in between.

Now let's try to understand how we write strings like `"Like coffee?"`

### Strings

A string is any collection of characters, numbers, dots, symbols, or anything else you can access via the keyboard (or beyond!), enclosed between two quotes, or between two double quotes. So, for example, typing `"a"` is a valid string in python:


```
"a"
```

    'a'


Here we just typed the string `"a"`, and python only repeated the value of that string, which again is `'a'`. Did you spot the difference?

A string in python is composed of three things: an opening quotation mark, a set of characters, and a closing quotation mark. Python accepts two types of quotation marks: single and double. So we can also type `'a'` up there, and we will get the same result.

The string is an example of a python data type: it is something that represents a value.

### Python data types: strings, numbers and boolean values

Python has a number of data types that enable you to perform various types of operations on data. Here we discuss the three fundamental data types of python: strings, which we just discussed above, numbers, and logical (boolean) values.

Numbers are just numbers! Type a number in the python interpreter, say `5`, and you will just get `5` back.

### Variables

A variable in python, as in any other programming language, enables you to store a value in memory. Let us create a variable that will store the string value "I love coffee" in memory.


```
s = "I love coffee"
print(s)
```

    I love coffee


### Arithmetic operations: `+`, `-`, `*`, `/`, `%`, `//`

Back to operators. You can work with the standard maths operators in python, which are `+`, `-`, `*`, `/`, `%` and `//`. The first four are obvious, but `%` and `//` might need an introduction. The `//` operator is related to `/`: it gives you the quotient of the division, and is therefore called the floor division operator. That is, it just removes the decimal part of the division result. For example, `9//4=2`.

`%` is the modulus operator. It is related to the `//` operator in that: while `a//b` gives you the integer *quotient*, `a%b` gives the *remainder*. For example, in the division `9/4=2+1/4`, the quotient is `2` and the remainder is `1`. So in python, `9%4=1`.
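Here is a small extra cell (added to this write-up, not part of the original tutorial) that puts the quotient and remainder side by side, using the same `9` and `4` from the example above:

```
a = 9
b = 4
print(a / b)    # true division: 2.25
print(a // b)   # floor division (the quotient): 2
print(a % b)    # modulus (the remainder): 1
print(a // b * b + a % b)  # quotient * divisor + remainder reconstructs 9
```

The last line is the usual sanity check relating the two operators: for positive integers, `a == (a // b) * b + (a % b)`.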
### Comparison operations: `==`, `!=`, `<`, `>`, `<=`, `>=`

While the arithmetic operators take numbers and produce numbers, the comparison operators produce boolean values. This is because they ask questions about the variables they operate on.

- `a == b` means: is `a` exactly equal to `b`?
- `a != b` means: is `a` **not** equal to `b`?
- `a < b` means: is `a` less than `b`?
- `a > b` means: is `a` greater than `b`?
- `a <= b` means: is `a` less than **or equal to** `b`?
- `a >= b` means: is `a` greater than **or equal to** `b`?

Let's evaluate a few expressions based on these operators:


```
# This code demonstrates comparison operations
a = 3
b = 5
c = a == b
print(c)
d = a > b
print(d)
e = a <= b
print(e)
```

    False
    False
    True


### Lists
A `list` in python is exactly what its name suggests, a list of things. Like a list of numbers, names, or even a mix of both. To create a list, we have to follow a simple syntax rule: enclose the things in the list between two *square brackets*, like those `[` and `]`, and separate between the list elements using commas, `,`. So for example, here is a list of numbers: `a = [4,6,7,1,0]`, a list of strings: `a = ["a","?","neptune is a planet"]`, a list of both: `a = [3,0,"Where is my car?"]`.

Well, you can also create a list of lists in python! And you can *nest* as many lists as you want. Here is an example: `a = [[1,2],[3,4],[5,6]]`. This is a list of three elements, each element being itself a list of two elements.

#### Accessing and changing list elements

To access a list element, we apply this syntax rule: find out what the *order* of that element is, and then access it using the square brackets. The order of an element in a list is an integer. The order of the first element is always `0`. Here is an example.


```
a = ["I","want","to","order","the",4,"dollars","mocha"]
print(a[0])
```

    I


The string `'I'` is the first element of list `a`, and therefore its index is `0` and can be retrieved by typing `a[0]`.

We can change an element in a list by just assigning it a new value.


```
a = ["I","want","to","order","the",4,"dollars","mocha"]
a[5] = "yes you can!"
print(a)
```

    ['I', 'want', 'to', 'order', 'the', 'yes you can!', 'dollars', 'mocha']


#### Checking if a value exists in a list

To find out if some value belongs to a given list, we use the `in` keyword in python. For example, given the list `a = [1,2,3]`, the boolean expression `2 in a` will return `True` because that's a correct statement.

#### Adding elements to a list

There are three ways to add elements to a list:
- By using the `+` operator e.g. `[1,2,3]+[4]` gives `[1,2,3,4]`.
- By using the `append()` function e.g. `a = [1,2,3];a.append(4);print(a)` gives `[1,2,3,4]`.
- By using the `insert()` function if you want to add an item at a specified index e.g. `a = [1,2,3];a.insert(1,4);print(a)` gives `[1, 4, 2, 3]`.

#### Adding a list to a list

Let's say you have two lists `a=[1,2]` and `b=[3,4]`, and you wish to create the list `[[1,2],[3,4]]`. Let's try to use `+` and `append()` and see if they will give us what we are after.
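A quick illustrative cell (added here; it is not in the original tutorial) shows what each approach actually produces:

```
a = [1, 2]
b = [3, 4]

print(a + b)       # + concatenates the elements: [1, 2, 3, 4]

c = [1, 2]
c.append(b)        # append() adds b as a single nested element
print(c)           # [1, 2, [3, 4]]

print([a] + [b])   # wrapping each list first gives [[1, 2], [3, 4]]
```

So `+` flattens the two lists into one longer list, `append()` nests the second list inside the first as a single element, and to get `[[1,2],[3,4]]` you need to wrap each list before concatenating (or simply write `[a, b]`).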
### Tuples

A tuple, like a list, is a collection of things, but the things are enclosed between `(` and `)`. But there is an even more important difference: once you group things in a tuple, you cannot change them. That is, a tuple is *immutable*.

For example, let's create a tuple and attempt to change the value of one of the elements.


```
a = (3,4,5)
#a[0] = 2
#The above line gives an error!
```

### Dictionaries

We learned in lists and tuples that the elements are indexed. The index is an integer that starts from `0`. A dictionary extends the indexing concept: a dictionary is a collection of indexed objects, where the indices themselves can be anything *immutable*: numbers, floats, strings and tuples (and frozensets, but we won't discuss those today).

The syntax for creating a dictionary is as follows: `{key:value}`, where `key` is the index and `value` is any data type. For example,


```
a = {'apple':3.5, 'pear': 2.5, 'banana':4}
print(a['apple'])
b = {'a':"lists",'b':"tuples",'c':"sets",'d':"dictionaries"}
print(b['c'])
```

    3.5
    sets


### The conditional statement

So far, we have been dealing with simple python statements. Each statement could be written in a single line of code, and they instructed the computer to perform a single task. For example, `a = 4` instructs the computer to put `4` into `a` and that's it.

Serious programming starts when we let the computer make decisions after it tests certain conditions. Instead of just printing a name, how about we get the computer to print the name only **if** it starts with the letter `A`?

I just said **if**, which means: some condition should be tested before the print statement is executed. Now let me introduce the `if` statement in python. This statement has the following syntax:

```
if boolean_expression:
    some_statements
```

The `boolean_expression` evaluates to either True or False. The `if` statement will only execute the statements if `boolean_expression` evaluates to `True`. Otherwise, these statements will not be executed.

So, to solve the above problem, here is the code:


```
s = 'Ahmed'
if s[0] == 'A':
    print(s)
```

    Ahmed


#### The `elif` clause

Sometimes the condition we are testing might evaluate to more than two possible outcomes. For a simple demonstration: I will decide to put on a jacket if it's very cold outside. But if it's fair, maybe just a jumper. Otherwise, a t-shirt. So here we have three possible outcomes for the condition testing. The testing will check the temperature, and decide accordingly.


```
temperature = 15
if temperature < 15:
    print('I am wearing a jacket')
elif temperature < 20:
    print('I am wearing a jumper')
else:
    print('I am wearing a t-shirt')
```

    I am wearing a jumper


### The loop statements

Python has two loop statements: the `for` and the `while` loop statements. The loop is a very important programming construct. It enables you to repetitively run a block of statements as long as a given condition is correct. Loops let us start writing complex code that can solve complex problems; it is actually the starting point for doing serious programming!

#### The `for` loop

The syntax of the `for` loop is:

```
for x in collection:
    statement1
    statement2
    ...
    statementN
```

Here `collection` could be any of the four collection types in python that we covered above. Note the `in` operator here.

In the `for` statement, `x` is called the *index* of the loop.

For example, the following loop will print out the elements from a list:


```
for k in [1,2,3]:
    print(k)
```

    1
    2
    3


#### The `while` loop

The syntax of the `while` loop is:

```
while x:
    statement1
    statement2
    ...
    statementN
```

The `while` loop keeps running the statement block as long as `x` is true. So `x` here is a boolean expression.
For example:


```
a = 10
while a > 0:
    print(a)
    a -= 1
```

    10
    9
    8
    7
    6
    5
    4
    3
    2
    1


### Python libraries

One of the most powerful features of python is its libraries. A library is a python script that was written by someone, and that can perform a set of tasks. You can make use of a python library by just using the `import` command. For example, when you want to calculate the logarithm, the `log()` function you would look for exists in the `numpy` library.


```
import numpy as np
print(np.log(11))
```

    2.3978952727983707


## The MaterialsProject database

The MaterialsProject (MP) database is a massive collection of material science data that was generated using density functional theory (DFT). Have a look at the database here: https://materialsproject.org. Check the statistics at the bottom of the page. There are 124,000 inorganic crystals in the database, along with the DFT-calculated properties of these materials. There are also 530,000 nanoporous materials, as well as other data sets. It's a huge amount of material data.

### Signing up and the API key

You will need to **sign up** at materialsproject.org to be able to access the database. Signing up there is free.

Once you sign up, you can obtain an **API key** that will enable you to access the database using python. We will discuss this further shortly.

### A look at MaterialsProject

Let's have a look at the 124,000 inorganic crystals. Each of these is a 3D crystal: a number of elements arranged in a lattice structure. Check, for example, the MP page for diamond: https://materialsproject.org/materials/mp-66/.

Note that each material on MP is identified by an ID that goes like `mp-X`, where `X` is a number. The ID of diamond is `mp-66`. People use these identifiers when referring to MP materials in papers, and we will use them soon when we start querying materials from MP using python.

There you will find the crystal structure, the lattice parameters, the basic properties (in a column to the right of the figure that displays the crystal), and then a range of DFT-calculated properties.

### The DFT properties

These are quantities that are calculated for each crystal in MP. In fact, everything you see on the MP page for diamond was calculated using DFT.

- For a given elemental composition, the lattice parameters and the positions of the atoms within the lattice are all obtained using DFT.
- For the obtained crystal structure, the `Final Magnetic Moment`, `Formation Energy / Atom`, `Energy Above Hull / Atom` and `Band Gap` are calculated. The `Density` is derived from the obtained crystal structure.
- Further DFT calculations are performed to obtain the band structure as well as other properties that you can find as you scroll down the structure page on MP.

Some of the crystals on MP correspond to crystals that exist in nature, and some are purely hypothetical. The hypothetical crystals have been generated by some algorithm that uses artificial intelligence, or probably by simple elemental substitution.

### Why is MP great for machine learning?

Because of the huge amount of materials (dataset) and DFT-calculated properties (target properties).
That much data can be utilised using a range of machine learning methods to make predictions with various levels of accuracy.\n\n## The PyMatGen python library\n\nTo be able to query the MP database, the MP team provided the community with a python library that you can install on your computer (or in Colab as I will show you).\n\nRemember: to be able to run the codes in this section, you must obtain an API key from the MP website: https://materialsproject.org/docs/api#API_keys.\n\nThe first thing we do here is to install PyMatGen in Colab.\n\n\n\n```\n!pip3 install pymatgen\n```\n\n Requirement already satisfied: pymatgen in /usr/local/lib/python3.7/dist-packages (2022.0.8)\n Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from pymatgen) (2.23.0)\n Requirement already satisfied: matplotlib>=1.5 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (3.2.2)\n Requirement already satisfied: spglib>=1.9.9.44 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (1.16.1)\n Requirement already satisfied: ruamel.yaml>=0.15.6 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (0.17.7)\n Requirement already satisfied: palettable>=3.1.1 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (3.3.0)\n Requirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from pymatgen) (0.8.9)\n Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from pymatgen) (1.1.5)\n Requirement already satisfied: networkx>=2.2 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (2.5.1)\n Requirement already satisfied: typing-extensions>=3.7.4.3; python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from pymatgen) (3.7.4.3)\n Requirement already satisfied: numpy>=1.20.1 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (1.20.3)\n Requirement already satisfied: scipy>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (1.6.3)\n Requirement already satisfied: uncertainties>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (3.1.5)\n Requirement already satisfied: monty>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (2021.5.9)\n Requirement already satisfied: plotly>=4.5.0 in /usr/local/lib/python3.7/dist-packages (from pymatgen) (4.14.3)\n Requirement already satisfied: sympy in /usr/local/lib/python3.7/dist-packages (from pymatgen) (1.7.1)\n Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen) (1.24.3)\n Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen) (2.10)\n Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen) (3.0.4)\n Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen) (2020.12.5)\n Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen) (1.3.1)\n Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen) (0.10.0)\n Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen) (2.8.1)\n Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen) (2.4.7)\n Requirement already satisfied: 
ruamel.yaml.clib>=0.1.2; platform_python_implementation == "CPython" and python_version < "3.10" in /usr/local/lib/python3.7/dist-packages (from ruamel.yaml>=0.15.6->pymatgen) (0.2.2)
 Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->pymatgen) (2018.9)
 Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.2->pymatgen) (4.4.2)
 Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from uncertainties>=3.1.4->pymatgen) (0.16.0)
 Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from plotly>=4.5.0->pymatgen) (1.15.0)
 Requirement already satisfied: retrying>=1.3.3 in /usr/local/lib/python3.7/dist-packages (from plotly>=4.5.0->pymatgen) (1.3.3)
 Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.7/dist-packages (from sympy->pymatgen) (1.2.1)


This will make PyMatGen available in your Colab environment. Now, we are going to use PyMatGen to do two things: open a CIF crystal file to view its content, and query MP for crystals that satisfy certain properties.

By the way, I used the `pip3` command to install PyMatGen. On your computer, if you have python installed, you can install PyMatGen by typing the same command without the leading `!`:

```
pip3 install pymatgen
```

## Structure file formats

One of the most common file formats for describing crystal structures is the CIF format (Crystallographic Information File). The official definition of this format is here: https://www.iucr.org/resources/cif.

But we are not going to learn the details of the format. We will just learn how to open a CIF with python. Here is how we can do this.


```
from pymatgen.io.cif import CifParser
from urllib.request import urlopen

request = urlopen("https://raw.githubusercontent.com/sheriftawfikabbas/crystalfeatures/master/Li10Ge(PS6)2_mp-696128_conventional_standard.cif")
cifFile = request.read().decode('utf-8')
parser = CifParser.from_string(cifFile)
```

In the above code, we imported a **class** from the PyMatGen library: the `CifParser` class. It allows us to create a new CIF file **object**.
This object will then represent the CIF structure, and can be used to access its information.\n\nNext, let's extract some information from the `CifParser` object.\n\n\n\n```\n\nstructure = parser.get_structures()\n# Returns a list of Structure objects\n# #http://pymatgen.org/_modules/pymatgen/core/structure.html\n# Let's print the first (and only) Structure object\nprint(structure[0])\n\n```\n\n Full Formula (Li20 Ge2 P4 S24)\n Reduced Formula: Li10Ge(PS6)2\n abc : 8.787600 8.787600 12.657500\n angles: 90.000000 90.000000 90.000000\n Sites (50)\n # SP a b c\n --- ---- ------ ------ ------\n 0 Li 0.2287 0.273 0.2946\n 1 Li 0.7713 0.727 0.2946\n 2 Li 0.273 0.7713 0.7946\n 3 Li 0.727 0.2287 0.7946\n 4 Li 0.2287 0.727 0.2946\n 5 Li 0.7713 0.273 0.2946\n 6 Li 0.273 0.2287 0.7946\n 7 Li 0.727 0.7713 0.7946\n 8 Li 0 0 0.9397\n 9 Li 0 0 0.4397\n 10 Li 0.5 0.5 0.548\n 11 Li 0.5 0.5 0.048\n 12 Li 0.2563 0.7248 0.0367\n 13 Li 0.7437 0.2752 0.0367\n 14 Li 0.2752 0.2563 0.5367\n 15 Li 0.7248 0.7437 0.5367\n 16 Li 0.2752 0.7437 0.5367\n 17 Li 0.7248 0.2563 0.5367\n 18 Li 0.2563 0.2752 0.0367\n 19 Li 0.7437 0.7248 0.0367\n 20 Ge 0.5 0.5 0.801\n 21 Ge 0.5 0.5 0.301\n 22 P 0 0 0.6861\n 23 P 0 0 0.1861\n 24 P 0 0.5 0.5041\n 25 P 0.5 0 0.0041\n 26 S 0 0.6944 0.4121\n 27 S 0 0.3056 0.4121\n 28 S 0.3056 0 0.9121\n 29 S 0.6944 0 0.9121\n 30 S 0.5 0.1898 0.0971\n 31 S 0.5 0.8102 0.0971\n 32 S 0.1898 0.5 0.5971\n 33 S 0.8102 0.5 0.5971\n 34 S 0 0.8047 0.0941\n 35 S 0 0.1953 0.0941\n 36 S 0.1953 0 0.5941\n 37 S 0.8047 0 0.5941\n 38 S 0.5 0.29 0.4032\n 39 S 0.5 0.71 0.4032\n 40 S 0.29 0.5 0.9032\n 41 S 0.71 0.5 0.9032\n 42 S 0 0.192 0.7771\n 43 S 0 0.8081 0.7771\n 44 S 0.8081 0 0.2771\n 45 S 0.192 0 0.2771\n 46 S 0.5 0.7074 0.6982\n 47 S 0.5 0.2926 0.6982\n 48 S 0.7074 0.5 0.1982\n 49 S 0.2926 0.5 0.1982\n\n\n /usr/local/lib/python3.7/dist-packages/pymatgen/io/cif.py:709: UserWarning: No _symmetry_equiv_pos_as_xyz type key found. Spacegroup from _symmetry_space_group_name_H-M used.\n warnings.warn(msg)\n /usr/local/lib/python3.7/dist-packages/pymatgen/io/cif.py:1121: UserWarning: Issues encountered while parsing CIF: No _symmetry_equiv_pos_as_xyz type key found. 
Spacegroup from _symmetry_space_group_name_H-M used.\n warnings.warn(\"Issues encountered while parsing CIF: %s\" % \"\\n\".join(self.warnings))\n\n\nHere we have the details of the CIF structure in a human-readable format, which include the formula, the lattice parameters and the positions of the atoms in the crystal.\n\n\n\n```\n\nstructure = structure[0]\n\nprint(structure.lattice)\nprint(structure.species)\nprint(structure.charge)\nprint(structure.cart_coords)\nprint(structure.atomic_numbers)\nprint(structure.distance_matrix)\n```\n\n 8.787600 0.000000 0.000000\n 0.000000 8.787600 0.000000\n 0.000000 0.000000 12.657500\n [Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Li, Element Ge, Element Ge, Element P, Element P, Element P, Element P, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S, Element S]\n 0\n [[2.00972412e+00 2.39901480e+00 3.72889950e+00]\n [6.77787588e+00 6.38858520e+00 3.72889950e+00]\n [2.39901480e+00 6.77787588e+00 1.00576495e+01]\n [6.38858520e+00 2.00972412e+00 1.00576495e+01]\n [2.00972412e+00 6.38858520e+00 3.72889950e+00]\n [6.77787588e+00 2.39901480e+00 3.72889950e+00]\n [2.39901480e+00 2.00972412e+00 1.00576495e+01]\n [6.38858520e+00 6.77787588e+00 1.00576495e+01]\n [0.00000000e+00 0.00000000e+00 1.18942527e+01]\n [0.00000000e+00 0.00000000e+00 5.56550275e+00]\n [4.39380000e+00 4.39380000e+00 6.93631000e+00]\n [4.39380000e+00 4.39380000e+00 6.07560000e-01]\n [2.25226188e+00 6.36925248e+00 4.64530250e-01]\n [6.53533812e+00 2.41834752e+00 4.64530250e-01]\n [2.41834752e+00 2.25226188e+00 6.79328025e+00]\n [6.36925248e+00 6.53533812e+00 6.79328025e+00]\n [2.41834752e+00 6.53533812e+00 6.79328025e+00]\n [6.36925248e+00 2.25226188e+00 6.79328025e+00]\n [2.25226188e+00 2.41834752e+00 4.64530250e-01]\n [6.53533812e+00 6.36925248e+00 4.64530250e-01]\n [4.39380000e+00 4.39380000e+00 1.01386575e+01]\n [4.39380000e+00 4.39380000e+00 3.80990750e+00]\n [0.00000000e+00 0.00000000e+00 8.68431075e+00]\n [0.00000000e+00 0.00000000e+00 2.35556075e+00]\n [7.06576930e-16 4.39380000e+00 6.38064575e+00]\n [4.39380000e+00 0.00000000e+00 5.18957500e-02]\n [9.81294040e-16 6.10210944e+00 5.21615575e+00]\n [4.31859820e-16 2.68549056e+00 5.21615575e+00]\n [2.68549056e+00 0.00000000e+00 1.15449058e+01]\n [6.10210944e+00 0.00000000e+00 1.15449058e+01]\n [4.39380000e+00 1.66788648e+00 1.22904325e+00]\n [4.39380000e+00 7.11971352e+00 1.22904325e+00]\n [1.66788648e+00 4.39380000e+00 7.55779325e+00]\n [7.11971352e+00 4.39380000e+00 7.55779325e+00]\n [1.13716491e-15 7.07138172e+00 1.19107075e+00]\n [2.75988949e-16 1.71621828e+00 1.19107075e+00]\n [1.71621828e+00 0.00000000e+00 7.51982075e+00]\n [7.07138172e+00 0.00000000e+00 7.51982075e+00]\n [4.39380000e+00 2.54840400e+00 5.10350400e+00]\n [4.39380000e+00 6.23919600e+00 5.10350400e+00]\n [2.54840400e+00 4.39380000e+00 1.14322540e+01]\n [6.23919600e+00 4.39380000e+00 1.14322540e+01]\n [2.71325541e-16 1.68721920e+00 9.83614325e+00]\n [1.14196963e-15 7.10125956e+00 9.83614325e+00]\n [7.10125956e+00 0.00000000e+00 3.50739325e+00]\n [1.68721920e+00 0.00000000e+00 3.50739325e+00]\n [4.39380000e+00 6.21634824e+00 8.83746650e+00]\n [4.39380000e+00 2.57125176e+00 
8.83746650e+00]\n [6.21634824e+00 4.39380000e+00 2.50871650e+00]\n [2.57125176e+00 4.39380000e+00 2.50871650e+00]]\n (3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 32, 32, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16)\n [[0. 5.6632708 7.70578018 ... 5.64011881 4.81286828 2.40485506]\n [5.6632708 0. 7.70578018 ... 6.80832646 2.40485506 4.81286828]\n [7.70578018 7.70578018 0. ... 4.81286828 6.80832646 5.64011881]\n ...\n [5.64011881 6.80832646 4.81286828 ... 0. 6.8334794 6.8334794 ]\n [4.81286828 2.40485506 6.80832646 ... 6.8334794 0. 3.64509648]\n [2.40485506 4.81286828 5.64011881 ... 6.8334794 3.64509648 0. ]]\n\n\n## Querying structures using PyMatGen\n\nNow let's use PyMatGen to query structures from MP. To be able to do that, we need first to create a `MPRester` with the API key that we receive from MP.\n\n\n```\nfrom pymatgen.ext.matproj import MPRester\nfrom pymatgen.ext.matproj import MPRestError\n\nm = MPRester(\"Ua7LfrKkn9yTWA3t\")\n```\n\n**Note: I am hiding my API key here. I am not allowed to share it, sorry!**\n\nThen we can use the object variable, `m`, to access MP. For today, I willl just show you a simplle query: getting MP IDs for all materials with bandgap larger than 1:\n\n\n```\nresults=m.query({\"band_gap\": {\"$gt\": 6}},properties=[\"material_id\"])\n```\n\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 918/918 [00:00<00:00, 2412.54it/s]\n\n\nYou can notice that this operation takes time: there are 918 such materials.\n\nFinally, let's have a look at the MP IDs we got. This code shows the total count, and the first 10 MP IDs.\n\n\n```\nprint(len(results))\nresults[0:10]\n```\n\n 918\n\n\n\n\n\n [{'material_id': 'mp-642824'},\n {'material_id': 'mp-1210031'},\n {'material_id': 'mp-1196163'},\n {'material_id': 'mp-1206658'},\n {'material_id': 'mp-1208862'},\n {'material_id': 'mp-1199215'},\n {'material_id': 'mp-1196584'},\n {'material_id': 'mp-1205912'},\n {'material_id': 'mp-1106386'},\n {'material_id': 'mp-28289'}]\n\n\n\n\nOne last thing: let's query a crystal from MP using its MP ID.\n\n\n\n\n```\nresults=m.query({\"material_id\": 'mp-1207450'},properties=[\"cif\"])\ncifFile = results[0]['cif']\nparser = CifParser.from_string(cifFile)\n\nstructure = parser.get_structures()\nprint(structure[0])\n```\n\n Full Formula (Zn18 Fe8)\n Reduced Formula: Zn9Fe4\n abc : 7.773250 7.773250 7.773250\n angles: 109.471221 109.471221 109.471221\n Sites (26)\n # SP a b c\n --- ---- -------- -------- --------\n 0 Zn 0.356237 0 0.751283\n 1 Zn 0.751283 0 0.356237\n 2 Zn 0.248717 0.604954 0.248717\n 3 Zn 0.604954 0.248717 0.248717\n 4 Zn 0.643763 0.395046 0.643763\n 5 Zn 0.395046 0.643763 0.643763\n 6 Zn 1 0.751283 0.356237\n 7 Zn 0 0.356237 0.751283\n 8 Zn 0.356237 0.751283 0\n 9 Zn 0.643763 0.643763 0.395046\n 10 Zn 0.248717 0.248717 0.604954\n 11 Zn 0.751283 0.356237 0\n 12 Zn 0 0.643607 0.643607\n 13 Zn 0 0.356393 0.356393\n 14 Zn 0.356393 0.356393 0\n 15 Zn 0.643607 0.643607 0\n 16 Zn 0.356393 0 0.356393\n 17 Zn 0.643607 0 0.643607\n 18 Fe 0.206521 0 0\n 19 Fe 0 1 0.206521\n 20 Fe 1 0.206521 1\n 21 Fe 0.793479 0.793479 0.793479\n 22 Fe 0.683339 0 0\n 23 Fe 0 0 0.683339\n 24 Fe 1 0.683339 1\n 25 Fe 0.316661 0.316661 0.316661\n\n\n#Part 2\n\n## Descriptors\n\nBefore we start ML, let\u2019s address a very important question that lies at the centre of the field of ML-driven material discovery: **how do we apply ML to predict crystal properties?**\n\nFirst, **what are crystal 
properties?** These are quantities that are measured or calculated for crystals. By crystal, I mean a structure that is endowed with **periodicity**. Let\u2019s take the example of **diamond**. This material has just one atom, carbon (C). If you could zoom-in very close into diamond such that you can see the C atoms (people sort of do that using advanced experimental techniques by the way), you will see something similar to the following figure.\n\n\n\nThe figure illustrates how a diamond crystal is created from the **diamond unit cell**, which is the little molecule on the left with 4 C atoms. The crystal has **infinitely many diamond unit cells* along the three directions, so it is a 3D pattern.\n\nBefore we discuss crystals further, consider molecules for a minute. Molecules are **fundamentally different** from crystals because of the pattern (periodicity) bit: a molecule is just that one single molecule, sitting on its own, in isolation, whereas a crystal is really composed of an infinite number of molecules. How would the pattern in the crystal make ML for molecules different from ML for crystals?\n\nTo predict the properties of the molecules one can derive a set of descriptors for the molecules in the data set that are based on the **positions of the atoms within the molecule**. One can derive descriptors based on the relative positions, in order to ensure that the descriptors are **invariant to transformations**: rotation and translation.\n\nThe key thing here in the molecular descriptors is that they are **based on the atomic positions**. In crystals, **we can't really use atomic positions** like we did with molecules to obtain descriptors. Why?\n\nThink about the diamond crystal pattern above. Because it is a pattern, it is **symmetric in all three directions**. Now let's say we are going to calculate the Coulomb matrix for diamond, so we start with the positions of the atoms in the unit cell of diamond, that figure on the left. But wait: even though these four atoms do form a valid unit cell for diamond, we can also come up with another valid unit cell, as shown below.\n\n\n\nThe figure above is also valid. How to obtain it? Look at the diamond crystal in Figure 1, and take a different repeating unit from it. As long as this repeating unit can also form the same pattern, it is a valid unit cell!\n\nSo, there are many different ways we can represent the unit cell of a crystal. Therefore, we cannot use the atomic coordinates to derive descriptors for crystals, otherwise the derived descriptors, such as the eigenvalues of the Coulomb matrix in the case of molecules, will **change dramatically for the same crystal**. That is, **the descriptor vector is not invariant with respect to translation of the unit cell**. What do we do then?\n\n## Building a simple descriptor vector for crystals\n\nA possible solution to this problem is to use some statistics of atomic properties as the descriptor vector. For example:\n\n- Average of the atomic numbers of all the elements in the crystal. For example, in silicon carbide, SiC, the average value would be the average of 14 (for Si) and 6 (for C), which is (14 + 6)/2 = 10. So that's now one number in the descriptor vector.\n- The average of the ionization potential of the atoms\n- The average of the electron affinity of the atoms\n- And more averages\n\nSo we can keep adding averages of properties to this list, to expand the descriptor vector. 
This vector will not suffer from the lack of invariance issue pointed out above, because these are average values of quantities that do not depend on the geometry of the crystal.

Average is just one statistic. We can also add other statistics, such as the standard deviation and the variance. Adding those will triple the number of elements in the descriptor vector above.


```
import numpy as np
structure = structure[0]
mean_atomic_number=np.mean(structure.atomic_numbers)
max_atomic_number=np.max(structure.atomic_numbers)
min_atomic_number=np.min(structure.atomic_numbers)
std_atomic_number=np.std(structure.atomic_numbers)

print(mean_atomic_number,max_atomic_number,min_atomic_number,std_atomic_number)
```

    11.36 32 3 7.509354166637767


However, there is a **problem**. A lot of materials exist in **various phases**. That is, for the same atomic composition, let's say SiC, there are several possible structures. Right now, there are 27 possible structures for SiC on MaterialsProject.org.

So, the above descriptors won't work. For example, for the case of SiC, all of the 27 SiC phases in MP will have the same values for the statistical quantities above.

To solve this problem, we have to add descriptors **based on the geometrical arrangement of atoms**. A simple such descriptor is the average of the bond lengths (a bond is formed between two atoms).


```
mean_distance_matrix = np.mean(structure.distance_matrix)
max_distance_matrix = np.max(structure.distance_matrix)
min_distance_matrix = np.min(structure.distance_matrix)
std_distance_matrix = np.std(structure.distance_matrix)

print(mean_distance_matrix, max_distance_matrix,
      min_distance_matrix, std_distance_matrix)
```

    4.959684299990455 8.869274685254709 0.0 1.6490128596408165


## Building a data set

Now it's time to build our dataset, before we can do machine learning on it. We will do this in two steps:

- Step 1: collecting the structures
- Step 2: pre-processing the data

### Step 1: Collecting the structures

We want to predict the bandgaps of structures, so we need to collect the structures (dataset) along with their corresponding bandgaps (target vector).

For this exercise, let's focus on stoichiometric perovskites: these are materials of the form ABC3.
The followiing query will collect the CIFs and bandgaps for these materials from MP.\n\n\n\n```\nresults = m.query({\"formula_anonymous\": \"ABC3\"}, properties=[\"cif\", \"band_gap\"])\n\n```\n\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4608/4608 [00:02<00:00, 1654.43it/s]\n\n\n### Step 2: Pre-processing the data\n\nHere we will extract the data we need from the structures, put them in a pandas DataFrame and then apply normalization.\n\n\n```\n\nfrom pymatgen.io.cif import CifParser\nfrom urllib.request import urlopen\nimport pandas as pd\nfrom pymatgen.ext.matproj import MPRester\nfrom pymatgen.ext.matproj import MPRestError\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\ndef descriptors(cif):\n\n atomic_numbers = []\n\n distance_matrix = []\n van_der_waals_radius = []\n electrical_resistivity = []\n velocity_of_sound = []\n reflectivity = []\n poissons_ratio = []\n molar_volume = []\n thermal_conductivity = []\n melting_point = []\n critical_temperature = []\n superconduction_temperature = []\n liquid_range = []\n bulk_modulus = []\n youngs_modulus = []\n brinell_hardness = []\n rigidity_modulus = []\n # mineral_hardness = []\n vickers_hardness = []\n density_of_solid = []\n coefficient_of_linear_thermal_expansion = []\n average_ionic_radius = []\n average_cationic_radius = []\n average_anionic_radius = []\n\n parser = CifParser.from_string(cif)\n\n structure = parser.get_structures()\n structure = structure[0]\n\n numElements = len(structure.atomic_numbers)\n\n num_metals = 0\n for e in structure.species:\n if e.Z in range(3, 4+1) or e.Z in range(11, 12+1) or e.Z in range(19, 30+1) or e.Z in range(37, 48+1) or e.Z in range(55, 80 + 1) or e.Z in range(87, 112+1):\n num_metals += 1\n metals_fraction = num_metals/numElements\n\n spg = structure.get_space_group_info()\n\n spacegroup_numbers = {}\n for i in range(1, 231):\n spacegroup_numbers[i] = 0\n\n spacegroup_numbers[spg[1]] = 1\n\n spacegroup_numbers_list = []\n for i in range(1, 231):\n spacegroup_numbers_list += [spacegroup_numbers[i]]\n\n atomic_numbers = [np.mean(structure.atomic_numbers), np.max(structure.atomic_numbers), np.min(\n structure.atomic_numbers), np.std(structure.atomic_numbers)]\n\n # Lattice parameters:\n a_parameters = structure.lattice.abc[0]\n b_parameters = structure.lattice.abc[1]\n c_parameters = structure.lattice.abc[2]\n alpha_parameters = structure.lattice.angles[0]\n beta_parameters = structure.lattice.angles[1]\n gamma_parameters = structure.lattice.angles[2]\n\n distance_matrix += [np.mean(structure.distance_matrix), np.max(structure.distance_matrix),\n np.min(structure.distance_matrix), np.std(structure.distance_matrix)]\n\n e1, e2, e3, e4, e5, e6, e7, e8, e9, e10, e11, e12, e13, e14, e15, e16, e17, e18, e19, e20, e21, e22, e23 = [\n ], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []\n for e in structure.species:\n e1 += [e.van_der_waals_radius]\n e2 += [e.electrical_resistivity]\n e3 += [e.velocity_of_sound]\n e4 += [e.reflectivity]\n e6 += [e.poissons_ratio]\n e7 += [e.molar_volume]\n e8 += [e.thermal_conductivity]\n e9 += [e.melting_point]\n e10 += [e.critical_temperature]\n e11 += [e.superconduction_temperature]\n e12 += [e.liquid_range]\n e13 += [e.bulk_modulus]\n e14 += [e.youngs_modulus]\n e15 += [e.brinell_hardness]\n e16 += [e.rigidity_modulus]\n # e17 +=[e.mineral_hardness ]\n e18 += [e.vickers_hardness]\n e19 += [e.density_of_solid]\n e20 += [e.coefficient_of_linear_thermal_expansion]\n e21 += 
[e.average_ionic_radius]\n e22 += [e.average_cationic_radius]\n e23 += [e.average_anionic_radius]\n\n e1 = [0 if v is None else v for v in e1]\n e2 = [0 if v is None else v for v in e2]\n e3 = [0 if v is None else v for v in e3]\n e4 = [0 if v is None else v for v in e4]\n # e5=[0 if v is None else v for v in e5]\n e6 = [0 if v is None else v for v in e6]\n e7 = [0 if v is None else v for v in e7]\n e8 = [0 if v is None else v for v in e8]\n e9 = [0 if v is None else v for v in e9]\n e10 = [0 if v is None else v for v in e10]\n e11 = [0 if v is None else v for v in e11]\n e12 = [0 if v is None else v for v in e12]\n e13 = [0 if v is None else v for v in e13]\n e14 = [0 if v is None else v for v in e14]\n e15 = [0 if v is None else v for v in e15]\n e16 = [0 if v is None else v for v in e16]\n # e17=[0 if v is None else v for v in e17]\n e18 = [0 if v is None else v for v in e18]\n e19 = [0 if v is None else v for v in e19]\n e20 = [0 if v is None else v for v in e20]\n e21 = [0 if v is None else v for v in e21]\n e22 = [0 if v is None else v for v in e22]\n e23 = [0 if v is None else v for v in e23]\n\n van_der_waals_radius = [np.mean(e1), np.max(e1), np.min(e1), np.std(e1)]\n electrical_resistivity = [np.mean(e2), np.max(e2), np.min(e2), np.std(e2)]\n velocity_of_sound = [np.mean(e3), np.max(e3), np.min(e3), np.std(e3)]\n reflectivity = [np.mean(e4), np.max(e4), np.min(e4), np.std(e4)]\n poissons_ratio = [np.mean(e6), np.max(e6), np.min(e6), np.std(e6)]\n molar_volume = [np.mean(e7), np.max(e7), np.min(e7), np.std(e7)]\n thermal_conductivity = [np.mean(e8), np.max(e8), np.min(e8), np.std(e8)]\n melting_point = [np.mean(e9), np.max(e9), np.min(e9), np.std(e9)]\n critical_temperature = [np.mean(e10), np.max(\n e10), np.min(e10), np.std(e10)]\n superconduction_temperature = [\n np.mean(e11), np.max(e11), np.min(e11), np.std(e11)]\n liquid_range = [np.mean(e12), np.max(e12), np.min(e12), np.std(e12)]\n bulk_modulus = [np.mean(e13), np.max(e13), np.min(e13), np.std(e13)]\n youngs_modulus = [np.mean(e14), np.max(e14), np.min(e14), np.std(e14)]\n brinell_hardness = [np.mean(e15), np.max(e15), np.min(e15), np.std(e15)]\n rigidity_modulus = [np.mean(e16), np.max(e16), np.min(e16), np.std(e16)]\n vickers_hardness = [np.mean(e18), np.max(e18), np.min(e18), np.std(e18)]\n density_of_solid = [np.mean(e19), np.max(e19), np.min(e19), np.std(e19)]\n coefficient_of_linear_thermal_expansion = [\n np.mean(e20), np.max(e20), np.min(e20), np.std(e20)]\n average_ionic_radius = [np.mean(e21), np.max(\n e21), np.min(e21), np.std(e21)]\n average_cationic_radius = [\n np.mean(e22), np.max(e22), np.min(e22), np.std(e22)]\n average_anionic_radius = [\n np.mean(e23), np.max(e23), np.min(e23), np.std(e23)]\n\n V = a_parameters*b_parameters*c_parameters\n Density = V / numElements\n\n descriptors_list = atomic_numbers +\\\n [Density] +\\\n [alpha_parameters] +\\\n [beta_parameters] +\\\n [gamma_parameters] +\\\n [metals_fraction] +\\\n distance_matrix +\\\n van_der_waals_radius +\\\n electrical_resistivity +\\\n velocity_of_sound +\\\n reflectivity +\\\n poissons_ratio +\\\n molar_volume +\\\n thermal_conductivity +\\\n melting_point +\\\n critical_temperature +\\\n superconduction_temperature +\\\n liquid_range +\\\n bulk_modulus +\\\n youngs_modulus +\\\n brinell_hardness +\\\n rigidity_modulus +\\\n vickers_hardness +\\\n density_of_solid +\\\n coefficient_of_linear_thermal_expansion +\\\n average_ionic_radius +\\\n average_cationic_radius +\\\n average_anionic_radius +\\\n spacegroup_numbers_list\n return 
descriptors_list\n\n\ndescriptors(cifFile)\n\n\n```\n\n /usr/local/lib/python3.7/dist-packages/pymatgen/io/cif.py:709: UserWarning: No _symmetry_equiv_pos_as_xyz type key found. Spacegroup from _symmetry_space_group_name_H-M used.\n warnings.warn(msg)\n /usr/local/lib/python3.7/dist-packages/pymatgen/io/cif.py:1121: UserWarning: Issues encountered while parsing CIF: No _symmetry_equiv_pos_as_xyz type key found. Spacegroup from _symmetry_space_group_name_H-M used.\n warnings.warn(\"Issues encountered while parsing CIF: %s\" % \"\\n\".join(self.warnings))\n\n\n\n\n\n [11.36,\n 32,\n 3,\n 7.509354166637767,\n 19.548727468344,\n 90.0,\n 90.0,\n 90.0,\n 0.4,\n 4.959684299990455,\n 8.869274685254709,\n 0.0,\n 1.6490128596408165,\n 1.8204000000000002,\n 2.11,\n 1.8,\n 0.059898580951471596,\n 4.799999999999999e+22,\n 1e+23,\n 9.5e-08,\n 4.995998398718718e+22,\n 2616.0,\n 6000.0,\n 0.0,\n 2953.463052079711,\n 0.0,\n 0,\n 0,\n 0.0,\n 0.0,\n 0,\n 0,\n 0.0,\n 14.5692,\n 17.02,\n 13.02,\n 1.3852477612326248,\n 36.51728,\n 85.0,\n 0.205,\n 41.23727548082681,\n 441.72880000000004,\n 1211.4,\n 317.3,\n 162.35371969425276,\n 1999.44,\n 3223.0,\n 0.0,\n 1032.056319393472,\n 0.0,\n 0,\n 0,\n 0.0,\n 716.5688,\n 1881.6,\n 232.7,\n 473.34035339759487,\n 8.976,\n 11.0,\n 0.0,\n 2.4434860343370084,\n 1.96,\n 4.9,\n 0.0,\n 2.4004999479275146,\n 0.0,\n 0,\n 0,\n 0.0,\n 1.68,\n 4.2,\n 0.0,\n 2.0575713839378698,\n 0.0,\n 0,\n 0,\n 0.0,\n 1513.56,\n 5323.0,\n 535.0,\n 1032.8763751775912,\n 1.864e-05,\n 4.6e-05,\n 0.0,\n 2.2369407681027228e-05,\n 0.8572,\n 0.9,\n 0.55,\n 0.0940008510599771,\n 0.6603999999999999,\n 0.9,\n 0.47,\n 0.2044989975525553,\n 0.8160000000000001,\n 1.7,\n 0.0,\n 0.8493197277821821,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 1,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0]\n\n\n\nNow let's iterate through the list of results and extract our descriptors into the above lists. 
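Before running the full extraction loop, it can help to peek at a single entry of `results` to confirm that it contains the two fields we queried for (`cif` and `band_gap`). This is just a quick sanity check.


```

# Inspect one entry returned by the query above
first = results[0]
print(first.keys())        # should show 'cif' and 'band_gap'
print(first['band_gap'])   # band gap in eV
print(first['cif'][:300])  # beginning of the CIF string

```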
The full loop over all 4608 structures will take a few minutes.


```

band_gaps = []
dataset = []

for counter, r in enumerate(results):
    cif = r['cif']
    bg = r['band_gap']

    # descriptors() parses the CIF string itself, so there is no need to parse it again here
    dataset += [descriptors(cif)]
    band_gaps += [bg]

    print(counter)

dataset_df = pd.DataFrame(dataset)

```

    0
    1
    2
    ...
    48


    /usr/local/lib/python3.7/dist-packages/pymatgen/io/cif.py:1121: UserWarning: Issues encountered while parsing CIF: Some fractional co-ordinates rounded to ideal values to avoid issues with finite precision.
      warnings.warn("Issues encountered while parsing CIF: %s" % "\n".join(self.warnings))


    49
    50
    ...
    4607
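Because the extraction loop takes a few minutes, it can be worth caching `dataset_df` and `band_gaps` to disk so that the loop does not have to be re-run every time the notebook is restarted. The file names below are just examples.


```

# Optional: cache the descriptors and targets to disk
dataset_df.to_csv('abc3_descriptors.csv', index=False)
pd.Series(band_gaps, name='band_gap').to_csv('abc3_band_gaps.csv', index=False)

# They can later be reloaded with
# dataset_df = pd.read_csv('abc3_descriptors.csv')
# band_gaps = pd.read_csv('abc3_band_gaps.csv')['band_gap'].tolist()

```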
Now that we have created our dataset, we need a bird's-eye view of the data.
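A quick way to get such an overview, assuming `dataset_df` and `band_gaps` from the loop above are still in memory, is to check the shape of the feature matrix and a few summary statistics. This is only a suggested sanity check.


```

# Basic overview of the feature matrix and the target values
print(dataset_df.shape)                 # (number of structures, number of descriptors)
print(dataset_df.isna().sum().sum())    # count of missing values, should be 0
print(pd.Series(band_gaps).describe())  # min, max and mean of the band gaps (in eV)

```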
For now, let's have a look at how the bandgap values are distributed.


```

import matplotlib.pyplot as plt

plt.rcParams.update({'font.size': 20})

plt.figure(figsize=(10, 10))
plt.hist(band_gaps, bins=100)
plt.savefig('Histogram_PDF', bbox_inches='tight')
```

This plot shows that almost half of our structures are metals (zero bandgap). The bandgaps around 7 eV could be outliers, but we can deal with those in a later lecture. 

How about a scatter plot?


```
band_gaps_sorted=sorted(band_gaps)

# Plot of the sorted bandgap values
plt.figure(figsize=(10,10))
plt.plot(band_gaps_sorted)
plt.ylabel('Band gap (eV)')
plt.xlabel('Structure index (sorted)')
plt.savefig('ScatterPlot', bbox_inches='tight')

```

Next, we split the dataset into training and test sets, using an 80/20 split ratio.


```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    dataset_df, band_gaps, test_size=.2, random_state=None)
```

Then we standardize the features, fitting the scaler on the training set only and applying the same transformation to the test set.


```

from sklearn.preprocessing import StandardScaler

# Define the scaler and fit it on the training set only
scaler = StandardScaler().fit(X_train)

# Scale the training and test sets
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

```

## The machine learning task

Now it's time to actually do machine learning. We will try two machine learning models: random forests and XGBoost. We will quantify the prediction accuracy using two measures: the goodness of fit (R$^2$) and the mean absolute error (MAE).


```


from sklearn.metrics import mean_absolute_error, r2_score
from xgboost import XGBRegressor
from sklearn.ensemble import RandomForestRegressor

regr = RandomForestRegressor(n_estimators=400, max_depth=400, random_state=0)
regr.fit(X_train_scaled, y_train)
y_predicted = regr.predict(X_test_scaled)

print('RF MAE\t'+str(mean_absolute_error(y_test, y_predicted))+'\n')
print('RF R2\t'+str(r2_score(y_test, y_predicted))+'\n')

# Predicted versus DFT bandgaps, with the line y = x as a guide
xPlot=y_test
yPlot=y_predicted
plt.figure(figsize=(10,10))
plt.plot(xPlot,yPlot,'ro')
plt.plot(xPlot,xPlot)
plt.ylabel('RF')
plt.xlabel('DFT')
plt.savefig('RF_Correlation_Test', bbox_inches='tight')


regr = XGBRegressor(objective='reg:squarederror', max_depth=10, n_estimators=400)
regr.fit(X_train_scaled, y_train)
y_predicted = regr.predict(X_test_scaled)

print('XGBOOST MAE\t'+str(mean_absolute_error(y_test, y_predicted))+'\n')
print('XGBOOST R2\t'+str(r2_score(y_test, y_predicted))+'\n')

# Same correlation plot for the XGBoost model
xPlot=y_test
yPlot=y_predicted
plt.figure(figsize=(10,10))
plt.plot(xPlot,yPlot,'ro')
plt.plot(xPlot,xPlot)
plt.ylabel('XGBOOST')
plt.xlabel('DFT')
plt.savefig('XGBOOST_Correlation_Test', bbox_inches='tight')


```

Achieving R$^2$ > 0.71 and MAE < 0.5 eV is certainly good, but more can be done. One quick diagnostic, sketched below, is to check which descriptors the models actually rely on.
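One way to do that, assuming the variables from the cell above are still in memory, is to look at the feature importances of the last fitted regressor (`regr`, which at this point is the XGBoost model). This is only a rough diagnostic sketch; the columns of `dataset_df` are unnamed, so we can only report their positions.


```

import numpy as np

# Rank the descriptor columns by importance for the last fitted model
importances = regr.feature_importances_
top = np.argsort(importances)[::-1][:15]
for idx in top:
    print('descriptor column {:3d}\timportance {:.4f}'.format(idx, importances[idx]))

```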
That is why we have developed the CrystalFeatures suit with many more features to improve the prediction accuracy of the bandgap, as well as other things.\n\nThat's it for now!\n", "meta": {"hexsha": "8e3f109ceb31b0c3d18ef68148fb662f1c73d6d6", "size": 310325, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "CrystalFeatures_AtomicFeatures.ipynb", "max_stars_repo_name": "jepaur5/2021_07_08_iqtcub_md", "max_stars_repo_head_hexsha": "ddad1f08bd640e930e926fbbb02cf6910a6f3f1f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CrystalFeatures_AtomicFeatures.ipynb", "max_issues_repo_name": "jepaur5/2021_07_08_iqtcub_md", "max_issues_repo_head_hexsha": "ddad1f08bd640e930e926fbbb02cf6910a6f3f1f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CrystalFeatures_AtomicFeatures.ipynb", "max_forks_repo_name": "jepaur5/2021_07_08_iqtcub_md", "max_forks_repo_head_hexsha": "ddad1f08bd640e930e926fbbb02cf6910a6f3f1f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.6189791517, "max_line_length": 42806, "alphanum_fraction": 0.5835881737, "converted": true, "num_tokens": 39325, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5544704796847396, "lm_q2_score": 0.5312093733737562, "lm_q1q2_score": 0.2945399160675765}} {"text": "```python\nimport local_models.local_models\nimport local_models.algorithms\nimport local_models.utils\nimport local_models.linear_projections\nimport local_models.loggin\nimport local_models.TLS_models\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn.linear_model\nimport sklearn.cluster\nfrom importlib import reload\nfrom ml_battery.utils import cmap\nimport matplotlib as mpl\nimport sklearn.datasets\nimport sklearn.decomposition\nimport logging\nimport ml_battery.log\nimport time\nimport os\nimport mayavi\nimport mayavi.mlab\nimport string\nimport subprocess\nimport functools\nimport cv2\nimport itertools\n\n#on headless systems, tmux: \"Xvfb :1 -screen 0 1280x1024x24 -auth localhost\", then \"export DISPLAY=:1\" in the jupyter tmux\nmayavi.mlab.options.offscreen = True\n\n\n\nlogger = logging.getLogger(__name__)\n\n#reload(local_models.local_models)\n#reload(lm)\n#reload(local_models.loggin)\n#reload(local_models.TLS_models)\nnp.warnings.filterwarnings('ignore')\n\n```\n\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / 
'(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n /usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n\n\n\n```python\nimport importlib; importlib.reload(quadrics_utils)\n```\n\n\n\n\n \n\n\n\n\n```python\ndef collect_best(expr, measure=sympy.count_ops):\n best = expr\n best_score = measure(expr)\n perms = itertools.permutations(expr.free_symbols)\n permlen = np.math.factorial(len(expr.free_symbols))\n print(permlen)\n for i, perm in enumerate(perms):\n if (permlen > 1000) and not (i%int(permlen/100)):\n print(i)\n collected = sympy.collect(expr, perm)\n if measure(collected) < best_score:\n best_score = measure(collected)\n best = collected\n else:\n factored = sympy.factor(expr)\n if measure(factored) < best_score:\n best_score = measure(factored)\n best = factored\n return best\n \ndef product(args):\n arg = next(args)\n try:\n return arg*product(args)\n except:\n return arg\n \ndef rcollect_best(expr, measure=sympy.count_ops):\n best = collect_best(expr, measure)\n best_score = measure(best)\n if expr == best:\n return best\n if isinstance(best, sympy.Mul):\n return product(map(rcollect_best, best.args))\n if isinstance(best, sympy.Add):\n return sum(map(rcollect_best, best.args))\n```\n\n\n```python\ndef derive_quadratic_orthogonal_projection_1D_polynomial(n):\n import sympy\n \n Q_sym = sympy.symarray(\"q\", (n+1, n+1))\n Q = sympy.Matrix(np.zeros((n+1,n+1), dtype=int))\n for i, j in itertools.product(range(n+1), range(n+1)):\n if i == n or j == n or i == j:\n Q[i,j] = Q_sym[max(i,j),min(i,j)]\n print(Q)\n\n x_sym = sympy.symarray(\"x\", n+1)\n X = sympy.Matrix(np.ones((n+1, 1), dtype=int))\n for i in range(n):\n X[i] = x_sym[i]\n \n P = sympy.Matrix(np.zeros((n-1, n+1), dtype=int))\n for i in range(n-1):\n P[i,0] = X[i+1]\n P[i,i+1] = -X[0]\n \n QXP = P*Q*X\n \n other_dims_as_x0 = [sympy.solve(QXP[i], X[i+1])[0] for i in range(n-1)] \n \n XQX = sympy.expand((X.T*Q*X)[0])\n XQX_as_x0 = XQX.subs({X[i+1]:other_dims_as_x0[i] for i in range(n-1)})\n for sub in other_dims_as_x0:\n XQX_as_x0 *= sympy.fraction(sub)[1]**2\n XQX_as_x0 = sympy.cancel(XQX_as_x0)\n XQX_as_x0 = sympy.simplify(XQX_as_x0)\n XQX_as_x0 = sympy.poly(XQX_as_x0, X[0])\n \n return (X, Q, XQX_as_x0, other_dims_as_x0)\n \n```\n\n\n```python\ndef collectify_polynomial_coefficients(poly):\n return [rcollect_best(formula) for formula in poly.all_coeffs()]\n```\n\n\n```python\nsorted(map(lambda x: x**2, range(4)))\n```\n\n\n\n\n [0, 1, 4, 9]\n\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\ndef funcify(signature, returnable):\n return \"def {}: return {}\\n\".format(signature, returnable)\n\ndef librarify_quadric_equations():\n import sympy\n 
from sympy.abc import a,b,c,d,e,f,g,x,y,z\n\n Q = a*x**2 + b*y**2 + c*z**2 + 2*e*x + 2*f*y + 2*g*z + d\n y_as_x_num = f*x\n y_as_x_den = e-(b-a)*x\n y_as_x = y_as_x_num/y_as_x_den\n z_as_x_num = g*x\n z_as_x_den = e-(c-a)*x\n z_as_x = z_as_x_num/z_as_x_den\n Q_as_x = Q.subs({\n y: y_as_x,\n z: z_as_x,\n })\n\n bigQ = sympy.expand(sympy.simplify(Q_as_x*y_as_x_den**2*z_as_x_den**2))\n\n coeffs = list(map(sympy.factor, sympy.poly(bigQ,x).all_coeffs()))\n\n collected = []\n for coeff in coeffs:\n collected.append(rcollect_best(coeff))\n\n k_mat = sympy.Matrix(collected)\n k_jac = k_mat.jacobian([a,b,c,d,e,f,g])\n\n with open(\"quadrics_utils.py\", \"w\") as f:\n Q_func = funcify(\"Q(a,b,c,d,e,f,g,x,y,z)\",str(Q))\n y_as_x_func = funcify(\"y_as_x(a,b,c,d,e,f,g,x)\", str(y_as_x))\n z_as_x_func = funcify(\"z_as_x(a,b,c,d,e,f,g,x)\", str(z_as_x))\n Q_as_x_func = funcify(\"Q_as_x(a,b,c,d,e,f,g,x)\", str(Q_as_x))\n k_mat_func = funcify(\"k_mat(a,b,c,d,e,f,g)\", str(k_mat.transpose().tolist()[0]))\n k_jac_func = funcify(\"k_jac(a,b,c,d,e,f,g)\", str(k_jac.tolist()))\n individual_k_funcs = [funcify(\"k{:01d}(a,b,c,d,e,f,g)\".format(i), str(eq)) for i,eq in enumerate(collected[::-1])]\n list(map(f.write, [Q_func, y_as_x_func, z_as_x_func, Q_as_x_func, k_mat_func, k_jac_func] + individual_k_funcs))\n```\n", "meta": {"hexsha": "d5dd8d4e96a5b1159ebb67354c40f8acd8b7d5cb", "size": 10305, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/quadrics_solvers.ipynb", "max_stars_repo_name": "csbrown/pylomo", "max_stars_repo_head_hexsha": "377aa386427a32da8b42fe53aacbe3281fbf2bf6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/quadrics_solvers.ipynb", "max_issues_repo_name": "csbrown/pylomo", "max_issues_repo_head_hexsha": "377aa386427a32da8b42fe53aacbe3281fbf2bf6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-01T17:41:36.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-01T17:41:36.000Z", "max_forks_repo_path": "examples/quadrics_solvers.ipynb", "max_forks_repo_name": "csbrown/pylomo", "max_forks_repo_head_hexsha": "377aa386427a32da8b42fe53aacbe3281fbf2bf6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.6574394464, "max_line_length": 261, "alphanum_fraction": 0.548568656, "converted": true, "num_tokens": 2149, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5621765155565326, "lm_q2_score": 0.523420348936324, "lm_q1q2_score": 0.2942546279364071}} {"text": "\n# PHY321: Introduction to Classical Mechanics and plans for Spring 2021\n\n \n**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway \n\n **Scott Pratt**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA\n\nDate: **Jan 20, 2021**\n\nCopyright 1999-2021, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license\n\n\n\n\n## Aims and Overview\n\nThe first week starts on Monday January 11. 
This week is dedicated to a review of learning material and a reminder on programming aspects, useful tools, where to find information and much more. There are no lectures during this week, and the material here on vectors and Python programming will also be discussed during the week beginning January 18. Feel free to look at the notes here before our first lecture on the 20th.

* Introduction to the course and reminder on vectors, space, time and motion.

* Python programming reminder, elements from [CMSE 201 INTRODUCTION TO COMPUTATIONAL MODELING](https://cmse.msu.edu/academics/undergraduate-program/undergraduate-courses/cmse-201-introduction-to-computational-modeling/) and how they are used in this course. Installing software (Anaconda).

* Introduction to Git and GitHub. [Overview video on Git and GitHub](https://mediaspace.msu.edu/media/t/1_8mgx3cyf).

**Recommended reading**: John R. Taylor, Classical Mechanics (Univ. Sci. Books 2005), Chapters 1.2 and 1.3.



## Introduction to the course and where to find material

[Overview video](https://mediaspace.msu.edu/media/t/1_zzl90pfu)




## Classical mechanics

Classical mechanics is a topic which has been taught intensively over several centuries. It is, with its many variants and ways of presenting the educational material, normally the first **real** physics course many of us meet, and it lays the foundation for further physics studies. Many of the equations and ways of reasoning about the underlying laws of motion and pertinent forces shape our approaches and understanding of the scientific method and discourse, as well as the way we develop our insights and deeper understanding about physical systems. 

## From Continuous to Discretized Approaches

There is a wealth of well-tested (from both a physics point of view and a pedagogical standpoint) exercises and problems which can be solved analytically. However, many of these problems represent idealized and less realistic situations. The large majority of these problems are solved by paper and pencil and are traditionally aimed at what we normally refer to as continuous models, from which we may find an analytical solution. As a consequence, when teaching mechanics, we can seldom venture beyond an idealized case in order to develop our understanding and insights about the underlying forces and laws of motion.

We aim at changing this here by introducing throughout the course what we will call a **computational path**, where with computations we mean solving scientific problems with all possible tools and means, from plain paper-and-pencil exercises, via symbolic calculations, to writing a code and running a program to solve a specific problem. Mathematically this normally means that we move from a continuous problem to a discretized one. This approach enables us to solve a much broader class of problems. In mechanics this means, since we often rephrase the physical problems in terms of differential equations, that we can in most settings reuse the same program with some minimal changes. 




## Space, Time, Motion, Reference Frames and Reminder on vectors and other mathematical quantities

Our studies will start with the motion of different types of objects such as a falling ball, a runner, a bicycle, etc. 
It means that an\nobject's position in space varies with time.\nIn order to study such systems we need to define\n\n* choice of origin\n\n* choice of the direction of the axes\n\n* choice of positive direction (left-handed or right-handed system of reference)\n\n* choice of units and dimensions\n\nThese choices lead to some important questions such as\n\n* is the physics of a system independent of the origin of the axes?\n\n* is the physics independent of the directions of the axes, that is are there privileged axes?\n\n* is the physics independent of the orientation of system?\n\n* is the physics independent of the scale of the length?\n\n## Dimension, units and labels\n\nThroughout this course we will use the standardized SI units. The standard unit for length is thus one meter 1m, for mass\none kilogram 1kg, for time one second 1s, for force one Newton 1kgm/s$^2$ and for energy 1 Joule 1kgm$^2$s$^{-2}$.\n\nWe will use the following notations for various variables (vectors are always boldfaced in these lecture notes):\n* position $\\boldsymbol{r}$, in one dimention we will normally just use $x$,\n\n* mass $m$,\n\n* time $t$,\n\n* velocity $\\boldsymbol{v}$ or just $v$ in one dimension,\n\n* acceleration $\\boldsymbol{a}$ or just $a$ in one dimension,\n\n* momentum $\\boldsymbol{p}$ or just $p$ in one dimension,\n\n* kinetic energy $K$,\n\n* potential energy $V$ and\n\n* frequency $\\omega$.\n\nMore variables will be defined as we need them.\n\n## Dimensions and Units\n\nIt is also important to keep track of dimensionalities. Don't mix this\nup with a chosen unit for a given variable. We mark the dimensionality\nin these lectures as $[a]$, where $a$ is the quantity we are\ninterested in. Thus\n\n* $[\\boldsymbol{r}]=$ length\n\n* $[m]=$ mass\n\n* $[K]=$ energy\n\n* $[t]=$ time\n\n* $[\\boldsymbol{v}]=$ length over time\n\n* $[\\boldsymbol{a}]=$ length over time squared\n\n* $[\\boldsymbol{p}]=$ mass times length over time\n\n* $[\\omega]=$ 1/time\n\n## Scalars, Vectors and Matrices\n\nA scalar is something with a value that is independent of coordinate\nsystem. Examples are mass, or the relative time between events. A\nvector has magnitude and direction. Under rotation, the magnitude\nstays the same but the direction changes. Scalars have no spatial\nindex, whereas a three-dimensional vector has 3 indices, e.g. the\nposition $\\boldsymbol{r}$ has components $r_1,r_2,r_3$, which are often\nreferred to as $x,y,z$.\n\nThere are several categories of changes of coordinate system. The\nobserver can translate the origin, might move with a different\nvelocity, or might rotate her/his coordinate axes. For instance, a\nparticle's position vector changes when the origin is translated, but\nits velocity does not. When you study relativity you will find that\nquantities you thought of as scalars, such as time or an electric\npotential, are actually parts of four-dimensional vectors and that\nchanges of the velocity of the reference frame act in a similar way to\nrotations.\n\nIn addition to vectors and scalars, there are matrices, which have two\nindices. One also has objects with 3 or four indices. These are called\ntensors of rank $n$, where $n$ is the number of indices. A matrix is a\nrank-two tensor. The Levi-Civita symbol, $\\epsilon_{ijk}$ used for\ncross products of vectors, is a tensor of rank three.\n\n\n## Definitions of Vectors\n\n\nIn these lectures we will use boldfaced lower-case letters to label a\nvector. 
A vector $\\boldsymbol{a}$ in three dimensions is thus defined as\n\n$$\n\\boldsymbol{a} =(a_x,a_y, a_z),\n$$\n\nand using the unit vectors (see below) in a cartesian system we have\n\n$$\n\\boldsymbol{a} = a_x\\boldsymbol{e}_1+a_y\\boldsymbol{e}_2+a_z\\boldsymbol{e}_3,\n$$\n\nwhere the unit vectors have magnitude $\\vert\\boldsymbol{e}_i\\vert = 1$ with\n$i=1=x$, $i=2=y$ and $i=3=z$. Some authors use letters\n$\\boldsymbol{i}=\\boldsymbol{e}_1$, $\\boldsymbol{j}=\\boldsymbol{e}_2$ and $\\boldsymbol{k}=\\boldsymbol{e}_3$.\n\n\n## Other ways to define a Vector\n\nAlternatively, you may also encounter the above vector as\n\n$$\n\\boldsymbol{a} = a_1\\boldsymbol{e}_1+a_2\\boldsymbol{e}_2+a_3\\boldsymbol{e}_3.\n$$\n\nHere we have used that $a_1=a_x$, $a_2=a_y$ and $a_3=a_z$. Such a\nnotation is sometimes more convenient if we wish to represent vector\noperations in a mathematically more compact way, see below here. We may also find this useful if we want the different\ncomponents to represent other coordinate systems that the Cartesian one. A typical example would be going from a Cartesian representation to a spherical basis. We will encounter such cases many times in this course. \n\nWe use lower-case letters for vectors and upper-case letters for matrices. Vectors and matrices are always boldfaced.\n\n\n## Polar Coordinates\n\nAs an example, consider a two-dimensional Cartesian system with a vector $\\boldsymbol{r}=(x,y)$.\nOur vector is then written as\n\n$$\n\\boldsymbol{r} = x\\boldsymbol{e}_1+y\\boldsymbol{e}_2.\n$$\n\nTransforming to polar coordinates with the radius $\\rho\\in [0,\\infty)$\nand the angle $\\phi \\in [0,2\\pi]$ we have the familiar transformations\n\n$$\nx = \\rho \\cos{\\phi} \\hspace{0.5cm} y = \\rho \\sin{\\phi},\n$$\n\nand the inverse relations\n\n$$\n\\rho =\\sqrt{x^2+y^2} \\hspace{0.5cm} \\phi = \\mathrm{arctan}(\\frac{y}{x}).\n$$\n\nWe can rewrite the vector $\\boldsymbol{a}$ in terms of $\\rho$ and $\\phi$ as\n\n$$\n\\boldsymbol{a} = \\rho \\cos{\\phi}\\boldsymbol{e}_1+\\rho \\sin{\\phi}\\boldsymbol{e}_2,\n$$\n\nand we define the new unit vectors as $\\boldsymbol{e}'_1=\\cos{\\phi}\\boldsymbol{e}_1$ and $\\boldsymbol{e}'_2=\\sin{\\phi}\\boldsymbol{e}_2$, we have\n\n$$\n\\boldsymbol{a}' = \\rho\\boldsymbol{e}'_1+\\rho \\boldsymbol{e}'_2.\n$$\n\nBelow we will show that the norms of this vector in a Cartesian basis and a Polar basis are equal.\n\n\n\n## Unit Vectors\n\nAlso known as basis vectors, unit vectors point in the direction of\nthe coordinate axes, have unit norm, and are orthogonal to one\nanother. Sometimes this is referred to as an orthonormal basis,\n\n\n
\n\n$$\n\\begin{equation}\n\\boldsymbol{e}_i\\cdot\\boldsymbol{e}_j=\\delta_{ij}=\\begin{bmatrix}\n1 & 0 & 0\\\\\n0& 1 & 0\\\\\n0 & 0 & 1\n\\end{bmatrix}.\n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\nHere, $\\delta_{ij}$ is unity when $i=j$ and is zero otherwise. This is\ncalled the unit matrix, because you can multiply it with any other\nmatrix and not change the matrix. The **dot** denotes the dot product,\n$\\boldsymbol{a}\\cdot\\boldsymbol{b}=a_1b_1+a_2b_2+a_3b_3=|a||b|\\cos\\theta_{ab}$. Sometimes\nthe unit vectors are called $\\hat{x}$, $\\hat{y}$ and\n$\\hat{z}$.\n\n\n## Our definition of unit vectors\n\nVectors can be decomposed in terms of unit vectors,\n\n\n
\n\n$$\n\\begin{equation}\n\\boldsymbol{r}=r_1\\hat{e}_1+r_2\\hat{e}_2+r_3\\hat{e}_3.\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nThe vector components $r_1$, $r_2$ and $r_3$ might be\ncalled $x$, $y$ and $z$ for a displacement. Another way to write this is to define the vector $\\boldsymbol{r}=(x,y,z)$.\n\nSimilarly, for the velocity we will use in this course the components $\\boldsymbol{v}=(v_x, v_y,v_z$. The accelaration is then given by $\\boldsymbol{a}=(a_x,a_y,a_z)$.\n\n\n## More definitions, repeated indices\n\nAs mentioned above, repeated indices infer sums.\nThis means that when you encounter an expression like the one on the left-hand side here, it stands actually for a sum (right-hand side)\n\n$$\nx_iy_i=\\sum_i x_iy_i=\\boldsymbol{x}\\cdot\\boldsymbol{y}.\n$$\n\nWe will in our lectures seldom use this notation and rather spell out the summations. This inferred summation over indices is normally called [Einstein summation convention](https://en.wikipedia.org/wiki/Einstein_notation).\n\n## Vector Operations, Scalar Product (or dot product)\n\nFor two vectors $\\boldsymbol{a}$ and $\\boldsymbol{b}$ we have\n\n$$\n\\begin{eqnarray*}\n\\boldsymbol{a}\\cdot\\boldsymbol{b}&=&\\sum_ia_ib_i=|a||b|\\cos\\theta_{ab},\\\\\n|a|&\\equiv& \\sqrt{\\boldsymbol{a}\\cdot\\boldsymbol{a}},\n\\end{eqnarray*}\n$$\n\nor with a norm-2 notation\n\n$$\n|a|\\equiv \\vert\\vert \\boldsymbol{a}\\vert\\vert_2=\\sqrt{\\sum_i a_i^2}.\n$$\n\nNot of all of you are familiar with linear algebra. Numerically we will always deal with arrays and the dot product vector is given by the product of the transposed vector multiplied with the other vector, that is we have\n\n$$\n\\boldsymbol{a}^T\\boldsymbol{b}=\\sum_i a_ib_i=|a||b|\\cos\\theta_{ab}.\n$$\n\nThe superscript $T$ represents the transposition operations. \n\n\n## Digression, Linear Algebra Notation for Vectors\n\nAs an example, consider a three-dimensional velocity defined by a vector $\\boldsymbol{v}=(v_x,v_y,v_z)$. 
For those of you familiar with linear algebra, we would write this quantity as\n\n$$\n\\boldsymbol{v}=\\begin{bmatrix} v_x\\\\ v_y \\\\ v_z \\end{bmatrix},\n$$\n\nand the transpose as\n\n$$\n\\boldsymbol{v}^T=\\begin{bmatrix} v_x & v_y &v_z \\end{bmatrix}.\n$$\n\nThe norm is\n\n$$\n\\boldsymbol{v}^T\\boldsymbol{v}=v_x^2+v_y^2+v_z^2,\n$$\n\nas it should.\n\nSince we will use Python as a programming language throughout this course, the above vector, using the package **numpy** (see discussions below), can be written as\n\n\n```python\nimport numpy as np\n# Define the values of vx, vy and vz\nvx = 0.0\nvy = 1.0\nvz = 0.0\nv = np.array([vx, vy, vz])\nprint(v)\n# The print the transpose of v\nprint(v.T)\n```\n\nTry to figure out how to calculate the norm with **numpy**.\nWe will come back to **numpy** in the examples below.\n\n\n\n## Norm of a transformed Vector\n\nAs an example, consider our transformation of a two-dimensional Cartesian vector $\\boldsymbol{r}$ to polar coordinates.\nWe had\n\n$$\n\\boldsymbol{r} = x\\boldsymbol{e}_1+y\\boldsymbol{e}_2.\n$$\n\nTransforming to polar coordinates with the radius $\\rho\\in [0,\\infty)$\nand the angle $\\phi \\in [0,2\\pi]$ we have\n\n$$\nx = \\rho \\cos{\\phi} \\hspace{0.5cm} y = \\rho \\sin{\\phi}.\n$$\n\nWe can write this\n\n$$\n\\boldsymbol{r} = \\begin{bmatrix} x \\\\ y \\end{bmatrix}= \\begin{bmatrix} \\rho \\cos{\\phi} \\\\ \\rho \\sin{\\phi} \\end{bmatrix}.\n$$\n\nThe norm in Cartesian coordinates is $\\boldsymbol{r}\\cdot\\boldsymbol{r}=x^2+y^2$ and\nusing Polar coordinates we have\n$\\rho^2(\\cos{\\phi})^2+\\rho^2(\\cos{\\phi})^2=\\rho^2$, which shows that\nthe norm is conserved since we have $\\rho = \\sqrt{x^2+y^2}$. A\ntransformation to a new basis should not change the norm.\n\n\n\n\n## Vector Product (or cross product) of vectors $\\boldsymbol{a}$ and $\\boldsymbol{b}$\n\n$$\n\\begin{eqnarray*}\n\\boldsymbol{c}&=&\\boldsymbol{a}\\times\\boldsymbol{b},\\\\\nc_i&=&\\epsilon_{ijk}a_jb_k.\n\\end{eqnarray*}\n$$\n\nHere $\\epsilon$ is the third-rank anti-symmetric tensor, also known as\nthe Levi-Civita symbol. It is $\\pm 1$ only if all three indices are\ndifferent, and is zero otherwise. The choice of $\\pm 1$ depends on\nwhether the indices are an even or odd permutation of the original\nsymbols. The permutation $xyz$ or $123$ is considered to be $+1$. Its elements are\n\n$$\n\\begin{eqnarray}\n\\epsilon_{ijk}&=&-\\epsilon_{ikj}=-\\epsilon_{jik}=-\\epsilon_{kji}\\\\\n\\nonumber\n\\epsilon_{123}&=&\\epsilon_{231}=\\epsilon_{312}=1,\\\\\n\\nonumber\n\\epsilon_{213}&=&\\epsilon_{132}=\\epsilon_{321}=-1,\\\\\n\\nonumber\n\\epsilon_{iij}&=&\\epsilon_{iji}=\\epsilon_{jii}=0.\n\\end{eqnarray}\n$$\n\n## More on cross-products\n\nYou may have met cross-products when studying magnetic\nfields. Because the matrix is anti-symmetric, switching the $x$ and\n$y$ axes (or any two axes) flips the sign. If the coordinate system is\nright-handed, meaning the $xyz$ axes satisfy\n$\\hat{x}\\times\\hat{y}=\\hat{z}$, where you can point along the $x$ axis\nwith your extended right index finger, the $y$ axis with your\ncontracted middle finger and the $z$ axis with your extended\nthumb. Switching to a left-handed system flips the sign of the vector\n$\\boldsymbol{c}=\\boldsymbol{a}\\times\\boldsymbol{b}$.\n\nNote that\n$\\boldsymbol{a}\\times\\boldsymbol{b}=-\\boldsymbol{b}\\times\\boldsymbol{a}$. 
The vector $\\boldsymbol{c}$ is\nperpendicular to both $\\boldsymbol{a}$ and $\\boldsymbol{b}$ and the magnitude of\n$\\boldsymbol{c}$ is given by\n\n$$\n|c|=|a||b|\\sin{\\theta_{ab}}.\n$$\n\n## Pseudo-vectors\n\nVectors obtained by the cross product of two real vectors are called\npseudo-vectors because the assignment of their direction can be\narbitrarily flipped by defining the Levi-Civita symbol to be based on\nleft-handed rules. Examples are the magnetic field and angular\nmomentum. If the direction of a real vector prefers the right-handed\nover the left-handed direction, that constitutes a violation of\nparity. For instance, one can polarize the spins (angular momentum) of\nnuclei with a magnetic field so that the spins preferentially point\nalong the direction of the magnetic field. This does not violate\nparity because both are pseudo-vectors. Now assume these polarized\nnuclei decay and that electrons are one of the products. If these\nelectrons prefer to exit the decay parallel vs. antiparallel to the\npolarizing magnetic field, this constitutes parity violation because\nthe direction of the outgoing electron momenta are a real vector. This\nis precisely what is observed in weak decays.\n\n## Differentiation of a vector with respect to a scalar\n\nFor example, the\nacceleration $\\boldsymbol{a}$ is given by the change in velocity per unit time, $\\boldsymbol{a}=d\\boldsymbol{v}/dt$\nwith components\n\n$$\na_i = (d\\boldsymbol{v}/dt)_i=\\frac{dv_i}{dt}.\n$$\n\nHere $i=x,y,z$ or $i=1,2,3$ if we are in three dimensions.\n\n## Gradient operator $\\nabla$\n\nThis represents the derivatives $\\partial/\\partial\nx$, $\\partial/\\partial y$ and $\\partial/\\partial z$. An often used shorthand is $\\partial_x=\\partial/\\partial_x$.\n\nThe gradient of a scalar function of position and time\n$\\Phi(x,y,z)=\\Phi(\\boldsymbol{r},t)$ is given by\n\n$$\n\\boldsymbol{\\nabla}~\\Phi,\n$$\n\nwith components $i$\n\n$$\n(\\nabla\\Phi(x,y,z,t))_i=\\partial/\\partial r_i\\Phi(\\boldsymbol{r},t)=\\partial_i\\Phi(\\boldsymbol{r},t).\n$$\n\nNote that the gradient is a vector.\n\nTaking the dot product of the gradient with a vector, normally called the divergence,\nwe have\n\n$$\n\\mathrm{div} \\boldsymbol{a}, \\nabla\\cdot\\boldsymbol{a}=\\sum_i \\partial_i a_i.\n$$\n\nNote that the divergence is a scalar. \n\n## The curl\n\nThe **curl** of a vector is defined as\n$\\nabla\\times\\boldsymbol{a}$,\n\n$$\n{\\rm\\bf curl}~\\boldsymbol{a},\n$$\n\nwith components\n\n$$\n(\\boldsymbol{\\nabla}\\times\\boldsymbol{a})_i=\\epsilon_{ijk}\\partial_j a_k(\\boldsymbol{r},t).\n$$\n\n## The Laplacian\n\nThe Laplacian is referred to as $\\nabla^2$ and is defined as\n\n$$\n\\boldsymbol{\\nabla}^2=\\boldsymbol{\\nabla}\\cdot\\boldsymbol{\\nabla}=\\frac{\\partial^2}{\\partial x^2}+\\frac{\\partial^2}{\\partial y^2}+\\frac{\\partial^2}{\\partial z^2}.\n$$\n\nQuestion: is the Laplacian a scalar or a vector?\n\n\n## Some identities\n\nHere we simply state these, but you may wish to prove a few. 
They are useful for this class and will be essential when you study electromagnetism.

$$
\begin{eqnarray}
\boldsymbol{a}\cdot(\boldsymbol{b}\times\boldsymbol{c})&=&\boldsymbol{b}\cdot(\boldsymbol{c}\times\boldsymbol{a})=\boldsymbol{c}\cdot(\boldsymbol{a}\times\boldsymbol{b})\\
\nonumber
\boldsymbol{a}\times(\boldsymbol{b}\times\boldsymbol{c})&=&(\boldsymbol{a}\cdot\boldsymbol{c})\boldsymbol{b}-(\boldsymbol{a}\cdot\boldsymbol{b})\boldsymbol{c}\\
\nonumber
(\boldsymbol{a}\times\boldsymbol{b})\cdot(\boldsymbol{c}\times\boldsymbol{d})&=&(\boldsymbol{a}\cdot\boldsymbol{c})(\boldsymbol{b}\cdot\boldsymbol{d})
-(\boldsymbol{a}\cdot\boldsymbol{d})(\boldsymbol{b}\cdot\boldsymbol{c})
\end{eqnarray}
$$

## More useful relations

Using the fact that multiplication of reals is distributive we can show that

$$
\boldsymbol{a}(\boldsymbol{b}+\boldsymbol{c})=\boldsymbol{a}\boldsymbol{b}+\boldsymbol{a}\boldsymbol{c}.
$$

Similarly we can also show that (using the product rule for differentiating reals)

$$
\frac{d}{dt}(\boldsymbol{a}\boldsymbol{b})=\boldsymbol{a}\frac{d\boldsymbol{b}}{dt}+\boldsymbol{b}\frac{d\boldsymbol{a}}{dt}.
$$

We can repeat these operations for the cross products and show that they are distributive

$$
\boldsymbol{a}\times(\boldsymbol{b}+\boldsymbol{c})=\boldsymbol{a}\times\boldsymbol{b}+\boldsymbol{a}\times\boldsymbol{c}.
$$

We also have that

$$
\frac{d}{dt}(\boldsymbol{a}\times\boldsymbol{b})=\boldsymbol{a}\times\frac{d\boldsymbol{b}}{dt}+\boldsymbol{b}\times\frac{d\boldsymbol{a}}{dt}.
$$

## Gauss's Theorem

For an integral over a volume $V$ confined by a surface $S$, Gauss's theorem gives

$$
\int_V dv~\nabla\cdot\boldsymbol{A}=\int_Sd\boldsymbol{S}\cdot\boldsymbol{A}.
$$

For a closed path $C$ which carves out some area $S$,

$$
\int_C d\boldsymbol{\ell}\cdot\boldsymbol{A}=\int_Sd\boldsymbol{s} \cdot(\nabla\times\boldsymbol{A}).
$$

## and Stokes's Theorem

Stokes's theorem (the relation just above) can be understood by considering the circulation of $\boldsymbol{A}$ around a small rectangle; summing the contributions of many such rectangles tiling the surface $S$ leaves only the line integral around the boundary $C$.

## Basic Matrix Features

Some frequently used relations between a matrix and its matrix elements are listed here:

| Relations | Name | matrix elements |
| :--- | :--- | :--- |
| $A = A^{T}$ | symmetric | $a_{ij} = a_{ji}$ |
| $A = \left (A^{T} \right )^{-1}$ | real orthogonal | $\sum_k a_{ik} a_{jk} = \sum_k a_{ki} a_{kj} = \delta_{ij}$ |
| $A = A^{ * }$ | real matrix | $a_{ij} = a_{ij}^{ * }$ |
| $A = A^{\dagger}$ | hermitian | $a_{ij} = a_{ji}^{ * }$ |
| $A = \left (A^{\dagger} \right )^{-1}$ | unitary | $\sum_k a_{ik} a_{jk}^{ * } = \sum_k a_{ki}^{ * } a_{kj} = \delta_{ij}$ |

## Some famous Matrices

 * Diagonal if $a_{ij}=0$ for $i\ne j$

 * Upper triangular if $a_{ij}=0$ for $i > j$

 * Lower triangular if $a_{ij}=0$ for $i < j$

 * Upper Hessenberg if $a_{ij}=0$ for $i > j+1$

 * Lower Hessenberg if $a_{ij}=0$ for $i < j+1$

 * Tridiagonal if $a_{ij}=0$ for $|i -j| > 1$

 * Lower banded with bandwidth $p$: $a_{ij}=0$ for $i > j+p$

 * Upper banded with bandwidth $p$: $a_{ij}=0$ for $i < j+p$

 * Banded, block upper triangular, block lower triangular....

## More Basic Matrix Features

**Some Equivalent Statements.**

For an $N\times N$ matrix $\mathbf{A}$ the following properties are all equivalent

 * If the inverse of $\mathbf{A}$ exists, $\mathbf{A}$ is nonsingular.

 * The equation $\mathbf{Ax}=0$ implies $\mathbf{x}=0$.

 * The rows of $\mathbf{A}$ form a basis of $R^N$.

 * The columns of $\mathbf{A}$ form a basis of $R^N$.

 * $\mathbf{A}$ is a product of elementary matrices.

 * $0$ is not an eigenvalue of $\mathbf{A}$.

## Rotations

Here, we use rotations as an example of matrices and their operations. One can consider a different orthonormal basis $\hat{e}'_1$, $\hat{e}'_2$ and $\hat{e}'_3$. The same vector $\boldsymbol{r}$ mentioned above can also be expressed in the new basis,
\n\n$$\n\\begin{equation}\n\\boldsymbol{r}=r'_1\\hat{e}'_1+r'_2\\hat{e}'_2+r'_3\\hat{e}'_3.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\nEven though it is the same vector, the components have changed. Each\nnew unit vector $\\hat{e}'_i$ can be expressed as a linear sum of the\nprevious vectors,\n\n\n
\n\n$$\n\\begin{equation}\n\\hat{e}'_i=\\sum_j U_{ij}\\hat{e}_j,\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\nand the matrix $U$ can be found by taking the dot product of both sides with $\\hat{e}_k$,\n\n\n
\n\n$$\n\\begin{eqnarray}\n\\nonumber\n\\hat{e}_k\\cdot\\hat{e}'_i&=&\\sum_jU_{ij}\\hat{e}_k\\cdot\\hat{e}_j\\\\\n\\label{eq:lambda_angles} \\tag{5}\n\\hat{e}_k\\cdot\\hat{e}'_i&=&\\sum_jU_{ij}\\delta_{jk}=U_{ik}.\n\\end{eqnarray}\n$$\n\n## More on the matrix $U$\n\nThus, the matrix lambda has components $U_{ij}$ that are equal to the\ncosine of the angle between new unit vector $\\hat{e}'_i$ and the old\nunit vector $\\hat{e}_j$.\n\n\n
\n\n$$\n\\begin{equation}\nU = \\begin{bmatrix}\n\\hat{e}'_1\\cdot\\hat{e}_1& \\hat{e}'_1\\cdot\\hat{e}_2& \\hat{e}'_1\\cdot\\hat{e}_3\\\\\n\\hat{e}'_2\\cdot\\hat{e}_1& \\hat{e}'_2\\cdot\\hat{e}_2& \\hat{e}'_2\\cdot\\hat{e}_3\\\\\n\\hat{e}'_3\\cdot\\hat{e}_1& \\hat{e}'_3\\cdot\\hat{e}_2& \\hat{e}'_3\\cdot\\hat{e}_3\n\\end{bmatrix},~~~~~U_{ij}=\\hat{e}'_i\\cdot\\hat{e}_j=\\cos\\theta_{ij}.\n\\label{_auto5} \\tag{6}\n\\end{equation}\n$$\n\n## Properties of the matrix $U$\n\nNote that the matrix is not symmetric, $U_{ij}\\ne U_{ji}$. One can also look at the inverse transformation, by switching the primed and unprimed coordinates,\n\n\n
\n\n$$\n\\begin{eqnarray}\n\\label{eq:inverseU} \\tag{7}\n\\hat{e}_i&=&\\sum_jU^{-1}_{ij}\\hat{e}'_j,\\\\\n\\nonumber\nU^{-1}_{ij}&=&\\hat{e}_i\\cdot\\hat{e}'_j=U_{ji}.\n\\end{eqnarray}\n$$\n\nThe definition of transpose of a matrix, $M^{t}_{ij}=M_{ji}$, allows one to state this as\n\n\n
\n\n$$\n\\begin{eqnarray}\n\\label{eq:transposedef} \\tag{8}\nU^{-1}&=&U^{t}.\n\\end{eqnarray}\n$$\n\n## Tensors\n\nA tensor obeying Eq. ([8](#eq:transposedef)) defines what is known as\na unitary, or orthogonal, transformation.\n\nThe matrix $U$ can be used to transform any vector to the new basis. Consider a vector\n\n$$\n\\begin{eqnarray}\n\\boldsymbol{r}&=&r_1\\hat{e}_1+r_2\\hat{e}_2+r_3\\hat{e}_3\\\\\n\\nonumber\n&=&r'_1\\hat{e}'_1+r'_2\\hat{e}'_2+r'_3\\hat{e}'_3.\n\\end{eqnarray}\n$$\n\nThis is the same vector expressed as a sum over two different sets of\nbasis vectors. The coefficients $r_i$ and $r'_i$ represent components\nof the same vector. The relation between them can be found by taking\nthe dot product of each side with one of the unit vectors,\n$\\hat{e}_i$, which gives\n\n$$\n\\begin{eqnarray}\nr_i&=&\\sum_j \\hat{e}_i\\cdot\\hat{e}'_j~r'_j.\n\\end{eqnarray}\n$$\n\nUsing Eq. ([7](#eq:inverseU)) one can see that the transformation of $r$ can be also written in terms of $U$,\n\n\n
\n\n$$\n\\begin{eqnarray}\n\\label{eq:rotateR} \\tag{9}\nr_i&=&\\sum_jU^{-1}_{ij}~r'_j.\n\\end{eqnarray}\n$$\n\nThus, the matrix that transforms the coordinates of the unit vectors,\nEq. ([7](#eq:inverseU)) is the same one that transforms the\ncoordinates of a vector, Eq. ([9](#eq:rotateR)).\n\n\n## Rotation matrix\n\nAs a small exercise, find the rotation matrix $U$ for finding the\ncomponents in the primed coordinate system given from those in the\nunprimed system, given that the unit vectors in the new system are\nfound by rotating the coordinate system by and angle $\\phi$ about the\n$z$ axis.\n\nIn this case\n\n$$\n\\begin{eqnarray*}\n\\hat{e}'_1&=&\\cos\\phi \\hat{e}_1-\\sin\\phi\\hat{e}_2,\\\\\n\\hat{e}'_2&=&\\sin\\phi\\hat{e}_1+\\cos\\phi\\hat{e}_2,\\\\\n\\hat{e}'_3&=&\\hat{e}_3.\n\\end{eqnarray*}\n$$\n\nBy inspecting Eq. ([5](#eq:lambda_angles)), we get\n\n$$\nU=\\left(\\begin{array}{ccc}\n\\cos\\phi&-\\sin\\phi&0\\\\\n\\sin\\phi&\\cos\\phi&0\\\\\n0&0&1\\end{array}\\right).\n$$\n\n## Unitary Transformations\n\nUnder a unitary transformation $U$ (or basis transformation) scalars\nare unchanged, whereas vectors $\\boldsymbol{r}$ and matrices $M$ change as\n\n$$\n\\begin{eqnarray}\nr'_i&=&U_{ij}~ r_j, ~~({\\rm sum~inferred})\\\\\n\\nonumber\nM'_{ij}&=&U_{ik}M_{km}U^{-1}_{mj}.\n\\end{eqnarray}\n$$\n\nPhysical quantities with no spatial indices are scalars (or\npseudoscalars if they depend on right-handed vs. left-handed\ncoordinate systems), and are unchanged by unitary\ntransformations. This includes quantities like the trace of a matrix,\nthe matrix itself had indices but none remain after performing the\ntrace.\n\n$$\n\\begin{eqnarray}\n{\\rm Tr} M&\\equiv& M_{ii}.\n\\end{eqnarray}\n$$\n\nBecause there are no remaining indices, one expects it to be a scalar. Indeed one can see this,\n\n$$\n\\begin{eqnarray}\n{\\rm Tr} M'&=&U_{ij}M_{jm}U^{-1}_{mi}\\\\\n\\nonumber\n&=&M_{jm}U^{-1}_{mi}U_{ij}\\\\\n\\nonumber\n&=&M_{jm}\\delta_{mj}\\\\\n\\nonumber\n&=&M_{jj}={\\rm Tr} M.\n\\end{eqnarray}\n$$\n\nA similar example is the determinant of a matrix, which is also a scalar.\n\n\n\n## Numerical Elements\n\nNumerical algorithms call for approximate discrete models and much of\nthe development of methods for continuous models are nowadays being\nreplaced by methods for discrete models in science and industry,\nsimply because **much larger classes of problems can be addressed** with\ndiscrete models, often by simpler and more generic methodologies.\n\nAs we will see throughout this course, when properly scaling the equations at hand,\ndiscrete models open up for more advanced abstractions and the possibility to\nstudy real life systems, with the added bonus that we can explore and\ndeepen our basic understanding of various physical systems\n\nAnalytical solutions are as important as before. In addition, such\nsolutions provide us with invaluable benchmarks and tests for our\ndiscrete models. Such benchmarks, as we will see below, allow us \nto discuss possible sources of errors and their behaviors. 
And finally, since most of our models are based on various algorithms from numerical mathematics, we have a unique opportunity to gain a deeper understanding of the mathematical approaches we are using.

With computing and data science as important elements in essentially all aspects of a modern society, we could then try to define Computing as **solving scientific problems using all possible tools, including symbolic computing, computers and numerical algorithms, and analytical paper and pencil solutions**. 
Computing provides us with the tools to develop our own understanding of the scientific method by enhancing algorithmic thinking.

## Computations and the Scientific Method

The way we will teach this course reflects this definition of computing. The course contains both classical paper and pencil exercises as well as computational projects and exercises. The hope is that this will allow you to explore the physics of systems governed by the degrees of freedom of classical mechanics at a deeper level, and that these insights about the scientific method will help you to develop a better understanding of the underlying forces and equations of motion and of how they impact a given system.

Furthermore, by introducing various numerical methods via computational projects and exercises, we aim at developing your competences and skills in these topics.

## Computational Competences

These competences will enable you to

* understand how algorithms are used to solve mathematical problems,

* derive, verify, and implement algorithms,

* understand what can go wrong with algorithms,

* use these algorithms to construct reproducible scientific outcomes and to engage in science in ethical ways, and

* think algorithmically for the purposes of gaining deeper insights about scientific problems.

All these elements are central for maturing and gaining a better understanding of the modern scientific process *per se*.

The power of the scientific method lies in identifying a given problem as a special case of an abstract class of problems, identifying general solution methods for this class of problems, and applying a general method to the specific problem (applying means, in the case of computing, calculations by pen and paper, symbolic computing, or numerical computing by ready-made and/or self-written software). This generic view on problems and methods is particularly important for understanding how to apply available, generic software to solve a particular problem.

*However, verification of algorithms and understanding their limitations requires much of the classical knowledge about continuous models.*

## A well-known example to illustrate many of the above concepts

Before we venture into a reminder on Python and mechanics relevant applications, let us briefly outline some of the abovementioned topics using an example many of you may have seen before, for example in CMSE201. 
A simple algorithm for integration is the Trapezoidal rule. 
Integration of a function $f(x)$ by the Trapezoidal Rule is given by the following approximation for an interval $x \in [a,b]$

$$
\int_a^b f(x)\, dx = \frac{h}{2}\left [f(a)+2f(a+h)+\dots+2f(b-h)+f(b)\right] +O(h^2),
$$

where $h$ is the so-called stepsize defined by the number of integration points $n$ as $h=(b-a)/n$.
Python offers an extremely versatile programming environment, allowing for the inclusion of analytical studies in a numerical program. 
Here we show an\nexample code with the **trapezoidal rule**. We use also **SymPy** to evaluate the exact value of the integral and compute the absolute error\nwith respect to the numerically evaluated one of the integral\n$\\int_0^1 dx x^2 = 1/3$.\nThe following code for the trapezoidal rule allows you to plot the relative error by comparing with the exact result. By increasing to $10^8$ points one arrives at a region where numerical errors start to accumulate.\n\n\n```python\n%matplotlib inline\n\nfrom math import log10\nimport numpy as np\nfrom sympy import Symbol, integrate\nimport matplotlib.pyplot as plt\n# function for the trapezoidal rule\ndef Trapez(a,b,f,n):\n h = (b-a)/float(n)\n s = 0\n x = a\n for i in range(1,n,1):\n x = x+h\n s = s+ f(x)\n s = 0.5*(f(a)+f(b)) +s\n return h*s\n# function to compute pi\ndef function(x):\n return x*x\n# define integration limits\na = 0.0; b = 1.0;\n# find result from sympy\n# define x as a symbol to be used by sympy\nx = Symbol('x')\nexact = integrate(function(x), (x, a, b))\n# set up the arrays for plotting the relative error\nn = np.zeros(9); y = np.zeros(9);\n# find the relative error as function of integration points\nfor i in range(1, 8, 1):\n npts = 10**i\n result = Trapez(a,b,function,npts)\n RelativeError = abs((exact-result)/exact)\n n[i] = log10(npts); y[i] = log10(RelativeError);\nplt.plot(n,y, 'ro')\nplt.xlabel('n')\nplt.ylabel('Relative error')\nplt.show()\n```\n\n## Analyzing the above example\n\n\nThis example shows the potential of combining numerical algorithms\nwith symbolic calculations, allowing us to\n\n* Validate and verify their algorithms. \n\n* Including concepts like unit testing, one has the possibility to test and test several or all parts of the code.\n\n* Validation and verification are then included *naturally* and one can develop a better attitude to what is meant with an ethically sound scientific approach.\n\n* The above example allows the student to also test the mathematical error of the algorithm for the trapezoidal rule by changing the number of integration points. The students get **trained from day one to think error analysis**. \n\n* With a Jupyter notebook you can keep exploring similar examples and turn them in as your own notebooks. \n\n## Python practicalities, Software and needed installations\n\nWe will make extensive use of Python as programming language and its\nmyriad of available libraries. You will find\nJupyter notebooks invaluable in your work. \n\nIf you have Python installed (we strongly recommend Python3) and you feel\npretty familiar with installing different packages, we recommend that\nyou install the following Python packages via **pip** as \n\n1. pip install numpy scipy matplotlib ipython scikit-learn mglearn sympy pandas pillow \n\nFor Python3, replace **pip** with **pip3**.\n\nFor OSX users we recommend, after having installed Xcode, to\ninstall **brew**. Brew allows for a seamless installation of additional\nsoftware via for example \n\n1. brew install python3\n\nFor Linux users, with its variety of distributions like for example the widely popular Ubuntu distribution,\nyou can use **pip** as well and simply install Python as \n\n1. sudo apt-get install python3 (or python for pyhton2.7)\n\netc etc. 
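
Independently of how you install the packages, a quick sanity check (a minimal sketch; adjust the list to the packages you actually installed) is to import the central libraries and print their versions:

```python
# Minimal check that the central packages are installed and importable
import numpy, scipy, matplotlib, sympy, pandas

for pkg in (numpy, scipy, matplotlib, sympy, pandas):
    print(pkg.__name__, pkg.__version__)
```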
\n\n\n\n## Python installers\n\nIf you don't want to perform these operations separately and venture\ninto the hassle of exploring how to set up dependencies and paths, we\nrecommend two widely used distrubutions which set up all relevant\ndependencies for Python, namely \n\n* [Anaconda](https://docs.anaconda.com/), \n\nwhich is an open source\ndistribution of the Python and R programming languages for large-scale\ndata processing, predictive analytics, and scientific computing, that\naims to simplify package management and deployment. Package versions\nare managed by the package management system **conda**. \n\n* [Enthought canopy](https://www.enthought.com/product/canopy/) \n\nis a Python\ndistribution for scientific and analytic computing distribution and\nanalysis environment, available for free and under a commercial\nlicense.\n\nFurthermore, [Google's Colab](https://colab.research.google.com/notebooks/welcome.ipynb) is a free Jupyter notebook environment that requires \nno setup and runs entirely in the cloud. Try it out!\n\n## Useful Python libraries\nHere we list several useful Python libraries we strongly recommend (if you use anaconda many of these are already there)\n\n* [NumPy](https://www.numpy.org/) is a highly popular library for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays\n\n* [The pandas](https://pandas.pydata.org/) library provides high-performance, easy-to-use data structures and data analysis tools \n\n* [Xarray](http://xarray.pydata.org/en/stable/) is a Python package that makes working with labelled multi-dimensional arrays simple, efficient, and fun!\n\n* [Scipy](https://www.scipy.org/) (pronounced \u201cSigh Pie\u201d) is a Python-based ecosystem of open-source software for mathematics, science, and engineering. \n\n* [Matplotlib](https://matplotlib.org/) is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.\n\n* [Autograd](https://github.com/HIPS/autograd) can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives\n\n* [SymPy](https://www.sympy.org/en/index.html) is a Python library for symbolic mathematics. \n\n* [scikit-learn](https://scikit-learn.org/stable/) has simple and efficient tools for machine learning, data mining and data analysis\n\n* [TensorFlow](https://www.tensorflow.org/) is a Python library for fast numerical computing created and released by Google\n\n* [Keras](https://keras.io/) is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano\n\n* And many more such as [pytorch](https://pytorch.org/), [Theano](https://pypi.org/project/Theano/) etc \n\nYour jupyter notebook can easily be\nconverted into a nicely rendered **PDF** file or a Latex file for\nfurther processing. For example, convert to latex as\n\n pycod jupyter nbconvert filename.ipynb --to latex \n\n\nAnd to add more versatility, the Python package [SymPy](http://www.sympy.org/en/index.html) is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) and is entirely written in Python. 
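
As a small illustration of what SymPy can do (a minimal sketch, not part of the original notes), here we differentiate and integrate symbolically:

```python
from sympy import symbols, diff, integrate, sin

x = symbols('x')
# symbolic differentiation: d/dx [x*sin(x)] = sin(x) + x*cos(x)
print(diff(x*sin(x), x))
# symbolic integration: the integral of x**2 from 0 to 1 is 1/3, returned exactly
print(integrate(x**2, (x, 0, 1)))
```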

## Numpy examples and Important Matrix and vector handling packages

There are several central software libraries for linear algebra and eigenvalue problems. Several of the more popular ones have been wrapped into other software packages, like those from the widely used text **Numerical Recipes**. The original source codes in many of the available packages are often taken from the widely used software package LAPACK, which follows two other popular packages developed in the 1970s, namely EISPACK and LINPACK. We describe them shortly here.

 * LINPACK: package for linear equations and least square problems.

 * LAPACK: package for solving symmetric, unsymmetric and generalized eigenvalue problems. From LAPACK's website it is possible to download for free all source codes from this library. Both C/C++ and Fortran versions are available.

 * BLAS (I, II and III): (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. BLAS I is vector operations, II vector-matrix operations and III matrix-matrix operations. Highly parallelized and efficient codes, all available for download from .

## Numpy and arrays

[Numpy](http://www.numpy.org/) provides an easy way to handle arrays in Python. The standard way to import this library is

```python
import numpy as np
```

Here follows a simple example where we set up an array of ten elements, all determined by random numbers drawn according to the normal distribution,

```python
n = 10
x = np.random.normal(size=n)
print(x)
```

    [ 0.0894565   1.94971919 -1.00217137  1.04531722  1.65047376 -0.39492747
      0.07211386 -0.43060307 -0.44633089  0.74168775]

We defined a vector $x$ with $n=10$ elements with its values given by the Normal distribution $N(0,1)$.
Another alternative is to declare a vector as follows

```python
import numpy as np
vx = 0.0
vy = 1.0
vz = 0.0
v = np.array([vx, vy, vz])
print(v)
```

    [0. 1. 0.]

Here we have defined a vector with three elements, $v_0=0$, $v_1=1$ and $v_2=0$. Note that both Python and C++ start numbering array elements from $0$. This means that a vector with $n$ elements has a sequence of entities $x_0, x_1, x_2, \dots, x_{n-1}$. We can also (and this is recommended) let Numpy compute the logarithms of a specific array as

```python
import numpy as np
x = np.log(np.array([4, 7, 8]))
print(x)
```

    [1.38629436 1.94591015 2.07944154]

In the last example we used Numpy's unary function **np.log**. This function is highly tuned to compute array elements since the code is vectorized and does not require explicit looping in Python; the looping over the array elements is done internally by the **np.log** function. We normally recommend that you use the Numpy intrinsic functions instead of the corresponding **log** function from Python's **math** module. The alternative, and slower, way to compute the logarithms of a vector would be to write

```python
import numpy as np
from math import log
x = np.array([4, 7, 8])
for i in range(0, len(x)):
    x[i] = log(x[i])
print(x)
```

We note that our code is much longer already and we need to import the **log** function from the **math** module. 
The attentive reader will also notice that the output is $[1, 1, 2]$. Python automagically interprets our numbers as integers (like the **automatic** keyword in C++). 
To change this we could define our array elements to be double precision numbers as

```python
import numpy as np
x = np.log(np.array([4, 7, 8], dtype = np.float64))
print(x)
```

or simply write them as double precision numbers (Python uses 64 bits as default for floating point type variables), that is

```python
import numpy as np
x = np.log(np.array([4.0, 7.0, 8.0]))
print(x)
```

To check the number of bytes per element (remember that one byte contains eight bits, so a double precision variable uses eight bytes), you can simply use the **itemsize** functionality (the array $x$ is actually an object which inherits the functionalities defined in Numpy) as

```python
import numpy as np
x = np.log(np.array([4.0, 7.0, 8.0]))
print(x.itemsize)
```

## Matrices in Python

Having defined vectors, we are now ready to try out matrices. We can define a $3 \times 3 $ real matrix $\hat{A}$ as (recall that we use lowercase letters for vectors and uppercase letters for matrices)

```python
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
print(A)
```

If we use the **shape** attribute we would get $(3, 3)$ as output, verifying that our matrix is a $3\times 3$ matrix. We can slice the matrix and print for example the first column (Numpy organizes matrix elements in row-major order, see below) as

```python
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
# print the first column; row-major order and elements start with 0
print(A[:,0])
```

We can continue this way by printing out other columns or rows. The example here prints out the second row

```python
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
# print the second row; row-major order and elements start with 0
print(A[1,:])
```

Numpy contains many other functionalities that allow us to slice, subdivide etc etc arrays. We strongly recommend that you look up the [Numpy website for more details](http://www.numpy.org/). Useful functions when defining a matrix are the **np.zeros** function which declares a matrix of a given dimension and sets all elements to zero

```python
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to zero
A = np.zeros( (n, n) )
print(A)
```

or initializing all elements to one

```python
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to one
A = np.ones( (n, n) )
print(A)
```

or as uniformly distributed random numbers (see the material on random number generators in the statistics part)

```python
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to random numbers with x \in [0, 1]
A = np.random.rand(n, n)
print(A)
```

## Meet the Pandas

\n\n\n\n\n\nAnother useful Python package is\n[pandas](https://pandas.pydata.org/), which is an open source library\nproviding high-performance, easy-to-use data structures and data\nanalysis tools for Python. **pandas** stands for panel data, a term borrowed from econometrics and is an efficient library for data analysis with an emphasis on tabular data.\n\n**pandas** has two major classes, the **DataFrame** class with\ntwo-dimensional data objects and tabular data organized in columns and\nthe class **Series** with a focus on one-dimensional data objects. Both\nclasses allow you to index data easily as we will see in the examples\nbelow. **pandas** allows you also to perform mathematical operations on\nthe data, spanning from simple reshapings of vectors and matrices to\nstatistical operations.\n\nThe following simple example shows how we can, in an easy way make\ntables of our data. Here we define a data set which includes names,\nplace of birth and date of birth, and displays the data in an easy to\nread way. We will see repeated use of **pandas**, in particular in\nconnection with classification of data.\n\n\n```python\nimport pandas as pd\nfrom IPython.display import display\ndata = {'First Name': [\"Frodo\", \"Bilbo\", \"Aragorn II\", \"Samwise\"],\n 'Last Name': [\"Baggins\", \"Baggins\",\"Elessar\",\"Gamgee\"],\n 'Place of birth': [\"Shire\", \"Shire\", \"Eriador\", \"Shire\"],\n 'Date of Birth T.A.': [2968, 2890, 2931, 2980]\n }\ndata_pandas = pd.DataFrame(data)\ndisplay(data_pandas)\n```\n\n\n

| | First Name | Last Name | Place of birth | Date of Birth T.A. |
| --- | --- | --- | --- | --- |
| 0 | Frodo | Baggins | Shire | 2968 |
| 1 | Bilbo | Baggins | Shire | 2890 |
| 2 | Aragorn II | Elessar | Eriador | 2931 |
| 3 | Samwise | Gamgee | Shire | 2980 |
\n\n\nIn the above we have imported **pandas** with the shorthand **pd**, the latter has become the standard way we import **pandas**. We make then a list of various variables\nand reorganize the above lists into a **DataFrame** and then print out a neat table with specific column labels as *Name*, *place of birth* and *date of birth*.\nDisplaying these results, we see that the indices are given by the default numbers from zero to three.\n**pandas** is extremely flexible and we can easily change the above indices by defining a new type of indexing as\n\n\n```python\ndata_pandas = pd.DataFrame(data,index=['Frodo','Bilbo','Aragorn','Sam'])\ndisplay(data_pandas)\n```\n\n\n

| | First Name | Last Name | Place of birth | Date of Birth T.A. |
| --- | --- | --- | --- | --- |
| Frodo | Frodo | Baggins | Shire | 2968 |
| Bilbo | Bilbo | Baggins | Shire | 2890 |
| Aragorn | Aragorn II | Elessar | Eriador | 2931 |
| Sam | Samwise | Gamgee | Shire | 2980 |
\n\n\nThereafter we display the content of the row which begins with the index **Aragorn**\n\n\n```python\ndisplay(data_pandas.loc['Aragorn'])\n```\n\n\n First Name Aragorn II\n Last Name Elessar\n Place of birth Eriador\n Date of Birth T.A. 2931\n Name: Aragorn, dtype: object\n\n\nWe can easily append data to this, for example\n\n\n```python\nnew_hobbit = {'First Name': [\"Peregrin\"],\n 'Last Name': [\"Took\"],\n 'Place of birth': [\"Shire\"],\n 'Date of Birth T.A.': [2990]\n }\ndata_pandas=data_pandas.append(pd.DataFrame(new_hobbit, index=['Pippin']))\ndisplay(data_pandas)\n```\n\n\n

| | First Name | Last Name | Place of birth | Date of Birth T.A. |
| --- | --- | --- | --- | --- |
| Frodo | Frodo | Baggins | Shire | 2968 |
| Bilbo | Bilbo | Baggins | Shire | 2890 |
| Aragorn | Aragorn II | Elessar | Eriador | 2931 |
| Sam | Samwise | Gamgee | Shire | 2980 |
| Pippin | Peregrin | Took | Shire | 2990 |
\n\n\nHere are other examples where we use the **DataFrame** functionality to handle arrays, now with more interesting features for us, namely numbers. We set up a matrix \nof dimensionality $10\\times 5$ and compute the mean value and standard deviation of each column. Similarly, we can perform mathematial operations like squaring the matrix elements and many other operations.\n\n\n```python\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display\nnp.random.seed(100)\n# setting up a 10 x 5 matrix\nrows = 10\ncols = 5\na = np.random.randn(rows,cols)\ndf = pd.DataFrame(a)\ndisplay(df)\nprint(df.mean())\nprint(df.std())\ndisplay(df**2)\n```\n\n\n

| | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- |
| 0 | -1.749765 | 0.342680 | 1.153036 | -0.252436 | 0.981321 |
| 1 | 0.514219 | 0.221180 | -1.070043 | -0.189496 | 0.255001 |
| 2 | -0.458027 | 0.435163 | -0.583595 | 0.816847 | 0.672721 |
| 3 | -0.104411 | -0.531280 | 1.029733 | -0.438136 | -1.118318 |
| 4 | 1.618982 | 1.541605 | -0.251879 | -0.842436 | 0.184519 |
| 5 | 0.937082 | 0.731000 | 1.361556 | -0.326238 | 0.055676 |
| 6 | 0.222400 | -1.443217 | -0.756352 | 0.816454 | 0.750445 |
| 7 | -0.455947 | 1.189622 | -1.690617 | -1.356399 | -1.232435 |
| 8 | -0.544439 | -0.668172 | 0.007315 | -0.612939 | 1.299748 |
| 9 | -1.733096 | -0.983310 | 0.357508 | -1.613579 | 1.470714 |

    0   -0.175300
    1    0.083527
    2   -0.044334
    3   -0.399836
    4    0.331939
    dtype: float64
    0    1.069584
    1    0.965548
    2    1.018232
    3    0.793167
    4    0.918992
    dtype: float64

| | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- |
| 0 | 3.061679 | 0.117430 | 1.329492 | 0.063724 | 0.962990 |
| 1 | 0.264421 | 0.048920 | 1.144993 | 0.035909 | 0.065026 |
| 2 | 0.209789 | 0.189367 | 0.340583 | 0.667239 | 0.452553 |
| 3 | 0.010902 | 0.282259 | 1.060349 | 0.191963 | 1.250636 |
| 4 | 2.621102 | 2.376547 | 0.063443 | 0.709698 | 0.034047 |
| 5 | 0.878123 | 0.534362 | 1.853835 | 0.106431 | 0.003100 |
| 6 | 0.049462 | 2.082875 | 0.572069 | 0.666597 | 0.563167 |
| 7 | 0.207888 | 1.415201 | 2.858185 | 1.839818 | 1.518895 |
| 8 | 0.296414 | 0.446453 | 0.000054 | 0.375694 | 1.689345 |
| 9 | 3.003620 | 0.966899 | 0.127812 | 2.603636 | 2.162999 |
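
The column statistics printed above can also be cross-checked directly with NumPy on the underlying array `a` defined earlier (a small sketch); note that pandas' `std` uses the sample definition, i.e. `ddof=1`:

```python
# Cross-check the pandas column statistics with plain numpy on the underlying array a
print(a.mean(axis=0))          # column means, same as df.mean()
print(a.std(axis=0, ddof=1))   # sample standard deviation, same as df.std()
```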
\n\n\nThereafter we can select specific columns only and plot final results\n\n\n```python\ndf.columns = ['First', 'Second', 'Third', 'Fourth', 'Fifth']\ndf.index = np.arange(10)\n\ndisplay(df)\nprint(df['Second'].mean() )\nprint(df.info())\nprint(df.describe())\n\nfrom pylab import plt, mpl\nplt.style.use('seaborn')\nmpl.rcParams['font.family'] = 'serif'\n\ndf.cumsum().plot(lw=2.0, figsize=(10,6))\nplt.show()\n\n\ndf.plot.bar(figsize=(10,6), rot=15)\nplt.show()\n```\n\nWe can produce a $4\\times 4$ matrix\n\n\n```python\nb = np.arange(16).reshape((4,4))\nprint(b)\ndf1 = pd.DataFrame(b)\nprint(df1)\n```\n\nand many other operations. \n\n\nThe **Series** class is another important class included in\n**pandas**. You can view it as a specialization of **DataFrame** but where\nwe have just a single column of data. It shares many of the same\nfeatures as **DataFrame**. As with **DataFrame**, most operations are\nvectorized, achieving thereby a high performance when dealing with\ncomputations of arrays, in particular labeled arrays. As we will see\nbelow it leads also to a very concice code close to the mathematical\noperations we may be interested in. For multidimensional arrays, we\nrecommend strongly\n[xarray](http://xarray.pydata.org/en/stable/). **xarray** has much of\nthe same flexibility as **pandas**, but allows for the extension to\nhigher dimensions than two.\n\n\n\n\n## Introduction to Git and GitHub/GitLab and similar\n\n[Git](https://git-scm.com/) is a distributed version-control system\nfor tracking changes in any set of files, originally designed for\ncoordinating work among programmers cooperating on source code during\nsoftware development.\n\nThe [reference document and videos here](https://git-scm.com/doc)\ngive you an excellent introduction to the **git**.\n\nWe believe you will find version-control software very useful in your work. \n\n\n## GitHub, GitLab and many other\n\n[GitHub](https://github.com/), [GitLab](https://about.gitlab.com/), [Bitbucket](https://bitbucket.org/product?&aceid=&adposition=&adgroup=92266806717&campaign=1407243017&creative=414608923671&device=c&keyword=bitbucket&matchtype=e&network=g&placement=&ds_kids=p51241248597&ds_e=GOOGLE&ds_eid=700000001551985&ds_e1=GOOGLE&gclid=Cj0KCQiA6Or_BRC_ARIsAPzuer_yrxzs-R8KDVdF0-DduJR9hTBYcjdE8L9_CkA9eyz8XT7-3bFGOpQaAqe2EALw_wcB&gclsrc=aw.ds) and other are code hosting platforms for\nversion control and collaboration. They let you and others work\ntogether on projects from anywhere.\n\n\nAll teaching material related to this course is open and freely\navailable via the GitHub site of the course. The video here gives a\nshort intro to\n[GitHub](https://www.youtube.com/watch/w3jLJU7DT5E?reload=9).\n\nSee also the [overview video on Git and GitHub](https://mediaspace.msu.edu/media/t/1_8mgx3cyf).\n\n\n## Useful Git and GitHub links\n\nThese are a couple references that we have found useful (git commands, markdown, GitPages):\n* \n\n* \n\n* \n\n## Useful IDEs and text editors\n\nWhen dealing with homeworks, at some point you would need to use an\neditor, or an integrated development envinroment (IDE). As an IDE, we\nwould like to recommend **anaconda** since we end up using\njupyter-notebooks. 
**anaconda** runs on all known operating systems.\n\n\nIf you prefer editing **Python** codes, there are several excellent cross-platform editors.\nIf you are in a Windows environment, **word** is the classical text editor.\n\nThere is however a wealth of text editors and/ord IDEs that run on all operating\nsystems and functions well with Python. Some of the more popular ones are\n\n* [Atom](https://atom.io/)\n\n* [Sublime](https://www.sublimetext.com/)\n", "meta": {"hexsha": "46800be7371ca29fe65cc8563da2609454ecd853", "size": 96088, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/pub/week2/ipynb/week2.ipynb", "max_stars_repo_name": "schwartznicholas/Physics321", "max_stars_repo_head_hexsha": "b507f0411c2c92a669c85b8c47c502b9e7fa0c8f", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2020-01-09T17:41:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T00:48:58.000Z", "max_issues_repo_path": "doc/pub/week2/ipynb/week2.ipynb", "max_issues_repo_name": "schwartznicholas/Physics321", "max_issues_repo_head_hexsha": "b507f0411c2c92a669c85b8c47c502b9e7fa0c8f", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-01-08T03:47:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-15T15:02:57.000Z", "max_forks_repo_path": "doc/pub/week2/ipynb/week2.ipynb", "max_forks_repo_name": "schwartznicholas/Physics321", "max_forks_repo_head_hexsha": "b507f0411c2c92a669c85b8c47c502b9e7fa0c8f", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 33, "max_forks_repo_forks_event_min_datetime": "2020-01-10T20:40:55.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T20:28:41.000Z", "avg_line_length": 33.5737246681, "max_line_length": 483, "alphanum_fraction": 0.5390995754, "converted": true, "num_tokens": 18155, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. YES", "lm_q1_score": 0.4921881357207956, "lm_q2_score": 0.5964331462646255, "lm_q1q2_score": 0.29355731834207466}} {"text": "\n\n# Customer Segmentation: Estimate Individualized Responses to Incentives\n\nNowadays, business decision makers rely on estimating the causal effect of interventions to answer what-if questions about shifts in strategy, such as promoting specific product with discount, adding new features to a website or increasing investment from a sales team. However, rather than learning whether to take action for a specific intervention for all users, people are increasingly interested in understanding the different responses from different users to the two alternatives. Identifying the characteristics of users having the strongest response for the intervention could help make rules to segment the future users into different groups. This can help optimize the policy to use the least resources and get the most profit.\n\nIn this case study, we will use a personalized pricing example to explain how the [EconML](https://aka.ms/econml) and [DoWhy](https://github.com/microsoft/dowhy) libraries could fit into this problem and provide robust and reliable causal solutions.\n\n### Summary\n\n1. [Background](#background)\n2. [Data](#data)\n3. [Create Causal Model and Identify Causal Effect with DoWhy](#identify)\n4. [Get Causal Effects with EconML](#estimate)\n5. [Test Estimate Robustness with DoWhy](#robustness)\n 1. [Add Random Common Cause](#random-common-cause)\n 2. [Add Unobserved Common Cause](#unobserved-common-cause)\n 3. 
[Replace Treatment with a Random (Placebo) Variable](#placebo-variable)\n 4. [Remove a Random Subset of the Data](#subset)\n6. [Understand Treatment Effects with EconML](#interpret)\n7. [Make Policy Decisions with EconML](#policy)\n8. [Conclusions](#conclusion)\n\n\n\n\n# Background \n\n\n\nThe global online media market is growing fast over the years. Media companies are always interested in attracting more users into the market and encouraging them to buy more songs or become members. In this example, we'll consider a scenario where one experiment a media company is running is to give small discount (10%, 20% or 0) to their current users based on their income level in order to boost the likelihood of their purchase. The goal is to understand the **heterogeneous price elasticity of demand** for people with different income level, learning which users would respond most strongly to a small discount. Furthermore, their end goal is to make sure that despite decreasing the price for some consumers, the demand is raised enough to boost the overall revenue.\n\nThe EconML and DoWhy libraries complement each other in implementing this solution. On one hand, the DoWhy library can help [build a causal model, indentify the causal effect](#identify) and [test causal assumptions](#robustness). On the other hand, EconML\u2019s `DML` based estimators can be used to take the discount variation in existing data, along with a rich set of user features, to [estimate heterogeneous price sensitivities](#estimate) that vary with multiple customer features. Then, the `SingleTreeCateInterpreter` provides a [presentation-ready summary](#interpret) of the key features that explain the biggest differences in responsiveness to a discount, and the `SingleTreePolicyInterpreter` recommends a [policy](#policy) on who should receive a discount in order to increase revenue (not only demand), which could help the company to set an optimal price for those users in the future.\n\n\n```python\n# Some imports to get us started\r\nimport warnings\r\nwarnings.simplefilter('ignore')\r\n\r\n# Utilities\r\nimport os\r\nimport urllib.request\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom networkx.drawing.nx_pydot import to_pydot\r\nfrom IPython.display import Image, display\r\n\r\n# Generic ML imports\r\nfrom sklearn.preprocessing import PolynomialFeatures\r\nfrom sklearn.ensemble import GradientBoostingRegressor\r\n\r\n# EconML imports\r\nfrom econml.dml import LinearDML, CausalForestDML\r\nfrom econml.cate_interpreter import SingleTreeCateInterpreter, SingleTreePolicyInterpreter\r\n\r\nimport matplotlib.pyplot as plt\r\n\r\n%matplotlib inline\n```\n\n# Data \n\n\nThe dataset* has ~10,000 observations and includes 9 continuous and categorical variables that represent user's characteristics and online behaviour history such as age, log income, previous purchase, previous online time per week, etc. 
\n\nWe define the following variables:\n\nFeature Name|Type|Details \n:--- |:---|:--- \n**account_age** |W| user's account age\n**age** |W|user's age\n**avg_hours** |W| the average hours user was online per week in the past\n**days_visited** |W| the average number of days user visited the website per week in the past\n**friend_count** |W| number of friends user connected in the account \n**has_membership** |W| whether the user had membership\n**is_US** |W| whether the user accesses the website from the US \n**songs_purchased** |W| the average songs user purchased per week in the past\n**income** |X| user's income\n**price** |T| the price user was exposed during the discount season (baseline price * small discount)\n**demand** |Y| songs user purchased during the discount season\n\n**To protect the privacy of the company, we use the simulated data as an example here. The data is synthetically generated and the feature distributions don't correspond to real distributions. However, the feature names have preserved their names and meaning.*\n\n\nThe treatment and outcome are generated using the following functions:\n$$\nT = \n\\begin{cases}\n 1 & \\text{with } p=0.2, \\\\\n 0.9 & \\text{with }p=0.3, & \\text{if income}<1 \\\\\n 0.8 & \\text{with }p=0.5, \\\\\n \\\\\n 1 & \\text{with }p=0.7, \\\\\n 0.9 & \\text{with }p=0.2, & \\text{if income}\\ge1 \\\\\n 0.8 & \\text{with }p=0.1, \\\\\n\\end{cases}\n$$\n\n\n\\begin{align}\n\\gamma(X) & = -3 - 14 \\cdot \\{\\text{income}<1\\} \\\\\n\\beta(X,W) & = 20 + 0.5 \\cdot \\text{avg_hours} + 5 \\cdot \\{\\text{days_visited}>4\\} \\\\\nY &= \\gamma(X) \\cdot T + \\beta(X,W)\n\\end{align}\n\n\n\n\n```python\n# Import the sample pricing data\r\nfile_url = \"https://msalicedatapublic.blob.core.windows.net/datasets/Pricing/pricing_sample.csv\"\r\ntrain_data = pd.read_csv(file_url)\n```\n\n\n```python\n# Data sample\r\ntrain_data.head()\n```\n\n\n\n\n

| | account_age | age | avg_hours | days_visited | friends_count | has_membership | is_US | songs_purchased | income | price | demand |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 3 | 53 | 1.834234 | 2 | 8 | 1 | 1 | 4.903237 | 0.960863 | 1.0 | 3.917117 |
| 1 | 5 | 54 | 7.171411 | 7 | 9 | 0 | 1 | 3.330161 | 0.732487 | 1.0 | 11.585706 |
| 2 | 3 | 33 | 5.351920 | 6 | 9 | 0 | 1 | 3.036203 | 1.130937 | 1.0 | 24.675960 |
| 3 | 2 | 34 | 6.723551 | 0 | 8 | 0 | 1 | 7.911926 | 0.929197 | 1.0 | 6.361776 |
| 4 | 4 | 30 | 2.448247 | 5 | 8 | 1 | 0 | 7.148967 | 0.533527 | 0.8 | 12.624123 |
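
Before defining the estimator inputs, it can be helpful to take a quick look at the sample. This optional check is not part of the original analysis:

```python
# Optional: basic sanity checks on the simulated sample
print(train_data.shape)
print(train_data[["price", "demand", "income"]].describe())
```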
\n\n\n\n\n```python\n# Define estimator inputs\r\ntrain_data[\"log_demand\"] = np.log(train_data[\"demand\"])\r\ntrain_data[\"log_price\"] = np.log(train_data[\"price\"])\r\n\r\nY = train_data[\"log_demand\"].values\r\nT = train_data[\"log_price\"].values\r\nX = train_data[[\"income\"]].values # features\r\nconfounder_names = [\"account_age\", \"age\", \"avg_hours\", \"days_visited\", \"friends_count\", \"has_membership\", \"is_US\", \"songs_purchased\"]\r\nW = train_data[confounder_names].values\n```\n\n\n```python\n# Get test data\r\nX_test = np.linspace(0, 5, 100).reshape(-1, 1)\r\nX_test_data = pd.DataFrame(X_test, columns=[\"income\"])\n```\n\n# Create Causal Model and Identify Causal Effect with DoWhy \n\nWe define the causal assumptions with DoWhy. For example, we can include features we believe as confounders and features we think will influence the heterogeneity of the effect. With these assumptions defined, DoWhy can generate a causal graph for us, and use that graph to first identify the causal effect.\n\n\n\n\n```python\n# initiate an EconML cate estimator\r\nest = LinearDML(model_y=GradientBoostingRegressor(), model_t=GradientBoostingRegressor(),\r\n featurizer=PolynomialFeatures(degree=2, include_bias=False))\n```\n\n\n```python\n# fit through dowhy\r\nest_dw = est.dowhy.fit(Y, T, X=X, W=W, outcome_names=[\"log_demand\"], treatment_names=[\"log_price\"], feature_names=[\"income\"],\r\n confounder_names=confounder_names, inference=\"statsmodels\")\n```\n\n WARNING:dowhy.causal_model:Causal Graph not provided. DoWhy will construct a graph based on data inputs.\n INFO:dowhy.causal_graph:If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named \"Unobserved Confounders\" to reflect this.\n INFO:dowhy.causal_model:Model to find the causal effect of treatment ['log_price'] on outcome ['log_demand']\n WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly.\n INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True.\n INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[]\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n\n\n\n```python\n# Visualize causal graph\r\ntry:\r\n # Try pretty printing the graph. 
Requires pydot and pygraphviz\r\n display(\r\n Image(to_pydot(est_dw._graph._graph).create_png())\r\n )\r\nexcept:\r\n # Fall back on default graph view\r\n est_dw.view_model() \n```\n\n\n```python\nidentified_estimand = est_dw.identified_estimand_\r\nprint(identified_estimand)\n```\n\n Estimand type: nonparametric-ate\n \n ### Estimand : 1\n Estimand name: backdoor1 (Default)\n Estimand expression:\n d \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500(Expectation(log_demand|is_US,has_membership,days_visited,age,inco\n d[log_price] \n \n \n me,account_age,avg_hours,songs_purchased,friends_count))\n \n Estimand assumption 1, Unconfoundedness: If U\u2192{log_price} and U\u2192log_demand then P(log_demand|log_price,is_US,has_membership,days_visited,age,income,account_age,avg_hours,songs_purchased,friends_count,U) = P(log_demand|log_price,is_US,has_membership,days_visited,age,income,account_age,avg_hours,songs_purchased,friends_count)\n \n ### Estimand : 2\n Estimand name: iv\n No such variable found!\n \n\n\n# Get Causal Effects with EconML \n\nBased on the identified causal effect above, we fit the model as follows using EconML:\n\n\n\\begin{align}\nlog(Y) & = \\theta(X) \\cdot log(T) + f(X,W) + \\epsilon \\\\\nlog(T) & = g(X,W) + \\eta\n\\end{align}\n\n\nwhere $\\epsilon, \\eta$ are uncorrelated error terms. \n\n\nThe models we fit here aren't an exact match for the data generation function above, but if they are a good approximation, they will allow us to create a good discount policy. Although the model is misspecified, we hope to see that our `DML` based estimators can still capture the right trend of $\\theta(X)$ and that the recommended policy beats other baseline policies (such as always giving a discount) on revenue. Because of the mismatch between the data generating process and the model we're fitting, there isn't a single true $\\theta(X)$ (the true elasticity varies with not only X but also T and W), but given how we generate the data above, we can still calculate the range of true $\\theta(X)$ to compare against.\n\n\n```python\n# Define underlying treatment effect function given DGP\r\ndef gamma_fn(X):\r\n return -3 - 14 * (X[\"income\"] < 1)\r\n\r\ndef beta_fn(X):\r\n return 20 + 0.5 * (X[\"avg_hours\"]) + 5 * (X[\"days_visited\"] > 4)\r\n\r\ndef demand_fn(data, T):\r\n Y = gamma_fn(data) * T + beta_fn(data)\r\n return Y\r\n\r\ndef true_te(x, n, stats):\r\n if x < 1:\r\n subdata = train_data[train_data[\"income\"] < 1].sample(n=n, replace=True)\r\n else:\r\n subdata = train_data[train_data[\"income\"] >= 1].sample(n=n, replace=True)\r\n te_array = subdata[\"price\"] * gamma_fn(subdata) / (subdata[\"demand\"])\r\n if stats == \"mean\":\r\n return np.mean(te_array)\r\n elif stats == \"median\":\r\n return np.median(te_array)\r\n elif isinstance(stats, int):\r\n return np.percentile(te_array, stats)\n```\n\n\n```python\n# Get the estimate and range of true treatment effect\r\ntruth_te_estimate = np.apply_along_axis(true_te, 1, X_test, 1000, \"mean\") # estimate\r\ntruth_te_upper = np.apply_along_axis(true_te, 1, X_test, 1000, 95) # upper level\r\ntruth_te_lower = np.apply_along_axis(true_te, 1, X_test, 1000, 5) # lower level\n```\n\n## Parametric heterogeneity\nFirst of all, we can try to learn a **linear projection of the treatment effect** assuming a polynomial form of $\\theta(X)$. We use the `LinearDML` estimator. 
Since we don't have any priors on these models, we use a generic gradient boosting tree estimators to learn the expected price and demand from the data.\n\n\n```python\nlineardml_estimate = est_dw.estimate_\r\nprint(lineardml_estimate)\n```\n\n *** Causal Estimate ***\n \n ## Identified estimand\n Estimand type: nonparametric-ate\n \n ## Realized estimand\n b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n Target units: ate\n \n ## Estimate\n Mean value: -0.9956103906192235\n \n\n\n\n```python\n# Get treatment effect and its confidence interval\r\nte_pred = est_dw.effect(X_test).flatten()\r\nte_pred_interval = est_dw.effect_interval(X_test)\n```\n\n\n```python\n# Compare the estimate and the truth\r\nplt.figure(figsize=(10, 6))\r\nplt.plot(X_test.flatten(), te_pred, label=\"Sales Elasticity Prediction\")\r\nplt.plot(X_test.flatten(), truth_te_estimate, \"--\", label=\"True Elasticity\")\r\nplt.fill_between(\r\n X_test.flatten(),\r\n te_pred_interval[0].flatten(),\r\n te_pred_interval[1].flatten(),\r\n alpha=0.2,\r\n label=\"95% Confidence Interval\",\r\n)\r\nplt.fill_between(\r\n X_test.flatten(),\r\n truth_te_lower,\r\n truth_te_upper,\r\n alpha=0.2,\r\n label=\"True Elasticity Range\",\r\n)\r\nplt.xlabel(\"Income\")\r\nplt.ylabel(\"Songs Sales Elasticity\")\r\nplt.title(\"Songs Sales Elasticity vs Income\")\r\nplt.legend(loc=\"lower right\")\n```\n\nFrom the plot above, it's clear to see that the true treatment effect is a **nonlinear** function of income, with elasticity around -1.75 when income is smaller than 1 and a small negative value when income is larger than 1. The model fits a quadratic treatment effect, which is not a great fit. But it still captures the overall trend: the elasticity is negative and people are less sensitive to the price change if they have higher income.\n\n\n```python\n# Get the final coefficient and intercept summary\r\nest_dw.summary(feature_names=[\"income\"])\n```\n\n\n\n\n\n\n\n \n\n\n \n\n\n \n\n
**Coefficient Results**

| | point_estimate | stderr | zstat | pvalue | ci_lower | ci_upper |
|---|---|---|---|---|---|---|
| income | 2.386 | 0.081 | 29.485 | 0.0 | 2.227 | 2.545 |
| income^2 | -0.42 | 0.028 | -15.185 | 0.0 | -0.474 | -0.366 |

**CATE Intercept Results**

| | point_estimate | stderr | zstat | pvalue | ci_lower | ci_upper |
|---|---|---|---|---|---|---|
| cate_intercept | -3.003 | 0.049 | -60.738 | 0.0 | -3.1 | -2.906 |


A linear parametric conditional average treatment effect (CATE) model was fitted:
$Y = \\Theta(X)\\cdot T + g(X, W) + \\epsilon$
where for every outcome $i$ and treatment $j$ the CATE $\\Theta_{ij}(X)$ has the form:
$\\Theta_{ij}(X) = \\phi(X)' coef_{ij} + cate\\_intercept_{ij}$
where $\\phi(X)$ is the output of the `featurizer`, or $X$ itself if `featurizer` is None. The Coefficient Results table reports the $coef_{ij}$ parameter vector for each outcome $i$ and treatment $j$; the CATE Intercept Results table reports the $cate\\_intercept_{ij}$ parameter.
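The same numbers can also be read programmatically from the fitted estimator, which is convenient when the elasticity curve feeds a downstream step. The sketch below is not part of the original notebook; it assumes the `est_dw` object fitted above and that the DoWhy wrapper forwards the `coef_`/`intercept_` accessors of `LinearDML` (if it does not, the same calls can be made on the underlying `est`).

```python
# Hedged sketch: pull the final-model CATE parameters out of the estimator
# instead of reading the rendered summary table. Assumes `est_dw` from above.
print("coef_ for phi(X) = [income, income^2]:", est_dw.coef_)
print("cate_intercept:", est_dw.intercept_)
print("95% CI for coef_:", est_dw.coef__interval(alpha=0.05))
print("95% CI for intercept:", est_dw.intercept__interval(alpha=0.05))
```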
\n\n\n\n`LinearDML` estimator can also return the summary of the coefficients and intercept for the final model, including point estimates, p-values and confidence intervals. From the table above, we notice that $income$ has positive effect and ${income}^2$ has negative effect, and both of them are statistically significant.\n\n## Nonparametric Heterogeneity\nSince we already know the true treatment effect function is nonlinear, let us fit another model using `CausalForestDML`, which assumes a fully **nonparametric estimation of the treatment effect**.\n\n\n```python\n# initiate an EconML cate estimator\r\nest_nonparam = CausalForestDML(model_y=GradientBoostingRegressor(), model_t=GradientBoostingRegressor())\r\n# fit through dowhy\r\nest_nonparam_dw = est_nonparam.dowhy.fit(Y, T, X=X, W=W, outcome_names=[\"log_demand\"], treatment_names=[\"log_price\"],\r\n feature_names=[\"income\"], confounder_names=confounder_names, inference=\"blb\")\n```\n\n WARNING:dowhy.causal_model:Causal Graph not provided. DoWhy will construct a graph based on data inputs.\n INFO:dowhy.causal_graph:If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named \"Unobserved Confounders\" to reflect this.\n INFO:dowhy.causal_model:Model to find the causal effect of treatment ['log_price'] on outcome ['log_demand']\n WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly.\n INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True.\n INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[]\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n\n\n\n```python\n# Get treatment effect and its confidence interval\r\nte_pred = est_nonparam_dw.effect(X_test).flatten()\r\nte_pred_interval = est_nonparam_dw.effect_interval(X_test)\n```\n\n\n```python\n# Compare the estimate and the truth\r\nplt.figure(figsize=(10, 6))\r\nplt.plot(X_test.flatten(), te_pred, label=\"Sales Elasticity Prediction\")\r\nplt.plot(X_test.flatten(), truth_te_estimate, \"--\", label=\"True Elasticity\")\r\nplt.fill_between(\r\n X_test.flatten(),\r\n te_pred_interval[0].flatten(),\r\n te_pred_interval[1].flatten(),\r\n alpha=0.2,\r\n label=\"95% Confidence Interval\",\r\n)\r\nplt.fill_between(\r\n X_test.flatten(),\r\n truth_te_lower,\r\n truth_te_upper,\r\n alpha=0.2,\r\n label=\"True Elasticity Range\",\r\n)\r\nplt.xlabel(\"Income\")\r\nplt.ylabel(\"Songs Sales Elasticity\")\r\nplt.title(\"Songs Sales Elasticity vs Income\")\r\nplt.legend(loc=\"lower right\")\n```\n\nWe notice that this model fits much better than the `LinearDML`, the 95% confidence interval correctly covers the true treatment effect estimate and captures the variation when income is around 1. Overall, the model shows that people with low income are much more sensitive to the price changes than higher income people.\n\n# Test Estimate Robustness with DoWhy \n\n### Add Random Common Cause \n\nHow robust are our estimates to adding another confounder? 
We use DoWhy to test this!\n\n\n```python\nres_random = est_nonparam_dw.refute_estimate(method_name=\"random_common_cause\", num_simulations=5)\r\nprint(res_random)\n```\n\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count+w_random | income\n\n\n Refute: Add a Random Common Cause\n Estimated effect:-0.9594204479199662\n New effect:-0.9574777656374094\n \n\n\n### Add Unobserved Common Cause \n\nHow robust are our estimates to unobserved confounders? Since we assume the model is under unconfoundedness, adding an unobserved confounder might bias the estimates. We use DoWhy to test this!\n\n\n```python\nres_unobserved = est_nonparam_dw.refute_estimate(\r\n method_name=\"add_unobserved_common_cause\",\r\n confounders_effect_on_treatment=\"linear\",\r\n confounders_effect_on_outcome=\"linear\",\r\n effect_strength_on_treatment=0.1,\r\n effect_strength_on_outcome=0.1,\r\n)\r\nprint(res_unobserved)\n```\n\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n\n\n Refute: Add an Unobserved Common Cause\n Estimated effect:-0.9594204479199662\n New effect:0.20029340691678463\n \n\n\n### Replace Treatment with a Random (Placebo) Variable \n\nWhat happens our estimates if we replace the treatment variable with noise? Ideally, the average effect would be wildly different than our original estimate. We use DoWhy to investigate!\n\n\n```python\nres_placebo = est_nonparam_dw.refute_estimate(\r\n method_name=\"placebo_treatment_refuter\", placebo_type=\"permute\", \r\n num_simulations=3\r\n)\r\nprint(res_placebo)\n```\n\n INFO:dowhy.causal_refuters.placebo_treatment_refuter:Refutation over 3 simulated datasets of permute treatment\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~placebo+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~placebo+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~placebo+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n WARNING:dowhy.causal_refuters.placebo_treatment_refuter:We assume a Normal Distribution as the sample has less than 100 examples.\n Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size.\n\n\n Refute: Use a Placebo Treatment\n Estimated effect:-0.9594204479199662\n New effect:-0.0009044538846515711\n p value:0.4246571154416484\n \n\n\n### Remove a Random Subset of the Data \n\nDo we recover similar estimates on subsets of the data? This speaks to the ability of our chosen estimator to generalize well. 
We use DoWhy to investigate this!\n\n\n```python\nres_subset = est_nonparam_dw.refute_estimate(\r\n method_name=\"data_subset_refuter\", subset_fraction=0.8, \r\n num_simulations=3)\r\nprint(res_subset)\n```\n\n INFO:dowhy.causal_refuters.data_subset_refuter:Refutation over 0.8 simulated datasets of size 8000.0 each\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n INFO:dowhy.causal_estimator:INFO: Using EconML Estimator\n INFO:dowhy.causal_estimator:b: log_demand~log_price+is_US+has_membership+days_visited+age+income+account_age+avg_hours+songs_purchased+friends_count | income\n WARNING:dowhy.causal_refuters.data_subset_refuter:We assume a Normal Distribution as the sample has less than 100 examples.\n Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size.\n\n\n Refute: Use a subset of data\n Estimated effect:-0.9594204479199662\n New effect:-0.9571011772201145\n p value:0.19397906736405435\n \n\n\n# Understand Treatment Effects with EconML \nEconML includes interpretability tools to better understand treatment effects. Treatment effects can be complex, but oftentimes we are interested in simple rules that can differentiate between users who respond positively, users who remain neutral and users who respond negatively to the proposed changes.\n\nThe EconML `SingleTreeCateInterpreter` provides interperetability by training a single decision tree on the treatment effects outputted by the any of the EconML estimators. In the figure below we can see in dark red users respond strongly to the discount and the in white users respond lightly to the discount.\n\n\n```python\nintrp = SingleTreeCateInterpreter(include_model_uncertainty=True, max_depth=2, min_samples_leaf=10)\r\nintrp.interpret(est_nonparam_dw, X_test)\r\nplt.figure(figsize=(25, 5))\r\nintrp.plot(feature_names=[\"income\"], fontsize=12)\n```\n\n# Make Policy Decision with EconML \nWe want to make policy decisions to maximum the **revenue** instead of the demand. In this scenario,\n\n\n\\begin{align}\nRev & = Y \\cdot T \\\\\n & = \\exp^{log(Y)} \\cdot T\\\\\n & = \\exp^{(\\theta(X) \\cdot log(T) + f(X,W) + \\epsilon)} \\cdot T \\\\\n & = \\exp^{(f(X,W) + \\epsilon)} \\cdot T^{(\\theta(X)+1)}\n\\end{align}\n\n\nWith the decrease of price, revenue will increase only if $\\theta(X)+1<0$. Thus, we set `sample_treatment_cast=-1` here to learn **what kinds of customers we should give a small discount to maximum the revenue**.\n\nThe EconML library includes policy interpretability tools such as `SingleTreePolicyInterpreter` that take in a treatment cost and the treatment effects to learn simple rules about which customers to target profitably. 
In the figure below we can see the model recommends to give discount for people with income less than $0.985$ and give original price for the others.\n\n\n```python\nintrp = SingleTreePolicyInterpreter(risk_level=0.05, max_depth=2, min_samples_leaf=1, min_impurity_decrease=0.001)\r\nintrp.interpret(est_nonparam_dw, X_test, sample_treatment_costs=-1)\r\nplt.figure(figsize=(25, 5))\r\nintrp.plot(feature_names=[\"income\"], treatment_names=[\"Discount\", \"No-Discount\"], fontsize=12)\n```\n\nNow, let us compare our policy with other baseline policies! Our model says which customers to give a small discount to, and for this experiment, we will set a discount level of 10% for those users. Because the model is misspecified we would not expect good results with large discounts. Here, because we know the ground truth, we can evaluate the value of this policy.\n\n\n```python\n# define function to compute revenue\r\ndef revenue_fn(data, discount_level1, discount_level2, baseline_T, policy):\r\n policy_price = baseline_T * (1 - discount_level1) * policy + baseline_T * (1 - discount_level2) * (1 - policy)\r\n demand = demand_fn(data, policy_price)\r\n rev = demand * policy_price\r\n return rev\n```\n\n\n```python\npolicy_dic = {}\r\n# our policy above\r\npolicy = intrp.treat(X)\r\npolicy_dic[\"Our Policy\"] = np.mean(revenue_fn(train_data, 0, 0.1, 1, policy))\r\n\r\n## previous strategy\r\npolicy_dic[\"Previous Strategy\"] = np.mean(train_data[\"price\"] * train_data[\"demand\"])\r\n\r\n## give everyone discount\r\npolicy_dic[\"Give Everyone Discount\"] = np.mean(revenue_fn(train_data, 0.1, 0, 1, np.ones(len(X))))\r\n\r\n## don't give discount\r\npolicy_dic[\"Give No One Discount\"] = np.mean(revenue_fn(train_data, 0, 0.1, 1, np.ones(len(X))))\r\n\r\n## follow our policy, but give -10% discount for the group doesn't recommend to give discount\r\npolicy_dic[\"Our Policy + Give Negative Discount for No-Discount Group\"] = np.mean(revenue_fn(train_data, -0.1, 0.1, 1, policy))\r\n\r\n## give everyone -10% discount\r\npolicy_dic[\"Give Everyone Negative Discount\"] = np.mean(revenue_fn(train_data, -0.1, 0, 1, np.ones(len(X))))\n```\n\n\n```python\n# get policy summary table\r\nres = pd.DataFrame.from_dict(policy_dic, orient=\"index\", columns=[\"Revenue\"])\r\nres[\"Rank\"] = res[\"Revenue\"].rank(ascending=False)\r\nres\n```\n\n\n\n\n
| | Revenue | Rank |
|---|---|---|
| Our Policy | 14.686241 | 2.0 |
| Previous Strategy | 14.349342 | 4.0 |
| Give Everyone Discount | 13.774469 | 6.0 |
| Give No One Discount | 14.294606 | 5.0 |
| Our Policy + Give Negative Discount for No-Discount Group | 15.564411 | 1.0 |
| Give Everyone Negative Discount | 14.612670 | 3.0 |
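As a convenience (not in the original notebook), the revenue column can be re-expressed as a percentage lift over the previous strategy using the `res` DataFrame built above; this makes the ranking easier to read at a glance.

```python
# Hedged sketch: percentage revenue lift of each policy over "Previous Strategy",
# computed from the `res` summary table above.
baseline = res.loc["Previous Strategy", "Revenue"]
res["Lift vs Previous (%)"] = (res["Revenue"] / baseline - 1) * 100
res.sort_values("Rank")
```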
\n\n\n\n**We beat the baseline policies!** Our policy gets the highest revenue except for the one raising the price for the No-Discount group. That means our currently baseline price is low, but the way we segment the user does help increase the revenue!\n\n# Conclusions \n\nIn this notebook, we have demonstrated the power of using EconML and DoWhy to:\n\n* Estimate the treatment effect correctly even the model is misspecified\n* Test causal assumptions and investigate the robustness of the resulting estimates\n* Interpret the resulting individual-level treatment effects\n* Make the policy decision beats the previous and baseline policies\n\nTo learn more about what EconML can do for you, visit our [website](https://aka.ms/econml), our [GitHub page](https://github.com/microsoft/EconML) or our [docummentation](https://econml.azurewebsites.net/). \n\nTo learn more about what DoWhy can do for you, visit the [GitHub page](https://github.com/microsoft/dowhy) or [documentation](https://microsoft.github.io/dowhy/index.html).\n\n", "meta": {"hexsha": "387d737104715969369c5a00019764c187fad05e", "size": 385971, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/CustomerScenarios/Case Study - Customer Segmentation at An Online Media Company - EconML + DoWhy.ipynb", "max_stars_repo_name": "RomaKoks/EconML", "max_stars_repo_head_hexsha": "b953f51e4e2965514ce1a88d1d717b1065a33d86", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 259, "max_stars_repo_stars_event_min_datetime": "2018-07-15T08:17:18.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-06T20:41:42.000Z", "max_issues_repo_path": "notebooks/CustomerScenarios/Case Study - Customer Segmentation at An Online Media Company - EconML + DoWhy.ipynb", "max_issues_repo_name": "RomaKoks/EconML", "max_issues_repo_head_hexsha": "b953f51e4e2965514ce1a88d1d717b1065a33d86", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 33, "max_issues_repo_issues_event_min_datetime": "2019-01-30T22:11:52.000Z", "max_issues_repo_issues_event_max_datetime": "2019-05-04T19:53:17.000Z", "max_forks_repo_path": "notebooks/CustomerScenarios/Case Study - Customer Segmentation at An Online Media Company - EconML + DoWhy.ipynb", "max_forks_repo_name": "RomaKoks/EconML", "max_forks_repo_head_hexsha": "b953f51e4e2965514ce1a88d1d717b1065a33d86", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 32, "max_forks_repo_forks_event_min_datetime": "2018-06-12T11:22:10.000Z", "max_forks_repo_forks_event_max_datetime": "2019-05-03T18:51:25.000Z", "avg_line_length": 315.5936222404, "max_line_length": 132233, "alphanum_fraction": 0.91596778, "converted": true, "num_tokens": 8372, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
NO", "lm_q1_score": 0.5964331319177487, "lm_q2_score": 0.4921881357207956, "lm_q1q2_score": 0.2935573112807121}} {"text": "```python\nimport os\nos.environ['TRKXINPUTDIR']=\"/global/cfs/cdirs/m3443/data/trackml-kaggle/train_10evts\"\ninput_data_path = \"/global/cfs/projectdirs/m3443/usr/caditi97/iml2021/run_dir\" \nos.environ['TRKXOUTPUTDIR']= input_data_path\n```\n\n\n```python\nimport pkg_resources\nimport yaml\nimport pprint\nimport random\nrandom.seed(1234)\nimport numpy as np\n# import pandas as pd\nimport itertools\nimport matplotlib.pyplot as plt\nimport tqdm\nfrom os import listdir\nfrom os.path import isfile, join\nimport matplotlib.cm as cm\nimport sys\nimport tqdm\nfrom tqdm import tqdm\nimport tqdm.notebook as tq\nimport pandas as pd\n# import sympy\n# from sympy import S, symbols, printing\n# %matplotlib widget\n\nsys.path.append('/global/homes/c/caditi97/exatrkx-iml2020/exatrkx/src/')\n\n# 3rd party\nimport torch\nimport torch.nn.functional as F\nfrom torch_geometric.data import Data\nfrom trackml.dataset import load_event\nfrom pytorch_lightning import Trainer\nfrom pytorch_lightning.callbacks import ModelCheckpoint\n\n\n# local import\nfrom exatrkx import config_dict # for accessing predefined configuration files\nfrom exatrkx import outdir_dict # for accessing predefined output directories\nfrom exatrkx.src import utils_dir\nfrom exatrkx.src import utils_robust\nfrom utils_robust import *\n\n\n# for preprocessing\nfrom exatrkx import FeatureStore\nfrom exatrkx.src import utils_torch\n\n# for embedding\nfrom exatrkx import LayerlessEmbedding\nfrom exatrkx.src import utils_torch\nfrom torch_cluster import radius_graph\nfrom utils_torch import build_edges\nfrom embedding.embedding_base import *\n\n# for filtering\nfrom exatrkx import VanillaFilter\n\n# for GNN\nimport tensorflow as tf\nfrom graph_nets import utils_tf\nfrom exatrkx import SegmentClassifier\nimport sonnet as snt\n\n# for labeling\nfrom exatrkx.scripts.tracks_from_gnn import prepare as prepare_labeling\nfrom exatrkx.scripts.tracks_from_gnn import clustering as dbscan_clustering\n\n# track efficiency\nfrom trackml.score import _analyze_tracks\nfrom exatrkx.scripts.eval_reco_trkx import make_cmp_plot, pt_configs, eta_configs\nfrom functools import partial\n\n\n```\n\n\n```python\nplt.rcParams.update({'axes.titlesize' : 16, 'axes.labelsize' : 16, 'lines.linewidth' : 2, 'lines.markersize' : 10,\n 'xtick.labelsize' : 14, 'xtick.major.width' : 2,\n 'ytick.labelsize' : 14, 'ytick.major.width' : 2,\n 'grid.alpha' : 0.5, \"legend.frameon\" : False, 'legend.fontsize' : 16})\n\n```\n\n\n```python\nfile_path = \"/global/cfs/projectdirs/m3443/usr/caditi97/iml2021/run_dir/feature_store/\"\nevent_id = '1025'\nproc_path = os.path.join(file_path, event_id)\n```\n\n\n```python\ndata = torch.load(proc_path)\ndata\n```\n\n\n\n\n Data(event_file=\"WmuHNL15GeV_NoPileUp_Generic/event000001025\", hid=[202], layerless_true_edges=[2, 181], layers=[202], pid=[202], x=[202, 3])\n\n\n\n\n```python\nevent_path = f\"/global/cfs/projectdirs/m3443/usr/caditi97/iml2021/particles/event00000{event_id}-particles.csv\"\n```\n\n\n```python\npids = data.pid.numpy()\nunq_pids = np.unique(pids)\n```\n\n\n```python\nparticles = pd.read_csv(event_path)\n```\n\n\n```python\nparticles.loc[particles.index[particles['particle_id'] == 4503599644147712]]\n```\n\n\n\n\n
| | particle_id | particle_type | process | vx | vy | vz | vt | px | py | pz | m | q |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 4503599644147712 | 2212 | 0 | -0.143223 | -1.994651 | -27.815966 | -4.814418 | 0.088711 | 0.05035 | -314.095673 | 0.938271 | |
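The next cell recomputes each hit's distance from the production vertex inside a Python loop; as a quick cross-check, the same radius can be obtained with one vectorized NumPy expression. This sketch is not part of the original notebook and assumes the `data` and `particles` objects loaded above.

```python
# Hedged sketch: vectorized hit radius relative to the production vertex of
# the particle displayed above (particle_id 4503599644147712).
import numpy as np

pid = 4503599644147712
vertex = particles.loc[particles["particle_id"] == pid, ["vx", "vy", "vz"]].values[0]
hits = data.x.numpy()[data.pid.numpy() == pid]      # (n_hits, 3) hit positions
r = np.sqrt(((hits - vertex) ** 2).sum(axis=1))     # distance of each hit from the vertex
print(r)
```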
\n\n\n\n\n```python\ndef get_r(particles,pid):\n pid_idx = particles.index[particles['particle_id'] == pid]\n vx = particles.loc[pid_idx]['vx']\n vy = particles.loc[pid_idx]['vy']\n vz = particles.loc[pid_idx]['vz']\n return pid_idx, vx, vy, vz\n```\n\n\n```python\nfig,ax = plt.subplots(figsize=(10,10))\nfor pid in pids:\n pid_idx, vx, vy, vz = get_r(particles,pid)\n hids = data.x.numpy()[pids == pid]\n for i in range(0,len(hids)):\n x = hids[i,][0]\n y = hids[i,][1]\n z = hids[i,][2]\n r = np.sqrt((x - vx)**2 + (y - vy)**2 + (z - vz)**2)\n if z.size != r.size:\n continue\n ax.scatter(z,r, color='gray', s=3)\nax.set_xlabel('z')\nax.set_ylabel('r')\n\n# true_edges = data.layerless_true_edges\n\n# for idx in range(true_edges.shape[1]):\n# edge1 = true_edges[0][idx]\n# # edge2 = true_edges[1][idx]\n# pid1 = data.pid[edge1]\n# # pid2 = data.pid[edge2]\n# _, vx1, vy1, vz1 = get_r(particles,pid1)\n# # _, vx2, vy2, vz2 = get_r(particles,pid2)\n# r1 = np.sqrt((data.x[edge1][0] - vx1)**2 + (data.x[edge1][1] - vy1)**2 + (data.x[edge1][2] - vz1)**2)\n# ax.scatter(data.x[edge1][2],data.x[edge1][1], color='red',s=1)\n \n\n```\n\n\n```python\n# def e_model(data, embed_ckpt_dir):\n# device = 'cuda' if torch.cuda.is_available() else 'cpu'\n# e_ckpt = torch.load(embed_ckpt_dir, map_location=device)\n# e_config = e_ckpt['hyper_parameters']\n# e_config['clustering'] = 'build_edges'\n# e_config['knn_val'] = 500\n# e_config['r_val'] = 1.7\n# e_model = LayerlessEmbedding(e_config).to(device)\n# e_model.load_state_dict(e_ckpt[\"state_dict\"])\n# e_model.eval()\n# return e_model\n\n# def embedding_hits(e_model,data):\n# with torch.no_grad():\n# # had to move everything to device\n# spatial = e_model(data.x.to(device))\n# e_spatial_build = utils_torch.build_edges(spatial.to(device), e_model.hparams['r_val'], e_model.hparams['knn_val'])\n \n# R_dist = torch.sqrt(data.x[:,0]**2 + data.x[:,2]**2) # distance away from origin...\n# e_spatial = e_spatial_build[:, (R_dist[e_spatial_build[0]] <= R_dist[e_spatial_build[1]])]\n# return e_spatial\n\n# def doublet_metrics(data, e_spatial):\n# e_bidir = torch.cat([data.layerless_true_edges,torch.stack([data.layerless_true_edges[1],\n# data.layerless_true_edges[0]], axis=1).T], axis=-1)\n# # did not have to convert e_spatail to tensor??\n# e_spatial_n, y_cluster = graph_intersection(e_spatial, e_bidir)\n# cluster_true = len(data.layerless_true_edges[0])\n# cluster_true_positive = y_cluster.sum()\n# cluster_positive = len(e_spatial_n[0])\n# pur = cluster_true_positive/cluster_positive\n# eff = cluster_true_positive/cluster_true \n# return pur,eff\n```\n\n\n```python\n# embed_ckpt_dir = '/global/cfs/projectdirs/m3443/usr/caditi97/iml2021/run_dir/embedding_output/ckpt-epoch=11-val_loss=0.78.ckpt'\n\n```\n\n\n```python\n# e_model = e_model(data, embed_ckpt_dir)\n# e_spatial = embedding_hits(e_model,data)\n# # e_spatial_np = e_spatial.cpu().detach().numpy()\n# # e_spatial_np_t = e_spatial_np.T\n# pur,eff = doublet_metrics(data,e_spatial)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "f2b90a5364cb056177952dafc7bb52898b708778", "size": 26345, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/visualize_pileup_tracks.ipynb", "max_stars_repo_name": "caditi97/exatrkx-iml2020", "max_stars_repo_head_hexsha": "f4b1e4438cda7db2d40c8e572b1b682c12781e6c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-24T18:54:55.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-24T18:54:55.000Z", 
"max_issues_repo_path": "notebooks/visualize_pileup_tracks.ipynb", "max_issues_repo_name": "caditi97/exatrkx-iml2020", "max_issues_repo_head_hexsha": "f4b1e4438cda7db2d40c8e572b1b682c12781e6c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/visualize_pileup_tracks.ipynb", "max_forks_repo_name": "caditi97/exatrkx-iml2020", "max_forks_repo_head_hexsha": "f4b1e4438cda7db2d40c8e572b1b682c12781e6c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.4243119266, "max_line_length": 13148, "alphanum_fraction": 0.7510343519, "converted": true, "num_tokens": 2196, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5078118642792044, "lm_q2_score": 0.5774953651858118, "lm_q1q2_score": 0.2932589980076071}} {"text": "```python\n# %load /Users/facai/Study/book_notes/preconfig.py\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(color_codes=True)\nsns.set(font='SimHei')\nplt.rcParams['axes.grid'] = False\n\nfrom IPython.display import SVG\n\ndef show_image(filename, figsize=None):\n if figsize:\n plt.figure(figsize=figsize)\n\n plt.imshow(plt.imread(filename))\n```\n\nxgboost\u7684\u57fa\u672c\u539f\u7406\u4e0e\u5b9e\u73b0\u7b80\u4ecb\n========================\n\nxgboost\u662f\u4e2a\u5f88\u68d2\u7684\u5de5\u5177\uff0c\u5176\u4f18\u70b9\u5f88\u591a\uff1a\u8fd0\u884c\u901f\u5ea6\uff0c\u652f\u6301\u6b63\u5219\uff0c\u76f4\u63a5\u7528\u635f\u5931\u51fd\u6570\u6307\u5bfc\u6811\u7684\u751f\u6210\u7b49\u7b49\uff0c\u4e0d\u4e00\u800c\u8db3\u3002\u76f8\u8f83spark\u81ea\u5e26\u7684gbdt\uff0cxgboost\u66f4\u9002\u5408\u5de5\u7a0b\u5e94\u7528\u3002\n\n\u672c\u6587\u53c2\u7167\u8bba\u6587 Tianqi Chen and Carlos Guestrin. XGBoost: A Scalable Tree Boosting System\uff0c\u4ecb\u7ecd\u4e0bxgboost\u5bf9\u539f\u59cbgbdt\u7684\u6539\u8fdb\u5730\u65b9\uff0c\u5e76\u8bf4\u660e\u4e0b\u5177\u4f53\u7684\u5b9e\u73b0\u7ec6\u8282\u3002\n\n\u4ee3\u7801\u7248\u672c\u4fe1\u606f\uff1a\n\n```bash\n~/W/xgboost \u276f\u276f\u276f git log -n 1\ncommit 74db1e8867eb9ecfacf07311131c2724dbc3fdbd\nAuthor: Nan Zhu \nDate: Sun Aug 28 21:25:49 2016 -0400\n\n [jvm-packages] remove APIs with DMatrix from xgboost-spark (#1519)\n\n * test consistency of prediction functions between DMatrix and RDD\n\n * remove APIs with DMatrix from xgboost-spark\n\n * fix compilation error in xgboost4j-example\n\n * fix test cases\n```\n\n### 0. \u5927\u7eb2\n\nxgboost\u7684\u8d21\u732e\uff0c\u65e2\u6709\u7406\u8bba\u4e0a\u7684\u62d3\u5c55\uff0c\u4e5f\u6709\u5de5\u7a0b\u4e0a\u7684\u6027\u80fd\u4f18\u5316\u3002\u672c\u6587\u53ea\u5173\u5fc3\u5728\u5176\u5728\u7406\u8bba\u4e0a\u7684\u6539\u8fdb\uff0c\u4e3b\u8981\u662f\u5c06\u6b63\u5219\uff08Regularized\uff09\u5f15\u5165\u635f\u5931\u51fd\u6570\uff0c\u5e76\u4e14\u7528\u635f\u5931\u51fd\u6570\u76f4\u63a5\u6307\u5bfc\u6811\u7684\u751f\u6210\u3002\u6211\u4eec\u7528\u4f20\u7edfGBDT\u505a\u5f15\uff0c\u4e00\u6b65\u6b65\u8d70\u5230xgboost\u3002\n\n\u8fd9\u91cc\u5047\u8bbe\u8bfb\u8005\u5df2\u7ecf\u4e86\u89e3\u51b3\u7b56\u6811\u548cGBDT\u7684\u539f\u7406\uff0c\u4e0d\u4f1a\u8fc7\u591a\u94fa\u57ab\u3002\n\n### 1. 
GBDT\u6846\u67b6\n\n\u4e3a\u4e86\u4fbf\u4e8e\u540e\u7eed\u8bb2\u89e3\uff0c\u9996\u5148\u4f1a\u5f15\u5165\u516c\u5f0f\u63cf\u8ff0\u51b3\u7b56\u6811\u548cGBDT\u6a21\u578b\uff0c\u6d89\u53ca\u7684\u6570\u5b66\u77e5\u8bc6\u662f\u6bd4\u8f83\u7b80\u6613\u7684\uff0c\u4e0d\u8981\u6050\u614c\u3002\n\n\u5bf9\u4e8e\u4e00\u4e2a$J$\u9897\u53f6\u5b50\u7684\u51b3\u7b56\u6811\uff08CART\uff09\uff0c\u53ef\u4ee5\u8868\u793a\u4e3a\u52a0\u6cd5\u6a21\u578b[1]\uff1a\n\n\\begin{equation}\n f(x) = h \\left (x; \\{b_j, R_j\\}^J_1 \\right ) = \\displaystyle \\sum_{j=1}^J b_j \\mathbf{1}(x \\in R_j)\n\\end{equation}\n\n\u5176\u4e2d$R_j$\u662f\u53f6\u5b50\uff0c$b_j$\u662f\u53f6\u5b50\u5bf9\u5e94\u7684\u503c\u3002\n\n\u5bf9\u4e8e\u4e00\u4e2atree ensemble model\uff0c\u5c31\u662f$K$\u9897\u6811\u7684\u7ed3\u679c\u53e0\u52a0\uff1a\n\n\\begin{equation}\n \\hat{y}_i = \\phi(x_i) = \\displaystyle \\sum_{k=1}^K f_k(x_i)\n\\end{equation}\n\n\u5177\u4f53\u5230\u4f20\u7edf\u7684GBDT\uff0c\u5176\u53ef\u63cf\u8ff0\u6210\u6700\u4f18\u95ee\u9898\uff1a\n\n\\begin{align}\n f_m &= \\displaystyle \\operatorname{arg \\, min}_f \\sum_{i=1}^n L \\left ( y_i, \\hat{y}_i + f(x_i) \\right ) \\\\\n &= \\operatorname{arg \\, min} \\mathcal{L}(f)\n\\end{align}\n\n\u4f20\u7edf\u7684\u601d\u8def\uff0c\u4fbf\u662f\u501f\u7528\u6700\u901f\u4e0b\u964d\u7684\u60f3\u6cd5\uff0c\u8ba4\u5b9a\u635f\u5931\u51fd\u6570\u4e2d$f(x_i)$\u662f\u5173\u4e8e\u68af\u5ea6\u7684\u4e00\u4e2a\u51fd\u6570\u3002\u800cxgboost\u6b63\u662f\u5728\u8fd9\u91cc\u505a\u4e86\u65b0\u7684\u6587\u7ae0\uff0c\u5f15\u5165\u6b63\u5219\uff0c\u6cf0\u52d2\u5c55\u5f00\uff0c\u518d\u63c9\u8fdb\u6811\u6a21\u578b\u91cc\u3002\n\n[1]: Friedman - Greedy function approximation: A gradient boosting machine\n\n### 2. xgboost\n\n#### 2.0 \u6b63\u5219\n\nxgboost\u5c06\u6b63\u5219 $\\Omega(f)$ \u5f15\u5165\u8fdb\u635f\u5931\u51fd\u6570 $\\mathcal{L}(f)$\uff0c\u63a7\u5236\u4e86\u6811\u7684\u53f6\u5b50\u6570$\\|R_j\\|$\u548c\u53f6\u5b50\u503c$b_j$\uff0c\u8868\u793a\u5982\u4e0b\uff1a \n\n\\begin{align}\n \\mathcal{L}(f) &= \\displaystyle \\sum_{i=1}^{n} L(y_i, \\hat{y}_i + f(x_i)) + \\Omega(f) \\\\\n \\Omega(f) &= \\gamma \\|R_j\\| + \\frac{1}{2} \\lambda \\|b_j\\|^2\n\\end{align}\n\n\u4f46\u8fd9\u6837\u7eaf\u7406\u8bba\u7684\u516c\u5f0f\u662f\u65e0\u6cd5\u5e94\u7528\u5230\u5de5\u7a0b\u4e0a\u7684\uff0c\u8981\u7528\u5b83\u6307\u5bfc\u6811\u6a21\u578b\u7684\u751f\u6210\uff0c\u5fc5\u987b\u5c06\u5b83\u7684\u8fd9\u4e24\u4e2a\u52a0\u6cd5\u9879$\\sum L(f)$\u548c$\\Omega(f)$\u6574\u5408\uff0c\u624d\u53ef\u4ee5\u76f4\u63a5\u6307\u5bfc\u6a21\u578b\u751f\u6210\u3002\u8981\u6574\u5408\uff0c\u91cd\u70b9\u6709\u4e24\u4e2a\uff0c\u4e00\u662f\u6253\u5f00$\\sum L(f)$\uff0c\u653e\u51fa$f$\uff1b\u4e8c\u662f\u5c06$\\mathcal{L}(f)$\u8f6c\u6362\u6210\u6307\u5bfc\u6811\u751f\u957f\u7684\u51fd\u6570\u3002\n\n#### 2.1 \u6cf0\u52d2\u5c55\u5f00\n\n\u5bf9\u4e8e\u7b2c\u4e00\u4e2a\u95ee\u9898\uff0c\u6253\u5f00 $\\sum L(f)$\uff0cxgboost\u7684\u65b9\u6cd5\u662f\u5229\u7528[\u6cf0\u52d2\u5c55\u5f00](https://zh.wikipedia.org/wiki/\u6cf0\u52d2\u516c\u5f0f)\u6210\u4e8c\u9636\u591a\u9879\u5f0f:\n\n\\begin{align}\n \\mathcal{L} &\\approx \\sum_{i=1}^n \\left [ L(y_i, \\hat{y}_i) + g_i f(x_i) + \\frac{1}{2} h_i f^2(x_i) \\right ] + \\Omega(f) \\\\\n g_i &= \\frac{\\partial \\, L(y_i, \\hat{y}_i)}{\\partial \\hat{y}_i} \\\\\n h_i &= \\frac{\\partial^2 \\, L(y_i, \\hat{y}_i)}{\\partial \\hat{y}^2_i} 
\\\\\n\\end{align}\n\n\u4e3a\u4e86\u4fbf\u4e8e\u7406\u89e3\uff0c\u6211\u5c06\u63a8\u5bfc\u8fc7\u7a0b\u7ec6\u81f4\u8bf4\u4e0b\uff1a\n\n\u5229\u7528\u6cf0\u52d2\u516c\u5f0f\uff0c\u6211\u4eec\u5728\u5355\u70b9\u53ef\u5c06\u4e00\u4e2a\u51fd\u6570$f(x)$\u5c55\u5f00\u4e3a\u9ad8\u9636\u5bfc\u6570\u7684\u53e0\u52a0\uff1a \n\\begin{equation}\n \\sum _{n=0}^{\\infty }{\\frac {f^{(n)}(a)}{n!}}(x-a)^{n}\n\\end{equation}\n\n\u5177\u4f53\u5230\u4e8c\u9636\u5bfc\u6570\u4e3a\n\\begin{equation}\nf(x) \\approx f(a)+{\\frac {f'(a)}{1!}}(x-a)+{\\frac {f''(a)}{2!}}(x-a)^{2}\n\\end{equation}\n\n\u6211\u4eec\u505a\u70b9\u53d8\u5f62\u4ee5\u5229\u4e8e\u7406\u89e3\uff0c\u4ee4$t = \\hat{y}_i + f(x_i)$\uff0c\u5219\n\\begin{align}\n L(y_i, \\hat{y}_i + f(x_i)) &= L(y_i, t)\\\\\n &= L(t) \\quad \\text{\u7ed9\u5b9a$x_i$\uff0c\u5219$y_i$\u662f\u5b9a\u503c\uff0c\u5373\u5e38\u6570}\n\\end{align}\n\n\u63a5\u7740\u5229\u7528\u6cf0\u52d2\u5c06$L(t)$\u5728$\\hat{y}_i$\u70b9\u5c55\u5f00\uff1a\n\n\\begin{align}\nL(t) &\\approx L(\\hat{y}_i) + {\\frac {L'(\\hat{y}_i)}{1!}}(t-\\hat{y}_i)+{\\frac {f''(\\hat{y}_i)}{2!}}(t-\\hat{y}_i)^{2} \\\\\n &= L(\\hat{y}_i) + L'(\\hat{y}_i) f(x_i) + \\frac{1}{2}f''(\\hat{y}_i) f^{2}(x_i) \\\\\n &= L(y_i, \\hat{y}_i) + L'(\\hat{y}_i) f(x_i) + \\frac{1}{2}f''(\\hat{y}_i) f^{2}(x_i) \\\\\n &= L(y_i, \\hat{y}_i) + g_i f(x_i) + \\frac{1}{2} h_i f^2(x_i) \n\\end{align}\n\n#### 2.2 \u5f15\u5165\u6811\u6a21\u578b\n\n\u6cf0\u52d2\u89e3\u51b3\u4e86\u7b2c\u4e00\u4e2a\u95ee\u9898\uff0c\u6253\u5f00$L(f)$\uff0c\u5f97\u5230\uff1a\n\n\\begin{equation}\n \\mathcal{L} \\approx \\sum_{i=1}^n \\left [ L(y_i, \\hat{y}_i) + g_i f(x_i) + \\frac{1}{2} h_i f^2(x_i) \\right ] + \\Omega(f)\n\\end{equation}\n\n\u63a5\u7740\u9700\u8981\u56de\u7b54\u7b2c\u4e8c\u4e2a\u95ee\u9898\uff0c\u5728\u8fd9\u4e4b\u524d\uff0c\u6211\u4eec\u5148\u505a\u70b9\u9884\u5904\u7406\u3002\n\n\u5bf9\u4e8e\u635f\u5931\u51fd\u6570$\\mathcal{L}$\uff0c\u5e38\u6570\u9879$L(y_i, \\hat{y}_i)$\u662f\u53ef\u4ee5\u76f4\u63a5\u820d\u6389\u7684\uff0c\u8bb0\u4e3a\uff1a \n\n\\begin{equation}\n \\mathcal{L} = \\sum_{i=1}^n \\left [ g_i f(x_i) + \\frac{1}{2} h_i f^2(x_i) \\right ] + \\Omega(f)\n\\end{equation}\n\n\u5982\u4f55\u7528\u8fd9\u4e2a\u5916\u90e8\u7684\u635f\u5931\u51fd\u6570\u6307\u5bfc\u6811\u6a21\u578b\u5462\uff1f\n\n\u5728\u8bba\u6587Friedman - Greedy function approximation: A gradient boosting machine\u91cc\u63a8\u5bfcTreeBoost\u65b9\u6cd5\u65f6\u7ed9\u4e86\u4e2a\u5957\u8def\uff1a\n\n1. \u5c06\u51b3\u7b56\u6811\u7684\u6570\u5b66\u6a21\u578b$f(x)$\u4ee3\u5165\u5916\u90e8\u635f\u5931\u51fd\u6570$\\mathcal{L}$\u3002\n2. \u56fa\u5b9a$J$\uff0c\u5f97\u5230\u6700\u4f18\u7684$b_j$\u89e3\u6790\u5f0f\u3002\n3. 
\u53cd\u4ee3$b_j$\u56de$\\mathcal{L}$\uff0c\u5c06\u5b83\u4f5c\u4e3a\u6811\u751f\u6210\u7684\u8bc4\u4ef7\u51fd\u6570\uff08\u5185\u90e8\u635f\u5931\u51fd\u6570\uff09\u3002\n\n\u8fd9\u4e2a\u5957\u8def\u7684\u601d\u60f3\u662f\uff0c\u5bf9\u4e8e\u6811\u6a21\u578b\u6765\u8bf4\uff0c\u4e3b\u8981\u6709\u4e24\u4e2a\u53c2\u6570\uff1a\u53f6\u5b50\u6570\u3001\u53f6\u5b50\u503c\u3002\u901a\u8fc7\u56fa\u5b9a\u53f6\u5b50\u6570\uff0c\u5c31\u53ef\u4ee5\u5229\u7528\u5916\u90e8\u635f\u5931\u51fd\u6570$\\mathcal{L}(f)$\u5bfb\u4f18\u5230\u6700\u4f73\u7684\u53f6\u5b50\u503c\u8ba1\u7b97\u89e3\u6790\u5f0f\u3002\u5c06\u53c2\u6570\u53cd\u4ee3\uff0c\u5916\u90e8\u635f\u5931\u51fd\u6570\u5c31\u53d8\u6210\u53ea\u548c$x$\u76f8\u5173$\\mathcal{L}(x)$\uff0c\u8fd9\u65f6\u5b83\u5c31\u53ef\u4ee5\u4f5c\u4e3a\u6811\u751f\u957f\u7684\u8bc4\u4ef7\u51fd\u6570\u3002\u8fd9\u4e2a\u601d\u8def\u633a\u5de7\u5999\u7684\uff0c\u53ef\u80fd\u6709\u70b9\u7ed5\u3002\n\n\n##### 2.2.0 \u4ee3\u5165\u6811\u6a21\u578b\n\n\u6211\u4eec\u5c31\u5c06\u6811\u6a21\u578b$f(x)$\u7684\u6570\u5b66\u4ee3\u5165\uff0c\n\n\\begin{align}\n \\mathcal{L} &= \\sum_{i=1}^n \\left [ g_i f(x_i) + \\frac{1}{2} h_i f^2(x_i) \\right ] + \\Omega(f) \\\\\n &= \\sum_{i=1}^n \\left [ g_i \\sum_{j=1}^J b_j \\mathbf{1}(x_i \\in R_j) + \\frac{1}{2} h_i (\\sum_{j=1}^J \\color{red}{b_j} \\mathbf{1}(x_i \\in R_j))^{\\color{red}{2}} \\right ] + \\Omega(f) \\\\ \n &= \\sum_{i=1}^n \\left [ g_i \\sum_{j=1}^J b_j \\mathbf{1}(x_i \\in R_j) + \\frac{1}{2} h_i \\sum_{j=1}^J \\color{red}{b_j^2} \\mathbf{1}(x_i \\in R_j) \\right ] + \\Omega(f) \\quad \\text{\u56e0\u4e3a$x_i$\u53ea\u5c5e\u4e8e\u4e00\u4e2a$R_j$} \\\\\n &= \\sum_{j=1}^J \\color{blue}{\\sum_{i=1}^n \\mathbf{1}(x_i \\in R_j)} g_i b_j + \\frac{1}{2} \\color{blue}{\\sum_{j=1}^J \\sum_{i=1}^n \\mathbf{1}(x_i \\in R_j)} h_i b_j^2 + \\Omega(f) \\quad \\text{\u4e58\u6cd5\u4ea4\u6362} \\\\\n &\\text{\u4ee4} I_j = \\{ i \\, | \\, x_i \\in R_j \\} \\quad \\text{\u5373\u5c5e\u4e8e$R_j$\u7684\u5168\u90e8\u4e0b\u6807$i$} \\\\\n &= \\sum_{j=1}^J \\color{blue}{\\sum_{i \\in I_j}} g_i b_j + \\frac{1}{2} \\sum_{j=1}^J \\color{blue}{\\sum_{i \\in I_j}} h_i b_j^2 + \\Omega(f) \\\\\n &= \\sum_{j=1}^J ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} \\sum_{j=1}^J (\\sum_{i \\in I_j} h_i) b_j^2 + \\Omega(f) \\quad \\text{\u4e58\u6cd5\u5206\u914d\u5f8b}\\\\\n &= \\sum_{j=1}^J ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} \\sum_{j=1}^J (\\sum_{i \\in I_j} h_i) b_j^2 + \\gamma \\|R_j\\| + \\frac{1}{2} \\lambda \\|b_j\\|^2 \\quad \\text{\u4ee3\u5165\u6b63\u5219$\\Omega(f)$}\\\\\n &= \\sum_{j=1}^J ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} \\sum_{j=1}^J (\\sum_{i \\in I_j} h_i) b_j^2 + \\gamma \\|R_j\\| + \\frac{1}{2} \\lambda \\sum_{j=1}^J b_j^2 \\\\\n &= \\sum_{j=1}^J \\left ( ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} (\\sum_{i \\in I_j} h_i) b_j^2 + \\frac{1}{2} \\lambda b_j^2 \\right ) + \\gamma \\|R_j\\| \\\\\n &= \\sum_{j=1}^J \\left ( ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} (\\sum_{i \\in I_j} h_i + \\lambda) b_j^2 \\right ) + \\gamma \\|R_j\\| \\\\\n\\end{align}\n\n##### 2.2.1 $b_j$\u6700\u4f18\u89e3\u6790\u5f0f\n\n\\begin{align}\n b_j &= \\operatorname{arg \\, min}_{b_j} \\mathcal{L} \\\\\n &= \\operatorname{arg \\, min}_{b_j} \\sum_{j=1}^J \\left ( ( \\sum_{i \\in I_j} g_i ) b_j + \\frac{1}{2} (\\sum_{i \\in I_j} h_i + \\lambda) b_j^2 \\right ) + \\gamma \\|R_j\\| \\\\\n &\\approx \\sum_{j=1}^J \\operatorname{arg \\, min}_{b_j} \\left ( ( \\sum_{i \\in I_j} g_i ) \\color{red}{b_j} + \\frac{1}{2} (\\sum_{i \\in I_j} h_i + \\lambda) 
\\color{red}{b_j^2} \\right ) + \\gamma \\|R_j\\| \n\\end{align}\n\n\u5168\u5c40\u6700\u4f18\u4e0d\u597d\u6c42\uff0c\u6211\u4eec\u8f6c\u800c\u6c42\u5c40\u90e8\u6700\u4f18\u3002\u6ce8\u610f\u5230\u6bcf\u4e2a\u5c40\u90e8\u9879\u662f\u4e00\u4e2a\u4e8c\u9879\u5f0f\uff0c\u5b83\u7684\u6700\u5c0f\u70b9\u53ef\u7528\u516c\u5f0f\u76f4\u63a5\u5957\u51fa\uff1a\n\n\\begin{align}\n b^*_j &= - \\frac{\\sum_{i \\in I_j} g_i}{\\sum_{i \\in I_j} h_i + \\lambda} \\\\\n\\end{align}\n\n##### 2.2.2 \u6811\u751f\u957f\u7684\u8bc4\u4ef7\u51fd\u6570\n\n\u5c06$b_j$\u91cd\u4ee3\u56de$\\mathcal{L}$\u5f97\u5230\u8bc4\u4ef7\u51fd\u6570\uff1a\n\n\\begin{align}\n \\mathcal{L} &= - \\frac{1}{2} \\sum_{j=1}^J \\frac{(\\sum_{i \\in I_j} g_i)^2}{\\sum_{i \\in I_j} h_i + \\lambda} + \\gamma \\|R_j\\| \\\\\n &= - \\frac{1}{2} H + \\gamma T\n\\end{align}\n\n\u4e8e\u662f\u5f97\u5230\u6811\u751f\u6210\u7684\u5206\u5272\u4f9d\u636e\uff1a\n\n\\begin{align}\n \\mathcal{L}_{\\text{split}} &= \\mathcal{L} - \\mathcal{L}_L - \\mathcal{L}_R \\\\\n &= \\frac{1}{2} (H_L + H_R - H) + \\gamma (T - T_L - T_R) \\\\\n &= \\frac{1}{2} (H_L + H_R - H) - \\gamma \\\\\n\\end{align}\n\n#### 2.3 \u5c0f\u7ed3\n\n\u81f3\u6b64\uff0cxgboost\u5bf9GBDT\u6846\u67b6\u7684\u4e3b\u8981\u6539\u8fdb\u5c31\u8bf4\u660e\u5b8c\u4e86\u3002\u6211\u4eec\u68b3\u7406\u4e0b\uff0c\u5bf9\u4e8e\u4e00\u4e2a\u635f\u5931\u51fd\u6570 $L$\uff0c\u7528\u5b83\u89e3\u51fa\u8001\u6a21\u578b\u7684\u8f93\u51fa $\\hat{y}_i$ \u7684\u4e00\u9636\u5bfc $g_i$ \u548c\u4e8c\u9636\u5bfc $h_i$\u3002\u6709\u4e86\u8fd9\u4e24\u9636\u5bfc\u6570\uff0c\u5c31\u53ef\u4ee5\u505a\u4e3a\u8bc4\u4ef7\u51fd\u6570\u76f4\u63a5\u6307\u5bfc\u51b3\u7b56\u6811\u7684\u751f\u957f\uff0c\u540c\u65f6\u7b97\u51fa\u53f6\u5b50\u7684\u503c\uff0c\u4e8e\u662f\u5c31\u5f97\u5230\u4e86\u65b0\u6811\uff0c\u518d\u52a0\u56de\u8001\u6a21\u578b\u5f97\u5230\u65b0\u6a21\u578b\u3002\n\n\u6211\u4eec\u53ef\u4ee5\u770b\u5230\uff0c\u539f\u59cb\u7684GBDT\u53ea\u662f\u5c06Gradient Boost\u6846\u67b6\u4e2d\u7684\u5b66\u4e60\u6a21\u578b\u6307\u5b9a\u662f\u51b3\u7b56\u6811\uff0c\u8fd9\u65f6\u8fd8\u662f\u4e00\u4e2a\u901a\u7528\u6846\u67b6\u3002\u800cTreeBoost\u548cxbgoost\uff0c\u5219\u66f4\u8fdb\u4e00\u6b65\uff0c\u5c06\u51b3\u7b56\u6811\u7684\u6570\u5b66\u6a21\u578b\u4ee3\u5165Gradeint Boost\uff0c\u4ece\u800c\u89e3\u51fa\u76f4\u63a5\u6307\u5bfc\u51b3\u7b56\u6811\u751f\u957f\u7684\u89e3\u6790\u5f0f\u3002\u4e5f\u5c31\u8bf4\uff0c\u628a\u5916\u90e8\u7684\u635f\u5931\u51fd\u6570\uff0c\u5f15\u5165\u5230\u4e86\u51b3\u7b56\u6811\u7684\u5efa\u7acb\u8fc7\u7a0b\u4e2d\uff0c\u8fd9\u5176\u5b9e\u5c31\u662f\u4ece\u901a\u7528\u5230\u5b9a\u5236\u7684\u8fc7\u7a0b\u3002\u6240\u4ee5\uff0c\u53ef\u4ee5\u8bf4\uff0cTreeBoost\u548cxgboost\u7684\u672c\u8d28\uff0c\u5c31\u662f\u5bf9\u6811\u6a21\u578b\u8fdb\u884c\u5b9a\u5236\u4f18\u5316\u7684Gradient Boost\u3002\n\n### 3 \u5de5\u7a0b\u5b9e\u73b0\n\n#### 3.0 \u8bad\u7ec3\n\n\u6b63\u5982\u524d\u9762\u5c0f\u7ed3\u6240\u8a00\uff0cxgboost\u7684\u8bad\u7ec3\u4e3b\u8981\u662f\u4e09\u6b65\uff1a\n\n1. \u8001\u6a21\u578b\u7684\u9884\u6d4b\u503c$\\hat{y}_i$\uff1b\n2. \u8ba1\u7b97\u51fa\u4e24\u9636\u5bfc\u6570$g_i$\u548c$h_i$\uff1b\n3. 
\u5c06\u5bfc\u6570\u4fe1\u606f\u4f20\u5165\uff0c\u6307\u5bfc\u51b3\u7b56\u6811\u751f\u957f\u3002\n\n\u4e3b\u4f53\u4ee3\u7801\u4f4d\u4e8e `src/learner.cc`\uff0c\u5982\u4e0b\uff1a\n\n```C++\n288 void UpdateOneIter(int iter, DMatrix* train) override {\n289 CHECK(ModelInitialized())\n290 << \"Always call InitModel or LoadModel before update\";\n291 if (tparam.seed_per_iteration || rabit::IsDistributed()) {\n292 common::GlobalRandom().seed(tparam.seed * kRandSeedMagic + iter);\n293 }\n294 this->LazyInitDMatrix(train);\n295 this->PredictRaw(train, &preds_);\n296 obj_->GetGradient(preds_, train->info(), iter, &gpair_);\n297 gbm_->DoBoost(train, this->FindBufferOffset(train), &gpair_);\n298 }\n```\n\n\u7406\u8bba\u6bd4\u5de5\u7a0b\u597d\u8bb2\uff0c\u5de5\u7a0b\u5b9e\u73b0\u4f1a\u6709\u5927\u91cf\u7ec6\u8282\uff0c\u76f8\u5f53\u7e41\u7410\u3002\u73b0\u5728\u65f6\u95f4\u4e0d\u591a\uff0c\u6ca1\u6709\u5fc3\u529b\u9762\u9762\u4ff1\u5230\uff0c\u4e0d\u518d\u51c6\u5907\u7ec6\u5199\u4e86\u3002\n\n\u6211\u753b\u4e86\u7b80\u7248\u7684UML\u56fe\uff0c\u53ef\u80fd\u4e0d\u51c6\u786e\uff0c\u611f\u5174\u8da3\u7684\u670b\u53cb\u53ef\u4ee5\u53c2\u8003\u5b83\uff0c\u81ea\u884c\u4ece\u8fd9\u4e2a\u5165\u53e3\u53bb\u8ffd\u4e00\u904d\u3002\n\n\n```python\nSVG(\"./res/Main.svg\")\n```\n\n\n\n\n \n\n \n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "14d8584e854584f3755a0f4a6d4a1a97d5fc1d23", "size": 61365, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "machine_learning/tree/gbdt/xgboost/intro.ipynb", "max_stars_repo_name": "ningchi/book_notes", "max_stars_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-12-31T12:10:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T15:49:34.000Z", "max_issues_repo_path": "machine_learning/tree/gbdt/xgboost/intro.ipynb", "max_issues_repo_name": "ningchi/book_notes", "max_issues_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-12-05T13:04:14.000Z", "max_issues_repo_issues_event_max_datetime": "2017-12-07T16:24:50.000Z", "max_forks_repo_path": "machine_learning/tree/gbdt/xgboost/intro.ipynb", "max_forks_repo_name": "ningchi/book_notes", "max_forks_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-06-27T07:19:28.000Z", "max_forks_repo_forks_event_max_datetime": "2017-11-19T08:57:35.000Z", "avg_line_length": 165.4043126685, "max_line_length": 48123, "alphanum_fraction": 0.6119449197, "converted": true, "num_tokens": 4715, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5350984286266115, "lm_q2_score": 0.5467381519846138, "lm_q1q2_score": 0.29255872599718435}} {"text": "```python\nimport sys\nimport numpy as np\n```\n\n\n```python\nsys.path.insert(0, '../')\nsys.path.insert(0, '../detmodel/')\n```\n\n\n```python\nimport elements\n```\n\n\n```python\nimport si_mu_late\n```\n\n\n```python\ndet = si_mu_late.setup_detector('../cards/atlas_mm.yml')\n```\n\n\n```python\ndet\n```\n\n\n\n\n \n\n\n\n\n```python\nclass conf():\n def __init__(self, nevs,ismu,muxmin,muxmax,muymin,muymax,muamin,muamax,bkgr):\n self.nevs=nevs\n self.ismu=ismu\n self.muxmin=muxmin\n self.muxmax=muxmax\n self.muymin=muymin\n self.muymax=muymax\n self.muamin=muamin\n self.muamax=muamax\n self.bkgr=bkgr\n```\n\n\n```python\nmyconf = conf(1,1,0,0,0,0,0,0,100000)\n```\n\n\n```python\nsignals = si_mu_late.event(det,myconf)\n```\n\n 1\n 1\n 1\n 1\n 1\n 1\n 1\n 1\n\n\n\n```python\nsignals\n```\n\n\n```python\nimport sympy\n```\n\n\n```python\nl1 = sympy.Line3D( sympy.Point3D(-1/2, 0, 1), sympy.Point3D(-1/2, 5, 1) )\nl2 = sympy.Line3D(sympy.Point3D(5, -5, 1), sympy.Point3D(5, 5, 1))\n```\n\n\n```python\nl1.intersection(l2)\n```\n\n\n```python\ndet.get_signals()\n```\n\n\n```python\ndet.planes[1].z\n```\n\n\n```python\nmu = elements.Muon(0,0,0,0)\n```\n\n\n```python\ndet.planes[3].pass_muon(mu)\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "dd291176e741e05aaa0791c79b3b25a20cf3e44e", "size": 4354, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/.ipynb_checkpoints/R_and_D-checkpoint.ipynb", "max_stars_repo_name": "rateixei/si-mu-lator", "max_stars_repo_head_hexsha": "53505146c3a9098138d52999079cd45b92903bf9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-10-06T16:09:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-02T14:12:45.000Z", "max_issues_repo_path": "notebooks/R_and_D.ipynb", "max_issues_repo_name": "rateixei/si-mu-lator", "max_issues_repo_head_hexsha": "53505146c3a9098138d52999079cd45b92903bf9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2021-11-17T19:23:03.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-15T15:08:46.000Z", "max_forks_repo_path": "notebooks/R_and_D.ipynb", "max_forks_repo_name": "rateixei/si-mu-lator", "max_forks_repo_head_hexsha": "53505146c3a9098138d52999079cd45b92903bf9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.6991869919, "max_line_length": 90, "alphanum_fraction": 0.4735875057, "converted": true, "num_tokens": 485, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.6187804337438501, "lm_q2_score": 0.47268347662043286, "lm_q1q2_score": 0.29248728668674245}} {"text": "# snapReactors\n\nCopyright (c) Dan Kotlyar and CoRE group\n\n# Materials Container\n\n* This container stores materials information such as isotopic definition, abundances, uncertainties and material properties.\n* The container also stores, modifies or creates new h5 files for data storage.\n\n## Code: \n\n\n```python\nimport numpy as np\nimport sympy as sp\nfrom snapReactors.containers.materials import Material\nfrom snapReactors.containers.property import Constant, Table, Correlation\n```\n\n## Defining a new material\n\n### Define with Materials container\n\n1. 
Give name of material\n2. Give type of uncertainty, must be in ``Enum.UTYPE`` which are:\n - ABSOLUTE \n - RELATIVE\n - PERCENTAGE\n - NONE\n3. Give the composition type, must be in ``Enum.CTYPE`` which are:\n - RELATIVE\n - WEIGHT\n - ATOMIC\n4. Give the isotopes that define a material as a np.array\n5. Give the abundances for each isotope as a np.array\n6. The uncertainty value, reference, description, and properties are left as optional parameters.\n 1. Note that properties must be under ``ALLOWED_PROPERTIES``\n\n\n\n```python\nmat1 = Material(\"mat1\", 'NONE', 'WEIGHT', np.array([]), np.array([]), None, reference=None, description='This is an example', _properties=None)\nprint(mat1)\n```\n\n {'matName': ['mat1'], 'utype': [], 'ctype': [], 'abundances': [array([], dtype=float64)], 'isotopes': [array([], dtype=float64)], 'unc': [None], 'reference': [None], 'description': ['This is an example'], '_properties': [None]}\n\n\n#### Properties may be inputted as a list of properties or a single property\n\n\n```python\np1 = Constant(id='cv', value=1, unit= \"J/kg/K\", unc=None, ref=None, description=None)\nmat2 = Material(\"mat2\", 'NONE', 'WEIGHT', np.array([]), np.array([]), None, reference=None, description='This is an example', _properties=p1)\nprint(mat2)\n\np2 = Table('h', np.array([1, 2, 3, 4]), 'W/K/m^2', np.array([100, 200, 300, 400]), 'K', unc = np.array([.01, .01, .01, .01]), dependency2=None, dependencyUnit2=None, ref=None, description=None)\ncorr1 = \"T**2 + P + 1/2\"\nsyms1 = \"T, P\"\np3 = Correlation('h', corr1, syms1, 'W/K/m^2', np.array([300, 600]), 'K', np.array([10, 20]), 'Pa', unc=None, ref=None, description=None)\nproperties = [p2, p3]\nmat3 = Material(\"mat3\", 'NONE', 'WEIGHT', np.array([]), np.array([]), None, reference=None, description='This is an example', _properties=properties)\nprint(mat3)\n```\n\n {'matName': ['mat2'], 'utype': [], 'ctype': [], 'abundances': [array([], dtype=float64)], 'isotopes': [array([], dtype=float64)], 'unc': [None], 'reference': [None], 'description': ['This is an example'], '_properties': []}\n {'matName': ['mat3'], 'utype': [], 'ctype': [], 'abundances': [array([], dtype=float64)], 'isotopes': [array([], dtype=float64)], 'unc': [None], 'reference': [None], 'description': ['This is an example'], '_properties': [, ]}\n\n\n### Reading in isotopic defintion through a text file\n\nThe ``Materials.readData(filename)`` method is used to read in all relevant materials information through a text file. There are several rules to keep in mind for the structure of the text file:\n1. The ``Material Name`` must be indicated at the beginning of every material data section with the following format:\n - Material Name: Example Name\n2. The ``ctype``, ``utype``, and ``Number of isotopes`` must be indicated before the beginning of ``Isotopic Definition`` although their order doesn't matter. \n3. The ``Isotopic Definition`` must have a line between itself and where the definition begins. In the example below a dashed line is used to indicate this seperation.\n4. For each input a colon is used to seperate the keyword and input, for example:\n - utype: NONE\n5. The location of ``Reference`` and ``Description`` for a specific material must be placed before the beginning of the next ``Material Name`` if present. \n\nMaterial Property data can be read in by adding a ``Properties`` section. \n1. The location of ``Properties`` must be before the beginning of the next ``Material Name`` and is indicated with curly brackets:\n ```\n Properties: {\n \n }\n ```\n2. 
The formatting for the ``Properties`` information only requires there to be a colon in between the keyword and value, and that each keyword be on its own line\n ```\n type = const, table, corr\n id = property id\n unit = SI or imperial\n must have a \":\" between keyword and value i.e \"keyword: value\"\n each keyword must on its own line i.e \n keyword1: val1 \n keyword: val2\n\n array type inputs are denoted using \"[]\" i.e [1, 2] or [1 2] \n multidimensional arrays can be denoted using the \";\" matlab style i.e\n [1 2; 3 4] or [1, 2;\n 3, 4]\n or by using a newline i.e\n [1 2\n 3 4] \n Supports comments by preceeding a line with \"%\"\n Examples are included below\n }\n ```\n3. Structure of ``Properties`` input is outlined in Property Container documentation.\n\nOptional parameters such as reference or uncertainty values can be left out, however, warnings will be highlighted to the user. Two examples for the Material Property data are shown below.\n\n#### Example text file shown below\n\n\n```python\ntext_file = open('test.txt')\nfile_content = text_file.read()\nprint(file_content)\ntext_file.close()\n```\n\n Material Name: hasteC\n ctype: RELATIVE\n utype: NONE\n Number of isotopes: 33\n Isotopic Definition:\n -------------------\n 6000.03c 0.0007 \n 27059.03c 0.0125 \n 24050.03c 0.006952\n 24052.03c 0.1340624\n 24053.03c 0.0152016\n 24054.03c 0.003784\n 42092.03c 0.0249033\n 42094.03c 0.0156179\n 42095.03c 0.0269841\n 42096.03c 0.0283441\n 42097.03c 0.0162894\n 42098.03c 0.0412964\n 42100.03c 0.0165648\n 23050.03c 0.0000075\n 23051.03c 0.0029925\n 74180.03c 0.00048\n 74182.03c 0.106\n 74183.03c 0.05724\n 74184.03c 0.12256\n 74186.03c 0.11372\n 26054.03c 0.003360875\n 26056.03c 0.05275855\n 26057.03c 0.001218425\n 26058.03c 0.00016215\n 25055.03c 0.01 \n 14028.03c 0.00645561\n 14029.03c 0.00032795\n 14030.03c 0.00032795\n 28058.03c 0.1220600887\n 28060.03c 0.0470180183\n 28061.03c 0.0020438407\n 28062.03c 0.0065166585\n 28064.03c 0.0016596008\n \n Properties: {\n %property values for material\n %type = const, table, corr\n %id = property id\n %unit = SI or imperial\n %must have a \":\" between keyword and value i.e \"keyword: value\"\n %each keyword must on its own line i.e \n % keyword1: val1 \n % keyword: val2\n %array type inputs are denoted using \"[]\" i.e [1, 2] or [1 2] \n %multidimensional arrays can be denoted using the \";\" matlab style i.e\n % [1 2; 3 4] or [1, 2;\n % 3, 4]\n % or by using a newline i.e\n % [1 2\n 3 4] \n %Supports comments by preceeding a line with \"%\"\n %Examples are included below\n \n type:const\n id:cp\n unit:SI \n value:[1]\n unc:[.01]\n \n type:table \n id:h \n unit:imperial \n ref:NAA-SR-6160 \n dep1unit:K \n dep1values: [1 2]\n dep2unit:Pa \n dep2values: [.1 .2]\n value: [1.1 2.1\n 3.1 4.1]\n unc: [1 1\n 1 1]\n \n type:corr\n id:r \n unit:SI \n ref:NAA-SR-3120\n corr:T+P**2\n deps:T,P\n dep1unit:K \n dep2unit:Pa\n dep1range: [300,900] \n dep2range: [16,48]\n }\n Reference: NA-Examples\n Description: This is an example input file\n \n Material Name: hasteB\n ctype: RELATIVE\n utype: NONE\n Number of isotopes: 33\n Isotopic Definition:\n --------------------\n 6000.03c 0.0007 \n 27059.03c 0.0125 \n 24050.03c 0.006952\n 24052.03c 0.1340624\n 24053.03c 0.0152016\n 24054.03c 0.003784\n 42092.03c 0.0249033\n 42094.03c 0.0156179\n 42095.03c 0.0269841\n 42096.03c 0.0283441\n 42097.03c 0.0162894\n 42098.03c 0.0412964\n 42100.03c 0.0165648\n 23050.03c 0.0000075\n 23051.03c 0.0029925\n 74180.03c 0.00048\n 74182.03c 0.106\n 74183.03c 0.05724\n 74184.03c 
0.12256\n 74186.03c 0.11372\n 26054.03c 0.003360875\n 26056.03c 0.05275855\n 26057.03c 0.001218425\n 26058.03c 0.00016215\n 25055.03c 0.01 \n 14028.03c 0.00645561\n 14029.03c 0.00032795\n 14030.03c 0.00032795\n 28058.03c 0.1220600887\n 28060.03c 0.0470180183\n 28061.03c 0.0020438407\n 28062.03c 0.0065166585\n 28064.03c 0.0016596008\n \n Properties: {\n %property values for material\n %type = const, table, corr\n %id = property id\n %unit = SI or imperial\n %must have a \":\" between keyword and value i.e \"keyword: value\"\n %each keyword must on its own line i.e \n % keyword1: val1 \n % keyword: val2\n %array type inputs are denoted using \"[]\" i.e [1, 2] or [1 2] \n %multidimensional arrays can be denoted using the \";\" matlab style i.e\n % [1 2; 3 4] or [1, 2;\n % 3, 4]\n % or by using a newline i.e\n % [1 2\n 3 4] \n %Supports comments by preceeding a line with \"%\"\n %Examples are included below\n \n type:const\n id:cp\n unit:SI \n value:[1]\n unc:[.01]\n \n type:table \n id:h \n unit:imperial \n ref:NAA-SR-6160 \n dep1unit:K \n dep1values: [1 2]\n dep2unit:Pa \n dep2values: [.1 .2]\n value: [1.1 2.1\n 3.1 4.1]\n unc: [1 1\n 1 1]\n \n type:corr\n id:r \n unit:SI \n ref:NAA-SR-3120\n corr:T+P**2\n deps:T,P\n dep1unit:K \n dep2unit:Pa\n dep1range: [300,900] \n dep2range: [16,48]\n }\n Reference: NA-Examples\n Description: This is an example input file\n\n\n#### Materials definition returned by readData\n\n\n```python\nmats = Material.readData('test.txt')\nprint(mats)\n```\n\n [, ]\n\n\n c:\\Users\\iaguirre6\\Documents\\GitHub\\docTEST\\snapReactors\\containers\\property.py:669: InputFileSyntaxWarning: reference not given for cp const property @ line: 19\n warnings.warn(\"reference not given for {} {} property @\"\n c:\\Users\\iaguirre6\\Documents\\GitHub\\docTEST\\snapReactors\\containers\\property.py:868: InputFileSyntaxWarning: uncertainty not given for r corr property @ line: 38\n warnings.warn(\"uncertainty not given for {} {} property @\"\n\n\n## Updating properties to materials\n\n1. 
The properties must be from the following list: ['cp', 'cv', 'g', 'h', 'my', 'pr', 'r', 's', 'tc', 'v']\n\n\n```python\np4 = Constant(id='cv', value=1, unit= \"J/kg/K\", unc=None, ref=None, description=None)\nprint(p4)\n\nmat3.addproperty([p4])\nprint(mat3)\n\n```\n\n {'id': 'cv', 'dtype': , 'vtype': , 'value': array([1]), 'valueUnit': 'J/kg/K', 'unc': None, 'dependents': None, 'dependentsUnit': None, 'description': None, 'ref': None}\n {'matName': ['mat3'], 'utype': [], 'ctype': [], 'abundances': [array([], dtype=float64)], 'isotopes': [array([], dtype=float64)], 'unc': [None], 'reference': [None], 'description': ['This is an example'], '_properties': [, , ]}\n\n", "meta": {"hexsha": "10873bfe3a1384d858a0a34a7f2dda28e152ea66", "size": 16758, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "snapReactors/jupyter_notebooks/materials.ipynb", "max_stars_repo_name": "CORE-GATECH-GROUP/SNAP-REACTORS", "max_stars_repo_head_hexsha": "ce0d4502a4dea877264e18b7ea11b41c124c5a60", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "snapReactors/jupyter_notebooks/materials.ipynb", "max_issues_repo_name": "CORE-GATECH-GROUP/SNAP-REACTORS", "max_issues_repo_head_hexsha": "ce0d4502a4dea877264e18b7ea11b41c124c5a60", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-11-03T15:58:52.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-03T15:58:52.000Z", "max_forks_repo_path": "snapReactors/jupyter_notebooks/materials.ipynb", "max_forks_repo_name": "CORE-GATECH-GROUP/SNAP-REACTORS", "max_forks_repo_head_hexsha": "ce0d4502a4dea877264e18b7ea11b41c124c5a60", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-10-30T20:17:54.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-30T20:42:28.000Z", "avg_line_length": 33.9230769231, "max_line_length": 485, "alphanum_fraction": 0.5358634682, "converted": true, "num_tokens": 3894, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5544704502361149, "lm_q2_score": 0.5273165233795671, "lm_q1q2_score": 0.2923814301352114}} {"text": "# SageMaker Factorization Machine(FM) \ubc0f KNN\uc73c\ub85c \ucd94\ucc9c \uc2dc\uc2a4\ud15c \uad6c\ucd95\ud558\uae30\n\n\n**\ubcf8 \ub178\ud2b8\ubd81\uc740 \uae40\ub300\uadfc\ub2d8\uc758 \ub178\ud2b8\ubd81 \ub0b4\uc6a9\uc744 \ub9ce\uc774 \uac00\uc9c0\uace0 \uc654\uc2b5\ub2c8\ub2e4.**
The original MovieLens data has been replaced with airline review data, and the steps needed to prepare that data have been added.
The algorithm description and the code are largely the same as in the original notebook.
- https://github.com/daekeun-ml/recommendation-workshop/blob/master/0.Recommendation-System-FM-KNN.ipynb

*This notebook example explains how to build a recommendation system with SageMaker's Factorization Machines (FM), based on posts published on the AWS Machine Learning Blog.*

References
- [Build a movie recommender with factorization machines on Amazon SageMaker](https://aws.amazon.com/ko/blogs/machine-learning/build-a-movie-recommender-with-factorization-machines-on-amazon-sagemaker/)
- [Extending the Amazon SageMaker Factorization Machines algorithm to predict top-x recommendations](https://aws.amazon.com/ko/blogs/korea/extending-amazon-sagemaker-factorization-machines-algorithm-to-predict-top-x-recommendations/)
- [Factorization Machines paper](https://www.csie.ntu.edu.tw/~b97053/paper/Rendle2010FM.pdf)

#### This notebook performs the following steps.
1. Train, deploy, and run inference with the FM algorithm
- Download the airline review data
- Preprocess the data
    - Extract the required columns and build an interaction dataset (timestamp, user_id, item_id, rating)
    - Keep only users with at least 5 interactions
    - Convert users and items from strings to integers
    - Sort each user's data by time and split it 80:20 into training and validation sets
    - Examine how much of the full (training instances x number of columns) matrix is actually used (sparsity)
    - Convert to a one-hot encoded sparse matrix
    - Create the y label: 1 if the rating is 8 or higher, 0 otherwise
    - Convert to protobuf format and store in S3
- Train FM (as a binary classification problem)
    - Updated for Python SDK SageMaker 2.0
- Deploy a model endpoint
- Implement a custom serializer (newly implemented for Python SDK SageMaker 2.0)

2. Train, deploy, and run batch inference with KNN using the FM model parameters
- Download the FM model artifact
- Extract the model parameters ($w_{0}, \mathbf{w}, \mathbf{V}$)
- Rebuild the training/inference data as follows (see the sketch after this list)
    - Item latent matrix, used to train the k-NN model: $a_i = concat(V, \; w)$
    - User latent matrix, used for inference: $a_u = concat(V, \; 1)$
- Train the KNN model
- Run batch inference
- Produce Top-K airline recommendations
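To make the item and user latent matrices above concrete, here is a minimal NumPy sketch of the idea (an illustration added for this write-up, not code from the original notebook; the array names and toy sizes are assumptions):

```python
import numpy as np

# Toy FM parameters: k = 4 latent dimensions, 3 items, 2 users (made-up values)
k = 4
V_items = np.random.rand(3, k)   # rows of V that correspond to item features
V_users = np.random.rand(2, k)   # rows of V that correspond to user features
w_items = np.random.rand(3)      # linear weights w of the item features

# Item latent matrix used to train k-NN: a_i = concat(v_i, w_i)
a_items = np.hstack([V_items, w_items.reshape(-1, 1)])            # shape (3, k+1)

# User latent matrix used as k-NN queries: a_u = concat(v_u, 1)
a_users = np.hstack([V_users, np.ones((V_users.shape[0], 1))])    # shape (2, k+1)

# a_u . a_i = <v_u, v_i> + w_i, i.e. the user-item part of the FM score,
# so the nearest item vectors to a given a_u are that user's top recommendations.
scores = a_users @ a_items.T
print(scores.shape)   # (2, 3): one score per (user, item) pair
```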
# 1. Training and Deploying the FM Model on the Airline Dataset
---

In the MovieLens data, user_id and item_id are numeric and can be used for training as they are.
In the airline data, however, everything is given as strings.
We convert these strings to integers with a label encoder and use the encoded values throughout this notebook.
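As a quick illustration of what the label encoder does (a toy example added here, not part of the original notebook), string ids are mapped to consecutive integers:

```python
from sklearn import preprocessing

le = preprocessing.LabelEncoder()
le.fit(["adria-airways", "aegean-airlines", "adria-airways"])
print(le.classes_)                                          # ['adria-airways' 'aegean-airlines']
print(le.transform(["aegean-airlines", "adria-airways"]))   # [1 0]
```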
In this example, we only keep reviewers who left at least five reviews, which gives ratings from 1 to 10 for a total of 748 users and 293 airlines.

### Factorization Machine
---

### Overview

Typical recommendation problems take a matrix with users as rows, items as columns, and ratings as values, and apply Matrix Factorization to it. However, it is hard to plug the many metadata features found in real-world data directly into that formulation. The Factorization Machine (FM) algorithm extends the idea of Matrix Factorization so that metadata features are taken into account and the interactions between features are modeled automatically with linear computational complexity, which greatly reduces the feature engineering effort.

### Details

To take metadata features into account, the user and item are one-hot encoded and any additional features are concatenated as-is, turning the problem into a linear regression of the form `f(user, item, additional features) = rating`.

However, linear regression alone cannot capture interactions between features, so a term that models pairwise feature interactions is added, turning the model into a polynomial regression:

$$
\begin{align} \hat{y}(\mathbf{x}) = w_{0} + \sum_{i=1}^{d} w_{i} x_{i} + \sum_{i=1}^d \sum_{j=i+1}^d x_{i} x_{j} w_{ij}, \;\; x \in \mathbb{R}^d \tag {1} 
\end{align}
$$ 
$d$ is the number of features, and $x \in \mathbb{R}^d$ is the feature vector of a single sample.

Most recommendation datasets are sparse, however, so there is a cold-start problem, and the computation becomes very expensive as the number of additional features grows.
(For example, with 60,000 users, 5,000 items, and 5,000 additional features, a 70,000 x 70,000 matrix of interaction weights would have to be estimated.)

FM addresses these problems by using matrix factorization to express the interaction between each pair of features (e.g. user and item) as a dot product of latent vectors, and by reorganizing the equation the computational complexity is reduced from $O(kd^2)$ to $O(kd)$. (A few additional algebraic steps applied to equation (2) reduce the complexity to linear; see the paper for details.)

$$
\begin{align}
\hat{y}(\mathbf{x}) = w_{0} + \sum_{i=1}^{d} w_i x_i + \sum_{i=1}^d\sum_{j=i+1}^d x_{i} x_{j} \langle\mathbf{v}_i, \mathbf{v}_j\rangle \tag{2} 
\end{align}
$$

$$
\begin{align}
\langle \textbf{v}_i , \textbf{v}_{j} \rangle = \sum_{f=1}^k v_{i,f} v_{j,f},\; k: \text{dimension of latent feature} \tag{3}
\end{align}
$$

The model above is called a 2-way (degree = 2) FM. A generalized d-way FM also exists, but the 2-way FM is the most widely used, and SageMaker's FM is also a 2-way FM.

The parameter tuple learned by FM is ($w_{0}, \mathbf{w}, \mathbf{V}$), where
- $w_{0} \in \mathbb{R}$: global bias
- $\mathbf{w} \in \mathbb{R}^d$: weight of each feature $x_i$
- $\mathbf{V} \in \mathbb{R}^{d \times k}$: feature embedding matrix whose i-th row is $\mathbf{v}_i$


As the equations above show, FM is closed form with linear time complexity, so it is well suited to recommendation problems with many users, items, and metadata features.
The main training methods are Gradient Descent, ALS (Alternating Least Squares), and MCMC (Markov Chain Monte Carlo); AWS trains FM with Gradient Descent on a deep learning architecture using the MXNet framework.


```python
import sagemaker
import sagemaker.amazon.common as smac
from sagemaker import get_execution_role
# from sagemaker.predictor import json_deserializer
from sagemaker.deserializers import JSONDeserializer
# from sagemaker.amazon.amazon_estimator import get_image_uri
import numpy as np
from scipy.sparse import lil_matrix
import pandas as pd
import boto3, io, os, csv, json
```
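To make the reformulation concrete, here is a minimal NumPy sketch (added for illustration, not the SageMaker implementation) of the 2-way FM prediction evaluated in $O(kd)$ per sample; all parameter values are made-up toy numbers:

```python
import numpy as np

d, k = 6, 3                              # toy sizes: 6 features, 3 latent dimensions
rng = np.random.default_rng(0)
w0 = 0.1                                 # global bias
w = rng.normal(size=d)                   # linear weights
V = rng.normal(scale=0.1, size=(d, k))   # feature embedding matrix

x = np.zeros(d)
x[1] = 1.0                               # one-hot user feature
x[4] = 1.0                               # one-hot item feature

# Pairwise interaction term in O(kd):
# sum_{i<j} x_i x_j <v_i, v_j> = 0.5 * sum_f [ (sum_i v_{i,f} x_i)^2 - sum_i v_{i,f}^2 x_i^2 ]
interaction = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
y_hat = w0 + w @ x + interaction
print(y_hat)
```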
## Downloading the Airline Review Data

This dataset contains reviews of airlines.
The ratings cover seat comfort, cleanliness, drinks, food, lavatories, staff service, and an overall score; here we use the "overall" rating.
- Skytrax User Reviews Dataset (August 2nd, 2015)
    - https://github.com/quankiquanki/skytrax-reviews-dataset


```python
import os
data_dir = "airlines_data"
os.makedirs(data_dir, exist_ok=True)
!cd $data_dir && wget https://raw.githubusercontent.com/quankiquanki/skytrax-reviews-dataset/master/data/airline.csv
```

    --2020-10-03 09:21:10--  https://raw.githubusercontent.com/quankiquanki/skytrax-reviews-dataset/master/data/airline.csv
    Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.228.133
    Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.228.133|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 34752262 (33M) [text/plain]
    Saving to: ‘airline.csv.3’
    
    airline.csv.3       100%[===================>]  33.14M  16.9MB/s    in 2.0s    
    
    2020-10-03 09:21:15 (16.9 MB/s) - ‘airline.csv.3’ saved [34752262/34752262]
    


```python
import pandas as pd
airline_df = pd.read_csv(data_dir + '/airline.csv', parse_dates=['date'])
print("airline_df: ", airline_df.shape)
airline_df.head()
```

    airline_df:  (41396, 20)

|   | airline_name | link | title | author | author_country | date | content | aircraft | type_traveller | cabin_flown | route | overall_rating | seat_comfort_rating | cabin_staff_rating | food_beverages_rating | inflight_entertainment_rating | ground_service_rating | wifi_connectivity_rating | value_money_rating | recommended |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | adria-airways | /airline-reviews/adria-airways | Adria Airways customer review | D Ito | Germany | 2015-04-10 | Outbound flight FRA/PRN A319. 2 hours 10 min f... | NaN | NaN | Economy | NaN | 7.0 | 4.0 | 4.0 | 4.0 | 0.0 | NaN | NaN | 4.0 | 1 |
| 1 | adria-airways | /airline-reviews/adria-airways | Adria Airways customer review | Ron Kuhlmann | United States | 2015-01-05 | Two short hops ZRH-LJU and LJU-VIE. Very fast ... | NaN | NaN | Business Class | NaN | 10.0 | 4.0 | 5.0 | 4.0 | 1.0 | NaN | NaN | 5.0 | 1 |
| 2 | adria-airways | /airline-reviews/adria-airways | Adria Airways customer review | E Albin | Switzerland | 2014-09-14 | Flew Zurich-Ljubljana on JP365 newish CRJ900. ... | NaN | NaN | Economy | NaN | 9.0 | 5.0 | 5.0 | 4.0 | 0.0 | NaN | NaN | 5.0 | 1 |
| 3 | adria-airways | /airline-reviews/adria-airways | Adria Airways customer review | Tercon Bojan | Singapore | 2014-09-06 | Adria serves this 100 min flight from Ljubljan... | NaN | NaN | Business Class | NaN | 8.0 | 4.0 | 4.0 | 3.0 | 1.0 | NaN | NaN | 4.0 | 1 |
| 4 | adria-airways | /airline-reviews/adria-airways | Adria Airways customer review | L James | Poland | 2014-06-16 | WAW-SKJ Economy. No free snacks or drinks on t... | NaN | NaN | Economy | NaN | 4.0 | 4.0 | 2.0 | 1.0 | 2.0 | NaN | NaN | 2.0 | 0 |

### Extracting the Required Columns to Use as the Interaction Data Set

From the downloaded data we select and use only 'date', 'airline_name', 'author', and 'overall_rating'.
We also rename the columns as follows.
- airline_name --> item_id
- author --> user_id
- overall_rating --> rating



```python
def get_interaction_data(airline_df):
    '''
    Select only the required columns and rename them
    '''
    a_interactions_df = airline_df.copy()
    a_interactions_df = a_interactions_df[['date','airline_name', 'author', 'overall_rating']]
    a_interactions_df['author'] = a_interactions_df['author'].str.replace(" ","")
    a_interactions_df.rename(columns = {'airline_name':'item_id', 'author':'user_id',
                                        'overall_rating': 'rating'}, inplace = True) 
    print("a_interactions_df: ",a_interactions_df.shape)
    return a_interactions_df

a_interactions_df = get_interaction_data(airline_df)
a_interactions_df.head()
```

    a_interactions_df:  (41396, 4)

|   | date | item_id | user_id | rating |
|---|---|---|---|---|
| 0 | 2015-04-10 | adria-airways | DIto | 7.0 |
| 1 | 2015-01-05 | adria-airways | RonKuhlmann | 10.0 |
| 2 | 2014-09-14 | adria-airways | EAlbin | 9.0 |
| 3 | 2014-09-06 | adria-airways | TerconBojan | 8.0 |
| 4 | 2014-06-16 | adria-airways | LJames | 4.0 |

#### Selecting users
Remove null rows and keep only users with at least five interactions.


```python
def clean_data(air_df, user_limit=5):
    '''
    Remove null rows and keep only users with at least `user_limit` interactions
    '''
    df = air_df.copy()
    df = df.dropna() # drop rows with null values
    user_g = df.groupby('user_id').count() # aggregate per user

    u5_list = user_g[user_g.item_id >= user_limit].index # users with at least user_limit interactions
    df = df[df.user_id.isin(u5_list)] # keep only those users
    # air_df.groupby('user_id').count().sort_values(by='item_id', ascending=False)
    
    return df

air_df = clean_data(a_interactions_df)

```

#### Converting users and items to integers

- Convert the string ids to integers using label encoders
- The fitted label encoders are reused later to map ids back to the original names



```python
from sklearn import preprocessing
def make_labelencoder(a_interactions_df):
    '''
    Create label encoders for user and item
    '''
    le_user = preprocessing.LabelEncoder()
    le_item = preprocessing.LabelEncoder()
    le_user.fit(a_interactions_df.user_id)
    le_item.fit(a_interactions_df.item_id)
    print(f"num_user: {len(le_user.classes_)}")
    print(f"num_item: {len(le_item.classes_)}")
    return le_user, le_item

le_user, le_item = make_labelencoder(air_df)

def format_interaction_tb(air_df,le_user,le_item ):
    df = air_df.copy()
    user_id = le_user.transform(df.user_id)
    item_id = le_item.transform(df.item_id) 
    en_df = pd.DataFrame({'date': air_df.date, 'user_id':user_id, 'item_id':item_id, 
                          'rating':air_df.rating.astype(int)})
    
    return en_df

encode_df = format_interaction_tb(air_df,le_user,le_item ) 

```

    num_user: 748
    num_item: 293
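Because the fitted encoders are needed later to map numeric ids back to user and airline names, a quick reverse lookup can serve as a sanity check (illustrative only; the exact names returned depend on the data):

```python
# Map encoded ids back to the original strings (ids run 0..747 for users, 0..292 for items)
print(le_item.inverse_transform([0, 1]))   # first two airline names in sorted order
print(le_user.inverse_transform([0, 1]))   # first two user names in sorted order
```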
#### Splitting the data into training and test sets, 80:20 by time

For each user, the interactions are split into training and test sets in time order at an 80:20 ratio.
For example, if a user has 5 interactions, they are sorted by time and the first 4 go to the training set while the last 1 goes to the test set.
The set of unique user ids is the same in both: 748 users in train and in test.


```python
pd.options.display.max_rows = 5
def split_holdout(data, pct):
    '''
    Split into training and test sets, keeping the earliest pct fraction (by time) for training
    '''
    df = data.copy()
    # Rank per each subgroup, 'USER_ID'
    ranks = df.groupby('user_id').date.rank(pct=True, method='first')
    df = df.join((ranks> pct).to_frame('holdout'))
    
    df = df.drop('date', axis=1)
    
    holdout = df[df['holdout']].drop('holdout', axis=1)
    holdout = holdout.sample(frac=1, random_state = 100)
    train = df[~df['holdout']].drop('holdout', axis=1) 
    train = train.sample(frac=1, random_state = 100) 
    
    return train, holdout

user_airline_ratings_train, user_aireline_ratings_test = split_holdout(encode_df, pct=0.8)
```


```python
print("train unique user_id # : ", user_airline_ratings_train.user_id.nunique())
print("test unique user_id # :", user_aireline_ratings_test.user_id.nunique())
print(f"train shape: {user_airline_ratings_train.shape}")
print(f"test shape: {user_aireline_ratings_test.shape}")
```

    train unique user_id # :  748
    test unique user_id # : 748
    train shape: (4968, 3)
    test shape: (1577, 3)


### Computing the sparsity ratio

The training data has 4,968 interactions in total. You may wonder why this data is considered sparse: of the full matrix of 5,161,752 cells only 9,936 are used, so the sparsity is 99.8%.
- Number of features: 1,039
    - 747 user features plus 292 airline features (747 + 292 = 1,039)
- Number of training instances: 4,968
- Total matrix cells: 1,039 * 4,968 = 5,161,752
- Cells actually used: 4,968 * 2 = 9,936 (2 because each instance uses one cell for the user and one for the item)
- Matrix sparsity: 9,936 / 5,161,752 = 0.0019 (99.8%)


```python
nb_users = user_airline_ratings_train['user_id'].max()
nb_movies = user_airline_ratings_train['item_id'].max()
nb_features = nb_users + nb_movies

nb_ratings_train = len(user_airline_ratings_train.index)
nb_ratings_test = len(user_aireline_ratings_test.index)

total_matrix_size = nb_features * nb_ratings_train
filled_matrix_size = nb_ratings_train * 2 # 2 means that 1 is for user and 1 is for item

print("# of users: {}".format(nb_users))
print("# of airlines: {}".format(nb_movies))
print("Training Count: {}".format(nb_ratings_train))
print("Test Count: {}".format(nb_ratings_test))
print("Features (# of users + # of movies): {}".format(nb_features))
print("Sparsity: {:.6}%".format(str((1 - filled_matrix_size / total_matrix_size) * 100)))
```

    # of users: 747
    # of airlines: 292
    Training Count: 4968
    Test Count: 1577
    Features (# of users + # of movies): 1039
    Sparsity: 99.807%


#### Converting to a one-hot encoded sparse matrix

We now convert the data into the one-hot encoded sparse matrix format that FM expects.
A dense matrix would also work, but computation slows down as the data grows, so a sparse matrix is recommended.

Note that the airline dataset has no separate metadata features, so one-hot encoding is applied only to the 747 user and 292 airline features, giving a feature dimension of 747 + 292 = 1,039 after conversion.

**Also, this example simplifies the task to a binary classification problem on whether an airline is rated 8 or higher (i.e. y = 1 if the rating is 8 or above, and y = 0 if it is below 8).**

The cell below takes about 20 seconds, and the dimensions of the converted dataset are (number of ratings) x (number of features).


```python
pd.options.display.max_rows = 10
user_airline_ratings_train.rating.describe() # the median is 8; ratings of 8 and above are treated as positive, below 8 as negative
```

    count    4968.000000
    mean        7.090781
    std         2.485475
    min         1.000000
    25%         6.000000
    50%         8.000000
    75%         9.000000
    max        10.000000
    Name: rating, dtype: float64


```python
%%time
def loadDataset(df, lines, columns):
    
    # Convert the features into a one-hot encoded sparse matrix
    X = lil_matrix((lines, columns)).astype('float32')
    Y = []
    line = 0
    for line, (index, row) in enumerate(df.iterrows()):
        X[line,row['user_id']-1] = 1
        X[line, nb_users+(row['item_id']-1)] = 1
        
        # Create the Y label
        if int(row['rating']) >= 8:
            Y.append(1) # positive
        else:
            Y.append(0) # negative

    Y = np.array(Y).astype('float32')    
    return X,Y

X_train, Y_train = loadDataset(user_airline_ratings_train, nb_ratings_train, nb_features)
X_test, Y_test = loadDataset(user_aireline_ratings_test, nb_ratings_test, nb_features)
```

    CPU times: user 745 ms, sys: 0 ns, total: 745 ms
    Wall time: 744 ms


Check the overall shape of the training and test sets and the distribution of 1s and 0s in y.


```python
print(X_train.shape)
print(Y_train.shape)
assert X_train.shape == (nb_ratings_train, nb_features)
assert Y_train.shape == (nb_ratings_train, )
zero_labels = np.count_nonzero(Y_train)  # note: count_nonzero counts the 1-labels, so the "zeros"/"ones" labels in the printed message are swapped
print("Training labels: {} zeros, {} ones".format(zero_labels, nb_ratings_train-zero_labels))

print(X_test.shape)
print(Y_test.shape)
assert X_test.shape == (nb_ratings_test, nb_features)
assert Y_test.shape == (nb_ratings_test, )
zero_labels = np.count_nonzero(Y_test)
print("Test labels: {} zeros, {} ones".format(zero_labels, nb_ratings_test-zero_labels))
```

    (4968, 1039)
    (4968,)
    Training labels: 2718 zeros, 2250 ones
    (1577, 1039)
    (1577,)
    Test labels: 869 zeros, 708 ones


#### Converting to protobuf format and saving to S3


```python
import sagemaker
bucket = sagemaker.Session().default_bucket()
#bucket = '[YOUR-BUCKET]'
prefix = 'fm-hol'

if bucket.strip() == '':
    raise RuntimeError("bucket name is empty.")

train_key = 'train.protobuf'
train_prefix = '{}/{}'.format(prefix, 'train')

test_key = 'test.protobuf'
test_prefix = '{}/{}'.format(prefix, 'test')

output_prefix = 's3://{}/{}/output'.format(bucket, prefix)
```

The cell below takes about 15 seconds.


```python
%%time
def writeDatasetToProtobuf(X, bucket, prefix, key, d_type, Y=None):
    buf = io.BytesIO()
    if d_type == "sparse":
        smac.write_spmatrix_to_sparse_tensor(buf, X, labels=Y)
    else:
        smac.write_numpy_to_dense_tensor(buf, X, labels=Y)
        
    buf.seek(0)
    obj = '{}/{}'.format(prefix, key)
    boto3.resource('s3').Bucket(bucket).Object(obj).upload_fileobj(buf)
    return 's3://{}/{}'.format(bucket,obj)
    
fm_train_data_path = writeDatasetToProtobuf(X_train, bucket, train_prefix, train_key, "sparse", Y_train)    
fm_test_data_path  = writeDatasetToProtobuf(X_test, bucket, test_prefix, test_key, "sparse", Y_test)    
    
print("Training data S3 path: ", fm_train_data_path)
print("Test data S3 path: ", fm_test_data_path)
print("FM model output S3 path: ", output_prefix)
```

    Training data S3 path:  s3://sagemaker-ap-northeast-2-057716757052/fm-hol/train/train.protobuf
    Test data S3 path:  s3://sagemaker-ap-northeast-2-057716757052/fm-hol/test/test.protobuf
    FM model output S3 path:  s3://sagemaker-ap-northeast-2-057716757052/fm-hol/output
    CPU times: user 577 ms, sys: 34.2 ms, total: 611 ms
    Wall time: 1.77 s


## Training FM

This hands-on uses heuristic hyperparameter values without any hyperparameter tuning.

- `feature_dim`: the number of features; for this hands-on it must be set to 1,039.
- `mini_batch_size`: set to 256 in this hands-on.
- `num_factors`: the dimension of the latent factors; set to 64 in this hands-on.
- `epochs`: set to 100 epochs in this hands-on.


```python
# Install the latest version of SageMaker
!pip install --upgrade sagemaker
```

    Requirement already up-to-date: sagemaker in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (2.13.0)
    Requirement already satisfied, skipping upgrade: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from sagemaker) (1.5.0)
    Requirement already satisfied, skipping upgrade: packaging>=20.0 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from sagemaker) (20.3)
    Requirement already satisfied, skipping upgrade: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from sagemaker) (0.1.5)
    Requirement already satisfied, skipping upgrade: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from sagemaker) (1.14.59)
    Requirement already satisfied, skipping upgrade: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from sagemaker) (1.18.1)
    Requirement already satisfied, skipping upgrade: google-pasta in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from sagemaker) (0.2.0)
    Requirement already satisfied, skipping upgrade: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from sagemaker) (3.13.0)
    Requirement already satisfied, skipping upgrade: smdebug-rulesconfig==0.1.5 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from sagemaker) (0.1.5)
    Requirement already satisfied, skipping upgrade: zipp>=0.5 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker) (2.2.0)
    Requirement already satisfied, skipping upgrade: six in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker) (1.14.0)
    Requirement already satisfied, skipping upgrade: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker) (2.4.6)
    Requirement already satisfied, skipping upgrade: botocore<1.18.0,>=1.17.59 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker) (1.17.59)
    Requirement already satisfied, skipping upgrade: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker) (0.3.3)
    Requirement already satisfied, skipping upgrade: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker) (0.9.4)
    Requirement already satisfied, skipping upgrade: setuptools in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker) (46.1.3.post20200330)
    Requirement already satisfied, skipping upgrade: docutils<0.16,>=0.10 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from botocore<1.18.0,>=1.17.59->boto3>=1.14.12->sagemaker) (0.15.2)
    Requirement already satisfied, skipping upgrade: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from botocore<1.18.0,>=1.17.59->boto3>=1.14.12->sagemaker) (2.8.1)
    Requirement already satisfied, skipping upgrade: urllib3<1.26,>=1.20; python_version != "3.4" in /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages (from botocore<1.18.0,>=1.17.59->boto3>=1.14.12->sagemaker) (1.25.8)
    WARNING: You are using pip version 20.0.2; however, version 20.2.3 is available.
    You should consider upgrading via the '/home/ec2-user/anaconda3/envs/mxnet_p36/bin/python -m pip install --upgrade pip' command.


```python
from sagemaker import image_uris, session
fm_image = image_uris.retrieve("factorization-machines", session.Session().boto_region_name, version="latest")
```

    Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: latest.


Everything is now ready to start training; all you need to do is call the `fit()` method.
Training takes about 4 to 5 minutes (the pure training time is much shorter, but provisioning the training instance takes a fixed amount of time), and the accuracy on the validation dataset is about 66% with an F1 score of about 0.71 (see the output messages below).

```
[10/03/2020 09:24:01 INFO 140262587922240] #quality_metric: host=algo-1, test binary_classification_accuracy =0.667089410273
[10/03/2020 09:24:01 INFO 140262587922240] #quality_metric: host=algo-1, test binary_classification_cross_entropy =0.61242604422
[10/03/2020 09:24:01 INFO 140262587922240] #quality_metric: host=algo-1, test binary_f_1.000 =0.711696869852
```


```python
instance_type_training = 'ml.c4.xlarge'
fm = sagemaker.estimator.Estimator(image_uri = fm_image,
                                   role = get_execution_role(), 
                                   instance_count=1, 
                                   instance_type=instance_type_training,
                                   output_path=output_prefix,
                                   sagemaker_session=sagemaker.Session(),
                                   
                                  )

fm.set_hyperparameters(feature_dim=nb_features,
                      predictor_type='binary_classifier',
                      mini_batch_size=256,
                      num_factors=64,
                      epochs=100)
```


```python
%%time
fm.fit({'train': fm_train_data_path, 'test': fm_test_data_path})
```

    2020-10-03 09:21:22 Starting - Starting the training job...
    2020-10-03 09:21:23 Starting - Launching requested ML instances...
    2020-10-03 09:22:21 Starting - Preparing the instances for training......
    2020-10-03 09:23:18 Downloading - Downloading input data
    2020-10-03 09:23:18 Training - Downloading the training image...
    2020-10-03 09:23:51 Training - Training image download completed. Training in progress..\u001b[34mDocker entrypoint called with argument(s): train\u001b[0m
    \u001b[34mRunning default environment configuration script\u001b[0m
    \u001b[34m/opt/amazon/lib/python2.7/site-packages/pandas/util/nosetester.py:13: DeprecationWarning: Importing from numpy.testing.nosetester is deprecated, import from numpy.testing instead.
     from numpy.testing import nosetester\u001b[0m
    \u001b[34m[10/03/2020 09:23:53 INFO 140262587922240] Reading default configuration from /opt/amazon/lib/python2.7/site-packages/algorithm/resources/default-conf.json: {u'factors_lr': u'0.0001', u'linear_init_sigma': u'0.01', u'epochs': 1, u'_wd': u'1.0', u'_num_kv_servers': u'auto', u'use_bias': u'true', u'factors_init_sigma': u'0.001', u'_log_level': u'info', u'bias_init_method': u'normal', u'linear_init_method': u'normal', u'linear_lr': u'0.001', u'factors_init_method': u'normal', u'_tuning_objective_metric': u'', u'bias_wd': u'0.01', u'use_linear': u'true', u'bias_lr': u'0.1', u'mini_batch_size': u'1000', u'_use_full_symbolic': u'true', u'batch_metrics_publish_interval': u'500', u'bias_init_sigma': u'0.01', u'_num_gpus': u'auto', u'_data_format': u'record', u'factors_wd': u'0.00001', u'linear_wd': u'0.001', u'_kvstore': u'auto', u'_learning_rate': u'1.0', u'_optimizer': u'adam'}\u001b[0m
    \u001b[34m[10/03/2020 09:23:53 INFO 140262587922240] Reading provided configuration from /opt/ml/input/config/hyperparameters.json: {u'epochs': u'100', u'feature_dim': u'1039', u'mini_batch_size': u'256', u'predictor_type': u'binary_classifier', u'num_factors': u'64'}\u001b[0m
    \u001b[34m[10/03/2020 09:23:53 INFO 140262587922240] Final configuration: {u'factors_lr': u'0.0001', u'linear_init_sigma': u'0.01', u'epochs': u'100', u'feature_dim': u'1039', u'num_factors': u'64', u'_wd': u'1.0', u'_num_kv_servers': u'auto', u'use_bias': u'true', u'factors_init_sigma': u'0.001', u'_log_level': u'info', u'bias_init_method': u'normal', u'linear_init_method': u'normal', u'linear_lr': u'0.001', u'factors_init_method': u'normal', 
u'_tuning_objective_metric': u'', u'bias_wd': u'0.01', u'use_linear': u'true', u'bias_lr': u'0.1', u'mini_batch_size': u'256', u'_use_full_symbolic': u'true', u'batch_metrics_publish_interval': u'500', u'predictor_type': u'binary_classifier', u'bias_init_sigma': u'0.01', u'_num_gpus': u'auto', u'_data_format': u'record', u'factors_wd': u'0.00001', u'linear_wd': u'0.001', u'_kvstore': u'auto', u'_learning_rate': u'1.0', u'_optimizer': u'adam'}\u001b[0m\n \u001b[34m[10/03/2020 09:23:53 WARNING 140262587922240] Loggers have already been setup.\u001b[0m\n \u001b[34mProcess 1 is a worker.\u001b[0m\n \u001b[34m[10/03/2020 09:23:53 INFO 140262587922240] Using default worker.\u001b[0m\n \u001b[34m[2020-10-03 09:23:53.920] [tensorio] [warning] TensorIO is already initialized; ignoring the initialization routine.\u001b[0m\n \u001b[34m[2020-10-03 09:23:53.921] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 0, \"duration\": 3, \"num_examples\": 1, \"num_bytes\": 16384}\u001b[0m\n \u001b[34m[10/03/2020 09:23:53 INFO 140262587922240] nvidia-smi took: 0.0251958370209 secs to identify 0 gpus\u001b[0m\n \u001b[34m[10/03/2020 09:23:53 INFO 140262587922240] Number of GPUs being used: 0\u001b[0m\n \u001b[34m[10/03/2020 09:23:53 INFO 140262587922240] [Sparse network] Building a sparse network.\u001b[0m\n \u001b[34m[10/03/2020 09:23:53 INFO 140262587922240] Create Store: local\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"initialize.time\": {\"count\": 1, \"max\": 33.8749885559082, \"sum\": 33.8749885559082, \"min\": 33.8749885559082}}, \"EndTime\": 1601717033.955284, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717033.918091}\n \u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 1, \"sum\": 1.0, \"min\": 1}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 0, \"sum\": 0.0, \"min\": 0}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 0, \"sum\": 0.0, \"min\": 0}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1, \"sum\": 1.0, \"min\": 1}, \"Total Records Seen\": {\"count\": 1, \"max\": 256, \"sum\": 256.0, \"min\": 256}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 256, \"sum\": 256.0, \"min\": 256}, \"Reset Count\": {\"count\": 1, \"max\": 1, \"sum\": 1.0, \"min\": 1}}, \"EndTime\": 1601717033.955481, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"init_train_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717033.955431}\n \u001b[0m\n \u001b[34m[09:23:53] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.1.x.203343.0/AL2012/generic-flavor/src/src/kvstore/./kvstore_local.h:280: Warning: non-default weights detected during kvstore pull. This call has been ignored. Please make sure to use row_sparse_pull with row_ids.\u001b[0m\n \u001b[34m[09:23:53] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.1.x.203343.0/AL2012/generic-flavor/src/src/kvstore/./kvstore_local.h:280: Warning: non-default weights detected during kvstore pull. This call has been ignored. 
Please make sure to use row_sparse_pull with row_ids.\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=0, batch=0 train binary_classification_accuracy =0.50390625\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=0, batch=0 train binary_classification_cross_entropy =0.692751169205\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=0, batch=0 train binary_f_1.000 =0.644257703081\u001b[0m\n \u001b[34m[2020-10-03 09:23:54.072] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 2, \"duration\": 96, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=0, train binary_classification_accuracy =0.5240234375\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=0, train binary_classification_cross_entropy =0.691170358658\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=0, train binary_f_1.000 =0.655839570682\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"epochs\": {\"count\": 1, \"max\": 100, \"sum\": 100.0, \"min\": 100}, \"update.time\": {\"count\": 1, \"max\": 117.68102645874023, \"sum\": 117.68102645874023, \"min\": 117.68102645874023}}, \"EndTime\": 1601717034.073378, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717033.955359}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #progress_metric: host=algo-1, completed 1 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 21, \"sum\": 21.0, \"min\": 21}, \"Total Records Seen\": {\"count\": 1, \"max\": 5224, \"sum\": 5224.0, \"min\": 5224}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}}, \"EndTime\": 1601717034.073656, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 0}, \"StartTime\": 1601717033.95566}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=42039.6770999 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=1, batch=0 train binary_classification_accuracy =0.48046875\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=1, batch=0 train binary_classification_cross_entropy =0.692551374435\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=1, batch=0 train binary_f_1.000 =0.64907651715\u001b[0m\n \u001b[34m[2020-10-03 09:23:54.149] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 4, \"duration\": 73, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, 
epoch=1, train binary_classification_accuracy =0.4927734375\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=1, train binary_classification_cross_entropy =0.695379242301\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=1, train binary_f_1.000 =0.545422720112\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 76.16901397705078, \"sum\": 76.16901397705078, \"min\": 76.16901397705078}}, \"EndTime\": 1601717034.150192, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717034.073468}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #progress_metric: host=algo-1, completed 2 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 41, \"sum\": 41.0, \"min\": 41}, \"Total Records Seen\": {\"count\": 1, \"max\": 10192, \"sum\": 10192.0, \"min\": 10192}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 3, \"sum\": 3.0, \"min\": 3}}, \"EndTime\": 1601717034.150534, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 1}, \"StartTime\": 1601717034.073986}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=64774.4793808 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=2, batch=0 train binary_classification_accuracy =0.48046875\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=2, batch=0 train binary_classification_cross_entropy =0.702881336212\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=2, batch=0 train binary_f_1.000 =0.64907651715\u001b[0m\n \u001b[34m[2020-10-03 09:23:54.233] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 6, \"duration\": 81, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=2, train binary_classification_accuracy =0.53125\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=2, train binary_classification_cross_entropy =0.688527747989\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=2, train binary_f_1.000 =0.693721286371\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 83.60815048217773, \"sum\": 83.60815048217773, \"min\": 83.60815048217773}}, \"EndTime\": 1601717034.234546, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717034.150275}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #progress_metric: host=algo-1, completed 3 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": 
{\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 61, \"sum\": 61.0, \"min\": 61}, \"Total Records Seen\": {\"count\": 1, \"max\": 15160, \"sum\": 15160.0, \"min\": 15160}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 4, \"sum\": 4.0, \"min\": 4}}, \"EndTime\": 1601717034.234802, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 2}, \"StartTime\": 1601717034.150904}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=59090.1163579 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=3, batch=0 train binary_classification_accuracy =0.48046875\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=3, batch=0 train binary_classification_cross_entropy =0.691893577576\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=3, batch=0 train binary_f_1.000 =0.64907651715\u001b[0m\n \u001b[34m[2020-10-03 09:23:54.315] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 8, \"duration\": 77, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=3, train binary_classification_accuracy =0.5513671875\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=3, train binary_classification_cross_entropy =0.686648726463\u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #quality_metric: host=algo-1, epoch=3, train binary_f_1.000 =0.700950397084\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 80.67798614501953, \"sum\": 80.67798614501953, \"min\": 80.67798614501953}}, \"EndTime\": 1601717034.315773, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717034.23462}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] #progress_metric: host=algo-1, completed 4 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 81, \"sum\": 81.0, \"min\": 81}, \"Total Records Seen\": {\"count\": 1, \"max\": 20128, \"sum\": 20128.0, \"min\": 20128}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 5, \"sum\": 5.0, \"min\": 5}}, \"EndTime\": 1601717034.316064, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 3}, \"StartTime\": 1601717034.235061}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:54 INFO 140262587922240] 
[Condensed training-log output: SageMaker factorization-machines job running on host algo-1, epochs 4–30 of 100. Each epoch processed 20 batches (4,968 records, 317,952 bytes) at roughly 48,000–75,000 records/second. End-of-epoch train metrics improved steadily over this stretch: binary_classification_accuracy rose from 0.558 (epoch 4) to 0.730 (epoch 30), binary_classification_cross_entropy fell from 0.684 to 0.618, and binary_f_1.000 increased from 0.704 to 0.774.]
Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 621, \"sum\": 621.0, \"min\": 621}, \"Total Records Seen\": {\"count\": 1, \"max\": 154264, \"sum\": 154264.0, \"min\": 154264}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 32, \"sum\": 32.0, \"min\": 32}}, \"EndTime\": 1601717036.421907, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 30}, \"StartTime\": 1601717036.35918}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=79056.7441098 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=31, batch=0 train binary_classification_accuracy =0.7421875\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=31, batch=0 train binary_classification_cross_entropy =0.618287622929\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=31, batch=0 train binary_f_1.000 =0.775510204082\u001b[0m\n \u001b[34m[2020-10-03 09:23:56.483] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 64, \"duration\": 60, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=31, train binary_classification_accuracy =0.7330078125\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=31, train binary_classification_cross_entropy =0.615581381321\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=31, train binary_f_1.000 =0.776231789164\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 62.44492530822754, \"sum\": 62.44492530822754, \"min\": 62.44492530822754}}, \"EndTime\": 1601717036.484601, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717036.421773}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #progress_metric: host=algo-1, completed 32 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 641, \"sum\": 641.0, \"min\": 641}, \"Total Records Seen\": {\"count\": 1, \"max\": 159232, \"sum\": 159232.0, \"min\": 159232}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 33, \"sum\": 33.0, \"min\": 33}}, \"EndTime\": 1601717036.485007, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 31}, \"StartTime\": 1601717036.422127}\n \u001b[0m\n 
\u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=78855.9945202 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=32, batch=0 train binary_classification_accuracy =0.7421875\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=32, batch=0 train binary_classification_cross_entropy =0.615562796593\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=32, batch=0 train binary_f_1.000 =0.775510204082\u001b[0m\n \u001b[34m[2020-10-03 09:23:56.546] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 66, \"duration\": 59, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=32, train binary_classification_accuracy =0.73515625\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=32, train binary_classification_cross_entropy =0.613091906905\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=32, train binary_f_1.000 =0.777413000657\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 61.466217041015625, \"sum\": 61.466217041015625, \"min\": 61.466217041015625}}, \"EndTime\": 1601717036.546832, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717036.484737}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #progress_metric: host=algo-1, completed 33 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 661, \"sum\": 661.0, \"min\": 661}, \"Total Records Seen\": {\"count\": 1, \"max\": 164200, \"sum\": 164200.0, \"min\": 164200}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 34, \"sum\": 34.0, \"min\": 34}}, \"EndTime\": 1601717036.547028, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 32}, \"StartTime\": 1601717036.485339}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=80389.8962281 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=33, batch=0 train binary_classification_accuracy =0.7421875\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=33, batch=0 train binary_classification_cross_entropy =0.612840771675\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=33, batch=0 train binary_f_1.000 =0.775510204082\u001b[0m\n \u001b[34m[2020-10-03 09:23:56.611] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 68, \"duration\": 63, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] 
#quality_metric: host=algo-1, epoch=33, train binary_classification_accuracy =0.7373046875\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=33, train binary_classification_cross_entropy =0.610608968139\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=33, train binary_f_1.000 =0.778673687675\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 65.0169849395752, \"sum\": 65.0169849395752, \"min\": 65.0169849395752}}, \"EndTime\": 1601717036.612288, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717036.5469}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #progress_metric: host=algo-1, completed 34 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 681, \"sum\": 681.0, \"min\": 681}, \"Total Records Seen\": {\"count\": 1, \"max\": 169168, \"sum\": 169168.0, \"min\": 169168}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 35, \"sum\": 35.0, \"min\": 35}}, \"EndTime\": 1601717036.612539, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 33}, \"StartTime\": 1601717036.547239}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=75928.5591456 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=34, batch=0 train binary_classification_accuracy =0.74609375\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=34, batch=0 train binary_classification_cross_entropy =0.610121846199\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=34, batch=0 train binary_f_1.000 =0.776632302405\u001b[0m\n \u001b[34m[2020-10-03 09:23:56.675] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 70, \"duration\": 61, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=34, train binary_classification_accuracy =0.7392578125\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=34, train binary_classification_cross_entropy =0.608132833242\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=34, train binary_f_1.000 =0.780029658922\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 63.517093658447266, \"sum\": 63.517093658447266, \"min\": 63.517093658447266}}, \"EndTime\": 1601717036.676356, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717036.612365}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #progress_metric: host=algo-1, completed 35 % of epochs\u001b[0m\n 
\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 701, \"sum\": 701.0, \"min\": 701}, \"Total Records Seen\": {\"count\": 1, \"max\": 174136, \"sum\": 174136.0, \"min\": 174136}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 36, \"sum\": 36.0, \"min\": 36}}, \"EndTime\": 1601717036.676565, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 34}, \"StartTime\": 1601717036.612809}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=77778.1180484 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=35, batch=0 train binary_classification_accuracy =0.74609375\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=35, batch=0 train binary_classification_cross_entropy =0.607406616211\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=35, batch=0 train binary_f_1.000 =0.77508650519\u001b[0m\n \u001b[34m[2020-10-03 09:23:56.742] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 72, \"duration\": 64, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=35, train binary_classification_accuracy =0.74140625\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=35, train binary_classification_cross_entropy =0.605663749576\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=35, train binary_f_1.000 =0.781301618764\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 66.03312492370605, \"sum\": 66.03312492370605, \"min\": 66.03312492370605}}, \"EndTime\": 1601717036.742848, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717036.676426}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #progress_metric: host=algo-1, completed 36 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 721, \"sum\": 721.0, \"min\": 721}, \"Total Records Seen\": {\"count\": 1, \"max\": 179104, \"sum\": 179104.0, \"min\": 179104}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 37, \"sum\": 37.0, \"min\": 37}}, \"EndTime\": 1601717036.743014, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 35}, 
\"StartTime\": 1601717036.676783}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=74889.4026114 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=36, batch=0 train binary_classification_accuracy =0.74609375\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=36, batch=0 train binary_classification_cross_entropy =0.604695320129\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=36, batch=0 train binary_f_1.000 =0.77508650519\u001b[0m\n \u001b[34m[2020-10-03 09:23:56.803] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 74, \"duration\": 58, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=36, train binary_classification_accuracy =0.74375\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=36, train binary_classification_cross_entropy =0.603201976418\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=36, train binary_f_1.000 =0.783140495868\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 60.804128646850586, \"sum\": 60.804128646850586, \"min\": 60.804128646850586}}, \"EndTime\": 1601717036.80405, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717036.742912}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #progress_metric: host=algo-1, completed 37 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 741, \"sum\": 741.0, \"min\": 741}, \"Total Records Seen\": {\"count\": 1, \"max\": 184072, \"sum\": 184072.0, \"min\": 184072}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 38, \"sum\": 38.0, \"min\": 38}}, \"EndTime\": 1601717036.804289, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 36}, \"StartTime\": 1601717036.743218}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=81191.1483645 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=37, batch=0 train binary_classification_accuracy =0.74609375\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=37, batch=0 train binary_classification_cross_entropy =0.601988494396\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=37, batch=0 train binary_f_1.000 =0.77508650519\u001b[0m\n \u001b[34m[2020-10-03 09:23:56.869] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 76, \"duration\": 64, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n 
\u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=37, train binary_classification_accuracy =0.7458984375\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=37, train binary_classification_cross_entropy =0.600747770071\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=37, train binary_f_1.000 =0.784638304916\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 65.92392921447754, \"sum\": 65.92392921447754, \"min\": 65.92392921447754}}, \"EndTime\": 1601717036.870537, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717036.804118}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #progress_metric: host=algo-1, completed 38 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 761, \"sum\": 761.0, \"min\": 761}, \"Total Records Seen\": {\"count\": 1, \"max\": 189040, \"sum\": 189040.0, \"min\": 189040}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 39, \"sum\": 39.0, \"min\": 39}}, \"EndTime\": 1601717036.870764, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 37}, \"StartTime\": 1601717036.804582}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=74879.9834409 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=38, batch=0 train binary_classification_accuracy =0.75\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=38, batch=0 train binary_classification_cross_entropy =0.599286496639\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=38, batch=0 train binary_f_1.000 =0.777777777778\u001b[0m\n \u001b[34m[2020-10-03 09:23:56.940] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 78, \"duration\": 67, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=38, train binary_classification_accuracy =0.7482421875\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=38, train binary_classification_cross_entropy =0.598301330209\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=38, train binary_f_1.000 =0.786058091286\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 69.91791725158691, \"sum\": 69.91791725158691, \"min\": 69.91791725158691}}, \"EndTime\": 1601717036.94099, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717036.87061}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #progress_metric: host=algo-1, 
completed 39 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 781, \"sum\": 781.0, \"min\": 781}, \"Total Records Seen\": {\"count\": 1, \"max\": 194008, \"sum\": 194008.0, \"min\": 194008}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 40, \"sum\": 40.0, \"min\": 40}}, \"EndTime\": 1601717036.941183, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 38}, \"StartTime\": 1601717036.871042}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=70715.9466507 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=39, batch=0 train binary_classification_accuracy =0.75390625\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=39, batch=0 train binary_classification_cross_entropy =0.596589565277\u001b[0m\n \u001b[34m[10/03/2020 09:23:56 INFO 140262587922240] #quality_metric: host=algo-1, epoch=39, batch=0 train binary_f_1.000 =0.780487804878\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.004] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 80, \"duration\": 61, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=39, train binary_classification_accuracy =0.750390625\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=39, train binary_classification_cross_entropy =0.595862871408\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=39, train binary_f_1.000 =0.787495843033\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 63.200950622558594, \"sum\": 63.200950622558594, \"min\": 63.200950622558594}}, \"EndTime\": 1601717037.004615, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717036.941053}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #progress_metric: host=algo-1, completed 40 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 801, \"sum\": 801.0, \"min\": 801}, \"Total Records Seen\": {\"count\": 1, \"max\": 198976, \"sum\": 198976.0, \"min\": 198976}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 41, \"sum\": 41.0, \"min\": 41}}, \"EndTime\": 1601717037.004804, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": 
\"factorization-machines\", \"epoch\": 39}, \"StartTime\": 1601717036.941389}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=78211.6427022 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=40, batch=0 train binary_classification_accuracy =0.75390625\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=40, batch=0 train binary_classification_cross_entropy =0.593898296356\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=40, batch=0 train binary_f_1.000 =0.780487804878\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.066] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 82, \"duration\": 59, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=40, train binary_classification_accuracy =0.7517578125\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=40, train binary_classification_cross_entropy =0.593432664871\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=40, train binary_f_1.000 =0.788131355226\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 61.9959831237793, \"sum\": 61.9959831237793, \"min\": 61.9959831237793}}, \"EndTime\": 1601717037.067035, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.00467}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #progress_metric: host=algo-1, completed 41 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 821, \"sum\": 821.0, \"min\": 821}, \"Total Records Seen\": {\"count\": 1, \"max\": 203944, \"sum\": 203944.0, \"min\": 203944}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 42, \"sum\": 42.0, \"min\": 42}}, \"EndTime\": 1601717037.067235, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 40}, \"StartTime\": 1601717037.00501}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=79702.3484331 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=41, batch=0 train binary_classification_accuracy =0.7578125\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=41, batch=0 train binary_classification_cross_entropy =0.591212749481\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=41, batch=0 train binary_f_1.000 =0.783216783217\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.169] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 84, \"duration\": 99, \"num_examples\": 20, 
\"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=41, train binary_classification_accuracy =0.7548828125\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=41, train binary_classification_cross_entropy =0.591010868549\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=41, train binary_f_1.000 =0.790309106099\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 102.12016105651855, \"sum\": 102.12016105651855, \"min\": 102.12016105651855}}, \"EndTime\": 1601717037.169599, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.0671}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #progress_metric: host=algo-1, completed 42 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 841, \"sum\": 841.0, \"min\": 841}, \"Total Records Seen\": {\"count\": 1, \"max\": 208912, \"sum\": 208912.0, \"min\": 208912}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 43, \"sum\": 43.0, \"min\": 43}}, \"EndTime\": 1601717037.16983, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 41}, \"StartTime\": 1601717037.06745}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=48474.5121609 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=42, batch=0 train binary_classification_accuracy =0.7578125\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=42, batch=0 train binary_classification_cross_entropy =0.588533520699\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=42, batch=0 train binary_f_1.000 =0.783216783217\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.244] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 86, \"duration\": 72, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=42, train binary_classification_accuracy =0.7572265625\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=42, train binary_classification_cross_entropy =0.588597682118\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=42, train binary_f_1.000 =0.791896869245\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 74.98002052307129, \"sum\": 74.98002052307129, \"min\": 74.98002052307129}}, \"EndTime\": 1601717037.24508, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.169663}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 
140262587922240] #progress_metric: host=algo-1, completed 43 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 861, \"sum\": 861.0, \"min\": 861}, \"Total Records Seen\": {\"count\": 1, \"max\": 213880, \"sum\": 213880.0, \"min\": 213880}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 44, \"sum\": 44.0, \"min\": 44}}, \"EndTime\": 1601717037.24546, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 42}, \"StartTime\": 1601717037.170044}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=65718.7173477 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=43, batch=0 train binary_classification_accuracy =0.765625\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=43, batch=0 train binary_classification_cross_entropy =0.585860848427\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=43, batch=0 train binary_f_1.000 =0.788732394366\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.318] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 88, \"duration\": 68, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=43, train binary_classification_accuracy =0.76015625\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=43, train binary_classification_cross_entropy =0.586193320155\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=43, train binary_f_1.000 =0.794028849379\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 73.3788013458252, \"sum\": 73.3788013458252, \"min\": 73.3788013458252}}, \"EndTime\": 1601717037.319265, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.245242}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #progress_metric: host=algo-1, completed 44 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 881, \"sum\": 881.0, \"min\": 881}, \"Total Records Seen\": {\"count\": 1, \"max\": 218848, \"sum\": 218848.0, \"min\": 218848}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 45, \"sum\": 45.0, \"min\": 45}}, \"EndTime\": 1601717037.31947, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": 
\"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 43}, \"StartTime\": 1601717037.245855}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=67374.6739051 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=44, batch=0 train binary_classification_accuracy =0.7734375\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=44, batch=0 train binary_classification_cross_entropy =0.583194971085\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=44, batch=0 train binary_f_1.000 =0.795774647887\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.387] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 90, \"duration\": 66, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=44, train binary_classification_accuracy =0.76328125\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=44, train binary_classification_cross_entropy =0.583797961473\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=44, train binary_f_1.000 =0.796370967742\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 68.73297691345215, \"sum\": 68.73297691345215, \"min\": 68.73297691345215}}, \"EndTime\": 1601717037.388449, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.319333}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #progress_metric: host=algo-1, completed 45 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 901, \"sum\": 901.0, \"min\": 901}, \"Total Records Seen\": {\"count\": 1, \"max\": 223816, \"sum\": 223816.0, \"min\": 223816}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 46, \"sum\": 46.0, \"min\": 46}}, \"EndTime\": 1601717037.388649, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 44}, \"StartTime\": 1601717037.319687}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=71928.1670165 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=45, batch=0 train binary_classification_accuracy =0.78125\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=45, batch=0 train binary_classification_cross_entropy =0.580536305904\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=45, batch=0 train binary_f_1.000 =0.8\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.450] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 92, \"duration\": 60, 
\"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=45, train binary_classification_accuracy =0.765625\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=45, train binary_classification_cross_entropy =0.581411746144\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=45, train binary_f_1.000 =0.798047795355\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 62.235116958618164, \"sum\": 62.235116958618164, \"min\": 62.235116958618164}}, \"EndTime\": 1601717037.451123, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.38852}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #progress_metric: host=algo-1, completed 46 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 921, \"sum\": 921.0, \"min\": 921}, \"Total Records Seen\": {\"count\": 1, \"max\": 228784, \"sum\": 228784.0, \"min\": 228784}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 47, \"sum\": 47.0, \"min\": 47}}, \"EndTime\": 1601717037.451296, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 45}, \"StartTime\": 1601717037.388862}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=79410.4507317 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=46, batch=0 train binary_classification_accuracy =0.79296875\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=46, batch=0 train binary_classification_cross_entropy =0.5778850317\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=46, batch=0 train binary_f_1.000 =0.808664259928\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.513] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 94, \"duration\": 59, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=46, train binary_classification_accuracy =0.7671875\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=46, train binary_classification_cross_entropy =0.579034873843\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=46, train binary_f_1.000 =0.799123693967\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 62.150001525878906, \"sum\": 62.150001525878906, \"min\": 62.150001525878906}}, \"EndTime\": 1601717037.513705, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.451179}\n \u001b[0m\n \u001b[34m[10/03/2020 
09:23:57 INFO 140262587922240] #progress_metric: host=algo-1, completed 47 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 941, \"sum\": 941.0, \"min\": 941}, \"Total Records Seen\": {\"count\": 1, \"max\": 233752, \"sum\": 233752.0, \"min\": 233752}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 48, \"sum\": 48.0, \"min\": 48}}, \"EndTime\": 1601717037.513958, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 46}, \"StartTime\": 1601717037.451522}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=79428.3099935 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=47, batch=0 train binary_classification_accuracy =0.80078125\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=47, batch=0 train binary_classification_cross_entropy =0.575241565704\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=47, batch=0 train binary_f_1.000 =0.814545454545\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.576] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 96, \"duration\": 61, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=47, train binary_classification_accuracy =0.7703125\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=47, train binary_classification_cross_entropy =0.576667487621\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=47, train binary_f_1.000 =0.801418439716\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 62.87813186645508, \"sum\": 62.87813186645508, \"min\": 62.87813186645508}}, \"EndTime\": 1601717037.577169, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.513795}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #progress_metric: host=algo-1, completed 48 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 961, \"sum\": 961.0, \"min\": 961}, \"Total Records Seen\": {\"count\": 1, \"max\": 238720, \"sum\": 238720.0, \"min\": 238720}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 49, \"sum\": 49.0, \"min\": 49}}, \"EndTime\": 1601717037.577325, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", 
\"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 47}, \"StartTime\": 1601717037.514267}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=78633.999917 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=48, batch=0 train binary_classification_accuracy =0.80078125\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=48, batch=0 train binary_classification_cross_entropy =0.572606086731\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=48, batch=0 train binary_f_1.000 =0.814545454545\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.637] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 98, \"duration\": 58, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=48, train binary_classification_accuracy =0.7724609375\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=48, train binary_classification_cross_entropy =0.574309718609\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=48, train binary_f_1.000 =0.802909829132\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 60.228824615478516, \"sum\": 60.228824615478516, \"min\": 60.228824615478516}}, \"EndTime\": 1601717037.637791, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.577215}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #progress_metric: host=algo-1, completed 49 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 981, \"sum\": 981.0, \"min\": 981}, \"Total Records Seen\": {\"count\": 1, \"max\": 243688, \"sum\": 243688.0, \"min\": 243688}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 50, \"sum\": 50.0, \"min\": 50}}, \"EndTime\": 1601717037.637964, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 48}, \"StartTime\": 1601717037.577536}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=82083.158452 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=49, batch=0 train binary_classification_accuracy =0.8046875\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=49, batch=0 train binary_classification_cross_entropy =0.569978892803\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=49, batch=0 train binary_f_1.000 =0.817518248175\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.703] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", 
\"epoch\": 100, \"duration\": 63, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=49, train binary_classification_accuracy =0.774609375\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=49, train binary_classification_cross_entropy =0.57196174264\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=49, train binary_f_1.000 =0.804340454391\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 65.66691398620605, \"sum\": 65.66691398620605, \"min\": 65.66691398620605}}, \"EndTime\": 1601717037.703861, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.637841}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #progress_metric: host=algo-1, completed 50 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1001, \"sum\": 1001.0, \"min\": 1001}, \"Total Records Seen\": {\"count\": 1, \"max\": 248656, \"sum\": 248656.0, \"min\": 248656}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 51, \"sum\": 51.0, \"min\": 51}}, \"EndTime\": 1601717037.7041, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 49}, \"StartTime\": 1601717037.638161}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=75208.8987255 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=50, batch=0 train binary_classification_accuracy =0.8125\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=50, batch=0 train binary_classification_cross_entropy =0.56736022234\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=50, batch=0 train binary_f_1.000 =0.823529411765\u001b[0m\n \u001b[34m[2020-10-03 09:23:57.768] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 102, \"duration\": 62, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=50, train binary_classification_accuracy =0.777734375\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=50, train binary_classification_cross_entropy =0.569623652101\u001b[0m\n \u001b[34m[10/03/2020 09:23:57 INFO 140262587922240] #quality_metric: host=algo-1, epoch=50, train binary_f_1.000 =0.806725543478\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 64.63193893432617, \"sum\": 64.63193893432617, \"min\": 64.63193893432617}}, \"EndTime\": 1601717037.769032, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717037.703937}\n 
[Raw training-console output condensed: SageMaker factorization-machines job log from host algo-1 (10/03/2020 09:23:57–09:23:59), covering roughly epochs 51–77 of the run at 20 batches / 4,968 records per epoch. Over this span the reported train binary_classification_accuracy rose from about 0.779 to 0.822, train binary_classification_cross_entropy fell from about 0.567 to 0.513, train binary_f_1.000 rose from about 0.808 to 0.841, and training throughput remained roughly 60,000–80,000 records/second. The remaining per-epoch #progress_metric, #quality_metric, #throughput_metric, and #metrics JSON blocks are omitted here as repetitive console residue.]
\u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=77, batch=0 train binary_f_1.000 =0.871595330739\u001b[0m\n \u001b[34m[2020-10-03 09:23:59.545] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 156, \"duration\": 59, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=77, train binary_classification_accuracy =0.8240234375\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=77, train binary_classification_cross_entropy =0.510570012033\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=77, train binary_f_1.000 =0.842839699983\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 62.07394599914551, \"sum\": 62.07394599914551, \"min\": 62.07394599914551}}, \"EndTime\": 1601717039.545956, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717039.483506}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #progress_metric: host=algo-1, completed 78 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1561, \"sum\": 1561.0, \"min\": 1561}, \"Total Records Seen\": {\"count\": 1, \"max\": 387760, \"sum\": 387760.0, \"min\": 387760}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 79, \"sum\": 79.0, \"min\": 79}}, \"EndTime\": 1601717039.546234, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 77}, \"StartTime\": 1601717039.483851}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=79493.1551195 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=78, batch=0 train binary_classification_accuracy =0.87109375\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=78, batch=0 train binary_classification_cross_entropy =0.498260200024\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=78, batch=0 train binary_f_1.000 =0.871595330739\u001b[0m\n \u001b[34m[2020-10-03 09:23:59.612] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 158, \"duration\": 64, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=78, train binary_classification_accuracy =0.825\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=78, train binary_classification_cross_entropy =0.508540757\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=78, train binary_f_1.000 =0.843520782396\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 
66.54906272888184, \"sum\": 66.54906272888184, \"min\": 66.54906272888184}}, \"EndTime\": 1601717039.613055, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717039.546018}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #progress_metric: host=algo-1, completed 79 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1581, \"sum\": 1581.0, \"min\": 1581}, \"Total Records Seen\": {\"count\": 1, \"max\": 392728, \"sum\": 392728.0, \"min\": 392728}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 80, \"sum\": 80.0, \"min\": 80}}, \"EndTime\": 1601717039.613285, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 78}, \"StartTime\": 1601717039.546475}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=74232.2954856 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=79, batch=0 train binary_classification_accuracy =0.87109375\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=79, batch=0 train binary_classification_cross_entropy =0.4959615767\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=79, batch=0 train binary_f_1.000 =0.871595330739\u001b[0m\n \u001b[34m[2020-10-03 09:23:59.674] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 160, \"duration\": 60, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=79, train binary_classification_accuracy =0.82578125\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=79, train binary_classification_cross_entropy =0.506522983313\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=79, train binary_f_1.000 =0.844164919637\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 61.94710731506348, \"sum\": 61.94710731506348, \"min\": 61.94710731506348}}, \"EndTime\": 1601717039.675503, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717039.613129}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #progress_metric: host=algo-1, completed 80 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1601, \"sum\": 1601.0, \"min\": 1601}, \"Total Records Seen\": {\"count\": 1, \"max\": 397696, \"sum\": 397696.0, 
\"min\": 397696}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 81, \"sum\": 81.0, \"min\": 81}}, \"EndTime\": 1601717039.675708, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 79}, \"StartTime\": 1601717039.613524}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=79736.812533 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=80, batch=0 train binary_classification_accuracy =0.875\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=80, batch=0 train binary_classification_cross_entropy =0.493675351143\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=80, batch=0 train binary_f_1.000 =0.875\u001b[0m\n \u001b[34m[2020-10-03 09:23:59.738] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 162, \"duration\": 60, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=80, train binary_classification_accuracy =0.826953125\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=80, train binary_classification_cross_entropy =0.504516680539\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=80, train binary_f_1.000 =0.845213137666\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 63.02499771118164, \"sum\": 63.02499771118164, \"min\": 63.02499771118164}}, \"EndTime\": 1601717039.738993, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717039.675572}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #progress_metric: host=algo-1, completed 81 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1621, \"sum\": 1621.0, \"min\": 1621}, \"Total Records Seen\": {\"count\": 1, \"max\": 402664, \"sum\": 402664.0, \"min\": 402664}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 82, \"sum\": 82.0, \"min\": 82}}, \"EndTime\": 1601717039.739247, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 80}, \"StartTime\": 1601717039.675933}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=78311.2873502 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=81, batch=0 train binary_classification_accuracy =0.87890625\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=81, batch=0 train binary_classification_cross_entropy 
=0.491401612759\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=81, batch=0 train binary_f_1.000 =0.878431372549\u001b[0m\n \u001b[34m[2020-10-03 09:23:59.800] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 164, \"duration\": 59, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=81, train binary_classification_accuracy =0.8287109375\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=81, train binary_classification_cross_entropy =0.502521833777\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=81, train binary_f_1.000 =0.846544181977\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 61.43617630004883, \"sum\": 61.43617630004883, \"min\": 61.43617630004883}}, \"EndTime\": 1601717039.801006, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717039.739071}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #progress_metric: host=algo-1, completed 82 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1641, \"sum\": 1641.0, \"min\": 1641}, \"Total Records Seen\": {\"count\": 1, \"max\": 407632, \"sum\": 407632.0, \"min\": 407632}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 83, \"sum\": 83.0, \"min\": 83}}, \"EndTime\": 1601717039.801235, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 81}, \"StartTime\": 1601717039.739523}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=80330.0832395 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=82, batch=0 train binary_classification_accuracy =0.87890625\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=82, batch=0 train binary_classification_cross_entropy =0.489140361547\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=82, batch=0 train binary_f_1.000 =0.878431372549\u001b[0m\n \u001b[34m[2020-10-03 09:23:59.864] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 166, \"duration\": 62, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=82, train binary_classification_accuracy =0.8291015625\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=82, train binary_classification_cross_entropy =0.5005384624\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=82, train binary_f_1.000 =0.846840539121\u001b[0m\n \u001b[34m#metrics {\"Metrics\": 
{\"update.time\": {\"count\": 1, \"max\": 63.96603584289551, \"sum\": 63.96603584289551, \"min\": 63.96603584289551}}, \"EndTime\": 1601717039.865469, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717039.801082}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #progress_metric: host=algo-1, completed 83 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1661, \"sum\": 1661.0, \"min\": 1661}, \"Total Records Seen\": {\"count\": 1, \"max\": 412600, \"sum\": 412600.0, \"min\": 412600}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 84, \"sum\": 84.0, \"min\": 84}}, \"EndTime\": 1601717039.865708, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 82}, \"StartTime\": 1601717039.80147}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=77162.0474734 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=83, batch=0 train binary_classification_accuracy =0.87890625\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=83, batch=0 train binary_classification_cross_entropy =0.486891627312\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=83, batch=0 train binary_f_1.000 =0.878431372549\u001b[0m\n \u001b[34m[2020-10-03 09:23:59.928] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 168, \"duration\": 61, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=83, train binary_classification_accuracy =0.8302734375\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=83, train binary_classification_cross_entropy =0.498566512764\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=83, train binary_f_1.000 =0.847837506566\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 63.48705291748047, \"sum\": 63.48705291748047, \"min\": 63.48705291748047}}, \"EndTime\": 1601717039.929531, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717039.865541}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #progress_metric: host=algo-1, completed 84 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1681, \"sum\": 1681.0, \"min\": 1681}, \"Total Records Seen\": 
{\"count\": 1, \"max\": 417568, \"sum\": 417568.0, \"min\": 417568}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 85, \"sum\": 85.0, \"min\": 85}}, \"EndTime\": 1601717039.929801, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 83}, \"StartTime\": 1601717039.866009}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=77709.9532039 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=84, batch=0 train binary_classification_accuracy =0.87890625\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=84, batch=0 train binary_classification_cross_entropy =0.484655439854\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=84, batch=0 train binary_f_1.000 =0.878431372549\u001b[0m\n \u001b[34m[2020-10-03 09:23:59.997] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 170, \"duration\": 65, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=84, train binary_classification_accuracy =0.8318359375\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=84, train binary_classification_cross_entropy =0.496606004238\u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #quality_metric: host=algo-1, epoch=84, train binary_f_1.000 =0.849238312029\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 67.56711006164551, \"sum\": 67.56711006164551, \"min\": 67.56711006164551}}, \"EndTime\": 1601717039.997693, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717039.929599}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #progress_metric: host=algo-1, completed 85 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1701, \"sum\": 1701.0, \"min\": 1701}, \"Total Records Seen\": {\"count\": 1, \"max\": 422536, \"sum\": 422536.0, \"min\": 422536}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 86, \"sum\": 86.0, \"min\": 86}}, \"EndTime\": 1601717039.997963, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 84}, \"StartTime\": 1601717039.93009}\n \u001b[0m\n \u001b[34m[10/03/2020 09:23:59 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=73045.9340048 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=85, batch=0 train binary_classification_accuracy =0.88671875\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, 
epoch=85, batch=0 train binary_classification_cross_entropy =0.482431799173\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=85, batch=0 train binary_f_1.000 =0.885375494071\u001b[0m\n \u001b[34m[2020-10-03 09:24:00.063] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 172, \"duration\": 63, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=85, train binary_classification_accuracy =0.834375\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=85, train binary_classification_cross_entropy =0.494656902552\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=85, train binary_f_1.000 =0.851436580238\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 65.5660629272461, \"sum\": 65.5660629272461, \"min\": 65.5660629272461}}, \"EndTime\": 1601717040.06385, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717039.99779}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #progress_metric: host=algo-1, completed 86 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1721, \"sum\": 1721.0, \"min\": 1721}, \"Total Records Seen\": {\"count\": 1, \"max\": 427504, \"sum\": 427504.0, \"min\": 427504}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 87, \"sum\": 87.0, \"min\": 87}}, \"EndTime\": 1601717040.064096, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 85}, \"StartTime\": 1601717039.998247}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=75289.8792532 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=86, batch=0 train binary_classification_accuracy =0.88671875\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=86, batch=0 train binary_classification_cross_entropy =0.480220705271\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=86, batch=0 train binary_f_1.000 =0.885375494071\u001b[0m\n \u001b[34m[2020-10-03 09:24:00.128] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 174, \"duration\": 62, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=86, train binary_classification_accuracy =0.835546875\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=86, train binary_classification_cross_entropy =0.492719183862\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=86, train binary_f_1.000 
=0.852384291725\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 64.7439956665039, \"sum\": 64.7439956665039, \"min\": 64.7439956665039}}, \"EndTime\": 1601717040.129149, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717040.063925}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #progress_metric: host=algo-1, completed 87 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1741, \"sum\": 1741.0, \"min\": 1741}, \"Total Records Seen\": {\"count\": 1, \"max\": 432472, \"sum\": 432472.0, \"min\": 432472}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 88, \"sum\": 88.0, \"min\": 88}}, \"EndTime\": 1601717040.12935, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 86}, \"StartTime\": 1601717040.064375}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=76317.8893101 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=87, batch=0 train binary_classification_accuracy =0.88671875\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=87, batch=0 train binary_classification_cross_entropy =0.478022158146\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=87, batch=0 train binary_f_1.000 =0.885375494071\u001b[0m\n \u001b[34m[2020-10-03 09:24:00.193] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 176, \"duration\": 62, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=87, train binary_classification_accuracy =0.83671875\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=87, train binary_classification_cross_entropy =0.490792843699\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=87, train binary_f_1.000 =0.853333333333\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 65.12093544006348, \"sum\": 65.12093544006348, \"min\": 65.12093544006348}}, \"EndTime\": 1601717040.194728, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717040.129218}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #progress_metric: host=algo-1, completed 88 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1761, \"sum\": 
1761.0, \"min\": 1761}, \"Total Records Seen\": {\"count\": 1, \"max\": 437440, \"sum\": 437440.0, \"min\": 437440}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 89, \"sum\": 89.0, \"min\": 89}}, \"EndTime\": 1601717040.195014, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 87}, \"StartTime\": 1601717040.129573}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=75764.2940791 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=88, batch=0 train binary_classification_accuracy =0.88671875\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=88, batch=0 train binary_classification_cross_entropy =0.475836277008\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=88, batch=0 train binary_f_1.000 =0.885375494071\u001b[0m\n \u001b[34m[2020-10-03 09:24:00.259] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 178, \"duration\": 62, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=88, train binary_classification_accuracy =0.837890625\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=88, train binary_classification_cross_entropy =0.48887783885\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=88, train binary_f_1.000 =0.854385964912\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 65.09900093078613, \"sum\": 65.09900093078613, \"min\": 65.09900093078613}}, \"EndTime\": 1601717040.260419, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717040.194806}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #progress_metric: host=algo-1, completed 89 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1781, \"sum\": 1781.0, \"min\": 1781}, \"Total Records Seen\": {\"count\": 1, \"max\": 442408, \"sum\": 442408.0, \"min\": 442408}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 90, \"sum\": 90.0, \"min\": 90}}, \"EndTime\": 1601717040.260669, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 88}, \"StartTime\": 1601717040.195288}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=75832.3984264 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=89, batch=0 train binary_classification_accuracy =0.890625\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 
140262587922240] #quality_metric: host=algo-1, epoch=89, batch=0 train binary_classification_cross_entropy =0.473662853241\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=89, batch=0 train binary_f_1.000 =0.888888888889\u001b[0m\n \u001b[34m[2020-10-03 09:24:00.336] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 180, \"duration\": 73, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=89, train binary_classification_accuracy =0.8400390625\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=89, train binary_classification_cross_entropy =0.486974161863\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=89, train binary_f_1.000 =0.856189640035\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 75.81400871276855, \"sum\": 75.81400871276855, \"min\": 75.81400871276855}}, \"EndTime\": 1601717040.336784, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717040.260499}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #progress_metric: host=algo-1, completed 90 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1801, \"sum\": 1801.0, \"min\": 1801}, \"Total Records Seen\": {\"count\": 1, \"max\": 447376, \"sum\": 447376.0, \"min\": 447376}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 91, \"sum\": 91.0, \"min\": 91}}, \"EndTime\": 1601717040.337041, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 89}, \"StartTime\": 1601717040.260942}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=65167.0740824 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=90, batch=0 train binary_classification_accuracy =0.890625\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=90, batch=0 train binary_classification_cross_entropy =0.471502065659\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=90, batch=0 train binary_f_1.000 =0.888888888889\u001b[0m\n \u001b[34m[2020-10-03 09:24:00.402] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 182, \"duration\": 63, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=90, train binary_classification_accuracy =0.841015625\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=90, train binary_classification_cross_entropy =0.485081762075\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: 
host=algo-1, epoch=90, train binary_f_1.000 =0.857042500878\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 65.3998851776123, \"sum\": 65.3998851776123, \"min\": 65.3998851776123}}, \"EndTime\": 1601717040.402757, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717040.336863}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #progress_metric: host=algo-1, completed 91 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1821, \"sum\": 1821.0, \"min\": 1821}, \"Total Records Seen\": {\"count\": 1, \"max\": 452344, \"sum\": 452344.0, \"min\": 452344}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 92, \"sum\": 92.0, \"min\": 92}}, \"EndTime\": 1601717040.402984, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 90}, \"StartTime\": 1601717040.337323}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=75519.6353713 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=91, batch=0 train binary_classification_accuracy =0.890625\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=91, batch=0 train binary_classification_cross_entropy =0.469353795052\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=91, batch=0 train binary_f_1.000 =0.888888888889\u001b[0m\n \u001b[34m[2020-10-03 09:24:00.466] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 184, \"duration\": 61, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=91, train binary_classification_accuracy =0.842578125\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=91, train binary_classification_cross_entropy =0.483200639486\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=91, train binary_f_1.000 =0.85824832923\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 63.72404098510742, \"sum\": 63.72404098510742, \"min\": 63.72404098510742}}, \"EndTime\": 1601717040.466957, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717040.402845}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #progress_metric: host=algo-1, completed 92 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": 
{\"count\": 1, \"max\": 1841, \"sum\": 1841.0, \"min\": 1841}, \"Total Records Seen\": {\"count\": 1, \"max\": 457312, \"sum\": 457312.0, \"min\": 457312}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 93, \"sum\": 93.0, \"min\": 93}}, \"EndTime\": 1601717040.467157, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 91}, \"StartTime\": 1601717040.403202}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=77547.4212219 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=92, batch=0 train binary_classification_accuracy =0.890625\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=92, batch=0 train binary_classification_cross_entropy =0.467218101025\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=92, batch=0 train binary_f_1.000 =0.888888888889\u001b[0m\n \u001b[34m[2020-10-03 09:24:00.529] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 186, \"duration\": 61, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=92, train binary_classification_accuracy =0.84375\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=92, train binary_classification_cross_entropy =0.48133072257\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=92, train binary_f_1.000 =0.859204505456\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 63.13490867614746, \"sum\": 63.13490867614746, \"min\": 63.13490867614746}}, \"EndTime\": 1601717040.530555, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717040.467025}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #progress_metric: host=algo-1, completed 93 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1861, \"sum\": 1861.0, \"min\": 1861}, \"Total Records Seen\": {\"count\": 1, \"max\": 462280, \"sum\": 462280.0, \"min\": 462280}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 94, \"sum\": 94.0, \"min\": 94}}, \"EndTime\": 1601717040.530808, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 92}, \"StartTime\": 1601717040.467386}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=78173.4981242 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=93, batch=0 train binary_classification_accuracy =0.890625\u001b[0m\n 
\u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=93, batch=0 train binary_classification_cross_entropy =0.465094953775\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=93, batch=0 train binary_f_1.000 =0.888888888889\u001b[0m\n \u001b[34m[2020-10-03 09:24:00.601] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 188, \"duration\": 68, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=93, train binary_classification_accuracy =0.844140625\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=93, train binary_classification_cross_entropy =0.479472020268\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=93, train binary_f_1.000 =0.85960591133\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"update.time\": {\"count\": 1, \"max\": 70.74403762817383, \"sum\": 70.74403762817383, \"min\": 70.74403762817383}}, \"EndTime\": 1601717040.601869, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\"}, \"StartTime\": 1601717040.53063}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #progress_metric: host=algo-1, completed 94 % of epochs\u001b[0m\n \u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 20, \"sum\": 20.0, \"min\": 20}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1881, \"sum\": 1881.0, \"min\": 1881}, \"Total Records Seen\": {\"count\": 1, \"max\": 467248, \"sum\": 467248.0, \"min\": 467248}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 4968, \"sum\": 4968.0, \"min\": 4968}, \"Reset Count\": {\"count\": 1, \"max\": 95, \"sum\": 95.0, \"min\": 95}}, \"EndTime\": 1601717040.602126, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"factorization-machines\", \"epoch\": 93}, \"StartTime\": 1601717040.53109}\n \u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #throughput_metric: host=algo-1, train throughput=69803.6677532 records/second\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=94, batch=0 train binary_classification_accuracy =0.890625\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=94, batch=0 train binary_classification_cross_entropy =0.462984323502\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=94, batch=0 train binary_f_1.000 =0.888888888889\u001b[0m\n \u001b[34m[2020-10-03 09:24:00.665] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 190, \"duration\": 61, \"num_examples\": 20, \"num_bytes\": 317952}\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=94, train binary_classification_accuracy =0.8453125\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 140262587922240] #quality_metric: host=algo-1, epoch=94, train binary_classification_cross_entropy =0.477624480426\u001b[0m\n \u001b[34m[10/03/2020 09:24:00 INFO 
140262587922240] #quality_metric: host=algo-1, epoch=94, train binary_f_1.000 =0.860612460401

    ... [repetitive per-epoch factorization-machines training log trimmed: epochs 95-99 show the train metrics levelling off at binary_classification_accuracy =0.849609375, binary_classification_cross_entropy =0.468552854657 and binary_f_1.000 =0.864341085271, after which the job reports "completed 100 % of epochs", saves its checkpoint and evaluates the test channel] ...
    [10/03/2020 09:24:01 INFO 140262587922240] #test_score (algo-1) : ('binary_classification_accuracy', 0.6670894102726697)
    [10/03/2020 09:24:01 INFO 140262587922240] #test_score (algo-1) : ('binary_classification_cross_entropy', 0.6124260442197965)
    [10/03/2020 09:24:01 INFO 140262587922240] #test_score (algo-1) : ('binary_f_1.000', 0.7116968698517299)

    2020-10-03 09:24:09 Uploading - Uploading generated training model
    2020-10-03 09:24:09 Completed - Training job completed
    Training seconds: 58
    Billable seconds: 58
    CPU times: user 417 ms, sys: 43 ms, total: 460 ms
    Wall time: 3min 11s


## Deploying the Model Endpoint

Deployment can likewise be done very simply with the `deploy()` method. Deployment takes roughly 5 to 10 minutes.


```python
%%time
instance_type_inference = 'ml.m5.large'
fm_predictor = fm.deploy(instance_type=instance_type_inference, initial_instance_count=1)
```

    -----------!CPU times: user 178 ms, sys: 5.19 ms, total: 183 ms
    Wall time: 5min 31s


## Endpoint Inference

As a quick test, we predicted whether the rating is 1 or 0 for 10 instances.
In the example output below, 8 out of the 10 predictions are correct.
Your results may differ depending on the hyperparameter values used when you run it.
```
prediction labels:
 [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
true list:
 [0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0]
Prediction Accuracy:
 [True, False, True, True, True, True, True, False, True, True]
```

### Implementing a Custom Serializer

The implementation follows the reference below.
- https://github.com/aws/sagemaker-python-sdk/blob/8fff159389aa63941ccf0c59567c3eb7936a2c62/src/sagemaker/serializers.py


```python
class CustomJSONSerializer(sagemaker.serializers.BaseSerializer):
    """
    # Sample Code:
    # https://github.com/aws/sagemaker-python-sdk/blob/8fff159389aa63941ccf0c59567c3eb7936a2c62/src/sagemaker/serializers.py
    # How to test
    js = CustomJSONSerializer()
    sample = X_test[1000:1001].toarray()
    print(js.serialize(sample))
    """

    CONTENT_TYPE = "application/json"

    def serialize(self, data):
        """Serialize data of various formats to a JSON formatted string.

        Args:
            data (object): Data to be serialized.

        Returns:
            str: The data serialized as a JSON string.
        """
        if isinstance(data, np.ndarray):
            # Wrap each row as a {'features': [...]} record, the JSON layout
            # expected by the factorization-machines inference container.
            js = {'instances': []}
            for row in data:
                js['instances'].append({'features': row.tolist()})
            return json.dumps(js)
        else:
            print("Not np.ndarray type")
            return json.dumps(data)


fm_predictor.serializer = CustomJSONSerializer()
JSONDeserializer.ACCEPT = 'application/json'
fm_predictor.deserializer = JSONDeserializer()
```


```python
result = fm_predictor.predict(X_test[1000:1010].toarray())
pred_labels = list()
for pred in result['predictions']:
    label = pred['predicted_label']
    pred_labels.append(label)

true_list = Y_test[1000:1010].tolist()
judge = [p == t for p, t in zip(pred_labels, true_list)]

print("true list:\n", true_list)
print("prediction labels:\n", pred_labels)
print("Prediction Accuracy:\n", judge)
```

    true list:
     [0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0]
    prediction labels:
     [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
    Prediction Accuracy:
     [True, False, True, True, True, True, True, False, True, True]


**So far we have covered the basic usage, and you may finish the hands-on lab here. If you finished early, or would like to go deeper, run the cells below in order.
[Caution] If you do not need to keep the endpoint running for real-time predictions, delete it to avoid further charges; a minimal cleanup sketch follows below.**
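As the caution above notes, a running real-time endpoint keeps incurring charges. The cell below is a minimal cleanup sketch, assuming the `fm_predictor` object returned by `fm.deploy()` above is still in scope; run it only when you no longer need real-time predictions.

```python
# Minimal cleanup sketch: remove the real-time FM endpoint (and, optionally, the
# model object behind it) so that it no longer incurs charges.
fm_predictor.delete_endpoint()   # deletes the endpoint and its endpoint configuration
fm_predictor.delete_model()      # optional: also delete the SageMaker model created by deploy()
```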
# 2. (Optional) Training and deploying a k-NN model from the FM model parameters for top-k recommendation
---

Now that the model has been created and stored in SageMaker, we can download that same FM model and repackage it to fit a k-NN model.

### Downloading the Model Artifact


```python
#!pip install mxnet  # Uncomment to install mxnet if it is not already available
import mxnet as mx
model_file_name = "model.tar.gz"
model_full_path = fm.output_path + "/" + fm.latest_training_job.job_name + "/output/" + model_file_name
print("Model Path: ", model_full_path)

# Download the FM model artifact (model.tar.gz)
os.system("aws s3 cp " + model_full_path + " .")

# Unpack the model artifact
os.system("tar xzvf " + model_file_name)
os.system("unzip -o model_algo-1")
os.system("mv symbol.json model-symbol.json")
os.system("mv params model-0000.params")
```

    Model Path:  s3://sagemaker-ap-northeast-2-057716757052/fm-hol/output/factorization-machines-2020-10-03-09-21-21-934/output/model.tar.gz

    0


### Separating the Model Data

Here we retrieve the parameter tuple ($w_{0}, \mathbf{w}, \mathbf{V}$) learned by the FM model.


```python
# Load the model and pull out its parameters
m = mx.module.Module.load('./model', 0, False, label_names=['out_label'])
V = m._arg_params['v'].asnumpy()          # (nb_users + nb_items) x 64 factor matrix
w = m._arg_params['w1_weight'].asnumpy()  # (nb_users + nb_items) x 1 linear weights
b = m._arg_params['w0_weight'].asnumpy()  # global bias, shape (1,)
print(V.shape, w.shape, b.shape)
```

    (1039, 64) (1039, 1) (1,)


### Repackaging the Dataset

Next we repackage the parameters extracted from the FM model to prepare for training the k-NN model.
This process creates two datasets:

- An item latent matrix, used to train the k-NN model: $a_i = concat(V, \; w)$
- A user latent matrix, used for inference: $a_u = concat(V, \; 1)$

Note that this hands-on code only covers the scenario in which the features are plain user and item IDs. Real data, however, often carries additional metadata (for example age, postal code and gender for a user, or genre and main keywords for a movie). In such cases, the user and item vectors can be extracted as follows:

- Encode an item and its item features as $x_i$ and project onto $\mathbf{V}, \mathbf{w}$: $a_i = concat(V^T \cdot x_i , \; w^T \cdot x_i)$
- Encode a user and their user features as $x_u$ and project onto $\mathbf{V}$: $a_u = concat(V^T \cdot x_u, \; 1)$

Train the k-NN model on $a_i$ and run inference with $a_u$; a short worked derivation and a hedged sketch of this feature-encoded variant follow below.
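To see why this repackaging preserves the FM ranking, it helps to write out the score of a single (user, item) pair. The derivation below is a worked explanation under the assumption of the plain one-hot user-ID / item-ID encoding used in this lab:

$$
\hat{y}(u, i) = w_0 + w_u + w_i + \langle \mathbf{v}_u, \mathbf{v}_i \rangle ,
\qquad
\langle a_u, a_i \rangle
= \langle concat(\mathbf{v}_u, 1),\; concat(\mathbf{v}_i, w_i) \rangle
= \langle \mathbf{v}_u, \mathbf{v}_i \rangle + w_i
= \hat{y}(u, i) - (w_0 + w_u).
$$

Because $w_0 + w_u$ is the same for every item once the user is fixed, ranking items by the inner product $\langle a_u, a_i \rangle$ (the `INNER_PRODUCT` metric used for the k-NN model below) produces the same top-k order as ranking by the full FM score.

For the metadata scenario described above, here is a minimal sketch of the two projection formulas. It is a hypothetical illustration, not part of the lab code: `x_item` and `x_user` stand for dense encoded feature vectors laid out in the same column order as the FM training matrix, and `V` and `w` are the parameters extracted earlier.

```python
import numpy as np

def item_embedding(x_item, V, w):
    """a_i = concat(V^T . x_i, w^T . x_i) for an encoded item feature vector x_item."""
    return np.concatenate([V.T @ x_item, w.T @ x_item])   # shape: (64 + 1,)

def user_embedding(x_user, V):
    """a_u = concat(V^T . x_u, 1) for an encoded user feature vector x_user."""
    return np.concatenate([V.T @ x_user, [1.0]])          # shape: (64 + 1,)

# With a pure one-hot x_item / x_user these projections simply select the
# corresponding rows of V and w, which is exactly the ID-only case in the next cell.
```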
```python
# Item latent matrix - concat(V[i], w[i]); the rows of V/w from nb_users onward belong to the items
knn_item_matrix = np.concatenate((V[nb_users:], w[nb_users:]), axis=1)  # 292 x 65
knn_train_label = np.arange(1, nb_movies + 1)                           # item ids [1, 2, ..., 292]

# User latent matrix - concat(V[u], 1); the first nb_users rows of V belong to the users
ones = np.ones(nb_users).reshape((nb_users, 1))                         # 747 x 1
knn_user_matrix = np.concatenate((V[:nb_users], ones), axis=1)          # 747 x 65
```

(The variable name `nb_movies` is inherited from the MovieLens-style example this lab was adapted from; here it simply counts the items, i.e. the airlines.)

### Training the k-NN Model

The k-NN model uses the default `index_type` (faiss.Flat). On a large dataset this exhaustive index becomes slow, in which case a different `index_type` parameter can be used for faster training. See the SageMaker k-NN documentation for details on the available index types; a hedged sketch of overriding the index type follows the estimator setup below.


```python
print('KNN train features shape = ', knn_item_matrix.shape)
knn_prefix = 'knn'
knn_output_prefix = 's3://{}/{}/output'.format(bucket, knn_prefix)
knn_train_data_path = writeDatasetToProtobuf(knn_item_matrix, bucket, knn_prefix, train_key, "dense", knn_train_label)
print('Uploaded KNN train data: {}'.format(knn_train_data_path))
```

    KNN train features shape =  (292, 65)
    Uploaded KNN train data: s3://sagemaker-ap-northeast-2-057716757052/knn/train.protobuf



```python
nb_recommendations = 100
knn_image = image_uris.retrieve("knn", session.Session().boto_region_name, version="latest")

knn = sagemaker.estimator.Estimator(knn_image,
                                    get_execution_role(),
                                    instance_count=1,
                                    instance_type=instance_type_training,
                                    output_path=knn_output_prefix,
                                    sagemaker_session=sagemaker.Session())

knn.set_hyperparameters(feature_dim=knn_item_matrix.shape[1], k=nb_recommendations,
                        index_metric="INNER_PRODUCT", predictor_type='classifier', sample_size=200000)
fit_input = {'train': knn_train_data_path}
```

    Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: latest.
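As a concrete illustration of the note on index types above, the cell below sketches how the default exhaustive index could be swapped for an approximate one. This is a sketch only: `index_type` and `faiss_index_ivf_nlists` appear in the algorithm's default configuration (visible in the training log further down), but the exact values used here are assumptions to verify against the current SageMaker k-NN documentation.

```python
# Sketch: request an approximate faiss IVF index instead of the default exhaustive
# faiss.Flat index, which can speed up training/lookup on large item catalogs.
knn.set_hyperparameters(feature_dim=knn_item_matrix.shape[1],
                        k=nb_recommendations,
                        index_metric="INNER_PRODUCT",
                        predictor_type='classifier',
                        sample_size=200000,
                        index_type='faiss.IVFFlat',      # assumed value; see the k-NN docs
                        faiss_index_ivf_nlists='auto')   # number of IVF cells; 'auto' lets the algorithm choose
```

With only 292 items in this dataset the exhaustive index is already fast, which is why the lab keeps the default.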
Now we start the training. The cell below takes about 4 to 5 minutes to run.


```python
%%time
knn.fit(fit_input)
knn_model_name = knn.latest_training_job.job_name
print("Created model: ", knn_model_name)
```

    2020-10-03 09:46:33 Starting - Starting the training job...
    2020-10-03 09:46:35 Starting - Launching requested ML instances...
    2020-10-03 09:47:32 Starting - Preparing the instances for training......
    2020-10-03 09:48:15 Downloading - Downloading input data...
    2020-10-03 09:48:37 Training - Downloading the training image..
    ... [verbose k-NN container log trimmed: the job reads the default configuration, merges the provided hyperparameters (feature_dim=65, k=100, index_metric=INNER_PRODUCT, predictor_type=classifier, sample_size=200000, index_type=faiss.Flat), launches the parameter server, ingests the 292 item records, builds the faiss index and reports "completed 100 % of epochs"] ...

    2020-10-03 09:49:26 Uploading - Uploading generated training model
    2020-10-03 09:49:26 Completed - Training job completed
    Training seconds: 71
    Billable seconds: 71
    Created model:  knn-2020-10-03-09-46-33-691
    CPU times: user 446 ms, sys: 13.6 ms, total: 460 ms
    Wall time: 3min 11s


Next we register the model so that it can be referenced by the batch inference step.


```python
# Save the model so it can be referenced during batch inference in the next step
sm = boto3.client(service_name='sagemaker')
primary_container = {
    'Image': knn.image_uri,
    'ModelDataUrl': knn.model_data,
}

knn_model = sm.create_model(
    ModelName = knn.latest_training_job.job_name,
    ExecutionRoleArn = knn.role,
    PrimaryContainer = primary_container)
```

### Batch Transform

Amazon SageMaker's Batch Transform feature lets us generate batch inference results at scale.
The cell below takes about 4 minutes to complete.


```python
%%time
# Upload the inference data to S3
knn_batch_data_path = writeDatasetToProtobuf(knn_user_matrix, bucket, knn_prefix, train_key, "dense")
print("Batch inference data path: ", knn_batch_data_path)

# Initialize the Transformer object
transformer = sagemaker.transformer.Transformer(
    base_transform_job_name="knn",
    model_name=knn_model_name,
    instance_count=1,
    instance_type=instance_type_inference,
    output_path=knn_output_prefix,
    accept="application/jsonlines; verbose=true"
)

# Start the transform job
transformer.transform(knn_batch_data_path, content_type='application/x-recordio-protobuf')
transformer.wait()

# Download the output file from S3
results_file_name = "inference_output"
inference_output_file = "knn/output/train.protobuf.out"
s3_client = boto3.client('s3')
s3_client.download_file(bucket, inference_output_file, results_file_name)
with open(results_file_name) as f:
    results = f.readlines()
```

    Batch inference data path:  s3://sagemaker-ap-northeast-2-057716757052/knn/train.protobuf
    ... [verbose batch-transform serving log trimmed (the log streamer also emits it twice): the k-NN container starts gunicorn, loads the model, and scores the uploaded user matrix in a single MULTI_RECORD batch] ...
    CPU times: user 654 ms, sys: 33.5 ms, total: 688 ms
    Wall time: 5min 13s


## Airline Recommendations (a top-k Inference Example)

The output below shows the airline names recommended for user number 90, BobMotto.
`airline_dist` is the similarity score of each airline.
```python
import json
pd.options.display.max_rows = 20
test_user_idx = 89  # indices start at 0, so user number 90 has index 89
u_one_json = json.loads(results[test_user_idx])

airline_id_list = [int(airline_id) for airline_id in u_one_json['labels']]
airline_name_list = [le_item.inverse_transform([airline_name])[0] for airline_name in airline_id_list]
airline_dist_list = [round(distance, 4) for distance in u_one_json['distances']]

recommend_df = pd.DataFrame({'airline_id': airline_id_list,
                             'airline_name': airline_name_list,
                             'airline_dist': airline_dist_list})
user_name = le_user.inverse_transform([test_user_idx])[0]
print("Recommendations for user: ", user_name)
recommend_df.head(30)
```

    Recommendations for user:  BobMotto
|    | airline_id | airline_name | airline_dist |
|----|------------|--------------|--------------|
| 0  | 45  | aircalin | 0.1268 |
| 1  | 241 | sun-express | 0.1386 |
| 2  | 133 | hawaiian-airlines | 0.1576 |
| 3  | 42  | airasia-x | 0.1673 |
| 4  | 179 | mango | 0.1757 |
| ... | ... | ... | ... |
| 25 | 291 | yangon-airways | 0.2760 |
| 26 | 192 | olympic-air | 0.2770 |
| 27 | 10  | air-arabia | 0.2820 |
| 28 | 244 | swiss-international-air-lines | 0.2833 |
| 29 | 267 | turkish-airlines | 0.2845 |

30 rows × 3 columns
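The batch transform wrote one JSON line per user into `results`, so the single-user lookup above generalizes directly. The cell below is a small sketch, assuming `results`, `le_user`, `le_item`, `json` and `pd` are still defined from the cells above, that collects the first ten returned airlines for every user into one table; the name `all_recommendations_df` is made up for this example.

```python
# Sketch: build a recommendation table for every user from the batch-transform output.
first_k = 10  # keep only the first 10 neighbours returned for each user
rows = []
for user_idx, line in enumerate(results):
    user_json = json.loads(line)
    user_name = le_user.inverse_transform([user_idx])[0]
    neighbours = zip(user_json['labels'][:first_k], user_json['distances'][:first_k])
    for rank, (airline_id, dist) in enumerate(neighbours, start=1):
        rows.append({'user_name': user_name,
                     'rank': rank,
                     'airline_name': le_item.inverse_transform([int(airline_id)])[0],
                     'airline_dist': round(dist, 4)})

all_recommendations_df = pd.DataFrame(rows)
print(all_recommendations_df.shape)   # expected (747 * 10, 4) if every user has 10 neighbours
all_recommendations_df.head(first_k)
```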
https://giphy.com/gifs/usain-bolt-sgjJkkJutglMY
\n\n## Introduction\n\nEverything in the universe is constantly moving. Objects can be moving incredibly slow, so slow that they appear to be at rest, or so incredibly fast that you may not even see it. Even if you're standing still on Earth, you're moving and incomprehensible speeds. On Earth, you are moving around the Sun at approximately 108,000 km/h, and the Sun is orbiting galactic center at approximately 720,000 km/h. But that's not it: our galaxy the Milky Way is moving at approximately 2,268,000 km/h. As motion is so universal, understanding motion is an important topic of physics. Motion is defined by the change of position of an object with respect to other surrounding objects. For example a car is moving with respect to trees on the roadside. Motion can be described using three important quantities: velocity, speed and acceleration. In this notebook, we will familiarize ourselves with two types of motion: uniform motion, and uniformly accelerated motion.\n\n## Concepts of Uniform Motion\n\nMotion is described by three variables: distance ($d$), velocity ($\\vec{v}$), and acceleration ($\\vec{a}$). Let's define and explore these quantities below.\n\n### Distance Vs. Displacement\nTo begin, let us outline the difference between distance and displacement. \n> Distance describes the length of the actual path travelled to travel from one point to another.\n\n> Displacement is identical to distance as it describes the amount of space between two points. However, displacement is a vector quantity which means it also specifies the _direction_ of travel, as well as the amount of space between two points\u200b.\n\nBelow is a video which demonstrates the difference between distance and displacement:\n\n\n```python\n%%html\n\n```\n\n\n\n\n\n\n**Practise**\n\nCalculate the distance and displacement based on the image below. 
Imagine you start at point $\\textrm{A}$ and you move around the field in the following order \n\n$$\\textrm{A} \\rightarrow \\textrm{B} \\rightarrow \\textrm{C} \\rightarrow \\textrm{D} \\rightarrow \\textrm{E} \\rightarrow \\textrm{A}$$\n\n\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"Distance: 2m and Displacement: 2m East\":\n display(Latex(\"Correct!\"))\n display(Latex(\"Displacement contains both measurement and direction\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(\"Calculate the distance and displacement at point B?\")\n\na1 = 'Distance: 2m and Displacement: 2m'\na2 = \"Distance: 2m and Displacement: 2m East\"\na3 = \"Distance: 2m North and Displacement: 2m\"\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n 'Calculate the distance and displacement at point B?'\n\n\n\n interactive(children=(Dropdown(description='Choose One:', options=(' ', 'Distance: 2m and Displacement: 2m', '\u2026\n\n\n\n```python\nhide_me\n\ndef q_2(val):\n if val == \"Distance: 9m North and Displacement: 2.2m NW\":\n display(Latex(\"Correct!\"))\n display(Latex(\"Distance: Actual path covered and Displacement: Shortest path covered with direction\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(\"Calculate the distance and displacement at point E?\")\n\na1 = 'Distance: 9m and Displacement: 7m'\na2 = \"Distance: 3m and Displacement: 2.2m NW\"\na3 = \"Distance: 9m North and Displacement: 2.2m NW\"\ninteract(q_2, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n 'Calculate the distance and displacement at point E?'\n\n\n\n interactive(children=(Dropdown(description='Choose One:', options=(' ', 'Distance: 9m and Displacement: 7m', '\u2026\n\n\n### Speed Vs. Velocity\n\nAnalogous to distance and displacement are speed and velocity, however now these quantities also imply that the object's distance/displacement is _changing_. Let's take a look at how speed and velocity are defined.\n\nSpeed is the rate of change of distance over time.\n\n$$\n\\begin{equation}\n\\textrm{speed } = \\frac{\\textrm{change} \\ \\textrm{of} \\ \\textrm{distance (m)}}{\\textrm{time (s)}}\n\\end{equation}\n$$\n\nVelocity is the rate of change of displacement over time.\n\n$$\n\\begin{equation}\n\\textrm{velocity, } \\vec{v} = \\frac{\\textrm{change} \\textrm{ of} \\textrm{ displacement (m)}}{\\textrm{time (s)}} \\\\\n\\textrm{m} = \\textrm{meter } \\textrm{and} \\textrm{ s} = \\textrm{second}\n\\end{equation}\n$$\n\n**Practise**\n\nNow, let's repeat a similar question as we did with displacement and distance, but now include velocity. 
The required time of one point to another point is given below:\n\n* $\\textrm{A} \\rightarrow \\textrm{C}: 4 \\textrm{ sec}$\n* $\\textrm{A} \\rightarrow \\textrm{D}: 10 \\textrm{ sec}$\n* $\\textrm{A} \\rightarrow \\textrm{E}: 16 \\textrm{ sec}$\n* $\\textrm{A} \\rightarrow \\textrm{A}: 20 \\textrm{ sec}$\n\n\n\n\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"Speed: 0.75 m/s and Velocity: 0.55 m/s NE\":\n display(Latex(\"Correct!\"))\n display(Latex(\"Velocity contains both measurement and direction as it relates to displacement\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(\"Calculate the speed and velocity at point C?\")\n\na1 = 'Speed: 0.75 m/s and Velocity: 0.55 m/s NE'\na2 = \"Speed: 0.55 m/s and Velocity: 0.75 m/s NE\"\na3 = \"Speed: 1 m/s and Velocity: 0.55 m/s NE\"\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n 'Calculate the speed and velocity at point C?'\n\n\n\n interactive(children=(Dropdown(description='Choose One:', options=(' ', 'Speed: 0.75 m/s and Velocity: 0.55 m/\u2026\n\n\n\n```python\nhide_me\n\ndef q_2(val):\n if val == \"Speed: 0.60 m/s and Velocity: 0.20 m/s South\":\n display(Latex(\"Correct!\"))\n display(Latex(\"Speed and Velocity are not always equivalent\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(\"Calculate the speed and velocity at point D?\")\n\na1 = 'Speed: 0.80 m/s and Velocity: 0.60 m/s South'\na2 = \"Speed: 0.60 m/s and Velocity: 0.20 m/s South\"\na3 = \"Speed: 0.60 m/s and Velocity: 0.60 m/s South\"\ninteract(q_2, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n 'Calculate the speed and velocity at point D?'\n\n\n\n interactive(children=(Dropdown(description='Choose One:', options=(' ', 'Speed: 0.80 m/s and Velocity: 0.60 m/\u2026\n\n\n### Acceleration\n\nVelocity can also change with respect to time. A change in velocity requires _acceleration_, which is defined as rate of change of velocity over time. As acceleration depends on the change in the vector quantity velocity, acceleration is also a vector. Below is an example of acceleration.\n\n\n
https://giphy.com/gifs/cell-concept-acceleration-139Qnnkbg2pefe
\n\n Acceleration is the rate of change of velocity over time. Acceleration is a vector quantity as it depends on velocity. \n\n$$\n\\begin{equation}\n\\textrm{acceleration, } \\vec{a} = \\frac{\\textrm{change} \\textrm{ of} \\textrm{ velocity } (\\frac{\\text{m}}{\\text{s}}) } {\\textrm{time (s)}} \n\\end{equation}\n$$\n\n [Here is an interactive animation demonstrating acceleration further.](https://faraday.physics.utoronto.ca/PVB/Harrison/Flash/ClassMechanics/MotionDiagram/MotionDiagram.html)\n\n**Practice**\n\nLet's go back to our field example and think about where we may see acceleration. Once again we are traveling from $\\textrm{A} \\rightarrow \\textrm{B} \\rightarrow \\textrm{C} \\rightarrow \\textrm{D} \\rightarrow \\textrm{E} \\rightarrow \\textrm{A}$\u200b: \n\n\n\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"Acceleration: 0 m/s\\u00b2 East\":\n display(Latex(\"Correct!\"))\n display(Latex(\"No change in velocity and direction is also straight line\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(\"Calculate the acceleration at point B?\")\n\na1 = 'Acceleration: 0 m/s\\u00b2 East'\na2 = \"Acceleration: 1 m/s\\u00b2 East\"\na3 = \"Acceleration: 2 m/s\\u00b2 East\"\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n 'Calculate the acceleration at point B?'\n\n\n\n interactive(children=(Dropdown(description='Choose One:', options=(' ', 'Acceleration: 0 m/s\u00b2 East', 'Accelera\u2026\n\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"Yes\":\n display(Latex(\"Correct!\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(\"Does a change in direction imply that there was acceleration?\")\n\na1 = 'Yes'\na2 = \"No\"\n\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2 ],value = ' ',description = 'Choose One:',disabled = False));\n\ndef q_2(val):\n if val == 'To change direction is to change your velocity, and this requires acceleration':\n display(Latex(\"Correct!\"))\n display(Latex(\"Velocity is a vector quantity. To change your direction requires acceleration in another direction.\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(Latex(\"Why?\"))\n\na1 = 'To change direction is to change your velocity, and this requires acceleration'\na2 = 'When you change direction you gain speed, and this requires acceleration'\n\n\ninteract(q_2, val = widgets.Dropdown(options=[' ',a1 ,a2],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n 'Does a change in direction imply that there was acceleration?'\n\n\n\n interactive(children=(Dropdown(description='Choose One:', options=(' ', 'Yes', 'No'), value=' '), Output()), _\u2026\n\n\n\nWhy?\n\n\n\n interactive(children=(Dropdown(description='Choose One:', options=(' ', 'To change direction is to change your\u2026\n\n\n### Uniform Motion\n\nNow that we have an understanding of distance, displacement, velocity and acceleration, let's use those quantities to describe \"uniform motion\".\n\nUniform motion has the following two properties:\n\n* Motion is constant or steady that means object covers equal distance in equal time interval.\n\n* The object travels in a straight line.\n\nConsider the blue car in the animation below\n\n\n
http://www.ninetyeast.net/physics/grade-9-10-gcse-hsc/forces/newtons-laws-of-motion/newtons-first-law-of-motion
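Here is a short illustrative sketch of this idea in Python, using a constant velocity of $10 \textrm{ ms}^{-1}$ (the same value used for the blue car below): under uniform motion the displacement after each second is simply the velocity multiplied by the elapsed time.

```python
v = 10  # constant velocity in m/s (uniform motion)

# Displacement grows by the same amount each second: d = v * t
for t in range(1, 11):
    print(t, "s :", v * t, "m")
```

These are exactly the displacement values listed in the table below.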
\n\nThe blue car is travelling at a velocity of $10 \\textrm{ ms}^{-1}$ to the right. That means every second the car is travelling $10 \\textrm{ m}$. If we record this car's displacement and velocity for $10 \\textrm{ sec}$, we get the following table. \n\n\n| Time $(sec$) | Displacement ($m$) | Velocity ($ms^{-1}$)|\n|:-------------:|:-----------------:|:--------:|\n| $ 1$ | $ 10$ | $10$ | \n| $ 2$ |$ 20$ | $10$ | \n| $ 3$ |$30$ | $10$ | \n| $4$ |$40$ | $10$ | \n| $ 5$ | $ 50$ | $10$ | \n| $ 6$ |$ 60$ | $10$ | \n| $ 7$ |$70$ | $10$ | \n| $8$ |$80$ | $10$ | \n| $ 9$ |$90$ | $10$ | \n| $10$ |$100$ | $10$ | \n\nWe can also use this table to create the animation below:\n\n\n```python\nhide_me\n\n# Data\nt = np.linspace(0,10,11)\nd = np.linspace(0,100,11)\nv = np.linspace(10,10,11)\n\n# Create a figure with two subplots\nfig, (ax1, ax2) = plt.subplots(1,2, figsize=(10, 4), dpi= 90, facecolor='w', edgecolor='k')\n\n# Same X axis limt and grid initalizations\nfor ax in [ax1, ax2]:\n ax.set_xlim(0, 10)\n ax.grid()\n \nax1.set_ylim(0,100)\nax2.set_ylim(0,20) \n\n# Initialize the plot\nl1, = ax1.plot([],[], 'go-', label='Displacement', linewidth=2)\nleg = ax1.legend(loc='best')\nl2, = ax2.plot([],[], 'rs-', label='Velocity')\nleg = ax2.legend(loc='best')\n\nfig.suptitle('Uniform Motion')\nax1.set_xlabel('Time (sec)')\nax2.set_xlabel('Time (sec)')\nax1.set_ylabel('Displacement (m)')\nax2.set_ylabel (r'$\\mathrm{Velocity} \\ (\\mathrm{m/s})$')\n \n# Initiate the animation\ndef animate(i): \n l1.set_data(t[:i+1], d[:i+1])\n l2.set_data(t[:i+1], v[:i+1])\n \nani = FuncAnimation(fig, animate, interval = 800, frames=len(t))\nplt.close()\n\n# Convert animation to video\nfrom IPython.display import HTML\nHTML(ani.to_html5_video())\n\n```\n\n\n\n\n\n\n\n\nFrom the table and animations, we find that the blue car's displacement is changing and its velocity is constant. This is an example of uniform motion. The car travels equal distance in equal time.\n\n$$\n\\begin{equation}\n\\textrm{velocity, } \\vec{v} = \\frac{\\textrm{change} \\ \\textrm{of} \\ \\textrm{displacement (m)}}{\\textrm{time (s)}} = \\textrm{constant}\n\\end{equation}\n$$\n\n**Based on your knowledge of uniform motion, what is the blue car's acceleration?**\n\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"0\":\n display(Latex(\"Correct!\"))\n display(Latex(\"Speed is constant and direction is in a straight line\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\na1 = '0'\na2 = 'Constant'\na3 = 'Variable'\n\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n interactive(children=(Dropdown(description='Choose One:', options=(' ', '0', 'Constant', 'Variable'), value=' \u2026\n\n\n### Uniform Accelerated Motion\n\n\n```python\nhide_me\n\nfrom IPython.display import HTML\n# Youtube\n#HTML('')\n```\n\n\n```python\n%%html\n\n```\n\n\n\n\n\n\nUniformly accelerated motion is a little different than the uniform motion we discussed earlier. In the case of uniformly accelerated motion, your velocity is increasing constantly and equally in time. Your velocity is changing in a way similar to how displacement changes if you're traveling at constant velocity. Suppose you start at rest and begin running in order to catch a ball. In this scenario, you will need to _accelerate_. Suppose after each second, you are traveling 2 $\\frac{\\textrm{m}}{\\textrm{s}}$ faster than the second previous. 
In such a case, you have a uniform acceleration of 2 $\\frac{\\textrm{m}}{\\textrm{s}^2}$ (the seconds are squared in the units of acceleration as you are increasing your velocity constantly - i.e. you gain more velocity per second). Let's write down displacement, velocity and acceleration as you try to catch this ball in a table:\n\n| Time $(\\textrm{sec}$) | Displacement ($\\textrm{m}$) | Velocity ($\\textrm{ms}^{-1}$)| Acceleration ($\\textrm{ms}^{-2}$) \n|:-------------:|:-----------------:|:--------:|:--------:|\n| $ 0$ | $ 0$ | $0$ | $2$ |\n| $ 1$ |$ 1$ | $2$ | $2$ |\n| $ 2$ |$4$ | $4$ | $2$ |\n| $3$ |$9$ | $6$ | $2$ |\n| $ 4$ |$16$ | $8$ | $2$ |\n| $5$ |$25$ | $10$ | $2$ |\n| $6$ |$36$ | $12$ | $2$ |\n| $ 7$ |$49$ | $14$ | $2$ |\n| $8$ |$64$ | $16$ | $2$ |\n\nUsing the table, we can also create an animation\n\n\n```python\nhide_me\n\n# Data\nt = np.linspace(0,8,9);\nd = ([0,1,4,9,16,25,36,49,64]);\nv = ([0,2,4,6,8,10,12,14,16]);\na = np.linspace(2,2,9);\n\n# Create a figure with three subplots\nfig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(12.5, 4), dpi= 80, facecolor='w', edgecolor='k')\n\n# Same X axis limt and grid initalizations\nfor ax in [ax1, ax2, ax3]:\n ax.set_xlim(0, 8);\n ax.grid();\n \nax1.set_ylim(0,70);\nax2.set_ylim(0,16); \nax3.set_ylim(0,5);\n\n# Initialize the plot\nl, = ax1.plot([],[], 'go-', label='Displacement', linewidth=2);\nleg = ax1.legend(loc='best');\nl1, = ax2.plot([],[], 'rs-', label='Velocity');\nleg = ax2.legend(loc='best');\nl2, = ax3.plot([],[], 'b*-', label='Acceleration', linewidth=2);\nleg = ax3.legend(loc='best');\n\nfig.suptitle('Uniformly Accelerated Motion');\nax1.set_xlabel('Time (sec)');\nax2.set_xlabel('Time (sec)');\nax3.set_xlabel('Time (sec)');\nax1.set_ylabel('Displacement (m)');\nax2.set_ylabel (r'$\\mathrm{Velocity} \\ (\\mathrm{m/s})$');\nax3.set_ylabel (r'$\\mathrm{Acceleration} \\ (\\mathrm{m/s}^{2})$');\n\n# Initiate the animation\ndef animate(i):\n l.set_data(t[:i+1], d[:i+1]);\n l1.set_data(t[:i+1], v[:i+1]);\n l2.set_data(t[:i+1], a[:i+1]);\n \nani=FuncAnimation(fig, animate, interval = 800, frames=len(t));\nplt.close()\n\n# Convert animation to video\nfrom IPython.display import HTML\nHTML(ani.to_html5_video())\n```\n\n\n\n\n\n\n\n\nNotice how with constant acceleration, your velocity increases _linearly_. As well, when you're accelerating, your displacement changes _parabolically_. These relationships are explained further with the equations of motion below.\n\n\n## Equations of Motion\n\nUsing our relationships between displacement, velocity, acceleration and time, we can define \"equations of motion\" for a moving object. There are four such equations relevant to the principles of uniform motion and uniform acceleration.\n\nSuppose an object with initial velocity $\\vec{\\textrm{v}}_i$ $\\textrm{ms}^{-1}$ is travelling with uniform acceleration $\\vec{\\textrm{a}}$ $\\textrm{ms}^{-2}$. 
After traveling a displacement $\\vec{\\textrm{s}}$ in time $\\textrm{t}$ the object's final velocity would be $\\vec{\\textrm{v}_f}$ $\\textrm{ms}^{-1}$, these quantities are described by the following equations\n\n$$\n\\begin{align}\n \\vec{\\textrm{v}_f}=\\vec{\\textrm{v}_i}+\\vec{\\textrm{a}} \\textrm{t} \\ \\ \\ \\ \\ \\ \\ \\ (1) \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \n \\vec{\\textrm{s}} = (\\frac{\\vec{\\textrm{v}_i}+\\vec{\\textrm{v}_f}}{2})\\textrm{t} \\; \\; (2)\n\\end{align}\n$$\n\n\n$$\n\\begin{align}\n\\vec{\\textrm{s}} = \\vec{\\textrm{v}_i}\\textrm{t}+\\frac{1}{2}\\vec{\\textrm{a}}\\textrm{t}^{2} \\; (3) \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \n\\vec{\\textrm{v}_f} = \\textrm{v}_i^{2}+2 \\vec{\\textrm{a}} \\; \\vec{\\textrm{s}} \\; \\; \\; \\; (4)\n\\end{align}\n$$\n\n\nThe equations above describe both uniform motion and uniformly accelerated motion. Equation (1) describes your final velocity given an intial velocity, and an acceleration over time. Notice how this is the equation of a line. This is what we see in the center plot of the animation above. Equation (3) describes your displacement moving at some initial velocity and accelerating uniformly for some time. Notice how time is squared in this equation. This means that this is the equation of a parabola. Equation (3) is what we see in the first plot of the animation above. \n\n### Mathematical Problems\n\nAs we work through the problems below, we will outline the following steps as we come to the solution. Breaking the problem down in this way makes the problem less intimidating, and makes the path to solution more straight foreward.\n\n> 1. Identify and write what information is given in the problem, and the answer that we are asked for.\n2. Using the information we know, and and the problem identified in step one, identify which equation(s) of motion we need to use to solve the problem.\n3. Ensure that all the values are in the correct units and fill them in the selected equation.\n4. Quote the answer and check the units.\n\n#### Problem 1\n\nA bus accelerates from rest at $4 \\textrm{ms}^{-2}$ until it reaches a final velocity of $40 \\ \\textrm{ms}^{-1}$. For how many seconds was the bus accelerating?\n\n#### Solution\n\n**Step 1:**\n\nGiven,\n\ninitial velocity, $\\vec{\\textrm{v}_i} = 0 \\ \\textrm{ms}^{-1}$ \n\nfinal velocity, $\\vec{\\textrm{v}_f} = 40 \\ \\textrm{ms}^{-1}$ \n\nacceleration, $\\vec{\\textrm{a}} = 4 \\ \\textrm{ms}^{-2}$\n\ntime, $\\textrm{t} = ?$\n\n**Step 2:**\n\nIn this problem, the value of $\\vec{\\textrm{v}_i}$, $\\vec{\\textrm{v}_f}$ and $\\vec{\\textrm{a}}$ are given and $\\textrm{t}$ is required. Now, check the above $4$ motion equations, find the equation is related to$\\vec{\\textrm{v}_i}$, $\\vec{\\textrm{v}_f}$ and $\\vec{\\textrm{a}}$ and $\\textrm{t}$. We find the equation and it is $\\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}+\\vec{\\textrm{a}}\\textrm{t}$.\n\n**Step 3:** After checking all the units of the given variable $\\textrm{v}_i$, $\\textrm{v}_f$ and $\\textrm{a}$, we find that they are correct. 
Then fill them in selected equation:\n\n$$\n\\begin{equation}\n\\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}+\\vec{\\textrm{a}}\\textrm{t} \\\\\n\\Rightarrow \\vec{\\textrm{a}}\\textrm{t} = \\vec{\\textrm{v}_f} - \\vec{\\textrm{v}_i} \\\\\n\\Rightarrow \\textrm{t} = \\frac{\\vec{\\textrm{v}_f}-\\vec{\\textrm{v}_i}}{\\vec{\\textrm{a}}} \\\\\n\\Rightarrow \\textrm{t} = \\frac{40\\textrm{ms}^{-1}-0\\textrm{ms}^{-1}}{4\\textrm{ms}^{-2}} \\\\\n\\Rightarrow \\textrm{t} = 10\\textrm{ s}\n\\end{equation}\n$$\n\n**Step 4:**\n\ntime = $10\\textrm{ s}$ $(Ans)$\n\nThis answer can be seen graphically in the animation below:\n\n\n\n```python\nhide_me\n\n# Data frame to create table\ndf = pd.DataFrame()\ndf['Velocity'] = np.arange(0,41,4)\ndf['Time'] = df['Velocity']/4\n\n# Create a figure with two subplots\nfig,(ax1,ax2) = plt.subplots(1,2,figsize=(6.8, 4), dpi= 100)\n\n# Limit and label axis\nax1.set_ylim(0,10)\nax1.set_xlim(0,40)\nax1.set_xlabel('Velocity ($ms^{-1}$)')\nax1.set_ylabel('Time (s)')\nax1.grid()\n\n# Initiate table properties\nfont_size=14\nbbox=[0, 0, 1, 1]\nax2.axis('off')\n\n# Initialize the plot\nl, = ax1.plot([],[], 'go-', label='Time', linewidth=2)\nleg = ax1.legend(loc='best')\n\n# Initiate the animation\ndef animate(i):\n l.set_data(df['Velocity'][:i+1], df['Time'][:i+1])\n table = ax2.table(cellText = df.values[:i+1],bbox=bbox, colLabels=df.columns)\n \n \nani = FuncAnimation(fig, animate, interval = 800, frames=len(df.index))\nplt.close()\n\n# Convert animation to video\nfrom IPython.display import HTML\nHTML(ani.to_html5_video())\n```\n\n\n\n\n\n\n\n\n#### Problem 2\n\nA car accelerates uniformly from $16 \\ \\textrm{m/s}$ to $38.8 \\ \\textrm{m/s}$ in $3.1$ seconds. Calculate the distance travelled by the car.\n\n#### Solution\n\n**Step 1:**\n\nGiven,\n\n$\\textrm{initial} \\ \\textrm{velocity}, \\vec{\\textrm{v}_i} = 16 \\ \\textrm{m/s}$\n\n$\\textrm{final} \\ \\textrm{velocity}, \\vec{\\textrm{v}_f} = 38.8 \\ \\textrm{m/s}$\n\n$\\textrm{time, t} = 3.1 \\ \\textrm{s}$ \n\n$\\textrm{displacement, } \\vec{\\textrm{s}} = ?$\n\n**Step 2:**\n\nTry yourself!\n\n\n```python\nhide_me\n\ndisplay('Which equation can be chosen to solve this problem?')\n\n#Create the box to select m=Motion of Equations\na=widgets.Checkbox(\n value=False,\n description=r\"$\\vec{\\textrm{v}_f}=\\vec{\\textrm{v}_i}+\\vec{\\textrm{a}} \\textrm{t}$\",\n disabled=False\n)\n\nb=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{s}} = (\\frac{\\vec{\\textrm{v}_i}+\\vec{\\textrm{v}_f}}{2})\\textrm{t}$')\n\nc=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{s}} = \\vec{\\textrm{v}_i}\\textrm{t}+\\frac{1}{2}\\vec{\\textrm{a}}\\textrm{t}^{2}$'\n)\nd=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}^{2}+2 \\vec{\\textrm{a}} \\; \\vec{\\textrm{s}}$',\n disabled=False\n)\n\n#Display the check box\ndisplay(a)\ndisplay(b)\ndisplay(c)\ndisplay(d)\n\n#create a button to check the answer\nbutton_check = widgets.Button(description=\"check\")\ndisplay(button_check)\n\n#Check the answer\ndef check_button(x):\n if a.value==False and b.value==True and c.value==False and d.value==False:\n display(Latex(\"Correct. 
Well done!\"))\n else: \n display(Latex(\"Wrong one!\"))\n\nbutton_check.on_click(check_button)\n\n```\n\n\n 'Which equation can be chosen to solve this problem?'\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{v}_f}=\\\\vec{\\\\textrm{v}_i}+\\\\vec{\\\\textrm{a}} \\\\textrm{t}$'\u2026\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{s}} = (\\\\frac{\\\\vec{\\\\textrm{v}_i}+\\\\vec{\\\\textrm{v}_f}}{2}\u2026\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{s}} = \\\\vec{\\\\textrm{v}_i}\\\\textrm{t}+\\\\frac{1}{2}\\\\vec{\\\\t\u2026\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{v}_f} = \\\\vec{\\\\textrm{v}_i}^{2}+2 \\\\vec{\\\\textrm{a}} \\\\; \\\u2026\n\n\n\n Button(description='check', style=ButtonStyle())\n\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"84.94 m\":\n display(Latex(\"Correct!\"))\n display(Latex(r'$\\vec{\\textrm{s}} = (\\frac{\\vec{\\textrm{v}_i}+\\vec{\\textrm{v}_f}}{2})\\textrm{t}$'))\n display(Latex(r'$\\vec{\\textrm{s}} = (\\frac{(16 + 38.8)} {2} \\times 3.1) \\textrm{ m}$'))\n display(Latex(r'$\\vec{\\textrm{s}} = 84.94 \\textrm{ m}$'))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(\"What is your final displacement?\")\n\na1 = '84.94 m'\na2 = '65.26 m'\na3 = '89.5 m'\n\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n 'What is your final displacement?'\n\n\n\n interactive(children=(Dropdown(description='Choose One:', options=(' ', '84.94 m', '65.26 m', '89.5 m'), value\u2026\n\n\n#### Problem 3\n\nA racing car is traveling with a velocity of $72 \\ \\textrm{km/h}$ N and accelerates at $5 \\ \\textrm{m/s}^{2}$ for $10 \\ \\textrm{s}$ N. What is the final velocity of the car and how far will it travel as it accelerates?\n\n#### Solution\n\n**Step 1:**\n\n\nGiven\n$\\textrm{initial} \\ \\textrm{velocity,} \\vec{\\textrm{v}_i} = 72 \\ \\textrm{km/h}$ N\n\n$\\textrm{acceleration, } \\vec{\\textrm{a}} = 5 \\ \\textrm{m/s}^{2}$ N\n\n$\\textrm{time}, \\textrm{t} = 10 \\ \\textrm{s}$\n\n$\\textrm{final} \\ \\textrm{velocity}, \\vec{\\textrm{v}_f} = ?$\n\n$\\textrm{displacement}, \\vec{\\textrm{s}} = ?$\n\n**Step 2:** Let's try,\n\n\n\n```python\nhide_me\n\ndisplay('Which equations can be chosen to solve this problem?')\n\n#Create the box to select m=Motion of Equations\na=widgets.Checkbox(\n value=False,\n description=r\"$\\vec{\\textrm{v}_f}=\\vec{\\textrm{v}_i}+\\vec{\\textrm{a}} \\textrm{t}$\",\n disabled=False\n)\n\nb=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{s}} = (\\frac{\\vec{\\textrm{v}_i}+\\vec{\\textrm{v}_f}}{2})\\textrm{t}$')\n\nc=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{s}} = \\vec{\\textrm{v}_i}\\textrm{t}+\\frac{1}{2}\\vec{\\textrm{a}}\\textrm{t}^{2}$'\n)\nd=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}^{2}+2 \\vec{\\textrm{a}} \\; \\vec{\\textrm{s}}$',\n disabled=False\n)\n\n#Display the check box\ndisplay(a)\ndisplay(b)\ndisplay(c)\ndisplay(d)\n\n#create a button to check the answer\nbutton_check = widgets.Button(description=\"check\")\ndisplay(button_check)\n\n#Check the answer\ndef check_button(x):\n if a.value==True and b.value==True and c.value==True and d.value==True:\n display(Latex(\"Correct. Well done!\"))\n display(Latex(\"yes, all the equations can be used. 
It depends on which one you are picking.\")) \n else: \n display(Latex(\"Try Again!\"))\n display(Latex('Hint: More than one answers')) \n\nbutton_check.on_click(check_button)\n\n```\n\n\n 'Which equations can be chosen to solve this problem?'\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{v}_f}=\\\\vec{\\\\textrm{v}_i}+\\\\vec{\\\\textrm{a}} \\\\textrm{t}$'\u2026\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{s}} = (\\\\frac{\\\\vec{\\\\textrm{v}_i}+\\\\vec{\\\\textrm{v}_f}}{2}\u2026\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{s}} = \\\\vec{\\\\textrm{v}_i}\\\\textrm{t}+\\\\frac{1}{2}\\\\vec{\\\\t\u2026\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{v}_f} = \\\\vec{\\\\textrm{v}_i}^{2}+2 \\\\vec{\\\\textrm{a}} \\\\; \\\u2026\n\n\n\n Button(description='check', style=ButtonStyle())\n\n\n**Step 3:**\n\n$$\n\\begin{equation}\n\\textrm{initial} \\ \\textrm{velocity,} \\vec{\\textrm{v}_i} = 72 \\ \\textrm{km/h} N = \\frac{72\\times1000}{3600} \\ \\textrm{m/s} = 20 \\ \\textrm{m/s} \\; N \n\\end{equation}\n$$\n\nCalculate final velocity,\n\n$$\n\\begin{equation}\n\\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}+\\vec{\\textrm{a}}\\textrm{t} \\\\\n\\vec{\\textrm{v}_f}= 20+(5\\times10) \\\\\n\\vec{\\textrm{v}_f}= 70 \\ \\textrm{m/s} N\n\\end{equation}\n$$\n\nCalculate displacement,\n\n$$\n\\begin{equation}\n\\vec{\\textrm{s}} = \\vec{\\textrm{v}_i}\\textrm{t}+\\frac{1}{2}{\\vec{\\textrm{a}}}{\\textrm{t}^{2}} \\\\\n\\vec{\\textrm{s}}=20\\times10+\\frac{1}{2}{5\\times}{10^{2}} \\\\\n\\vec{\\textrm{s}}=(200+250)\\ \\textrm{m} \\\\\n\\vec{\\textrm{s}}=450 \\ \\textrm{m} \\; N\n\\end{equation}\n$$\n\n**Step 4:**\n\n$$\n\\begin{align}\n\\textrm{final} \\ \\textrm{velocity}, \\vec{\\textrm{v}_f}=70 \\ \\textrm{m/s}\\ \\; N \\;\\;(Ans) \\\\\n\\textrm{displacement} = 84.94 \\ \\textrm{m} \\ \\; N \\;\\; (Ans)\n\\end{align}\n$$\n\n#### Problem 4\n\nAn Air Canada plane requires a takeoff speed of $78.5 \\ \\textrm{m/s}$ and $1690 \\ \\textrm{m}$ of runway to reach that speed. Determine the acceleration of this plane and the time required to reach this speed.\n\n#### Solution\n\n**Step 1:** Let's try,\n\n\n```python\nhide_me\n\ndisplay('Which values are given in this problem?')\n\n#Create the box to select m=Motion of Equations\na=widgets.Checkbox(\n value=False,\n description=r\"$\\vec{\\textrm{v}_i}$\",\n disabled=False\n)\n\nb=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{v}_f}$')\n\nc=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{a}}$'\n)\nd=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{s}}$',\n disabled=False\n)\n\ne=widgets.Checkbox(\n value=False,\n description=r'$t$',\n disabled=False\n)\n\n\n#Display the check box\ndisplay(a)\ndisplay(b)\ndisplay(c)\ndisplay(d)\ndisplay(e)\n\n#create a button to check the answer\nbutton_check = widgets.Button(description=\"check\")\ndisplay(button_check)\n\n#Check the answer\ndef check_button(x):\n if a.value==True and b.value==True and c.value==False and d.value==True and e.value==False:\n display(Latex(\"Correct. 
Well done!\"))\n display(Latex(\"Initial velocity is also given as initially plane will start from $0$ $ms^{-1}$\"))\n else: \n display(Latex(\"Wrong one!\"))\n \n\nbutton_check.on_click(check_button)\n\n```\n\n\n 'Which values are given in this problem?'\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{v}_i}$')\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{v}_f}$')\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{a}}$')\n\n\n\n Checkbox(value=False, description='$\\\\vec{\\\\textrm{s}}$')\n\n\n\n Checkbox(value=False, description='$t$')\n\n\n\n Button(description='check', style=ButtonStyle())\n\n\n**Step 2:**\n\n$$\n\\begin{align}\n\\textrm{Motion} \\ \\textrm{equation, } \\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}^{2}+2 \\vec{\\textrm{a}} \\; \\vec{\\textrm{s}} \\textrm{ and }\n\\vec{\\textrm{v}_f}=\\vec{\\textrm{v}_i}+\\vec{\\textrm{a}} \\textrm{t}\n\\end{align}\n$$\n\n**Step 3**\n\nLet's calculate,\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"acceleration = 1.82 ms\\u00b2 and time = 43.06 s\":\n display(Latex(\"Correct!\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(\"What is the plane's required acceleration, and how long was it accelerating?\")\n\na1 = 'acceleration = 1.91 ms\\u00b2 and time = 48.1 s'\na2 = 'acceleration = 1.82 ms\\u00b2 and time = 43.06 s'\na3 = 'acceleration = 1.82 ms\\u00b2 and time = 45.4 s'\n\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n \"What is the plane's required acceleration, and how long was it accelerating?\"\n\n\n\n interactive(children=(Dropdown(description='Choose One:', options=(' ', 'acceleration = 1.91 ms\u00b2 and time = 48\u2026\n\n\n### Exercise\n\n **Question 1**: \n A car starting from rest in straight moves with uniform acceleration of $5 \\ \\textrm{ms}^{-2}$. What will be the velocity while crossing a person at a distance $40 \\ \\textrm{m}$? (Ans: $20 \\ \\textrm{ms}^{-1}$)\n\n **Question 2**:\n A bike accelerates uniformly from rest to a speed of $1 \\ \\frac{\\textrm{km}}{\\textrm{min}}$ over a distance of $65 \\ \\textrm{m}$. Determine the acceleration of the bike. (Ans: $2.137 \\ \\textrm{ms}^{-1}$)\n\n **Question 3**:\n A bullet leaves a rifle with a velocity of $521 \\ \\textrm{ms}^{-1}$. While accelerating through the barrel of the rifle, the bullet moves a distance of $0.840 \\ \\textrm{m}$. Determine the acceleration of the bullet (Consider a uniform acceleration). (Ans: $1.62 \\times 10^{5} \\ \\textrm{ms}^{-1}$)\n\n **Question 4**:\n The Velocity of a jeep decreases uniformly from $35 \\ \\textrm{ms}^{-1}$ and after $8 \\ \\textrm{s}$ it becomes $10 \\ \\textrm{ms}^{-1}$. Find the acceleration of the car? (Ans: $-3.125 \\ \\textrm{ms}^{-2}$, You do not have worry about the '-' sign. It defines the direction as velocity is decreasing)\n\n## Conclusion\n\nIn this notebook, we introduced two important concepts of motion: Uniform and Uniformly accelerated motion. We demonstrated how motion is related to distance, speed, velocity and acceleration. We also showed the various linear relationships between displacement, velocity and uniform acceleration using both the equations of motion, and the graphs of those functions. Using these equations, we then demonstrated several cases where they we can apply the ideas of uniform motion and uniformly accelerated motion to solve classic physics problems. 
This notebook serves as an introduction to the concept of uniform and uniformly accelerated motion, and will allow you to solve a great many problems using these concepts, and should act as a reasonable primer for more complex kinematic problems.\n\n[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)\n", "meta": {"hexsha": "9b15f4821f16b790daab3e049b06fc289966b98a", "size": 186286, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_build/html/_sources/curriculum-notebooks/Science/UniformMotionAndUniformlyAcceleratedMotion/uniform-motion-and-uniformly-accelerated-motion.ipynb", "max_stars_repo_name": "BryceHaley/curriculum-jbook", "max_stars_repo_head_hexsha": "d1246799ddfe62b0cf5c389394a18c2904383437", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-18T18:19:40.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T18:19:40.000Z", "max_issues_repo_path": "_build/html/_sources/curriculum-notebooks/Science/UniformMotionAndUniformlyAcceleratedMotion/uniform-motion-and-uniformly-accelerated-motion.ipynb", "max_issues_repo_name": "callysto/curriculum-jbook", "max_issues_repo_head_hexsha": "ffb685901e266b0ae91d1250bf63e05a87c456d9", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_build/html/_sources/curriculum-notebooks/Science/UniformMotionAndUniformlyAcceleratedMotion/uniform-motion-and-uniformly-accelerated-motion.ipynb", "max_forks_repo_name": "callysto/curriculum-jbook", "max_forks_repo_head_hexsha": "ffb685901e266b0ae91d1250bf63e05a87c456d9", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.906745415, "max_line_length": 964, "alphanum_fraction": 0.7368240233, "converted": true, "num_tokens": 10472, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO\n\n", "lm_q1_score": 0.6076631556226291, "lm_q2_score": 0.48047867804790706, "lm_q1q2_score": 0.29196918971198044}} {"text": "##### Copyright 2020 The TensorFlow Authors.\n\n\n```python\n#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n```\n\n# Quantum data\n\n\n \n \n \n \n
\n View on TensorFlow.org\n \n Run in Google Colab\n \n View source on GitHub\n \n Download notebook\n
\n\nBuilding off of the comparisons made in the [MNIST](https://www.tensorflow.org/quantum/tutorials/mnist) tutorial, this tutorial explores the recent work of [Huang et al.](https://arxiv.org/abs/2011.01938) that shows how different datasets affect performance comparisons. In the work, the authors seek to understand how and when classical machine learning models can learn as well as (or better than) quantum models. The work also showcases an empirical performance separation between classical and quantum machine learning model via a carefully crafted dataset. You will:\n\n1. Prepare a reduced dimension Fashion-MNIST dataset.\n2. Use quantum circuits to re-label the dataset and compute Projected Quantum Kernel features (PQK).\n3. Train a classical neural network on the re-labeled dataset and compare the performance with a model that has access to the PQK features.\n\n## Setup\n\n\n```python\n!pip -q install tensorflow==2.3.1 tensorflow-quantum\n```\n\n\n```python\nimport cirq\nimport sympy\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_quantum as tfq\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit\nnp.random.seed(1234)\n```\n\n## 1. Data preparation\n\nYou will begin by preparing the fashion-MNIST dataset for running on a quantum computer.\n\n### 1.1 Download fashion-MNIST\n\nThe first step is to get the traditional fashion-mnist dataset. This can be done using the `tf.keras.datasets` module.\n\n\n```python\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()\n\n# Rescale the images from [0,255] to the [0.0,1.0] range.\nx_train, x_test = x_train[..., np.newaxis]/255.0, x_test[..., np.newaxis]/255.0\n\nprint(\"Number of original training examples:\", len(x_train))\nprint(\"Number of original test examples:\", len(x_test))\n```\n\n Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n 32768/29515 [=================================] - 0s 0us/step\n Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n 26427392/26421880 [==============================] - 1s 0us/step\n Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n 8192/5148 [===============================================] - 0s 0us/step\n Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n 4423680/4422102 [==============================] - 0s 0us/step\n Number of original training examples: 60000\n Number of original test examples: 10000\n\n\nFilter the dataset to keep just the T-shirts/tops and dresses, remove the other classes. 
At the same time convert the label, `y`, to boolean: True for 0 and False for 3.\n\n\n```python\ndef filter_03(x, y):\n keep = (y == 0) | (y == 3)\n x, y = x[keep], y[keep]\n y = y == 0\n return x,y\n```\n\n\n```python\nx_train, y_train = filter_03(x_train, y_train)\nx_test, y_test = filter_03(x_test, y_test)\n\nprint(\"Number of filtered training examples:\", len(x_train))\nprint(\"Number of filtered test examples:\", len(x_test))\n```\n\n Number of filtered training examples: 12000\n Number of filtered test examples: 2000\n\n\n\n```python\nprint(y_train[0])\n\nplt.imshow(x_train[0, :, :, 0])\nplt.colorbar()\n```\n\n### 1.2 Downscale the images\n\nJust like the MNIST example, you will need to downscale these images in order to be within the boundaries for current quantum computers. This time however you will use a PCA transformation to reduce the dimensions instead of a `tf.image.resize` operation.\n\n\n```python\ndef truncate_x(x_train, x_test, n_components=10):\n \"\"\"Perform PCA on image dataset keeping the top `n_components` components.\"\"\"\n n_points_train = tf.gather(tf.shape(x_train), 0)\n n_points_test = tf.gather(tf.shape(x_test), 0)\n\n # Flatten to 1D\n x_train = tf.reshape(x_train, [n_points_train, -1])\n x_test = tf.reshape(x_test, [n_points_test, -1])\n\n # Normalize.\n feature_mean = tf.reduce_mean(x_train, axis=0)\n x_train_normalized = x_train - feature_mean\n x_test_normalized = x_test - feature_mean\n\n # Truncate.\n e_values, e_vectors = tf.linalg.eigh(\n tf.einsum('ji,jk->ik', x_train_normalized, x_train_normalized))\n return tf.einsum('ij,jk->ik', x_train_normalized, e_vectors[:,-n_components:]), \\\n tf.einsum('ij,jk->ik', x_test_normalized, e_vectors[:, -n_components:])\n```\n\n\n```python\nDATASET_DIM = 10\nx_train, x_test = truncate_x(x_train, x_test, n_components=DATASET_DIM)\nprint(f'New datapoint dimension:', len(x_train[0]))\n```\n\n New datapoint dimension: 10\n\n\nThe last step is to reduce the size of the dataset to just 1000 training datapoints and 200 testing datapoints.\n\n\n```python\nN_TRAIN = 1000\nN_TEST = 200\nx_train, x_test = x_train[:N_TRAIN], x_test[:N_TEST]\ny_train, y_test = y_train[:N_TRAIN], y_test[:N_TEST]\n```\n\n\n```python\nprint(\"New number of training examples:\", len(x_train))\nprint(\"New number of test examples:\", len(x_test))\n```\n\n New number of training examples: 1000\n New number of test examples: 200\n\n\n## 2. Relabeling and computing PQK features\n\nYou will now prepare a \"stilted\" quantum dataset by incorporating quantum components and re-labeling the truncated fashion-MNIST dataset you've created above. In order to get the most seperation between quantum and classical methods, you will first prepare the PQK features and then relabel outputs based on their values. 
\n\n### 2.1 Quantum encoding and PQK features\nYou will create a new set of features, based on `x_train`, `y_train`, `x_test` and `y_test` that is defined to be the 1-RDM on all qubits of: \n\n$V(x_{\\text{train}} / n_{\\text{trotter}}) ^ {n_{\\text{trotter}}} U_{\\text{1qb}} | 0 \\rangle$\n\nWhere $U_\\text{1qb}$ is a wall of single qubit rotations and $V(\\hat{\\theta}) = e^{-i\\sum_i \\hat{\\theta_i} (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1})}$\n\nFirst, you can generate the wall of single qubit rotations:\n\n\n```python\ndef single_qubit_wall(qubits, rotations):\n \"\"\"Prepare a single qubit X,Y,Z rotation wall on `qubits`.\"\"\"\n wall_circuit = cirq.Circuit()\n for i, qubit in enumerate(qubits):\n for j, gate in enumerate([cirq.X, cirq.Y, cirq.Z]):\n wall_circuit.append(gate(qubit) ** rotations[i][j])\n\n return wall_circuit\n```\n\nYou can quickly verify this works by looking at the circuit:\n\n\n```python\nSVGCircuit(single_qubit_wall(\n cirq.GridQubit.rect(1,4), np.random.uniform(size=(4, 3))))\n```\n\n\n\n\n \n\n \n\n\n\nNext you can prepare $V(\\hat{\\theta})$ with the help of `tfq.util.exponential` which can exponentiate any commuting `cirq.PauliSum` objects:\n\n\n```python\ndef v_theta(qubits):\n \"\"\"Prepares a circuit that generates V(\\theta).\"\"\"\n ref_paulis = [\n cirq.X(q0) * cirq.X(q1) + \\\n cirq.Y(q0) * cirq.Y(q1) + \\\n cirq.Z(q0) * cirq.Z(q1) for q0, q1 in zip(qubits, qubits[1:])\n ]\n exp_symbols = list(sympy.symbols('ref_0:'+str(len(ref_paulis))))\n return tfq.util.exponential(ref_paulis, exp_symbols), exp_symbols\n```\n\nThis circuit might be a little bit harder to verify by looking at, but you can still examine a two qubit case to see what is happening:\n\n\n```python\ntest_circuit, test_symbols = v_theta(cirq.GridQubit.rect(1, 2))\nprint(f'Symbols found in circuit:{test_symbols}')\nSVGCircuit(test_circuit)\n```\n\n Symbols found in circuit:[ref_0]\n\n\n\n\n\n \n\n \n\n\n\nNow you have all the building blocks you need to put your full encoding circuits together:\n\n\n```python\ndef prepare_pqk_circuits(qubits, classical_source, n_trotter=10):\n \"\"\"Prepare the pqk feature circuits around a dataset.\"\"\"\n n_qubits = len(qubits)\n n_points = len(classical_source)\n\n # Prepare random single qubit rotation wall.\n random_rots = np.random.uniform(-2, 2, size=(n_qubits, 3))\n initial_U = single_qubit_wall(qubits, random_rots)\n\n # Prepare parametrized V\n V_circuit, symbols = v_theta(qubits)\n exp_circuit = cirq.Circuit(V_circuit for t in range(n_trotter))\n \n # Convert to `tf.Tensor`\n initial_U_tensor = tfq.convert_to_tensor([initial_U])\n initial_U_splat = tf.tile(initial_U_tensor, [n_points])\n\n full_circuits = tfq.layers.AddCircuit()(\n initial_U_splat, append=exp_circuit)\n # Replace placeholders in circuits with values from `classical_source`.\n return tfq.resolve_parameters(\n full_circuits, tf.convert_to_tensor([str(x) for x in symbols]),\n tf.convert_to_tensor(classical_source*(n_qubits/3)/n_trotter))\n```\n\nChooe some qubits and prepare the data encoding circuits:\n\n\n```python\nqubits = cirq.GridQubit.rect(1, DATASET_DIM + 1)\nq_x_train_circuits = prepare_pqk_circuits(qubits, x_train)\nq_x_test_circuits = prepare_pqk_circuits(qubits, x_test)\n```\n\nNext, compute the PQK features based on the 1-RDM of the dataset circuits above and store the results in `rdm`, a `tf.Tensor` with shape `[n_points, n_qubits, 3]`. 
The entries in `rdm[i][j][k]` = $\\langle \\psi_i | OP^k_j | \\psi_i \\rangle$ where `i` indexes over datapoints, `j` indexes over qubits and `k` indexes over $\\lbrace \\hat{X}, \\hat{Y}, \\hat{Z} \\rbrace$ .\n\n\n```python\ndef get_pqk_features(qubits, data_batch):\n \"\"\"Get PQK features based on above construction.\"\"\"\n ops = [[cirq.X(q), cirq.Y(q), cirq.Z(q)] for q in qubits]\n ops_tensor = tf.expand_dims(tf.reshape(tfq.convert_to_tensor(ops), -1), 0)\n batch_dim = tf.gather(tf.shape(data_batch), 0)\n ops_splat = tf.tile(ops_tensor, [batch_dim, 1])\n exp_vals = tfq.layers.Expectation()(data_batch, operators=ops_splat)\n rdm = tf.reshape(exp_vals, [batch_dim, len(qubits), -1])\n return rdm\n```\n\n\n```python\nx_train_pqk = get_pqk_features(qubits, q_x_train_circuits)\nx_test_pqk = get_pqk_features(qubits, q_x_test_circuits)\nprint('New PQK training dataset has shape:', x_train_pqk.shape)\nprint('New PQK testing dataset has shape:', x_test_pqk.shape)\n```\n\n New PQK training dataset has shape: (1000, 11, 3)\n New PQK testing dataset has shape: (200, 11, 3)\n\n\n### 2.2 Re-labeling based on PQK features\nNow that you have these quantum generated features in `x_train_pqk` and `x_test_pqk`, it is time to re-label the dataset. To achieve maximum seperation between quantum and classical performance you can re-label the dataset based on the spectrum information found in `x_train_pqk` and `x_test_pqk`.\n\nNote: This preparation of your dataset to explicitly maximize the seperation in performance between the classical and quantum models might feel like cheating, but it provides a **very** important proof of existance for datasets that are hard for classical computers and easy for quantum computers to model. There would be no point in searching for quantum advantage in QML if you couldn't first create something like this to demonstrate advantage.\n\n\n```python\ndef compute_kernel_matrix(vecs, gamma):\n \"\"\"Computes d[i][j] = e^ -gamma * (vecs[i] - vecs[j]) ** 2 \"\"\"\n scaled_gamma = gamma / (\n tf.cast(tf.gather(tf.shape(vecs), 1), tf.float32) * tf.math.reduce_std(vecs))\n return scaled_gamma * tf.einsum('ijk->ij',(vecs[:,None,:] - vecs) ** 2)\n\ndef get_spectrum(datapoints, gamma=1.0):\n \"\"\"Compute the eigenvalues and eigenvectors of the kernel of datapoints.\"\"\"\n KC_qs = compute_kernel_matrix(datapoints, gamma)\n S, V = tf.linalg.eigh(KC_qs)\n S = tf.math.abs(S)\n return S, V\n```\n\n\n```python\nS_pqk, V_pqk = get_spectrum(\n tf.reshape(tf.concat([x_train_pqk, x_test_pqk], 0), [-1, len(qubits) * 3]))\n\nS_original, V_original = get_spectrum(\n tf.cast(tf.concat([x_train, x_test], 0), tf.float32), gamma=0.005)\n\nprint('Eigenvectors of pqk kernel matrix:', V_pqk)\nprint('Eigenvectors of original kernel matrix:', V_original)\n```\n\n Eigenvectors of pqk kernel matrix: tf.Tensor(\n [[ 0.02077353 -0.015596 -0.01320663 ... -0.05652976 -0.01059049\n 0.02284898]\n [ 0.01186311 -0.04885919 0.02122526 ... 0.00108375 -0.71264434\n 0.03107676]\n [ 0.02042142 0.01409236 0.0090412 ... 0.05545552 0.01048413\n 0.02915451]\n ...\n [-0.0792932 -0.00414834 -0.02020384 ... 0.11810818 -0.03445855\n 0.04563508]\n [-0.07588085 -0.02885237 0.00121487 ... 0.06278482 -0.06508292\n 0.04250367]\n [-0.06047752 -0.02858609 0.01928959 ... -0.02261813 -0.01793749\n 0.04086718]], shape=(1200, 1200), dtype=float32)\n Eigenvectors of original kernel matrix: tf.Tensor(\n [[ 0.03835681 0.0283473 -0.01169789 ... 0.02343717 0.0211248\n 0.03206972]\n [-0.04018159 0.00888097 -0.01388255 ... 
0.00582427 0.717551\n 0.02881948]\n [-0.0166719 0.01350376 -0.03663862 ... 0.02467175 -0.00415936\n 0.02195409]\n ...\n [-0.03015648 -0.01671632 -0.01603392 ... 0.00100583 -0.00261221\n 0.02365689]\n [ 0.0039777 -0.04998879 -0.00528336 ... 0.01560401 -0.04330755\n 0.02782002]\n [-0.01665728 -0.00818616 -0.0432341 ... 0.00088256 0.00927396\n 0.01875088]], shape=(1200, 1200), dtype=float32)\n\n\nNow you have everything you need to re-label the dataset! Now you can consult with the flowchart to better understand how to maximize performance seperation when re-labeling the dataset:\n\n\n\nIn order to maximize the seperation between quantum and classical models, you will attempt to maximize the geometric difference between the original dataset and the PQK features kernel matrices $g(K_1 || K_2) = \\sqrt{ || \\sqrt{K_2} K_1^{-1} \\sqrt{K_2} || _\\infty}$ using `S_pqk, V_pqk` and `S_original, V_original`. A large value of $g$ ensures that you initially move to the right in the flowchart down towards a prediction advantage in the quantum case.\n\nNote: Computing quantities for $s$ and $d$ are also very useful when looking to better understand performance seperations. In this case ensuring a large $g$ value is enough to see performance seperation.\n\n\n```python\ndef get_stilted_dataset(S, V, S_2, V_2, lambdav=1.1):\n \"\"\"Prepare new labels that maximize geometric distance between kernels.\"\"\"\n S_diag = tf.linalg.diag(S ** 0.5)\n S_2_diag = tf.linalg.diag(S_2 / (S_2 + lambdav) ** 2)\n scaling = S_diag @ tf.transpose(V) @ \\\n V_2 @ S_2_diag @ tf.transpose(V_2) @ \\\n V @ S_diag\n\n # Generate new lables using the largest eigenvector.\n _, vecs = tf.linalg.eig(scaling)\n new_labels = tf.math.real(\n tf.einsum('ij,j->i', tf.cast(V @ S_diag, tf.complex64), vecs[-1])).numpy()\n # Create new labels and add some small amount of noise.\n final_y = new_labels > np.median(new_labels)\n noisy_y = (final_y ^ (np.random.uniform(size=final_y.shape) > 0.95))\n return noisy_y\n```\n\n\n```python\ny_relabel = get_stilted_dataset(S_pqk, V_pqk, S_original, V_original)\ny_train_new, y_test_new = y_relabel[:N_TRAIN], y_relabel[N_TRAIN:]\n```\n\n## 3. Comparing models\nNow that you have prepared your dataset it is time to compare model performance. 
You will create two small feedforward neural networks and compare performance when they are given access to the PQK features found in `x_train_pqk`.\n\n### 3.1 Create PQK enhanced model\nUsing standard `tf.keras` library features you can now create and a train a model on the `x_train_pqk` and `y_train_new` datapoints:\n\n\n```python\n#docs_infra: no_execute\ndef create_pqk_model():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(32, activation='sigmoid', input_shape=[len(qubits) * 3,]))\n model.add(tf.keras.layers.Dense(16, activation='sigmoid'))\n model.add(tf.keras.layers.Dense(1))\n return model\n\npqk_model = create_pqk_model()\npqk_model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer=tf.keras.optimizers.Adam(learning_rate=0.003),\n metrics=['accuracy'])\n\npqk_model.summary()\n```\n\n Model: \"sequential\"\n _________________________________________________________________\n Layer (type) Output Shape Param # \n =================================================================\n dense (Dense) (None, 32) 1088 \n _________________________________________________________________\n dense_1 (Dense) (None, 16) 528 \n _________________________________________________________________\n dense_2 (Dense) (None, 1) 17 \n =================================================================\n Total params: 1,633\n Trainable params: 1,633\n Non-trainable params: 0\n _________________________________________________________________\n\n\n\n```python\n#docs_infra: no_execute\npqk_history = pqk_model.fit(tf.reshape(x_train_pqk, [N_TRAIN, -1]),\n y_train_new,\n batch_size=32,\n epochs=1000,\n verbose=0,\n validation_data=(tf.reshape(x_test_pqk, [N_TEST, -1]), y_test_new))\n```\n\n### 3.2 Create a classical model\nSimilar to the code above you can now also create a classical model that doesn't have access to the PQK features in your stilted dataset. This model can be trained using `x_train` and `y_label_new`.\n\n\n```python\n#docs_infra: no_execute\ndef create_fair_classical_model():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(32, activation='sigmoid', input_shape=[DATASET_DIM,]))\n model.add(tf.keras.layers.Dense(16, activation='sigmoid'))\n model.add(tf.keras.layers.Dense(1))\n return model\n\nmodel = create_fair_classical_model()\nmodel.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer=tf.keras.optimizers.Adam(learning_rate=0.03),\n metrics=['accuracy'])\n\nmodel.summary()\n```\n\n Model: \"sequential_1\"\n _________________________________________________________________\n Layer (type) Output Shape Param # \n =================================================================\n dense_3 (Dense) (None, 32) 352 \n _________________________________________________________________\n dense_4 (Dense) (None, 16) 528 \n _________________________________________________________________\n dense_5 (Dense) (None, 1) 17 \n =================================================================\n Total params: 897\n Trainable params: 897\n Non-trainable params: 0\n _________________________________________________________________\n\n\n\n```python\n#docs_infra: no_execute\nclassical_history = model.fit(x_train,\n y_train_new,\n batch_size=32,\n epochs=1000,\n verbose=0,\n validation_data=(x_test, y_test_new))\n```\n\n### 3.3 Compare performance\nNow that you have trained the two models you can quickly plot the performance gaps in the validation data between the two. 
Typically both models will achieve > 0.9 accuaracy on the training data. However on the validation data it becomes clear that only the information found in the PQK features is enough to make the model generalize well to unseen instances.\n\n\n```python\n#docs_infra: no_execute\nplt.figure(figsize=(10,5))\nplt.plot(classical_history.history['accuracy'], label='accuracy_classical')\nplt.plot(classical_history.history['val_accuracy'], label='val_accuracy_classical')\nplt.plot(pqk_history.history['accuracy'], label='accuracy_quantum')\nplt.plot(pqk_history.history['val_accuracy'], label='val_accuracy_quantum')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend()\n```\n\nSuccess: You have engineered a stilted quantum dataset that can intentionally defeat classical models in a fair (but contrived) setting. Try comparing results using other types of classical models. The next step is to try and see if you can find new and interesting datasets that can defeat classical models without needing to engineer them yourself!\n\n## 4. Important conclusions\n\nThere are several important conclusions you can draw from this and the [MNIST](https://www.tensorflow.org/quantum/tutorials/mnist) experiments:\n\n1. It's very unlikely that the quantum models of today will beat classical model performance on classical data. Especially on today's classical datasets that can have upwards of a million datapoints.\n\n2. Just because the data might come from a hard to classically simulate quantum circuit, doesn't necessarily make the data hard to learn for a classical model.\n\n3. Datasets (ultimately quantum in nature) that are easy for quantum models to learn and hard for classical models to learn do exist, regardless of model architecture or training algorithms used.\n", "meta": {"hexsha": "4114f732e2c3c9a04e9345a1fdcae1759929f610", "size": 111063, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "QuantumCodeFiles/quantum_data.ipynb", "max_stars_repo_name": "SevdanurGENC/TensorFlow-Quantum-Machine-Learning", "max_stars_repo_head_hexsha": "a605081f60cb7f14507ed477f4155cb7eccf6897", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "QuantumCodeFiles/quantum_data.ipynb", "max_issues_repo_name": "SevdanurGENC/TensorFlow-Quantum-Machine-Learning", "max_issues_repo_head_hexsha": "a605081f60cb7f14507ed477f4155cb7eccf6897", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "QuantumCodeFiles/quantum_data.ipynb", "max_forks_repo_name": "SevdanurGENC/TensorFlow-Quantum-Machine-Learning", "max_forks_repo_head_hexsha": "a605081f60cb7f14507ed477f4155cb7eccf6897", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-19T17:47:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-19T17:47:55.000Z", "avg_line_length": 100.1469792606, "max_line_length": 49822, "alphanum_fraction": 0.7680145503, "converted": true, "num_tokens": 5836, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5156199157230157, "lm_q2_score": 0.5660185351961015, "lm_q1q2_score": 0.2918504294154786}} {"text": "```python\n# This cell is added by sphinx-gallery\n# It can be customized to whatever you like\n%matplotlib inline\n```\n\n\n\nBasic tutorial: qubit rotation\n==============================\n\n.. meta::\n :property=\"og:description\": To see how PennyLane allows the easy construction and optimization\n of quantum functions, let's consider the 'hello world' of QML: qubit rotation.\n :property=\"og:image\": https://pennylane.ai/qml/_images/bloch.png\n\n.. related::\n\n tutorial_plugins_hybrid Plugins and Hybrid computation\n tutorial_gaussian_transformation Gaussian transformation\n tutorial_state_preparation Training a quantum circuit with PyTorch\n\n*Author: PennyLane dev team. Last updated: 19 Jan 2021.*\n\nTo see how PennyLane allows the easy construction and optimization of quantum functions, let's\nconsider the simple case of **qubit rotation** the PennyLane version of the 'Hello, world!'\nexample.\n\nThe task at hand is to optimize two rotation gates in order to flip a single\nqubit from state $\\left|0\\right\\rangle$ to state $\\left|1\\right\\rangle$.\n\n\nThe quantum circuit\n-------------------\n\nIn the qubit rotation example, we wish to implement the following quantum circuit:\n\n.. figure:: ../demonstrations/qubit_rotation/rotation_circuit.png\n :align: center\n :width: 40%\n :target: javascript:void(0);\n\nBreaking this down step-by-step, we first start with a qubit in the ground state\n$|0\\rangle = \\begin{bmatrix}1 & 0 \\end{bmatrix}^T$,\nand rotate it around the x-axis by applying the gate\n\n\\begin{align}R_x(\\phi_1) = e^{-i \\phi_1 \\sigma_x /2} =\n \\begin{bmatrix} \\cos \\frac{\\phi_1}{2} & -i \\sin \\frac{\\phi_1}{2} \\\\\n -i \\sin \\frac{\\phi_1}{2} & \\cos \\frac{\\phi_1}{2}\n \\end{bmatrix},\\end{align}\n\nand then around the y-axis via the gate\n\n\\begin{align}R_y(\\phi_2) = e^{-i \\phi_2 \\sigma_y/2} =\n \\begin{bmatrix} \\cos \\frac{\\phi_2}{2} & - \\sin \\frac{\\phi_2}{2} \\\\\n \\sin \\frac{\\phi_2}{2} & \\cos \\frac{\\phi_2}{2}\n \\end{bmatrix}.\\end{align}\n\nAfter these operations the qubit is now in the state\n\n\\begin{align}| \\psi \\rangle = R_y(\\phi_2) R_x(\\phi_1) | 0 \\rangle.\\end{align}\n\nFinally, we measure the expectation value $\\langle \\psi \\mid \\sigma_z \\mid \\psi \\rangle$\nof the Pauli-Z operator\n\n\\begin{align}\\sigma_z =\n \\begin{bmatrix} 1 & 0 \\\\\n 0 & -1\n \\end{bmatrix}.\\end{align}\n\nUsing the above to calculate the exact expectation value, we find that\n\n\\begin{align}\\langle \\psi \\mid \\sigma_z \\mid \\psi \\rangle\n = \\langle 0 \\mid R_x(\\phi_1)^\\dagger R_y(\\phi_2)^\\dagger \\sigma_z R_y(\\phi_2) R_x(\\phi_1) \\mid 0 \\rangle\n = \\cos(\\phi_1)\\cos(\\phi_2).\\end{align}\n\nDepending on the circuit parameters $\\phi_1$ and $\\phi_2$, the\noutput expectation lies between $1$ (if $\\left|\\psi\\right\\rangle = \\left|0\\right\\rangle$)\nand $-1$ (if $\\left|\\psi\\right\\rangle = \\left|1\\right\\rangle$).\n\n\nLet's see how we can easily implement and optimize this circuit using PennyLane.\n\nImporting PennyLane and NumPy\n-----------------------------\n\nThe first thing we need to do is import PennyLane, as well as the wrapped version\nof NumPy provided by PennyLane.\n\n\n\n\n```python\nimport pennylane as qml\nfrom pennylane import numpy as np\n```\n\n.. 
important::\n\n When constructing a hybrid quantum/classical computational model with PennyLane,\n it is important to **always import NumPy from PennyLane**, not the standard NumPy!\n\n By importing the wrapped version of NumPy provided by PennyLane, you can combine\n the power of NumPy with PennyLane:\n\n * continue to use the classical NumPy functions and arrays you know and love\n * combine quantum functions (evaluated on quantum hardware/simulators) and\n classical functions (provided by NumPy)\n * allow PennyLane to automatically calculate gradients of both classical and\n quantum functions\n\n\n\nCreating a device\n-----------------\n\nBefore we can construct our quantum node, we need to initialize a **device**.\n\n.. admonition:: Definition\n :class: defn\n\n Any computational object that can apply quantum operations and return a measurement value\n is called a quantum **device**.\n\n In PennyLane, a device could be a hardware device (such as the IBM QX4, via the\n PennyLane-PQ plugin), or a software simulator (such as Strawberry Fields, via the\n PennyLane-SF plugin).\n\n.. tip::\n\n *Devices are loaded in PennyLane via the function* :func:`~.pennylane.device`\n\n\nPennyLane supports devices using both the qubit model of quantum computation and devices\nusing the CV model of quantum computation. In fact, even a hybrid computation containing\nboth qubit and CV quantum nodes is possible; see the\n`hybrid computation example ` for more details.\n\nFor this tutorial, we are using the qubit model, so let's initialize the ``'default.qubit'`` device\nprovided by PennyLane; a simple pure-state qubit simulator.\n\n\n\n\n```python\ndev1 = qml.device(\"default.qubit\", wires=1)\n```\n\nFor all devices, :func:`~.pennylane.device` accepts the following arguments:\n\n* ``name``: the name of the device to be loaded\n* ``wires``: the number of subsystems to initialize the device with\n\nHere, as we only require a single qubit for this example, we set ``wires=1``.\n\n\n\nConstructing the QNode\n----------------------\n\nNow that we have initialized our device, we can begin to construct a\n**quantum node** (or QNode).\n\n\n.. admonition:: Definition\n :class: defn\n\n QNodes are an abstract encapsulation of a quantum function, described by a\n quantum circuit. QNodes are bound to a particular quantum device, which is\n used to evaluate expectation and variance values of this circuit.\n\n.. 
tip::\n\n *QNodes can be constructed via the* :class:`~.pennylane.QNode`\n *class, or by using the provided* :func:`~.pennylane.qnode` decorator.\n\nFirst, we need to define the quantum function that will be evaluated in the QNode:\n\n\n\n\n```python\ndef circuit(params):\n qml.RX(params[0], wires=0)\n qml.RY(params[1], wires=0)\n return qml.expval(qml.PauliZ(0))\n```\n\nThis is a simple circuit, matching the one described above.\nNotice that the function ``circuit()`` is constructed as if it were any\nother Python function; it accepts a positional argument ``params``, which may\nbe a list, tuple, or array, and uses the individual elements for gate parameters.\n\nHowever, quantum functions are a **restricted subset** of Python functions.\nFor a Python function to also be a valid quantum function, there are some\nimportant restrictions:\n\n* **Quantum functions must contain quantum operations, one operation per line,\n in the order in which they are to be applied.**\n\n In addition, we must always specify the subsystem the operation applies to,\n by passing the ``wires`` argument; this may be a list or an integer, depending\n on how many wires the operation acts on.\n\n For a full list of quantum operations, see :doc:`the documentation `.\n\n* **Quantum functions must return either a single or a tuple of measured observables**.\n\n As a result, the quantum function always returns a classical quantity, allowing\n the QNode to interface with other classical functions (and also other QNodes).\n\n For a full list of observables, see :doc:`the documentation `.\n The documentation also provides details on supported :doc:`measurement return types `.\n\n
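As an illustration (not taken from the original example), a quantum function may also act on several wires and return a tuple of measured observables. A minimal sketch, reusing the ``qml`` import from above and assuming a two-wire device (e.g. ``qml.device("default.qubit", wires=2)``) when it is later bound to a QNode:

```python
def two_wire_circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    # A tuple of measured observables is a valid return value
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))
```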

Note

Certain devices may only support a subset of the available PennyLane\n operations/observables, or may even provide additional operations/observables.\n Please consult the documentation for the plugin/device for more details.

\n\nOnce we have written the quantum function, we convert it into a :class:`~.pennylane.QNode` running\non device ``dev1`` by applying the :func:`~.pennylane.qnode` decorator.\n**directly above** the function definition:\n\n\n\n\n```python\n@qml.qnode(dev1)\ndef circuit(params):\n qml.RX(params[0], wires=0)\n qml.RY(params[1], wires=0)\n return qml.expval(qml.PauliZ(0))\n```\n\nThus, our ``circuit()`` quantum function is now a :class:`~.pennylane.QNode`, which will run on\ndevice ``dev1`` every time it is evaluated.\n\nTo evaluate, we simply call the function with some appropriate numerical inputs:\n\n\n\n\n```python\nprint(circuit([0.54, 0.12]))\n```\n\n 0.8515405859048368\n\n\nCalculating quantum gradients\n-----------------------------\n\nThe gradient of the function ``circuit``, encapsulated within the ``QNode``,\ncan be evaluated by utilizing the same quantum\ndevice (``dev1``) that we used to evaluate the function itself.\n\nPennyLane incorporates both analytic differentiation, as well as numerical\nmethods (such as the method of finite differences). Both of these are done\nautomatically.\n\nWe can differentiate by using the built-in :func:`~.pennylane.grad` function.\nThis returns another function, representing the gradient (i.e., the vector of\npartial derivatives) of ``circuit``. The gradient can be evaluated in the same\nway as the original function:\n\n\n\n\n```python\ndcircuit = qml.grad(circuit, argnum=0)\n```\n\nThe function :func:`~.pennylane.grad` itself **returns a function**, representing\nthe derivative of the QNode with respect to the argument specified in ``argnum``.\nIn this case, the function ``circuit`` takes one argument (``params``), so we\nspecify ``argnum=0``. Because the argument has two elements, the returned gradient\nis two-dimensional. We can then evaluate this gradient function at any point in the parameter space.\n\n\n\n\n```python\nprint(dcircuit([0.54, 0.12]))\n```\n\n [array(-0.51043865), array(-0.1026782)]\n\n\n**A note on arguments**\n\nQuantum circuit functions, being a restricted subset of Python functions,\ncan also make use of multiple positional arguments and keyword arguments.\nFor example, we could have defined the above quantum circuit function using\ntwo positional arguments, instead of one array argument:\n\n\n\n\n```python\n@qml.qnode(dev1)\ndef circuit2(phi1, phi2):\n qml.RX(phi1, wires=0)\n qml.RY(phi2, wires=0)\n return qml.expval(qml.PauliZ(0))\n```\n\nWhen we calculate the gradient for such a function, the usage of ``argnum``\nwill be slightly different. In this case, ``argnum=0`` will return the gradient\nwith respect to only the first parameter (``phi1``), and ``argnum=1`` will give\nthe gradient for ``phi2``. To get the gradient with respect to both parameters,\nwe can use ``argnum=[0,1]``:\n\n\n\n\n```python\ndcircuit = qml.grad(circuit2, argnum=[0, 1])\nprint(dcircuit(0.54, 0.12))\n```\n\n (array(-0.51043865), array(-0.1026782))\n\n\nKeyword arguments may also be used in your custom quantum function. PennyLane\ndoes **not** differentiate QNodes with respect to keyword arguments,\nso they are useful for passing external data to your QNode.\n\n\n\nOptimization\n------------\n\n.. admonition:: Definition\n :class: defn\n\n If using the default NumPy/Autograd interface, PennyLane provides a collection\n of optimizers based on gradient descent. These optimizers accept a cost function\n and initial parameters, and utilize PennyLane's automatic differentiation\n to perform gradient descent.\n\n.. 
tip::\n\n *See* :doc:`introduction/optimizers` *for details and documentation of available optimizers*\n\nNext, let's make use of PennyLane's built-in optimizers to optimize the two circuit\nparameters $\\phi_1$ and $\\phi_2$ such that the qubit, originally in state\n$\\left|0\\right\\rangle$, is rotated to be in state $\\left|1\\right\\rangle$. This is equivalent to measuring a\nPauli-Z expectation value of $-1$, since the state $\\left|1\\right\\rangle$ is an eigenvector\nof the Pauli-Z matrix with eigenvalue $\\lambda=-1$.\n\nIn other words, the optimization procedure will find the weights\n$\\phi_1$ and $\\phi_2$ that result in the following rotation on the Bloch sphere:\n\n.. figure:: ../demonstrations/qubit_rotation/bloch.png\n :align: center\n :width: 70%\n :target: javascript:void(0);\n\nTo do so, we need to define a **cost** function. By *minimizing* the cost function, the\noptimizer will determine the values of the circuit parameters that produce the desired outcome.\n\nIn this case, our desired outcome is a Pauli-Z expectation value of $-1$. Since we\nknow that the Pauli-Z expectation is bound between $[-1, 1]$, we can define our\ncost directly as the output of the QNode:\n\n\n\n\n```python\ndef cost(x):\n return circuit(x)\n```\n\nTo begin our optimization, let's choose small initial values of $\\phi_1$ and $\\phi_2$:\n\n\n\n\n```python\ninit_params = np.array([0.011, 0.012])\nprint(cost(init_params))\n```\n\n 0.9998675058299389\n\n\nWe can see that, for these initial parameter values, the cost function is close to $1$.\n\nFinally, we use an optimizer to update the circuit parameters for 100 steps. We can use the built-in\n:class:`~.pennylane.GradientDescentOptimizer` class:\n\n\n\n\n```python\n# initialise the optimizer\nopt = qml.GradientDescentOptimizer(stepsize=0.4)\n\n# set the number of steps\nsteps = 100\n# set the initial parameter values\nparams = init_params\n\nfor i in range(steps):\n # update the circuit parameters\n params = opt.step(cost, params)\n\n if (i + 1) % 5 == 0:\n print(\"Cost after step {:5d}: {: .7f}\".format(i + 1, cost(params)))\n\nprint(\"Optimized rotation angles: {}\".format(params))\n```\n\n Cost after step 5: 0.9961778\n Cost after step 10: 0.8974944\n Cost after step 15: 0.1440490\n Cost after step 20: -0.1536720\n Cost after step 25: -0.9152496\n Cost after step 30: -0.9994046\n Cost after step 35: -0.9999964\n Cost after step 40: -1.0000000\n Cost after step 45: -1.0000000\n Cost after step 50: -1.0000000\n Cost after step 55: -1.0000000\n Cost after step 60: -1.0000000\n Cost after step 65: -1.0000000\n Cost after step 70: -1.0000000\n Cost after step 75: -1.0000000\n Cost after step 80: -1.0000000\n Cost after step 85: -1.0000000\n Cost after step 90: -1.0000000\n Cost after step 95: -1.0000000\n Cost after step 100: -1.0000000\n Optimized rotation angles: [7.15266381e-18 3.14159265e+00]\n\n\nWe can see that the optimization converges after approximately 40 steps.\n\nSubstituting this into the theoretical result $\\langle \\psi \\mid \\sigma_z \\mid \\psi \\rangle = \\cos\\phi_1\\cos\\phi_2$,\nwe can verify that this is indeed one possible value of the circuit parameters that\nproduces $\\langle \\psi \\mid \\sigma_z \\mid \\psi \\rangle=-1$, resulting in the qubit being rotated\nto the state $\\left|1\\right\\rangle$.\n\n
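As a quick numerical check (an illustrative addition, not in the original text), we can substitute the optimized angles stored in ``params`` into the closed-form expectation value derived earlier:

```python
# <psi|sigma_z|psi> = cos(phi_1) * cos(phi_2); for phi_1 ~ 0 and phi_2 ~ pi this gives -1
phi_1, phi_2 = params
print(np.cos(phi_1) * np.cos(phi_2))  # approximately -1.0
```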

Note

Some optimizers, such as :class:`~.pennylane.AdagradOptimizer`, have\n internal hyperparameters that are stored in the optimizer instance. These can\n be reset using the :meth:`reset` method.

\n\nContinue on to the next tutorial, `gaussian_transformation`, to see a similar example using\ncontinuous-variable (CV) quantum nodes.\n\n\n", "meta": {"hexsha": "6eab95401d7eaec3b10504493da081b255e07d16", "size": 21511, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "98_quantum/01_Rotate_Qubit.ipynb", "max_stars_repo_name": "dpai/workshop", "max_stars_repo_head_hexsha": "d4936da77dac759ba2bac95a9584fde8e86c6b2b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2327, "max_stars_repo_stars_event_min_datetime": "2020-03-01T09:47:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-25T12:38:42.000Z", "max_issues_repo_path": "98_quantum/01_Rotate_Qubit.ipynb", "max_issues_repo_name": "trideau/Data-Science-with-AWS-Workshop", "max_issues_repo_head_hexsha": "7dbe7989fa99e88544da8bf262beec907c536093", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 209, "max_issues_repo_issues_event_min_datetime": "2020-03-01T17:14:12.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-08T20:35:42.000Z", "max_forks_repo_path": "98_quantum/01_Rotate_Qubit.ipynb", "max_forks_repo_name": "trideau/Data-Science-with-AWS-Workshop", "max_forks_repo_head_hexsha": "7dbe7989fa99e88544da8bf262beec907c536093", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 686, "max_forks_repo_forks_event_min_datetime": "2020-03-03T17:24:51.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-25T23:39:12.000Z", "avg_line_length": 34.5280898876, "max_line_length": 136, "alphanum_fraction": 0.5808656036, "converted": true, "num_tokens": 3923, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5117166047041652, "lm_q2_score": 0.5698526514141571, "lm_q1q2_score": 0.2916030639633187}} {"text": "# Stability analysis of ICA components for transcriptomic data \n\nHere we propose a short analysis of the stability and the reproductibility of ICA components extracted from several gene expression data sets. We are mainly interested in studying the behavior of transcriptomic data extracted from [\"Defining the Biological Basis of Radiomic Phenotypes in Lung Cancer\" Grossman et al. 2017](https://elifesciences.org/articles/23421).\n\n\n```python\n%load_ext autoreload\n%autoreload 2\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport time\n```\n\n## 0. Load data sets \n\n### Grossman data set \n\nThese data sets were extracted from [\"Defining the Biological Basis of Radiomic Phenotypes in Lung Cancer\" Grossman et al. 2017](https://elifesciences.org/articles/23421). \n \ndf1 contains the expression of 21,766 unique genes for 269 patients with Non-small cell lung cancer (NSCLC) treated at the H. Lee Moffitt Cancer Center, Tampa, Florida, USA. df2 contains the expression of the same 21,766 unique genes for 89 patients with Non-small cell lung cancer (NSCLC)treated at MAASTRO clinical, Maastricht, NL. Gene expression values were measured on a custom Rosetta/Merck Affymetrix 2.0 microarray chipset and normalized with the robust multi-array average (RMA) algorithm. \n\n\n```python\ndf1 = pd.read_excel(\"Data sets/data_USA.xlsx\" , sheet_name = 'expression' , index_col= 0).transpose()\ndf2 = pd.read_excel(\"Data sets/data_UE.xlsx\" , sheet_name = 'expression' , index_col= 0).transpose()\n\n#With xlrd >= 2.0.0 you will need to use a different engine since xrld removed support for anything \n#other than .xls files (make sure openpyxl is installed). 
It will usually take a bit of time.\n#df1 = pd.read_excel(\"Data sets/data_USA.xlsx\" , engine = 'openpyxl' , sheet_name = 'expression' , index_col= 0).transpose()\n#df2 = pd.read_excel(\"Data sets/data_UE.xlsx\" , engine = 'openpyxl' , sheet_name = 'expression' , index_col= 0).transpose()\n\ndic = {}\nfor col in df1.columns:\n dic[col] = int(col.split('.')[1])\n \ndf1 = df1.rename(columns = dic)\ndf1['Dataset'] = ['Grossman USA']*df1.shape[0]\ndf1 = df1.reset_index().rename(columns = {'index' : 'Samples'}).set_index(['Dataset' , 'Samples'])\n\ndf2 = df2.rename(columns = dic)\ndf2['Dataset'] = ['Grossman UE']*df2.shape[0]\ndf2 = df2.reset_index().rename(columns = {'index' : 'Samples'}).set_index(['Dataset' , 'Samples'])\n\ndf1.head()\n```\n\n\n\n\n
[Output of `df1.head()`: 5 rows × 21766 columns — RMA-normalized expression values for the first five 'Grossman USA' samples (RadioGenomic-017, -055, -227, -222 and -212), indexed by (Dataset, Samples), with one column per numeric gene ID.]
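As an aside (a toy sketch, not part of the original analysis), the (Dataset, Samples) MultiIndex built above is what later allows each cohort to be selected with `dataframe.loc[name]` and enumerated with `dataframe.index.unique(0)`:

```python
import numpy as np
import pandas as pd

# Toy frame with the same (Dataset, Samples) index layout as df1/df2/df3
idx = pd.MultiIndex.from_tuples(
    [('Grossman USA', 's1'), ('Grossman USA', 's2'), ('Grossman UE', 's3')],
    names=['Dataset', 'Samples'])
toy = pd.DataFrame(np.random.rand(3, 4), index=idx)

print(toy.index.unique(0))            # the cohort names on the first index level
print(toy.loc['Grossman USA'].shape)  # (2, 4): every sample of one cohort
```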
\n\n\n\n### Lim NSCL data set \n\nThis data set was extracted from [\"Data Descriptor: A merged lung cancer transcriptome dataset for clinical predictive modeling\" Bin Lim et al. 2018](https://www.nature.com/articles/sdata2018136). \nIt gathers 10 independent GEO cohorts of patients with Non-small cell lung cancer (NSCLC) or normal lung tissues. In total, it contains the expression of 10077 unique genes for 1118 patients with Non-small cell lung cancer (NSCLC) or normal lung tissues. The 10 GEO data sets use the same chip platform (Affymetrix Human Genome U133 Plus 2.0 Array). The expression values were normalized with the frozen Robust Multiarray Analysis algorithm (fRMA). \n\n**Note :** In order to be able to join this data set with the Grossman data set, we converted the official gene symbols to entrez ids. We used [the Database for Annotation, Visualization and Integrated Discovery(DAVID)](https://david.ncifcrf.gov/)\n\n\n```python\nnew_data = pd.read_excel(\"Data sets/new_data.xlsx\" , sheet_name=['expression' , 'clinical'] , index_col=0)\n\n#With xlrd >= 2.0.0 you will need to use a different engine since xrld removed support for anything \n#other than .xls files (make sure openpyxl is installed). It will usually take a bit of time.\n#new_data = pd.read_excel(\"Data sets/new_data.xlsx\" , engine = 'openpyxl' , sheet_name=['expression' , 'clinical'] , index_col=0)\n\nnew_data['expression'] = new_data['expression'].drop(columns = 'gene_name').transpose()\n\ntemp = new_data['expression'].join(new_data['clinical'][['Dataset' , 'Histology (ADC: adenocarcinoma; LCC: large cell carcinoma; SCC: squamous cell carcinoma)']])\ndf3 = temp.reset_index().rename(columns = {'index' : 'Samples'}).set_index(['Dataset' , 'Samples'])\n\ndel temp , new_data\n\ndf3 = df3[df3['Histology (ADC: adenocarcinoma; LCC: large cell carcinoma; SCC: squamous cell carcinoma)'] != 'Healthy'].drop(columns = 'Histology (ADC: adenocarcinoma; LCC: large cell carcinoma; SCC: squamous cell carcinoma)')\ndf3.head()\n```\n\n\n\n\n
[Output of `df3.head()`: 5 rows × 10077 columns — fRMA-normalized expression values for the first five GSE33356 samples (GSM494556 to GSM494560), indexed by (Dataset, Samples), with one column per Entrez gene ID.]
\n\n\n\nWe only keep data sets with more than 89 samples to avoid ending up with very noisy ICA components which will compromise the stability analysis.\n\n\n```python\ndataframe = pd.concat([df1, df2 , df3], join = 'inner')\nnames = list(dataframe.index.unique(0))\n\nexclude = []\nprint(\"Data sets in the study : \") \nfor name in names:\n n_samples = dataframe.loc[name].shape[0]\n if n_samples < 89:\n exclude.append(name)\n else:\n print(name , n_samples)\n \ndel df1 , df2 , df3\n\nnames = list(set(names) - set(exclude))\n```\n\n Data sets in the study : \n Grossman USA 262\n Grossman UE 89\n GSE19188 91\n GSE28571 100\n GSE50081 181\n GSE31210 226\n GSE18842 91\n\n\n## 1. Most Stable Transcriptome Dimension (MSTD) \n\nIn order to select the number of ICA components we refer to [\"Determining the optimal number of independent components for reproducible transcriptomic data analysis\" Kairov et al. 2017](https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-017-4112-9). The idea is to find a trade-off between a sufficiently large number of components to capture all essential information and a restricted dimension to avoid ending up with a lot of very unstable components which only capture noise. \n\nTo do so we plot the stability distribution for a wide range of different number of components M = 5, ... , 100 and we observe which choice of M most satisfies the trade-off the most.\n\n\n```python\n#MSTD plot for the data set GSE31210.\nfrom sica.base import MSTD\nX = dataframe.loc['GSE31210'].apply(func = lambda x : x - x.mean() , axis = 1).transpose()\nMSTD(X , 5 , 100 , 2 , 20 , max_iter = 3000)\n#MSTD(X.values , 5 , 100 , 2 , 20 , max_iter = 3000)\n```\n\n\n```python\nMSTD = 45\n```\n\n## 2. Stabilized ICA decompostion for each data set \n\nOur stabilized ICA algorithm is a copy of the ICASSO algorithm (implemented in MATLAB) ([\"ICASSO: software for investigating the reliability of ICA estimates by clustering and visualization\" Himber et al.](https://www.cs.helsinki.fi/u/ahyvarin/papers/Himberg03.pd) The main idea is to iterate several times the FastICA algorithm (ex: from sklearn), cluster the results (agglomerative hierarchical clustering) and define the final ICA components as the centrotype of each cluster.\n\n\n```python\nfrom sica.base import StabilizedICA as sICA\n```\n\n\n```python\nSources = []\ndecomp = sICA(n_components = MSTD , max_iter = 2000 , n_jobs = -1)\n\nfor name in names:\n start = time.time()\n X = dataframe.loc[name].apply(func = lambda x : x - x.mean() , axis = 1).transpose()\n #decomp.fit(X.values , n_runs = 30)\n decomp.fit(X , n_runs = 30)\n Sources.append(decomp.S_ )\n end = time.time()\n minutes, seconds = divmod(end - start, 60)\n print(name + \" done !\" , \"running time (min): \" + \"{:0>2}:{:05.2f}\".format(int(minutes),seconds))\n```\n\n Grossman USA done ! running time (min): 00:25.47\n GSE50081 done ! running time (min): 00:14.74\n GSE28571 done ! running time (min): 00:17.53\n GSE19188 done ! running time (min): 00:16.74\n GSE31210 done ! running time (min): 00:14.51\n GSE18842 done ! running time (min): 00:16.21\n Grossman UE done ! running time (min): 00:15.11\n\n\n## 3. Study of reproducibility - MNN graph\n\nIn order to study the reproducibility of ICA components between the four data sets we use the notion of Reciprocal Best Hit (RBH). 
We consider that a component $i$ extracted from a data set D1 and another component $j$ extracted from a data set D2 are linked if: \n\n\\begin{equation}\n|\\rho_{ij}| = \\max \\{ |\\rho_{ik}| , \\forall k \\, \\in D2 \\} = \\max \\{ |\\rho_{lj}| , \\forall l \\, \\in D1 \\}\n\\end{equation}\n\nHere $\\rho_{ij}$ represents the Pearson's correlation coefficient between the components $i$ and $j$. We chose to use Pearson's coefficient over non-parametric coefficients such as Kendall's or Spearman's coefficients since it is strongly influenced by extreme values. Yet in our case these extreme values characterize our ICA components .\n\n\n```python\nfrom sica.mutualknn import MNNgraph\n```\n\n**Note :** The layout of the following RBH network is strongly influenced by a random initialisation (Fruchterman-Reingold force-directed algorithm). One will have to run the following result several times until a satisfying result is obtained.\n\n\n```python\nfig , ax = plt.subplots(figsize = (10 , 7))\n\ncg = MNNgraph(data = Sources , names = names , k=1 , weighted = True)\ncg.draw(ax=ax, colors = ['r' , 'k' , 'b' , 'g' , 'yellow' , 'purple' , 'brown'] , spacing = 2)\n\nax.legend(bbox_to_anchor=(1.25 , 1))\n```\n\n\n```python\ncg.export_json(\"stability.json\")\n```\n\nWe can also draw the bipartite RBH graph for the ICA components of two data sets. We display the Pearson correlation coefficient for each link. In the following we display the bipartite graphs for the couples of sets ('Grossman UE' , 'GSE19188') and ('Grossman UE' , 'GSE50081').\n\n\n```python\nfig , axes = plt.subplots(1 , 2 , figsize=(20 , 10))\nfor i in range(2):\n cg = MNNgraph(data = [Sources[0] , Sources[i+1]] , names = [names[i] , names[i+1]] , k=1 , weighted = True)\n cg.draw(ax=axes[i] , bipartite_graph = True)\n```\n\n## 4. Cytoscape visualization of the stability network\n\nIn order to visualize the stability network with cytoscape directly on the jupyter notebook we use the [ipycytoscape package](https://github.com/QuantStack/ipycytoscape). 
Please install it with: \n* conda install -c conda-forge ipycytoscape\n* pip install ipycytoscape\n\n\n```python\nimport ipycytoscape\nimport json\n\nwith open(\"stability.json\") as fi:\n json_file = json.load(fi)\n \ncytoscapeobj = ipycytoscape.CytoscapeWidget()\ncytoscapeobj.graph.add_graph_from_json(json_file['elements'])\n\ncytoscapeobj.set_layout(name = 'cose')\n```\n\n\n```python\ncytoscapeobj.set_style([{'selector': 'node',\n 'css': {\n 'content': 'data(name)',\n 'font-size': '12px',\n 'text-valign': 'center',\n 'text-halign': 'center',\n 'background-color': 'blue',\n 'color': 'b',\n }\n },\n {'selector': 'edge',\n 'css': {\n 'curve-style': 'haystack',\n 'haystack-radius': '0.5',\n 'opacity': '0.7',\n 'line-color': '#bbb',\n 'width': 'mapData(weight , 0 , 1 , 1 , 15)',\n 'overlay-padding': '3px'\n }\n }])\n```\n\n\n```python\ncytoscapeobj\n```\n\n\n CytoscapeWidget(cytoscape_layout={'name': 'cose'}, cytoscape_style=[{'selector': 'node', 'css': {'content': 'd\u2026\n\n", "meta": {"hexsha": "c1abbc5d1e816d32a0ce2c7f7c2bc158e2c1ef83", "size": 457980, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/source/_examples/stability_study.ipynb", "max_stars_repo_name": "ncaptier/stabilized-ica", "max_stars_repo_head_hexsha": "945d6811d0353ced2aa866917440e218f05dd4d6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-11-22T11:28:05.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-02T09:03:00.000Z", "max_issues_repo_path": "docs/source/_examples/stability_study.ipynb", "max_issues_repo_name": "ncaptier/stabilized-ica", "max_issues_repo_head_hexsha": "945d6811d0353ced2aa866917440e218f05dd4d6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/source/_examples/stability_study.ipynb", "max_forks_repo_name": "ncaptier/stabilized-ica", "max_forks_repo_head_hexsha": "945d6811d0353ced2aa866917440e218f05dd4d6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 468.7615148414, "max_line_length": 161540, "alphanum_fraction": 0.925420324, "converted": true, "num_tokens": 6396, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5389832206876841, "lm_q2_score": 0.5389832206876841, "lm_q1q2_score": 0.2905029121828688}} {"text": "```python\nclass ClassName():\n def __init__(self,):\n print(hagehage)\n```\n\n\n```python\nhagehage = 6\n\nc = ClassName()\n```\n\n 6\n\n\n\n```python\nclass Myclass:\n def set_value(self, text):\n self.value = text\n def print_value(self):\n print(self.value)\n```\n\n\n```python\nif __name__ == \"__main__\":\n a = Myclass() # MyClass \u306e\u30a4\u30f3\u30b9\u30bf\u30f3\u30b9\u3092\u751f\u6210\n a.set_value(\"abc\") # \u5909\u6570 value \u306b\u6587\u5b57\u5217 \"abc\" \u3092\u4ee3\u5165 \n a.print_value() # abc\n```\n\n abc\n\n\n\n```python\nclass Oya:\n def __init__(self,a,b):\n self.a = a\n self.b = b\n def return_oya(self):\n oya = self.a + self.b\n return oya\n \nclass kodomo(Oya):\n def multiple_oya(self):\n ans = self.a * self.b\n return ans\n```\n\n\n```python\no = Oya(2, 3)\n\nk = kodomo(1,2)\nk.multiple_oya()\n```\n\n\n\n\n 2\n\n\n\n\n```python\nclass Cournot_bad:\n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n def product(self):\n self.q1 = (a-c)/b\n self.q2 = (a-c)/b\n def whole_product(self, q1, q2):\n self.q = self.q1 + self.q2\n def price(self, q):\n self.p = self.a - (self.b * self.q)\n def ans(self):\n return self.q1, self.q2, self.q, self.p\n```\n\n\n```python\nclass Cournot:\n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n def formula(self):\n print(\"P =\", self.a, \"-\", self.b, \"Q\")\n print(\"C =\", self.c, \"Q\")\n def ans(self):\n self.q1 = (self.a-self.c)/self.b\n self.q2 = (self.a-self.c)/self.b\n self.q = self.q1 + self.q2\n self.p = (self.a + (2 * self.b)) / 3\n return self.q1, self.q2, self.q, self.p\n```\n\n\n```python\ncournot = Cournot(400, 10, 5)\ncournot.formula()\ncournot.ans()\n```\n\n P = 400 - 10 Q\n C = 5 Q\n\n\n\n\n\n (39.5, 39.5, 79.0, 140.0)\n\n\n\n\n```python\nclass Test:\n def __init__(self, a, b):\n self.a = a\n self.b = b\n def first(self):\n self.c = self.a + self.b\n self.d = self.a * self.b\n def second(self):\n val = []\n for i in range(10):\n self.first()\n val.append((self.c, self.d))\n self.a = self.c\n self.b = self.d\n return val\n```\n\n\n```python\ntes = Test(1, 1)\ntes.second()\n```\n\n\n\n\n [(2, 1),\n (3, 2),\n (5, 6),\n (11, 30),\n (41, 330),\n (371, 13530),\n (13901, 5019630),\n (5033531, 69777876630),\n (69782910161, 351229105131280530),\n (351229174914190691, 24509789089304573335878465330)]\n\n\n\n\n```python\nl = ['me', 'he', 'she']\nfor i, name in enumerate(l):\n print(i, name)\n```\n\n 0 me\n 1 he\n 2 she\n\n\n\n```python\nclass Newton:\n def __init__(self, a, tol):\n self.a = a\n self.tol = tol\n def f(self, x):\n return x**2 - x + 1\n def new_slope(self, y):\n import sympy\n z = sympy.Symbol('z')\n der = sympy.diff(self.f(z), z)\n new_slope = der.subs(z, y)\n return new_slope\n def new_x(self, old_x):\n new_slope = self.new_slope(old_x)\n new_x = old_x - (self.f(old_x) / float(new_slope))\n return new_x\n def newton_method_zero(self):\n x = self.a\n loops = []\n while self.f(x) >= self.tol:\n loops.append(x)\n new_x = self.new_x(x)\n x = new_x\n return x, loops\n def newton_method_min(self):\n x = self.a\n loops = []\n while self.new_slope(x) > self.tol:\n loops.append(x)\n new_x = self.new_x(x)\n x = new_x\n return x, loops\n```\n\n\n```python\nn = Newton(5, 1e-5)\nn.newton_method_min()\n```\n\n\n\n\n (-8.167752356861968,\n [5, 2.6666666666666665, 1.41025641025641, 0.5431563741422893])\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": 
"553877244a04c4b5f8d64122bf75e12579f364f0", "size": 7781, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "class_practice.ipynb", "max_stars_repo_name": "NakajimaZemi/exercises2018", "max_stars_repo_head_hexsha": "8266f1d289cb08b3a9625c9d6c2c4e80592e0e7e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2018-04-26T07:39:18.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-20T06:53:47.000Z", "max_issues_repo_path": "class_practice.ipynb", "max_issues_repo_name": "NakajimaZemi/exercises2018", "max_issues_repo_head_hexsha": "8266f1d289cb08b3a9625c9d6c2c4e80592e0e7e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-09-27T07:06:32.000Z", "max_issues_repo_issues_event_max_datetime": "2018-09-27T07:06:32.000Z", "max_forks_repo_path": "class_practice.ipynb", "max_forks_repo_name": "NakajimaZemi/exercises2018", "max_forks_repo_head_hexsha": "8266f1d289cb08b3a9625c9d6c2c4e80592e0e7e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-09-27T06:42:58.000Z", "max_forks_repo_forks_event_max_datetime": "2018-12-20T07:11:26.000Z", "avg_line_length": 22.2951289398, "max_line_length": 72, "alphanum_fraction": 0.4196118751, "converted": true, "num_tokens": 1202, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5350984286266116, "lm_q2_score": 0.5428632831725052, "lm_q1q2_score": 0.29048528978469085}} {"text": "```python\nimport numpy as np\nfrom pycalphad import Model, Database, calculate, equilibrium\nimport pycalphad.variables as v\n\n#dbf = Database('2016-08-10-AlGdMgand18RLPSO-for 3d plot.tdb')\ndbf = Database('alfe_sei.TDB')\nmodels = {key: Model(dbf, ['AL', 'FE', 'VA'], key) for key in dbf.phases.keys()}\n```\n\n\n```python\n#Set compiler directives (cf. http://docs.cython.org/src/reference/compilation.html)\n%load_ext cython\nfrom Cython.Compiler.Options import _directive_defaults\n\n_directive_defaults['linetrace'] = True\n_directive_defaults['binding'] = True\n```\n\n The cython extension is already loaded. 
To reload it, use:\n %reload_ext cython\n\n\n\n```cython\n%%cython -a -f --compile-args=-DCYTHON_TRACE=1\nimport pycalphad.variables as v\nfrom pycalphad.core.utils import unpack_kwarg\nfrom pycalphad.core.utils import unpack_condition, unpack_phases\nfrom pycalphad import calculate, Model\nfrom pycalphad.constraints import mole_fraction\nfrom pycalphad.core.lower_convex_hull import lower_convex_hull\nfrom pycalphad.core.sympydiff_utils import build_functions as compiled_build_functions\nfrom pycalphad.core.constants import MIN_SITE_FRACTION\nfrom pycalphad.core.eqsolver import _solve_eq_at_conditions\nfrom sympy import Add, Symbol\nimport dask\nfrom dask import delayed\nimport dask.multiprocessing, dask.async\nfrom xarray import Dataset\nimport numpy as np\ncimport numpy as np\nfrom collections import namedtuple, OrderedDict\nfrom datetime import datetime\nfrom pycalphad.core.eqsolver import *\nfrom pycalphad.core.eqsolver import _build_multiphase_gradient, \\\n _build_multiphase_system, _compute_constraints, _compute_phase_dof, remove_degenerate_phases\ncimport cython\n\ndef _solve_eq_at_conditions(dbf, comps, properties, phase_records, callable_dict, conds_keys, verbose):\n \"\"\"\n Compute equilibrium for the given conditions.\n This private function is meant to be called from a worker subprocess.\n For that case, usually only a small slice of the master 'properties' is provided.\n Since that slice will be copied, we also return the modified 'properties'.\n\n Parameters\n ----------\n dbf : Database\n Thermodynamic database containing the relevant parameters.\n comps : list\n Names of components to consider in the calculation.\n properties : Dataset\n Will be modified! Thermodynamic properties and conditions.\n phase_records : dict of PhaseRecord\n Details on phase callables.\n callable_dict : dict of callable\n Objective functions for each phase.\n conds_keys : list of str\n List of conditions axes in dimension order.\n verbose : bool\n Print details.\n\n Returns\n -------\n properties : Dataset\n Modified with equilibrium values.\n \"\"\"\n cdef:\n double indep_sum\n int num_phases, num_vars, cur_iter, old_phase_length, new_phase_length, var_idx, sfidx, pfidx, m, n\n np.ndarray[ndim=1, dtype=np.float64_t] gradient_term, p_y, l_constraints, step\n np.ndarray[ndim=1, dtype=np.float64_t] site_fracs, candidate_site_fracs, l_multipliers, new_l_multipliers, candidate_phase_fracs, phase_fracs\n np.ndarray[ndim=2, dtype=np.float64_t] l_hessian, ymat, zmat, qmat, rmat, constraint_jac\n # Factored out via profiling\n prop_MU_values = properties['MU'].values\n prop_NP_values = properties['NP'].values\n prop_Phase_values = properties['Phase'].values\n prop_X_values = properties['X'].values\n prop_Y_values = properties['Y'].values\n prop_GM_values = properties['GM'].values\n\n it = np.nditer(prop_GM_values, flags=['multi_index'])\n\n #if verbose:\n # print('INITIAL CONFIGURATION')\n # print(properties.MU)\n # print(properties.Phase)\n # print(properties.NP)\n # print(properties.X)\n # print(properties.Y)\n # print('---------------------')\n while not it.finished:\n # A lot of this code relies on cur_conds being ordered!\n cur_conds = OrderedDict(zip(conds_keys,\n [np.asarray(properties['GM'].coords[b][a], dtype=np.float)\n for a, b in zip(it.multi_index, conds_keys)]))\n if len(cur_conds) == 0:\n cur_conds = properties['GM'].coords\n # sum of independently specified components\n indep_sum = np.sum([float(val) for i, val in cur_conds.items() if i.startswith('X_')])\n if indep_sum > 1:\n # Sum of 
independent component mole fractions greater than one\n # Skip this condition set\n # We silently allow this to make 2-D composition mapping easier\n prop_MU_values[it.multi_index] = np.nan\n prop_NP_values[it.multi_index + np.index_exp[:len(phases)]] = np.nan\n prop_Phase_values[it.multi_index + np.index_exp[:len(phases)]] = ''\n prop_X_values[it.multi_index + np.index_exp[:len(phases)]] = np.nan\n prop_Y_values[it.multi_index] = np.nan\n prop_GM_values[it.multi_index] = np.nan\n it.iternext()\n continue\n dependent_comp = set(comps) - set([i[2:] for i in cur_conds.keys() if i.startswith('X_')]) - {'VA'}\n if len(dependent_comp) == 1:\n dependent_comp = list(dependent_comp)[0]\n else:\n raise ValueError('Number of dependent components different from one')\n # chem_pots = OrderedDict(zip(properties.coords['component'].values, properties['MU'].values[it.multi_index]))\n # Used to cache generated mole fraction functions\n mole_fractions = {}\n for cur_iter in range(MAX_SOLVE_ITERATIONS):\n # print('CUR_ITER:', cur_iter)\n phases = list(prop_Phase_values[it.multi_index])\n if '' in phases:\n old_phase_length = phases.index('')\n else:\n old_phase_length = -1\n remove_degenerate_phases(prop_Phase_values[it.multi_index], prop_X_values[it.multi_index],\n prop_Y_values[it.multi_index], prop_NP_values[it.multi_index])\n phases = list(prop_Phase_values[it.multi_index])\n if '' in phases:\n new_phase_length = phases.index('')\n else:\n new_phase_length = -1\n # Are there removed phases?\n if '' in phases:\n num_phases = phases.index('')\n else:\n num_phases = len(phases)\n if num_phases == 0:\n raise ValueError('Zero phases are left in the system')\n zero_dof = np.all(\n (prop_Y_values[it.multi_index] == 1.) | np.isnan(prop_Y_values[it.multi_index]))\n if (num_phases == 1) and zero_dof:\n # Single phase with zero internal degrees of freedom, can't do any refinement\n # TODO: In the future we may be able to refine other degrees of freedom like temperature\n # Chemical potentials have no meaning for this case\n prop_MU_values[it.multi_index] = np.nan\n break\n phases = prop_Phase_values[it.multi_index + np.index_exp[:num_phases]]\n # num_sitefrac_bals = sum([len(dbf.phases[i].sublattices) for i in phases])\n # num_mass_bals = len([i for i in cur_conds.keys() if i.startswith('X_')]) + 1\n phase_fracs = prop_NP_values[it.multi_index + np.index_exp[:len(phases)]]\n phase_dof = [len(set(phase_records[name].variables) - {v.T, v.P}) for name in phases]\n # Flatten site fractions array and remove nan padding\n site_fracs = prop_Y_values[it.multi_index].ravel()\n # That *should* give us the internal dof\n # This may break if non-padding nan's slipped in from elsewhere...\n site_fracs = site_fracs[~np.isnan(site_fracs)]\n site_fracs[site_fracs < MIN_SITE_FRACTION] = MIN_SITE_FRACTION\n if len(site_fracs) == 0:\n print(properties)\n raise ValueError('Site fractions are invalid')\n phase_fracs[phase_fracs < MIN_SITE_FRACTION] = MIN_SITE_FRACTION\n var_idx = 0\n for name in phases:\n for idx in range(len(dbf.phases[name].sublattices)):\n active_in_subl = set(dbf.phases[name].constituents[idx]).intersection(comps)\n for ais in range(len(active_in_subl)):\n site_fracs[var_idx + ais] = site_fracs[var_idx + ais] / sum(site_fracs[var_idx:var_idx + len(active_in_subl)])\n var_idx += len(active_in_subl)\n l_constraints, constraint_jac, constraint_hess = \\\n _compute_constraints(dbf, comps, phases, cur_conds, site_fracs, phase_fracs, phase_records, mole_fractions=mole_fractions)\n # Reset Lagrange multipliers if active 
set of phases change\n if cur_iter == 0 or (old_phase_length != new_phase_length) or np.any(np.isnan(l_multipliers)):\n l_multipliers = np.zeros(l_constraints.shape[0])\n qmat, rmat = np.linalg.qr(constraint_jac.T, mode='complete')\n m = rmat.shape[1]\n n = qmat.shape[0]\n # Construct orthonormal basis for the constraints\n ymat = qmat[:, :m]\n zmat = qmat[:, m:]\n # Equation 18.14a in Nocedal and Wright\n p_y = np.linalg.solve(-np.dot(constraint_jac, ymat), l_constraints)\n num_vars = len(site_fracs) + len(phases)\n l_hessian, gradient_term = _build_multiphase_system(dbf, comps, phases, cur_conds, site_fracs, phase_fracs,\n l_constraints, constraint_jac, constraint_hess,\n l_multipliers, callable_dict, phase_records)\n if np.any(np.isnan(l_hessian)):\n print('Invalid l_hessian')\n l_hessian[:,:] = np.eye(l_hessian.shape[0])\n if np.any(np.isnan(gradient_term)):\n raise ValueError('Invalid gradient_term')\n # Equation 18.18 in Nocedal and Wright\n if m != n:\n if np.any(np.isnan(zmat)):\n raise ValueError('Invalid zmat')\n try:\n p_z = np.linalg.solve(np.dot(np.dot(zmat.T, l_hessian), zmat),\n -np.dot(np.dot(np.dot(zmat.T, l_hessian), ymat), p_y) - np.dot(zmat.T, gradient_term))\n except np.linalg.LinAlgError:\n p_z = np.zeros(zmat.shape[1], dtype=np.float)\n step = np.dot(ymat, p_y) + np.dot(zmat, p_z)\n else:\n step = np.dot(ymat, p_y)\n old_energy = copy.deepcopy(prop_GM_values[it.multi_index])\n old_chem_pots = copy.deepcopy(prop_MU_values[it.multi_index])\n candidate_site_fracs = np.empty_like(site_fracs)\n candidate_phase_fracs = np.empty_like(phase_fracs)\n for sfidx in range(candidate_site_fracs.shape[0]):\n candidate_site_fracs[sfidx] = min(max(site_fracs[sfidx] + step[sfidx], MIN_SITE_FRACTION), 1)\n\n for pfidx in range(candidate_phase_fracs.shape[0]):\n candidate_phase_fracs[pfidx] = min(max(phase_fracs[pfidx] + step[candidate_site_fracs.shape[0] + pfidx], 0), 1)\n candidate_l_constraints, candidate_constraint_jac, candidate_constraint_hess = \\\n _compute_constraints(dbf, comps, phases, cur_conds,\n candidate_site_fracs, candidate_phase_fracs, phase_records, mole_fractions=mole_fractions)\n candidate_energy, candidate_gradient_term = \\\n _build_multiphase_gradient(dbf, comps, phases, cur_conds, candidate_site_fracs,\n candidate_phase_fracs, candidate_l_constraints,\n candidate_constraint_jac, l_multipliers,\n callable_dict, phase_records)\n # We updated degrees of freedom this iteration\n new_l_multipliers = np.linalg.solve(np.dot(constraint_jac, ymat).T,\n np.dot(ymat.T, gradient_term + np.dot(l_hessian, step)))\n np.clip(new_l_multipliers, -MAX_ABS_LAGRANGE_MULTIPLIER, MAX_ABS_LAGRANGE_MULTIPLIER,\n out=new_l_multipliers)\n # XXX: Should fix underlying numerical problem at edges of composition space instead of working around\n if np.any(np.isnan(new_l_multipliers)):\n print('WARNING: Unstable Lagrange multipliers: ', new_l_multipliers)\n # Equation 18.16 in Nocedal and Wright\n # This method is less accurate but more stable\n new_l_multipliers = np.dot(np.dot(np.linalg.inv(np.dot(candidate_constraint_jac,\n candidate_constraint_jac.T)),\n candidate_constraint_jac), candidate_gradient_term)\n np.clip(new_l_multipliers, -MAX_ABS_LAGRANGE_MULTIPLIER, MAX_ABS_LAGRANGE_MULTIPLIER,\n out=new_l_multipliers)\n l_multipliers = new_l_multipliers\n if np.any(np.isnan(l_multipliers)):\n print('Invalid l_multipliers after recalculation', l_multipliers)\n l_multipliers[:] = 0\n if verbose:\n print('NEW_L_MULTIPLIERS', l_multipliers)\n num_mass_bals = len([i for i in cur_conds.keys() if 
i.startswith('X_')]) + 1\n chemical_potentials = l_multipliers[sum([len(dbf.phases[i].sublattices) for i in phases]):\n sum([len(dbf.phases[i].sublattices) for i in phases]) + num_mass_bals]\n prop_MU_values[it.multi_index] = chemical_potentials\n prop_NP_values[it.multi_index + np.index_exp[:len(phases)]] = candidate_phase_fracs\n prop_X_values[it.multi_index + np.index_exp[:len(phases)]] = 0\n prop_GM_values[it.multi_index] = candidate_energy\n var_offset = 0\n for phase_idx in range(len(phases)):\n prop_Y_values[it.multi_index + np.index_exp[phase_idx, :phase_dof[phase_idx]]] = \\\n candidate_site_fracs[var_offset:var_offset + phase_dof[phase_idx]]\n for comp_idx, comp in enumerate([c for c in comps if c != 'VA']):\n prop_X_values[it.multi_index + np.index_exp[phase_idx, comp_idx]] = \\\n mole_fractions[(phases[phase_idx], comp)][0](\n [candidate_site_fracs[var_offset:var_offset + phase_dof[phase_idx]]])\n var_offset += phase_dof[phase_idx]\n\n properties.attrs['solve_iterations'] += 1\n total_comp = np.nansum(prop_NP_values[it.multi_index][..., np.newaxis] * \\\n prop_X_values[it.multi_index], axis=-2)\n driving_force = (prop_MU_values[it.multi_index] * total_comp).sum(axis=-1) - \\\n prop_GM_values[it.multi_index]\n driving_force = np.squeeze(driving_force)\n if verbose:\n print('Chem pot progress', prop_MU_values[it.multi_index] - old_chem_pots)\n print('Energy progress', prop_GM_values[it.multi_index] - old_energy)\n print('Driving force', driving_force)\n no_progress = np.abs(prop_MU_values[it.multi_index] - old_chem_pots).max() < 0.01\n no_progress &= np.abs(prop_GM_values[it.multi_index] - old_energy) < MIN_SOLVE_ENERGY_PROGRESS\n if no_progress and np.abs(driving_force) > MAX_SOLVE_DRIVING_FORCE:\n print('Driving force failed to converge: {}'.format(cur_conds))\n prop_MU_values[it.multi_index] = np.nan\n prop_NP_values[it.multi_index] = np.nan\n prop_X_values[it.multi_index] = np.nan\n prop_Y_values[it.multi_index] = np.nan\n prop_GM_values[it.multi_index] = np.nan\n prop_Phase_values[it.multi_index] = ''\n break\n elif no_progress:\n if verbose:\n print('No progress')\n num_mass_bals = len([i for i in cur_conds.keys() if i.startswith('X_')]) + 1\n chemical_potentials = l_multipliers[sum([len(dbf.phases[i].sublattices) for i in phases]):\n sum([len(dbf.phases[i].sublattices) for i in phases]) + num_mass_bals]\n prop_MU_values[it.multi_index] = chemical_potentials\n break\n elif (not no_progress) and cur_iter == MAX_SOLVE_ITERATIONS-1:\n print('Failed to converge: {}'.format(cur_conds))\n prop_MU_values[it.multi_index] = np.nan\n prop_NP_values[it.multi_index] = np.nan\n prop_X_values[it.multi_index] = np.nan\n prop_Y_values[it.multi_index] = np.nan\n prop_GM_values[it.multi_index] = np.nan\n prop_Phase_values[it.multi_index] = ''\n it.iternext()\n return properties\n```\n\n\n\n\n\n\n\n \n \n Cython: _cython_magic_179ec25e50428c13c53b0ba863f4b74e.pyx\n \n \n \n\n

Generated by Cython 0.25.1

\n

\n Yellow lines hint at Python interaction.
\n Click on a line that starts with a \"+\" to see the C code that Cython generated for it.\n

\n
+001: import pycalphad.variables as v
\n
  __Pyx_TraceLine(1,0,__PYX_ERR(0, 1, __pyx_L1_error))\n  __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_INCREF(__pyx_n_s__33);\n  __Pyx_GIVEREF(__pyx_n_s__33);\n  PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s__33);\n  __pyx_t_2 = __Pyx_Import(__pyx_n_s_pycalphad_variables, __pyx_t_1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_v, __pyx_t_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n/* \u2026 */\n  __Pyx_TraceLine(1,0,__PYX_ERR(0, 1, __pyx_L1_error))\n  __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n
+002: from pycalphad.core.utils import unpack_kwarg
\n
  __Pyx_TraceLine(2,0,__PYX_ERR(0, 2, __pyx_L1_error))\n  __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_INCREF(__pyx_n_s_unpack_kwarg);\n  __Pyx_GIVEREF(__pyx_n_s_unpack_kwarg);\n  PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_unpack_kwarg);\n  __pyx_t_1 = __Pyx_Import(__pyx_n_s_pycalphad_core_utils, __pyx_t_2, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_1, __pyx_n_s_unpack_kwarg); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_unpack_kwarg, __pyx_t_2) < 0) __PYX_ERR(0, 2, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n
+003: from pycalphad.core.utils import unpack_condition, unpack_phases
\n
  __Pyx_TraceLine(3,0,__PYX_ERR(0, 3, __pyx_L1_error))\n  __pyx_t_1 = PyList_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 3, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_INCREF(__pyx_n_s_unpack_condition);\n  __Pyx_GIVEREF(__pyx_n_s_unpack_condition);\n  PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_unpack_condition);\n  __Pyx_INCREF(__pyx_n_s_unpack_phases);\n  __Pyx_GIVEREF(__pyx_n_s_unpack_phases);\n  PyList_SET_ITEM(__pyx_t_1, 1, __pyx_n_s_unpack_phases);\n  __pyx_t_2 = __Pyx_Import(__pyx_n_s_pycalphad_core_utils, __pyx_t_1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 3, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_unpack_condition); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 3, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_unpack_condition, __pyx_t_1) < 0) __PYX_ERR(0, 3, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_unpack_phases); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 3, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_unpack_phases, __pyx_t_1) < 0) __PYX_ERR(0, 3, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n
+004: from pycalphad import calculate, Model
\n
  __Pyx_TraceLine(4,0,__PYX_ERR(0, 4, __pyx_L1_error))\n  __pyx_t_2 = PyList_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_INCREF(__pyx_n_s_calculate);\n  __Pyx_GIVEREF(__pyx_n_s_calculate);\n  PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_calculate);\n  __Pyx_INCREF(__pyx_n_s_Model);\n  __Pyx_GIVEREF(__pyx_n_s_Model);\n  PyList_SET_ITEM(__pyx_t_2, 1, __pyx_n_s_Model);\n  __pyx_t_1 = __Pyx_Import(__pyx_n_s_pycalphad, __pyx_t_2, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_1, __pyx_n_s_calculate); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_calculate, __pyx_t_2) < 0) __PYX_ERR(0, 4, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_1, __pyx_n_s_Model); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_Model, __pyx_t_2) < 0) __PYX_ERR(0, 4, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n
+005: from pycalphad.constraints import mole_fraction
\n
  __Pyx_TraceLine(5,0,__PYX_ERR(0, 5, __pyx_L1_error))\n  __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 5, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_INCREF(__pyx_n_s_mole_fraction);\n  __Pyx_GIVEREF(__pyx_n_s_mole_fraction);\n  PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_mole_fraction);\n  __pyx_t_2 = __Pyx_Import(__pyx_n_s_pycalphad_constraints, __pyx_t_1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 5, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_mole_fraction); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 5, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_mole_fraction, __pyx_t_1) < 0) __PYX_ERR(0, 5, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n
+006: from pycalphad.core.lower_convex_hull import lower_convex_hull
\n
  __Pyx_TraceLine(6,0,__PYX_ERR(0, 6, __pyx_L1_error))\n  __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 6, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_INCREF(__pyx_n_s_lower_convex_hull);\n  __Pyx_GIVEREF(__pyx_n_s_lower_convex_hull);\n  PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_lower_convex_hull);\n  __pyx_t_1 = __Pyx_Import(__pyx_n_s_pycalphad_core_lower_convex_hull, __pyx_t_2, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 6, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_1, __pyx_n_s_lower_convex_hull); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 6, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_lower_convex_hull, __pyx_t_2) < 0) __PYX_ERR(0, 6, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n
```cython
from pycalphad.core.sympydiff_utils import build_functions as compiled_build_functions
from pycalphad.core.constants import MIN_SITE_FRACTION
from pycalphad.core.eqsolver import _solve_eq_at_conditions
from sympy import Add, Symbol
import dask
from dask import delayed
import dask.multiprocessing, dask.async
from xarray import Dataset
import numpy as np
cimport numpy as np
from collections import namedtuple, OrderedDict
from datetime import datetime
from pycalphad.core.eqsolver import *
from pycalphad.core.eqsolver import _build_multiphase_gradient, \
    _build_multiphase_system, _compute_constraints, _compute_phase_dof, remove_degenerate_phases
cimport cython

def _solve_eq_at_conditions(dbf, comps, properties, phase_records, callable_dict, conds_keys, verbose):
    """
    Compute equilibrium for the given conditions.
    This private function is meant to be called from a worker subprocess.
    For that case, usually only a small slice of the master 'properties' is provided.
    Since that slice will be copied, we also return the modified 'properties'.

    Parameters
    ----------
    dbf : Database
        Thermodynamic database containing the relevant parameters.
    comps : list
        Names of components to consider in the calculation.
    properties : Dataset
        Will be modified! Thermodynamic properties and conditions.
    phase_records : dict of PhaseRecord
        Details on phase callables.
    callable_dict : dict of callable
        Objective functions for each phase.
    conds_keys : list of str
        List of conditions axes in dimension order.
    verbose : bool
        Print details.

    Returns
    -------
    properties : Dataset
        Modified with equilibrium values.
    """
    cdef:
        double indep_sum
        int num_phases, num_vars, cur_iter, old_phase_length, new_phase_length, var_idx, sfidx, pfidx, m, n
        np.ndarray[ndim=1, dtype=np.float64_t] gradient_term, p_y, l_constraints, step
        np.ndarray[ndim=1, dtype=np.float64_t] site_fracs, candidate_site_fracs, l_multipliers, new_l_multipliers, candidate_phase_fracs, phase_fracs
        np.ndarray[ndim=2, dtype=np.float64_t] l_hessian, ymat, zmat, qmat, rmat, constraint_jac
    # Factored out via profiling
    prop_MU_values = properties['MU'].values
    prop_NP_values = properties['NP'].values
    prop_Phase_values = properties['Phase'].values
    prop_X_values = properties['X'].values
    prop_Y_values = properties['Y'].values
    prop_GM_values = properties['GM'].values

    it = np.nditer(prop_GM_values, flags=['multi_index'])

    #if verbose:
    #    print('INITIAL CONFIGURATION')
    #    print(properties.MU)
    #    print(properties.Phase)
    #    print(properties.NP)
    #    print(properties.X)
    #    print(properties.Y)
    #    print('---------------------')
    while not it.finished:
        # A lot of this code relies on cur_conds being ordered!
        cur_conds = OrderedDict(zip(conds_keys,
                                    [np.asarray(properties['GM'].coords[b][a], dtype=np.float)
                                     for a, b in zip(it.multi_index, conds_keys)]))
        if len(cur_conds) == 0:
            cur_conds = properties['GM'].coords
        # sum of independently specified components
        indep_sum = np.sum([float(val) for i, val in cur_conds.items() if i.startswith('X_')])
        if indep_sum > 1:
            # Sum of independent component mole fractions greater than one
            # Skip this condition set
            # We silently allow this to make 2-D composition mapping easier
            prop_MU_values[it.multi_index] = np.nan
            prop_NP_values[it.multi_index + np.index_exp[:len(phases)]] = np.nan
```
      __Pyx_TraceLine(91,0,__PYX_ERR(0, 91, __pyx_L1_error))\n      __pyx_t_14 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 91, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_14);\n      __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_14, __pyx_n_s_nan); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 91, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __pyx_t_14 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 91, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_14);\n      __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 91, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_index_exp); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 91, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      if (unlikely(!__pyx_v_phases)) { __Pyx_RaiseUnboundLocalError(\"phases\"); __PYX_ERR(0, 91, __pyx_L1_error) }\n      __pyx_t_15 = PyObject_Length(__pyx_v_phases); if (unlikely(__pyx_t_15 == -1)) __PYX_ERR(0, 91, __pyx_L1_error)\n      __pyx_t_7 = __Pyx_PyObject_GetSlice(__pyx_t_4, 0, __pyx_t_15, NULL, NULL, NULL, 0, 1, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 91, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_4 = PyNumber_Add(__pyx_t_14, __pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 91, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      if (unlikely(PyObject_SetItem(__pyx_v_prop_NP_values, __pyx_t_4, __pyx_t_3) < 0)) __PYX_ERR(0, 91, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+092:             prop_Phase_values[it.multi_index + np.index_exp[:len(phases)]] = ''
\n
      __Pyx_TraceLine(92,0,__PYX_ERR(0, 92, __pyx_L1_error))\n      __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 92, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 92, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_index_exp); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 92, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (unlikely(!__pyx_v_phases)) { __Pyx_RaiseUnboundLocalError(\"phases\"); __PYX_ERR(0, 92, __pyx_L1_error) }\n      __pyx_t_15 = PyObject_Length(__pyx_v_phases); if (unlikely(__pyx_t_15 == -1)) __PYX_ERR(0, 92, __pyx_L1_error)\n      __pyx_t_4 = __Pyx_PyObject_GetSlice(__pyx_t_7, 0, __pyx_t_15, NULL, NULL, NULL, 0, 1, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 92, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __pyx_t_7 = PyNumber_Add(__pyx_t_3, __pyx_t_4); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 92, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (unlikely(PyObject_SetItem(__pyx_v_prop_Phase_values, __pyx_t_7, __pyx_kp_u__3) < 0)) __PYX_ERR(0, 92, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n
+093:             prop_X_values[it.multi_index + np.index_exp[:len(phases)]] = np.nan
\n
      __Pyx_TraceLine(93,0,__PYX_ERR(0, 93, __pyx_L1_error))\n      __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 93, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_nan); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 93, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 93, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 93, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_14 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_index_exp); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 93, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_14);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (unlikely(!__pyx_v_phases)) { __Pyx_RaiseUnboundLocalError(\"phases\"); __PYX_ERR(0, 93, __pyx_L1_error) }\n      __pyx_t_15 = PyObject_Length(__pyx_v_phases); if (unlikely(__pyx_t_15 == -1)) __PYX_ERR(0, 93, __pyx_L1_error)\n      __pyx_t_3 = __Pyx_PyObject_GetSlice(__pyx_t_14, 0, __pyx_t_15, NULL, NULL, NULL, 0, 1, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 93, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __pyx_t_14 = PyNumber_Add(__pyx_t_7, __pyx_t_3); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 93, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_14);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (unlikely(PyObject_SetItem(__pyx_v_prop_X_values, __pyx_t_14, __pyx_t_4) < 0)) __PYX_ERR(0, 93, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n
+094:             prop_Y_values[it.multi_index] = np.nan
\n
      __Pyx_TraceLine(94,0,__PYX_ERR(0, 94, __pyx_L1_error))\n      __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 94, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_14 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_nan); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 94, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_14);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 94, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      if (unlikely(PyObject_SetItem(__pyx_v_prop_Y_values, __pyx_t_4, __pyx_t_14) < 0)) __PYX_ERR(0, 94, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n
+095:             prop_GM_values[it.multi_index] = np.nan
\n
      __Pyx_TraceLine(95,0,__PYX_ERR(0, 95, __pyx_L1_error))\n      __pyx_t_14 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 95, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_14);\n      __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_14, __pyx_n_s_nan); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 95, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __pyx_t_14 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 95, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_14);\n      if (unlikely(PyObject_SetItem(__pyx_v_prop_GM_values, __pyx_t_14, __pyx_t_4) < 0)) __PYX_ERR(0, 95, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n
+096:             it.iternext()
\n
      __Pyx_TraceLine(96,0,__PYX_ERR(0, 96, __pyx_L1_error))\n      __pyx_t_14 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_iternext); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 96, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_14);\n      __pyx_t_3 = NULL;\n      if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_14))) {\n        __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_14);\n        if (likely(__pyx_t_3)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_14);\n          __Pyx_INCREF(__pyx_t_3);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_14, function);\n        }\n      }\n      if (__pyx_t_3) {\n        __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_t_14, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 96, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      } else {\n        __pyx_t_4 = __Pyx_PyObject_CallNoArg(__pyx_t_14); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 96, __pyx_L1_error)\n      }\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n
+097:             continue
\n
      __Pyx_TraceLine(97,0,__PYX_ERR(0, 97, __pyx_L1_error))\n      goto __pyx_L3_continue;\n
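The fragment above fills the current grid point of several NaN-padded output arrays by indexing them with `it.multi_index + np.index_exp[:n]`. The sketch below is a minimal, stand-alone illustration of that NumPy idiom only: `np.nditer` walks the leading condition-grid axes, and concatenating a trailing slice from `np.index_exp` touches just the first `n` entries of the padded last axis. The array names and shapes here are invented for illustration and are not from the original module.

```python
import numpy as np

grid = np.zeros((2, 3))                  # one scalar property per condition-grid point
padded = np.full((2, 3, 4), np.nan)      # per-phase property, NaN-padded along the last axis

it = np.nditer(grid, flags=['multi_index'])
while not it.finished:
    n_active = 2                         # suppose two phases are active at this grid point
    # (i, j) + (slice(None, 2),)  ->  padded[i, j, :2]
    padded[it.multi_index + np.index_exp[:n_active]] = 1.0
    it.iternext()

print(padded[0, 0])                      # [ 1.  1. nan nan]
```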
```python
        dependent_comp = set(comps) - set([i[2:] for i in cur_conds.keys() if i.startswith('X_')]) - {'VA'}
        if len(dependent_comp) == 1:
            dependent_comp = list(dependent_comp)[0]
        else:
            raise ValueError('Number of dependent components different from one')
        # chem_pots = OrderedDict(zip(properties.coords['component'].values, properties['MU'].values[it.multi_index]))
        # Used to cache generated mole fraction functions
        mole_fractions = {}
```
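The condition keys follow an `X_<component>` naming convention, so both the independent-composition sum checked earlier and the dependent component above come from simple key filtering on `cur_conds`. A small hypothetical example, with invented components and values:

```python
# Hypothetical conditions and component list, following the 'X_<component>' key convention.
cur_conds = {'T': 600.0, 'P': 101325.0, 'X_AL': 0.3, 'X_CR': 0.2}
comps = ['AL', 'CR', 'NI', 'VA']

indep_sum = sum(float(val) for key, val in cur_conds.items() if key.startswith('X_'))
# indep_sum == 0.5, so this grid point is kept; a sum above 1 would be skipped (filled with NaN)

dependent_comp = set(comps) - {key[2:] for key in cur_conds if key.startswith('X_')} - {'VA'}
# {'NI'}: exactly one dependent component, as required
```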
```python
        for cur_iter in range(MAX_SOLVE_ITERATIONS):
            # print('CUR_ITER:', cur_iter)
            phases = list(prop_Phase_values[it.multi_index])
            if '' in phases:
                old_phase_length = phases.index('')
            else:
                old_phase_length = -1
            remove_degenerate_phases(prop_Phase_values[it.multi_index], prop_X_values[it.multi_index],
                                     prop_Y_values[it.multi_index], prop_NP_values[it.multi_index])
            phases = list(prop_Phase_values[it.multi_index])
            if '' in phases:
                new_phase_length = phases.index('')
            else:
                new_phase_length = -1
            # Are there removed phases?
            if '' in phases:
                num_phases = phases.index('')
            else:
                num_phases = len(phases)
            if num_phases == 0:
                raise ValueError('Zero phases are left in the system')
            zero_dof = np.all(
                (prop_Y_values[it.multi_index] == 1.) | np.isnan(prop_Y_values[it.multi_index]))
            if (num_phases == 1) and zero_dof:
                # Single phase with zero internal degrees of freedom, can't do any refinement
                # TODO: In the future we may be able to refine other degrees of freedom like temperature
                # Chemical potentials have no meaning for this case
                prop_MU_values[it.multi_index] = np.nan
                break
            phases = prop_Phase_values[it.multi_index + np.index_exp[:num_phases]]
            # num_sitefrac_bals = sum([len(dbf.phases[i].sublattices) for i in phases])
            # num_mass_bals = len([i for i in cur_conds.keys() if i.startswith('X_')]) + 1
            phase_fracs = prop_NP_values[it.multi_index + np.index_exp[:len(phases)]]
            phase_dof = [len(set(phase_records[name].variables) - {v.T, v.P}) for name in phases]
            # Flatten site fractions array and remove nan padding
            site_fracs = prop_Y_values[it.multi_index].ravel()
            # That *should* give us the internal dof
            # This may break if non-padding nan's slipped in from elsewhere...
            site_fracs = site_fracs[~np.isnan(site_fracs)]
            site_fracs[site_fracs < MIN_SITE_FRACTION] = MIN_SITE_FRACTION
            if len(site_fracs) == 0:
                print(properties)
                raise ValueError('Site fractions are invalid')
```
+149:             phase_fracs[phase_fracs < MIN_SITE_FRACTION] = MIN_SITE_FRACTION
\n
      __Pyx_TraceLine(149,0,__PYX_ERR(0, 149, __pyx_L1_error))\n      __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_MIN_SITE_FRACTION); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 149, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_MIN_SITE_FRACTION); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 149, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_14 = PyObject_RichCompare(((PyObject *)__pyx_v_phase_fracs), __pyx_t_3, Py_LT); __Pyx_XGOTREF(__pyx_t_14); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 149, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (unlikely(PyObject_SetItem(((PyObject *)__pyx_v_phase_fracs), __pyx_t_14, __pyx_t_7) < 0)) __PYX_ERR(0, 149, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n
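These guards sanitize the current iterate before any further algebra: NaN entries are removed, and any site or phase fraction below the floor is pushed back up to it, which protects the logarithms and divisions used later. A minimal standalone sketch of the same two operations; the floor value here is a made-up stand-in for the module-level `MIN_SITE_FRACTION` constant:

```python
import numpy as np

MIN_SITE_FRACTION = 1e-12  # assumed magnitude; the real value is a module-level constant

site_fracs = np.array([0.7, np.nan, 0.0, 0.3])
site_fracs = site_fracs[~np.isnan(site_fracs)]                   # drop NaN entries -> [0.7, 0.0, 0.3]
site_fracs[site_fracs < MIN_SITE_FRACTION] = MIN_SITE_FRACTION   # floor tiny/zero fractions
```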
            var_idx = 0
            for name in phases:
                for idx in range(len(dbf.phases[name].sublattices)):
                    active_in_subl = set(dbf.phases[name].constituents[idx]).intersection(comps)
                    for ais in range(len(active_in_subl)):
                        site_fracs[var_idx + ais] = site_fracs[var_idx + ais] / sum(site_fracs[var_idx:var_idx + len(active_in_subl)])
                    var_idx += len(active_in_subl)
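After flooring, the fractions within each sublattice of each active phase are rescaled so that every block sums to one again; `var_idx` walks through the flat `site_fracs` vector one sublattice block at a time, and the block length is `len(active_in_subl)`. A self-contained sketch of that per-block renormalization, using a made-up two-block layout:

```python
import numpy as np

# Hypothetical flat site-fraction vector: the first sublattice block has 3 active
# constituents, the second has 2 (block sizes stand in for len(active_in_subl)).
site_fracs = np.array([0.5, 0.3, 0.1, 0.8, 0.1])
block_sizes = [3, 2]

var_idx = 0
for size in block_sizes:
    block = site_fracs[var_idx:var_idx + size]
    site_fracs[var_idx:var_idx + size] = block / block.sum()  # each block sums to 1 afterwards
    var_idx += size
```

Note that the listing above recomputes the slice sum inside the innermost loop, after earlier entries of the block have already been overwritten, so its result is not bit-for-bit identical to this single vectorized rescaling; the sketch only illustrates the intended invariant, namely that each sublattice block sums to one.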
            l_constraints, constraint_jac, constraint_hess = \
                _compute_constraints(dbf, comps, phases, cur_conds, site_fracs, phase_fracs, phase_records, mole_fractions=mole_fractions)
            # Reset Lagrange multipliers if the active set of phases changes
            if cur_iter == 0 or (old_phase_length != new_phase_length) or np.any(np.isnan(l_multipliers)):
                l_multipliers = np.zeros(l_constraints.shape[0])
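The multipliers are re-initialized whenever they can no longer be trusted or no longer match the constraint vector: on the first iteration, when the number of active phases (and hence the number of constraints) changes, or when a previous step left NaNs behind. A small, purely illustrative dimension check with made-up sizes:

```python
import numpy as np

l_multipliers = np.ones(5)       # stale multipliers from the previous iteration (5 constraints)
l_constraints = np.zeros(7)      # active phase set changed: now 7 constraints

if l_multipliers.shape[0] != l_constraints.shape[0] or np.any(np.isnan(l_multipliers)):
    l_multipliers = np.zeros(l_constraints.shape[0])  # restart from zero at the new size
```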
            qmat, rmat = np.linalg.qr(constraint_jac.T, mode='complete')
            m = rmat.shape[1]
            n = qmat.shape[0]
            # Construct orthonormal basis for the constraints
            ymat = qmat[:, :m]
            zmat = qmat[:, m:]
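The complete QR factorization of the transposed constraint Jacobian supplies an orthonormal basis of the full variable space: the first `m` columns (`ymat`) span the range of `constraint_jac.T`, while the remaining `n - m` columns (`zmat`) span the null space of `constraint_jac`, so `constraint_jac @ zmat` is numerically zero. A small check of those properties on a random Jacobian (assumed to have full row rank), using the same names as the listing:

```python
import numpy as np

rng = np.random.default_rng(0)
constraint_jac = rng.standard_normal((3, 6))   # 3 constraints, 6 variables, full row rank

qmat, rmat = np.linalg.qr(constraint_jac.T, mode='complete')
m = rmat.shape[1]                              # number of constraints
ymat, zmat = qmat[:, :m], qmat[:, m:]          # range-space and null-space bases

print(np.allclose(constraint_jac @ zmat, 0.0))            # True: Z spans the null space
print(np.allclose(qmat.T @ qmat, np.eye(qmat.shape[0])))  # True: the basis is orthonormal
```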
            # Equation 18.14a in Nocedal and Wright
            p_y = np.linalg.solve(-np.dot(constraint_jac, ymat), l_constraints)
+170:             num_vars = len(site_fracs) + len(phases)
\n
      __Pyx_TraceLine(170,0,__PYX_ERR(0, 170, __pyx_L1_error))\n      __pyx_t_8 = PyObject_Length(((PyObject *)__pyx_v_site_fracs)); if (unlikely(__pyx_t_8 == -1)) __PYX_ERR(0, 170, __pyx_L1_error)\n      __pyx_t_15 = PyObject_Length(__pyx_v_phases); if (unlikely(__pyx_t_15 == -1)) __PYX_ERR(0, 170, __pyx_L1_error)\n      __pyx_v_num_vars = (__pyx_t_8 + __pyx_t_15);\n
+171:             l_hessian, gradient_term = _build_multiphase_system(dbf, comps, phases, cur_conds, site_fracs, phase_fracs,
\n
      __Pyx_TraceLine(171,0,__PYX_ERR(0, 171, __pyx_L1_error))\n      __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_build_multiphase_system); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 171, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n/* \u2026 */\n      __Pyx_TraceLine(171,0,__PYX_ERR(0, 171, __pyx_L1_error))\n      if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 171, __pyx_L1_error)\n      if (!(likely(((__pyx_t_7) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_7, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 171, __pyx_L1_error)\n      __pyx_t_29 = ((PyArrayObject *)__pyx_t_4);\n      {\n        __Pyx_BufFmt_StackElem __pyx_stack[1];\n        __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_l_hessian.rcbuffer->pybuffer);\n        __pyx_t_17 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_l_hessian.rcbuffer->pybuffer, (PyObject*)__pyx_t_29, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float64_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack);\n        if (unlikely(__pyx_t_17 < 0)) {\n          PyErr_Fetch(&__pyx_t_23, &__pyx_t_22, &__pyx_t_21);\n          if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_l_hessian.rcbuffer->pybuffer, (PyObject*)__pyx_v_l_hessian, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float64_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) {\n            Py_XDECREF(__pyx_t_23); Py_XDECREF(__pyx_t_22); Py_XDECREF(__pyx_t_21);\n            __Pyx_RaiseBufferFallbackError();\n          } else {\n            PyErr_Restore(__pyx_t_23, __pyx_t_22, __pyx_t_21);\n          }\n        }\n        __pyx_pybuffernd_l_hessian.diminfo[0].strides = __pyx_pybuffernd_l_hessian.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_l_hessian.diminfo[0].shape = __pyx_pybuffernd_l_hessian.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_l_hessian.diminfo[1].strides = __pyx_pybuffernd_l_hessian.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_l_hessian.diminfo[1].shape = __pyx_pybuffernd_l_hessian.rcbuffer->pybuffer.shape[1];\n        if (unlikely(__pyx_t_17 < 0)) __PYX_ERR(0, 171, __pyx_L1_error)\n      }\n      __pyx_t_29 = 0;\n      __Pyx_XDECREF_SET(__pyx_v_l_hessian, ((PyArrayObject *)__pyx_t_4));\n      __pyx_t_4 = 0;\n      __pyx_t_28 = ((PyArrayObject *)__pyx_t_7);\n      {\n        __Pyx_BufFmt_StackElem __pyx_stack[1];\n        __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_gradient_term.rcbuffer->pybuffer);\n        __pyx_t_17 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_gradient_term.rcbuffer->pybuffer, (PyObject*)__pyx_t_28, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float64_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);\n        if (unlikely(__pyx_t_17 < 0)) {\n          PyErr_Fetch(&__pyx_t_21, &__pyx_t_22, &__pyx_t_23);\n          if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_gradient_term.rcbuffer->pybuffer, (PyObject*)__pyx_v_gradient_term, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float64_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {\n            Py_XDECREF(__pyx_t_21); Py_XDECREF(__pyx_t_22); Py_XDECREF(__pyx_t_23);\n            __Pyx_RaiseBufferFallbackError();\n          } else {\n            PyErr_Restore(__pyx_t_21, __pyx_t_22, __pyx_t_23);\n          }\n        }\n        __pyx_pybuffernd_gradient_term.diminfo[0].strides = __pyx_pybuffernd_gradient_term.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_gradient_term.diminfo[0].shape = __pyx_pybuffernd_gradient_term.rcbuffer->pybuffer.shape[0];\n        if (unlikely(__pyx_t_17 < 0)) __PYX_ERR(0, 171, __pyx_L1_error)\n      }\n      __pyx_t_28 = 0;\n      
__Pyx_XDECREF_SET(__pyx_v_gradient_term, ((PyArrayObject *)__pyx_t_7));\n      __pyx_t_7 = 0;\n
 172:                                                                 l_constraints, constraint_jac, constraint_hess,
\n
+173:                                                                 l_multipliers, callable_dict, phase_records)
\n
      __Pyx_TraceLine(173,0,__PYX_ERR(0, 173, __pyx_L1_error))\n      __pyx_t_3 = NULL;\n      __pyx_t_17 = 0;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) {\n        __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_4);\n        if (likely(__pyx_t_3)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);\n          __Pyx_INCREF(__pyx_t_3);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_4, function);\n          __pyx_t_17 = 1;\n        }\n      }\n      #if CYTHON_FAST_PYCALL\n      if (PyFunction_Check(__pyx_t_4)) {\n        PyObject *__pyx_temp[13] = {__pyx_t_3, __pyx_v_dbf, __pyx_v_comps, __pyx_v_phases, __pyx_v_cur_conds, ((PyObject *)__pyx_v_site_fracs), ((PyObject *)__pyx_v_phase_fracs), ((PyObject *)__pyx_v_l_constraints), ((PyObject *)__pyx_v_constraint_jac), __pyx_v_constraint_hess, ((PyObject *)__pyx_v_l_multipliers), __pyx_v_callable_dict, __pyx_v_phase_records};\n        __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_17, 12+__pyx_t_17); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 171, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_GOTREF(__pyx_t_11);\n      } else\n      #endif\n      #if CYTHON_FAST_PYCCALL\n      if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {\n        PyObject *__pyx_temp[13] = {__pyx_t_3, __pyx_v_dbf, __pyx_v_comps, __pyx_v_phases, __pyx_v_cur_conds, ((PyObject *)__pyx_v_site_fracs), ((PyObject *)__pyx_v_phase_fracs), ((PyObject *)__pyx_v_l_constraints), ((PyObject *)__pyx_v_constraint_jac), __pyx_v_constraint_hess, ((PyObject *)__pyx_v_l_multipliers), __pyx_v_callable_dict, __pyx_v_phase_records};\n        __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_17, 12+__pyx_t_17); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 171, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_GOTREF(__pyx_t_11);\n      } else\n      #endif\n      {\n        __pyx_t_7 = PyTuple_New(12+__pyx_t_17); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 171, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_7);\n        if (__pyx_t_3) {\n          __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_3); __pyx_t_3 = NULL;\n        }\n        __Pyx_INCREF(__pyx_v_dbf);\n        __Pyx_GIVEREF(__pyx_v_dbf);\n        PyTuple_SET_ITEM(__pyx_t_7, 0+__pyx_t_17, __pyx_v_dbf);\n        __Pyx_INCREF(__pyx_v_comps);\n        __Pyx_GIVEREF(__pyx_v_comps);\n        PyTuple_SET_ITEM(__pyx_t_7, 1+__pyx_t_17, __pyx_v_comps);\n        __Pyx_INCREF(__pyx_v_phases);\n        __Pyx_GIVEREF(__pyx_v_phases);\n        PyTuple_SET_ITEM(__pyx_t_7, 2+__pyx_t_17, __pyx_v_phases);\n        __Pyx_INCREF(__pyx_v_cur_conds);\n        __Pyx_GIVEREF(__pyx_v_cur_conds);\n        PyTuple_SET_ITEM(__pyx_t_7, 3+__pyx_t_17, __pyx_v_cur_conds);\n        __Pyx_INCREF(((PyObject *)__pyx_v_site_fracs));\n        __Pyx_GIVEREF(((PyObject *)__pyx_v_site_fracs));\n        PyTuple_SET_ITEM(__pyx_t_7, 4+__pyx_t_17, ((PyObject *)__pyx_v_site_fracs));\n        __Pyx_INCREF(((PyObject *)__pyx_v_phase_fracs));\n        __Pyx_GIVEREF(((PyObject *)__pyx_v_phase_fracs));\n        PyTuple_SET_ITEM(__pyx_t_7, 5+__pyx_t_17, ((PyObject *)__pyx_v_phase_fracs));\n        __Pyx_INCREF(((PyObject *)__pyx_v_l_constraints));\n        __Pyx_GIVEREF(((PyObject *)__pyx_v_l_constraints));\n        PyTuple_SET_ITEM(__pyx_t_7, 6+__pyx_t_17, ((PyObject *)__pyx_v_l_constraints));\n        __Pyx_INCREF(((PyObject *)__pyx_v_constraint_jac));\n        __Pyx_GIVEREF(((PyObject *)__pyx_v_constraint_jac));\n      
  PyTuple_SET_ITEM(__pyx_t_7, 7+__pyx_t_17, ((PyObject *)__pyx_v_constraint_jac));\n        __Pyx_INCREF(__pyx_v_constraint_hess);\n        __Pyx_GIVEREF(__pyx_v_constraint_hess);\n        PyTuple_SET_ITEM(__pyx_t_7, 8+__pyx_t_17, __pyx_v_constraint_hess);\n        __Pyx_INCREF(((PyObject *)__pyx_v_l_multipliers));\n        __Pyx_GIVEREF(((PyObject *)__pyx_v_l_multipliers));\n        PyTuple_SET_ITEM(__pyx_t_7, 9+__pyx_t_17, ((PyObject *)__pyx_v_l_multipliers));\n        __Pyx_INCREF(__pyx_v_callable_dict);\n        __Pyx_GIVEREF(__pyx_v_callable_dict);\n        PyTuple_SET_ITEM(__pyx_t_7, 10+__pyx_t_17, __pyx_v_callable_dict);\n        __Pyx_INCREF(__pyx_v_phase_records);\n        __Pyx_GIVEREF(__pyx_v_phase_records);\n        PyTuple_SET_ITEM(__pyx_t_7, 11+__pyx_t_17, __pyx_v_phase_records);\n        __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_7, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 171, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      }\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if ((likely(PyTuple_CheckExact(__pyx_t_11))) || (PyList_CheckExact(__pyx_t_11))) {\n        PyObject* sequence = __pyx_t_11;\n        #if !CYTHON_COMPILING_IN_PYPY\n        Py_ssize_t size = Py_SIZE(sequence);\n        #else\n        Py_ssize_t size = PySequence_Size(sequence);\n        #endif\n        if (unlikely(size != 2)) {\n          if (size > 2) __Pyx_RaiseTooManyValuesError(2);\n          else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);\n          __PYX_ERR(0, 171, __pyx_L1_error)\n        }\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        if (likely(PyTuple_CheckExact(sequence))) {\n          __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); \n          __pyx_t_7 = PyTuple_GET_ITEM(sequence, 1); \n        } else {\n          __pyx_t_4 = PyList_GET_ITEM(sequence, 0); \n          __pyx_t_7 = PyList_GET_ITEM(sequence, 1); \n        }\n        __Pyx_INCREF(__pyx_t_4);\n        __Pyx_INCREF(__pyx_t_7);\n        #else\n        __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 171, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __pyx_t_7 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 171, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_7);\n        #endif\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      } else {\n        Py_ssize_t index = -1;\n        __pyx_t_3 = PyObject_GetIter(__pyx_t_11); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 171, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __pyx_t_13 = Py_TYPE(__pyx_t_3)->tp_iternext;\n        index = 0; __pyx_t_4 = __pyx_t_13(__pyx_t_3); if (unlikely(!__pyx_t_4)) goto __pyx_L60_unpacking_failed;\n        __Pyx_GOTREF(__pyx_t_4);\n        index = 1; __pyx_t_7 = __pyx_t_13(__pyx_t_3); if (unlikely(!__pyx_t_7)) goto __pyx_L60_unpacking_failed;\n        __Pyx_GOTREF(__pyx_t_7);\n        if (__Pyx_IternextUnpackEndCheck(__pyx_t_13(__pyx_t_3), 2) < 0) __PYX_ERR(0, 171, __pyx_L1_error)\n        __pyx_t_13 = NULL;\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        goto __pyx_L61_unpacking_done;\n        __pyx_L60_unpacking_failed:;\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_13 = NULL;\n        if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);\n        __PYX_ERR(0, 171, __pyx_L1_error)\n        __pyx_L61_unpacking_done:;\n      }\n
+174:             if np.any(np.isnan(l_hessian)):
\n
      __Pyx_TraceLine(174,0,__PYX_ERR(0, 174, __pyx_L1_error))\n      __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 174, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_any); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 174, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 174, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_isnan); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 174, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_2);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n        __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n        if (likely(__pyx_t_3)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n          __Pyx_INCREF(__pyx_t_3);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_2, function);\n        }\n      }\n      if (!__pyx_t_3) {\n        __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_t_2, ((PyObject *)__pyx_v_l_hessian)); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 174, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_7);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_2)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_3, ((PyObject *)__pyx_v_l_hessian)};\n          __pyx_t_7 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 174, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n          __Pyx_GOTREF(__pyx_t_7);\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_3, ((PyObject *)__pyx_v_l_hessian)};\n          __pyx_t_7 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 174, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n          __Pyx_GOTREF(__pyx_t_7);\n        } else\n        #endif\n        {\n          __pyx_t_14 = PyTuple_New(1+1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 174, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_14);\n          __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_14, 0, __pyx_t_3); __pyx_t_3 = NULL;\n          __Pyx_INCREF(((PyObject *)__pyx_v_l_hessian));\n          __Pyx_GIVEREF(((PyObject *)__pyx_v_l_hessian));\n          PyTuple_SET_ITEM(__pyx_t_14, 0+1, ((PyObject *)__pyx_v_l_hessian));\n          __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_14, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 174, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_7);\n          __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n      __pyx_t_2 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) {\n        __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_4);\n        if (likely(__pyx_t_2)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);\n          __Pyx_INCREF(__pyx_t_2);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_4, function);\n        }\n      }\n      if (!__pyx_t_2) {\n        __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 174, __pyx_L1_error)\n     
   __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n        __Pyx_GOTREF(__pyx_t_11);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_4)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_2, __pyx_t_7};\n          __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 174, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_2, __pyx_t_7};\n          __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 174, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n        } else\n        #endif\n        {\n          __pyx_t_14 = PyTuple_New(1+1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 174, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_14);\n          __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_14, 0, __pyx_t_2); __pyx_t_2 = NULL;\n          __Pyx_GIVEREF(__pyx_t_7);\n          PyTuple_SET_ITEM(__pyx_t_14, 0+1, __pyx_t_7);\n          __pyx_t_7 = 0;\n          __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_14, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 174, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely(__pyx_t_5 < 0)) __PYX_ERR(0, 174, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      if (__pyx_t_5) {\n/* \u2026 */\n      }\n
+175:                 print('Invalid l_hessian')
\n
        __Pyx_TraceLine(175,0,__PYX_ERR(0, 175, __pyx_L1_error))\n        __pyx_t_11 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 175, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n/* \u2026 */\n  __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_u_Invalid_l_hessian); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(0, 175, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__14);\n  __Pyx_GIVEREF(__pyx_tuple__14);\n
+176:                 l_hessian[:,:] = np.eye(l_hessian.shape[0])
\n
        __Pyx_TraceLine(176,0,__PYX_ERR(0, 176, __pyx_L1_error))\n        __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __pyx_t_14 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_eye); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_14);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __pyx_t_4 = __Pyx_PyInt_From_Py_intptr_t((__pyx_v_l_hessian->dimensions[0])); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __pyx_t_7 = NULL;\n        if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_14))) {\n          __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_14);\n          if (likely(__pyx_t_7)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_14);\n            __Pyx_INCREF(__pyx_t_7);\n            __Pyx_INCREF(function);\n            __Pyx_DECREF_SET(__pyx_t_14, function);\n          }\n        }\n        if (!__pyx_t_7) {\n          __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_t_14, __pyx_t_4); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 176, __pyx_L1_error)\n          __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n        } else {\n          #if CYTHON_FAST_PYCALL\n          if (PyFunction_Check(__pyx_t_14)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_4};\n            __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_14, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 176, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n            __Pyx_GOTREF(__pyx_t_11);\n            __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n          } else\n          #endif\n          #if CYTHON_FAST_PYCCALL\n          if (__Pyx_PyFastCFunction_Check(__pyx_t_14)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_4};\n            __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_14, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 176, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n            __Pyx_GOTREF(__pyx_t_11);\n            __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n          } else\n          #endif\n          {\n            __pyx_t_2 = PyTuple_New(1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 176, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_2);\n            __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_7); __pyx_t_7 = NULL;\n            __Pyx_GIVEREF(__pyx_t_4);\n            PyTuple_SET_ITEM(__pyx_t_2, 0+1, __pyx_t_4);\n            __pyx_t_4 = 0;\n            __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_14, __pyx_t_2, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 176, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_11);\n            __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n          }\n        }\n        __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n/* \u2026 */\n  __pyx_slice__15 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__15)) __PYX_ERR(0, 176, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_slice__15);\n  __Pyx_GIVEREF(__pyx_slice__15);\n  __pyx_slice__16 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__16)) __PYX_ERR(0, 176, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_slice__16);\n  __Pyx_GIVEREF(__pyx_slice__16);\n        if (unlikely(PyObject_SetItem(((PyObject *)__pyx_v_l_hessian), __pyx_tuple__17, __pyx_t_11) < 0)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n  
__pyx_tuple__17 = PyTuple_Pack(2, __pyx_slice__15, __pyx_slice__16); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(0, 176, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__17);\n  __Pyx_GIVEREF(__pyx_tuple__17);\n
+177:             if np.any(np.isnan(gradient_term)):
\n
      __Pyx_TraceLine(177,0,__PYX_ERR(0, 177, __pyx_L1_error))\n      __pyx_t_14 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_14);\n      __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_14, __pyx_n_s_any); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_2);\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_isnan); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_4 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) {\n        __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_7);\n        if (likely(__pyx_t_4)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);\n          __Pyx_INCREF(__pyx_t_4);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_7, function);\n        }\n      }\n      if (!__pyx_t_4) {\n        __pyx_t_14 = __Pyx_PyObject_CallOneArg(__pyx_t_7, ((PyObject *)__pyx_v_gradient_term)); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 177, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_14);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_7)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_4, ((PyObject *)__pyx_v_gradient_term)};\n          __pyx_t_14 = __Pyx_PyFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n          __Pyx_GOTREF(__pyx_t_14);\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_7)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_4, ((PyObject *)__pyx_v_gradient_term)};\n          __pyx_t_14 = __Pyx_PyCFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n          __Pyx_GOTREF(__pyx_t_14);\n        } else\n        #endif\n        {\n          __pyx_t_3 = PyTuple_New(1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); __pyx_t_4 = NULL;\n          __Pyx_INCREF(((PyObject *)__pyx_v_gradient_term));\n          __Pyx_GIVEREF(((PyObject *)__pyx_v_gradient_term));\n          PyTuple_SET_ITEM(__pyx_t_3, 0+1, ((PyObject *)__pyx_v_gradient_term));\n          __pyx_t_14 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_3, NULL); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_14);\n          __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __pyx_t_7 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n        __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_2);\n        if (likely(__pyx_t_7)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n          __Pyx_INCREF(__pyx_t_7);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_2, function);\n        }\n      }\n      if (!__pyx_t_7) {\n        __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_14); if (unlikely(!__pyx_t_11)) 
__PYX_ERR(0, 177, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n        __Pyx_GOTREF(__pyx_t_11);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_2)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_14};\n          __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_14};\n          __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n        } else\n        #endif\n        {\n          __pyx_t_3 = PyTuple_New(1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_7); __pyx_t_7 = NULL;\n          __Pyx_GIVEREF(__pyx_t_14);\n          PyTuple_SET_ITEM(__pyx_t_3, 0+1, __pyx_t_14);\n          __pyx_t_14 = 0;\n          __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n      __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely(__pyx_t_5 < 0)) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      if (__pyx_t_5) {\n/* \u2026 */\n      }\n
+178:                 raise ValueError('Invalid gradient_term')
\n
        __Pyx_TraceLine(178,0,__PYX_ERR(0, 178, __pyx_L1_error))\n        __pyx_t_11 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 178, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_Raise(__pyx_t_11, 0, 0, 0);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __PYX_ERR(0, 178, __pyx_L1_error)\n/* \u2026 */\n  __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_u_Invalid_gradient_term); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(0, 178, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__18);\n  __Pyx_GIVEREF(__pyx_tuple__18);\n
 179:             # Equation 18.18 in Nocedal and Wright
\n
+180:             if m != n:
\n
      __Pyx_TraceLine(180,0,__PYX_ERR(0, 180, __pyx_L1_error))\n      __pyx_t_5 = ((__pyx_v_m != __pyx_v_n) != 0);\n      if (__pyx_t_5) {\n/* \u2026 */\n        goto __pyx_L64;\n      }\n
+181:                 if np.any(np.isnan(zmat)):
\n
        __Pyx_TraceLine(181,0,__PYX_ERR(0, 181, __pyx_L1_error))\n        __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_2);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_any); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n        __pyx_t_14 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_14);\n        __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_14, __pyx_n_s_isnan); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_7);\n        __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n        __pyx_t_14 = NULL;\n        if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) {\n          __pyx_t_14 = PyMethod_GET_SELF(__pyx_t_7);\n          if (likely(__pyx_t_14)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);\n            __Pyx_INCREF(__pyx_t_14);\n            __Pyx_INCREF(function);\n            __Pyx_DECREF_SET(__pyx_t_7, function);\n          }\n        }\n        if (!__pyx_t_14) {\n          __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_t_7, ((PyObject *)__pyx_v_zmat)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 181, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_2);\n        } else {\n          #if CYTHON_FAST_PYCALL\n          if (PyFunction_Check(__pyx_t_7)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_14, ((PyObject *)__pyx_v_zmat)};\n            __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_14); __pyx_t_14 = 0;\n            __Pyx_GOTREF(__pyx_t_2);\n          } else\n          #endif\n          #if CYTHON_FAST_PYCCALL\n          if (__Pyx_PyFastCFunction_Check(__pyx_t_7)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_14, ((PyObject *)__pyx_v_zmat)};\n            __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_14); __pyx_t_14 = 0;\n            __Pyx_GOTREF(__pyx_t_2);\n          } else\n          #endif\n          {\n            __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_4);\n            __Pyx_GIVEREF(__pyx_t_14); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_14); __pyx_t_14 = NULL;\n            __Pyx_INCREF(((PyObject *)__pyx_v_zmat));\n            __Pyx_GIVEREF(((PyObject *)__pyx_v_zmat));\n            PyTuple_SET_ITEM(__pyx_t_4, 0+1, ((PyObject *)__pyx_v_zmat));\n            __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_4, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_2);\n            __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n          }\n        }\n        __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n        __pyx_t_7 = NULL;\n        if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n          __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_3);\n          if (likely(__pyx_t_7)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n            __Pyx_INCREF(__pyx_t_7);\n            __Pyx_INCREF(function);\n            __Pyx_DECREF_SET(__pyx_t_3, function);\n          }\n        }\n        if (!__pyx_t_7) {\n          __pyx_t_11 = 
__Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 181, __pyx_L1_error)\n          __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n        } else {\n          #if CYTHON_FAST_PYCALL\n          if (PyFunction_Check(__pyx_t_3)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_2};\n            __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n            __Pyx_GOTREF(__pyx_t_11);\n            __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n          } else\n          #endif\n          #if CYTHON_FAST_PYCCALL\n          if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_2};\n            __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n            __Pyx_GOTREF(__pyx_t_11);\n            __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n          } else\n          #endif\n          {\n            __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_4);\n            __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_7); __pyx_t_7 = NULL;\n            __Pyx_GIVEREF(__pyx_t_2);\n            PyTuple_SET_ITEM(__pyx_t_4, 0+1, __pyx_t_2);\n            __pyx_t_2 = 0;\n            __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_11);\n            __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n          }\n        }\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely(__pyx_t_5 < 0)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        if (__pyx_t_5) {\n/* \u2026 */\n        }\n
+182:                     raise ValueError('Invalid zmat')
\n
          __Pyx_TraceLine(182,0,__PYX_ERR(0, 182, __pyx_L1_error))\n          __pyx_t_11 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 182, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_Raise(__pyx_t_11, 0, 0, 0);\n          __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n          __PYX_ERR(0, 182, __pyx_L1_error)\n/* \u2026 */\n  __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_u_Invalid_zmat); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(0, 182, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__19);\n  __Pyx_GIVEREF(__pyx_tuple__19);\n
+183:                 try:
\n
        __Pyx_TraceLine(183,0,__PYX_ERR(0, 183, __pyx_L66_error))\n        {\n          /*try:*/ {\n/* \u2026 */\n          }\n          __Pyx_XDECREF(__pyx_t_23); __pyx_t_23 = 0;\n          __Pyx_XDECREF(__pyx_t_22); __pyx_t_22 = 0;\n          __Pyx_XDECREF(__pyx_t_21); __pyx_t_21 = 0;\n          goto __pyx_L73_try_end;\n          __pyx_L66_error:;\n          __Pyx_PyThreadState_assign\n          __Pyx_XDECREF(__pyx_t_31); __pyx_t_31 = 0;\n          __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;\n          __Pyx_XDECREF(__pyx_t_30); __pyx_t_30 = 0;\n          __Pyx_XDECREF(__pyx_t_14); __pyx_t_14 = 0;\n          __Pyx_XDECREF(__pyx_t_32); __pyx_t_32 = 0;\n          __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n          __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n          __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n          __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;\n          __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n          __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n          __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0;\n/* \u2026 */\n          __Pyx_PyThreadState_assign\n          __Pyx_XGIVEREF(__pyx_t_23);\n          __Pyx_XGIVEREF(__pyx_t_22);\n          __Pyx_XGIVEREF(__pyx_t_21);\n          __Pyx_ExceptionReset(__pyx_t_23, __pyx_t_22, __pyx_t_21);\n          goto __pyx_L1_error;\n          __pyx_L67_exception_handled:;\n          __Pyx_PyThreadState_assign\n          __Pyx_XGIVEREF(__pyx_t_23);\n          __Pyx_XGIVEREF(__pyx_t_22);\n          __Pyx_XGIVEREF(__pyx_t_21);\n          __Pyx_ExceptionReset(__pyx_t_23, __pyx_t_22, __pyx_t_21);\n          __pyx_L73_try_end:;\n        }\n
+184:                     p_z = np.linalg.solve(np.dot(np.dot(zmat.T, l_hessian), zmat),
\n
            __Pyx_TraceLine(184,0,__PYX_ERR(0, 184, __pyx_L66_error))\n            __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 184, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_3);\n            __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_linalg); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 184, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_4);\n            __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n            __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_solve); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 184, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_3);\n            __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n            __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 184, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_2);\n            __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_dot); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 184, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_7);\n            __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n            __pyx_t_14 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 184, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_14);\n            __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_14, __pyx_n_s_dot); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 184, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_1);\n            __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n            __pyx_t_14 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_zmat), __pyx_n_s_T); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 184, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_14);\n            __pyx_t_10 = NULL;\n            __pyx_t_17 = 0;\n            if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) {\n              __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_1);\n              if (likely(__pyx_t_10)) {\n                PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);\n                __Pyx_INCREF(__pyx_t_10);\n                __Pyx_INCREF(function);\n                __Pyx_DECREF_SET(__pyx_t_1, function);\n                __pyx_t_17 = 1;\n              }\n            }\n            #if CYTHON_FAST_PYCALL\n            if (PyFunction_Check(__pyx_t_1)) {\n              PyObject *__pyx_temp[3] = {__pyx_t_10, __pyx_t_14, ((PyObject *)__pyx_v_l_hessian)};\n              __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 184, __pyx_L66_error)\n              __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;\n              __Pyx_GOTREF(__pyx_t_2);\n              __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n            } else\n            #endif\n            #if CYTHON_FAST_PYCCALL\n            if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {\n              PyObject *__pyx_temp[3] = {__pyx_t_10, __pyx_t_14, ((PyObject *)__pyx_v_l_hessian)};\n              __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 184, __pyx_L66_error)\n              __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;\n              __Pyx_GOTREF(__pyx_t_2);\n              __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n            } else\n            #endif\n            {\n              __pyx_t_12 = PyTuple_New(2+__pyx_t_17); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 184, __pyx_L66_error)\n              __Pyx_GOTREF(__pyx_t_12);\n              if (__pyx_t_10) {\n                
__Pyx_GIVEREF(__pyx_t_10); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_10); __pyx_t_10 = NULL;\n              }\n              __Pyx_GIVEREF(__pyx_t_14);\n              PyTuple_SET_ITEM(__pyx_t_12, 0+__pyx_t_17, __pyx_t_14);\n              __Pyx_INCREF(((PyObject *)__pyx_v_l_hessian));\n              __Pyx_GIVEREF(((PyObject *)__pyx_v_l_hessian));\n              PyTuple_SET_ITEM(__pyx_t_12, 1+__pyx_t_17, ((PyObject *)__pyx_v_l_hessian));\n              __pyx_t_14 = 0;\n              __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 184, __pyx_L66_error)\n              __Pyx_GOTREF(__pyx_t_2);\n              __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n            }\n            __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n            __pyx_t_1 = NULL;\n            __pyx_t_17 = 0;\n            if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) {\n              __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_7);\n              if (likely(__pyx_t_1)) {\n                PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);\n                __Pyx_INCREF(__pyx_t_1);\n                __Pyx_INCREF(function);\n                __Pyx_DECREF_SET(__pyx_t_7, function);\n                __pyx_t_17 = 1;\n              }\n            }\n            #if CYTHON_FAST_PYCALL\n            if (PyFunction_Check(__pyx_t_7)) {\n              PyObject *__pyx_temp[3] = {__pyx_t_1, __pyx_t_2, ((PyObject *)__pyx_v_zmat)};\n              __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_7, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 184, __pyx_L66_error)\n              __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n              __Pyx_GOTREF(__pyx_t_4);\n              __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n            } else\n            #endif\n            #if CYTHON_FAST_PYCCALL\n            if (__Pyx_PyFastCFunction_Check(__pyx_t_7)) {\n              PyObject *__pyx_temp[3] = {__pyx_t_1, __pyx_t_2, ((PyObject *)__pyx_v_zmat)};\n              __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_7, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 184, __pyx_L66_error)\n              __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n              __Pyx_GOTREF(__pyx_t_4);\n              __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n            } else\n            #endif\n            {\n              __pyx_t_12 = PyTuple_New(2+__pyx_t_17); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 184, __pyx_L66_error)\n              __Pyx_GOTREF(__pyx_t_12);\n              if (__pyx_t_1) {\n                __Pyx_GIVEREF(__pyx_t_1); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_1); __pyx_t_1 = NULL;\n              }\n              __Pyx_GIVEREF(__pyx_t_2);\n              PyTuple_SET_ITEM(__pyx_t_12, 0+__pyx_t_17, __pyx_t_2);\n              __Pyx_INCREF(((PyObject *)__pyx_v_zmat));\n              __Pyx_GIVEREF(((PyObject *)__pyx_v_zmat));\n              PyTuple_SET_ITEM(__pyx_t_12, 1+__pyx_t_17, ((PyObject *)__pyx_v_zmat));\n              __pyx_t_2 = 0;\n              __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_12, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 184, __pyx_L66_error)\n              __Pyx_GOTREF(__pyx_t_4);\n              __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n            }\n            __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n
+185:                                           -np.dot(np.dot(np.dot(zmat.T, l_hessian), ymat), p_y) - np.dot(zmat.T, gradient_term))
\n
            __Pyx_TraceLine(185,0,__PYX_ERR(0, 185, __pyx_L66_error))\n            __pyx_t_12 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 185, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_12);\n            __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_dot); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 185, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_2);\n            __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n            __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 185, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_1);\n            __pyx_t_14 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_dot); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 185, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_14);\n            __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n            __pyx_t_10 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 185, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_10);\n            __pyx_t_30 = __Pyx_PyObject_GetAttrStr(__pyx_t_10, __pyx_n_s_dot); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 185, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_30);\n            __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n            __pyx_t_10 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_zmat), __pyx_n_s_T); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 185, __pyx_L66_error)\n            __Pyx_GOTREF(__pyx_t_10);\n            __pyx_t_31 = NULL;\n            __pyx_t_17 = 0;\n            if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_30))) {\n              __pyx_t_31 = PyMethod_GET_SELF(__pyx_t_30);\n              if (likely(__pyx_t_31)) {\n                PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_30);\n                __Pyx_INCREF(__pyx_t_31);\n                __Pyx_INCREF(function);\n                __Pyx_DECREF_SET(__pyx_t_30, function);\n                __pyx_t_17 = 1;\n              }\n            }\n            #if CYTHON_FAST_PYCALL\n            if (PyFunction_Check(__pyx_t_30)) {\n              PyObject *__pyx_temp[3] = {__pyx_t_31, __pyx_t_10, ((PyObject *)__pyx_v_l_hessian)};\n              __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_30, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 185, __pyx_L66_error)\n              __Pyx_XDECREF(__pyx_t_31); __pyx_t_31 = 0;\n              __Pyx_GOTREF(__pyx_t_1);\n              __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n            } else\n            #endif\n            #if CYTHON_FAST_PYCCALL\n            if (__Pyx_PyFastCFunction_Check(__pyx_t_30)) {\n              PyObject *__pyx_temp[3] = {__pyx_t_31, __pyx_t_10, ((PyObject *)__pyx_v_l_hessian)};\n              __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_30, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 185, __pyx_L66_error)\n              __Pyx_XDECREF(__pyx_t_31); __pyx_t_31 = 0;\n              __Pyx_GOTREF(__pyx_t_1);\n              __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n            } else\n            #endif\n            {\n              __pyx_t_32 = PyTuple_New(2+__pyx_t_17); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 185, __pyx_L66_error)\n              __Pyx_GOTREF(__pyx_t_32);\n              if (__pyx_t_31) {\n                __Pyx_GIVEREF(__pyx_t_31); PyTuple_SET_ITEM(__pyx_t_32, 0, __pyx_t_31); __pyx_t_31 = NULL;\n              }\n              __Pyx_GIVEREF(__pyx_t_10);\n              PyTuple_SET_ITEM(__pyx_t_32, 0+__pyx_t_17, __pyx_t_10);\n              
__Pyx_INCREF(((PyObject *)__pyx_v_l_hessian));\n              __Pyx_GIVEREF(((PyObject *)__pyx_v_l_hessian));\n              PyTuple_SET_ITEM(__pyx_t_32, 1+__pyx_t_17, ((PyObject *)__pyx_v_l_hessian));\n              __pyx_t_10 = 0;\n              __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_30, __pyx_t_32, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 185, __pyx_L66_error)\n              __Pyx_GOTREF(__pyx_t_1);\n              __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n            }\n            __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n            __pyx_t_30 = NULL;\n            __pyx_t_17 = 0;\n            if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_14))) {\n              __pyx_t_30 = PyMethod_GET_SELF(__pyx_t_14);\n              if (likely(__pyx_t_30)) {\n                PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_14);\n                __Pyx_INCREF(__pyx_t_30);\n                __Pyx_INCREF(function);\n                __Pyx_DECREF_SET(__pyx_t_14, function);\n                __pyx_t_17 = 1;\n              }\n            }\n            #if CYTHON_FAST_PYCALL\n            if (PyFunction_Check(__pyx_t_14)) {\n              PyObject *__pyx_temp[3] = {__pyx_t_30, __pyx_t_1, ((PyObject *)__pyx_v_ymat)};\n              __pyx_t_12 = __Pyx_PyFunction_FastCall(__pyx_t_14, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 185, __pyx_L66_error)\n              __Pyx_XDECREF(__pyx_t_30); __pyx_t_30 = 0;\n              __Pyx_GOTREF(__pyx_t_12);\n              __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n            } else\n            #endif\n            #if CYTHON_FAST_PYCCALL\n            if (__Pyx_PyFastCFunction_Check(__pyx_t_14)) {\n              PyObject *__pyx_temp[3] = {__pyx_t_30, __pyx_t_1, ((PyObject *)__pyx_v_ymat)};\n              __pyx_t_12 = __Pyx_PyCFunction_FastCall(__pyx_t_14, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 185, __pyx_L66_error)\n              __Pyx_XDECREF(__pyx_t_30); __pyx_t_30 = 0;\n              __Pyx_GOTREF(__pyx_t_12);\n              __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n            } else\n            #endif\n            {\n              __pyx_t_32 = PyTuple_New(2+__pyx_t_17); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 185, __pyx_L66_error)\n              __Pyx_GOTREF(__pyx_t_32);\n              if (__pyx_t_30) {\n                __Pyx_GIVEREF(__pyx_t_30); PyTuple_SET_ITEM(__pyx_t_32, 0, __pyx_t_30); __pyx_t_30 = NULL;\n              }\n              __Pyx_GIVEREF(__pyx_t_1);\n              PyTuple_SET_ITEM(__pyx_t_32, 0+__pyx_t_17, __pyx_t_1);\n              __Pyx_INCREF(((PyObject *)__pyx_v_ymat));\n              __Pyx_GIVEREF(((PyObject *)__pyx_v_ymat));\n              PyTuple_SET_ITEM(__pyx_t_32, 1+__pyx_t_17, ((PyObject *)__pyx_v_ymat));\n              __pyx_t_1 = 0;\n              __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_14, __pyx_t_32, NULL); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 185, __pyx_L66_error)\n              __Pyx_GOTREF(__pyx_t_12);\n              __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n            }\n            __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n            __pyx_t_14 = NULL;\n            __pyx_t_17 = 0;\n            if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n              __pyx_t_14 = PyMethod_GET_SELF(__pyx_t_2);\n              if (likely(__pyx_t_14)) {\n                PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n                __Pyx_INCREF(__pyx_t_14);\n                __Pyx_INCREF(function);\n                
+186:                 except np.linalg.LinAlgError:
+187:                     p_z = np.zeros(zmat.shape[1], dtype=np.float)
+188:                 step = np.dot(ymat, p_y) + np.dot(zmat, p_z)
 189:             else:
+190:                 step = np.dot(ymat, p_y)
+191:             old_energy = copy.deepcopy(prop_GM_values[it.multi_index])
+192:             old_chem_pots = copy.deepcopy(prop_MU_values[it.multi_index])
+193:             candidate_site_fracs = np.empty_like(site_fracs)
+194:             candidate_phase_fracs = np.empty_like(phase_fracs)
+195:             for sfidx in range(candidate_site_fracs.shape[0]):
+196:                 candidate_site_fracs[sfidx] = min(max(site_fracs[sfidx] + step[sfidx], MIN_SITE_FRACTION), 1)
 197: 
+198:             for pfidx in range(candidate_phase_fracs.shape[0]):
+199:                 candidate_phase_fracs[pfidx] = min(max(phase_fracs[pfidx] + step[candidate_site_fracs.shape[0] + pfidx], 0), 1)
+200:             candidate_l_constraints, candidate_constraint_jac, candidate_constraint_hess = \
+201:                 _compute_constraints(dbf, comps, phases, cur_conds,
+202:                                      candidate_site_fracs, candidate_phase_fracs, phase_records, mole_fractions=mole_fractions)
+203:             candidate_energy, candidate_gradient_term = \
+204:                 _build_multiphase_gradient(dbf, comps, phases, cur_conds, candidate_site_fracs,
 205:                                            candidate_phase_fracs, candidate_l_constraints,
 206:                                            candidate_constraint_jac, l_multipliers,
+207:                                            callable_dict, phase_records)
 208:             # We updated degrees of freedom this iteration
+209:             new_l_multipliers = np.linalg.solve(np.dot(constraint_jac, ymat).T,
+210:                                                 np.dot(ymat.T, gradient_term + np.dot(l_hessian, step)))
0;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n        __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);\n        if (likely(__pyx_t_4)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n          __Pyx_INCREF(__pyx_t_4);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_3, function);\n          __pyx_t_17 = 1;\n        }\n      }\n      #if CYTHON_FAST_PYCALL\n      if (PyFunction_Check(__pyx_t_3)) {\n        PyObject *__pyx_temp[3] = {__pyx_t_4, __pyx_t_7, __pyx_t_14};\n        __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 210, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n        __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      } else\n      #endif\n      #if CYTHON_FAST_PYCCALL\n      if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n        PyObject *__pyx_temp[3] = {__pyx_t_4, __pyx_t_7, __pyx_t_14};\n        __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 210, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n        __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      } else\n      #endif\n      {\n        __pyx_t_30 = PyTuple_New(2+__pyx_t_17); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 210, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        if (__pyx_t_4) {\n          __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_30, 0, __pyx_t_4); __pyx_t_4 = NULL;\n        }\n        __Pyx_GIVEREF(__pyx_t_7);\n        PyTuple_SET_ITEM(__pyx_t_30, 0+__pyx_t_17, __pyx_t_7);\n        __Pyx_GIVEREF(__pyx_t_14);\n        PyTuple_SET_ITEM(__pyx_t_30, 1+__pyx_t_17, __pyx_t_14);\n        __pyx_t_7 = 0;\n        __pyx_t_14 = 0;\n        __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_30, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 210, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      }\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = NULL;\n      __pyx_t_17 = 0;\n      if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_32))) {\n        __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_32);\n        if (likely(__pyx_t_3)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_32);\n          __Pyx_INCREF(__pyx_t_3);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_32, function);\n          __pyx_t_17 = 1;\n        }\n      }\n      #if CYTHON_FAST_PYCALL\n      if (PyFunction_Check(__pyx_t_32)) {\n        PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_t_12, __pyx_t_11};\n        __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_32, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 209, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_GOTREF(__pyx_t_2);\n        __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      } else\n      #endif\n      #if CYTHON_FAST_PYCCALL\n      if (__Pyx_PyFastCFunction_Check(__pyx_t_32)) {\n        PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_t_12, __pyx_t_11};\n        __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_32, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 209, __pyx_L1_error)\n        
__Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_GOTREF(__pyx_t_2);\n        __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      } else\n      #endif\n      {\n        __pyx_t_30 = PyTuple_New(2+__pyx_t_17); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 209, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        if (__pyx_t_3) {\n          __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_30, 0, __pyx_t_3); __pyx_t_3 = NULL;\n        }\n        __Pyx_GIVEREF(__pyx_t_12);\n        PyTuple_SET_ITEM(__pyx_t_30, 0+__pyx_t_17, __pyx_t_12);\n        __Pyx_GIVEREF(__pyx_t_11);\n        PyTuple_SET_ITEM(__pyx_t_30, 1+__pyx_t_17, __pyx_t_11);\n        __pyx_t_12 = 0;\n        __pyx_t_11 = 0;\n        __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_32, __pyx_t_30, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 209, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_2);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      }\n      __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n
+211:             np.clip(new_l_multipliers, -MAX_ABS_LAGRANGE_MULTIPLIER, MAX_ABS_LAGRANGE_MULTIPLIER,
\n
      __Pyx_TraceLine(211,0,__PYX_ERR(0, 211, __pyx_L1_error))\n      __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 211, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_2);\n      __pyx_t_32 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_clip); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 211, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_32);\n      __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n      __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_MAX_ABS_LAGRANGE_MULTIPLIER); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 211, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_2);\n      __pyx_t_30 = PyNumber_Negative(__pyx_t_2); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 211, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n      __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_MAX_ABS_LAGRANGE_MULTIPLIER); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 211, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_2);\n      __pyx_t_11 = PyTuple_New(3); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 211, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __Pyx_INCREF(((PyObject *)__pyx_v_new_l_multipliers));\n      __Pyx_GIVEREF(((PyObject *)__pyx_v_new_l_multipliers));\n      PyTuple_SET_ITEM(__pyx_t_11, 0, ((PyObject *)__pyx_v_new_l_multipliers));\n      __Pyx_GIVEREF(__pyx_t_30);\n      PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_30);\n      __Pyx_GIVEREF(__pyx_t_2);\n      PyTuple_SET_ITEM(__pyx_t_11, 2, __pyx_t_2);\n      __pyx_t_30 = 0;\n      __pyx_t_2 = 0;\n/* \u2026 */\n      __Pyx_TraceLine(211,0,__PYX_ERR(0, 211, __pyx_L1_error))\n      __pyx_t_30 = __Pyx_PyObject_Call(__pyx_t_32, __pyx_t_11, __pyx_t_2); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 211, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n
+212:                     out=new_l_multipliers)
\n
      __Pyx_TraceLine(212,0,__PYX_ERR(0, 212, __pyx_L1_error))\n      __pyx_t_2 = PyDict_New(); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 212, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_2);\n      if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_out, ((PyObject *)__pyx_v_new_l_multipliers)) < 0) __PYX_ERR(0, 212, __pyx_L1_error)\n
 213:             # XXX: Should fix underlying numerical problem at edges of composition space instead of working around
\n
+214:             if np.any(np.isnan(new_l_multipliers)):
\n
      __Pyx_TraceLine(214,0,__PYX_ERR(0, 214, __pyx_L1_error))\n      __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 214, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_2);\n      __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_any); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 214, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n      __pyx_t_32 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 214, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_32);\n      __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_32, __pyx_n_s_isnan); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 214, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_12);\n      __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n      __pyx_t_32 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_12))) {\n        __pyx_t_32 = PyMethod_GET_SELF(__pyx_t_12);\n        if (likely(__pyx_t_32)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12);\n          __Pyx_INCREF(__pyx_t_32);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_12, function);\n        }\n      }\n      if (!__pyx_t_32) {\n        __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_t_12, ((PyObject *)__pyx_v_new_l_multipliers)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 214, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_2);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_12)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_32, ((PyObject *)__pyx_v_new_l_multipliers)};\n          __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_12, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 214, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_32); __pyx_t_32 = 0;\n          __Pyx_GOTREF(__pyx_t_2);\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_12)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_32, ((PyObject *)__pyx_v_new_l_multipliers)};\n          __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_12, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 214, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_32); __pyx_t_32 = 0;\n          __Pyx_GOTREF(__pyx_t_2);\n        } else\n        #endif\n        {\n          __pyx_t_3 = PyTuple_New(1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 214, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_GIVEREF(__pyx_t_32); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_32); __pyx_t_32 = NULL;\n          __Pyx_INCREF(((PyObject *)__pyx_v_new_l_multipliers));\n          __Pyx_GIVEREF(((PyObject *)__pyx_v_new_l_multipliers));\n          PyTuple_SET_ITEM(__pyx_t_3, 0+1, ((PyObject *)__pyx_v_new_l_multipliers));\n          __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_12, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 214, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_2);\n          __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n      __pyx_t_12 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_11))) {\n        __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_11);\n        if (likely(__pyx_t_12)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_11);\n          __Pyx_INCREF(__pyx_t_12);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_11, function);\n        }\n      }\n      if (!__pyx_t_12) {\n        __pyx_t_30 = 
__Pyx_PyObject_CallOneArg(__pyx_t_11, __pyx_t_2); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 214, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n        __Pyx_GOTREF(__pyx_t_30);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_11)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_12, __pyx_t_2};\n          __pyx_t_30 = __Pyx_PyFunction_FastCall(__pyx_t_11, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 214, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;\n          __Pyx_GOTREF(__pyx_t_30);\n          __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_11)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_12, __pyx_t_2};\n          __pyx_t_30 = __Pyx_PyCFunction_FastCall(__pyx_t_11, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 214, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;\n          __Pyx_GOTREF(__pyx_t_30);\n          __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n        } else\n        #endif\n        {\n          __pyx_t_3 = PyTuple_New(1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 214, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_GIVEREF(__pyx_t_12); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_12); __pyx_t_12 = NULL;\n          __Pyx_GIVEREF(__pyx_t_2);\n          PyTuple_SET_ITEM(__pyx_t_3, 0+1, __pyx_t_2);\n          __pyx_t_2 = 0;\n          __pyx_t_30 = __Pyx_PyObject_Call(__pyx_t_11, __pyx_t_3, NULL); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 214, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_30);\n          __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_30); if (unlikely(__pyx_t_5 < 0)) __PYX_ERR(0, 214, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      if (__pyx_t_5) {\n/* \u2026 */\n      }\n
+215:                 print('WARNING: Unstable Lagrange multipliers: ', new_l_multipliers)
\n
        __Pyx_TraceLine(215,0,__PYX_ERR(0, 215, __pyx_L1_error))\n        __pyx_t_30 = PyTuple_New(2); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 215, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_INCREF(__pyx_kp_u_WARNING_Unstable_Lagrange_multip);\n        __Pyx_GIVEREF(__pyx_kp_u_WARNING_Unstable_Lagrange_multip);\n        PyTuple_SET_ITEM(__pyx_t_30, 0, __pyx_kp_u_WARNING_Unstable_Lagrange_multip);\n        __Pyx_INCREF(((PyObject *)__pyx_v_new_l_multipliers));\n        __Pyx_GIVEREF(((PyObject *)__pyx_v_new_l_multipliers));\n        PyTuple_SET_ITEM(__pyx_t_30, 1, ((PyObject *)__pyx_v_new_l_multipliers));\n        __pyx_t_11 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_t_30, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 215, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n
 216:                 # Equation 18.16 in Nocedal and Wright
\n
 217:                 # This method is less accurate but more stable
\n
+218:                 new_l_multipliers = np.dot(np.dot(np.linalg.inv(np.dot(candidate_constraint_jac,
\n
        __Pyx_TraceLine(218,0,__PYX_ERR(0, 218, __pyx_L1_error))\n        __pyx_t_30 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 218, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_30, __pyx_n_s_dot); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 218, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 218, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_2);\n        __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_dot); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 218, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_12);\n        __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n        __pyx_t_32 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 218, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_32);\n        __pyx_t_14 = __Pyx_PyObject_GetAttrStr(__pyx_t_32, __pyx_n_s_linalg); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 218, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_14);\n        __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n        __pyx_t_32 = __Pyx_PyObject_GetAttrStr(__pyx_t_14, __pyx_n_s_inv); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 218, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_32);\n        __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n        __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 218, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_7);\n        __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_dot); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 218, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n/* \u2026 */\n        __Pyx_TraceLine(218,0,__PYX_ERR(0, 218, __pyx_L1_error))\n        if (!(likely(((__pyx_t_11) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_11, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 218, __pyx_L1_error)\n        __pyx_t_20 = ((PyArrayObject *)__pyx_t_11);\n        {\n          __Pyx_BufFmt_StackElem __pyx_stack[1];\n          __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_new_l_multipliers.rcbuffer->pybuffer);\n          __pyx_t_17 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_new_l_multipliers.rcbuffer->pybuffer, (PyObject*)__pyx_t_20, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float64_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);\n          if (unlikely(__pyx_t_17 < 0)) {\n            PyErr_Fetch(&__pyx_t_23, &__pyx_t_22, &__pyx_t_21);\n            if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_new_l_multipliers.rcbuffer->pybuffer, (PyObject*)__pyx_v_new_l_multipliers, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float64_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {\n              Py_XDECREF(__pyx_t_23); Py_XDECREF(__pyx_t_22); Py_XDECREF(__pyx_t_21);\n              __Pyx_RaiseBufferFallbackError();\n            } else {\n              PyErr_Restore(__pyx_t_23, __pyx_t_22, __pyx_t_21);\n            }\n          }\n          __pyx_pybuffernd_new_l_multipliers.diminfo[0].strides = __pyx_pybuffernd_new_l_multipliers.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_new_l_multipliers.diminfo[0].shape = __pyx_pybuffernd_new_l_multipliers.rcbuffer->pybuffer.shape[0];\n          if (unlikely(__pyx_t_17 < 0)) __PYX_ERR(0, 218, __pyx_L1_error)\n        }\n        __pyx_t_20 = 0;\n        __Pyx_DECREF_SET(__pyx_v_new_l_multipliers, ((PyArrayObject *)__pyx_t_11));\n        __pyx_t_11 = 0;\n
+219:                                                                        candidate_constraint_jac.T)),
\n
        __Pyx_TraceLine(219,0,__PYX_ERR(0, 219, __pyx_L1_error))\n        __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_candidate_constraint_jac, __pyx_n_s_T); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 219, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_7);\n        __pyx_t_1 = NULL;\n        __pyx_t_17 = 0;\n        if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) {\n          __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_4);\n          if (likely(__pyx_t_1)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);\n            __Pyx_INCREF(__pyx_t_1);\n            __Pyx_INCREF(function);\n            __Pyx_DECREF_SET(__pyx_t_4, function);\n            __pyx_t_17 = 1;\n          }\n        }\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_4)) {\n          PyObject *__pyx_temp[3] = {__pyx_t_1, __pyx_v_candidate_constraint_jac, __pyx_t_7};\n          __pyx_t_14 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n          __Pyx_GOTREF(__pyx_t_14);\n          __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {\n          PyObject *__pyx_temp[3] = {__pyx_t_1, __pyx_v_candidate_constraint_jac, __pyx_t_7};\n          __pyx_t_14 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n          __Pyx_GOTREF(__pyx_t_14);\n          __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n        } else\n        #endif\n        {\n          __pyx_t_10 = PyTuple_New(2+__pyx_t_17); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_10);\n          if (__pyx_t_1) {\n            __Pyx_GIVEREF(__pyx_t_1); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_1); __pyx_t_1 = NULL;\n          }\n          __Pyx_INCREF(__pyx_v_candidate_constraint_jac);\n          __Pyx_GIVEREF(__pyx_v_candidate_constraint_jac);\n          PyTuple_SET_ITEM(__pyx_t_10, 0+__pyx_t_17, __pyx_v_candidate_constraint_jac);\n          __Pyx_GIVEREF(__pyx_t_7);\n          PyTuple_SET_ITEM(__pyx_t_10, 1+__pyx_t_17, __pyx_t_7);\n          __pyx_t_7 = 0;\n          __pyx_t_14 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_10, NULL); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_14);\n          __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n        }\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __pyx_t_4 = NULL;\n        if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_32))) {\n          __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_32);\n          if (likely(__pyx_t_4)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_32);\n            __Pyx_INCREF(__pyx_t_4);\n            __Pyx_INCREF(function);\n            __Pyx_DECREF_SET(__pyx_t_32, function);\n          }\n        }\n        if (!__pyx_t_4) {\n          __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_t_32, __pyx_t_14); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n          __Pyx_GOTREF(__pyx_t_2);\n        } else {\n          #if CYTHON_FAST_PYCALL\n          if (PyFunction_Check(__pyx_t_32)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_14};\n            __pyx_t_2 = 
__Pyx_PyFunction_FastCall(__pyx_t_32, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 218, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n            __Pyx_GOTREF(__pyx_t_2);\n            __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n          } else\n          #endif\n          #if CYTHON_FAST_PYCCALL\n          if (__Pyx_PyFastCFunction_Check(__pyx_t_32)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_14};\n            __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_32, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 218, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n            __Pyx_GOTREF(__pyx_t_2);\n            __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n          } else\n          #endif\n          {\n            __pyx_t_10 = PyTuple_New(1+1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 218, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_10);\n            __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_4); __pyx_t_4 = NULL;\n            __Pyx_GIVEREF(__pyx_t_14);\n            PyTuple_SET_ITEM(__pyx_t_10, 0+1, __pyx_t_14);\n            __pyx_t_14 = 0;\n            __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_32, __pyx_t_10, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 218, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_2);\n            __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n          }\n        }\n        __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n
+220:                                            candidate_constraint_jac), candidate_gradient_term)
\n
        __Pyx_TraceLine(220,0,__PYX_ERR(0, 220, __pyx_L1_error))\n        __pyx_t_32 = NULL;\n        __pyx_t_17 = 0;\n        if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_12))) {\n          __pyx_t_32 = PyMethod_GET_SELF(__pyx_t_12);\n          if (likely(__pyx_t_32)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12);\n            __Pyx_INCREF(__pyx_t_32);\n            __Pyx_INCREF(function);\n            __Pyx_DECREF_SET(__pyx_t_12, function);\n            __pyx_t_17 = 1;\n          }\n        }\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_12)) {\n          PyObject *__pyx_temp[3] = {__pyx_t_32, __pyx_t_2, __pyx_v_candidate_constraint_jac};\n          __pyx_t_30 = __Pyx_PyFunction_FastCall(__pyx_t_12, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_32); __pyx_t_32 = 0;\n          __Pyx_GOTREF(__pyx_t_30);\n          __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_12)) {\n          PyObject *__pyx_temp[3] = {__pyx_t_32, __pyx_t_2, __pyx_v_candidate_constraint_jac};\n          __pyx_t_30 = __Pyx_PyCFunction_FastCall(__pyx_t_12, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_32); __pyx_t_32 = 0;\n          __Pyx_GOTREF(__pyx_t_30);\n          __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n        } else\n        #endif\n        {\n          __pyx_t_10 = PyTuple_New(2+__pyx_t_17); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_10);\n          if (__pyx_t_32) {\n            __Pyx_GIVEREF(__pyx_t_32); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_32); __pyx_t_32 = NULL;\n          }\n          __Pyx_GIVEREF(__pyx_t_2);\n          PyTuple_SET_ITEM(__pyx_t_10, 0+__pyx_t_17, __pyx_t_2);\n          __Pyx_INCREF(__pyx_v_candidate_constraint_jac);\n          __Pyx_GIVEREF(__pyx_v_candidate_constraint_jac);\n          PyTuple_SET_ITEM(__pyx_t_10, 1+__pyx_t_17, __pyx_v_candidate_constraint_jac);\n          __pyx_t_2 = 0;\n          __pyx_t_30 = __Pyx_PyObject_Call(__pyx_t_12, __pyx_t_10, NULL); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_30);\n          __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n        }\n        __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        __pyx_t_12 = NULL;\n        __pyx_t_17 = 0;\n        if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n          __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_3);\n          if (likely(__pyx_t_12)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n            __Pyx_INCREF(__pyx_t_12);\n            __Pyx_INCREF(function);\n            __Pyx_DECREF_SET(__pyx_t_3, function);\n            __pyx_t_17 = 1;\n          }\n        }\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_3)) {\n          PyObject *__pyx_temp[3] = {__pyx_t_12, __pyx_t_30, __pyx_v_candidate_gradient_term};\n          __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if 
(__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n          PyObject *__pyx_temp[3] = {__pyx_t_12, __pyx_t_30, __pyx_v_candidate_gradient_term};\n          __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_17, 2+__pyx_t_17); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        } else\n        #endif\n        {\n          __pyx_t_10 = PyTuple_New(2+__pyx_t_17); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_10);\n          if (__pyx_t_12) {\n            __Pyx_GIVEREF(__pyx_t_12); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_12); __pyx_t_12 = NULL;\n          }\n          __Pyx_GIVEREF(__pyx_t_30);\n          PyTuple_SET_ITEM(__pyx_t_10, 0+__pyx_t_17, __pyx_t_30);\n          __Pyx_INCREF(__pyx_v_candidate_gradient_term);\n          __Pyx_GIVEREF(__pyx_v_candidate_gradient_term);\n          PyTuple_SET_ITEM(__pyx_t_10, 1+__pyx_t_17, __pyx_v_candidate_gradient_term);\n          __pyx_t_30 = 0;\n          __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_10, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 218, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n        }\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+221:                 np.clip(new_l_multipliers, -MAX_ABS_LAGRANGE_MULTIPLIER, MAX_ABS_LAGRANGE_MULTIPLIER,
\n
        __Pyx_TraceLine(221,0,__PYX_ERR(0, 221, __pyx_L1_error))\n        __pyx_t_11 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 221, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_clip); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 221, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __pyx_t_11 = __Pyx_GetModuleGlobalName(__pyx_n_s_MAX_ABS_LAGRANGE_MULTIPLIER); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 221, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __pyx_t_10 = PyNumber_Negative(__pyx_t_11); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 221, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_10);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __pyx_t_11 = __Pyx_GetModuleGlobalName(__pyx_n_s_MAX_ABS_LAGRANGE_MULTIPLIER); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 221, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __pyx_t_30 = PyTuple_New(3); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 221, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_INCREF(((PyObject *)__pyx_v_new_l_multipliers));\n        __Pyx_GIVEREF(((PyObject *)__pyx_v_new_l_multipliers));\n        PyTuple_SET_ITEM(__pyx_t_30, 0, ((PyObject *)__pyx_v_new_l_multipliers));\n        __Pyx_GIVEREF(__pyx_t_10);\n        PyTuple_SET_ITEM(__pyx_t_30, 1, __pyx_t_10);\n        __Pyx_GIVEREF(__pyx_t_11);\n        PyTuple_SET_ITEM(__pyx_t_30, 2, __pyx_t_11);\n        __pyx_t_10 = 0;\n        __pyx_t_11 = 0;\n/* \u2026 */\n        __Pyx_TraceLine(221,0,__PYX_ERR(0, 221, __pyx_L1_error))\n        __pyx_t_10 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_30, __pyx_t_11); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 221, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_10);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n
+222:                     out=new_l_multipliers)
\n
        __Pyx_TraceLine(222,0,__PYX_ERR(0, 222, __pyx_L1_error))\n        __pyx_t_11 = PyDict_New(); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 222, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_out, ((PyObject *)__pyx_v_new_l_multipliers)) < 0) __PYX_ERR(0, 222, __pyx_L1_error)\n
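The fallback branch above recomputes the multipliers from the least-squares estimate cited in its comment (Equation 18.16 of Nocedal and Wright). Writing $A$ for `candidate_constraint_jac` and $g$ for `candidate_gradient_term`, the quantity being evaluated is

$$\lambda \approx \left(A A^{T}\right)^{-1} A\, g,$$

which, as the comments note, is less accurate than the primary solve but more stable near the edges of composition space, where the primary system can return NaNs.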
    l_multipliers = new_l_multipliers
    if np.any(np.isnan(l_multipliers)):
        print('Invalid l_multipliers after recalculation', l_multipliers)
        l_multipliers[:] = 0
    if verbose:
        print('NEW_L_MULTIPLIERS', l_multipliers)
    num_mass_bals = len([i for i in cur_conds.keys() if i.startswith('X_')]) + 1
    chemical_potentials = l_multipliers[sum([len(dbf.phases[i].sublattices) for i in phases]):
                                        sum([len(dbf.phases[i].sublattices) for i in phases]) + num_mass_bals]
    prop_MU_values[it.multi_index] = chemical_potentials
    prop_NP_values[it.multi_index + np.index_exp[:len(phases)]] = candidate_phase_fracs
    prop_X_values[it.multi_index + np.index_exp[:len(phases)]] = 0
+235:             prop_GM_values[it.multi_index] = candidate_energy
\n
      __Pyx_TraceLine(235,0,__PYX_ERR(0, 235, __pyx_L1_error))\n      __pyx_t_30 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 235, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      if (unlikely(PyObject_SetItem(__pyx_v_prop_GM_values, __pyx_t_30, __pyx_v_candidate_energy) < 0)) __PYX_ERR(0, 235, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n
+236:             var_offset = 0
\n
      __Pyx_TraceLine(236,0,__PYX_ERR(0, 236, __pyx_L1_error))\n      __Pyx_INCREF(__pyx_int_0);\n      __Pyx_XDECREF_SET(__pyx_v_var_offset, __pyx_int_0);\n
+237:             for phase_idx in range(len(phases)):
\n
      __Pyx_TraceLine(237,0,__PYX_ERR(0, 237, __pyx_L1_error))\n      __pyx_t_8 = PyObject_Length(__pyx_v_phases); if (unlikely(__pyx_t_8 == -1)) __PYX_ERR(0, 237, __pyx_L1_error)\n      for (__pyx_t_15 = 0; __pyx_t_15 < __pyx_t_8; __pyx_t_15+=1) {\n        __pyx_v_phase_idx = __pyx_t_15;\n
+238:                 prop_Y_values[it.multi_index + np.index_exp[phase_idx, :phase_dof[phase_idx]]] = \\
\n
        __Pyx_TraceLine(238,0,__PYX_ERR(0, 238, __pyx_L1_error))\n        __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 238, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __pyx_t_10 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 238, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_10);\n        __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_10, __pyx_n_s_index_exp); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 238, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_2);\n        __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n        __pyx_t_10 = PyInt_FromSsize_t(__pyx_v_phase_idx); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 238, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_10);\n        __pyx_t_12 = __Pyx_GetItemInt_List(__pyx_v_phase_dof, __pyx_v_phase_idx, Py_ssize_t, 1, PyInt_FromSsize_t, 1, 1, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 238, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_12);\n        __pyx_t_3 = PySlice_New(Py_None, __pyx_t_12, Py_None); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 238, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 238, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_12);\n        __Pyx_GIVEREF(__pyx_t_10);\n        PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_10);\n        __Pyx_GIVEREF(__pyx_t_3);\n        PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_3);\n        __pyx_t_10 = 0;\n        __pyx_t_3 = 0;\n        __pyx_t_3 = PyObject_GetItem(__pyx_t_2, __pyx_t_12); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 238, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n        __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        __pyx_t_12 = PyNumber_Add(__pyx_t_11, __pyx_t_3); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 238, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_12);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_Y_values, __pyx_t_12, __pyx_t_30) < 0)) __PYX_ERR(0, 238, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n
+239:                     candidate_site_fracs[var_offset:var_offset + phase_dof[phase_idx]]
\n
        __Pyx_TraceLine(239,0,__PYX_ERR(0, 239, __pyx_L1_error))\n        __pyx_t_30 = __Pyx_GetItemInt_List(__pyx_v_phase_dof, __pyx_v_phase_idx, Py_ssize_t, 1, PyInt_FromSsize_t, 1, 1, 1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 239, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __pyx_t_11 = PyNumber_Add(__pyx_v_var_offset, __pyx_t_30); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 239, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __pyx_t_30 = __Pyx_PyObject_GetSlice(((PyObject *)__pyx_v_candidate_site_fracs), 0, 0, &__pyx_v_var_offset, &__pyx_t_11, NULL, 0, 0, 1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 239, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n
+240:                 for comp_idx, comp in enumerate([c for c in comps if c != 'VA']):
\n
        __Pyx_TraceLine(240,0,__PYX_ERR(0, 240, __pyx_L1_error))\n        __Pyx_INCREF(__pyx_int_0);\n        __pyx_t_30 = __pyx_int_0;\n        { /* enter inner scope */\n          PyObject *__pyx_8genexpr7__pyx_v_c = NULL;\n          __pyx_t_12 = PyList_New(0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 240, __pyx_L112_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          if (likely(PyList_CheckExact(__pyx_v_comps)) || PyTuple_CheckExact(__pyx_v_comps)) {\n            __pyx_t_3 = __pyx_v_comps; __Pyx_INCREF(__pyx_t_3); __pyx_t_24 = 0;\n            __pyx_t_9 = NULL;\n          } else {\n            __pyx_t_24 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_comps); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 240, __pyx_L112_error)\n            __Pyx_GOTREF(__pyx_t_3);\n            __pyx_t_9 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 240, __pyx_L112_error)\n          }\n          for (;;) {\n            if (likely(!__pyx_t_9)) {\n              if (likely(PyList_CheckExact(__pyx_t_3))) {\n                if (__pyx_t_24 >= PyList_GET_SIZE(__pyx_t_3)) break;\n                #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n                __pyx_t_11 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_24); __Pyx_INCREF(__pyx_t_11); __pyx_t_24++; if (unlikely(0 < 0)) __PYX_ERR(0, 240, __pyx_L112_error)\n                #else\n                __pyx_t_11 = PySequence_ITEM(__pyx_t_3, __pyx_t_24); __pyx_t_24++; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 240, __pyx_L112_error)\n                __Pyx_GOTREF(__pyx_t_11);\n                #endif\n              } else {\n                if (__pyx_t_24 >= PyTuple_GET_SIZE(__pyx_t_3)) break;\n                #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n                __pyx_t_11 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_24); __Pyx_INCREF(__pyx_t_11); __pyx_t_24++; if (unlikely(0 < 0)) __PYX_ERR(0, 240, __pyx_L112_error)\n                #else\n                __pyx_t_11 = PySequence_ITEM(__pyx_t_3, __pyx_t_24); __pyx_t_24++; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 240, __pyx_L112_error)\n                __Pyx_GOTREF(__pyx_t_11);\n                #endif\n              }\n            } else {\n              __pyx_t_11 = __pyx_t_9(__pyx_t_3);\n              if (unlikely(!__pyx_t_11)) {\n                PyObject* exc_type = PyErr_Occurred();\n                if (exc_type) {\n                  if (likely(exc_type == PyExc_StopIteration || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n                  else __PYX_ERR(0, 240, __pyx_L112_error)\n                }\n                break;\n              }\n              __Pyx_GOTREF(__pyx_t_11);\n            }\n            __Pyx_XDECREF_SET(__pyx_8genexpr7__pyx_v_c, __pyx_t_11);\n            __pyx_t_11 = 0;\n            __pyx_t_5 = (__Pyx_PyUnicode_Equals(__pyx_8genexpr7__pyx_v_c, __pyx_n_u_VA, Py_NE)); if (unlikely(__pyx_t_5 < 0)) __PYX_ERR(0, 240, __pyx_L112_error)\n            if (__pyx_t_5) {\n              if (unlikely(__Pyx_ListComp_Append(__pyx_t_12, (PyObject*)__pyx_8genexpr7__pyx_v_c))) __PYX_ERR(0, 240, __pyx_L112_error)\n            }\n          }\n          __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n          __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_c);\n          goto __pyx_L116_exit_scope;\n          __pyx_L112_error:;\n          __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_c);\n          goto __pyx_L1_error;\n          __pyx_L116_exit_scope:;\n        } /* exit inner scope */\n        __pyx_t_3 = __pyx_t_12; __Pyx_INCREF(__pyx_t_3); __pyx_t_24 = 0;\n        
__Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        for (;;) {\n          if (__pyx_t_24 >= PyList_GET_SIZE(__pyx_t_3)) break;\n          #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n          __pyx_t_12 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_24); __Pyx_INCREF(__pyx_t_12); __pyx_t_24++; if (unlikely(0 < 0)) __PYX_ERR(0, 240, __pyx_L1_error)\n          #else\n          __pyx_t_12 = PySequence_ITEM(__pyx_t_3, __pyx_t_24); __pyx_t_24++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 240, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          #endif\n          __Pyx_XDECREF_SET(__pyx_v_comp, __pyx_t_12);\n          __pyx_t_12 = 0;\n          __Pyx_INCREF(__pyx_t_30);\n          __Pyx_XDECREF_SET(__pyx_v_comp_idx, __pyx_t_30);\n          __pyx_t_12 = __Pyx_PyInt_AddObjC(__pyx_t_30, __pyx_int_1, 1, 0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 240, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          __Pyx_DECREF(__pyx_t_30);\n          __pyx_t_30 = __pyx_t_12;\n          __pyx_t_12 = 0;\n/* \u2026 */\n          __Pyx_TraceLine(240,0,__PYX_ERR(0, 240, __pyx_L1_error))\n        }\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n
+241:                     prop_X_values[it.multi_index + np.index_exp[phase_idx, comp_idx]] = \\
\n
          __Pyx_TraceLine(241,0,__PYX_ERR(0, 241, __pyx_L1_error))\n          __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 241, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_2);\n          __pyx_t_32 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 241, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_32);\n          __pyx_t_10 = __Pyx_PyObject_GetAttrStr(__pyx_t_32, __pyx_n_s_index_exp); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 241, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_10);\n          __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n          __pyx_t_32 = PyInt_FromSsize_t(__pyx_v_phase_idx); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 241, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_32);\n          __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 241, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_GIVEREF(__pyx_t_32);\n          PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_32);\n          __Pyx_INCREF(__pyx_v_comp_idx);\n          __Pyx_GIVEREF(__pyx_v_comp_idx);\n          PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_v_comp_idx);\n          __pyx_t_32 = 0;\n          __pyx_t_32 = PyObject_GetItem(__pyx_t_10, __pyx_t_11); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 241, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_32);\n          __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n          __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n          __pyx_t_11 = PyNumber_Add(__pyx_t_2, __pyx_t_32); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 241, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n          __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n          if (unlikely(PyObject_SetItem(__pyx_v_prop_X_values, __pyx_t_11, __pyx_t_12) < 0)) __PYX_ERR(0, 241, __pyx_L1_error)\n          __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n          __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n
+242:                         mole_fractions[(phases[phase_idx], comp)][0](
\n
          __Pyx_TraceLine(242,0,__PYX_ERR(0, 242, __pyx_L1_error))\n          __pyx_t_11 = __Pyx_GetItemInt(__pyx_v_phases, __pyx_v_phase_idx, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 242, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 242, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_2);\n          __Pyx_GIVEREF(__pyx_t_11);\n          PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_11);\n          __Pyx_INCREF(__pyx_v_comp);\n          __Pyx_GIVEREF(__pyx_v_comp);\n          PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_comp);\n          __pyx_t_11 = 0;\n          __pyx_t_11 = __Pyx_PyDict_GetItem(__pyx_v_mole_fractions, __pyx_t_2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 242, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n          __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_11, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 242, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_2);\n          __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n
+243:                             [candidate_site_fracs[var_offset:var_offset + phase_dof[phase_idx]]])
\n
          __Pyx_TraceLine(243,0,__PYX_ERR(0, 243, __pyx_L1_error))\n          __pyx_t_11 = __Pyx_GetItemInt_List(__pyx_v_phase_dof, __pyx_v_phase_idx, Py_ssize_t, 1, PyInt_FromSsize_t, 1, 1, 1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 243, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __pyx_t_10 = PyNumber_Add(__pyx_v_var_offset, __pyx_t_11); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 243, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_10);\n          __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n          __pyx_t_11 = __Pyx_PyObject_GetSlice(((PyObject *)__pyx_v_candidate_site_fracs), 0, 0, &__pyx_v_var_offset, &__pyx_t_10, NULL, 0, 0, 1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 243, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n          __pyx_t_10 = PyList_New(1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 243, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_10);\n          __Pyx_GIVEREF(__pyx_t_11);\n          PyList_SET_ITEM(__pyx_t_10, 0, __pyx_t_11);\n          __pyx_t_11 = 0;\n          __pyx_t_11 = NULL;\n          if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n            __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_2);\n            if (likely(__pyx_t_11)) {\n              PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n              __Pyx_INCREF(__pyx_t_11);\n              __Pyx_INCREF(function);\n              __Pyx_DECREF_SET(__pyx_t_2, function);\n            }\n          }\n          if (!__pyx_t_11) {\n            __pyx_t_12 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_10); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 242, __pyx_L1_error)\n            __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n            __Pyx_GOTREF(__pyx_t_12);\n          } else {\n            #if CYTHON_FAST_PYCALL\n            if (PyFunction_Check(__pyx_t_2)) {\n              PyObject *__pyx_temp[2] = {__pyx_t_11, __pyx_t_10};\n              __pyx_t_12 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 242, __pyx_L1_error)\n              __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0;\n              __Pyx_GOTREF(__pyx_t_12);\n              __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n            } else\n            #endif\n            #if CYTHON_FAST_PYCCALL\n            if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n              PyObject *__pyx_temp[2] = {__pyx_t_11, __pyx_t_10};\n              __pyx_t_12 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 242, __pyx_L1_error)\n              __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0;\n              __Pyx_GOTREF(__pyx_t_12);\n              __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n            } else\n            #endif\n            {\n              __pyx_t_32 = PyTuple_New(1+1); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 242, __pyx_L1_error)\n              __Pyx_GOTREF(__pyx_t_32);\n              __Pyx_GIVEREF(__pyx_t_11); PyTuple_SET_ITEM(__pyx_t_32, 0, __pyx_t_11); __pyx_t_11 = NULL;\n              __Pyx_GIVEREF(__pyx_t_10);\n              PyTuple_SET_ITEM(__pyx_t_32, 0+1, __pyx_t_10);\n              __pyx_t_10 = 0;\n              __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_32, NULL); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 242, __pyx_L1_error)\n              __Pyx_GOTREF(__pyx_t_12);\n              __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n            }\n          }\n          __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n
+244:                 var_offset += phase_dof[phase_idx]
\n
        __Pyx_TraceLine(244,0,__PYX_ERR(0, 244, __pyx_L1_error))\n        __pyx_t_30 = __Pyx_GetItemInt_List(__pyx_v_phase_dof, __pyx_v_phase_idx, Py_ssize_t, 1, PyInt_FromSsize_t, 1, 1, 1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 244, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_v_var_offset, __pyx_t_30); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 244, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __Pyx_DECREF_SET(__pyx_v_var_offset, __pyx_t_3);\n        __pyx_t_3 = 0;\n      }\n
 245: 
\n
+246:             properties.attrs['solve_iterations'] += 1
\n
      __Pyx_TraceLine(246,0,__PYX_ERR(0, 246, __pyx_L1_error))\n      __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_properties, __pyx_n_s_attrs); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 246, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_INCREF(__pyx_n_u_solve_iterations);\n      __pyx_t_45 = __pyx_n_u_solve_iterations;\n      __pyx_t_30 = PyObject_GetItem(__pyx_t_3, __pyx_t_45); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 246, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __pyx_t_12 = __Pyx_PyInt_AddObjC(__pyx_t_30, __pyx_int_1, 1, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 246, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_12);\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      if (unlikely(PyObject_SetItem(__pyx_t_3, __pyx_t_45, __pyx_t_12) < 0)) __PYX_ERR(0, 246, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n      __Pyx_DECREF(__pyx_t_45); __pyx_t_45 = 0;\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+247:             total_comp = np.nansum(prop_NP_values[it.multi_index][..., np.newaxis] * \\
\n
      __Pyx_TraceLine(247,0,__PYX_ERR(0, 247, __pyx_L1_error))\n      __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_nansum); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_12);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_30 = PyObject_GetItem(__pyx_v_prop_NP_values, __pyx_t_3); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_newaxis); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_INCREF(Py_Ellipsis);\n      __Pyx_GIVEREF(Py_Ellipsis);\n      PyTuple_SET_ITEM(__pyx_t_3, 0, Py_Ellipsis);\n      __Pyx_GIVEREF(__pyx_t_11);\n      PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_11);\n      __pyx_t_11 = 0;\n      __pyx_t_11 = PyObject_GetItem(__pyx_t_30, __pyx_t_3); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n/* \u2026 */\n      __Pyx_TraceLine(247,0,__PYX_ERR(0, 247, __pyx_L1_error))\n      __pyx_t_3 = PyNumber_Multiply(__pyx_t_11, __pyx_t_30); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      __pyx_t_30 = PyTuple_New(1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __Pyx_GIVEREF(__pyx_t_3);\n      PyTuple_SET_ITEM(__pyx_t_30, 0, __pyx_t_3);\n      __pyx_t_3 = 0;\n/* \u2026 */\n      __Pyx_TraceLine(247,0,__PYX_ERR(0, 247, __pyx_L1_error))\n      __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_12, __pyx_t_30, __pyx_t_3); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 247, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_XDECREF_SET(__pyx_v_total_comp, __pyx_t_11);\n      __pyx_t_11 = 0;\n
+248:                                    prop_X_values[it.multi_index], axis=-2)
\n
      __Pyx_TraceLine(248,0,__PYX_ERR(0, 248, __pyx_L1_error))\n      __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 248, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_30 = PyObject_GetItem(__pyx_v_prop_X_values, __pyx_t_3); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 248, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n/* \u2026 */\n      __Pyx_TraceLine(248,0,__PYX_ERR(0, 248, __pyx_L1_error))\n      __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 248, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_axis, __pyx_int_neg_2) < 0) __PYX_ERR(0, 248, __pyx_L1_error)\n
+249:             driving_force = (prop_MU_values[it.multi_index] * total_comp).sum(axis=-1) - \\
\n
      __Pyx_TraceLine(249,0,__PYX_ERR(0, 249, __pyx_L1_error))\n      __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __pyx_t_3 = PyObject_GetItem(__pyx_v_prop_MU_values, __pyx_t_11); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __pyx_t_11 = PyNumber_Multiply(__pyx_t_3, __pyx_v_total_comp); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_sum); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __pyx_t_11 = PyDict_New(); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_axis, __pyx_int_neg_1) < 0) __PYX_ERR(0, 249, __pyx_L1_error)\n      __pyx_t_30 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_empty_tuple, __pyx_t_11); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n/* \u2026 */\n      __Pyx_TraceLine(249,0,__PYX_ERR(0, 249, __pyx_L1_error))\n      __pyx_t_11 = PyNumber_Subtract(__pyx_t_30, __pyx_t_3); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_XDECREF_SET(__pyx_v_driving_force, __pyx_t_11);\n      __pyx_t_11 = 0;\n
+250:                              prop_GM_values[it.multi_index]
\n
      __Pyx_TraceLine(250,0,__PYX_ERR(0, 250, __pyx_L1_error))\n      __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 250, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __pyx_t_3 = PyObject_GetItem(__pyx_v_prop_GM_values, __pyx_t_11); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 250, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n
+251:             driving_force = np.squeeze(driving_force)
\n
      __Pyx_TraceLine(251,0,__PYX_ERR(0, 251, __pyx_L1_error))\n      __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 251, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_30 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_squeeze); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 251, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_30))) {\n        __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_30);\n        if (likely(__pyx_t_3)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_30);\n          __Pyx_INCREF(__pyx_t_3);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_30, function);\n        }\n      }\n      if (!__pyx_t_3) {\n        __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_t_30, __pyx_v_driving_force); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 251, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_30)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_driving_force};\n          __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_30, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 251, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_30)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_driving_force};\n          __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_30, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 251, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n        } else\n        #endif\n        {\n          __pyx_t_12 = PyTuple_New(1+1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 251, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_3); __pyx_t_3 = NULL;\n          __Pyx_INCREF(__pyx_v_driving_force);\n          __Pyx_GIVEREF(__pyx_v_driving_force);\n          PyTuple_SET_ITEM(__pyx_t_12, 0+1, __pyx_v_driving_force);\n          __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_30, __pyx_t_12, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 251, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      __Pyx_DECREF_SET(__pyx_v_driving_force, __pyx_t_11);\n      __pyx_t_11 = 0;\n
+252:             if verbose:
\n
      __Pyx_TraceLine(252,0,__PYX_ERR(0, 252, __pyx_L1_error))\n      __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_v_verbose); if (unlikely(__pyx_t_5 < 0)) __PYX_ERR(0, 252, __pyx_L1_error)\n      if (__pyx_t_5) {\n/* \u2026 */\n      }\n
+253:                 print('Chem pot progress', prop_MU_values[it.multi_index] - old_chem_pots)
\n
        __Pyx_TraceLine(253,0,__PYX_ERR(0, 253, __pyx_L1_error))\n        __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 253, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __pyx_t_30 = PyObject_GetItem(__pyx_v_prop_MU_values, __pyx_t_11); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 253, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __pyx_t_11 = PyNumber_Subtract(__pyx_t_30, __pyx_v_old_chem_pots); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 253, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __pyx_t_30 = PyTuple_New(2); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 253, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_INCREF(__pyx_kp_u_Chem_pot_progress);\n        __Pyx_GIVEREF(__pyx_kp_u_Chem_pot_progress);\n        PyTuple_SET_ITEM(__pyx_t_30, 0, __pyx_kp_u_Chem_pot_progress);\n        __Pyx_GIVEREF(__pyx_t_11);\n        PyTuple_SET_ITEM(__pyx_t_30, 1, __pyx_t_11);\n        __pyx_t_11 = 0;\n        __pyx_t_11 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_t_30, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 253, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n
+254:                 print('Energy progress', prop_GM_values[it.multi_index] - old_energy)
\n
        __Pyx_TraceLine(254,0,__PYX_ERR(0, 254, __pyx_L1_error))\n        __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __pyx_t_30 = PyObject_GetItem(__pyx_v_prop_GM_values, __pyx_t_11); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 254, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __pyx_t_11 = PyNumber_Subtract(__pyx_t_30, __pyx_v_old_energy); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __pyx_t_30 = PyTuple_New(2); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 254, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_INCREF(__pyx_kp_u_Energy_progress);\n        __Pyx_GIVEREF(__pyx_kp_u_Energy_progress);\n        PyTuple_SET_ITEM(__pyx_t_30, 0, __pyx_kp_u_Energy_progress);\n        __Pyx_GIVEREF(__pyx_t_11);\n        PyTuple_SET_ITEM(__pyx_t_30, 1, __pyx_t_11);\n        __pyx_t_11 = 0;\n        __pyx_t_11 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_t_30, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n
+255:                 print('Driving force', driving_force)
\n
        __Pyx_TraceLine(255,0,__PYX_ERR(0, 255, __pyx_L1_error))\n        __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 255, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_INCREF(__pyx_kp_u_Driving_force);\n        __Pyx_GIVEREF(__pyx_kp_u_Driving_force);\n        PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_kp_u_Driving_force);\n        __Pyx_INCREF(__pyx_v_driving_force);\n        __Pyx_GIVEREF(__pyx_v_driving_force);\n        PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_v_driving_force);\n        __pyx_t_30 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_t_11, NULL); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 255, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n
+256:             no_progress = np.abs(prop_MU_values[it.multi_index] - old_chem_pots).max() < 0.01
\n
      __Pyx_TraceLine(256,0,__PYX_ERR(0, 256, __pyx_L1_error))\n      __pyx_t_12 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 256, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_12);\n      __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_abs); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 256, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n      __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 256, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_12);\n      __pyx_t_32 = PyObject_GetItem(__pyx_v_prop_MU_values, __pyx_t_12); if (unlikely(!__pyx_t_32)) __PYX_ERR(0, 256, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_32);\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n      __pyx_t_12 = PyNumber_Subtract(__pyx_t_32, __pyx_v_old_chem_pots); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 256, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_12);\n      __Pyx_DECREF(__pyx_t_32); __pyx_t_32 = 0;\n      __pyx_t_32 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n        __pyx_t_32 = PyMethod_GET_SELF(__pyx_t_3);\n        if (likely(__pyx_t_32)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n          __Pyx_INCREF(__pyx_t_32);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_3, function);\n        }\n      }\n      if (!__pyx_t_32) {\n        __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_12); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 256, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        __Pyx_GOTREF(__pyx_t_11);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_3)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_32, __pyx_t_12};\n          __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 256, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_32); __pyx_t_32 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_32, __pyx_t_12};\n          __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 256, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_32); __pyx_t_32 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        } else\n        #endif\n        {\n          __pyx_t_2 = PyTuple_New(1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 256, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_2);\n          __Pyx_GIVEREF(__pyx_t_32); PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_32); __pyx_t_32 = NULL;\n          __Pyx_GIVEREF(__pyx_t_12);\n          PyTuple_SET_ITEM(__pyx_t_2, 0+1, __pyx_t_12);\n          __pyx_t_12 = 0;\n          __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 256, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_max); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 256, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __pyx_t_11 = NULL;\n      if 
(CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {\n        __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_3);\n        if (likely(__pyx_t_11)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n          __Pyx_INCREF(__pyx_t_11);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_3, function);\n        }\n      }\n      if (__pyx_t_11) {\n        __pyx_t_30 = __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_11); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 256, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      } else {\n        __pyx_t_30 = __Pyx_PyObject_CallNoArg(__pyx_t_3); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 256, __pyx_L1_error)\n      }\n      __Pyx_GOTREF(__pyx_t_30);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = PyObject_RichCompare(__pyx_t_30, __pyx_float_0_01, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 256, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      __Pyx_XDECREF_SET(__pyx_v_no_progress, __pyx_t_3);\n      __pyx_t_3 = 0;\n
+257:             no_progress &= np.abs(prop_GM_values[it.multi_index] - old_energy) < MIN_SOLVE_ENERGY_PROGRESS
\n
      __Pyx_TraceLine(257,0,__PYX_ERR(0, 257, __pyx_L1_error))\n      __pyx_t_30 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 257, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_30, __pyx_n_s_abs); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 257, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      __pyx_t_30 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 257, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __pyx_t_2 = PyObject_GetItem(__pyx_v_prop_GM_values, __pyx_t_30); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 257, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_2);\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      __pyx_t_30 = PyNumber_Subtract(__pyx_t_2, __pyx_v_old_energy); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 257, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n      __pyx_t_2 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_11))) {\n        __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_11);\n        if (likely(__pyx_t_2)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_11);\n          __Pyx_INCREF(__pyx_t_2);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_11, function);\n        }\n      }\n      if (!__pyx_t_2) {\n        __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_11, __pyx_t_30); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 257, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __Pyx_GOTREF(__pyx_t_3);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_11)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_2, __pyx_t_30};\n          __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_11, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 257, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_11)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_2, __pyx_t_30};\n          __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_11, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 257, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        } else\n        #endif\n        {\n          __pyx_t_12 = PyTuple_New(1+1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 257, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_2); __pyx_t_2 = NULL;\n          __Pyx_GIVEREF(__pyx_t_30);\n          PyTuple_SET_ITEM(__pyx_t_12, 0+1, __pyx_t_30);\n          __pyx_t_30 = 0;\n          __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_11, __pyx_t_12, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 257, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __pyx_t_11 = __Pyx_GetModuleGlobalName(__pyx_n_s_MIN_SOLVE_ENERGY_PROGRESS); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 257, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __pyx_t_12 = PyObject_RichCompare(__pyx_t_3, __pyx_t_11, Py_LT); __Pyx_XGOTREF(__pyx_t_12); 
if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 257, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __pyx_t_11 = PyNumber_InPlaceAnd(__pyx_v_no_progress, __pyx_t_12); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 257, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n      __Pyx_DECREF_SET(__pyx_v_no_progress, __pyx_t_11);\n      __pyx_t_11 = 0;\n
+258:             if no_progress and np.abs(driving_force) > MAX_SOLVE_DRIVING_FORCE:
\n
      __Pyx_TraceLine(258,0,__PYX_ERR(0, 258, __pyx_L1_error))\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_v_no_progress); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(0, 258, __pyx_L1_error)\n      if (__pyx_t_6) {\n      } else {\n        __pyx_t_5 = __pyx_t_6;\n        goto __pyx_L119_bool_binop_done;\n      }\n      __pyx_t_12 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 258, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_12);\n      __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_abs); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 258, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n      __pyx_t_12 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n        __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_3);\n        if (likely(__pyx_t_12)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n          __Pyx_INCREF(__pyx_t_12);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_3, function);\n        }\n      }\n      if (!__pyx_t_12) {\n        __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_v_driving_force); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 258, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_3)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_12, __pyx_v_driving_force};\n          __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 258, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_12, __pyx_v_driving_force};\n          __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 258, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;\n          __Pyx_GOTREF(__pyx_t_11);\n        } else\n        #endif\n        {\n          __pyx_t_30 = PyTuple_New(1+1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 258, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_30);\n          __Pyx_GIVEREF(__pyx_t_12); PyTuple_SET_ITEM(__pyx_t_30, 0, __pyx_t_12); __pyx_t_12 = NULL;\n          __Pyx_INCREF(__pyx_v_driving_force);\n          __Pyx_GIVEREF(__pyx_v_driving_force);\n          PyTuple_SET_ITEM(__pyx_t_30, 0+1, __pyx_v_driving_force);\n          __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_30, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 258, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n          __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_MAX_SOLVE_DRIVING_FORCE); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 258, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_30 = PyObject_RichCompare(__pyx_t_11, __pyx_t_3, Py_GT); __Pyx_XGOTREF(__pyx_t_30); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 258, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_30); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(0, 258, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      __pyx_t_5 = __pyx_t_6;\n      __pyx_L119_bool_binop_done:;\n      if (__pyx_t_5) {\n/* \u2026 
*/\n      }\n
+259:                 print('Driving force failed to converge: {}'.format(cur_conds))
\n
        __Pyx_TraceLine(259,0,__PYX_ERR(0, 259, __pyx_L1_error))\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_kp_u_Driving_force_failed_to_converge, __pyx_n_s_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 259, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_11 = NULL;\n        if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {\n          __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_3);\n          if (likely(__pyx_t_11)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n            __Pyx_INCREF(__pyx_t_11);\n            __Pyx_INCREF(function);\n            __Pyx_DECREF_SET(__pyx_t_3, function);\n          }\n        }\n        if (!__pyx_t_11) {\n          __pyx_t_30 = __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_v_cur_conds); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 259, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_30);\n        } else {\n          #if CYTHON_FAST_PYCALL\n          if (PyFunction_Check(__pyx_t_3)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_11, __pyx_v_cur_conds};\n            __pyx_t_30 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 259, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0;\n            __Pyx_GOTREF(__pyx_t_30);\n          } else\n          #endif\n          #if CYTHON_FAST_PYCCALL\n          if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_11, __pyx_v_cur_conds};\n            __pyx_t_30 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 259, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0;\n            __Pyx_GOTREF(__pyx_t_30);\n          } else\n          #endif\n          {\n            __pyx_t_12 = PyTuple_New(1+1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 259, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_12);\n            __Pyx_GIVEREF(__pyx_t_11); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_11); __pyx_t_11 = NULL;\n            __Pyx_INCREF(__pyx_v_cur_conds);\n            __Pyx_GIVEREF(__pyx_v_cur_conds);\n            PyTuple_SET_ITEM(__pyx_t_12, 0+1, __pyx_v_cur_conds);\n            __pyx_t_30 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_12, NULL); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 259, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_30);\n            __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n          }\n        }\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 259, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_GIVEREF(__pyx_t_30);\n        PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_30);\n        __pyx_t_30 = 0;\n        __pyx_t_30 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_t_3, NULL); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 259, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n
+260:                 prop_MU_values[it.multi_index] = np.nan
\n
        __Pyx_TraceLine(260,0,__PYX_ERR(0, 260, __pyx_L1_error))\n        __pyx_t_30 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 260, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_30, __pyx_n_s_nan); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 260, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __pyx_t_30 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 260, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_MU_values, __pyx_t_30, __pyx_t_3) < 0)) __PYX_ERR(0, 260, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+261:                 prop_NP_values[it.multi_index] = np.nan
\n
        __Pyx_TraceLine(261,0,__PYX_ERR(0, 261, __pyx_L1_error))\n        __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 261, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_30 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_nan); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 261, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 261, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_NP_values, __pyx_t_3, __pyx_t_30) < 0)) __PYX_ERR(0, 261, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n
+262:                 prop_X_values[it.multi_index] = np.nan
\n
        __Pyx_TraceLine(262,0,__PYX_ERR(0, 262, __pyx_L1_error))\n        __pyx_t_30 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 262, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_30, __pyx_n_s_nan); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 262, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __pyx_t_30 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 262, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_X_values, __pyx_t_30, __pyx_t_3) < 0)) __PYX_ERR(0, 262, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+263:                 prop_Y_values[it.multi_index] = np.nan
\n
        __Pyx_TraceLine(263,0,__PYX_ERR(0, 263, __pyx_L1_error))\n        __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 263, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_30 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_nan); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 263, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 263, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_Y_values, __pyx_t_3, __pyx_t_30) < 0)) __PYX_ERR(0, 263, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n
+264:                 prop_GM_values[it.multi_index] = np.nan
\n
        __Pyx_TraceLine(264,0,__PYX_ERR(0, 264, __pyx_L1_error))\n        __pyx_t_30 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 264, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_30, __pyx_n_s_nan); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 264, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __pyx_t_30 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 264, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_GM_values, __pyx_t_30, __pyx_t_3) < 0)) __PYX_ERR(0, 264, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+265:                 prop_Phase_values[it.multi_index] = ''
\n
        __Pyx_TraceLine(265,0,__PYX_ERR(0, 265, __pyx_L1_error))\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 265, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_Phase_values, __pyx_t_3, __pyx_kp_u__3) < 0)) __PYX_ERR(0, 265, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+266:                 break
\n
        __Pyx_TraceLine(266,0,__PYX_ERR(0, 266, __pyx_L1_error))\n        goto __pyx_L31_break;\n
+267:             elif no_progress:
\n
      __Pyx_TraceLine(267,0,__PYX_ERR(0, 267, __pyx_L1_error))\n      __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_v_no_progress); if (unlikely(__pyx_t_5 < 0)) __PYX_ERR(0, 267, __pyx_L1_error)\n      if (__pyx_t_5) {\n/* \u2026 */\n      }\n
+268:                 if verbose:
\n
        __Pyx_TraceLine(268,0,__PYX_ERR(0, 268, __pyx_L1_error))\n        __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_v_verbose); if (unlikely(__pyx_t_5 < 0)) __PYX_ERR(0, 268, __pyx_L1_error)\n        if (__pyx_t_5) {\n/* \u2026 */\n        }\n
+269:                     print('No progress')
\n
          __Pyx_TraceLine(269,0,__PYX_ERR(0, 269, __pyx_L1_error))\n          __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__22, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 269, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n/* \u2026 */\n  __pyx_tuple__22 = PyTuple_Pack(1, __pyx_kp_u_No_progress); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(0, 269, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__22);\n  __Pyx_GIVEREF(__pyx_tuple__22);\n
+270:                 num_mass_bals = len([i for i in cur_conds.keys() if i.startswith('X_')]) + 1
\n
        __Pyx_TraceLine(270,0,__PYX_ERR(0, 270, __pyx_L1_error))\n        { /* enter inner scope */\n          PyObject *__pyx_8genexpr8__pyx_v_i = NULL;\n          __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 270, __pyx_L124_error)\n          __Pyx_GOTREF(__pyx_t_3);\n          __pyx_t_8 = 0;\n          if (unlikely(__pyx_v_cur_conds == Py_None)) {\n            PyErr_Format(PyExc_AttributeError, \"'NoneType' object has no attribute '%s'\", \"keys\");\n            __PYX_ERR(0, 270, __pyx_L124_error)\n          }\n          __pyx_t_12 = __Pyx_dict_iterator(__pyx_v_cur_conds, 0, __pyx_n_s_keys, (&__pyx_t_15), (&__pyx_t_17)); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 270, __pyx_L124_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          __Pyx_XDECREF(__pyx_t_30);\n          __pyx_t_30 = __pyx_t_12;\n          __pyx_t_12 = 0;\n          while (1) {\n            __pyx_t_36 = __Pyx_dict_iter_next(__pyx_t_30, __pyx_t_15, &__pyx_t_8, &__pyx_t_12, NULL, NULL, __pyx_t_17);\n            if (unlikely(__pyx_t_36 == 0)) break;\n            if (unlikely(__pyx_t_36 == -1)) __PYX_ERR(0, 270, __pyx_L124_error)\n            __Pyx_GOTREF(__pyx_t_12);\n            __Pyx_XDECREF_SET(__pyx_8genexpr8__pyx_v_i, __pyx_t_12);\n            __pyx_t_12 = 0;\n            __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr8__pyx_v_i, __pyx_n_s_startswith); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 270, __pyx_L124_error)\n            __Pyx_GOTREF(__pyx_t_12);\n            __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_12, __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 270, __pyx_L124_error)\n            __Pyx_GOTREF(__pyx_t_11);\n            __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n            __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely(__pyx_t_5 < 0)) __PYX_ERR(0, 270, __pyx_L124_error)\n            __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n            if (__pyx_t_5) {\n              if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_8genexpr8__pyx_v_i))) __PYX_ERR(0, 270, __pyx_L124_error)\n            }\n          }\n          __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n          __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_i);\n          goto __pyx_L128_exit_scope;\n          __pyx_L124_error:;\n          __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_i);\n          goto __pyx_L1_error;\n          __pyx_L128_exit_scope:;\n        } /* exit inner scope */\n        __pyx_t_15 = PyList_GET_SIZE(__pyx_t_3); if (unlikely(__pyx_t_15 == -1)) __PYX_ERR(0, 270, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_3 = PyInt_FromSsize_t((__pyx_t_15 + 1)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 270, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF_SET(__pyx_v_num_mass_bals, __pyx_t_3);\n        __pyx_t_3 = 0;\n/* \u2026 */\n  __pyx_tuple__23 = PyTuple_Pack(1, __pyx_n_u_X_2); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(0, 270, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__23);\n  __Pyx_GIVEREF(__pyx_tuple__23);\n
+271:                 chemical_potentials = l_multipliers[sum([len(dbf.phases[i].sublattices) for i in phases]):
\n
        __Pyx_TraceLine(271,0,__PYX_ERR(0, 271, __pyx_L1_error))\n        { /* enter inner scope */\n          PyObject *__pyx_8genexpr9__pyx_v_i = NULL;\n          __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 271, __pyx_L131_error)\n          __Pyx_GOTREF(__pyx_t_3);\n          if (likely(PyList_CheckExact(__pyx_v_phases)) || PyTuple_CheckExact(__pyx_v_phases)) {\n            __pyx_t_30 = __pyx_v_phases; __Pyx_INCREF(__pyx_t_30); __pyx_t_15 = 0;\n            __pyx_t_9 = NULL;\n          } else {\n            __pyx_t_15 = -1; __pyx_t_30 = PyObject_GetIter(__pyx_v_phases); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 271, __pyx_L131_error)\n            __Pyx_GOTREF(__pyx_t_30);\n            __pyx_t_9 = Py_TYPE(__pyx_t_30)->tp_iternext; if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 271, __pyx_L131_error)\n          }\n          for (;;) {\n            if (likely(!__pyx_t_9)) {\n              if (likely(PyList_CheckExact(__pyx_t_30))) {\n                if (__pyx_t_15 >= PyList_GET_SIZE(__pyx_t_30)) break;\n                #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n                __pyx_t_11 = PyList_GET_ITEM(__pyx_t_30, __pyx_t_15); __Pyx_INCREF(__pyx_t_11); __pyx_t_15++; if (unlikely(0 < 0)) __PYX_ERR(0, 271, __pyx_L131_error)\n                #else\n                __pyx_t_11 = PySequence_ITEM(__pyx_t_30, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 271, __pyx_L131_error)\n                __Pyx_GOTREF(__pyx_t_11);\n                #endif\n              } else {\n                if (__pyx_t_15 >= PyTuple_GET_SIZE(__pyx_t_30)) break;\n                #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n                __pyx_t_11 = PyTuple_GET_ITEM(__pyx_t_30, __pyx_t_15); __Pyx_INCREF(__pyx_t_11); __pyx_t_15++; if (unlikely(0 < 0)) __PYX_ERR(0, 271, __pyx_L131_error)\n                #else\n                __pyx_t_11 = PySequence_ITEM(__pyx_t_30, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 271, __pyx_L131_error)\n                __Pyx_GOTREF(__pyx_t_11);\n                #endif\n              }\n            } else {\n              __pyx_t_11 = __pyx_t_9(__pyx_t_30);\n              if (unlikely(!__pyx_t_11)) {\n                PyObject* exc_type = PyErr_Occurred();\n                if (exc_type) {\n                  if (likely(exc_type == PyExc_StopIteration || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n                  else __PYX_ERR(0, 271, __pyx_L131_error)\n                }\n                break;\n              }\n              __Pyx_GOTREF(__pyx_t_11);\n            }\n            __Pyx_XDECREF_SET(__pyx_8genexpr9__pyx_v_i, __pyx_t_11);\n            __pyx_t_11 = 0;\n            __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_dbf, __pyx_n_s_phases); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 271, __pyx_L131_error)\n            __Pyx_GOTREF(__pyx_t_11);\n            __pyx_t_12 = PyObject_GetItem(__pyx_t_11, __pyx_8genexpr9__pyx_v_i); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 271, __pyx_L131_error)\n            __Pyx_GOTREF(__pyx_t_12);\n            __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n            __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_sublattices); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 271, __pyx_L131_error)\n            __Pyx_GOTREF(__pyx_t_11);\n            __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n            __pyx_t_8 = PyObject_Length(__pyx_t_11); if (unlikely(__pyx_t_8 == -1)) __PYX_ERR(0, 271, __pyx_L131_error)\n            
__Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n            __pyx_t_11 = PyInt_FromSsize_t(__pyx_t_8); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 271, __pyx_L131_error)\n            __Pyx_GOTREF(__pyx_t_11);\n            if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_t_11))) __PYX_ERR(0, 271, __pyx_L131_error)\n            __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n          }\n          __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n          __Pyx_XDECREF(__pyx_8genexpr9__pyx_v_i);\n          goto __pyx_L134_exit_scope;\n          __pyx_L131_error:;\n          __Pyx_XDECREF(__pyx_8genexpr9__pyx_v_i);\n          goto __pyx_L1_error;\n          __pyx_L134_exit_scope:;\n        } /* exit inner scope */\n        __pyx_t_30 = PyTuple_New(1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 271, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_GIVEREF(__pyx_t_3);\n        PyTuple_SET_ITEM(__pyx_t_30, 0, __pyx_t_3);\n        __pyx_t_3 = 0;\n        __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_sum, __pyx_t_30, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 271, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n/* \u2026 */\n        __Pyx_TraceLine(271,0,__PYX_ERR(0, 271, __pyx_L1_error))\n        __pyx_t_30 = __Pyx_PyObject_GetSlice(((PyObject *)__pyx_v_l_multipliers), 0, 0, &__pyx_t_3, &__pyx_t_11, NULL, 0, 0, 1); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 271, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __Pyx_DECREF_SET(__pyx_v_chemical_potentials, __pyx_t_30);\n        __pyx_t_30 = 0;\n
+272:                                                     sum([len(dbf.phases[i].sublattices) for i in phases]) + num_mass_bals]
\n
        __Pyx_TraceLine(272,0,__PYX_ERR(0, 272, __pyx_L1_error))\n        { /* enter inner scope */\n          PyObject *__pyx_9genexpr10__pyx_v_i = NULL;\n          __pyx_t_30 = PyList_New(0); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 272, __pyx_L137_error)\n          __Pyx_GOTREF(__pyx_t_30);\n          if (likely(PyList_CheckExact(__pyx_v_phases)) || PyTuple_CheckExact(__pyx_v_phases)) {\n            __pyx_t_11 = __pyx_v_phases; __Pyx_INCREF(__pyx_t_11); __pyx_t_15 = 0;\n            __pyx_t_9 = NULL;\n          } else {\n            __pyx_t_15 = -1; __pyx_t_11 = PyObject_GetIter(__pyx_v_phases); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 272, __pyx_L137_error)\n            __Pyx_GOTREF(__pyx_t_11);\n            __pyx_t_9 = Py_TYPE(__pyx_t_11)->tp_iternext; if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 272, __pyx_L137_error)\n          }\n          for (;;) {\n            if (likely(!__pyx_t_9)) {\n              if (likely(PyList_CheckExact(__pyx_t_11))) {\n                if (__pyx_t_15 >= PyList_GET_SIZE(__pyx_t_11)) break;\n                #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n                __pyx_t_12 = PyList_GET_ITEM(__pyx_t_11, __pyx_t_15); __Pyx_INCREF(__pyx_t_12); __pyx_t_15++; if (unlikely(0 < 0)) __PYX_ERR(0, 272, __pyx_L137_error)\n                #else\n                __pyx_t_12 = PySequence_ITEM(__pyx_t_11, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 272, __pyx_L137_error)\n                __Pyx_GOTREF(__pyx_t_12);\n                #endif\n              } else {\n                if (__pyx_t_15 >= PyTuple_GET_SIZE(__pyx_t_11)) break;\n                #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n                __pyx_t_12 = PyTuple_GET_ITEM(__pyx_t_11, __pyx_t_15); __Pyx_INCREF(__pyx_t_12); __pyx_t_15++; if (unlikely(0 < 0)) __PYX_ERR(0, 272, __pyx_L137_error)\n                #else\n                __pyx_t_12 = PySequence_ITEM(__pyx_t_11, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 272, __pyx_L137_error)\n                __Pyx_GOTREF(__pyx_t_12);\n                #endif\n              }\n            } else {\n              __pyx_t_12 = __pyx_t_9(__pyx_t_11);\n              if (unlikely(!__pyx_t_12)) {\n                PyObject* exc_type = PyErr_Occurred();\n                if (exc_type) {\n                  if (likely(exc_type == PyExc_StopIteration || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n                  else __PYX_ERR(0, 272, __pyx_L137_error)\n                }\n                break;\n              }\n              __Pyx_GOTREF(__pyx_t_12);\n            }\n            __Pyx_XDECREF_SET(__pyx_9genexpr10__pyx_v_i, __pyx_t_12);\n            __pyx_t_12 = 0;\n            __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_v_dbf, __pyx_n_s_phases); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 272, __pyx_L137_error)\n            __Pyx_GOTREF(__pyx_t_12);\n            __pyx_t_2 = PyObject_GetItem(__pyx_t_12, __pyx_9genexpr10__pyx_v_i); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 272, __pyx_L137_error)\n            __Pyx_GOTREF(__pyx_t_2);\n            __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n            __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_sublattices); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 272, __pyx_L137_error)\n            __Pyx_GOTREF(__pyx_t_12);\n            __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n            __pyx_t_8 = PyObject_Length(__pyx_t_12); if (unlikely(__pyx_t_8 == -1)) __PYX_ERR(0, 272, __pyx_L137_error)\n            
__Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n            __pyx_t_12 = PyInt_FromSsize_t(__pyx_t_8); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 272, __pyx_L137_error)\n            __Pyx_GOTREF(__pyx_t_12);\n            if (unlikely(__Pyx_ListComp_Append(__pyx_t_30, (PyObject*)__pyx_t_12))) __PYX_ERR(0, 272, __pyx_L137_error)\n            __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n          }\n          __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n          __Pyx_XDECREF(__pyx_9genexpr10__pyx_v_i);\n          goto __pyx_L140_exit_scope;\n          __pyx_L137_error:;\n          __Pyx_XDECREF(__pyx_9genexpr10__pyx_v_i);\n          goto __pyx_L1_error;\n          __pyx_L140_exit_scope:;\n        } /* exit inner scope */\n        __pyx_t_11 = PyTuple_New(1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 272, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_GIVEREF(__pyx_t_30);\n        PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_30);\n        __pyx_t_30 = 0;\n        __pyx_t_30 = __Pyx_PyObject_Call(__pyx_builtin_sum, __pyx_t_11, NULL); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 272, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __pyx_t_11 = PyNumber_Add(__pyx_t_30, __pyx_v_num_mass_bals); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 272, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n
+273:                 prop_MU_values[it.multi_index] = chemical_potentials
\n
        __Pyx_TraceLine(273,0,__PYX_ERR(0, 273, __pyx_L1_error))\n        __pyx_t_30 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 273, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_30);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_MU_values, __pyx_t_30, __pyx_v_chemical_potentials) < 0)) __PYX_ERR(0, 273, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n
+274:                 break
\n
        __Pyx_TraceLine(274,0,__PYX_ERR(0, 274, __pyx_L1_error))\n        goto __pyx_L31_break;\n
+275:             elif (not no_progress) and cur_iter == MAX_SOLVE_ITERATIONS-1:
\n
      __Pyx_TraceLine(275,0,__PYX_ERR(0, 275, __pyx_L1_error))\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_v_no_progress); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(0, 275, __pyx_L1_error)\n      __pyx_t_46 = ((!__pyx_t_6) != 0);\n      if (__pyx_t_46) {\n      } else {\n        __pyx_t_5 = __pyx_t_46;\n        goto __pyx_L141_bool_binop_done;\n      }\n      __pyx_t_30 = __Pyx_PyInt_From_int(__pyx_v_cur_iter); if (unlikely(!__pyx_t_30)) __PYX_ERR(0, 275, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_30);\n      __pyx_t_11 = __Pyx_GetModuleGlobalName(__pyx_n_s_MAX_SOLVE_ITERATIONS); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 275, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_11);\n      __pyx_t_3 = __Pyx_PyInt_SubtractObjC(__pyx_t_11, __pyx_int_1, 1, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 275, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __pyx_t_11 = PyObject_RichCompare(__pyx_t_30, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 275, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_30); __pyx_t_30 = 0;\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_46 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely(__pyx_t_46 < 0)) __PYX_ERR(0, 275, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n      __pyx_t_5 = __pyx_t_46;\n      __pyx_L141_bool_binop_done:;\n      if (__pyx_t_5) {\n/* \u2026 */\n      }\n    }\n    __pyx_L31_break:;\n
+276:                 print('Failed to converge: {}'.format(cur_conds))
\n
        __Pyx_TraceLine(276,0,__PYX_ERR(0, 276, __pyx_L1_error))\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_kp_u_Failed_to_converge, __pyx_n_s_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 276, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_30 = NULL;\n        if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {\n          __pyx_t_30 = PyMethod_GET_SELF(__pyx_t_3);\n          if (likely(__pyx_t_30)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n            __Pyx_INCREF(__pyx_t_30);\n            __Pyx_INCREF(function);\n            __Pyx_DECREF_SET(__pyx_t_3, function);\n          }\n        }\n        if (!__pyx_t_30) {\n          __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_v_cur_conds); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 276, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_11);\n        } else {\n          #if CYTHON_FAST_PYCALL\n          if (PyFunction_Check(__pyx_t_3)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_30, __pyx_v_cur_conds};\n            __pyx_t_11 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 276, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_30); __pyx_t_30 = 0;\n            __Pyx_GOTREF(__pyx_t_11);\n          } else\n          #endif\n          #if CYTHON_FAST_PYCCALL\n          if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_30, __pyx_v_cur_conds};\n            __pyx_t_11 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 276, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_30); __pyx_t_30 = 0;\n            __Pyx_GOTREF(__pyx_t_11);\n          } else\n          #endif\n          {\n            __pyx_t_12 = PyTuple_New(1+1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 276, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_12);\n            __Pyx_GIVEREF(__pyx_t_30); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_30); __pyx_t_30 = NULL;\n            __Pyx_INCREF(__pyx_v_cur_conds);\n            __Pyx_GIVEREF(__pyx_v_cur_conds);\n            PyTuple_SET_ITEM(__pyx_t_12, 0+1, __pyx_v_cur_conds);\n            __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_12, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 276, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_11);\n            __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n          }\n        }\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 276, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_GIVEREF(__pyx_t_11);\n        PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_11);\n        __pyx_t_11 = 0;\n        __pyx_t_11 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_t_3, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 276, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n
+277:                 prop_MU_values[it.multi_index] = np.nan
\n
        __Pyx_TraceLine(277,0,__PYX_ERR(0, 277, __pyx_L1_error))\n        __pyx_t_11 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 277, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_nan); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 277, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 277, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_MU_values, __pyx_t_11, __pyx_t_3) < 0)) __PYX_ERR(0, 277, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+278:                 prop_NP_values[it.multi_index] = np.nan
\n
        __Pyx_TraceLine(278,0,__PYX_ERR(0, 278, __pyx_L1_error))\n        __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 278, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_nan); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 278, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 278, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_NP_values, __pyx_t_3, __pyx_t_11) < 0)) __PYX_ERR(0, 278, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n
+279:                 prop_X_values[it.multi_index] = np.nan
\n
        __Pyx_TraceLine(279,0,__PYX_ERR(0, 279, __pyx_L1_error))\n        __pyx_t_11 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 279, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_nan); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 279, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 279, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_X_values, __pyx_t_11, __pyx_t_3) < 0)) __PYX_ERR(0, 279, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+280:                 prop_Y_values[it.multi_index] = np.nan
\n
        __Pyx_TraceLine(280,0,__PYX_ERR(0, 280, __pyx_L1_error))\n        __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 280, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_nan); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 280, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 280, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_Y_values, __pyx_t_3, __pyx_t_11) < 0)) __PYX_ERR(0, 280, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n
+281:                 prop_GM_values[it.multi_index] = np.nan
\n
        __Pyx_TraceLine(281,0,__PYX_ERR(0, 281, __pyx_L1_error))\n        __pyx_t_11 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 281, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_nan); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 281, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 281, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_11);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_GM_values, __pyx_t_11, __pyx_t_3) < 0)) __PYX_ERR(0, 281, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+282:                 prop_Phase_values[it.multi_index] = ''
\n
        __Pyx_TraceLine(282,0,__PYX_ERR(0, 282, __pyx_L1_error))\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_multi_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 282, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        if (unlikely(PyObject_SetItem(__pyx_v_prop_Phase_values, __pyx_t_3, __pyx_kp_u__3) < 0)) __PYX_ERR(0, 282, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n
+283:         it.iternext()
\n
    __Pyx_TraceLine(283,0,__PYX_ERR(0, 283, __pyx_L1_error))\n    __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_it, __pyx_n_s_iternext); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 283, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_11);\n    __pyx_t_12 = NULL;\n    if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_11))) {\n      __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_11);\n      if (likely(__pyx_t_12)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_11);\n        __Pyx_INCREF(__pyx_t_12);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_11, function);\n      }\n    }\n    if (__pyx_t_12) {\n      __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_11, __pyx_t_12); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 283, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n    } else {\n      __pyx_t_3 = __Pyx_PyObject_CallNoArg(__pyx_t_11); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 283, __pyx_L1_error)\n    }\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_L3_continue:;\n  }\n
+284:     return properties
\n
  __Pyx_TraceLine(284,0,__PYX_ERR(0, 284, __pyx_L1_error))\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_properties);\n  __pyx_r = __pyx_v_properties;\n  goto __pyx_L0;\n
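\n\nThe next cell profiles `_solve_eq_at_conditions` line by line with IPython's `%lprun` magic. As a minimal setup sketch (assuming the `line_profiler` package is installed in this environment), the extension has to be loaded once before `%lprun` is available:\n\n\n```python\n# Load the IPython extension that provides the %lprun magic used in the next cell.\n# Assumes line_profiler is installed (e.g. pip install line_profiler).\n%load_ext line_profiler\n```\n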
\n\n\n\n\n```python\n%lprun -f _solve_eq_at_conditions equilibrium(dbf, ['AL', 'FE', 'VA'], list(dbf.phases.keys()), {v.T: 700, v.X('AL'): (0,1,0.02), v.P: 101325}, model=models, solve_eq_at_conditions=_solve_eq_at_conditions)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "9b9edb92a41123cdf5c658be1aaf878ab40cdfde", "size": 806655, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "EqPerformance-solveq.ipynb", "max_stars_repo_name": "richardotis/pycalphad-sandbox", "max_stars_repo_head_hexsha": "43d8786eee8f279266497e9c5f4630d19c893092", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-03-08T18:21:30.000Z", "max_stars_repo_stars_event_max_datetime": "2017-03-08T18:21:30.000Z", "max_issues_repo_path": "EqPerformance-solveq.ipynb", "max_issues_repo_name": "richardotis/pycalphad-sandbox", "max_issues_repo_head_hexsha": "43d8786eee8f279266497e9c5f4630d19c893092", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EqPerformance-solveq.ipynb", "max_forks_repo_name": "richardotis/pycalphad-sandbox", "max_forks_repo_head_hexsha": "43d8786eee8f279266497e9c5f4630d19c893092", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-11-03T01:31:57.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-03T01:31:57.000Z", "avg_line_length": 98.6613258317, "max_line_length": 1703, "alphanum_fraction": 0.6108237103, "converted": true, "num_tokens": 244089, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.6224593452091672, "lm_q2_score": 0.4649015713733885, "lm_q1q2_score": 0.28938232770379235}} {"text": "## Introduction \n\nThis notebook will go over optimization of a single response variable using the MRUplift Framework. It will go over:\n\n1. The Business Problem and Data Generating Process\n\n2. Building / Gridsearching an uplift model \n\n3. Evaluating Model with out-of-sample ERUPT metric\n\n4. Assigning Optimal Treatments for new observations \n\n\n```python\nimport numpy as np\nimport pandas as pd\n\nfrom mr_uplift.dataset.data_simulation import get_simple_uplift_data\nfrom mr_uplift.mr_uplift import MRUplift\nfrom ggplot import *\n```\n\n /Users/samweiss/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n Using TensorFlow backend.\n /Users/samweiss/anaconda3/lib/python3.6/site-packages/ggplot/utils.py:81: FutureWarning: pandas.tslib is deprecated and will be removed in a future version.\n You can access Timestamp as pandas.Timestamp\n pd.tslib.Timestamp,\n /Users/samweiss/anaconda3/lib/python3.6/site-packages/ggplot/stats/smoothers.py:4: FutureWarning: The pandas.lib module is deprecated and will be removed in a future version. These are private functions and can be accessed from pandas._libs.lib instead\n from pandas.lib import Timestamp\n /Users/samweiss/anaconda3/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. 
Please use the pandas.tseries module instead.\n from pandas.core import datetools\n\n\n\n\n\n### Business Problem\n\nImagine we are data scientists working for a startup that would like to be more profitibile. As a tactic to increase user activity the company gives all users an expensive bonus (referred to as the treatment). In order to reduce costs we were assigned the task of using data to find a subset of users that should continue receiving the costly treatment. \n\nWe are given explanatory variables for users $X$, a random treatment of whether a users recieved the treatment or not $T$, and response variable of profitibility $y$. \n\nWe can use uplift models and the IbottaUplift package specifically to find users who should receive the treatment.\n\n### Uplift Problem Setup\nThe general setup for a lift model is:\n \n$y$: Response variable of interest you\u2019d like to maximize. Here it is profitibility.\n\n$X$: User level covariates. Includes things like previous activity per user.\n\n$T$: The randomly assigned treatment. In this case it is whether or not to give a bonus to a particular user and is binary. Assume that the distribution and assignment of a treatment is uniform and random.\n\nWith the data $(y, X, T)$ the goal is to build a treatment assignment policy \ud835\udf0b(x) that will use $X$ to assign $T$ that maximizes the value of $y$. Or in this case we want to use user history to assign whether to give a bonus to a user in order to maximize profit.\n\nA frequent practice is to model the expected outcome $y_i$ under different treatments and choose the treatment $T$ that maximizes $y_i$ for each user.\n\n\n\\begin{equation}\n \\pi(x_i) =argmax \\:_{t \\in T} E[y_i | X=x_i, T=t]\n\\end{equation}\n\n\nThere are several approaches to do this and can be done with a run of the mill ML algorithm that incorporates interactions. IbottaUplift uses a neural network. \n\nTo get the counterfactual for each treatment one needs to predict with different values of $t$. This calculation is closely related to to creating an [ICE](https://arxiv.org/pdf/1309.6392.pdf) plot with the treatment variable.\n\n### Data Generating Process \n\n\nBelow is the data generating process of the data we are given. \n\n\\begin{equation}\nx_1 \\sim runif(0,1)\n\\end{equation}\n\\begin{equation}\nx_2 \\sim runif(0,1)\n\\end{equation}\n\\begin{equation}\ne_1 \\sim rnorm(0,1)\n\\end{equation}\n\\begin{equation}\ne_2 \\sim rnorm(0,1)\n\\end{equation}\n\\begin{equation}\nt \\sim rbinom(.5)\n\\end{equation}\n\n\\begin{equation}\nrevenue = x_1*t + e_1\n\\end{equation}\n\\begin{equation}\ncosts = x_2*t + e_2\n\\end{equation}\n\\begin{equation}\nprofit = revenue - costs\n\\end{equation}\n\n(In this problem we are interested in only the response variable $profit$)\n\n\n```python\ny, x, t = get_simple_uplift_data(10000)\n\ny = pd.DataFrame(y)\ny.columns = ['revenue','cost', 'noise']\ny['profit'] = y['revenue'] - y['cost']\n```\n\n### Model Building / Gridsearch\nAfter instantiating the MRUplift class the `.fit` function will build the model. It first seperates the data into a train / test split and builds standard scaler transformerd on all variables $x, y, t$.\n\nThen it builds and runs grisdesarch using neural network model that minimizes the mean squared error of the form $y = f(t,x)$. The user can input a custom parameter grid. 
\n\n\n\n```python\nuplift_model = MRUplift()\nparam_grid = dict(num_nodes=[8], dropout=[.1, .5], activation=[\n 'relu'], num_layers=[1, 2], epochs=[25], batch_size=[30])\n\n\nuplift_model.fit(x, y[['profit']], t.reshape(-1,1), param_grid = param_grid, n_jobs = 1)\n```\n\n /Users/samweiss/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py:475: DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.\n warnings.warn(msg, DataConversionWarning)\n\n\n### Expected Response Under Proposed Treatments (ERUPT) Metric\n\nAfter gridsearching we want to know how much better the model is than the current state. Evaluating the out-of-sample importance is key to ensure the model will perform as intended in production. While there are other metrics such as the Qini metric they are usually limited to single treatment case. ERUPT is the only metric I'm aware of that can be applied to multiple treatments and provides an unbiased estimate what would happen if the model were applied.\n\n#### ERUPT\nSuppose you have an observation where \ud835\udf0b(x) proposes a treatment of not giving bonus and the randomly assigned treatment was given a bonus. Since these do not align it\u2019s not clear we can say anything about it.\n\nHowever, if the optimal treatment for a model is equal to the assigned treatment we can include that observation in our proposed treatment examples. We go through this exercise for all observations and calculate the response mean for only those where the \ud835\udf0b(x) = assigned treatment. This is our estimated value of y under the model! Mathematically it is:\n\n$$\\frac{\\sum_i y_i I(\\pi(x_i) = t_i)} {\\sum_i I(\\pi(x_i)=t_i)}$$\n\nNote that this formula assumes the treatments distirbution is uniform (same number for each treatment) and randomly assigned. The functionality in this package does not require uniform treatments but does require them to be randomly assigned.\n\nFor further information please my blog post [here](https://medium.com/building-ibotta/erupt-expected-response-under-proposed-treatments-ff7dd45c84b4).\n\n### Evaluating Model with out-of-sample ERUPT metric\nUsing the test dataset MRUplift will then evaluate the model using the ERUPT metric. This functionality gives the model builder insight into whether or not the model performs well out of sample. \n\nIt outputs two dataframes:\n\n1) The first dataframe shows the ERUPT metric and standard deviation for the model assignment. In this example it tells us the expected profit if we were to use this model. In addition we can also see a 'random' row under the assignment column. This uses the same distribution as $\\pi(x)$ but shuffles the treatments so as to make it a random assignment. Looking at the difference between the Model and Random assignments should tell us if the model is learning the individual treatment effects well. \n\nBelow we can see that the model performs much better than the randomized treatments suggesting the model learned the heterogenity of the treatment effects well. If we deployed the model we expect to see profit to be ~ 0.16.\n\n2) The dataframe shows the distribution of treatments under the optimal assignment. In this example we can see about half are assigned the treatment and half are not. 
\n\n\n\n\n\n```python\nerupt_curves, dists = uplift_model.get_erupt_curves()\nerupt_curves\n```\n\n /Users/samweiss/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py:475: DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.\n warnings.warn(msg, DataConversionWarning)\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
meanstdresponse_var_namesweightsassignment
00.1618560.004738profit1model
0-0.0067520.005352profit1random
\n
\n\n\n\n\n```python\ndists\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
num_observationstmtweightspercent_tmt
03646010.520857
13354110.479143
\n
\n\n\n\n### Assigning Optimal Treatments for New Observations\nAfter building and evaluating an uplift model the modeler may deem it worthy of production. To assign new users the optimal treatment one can use the `predict_optimal_treatments` function as shown below.\n\n\n\n\n```python\n#generate 5 new observation\n_, x_new ,_ = get_simple_uplift_data(5)\nuplift_model.predict_optimal_treatments(x_new)\n```\n\n /Users/samweiss/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py:475: DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.\n warnings.warn(msg, DataConversionWarning)\n\n\n\n\n\n array([[0],\n [1],\n [0],\n [1],\n [0]])\n\n\n", "meta": {"hexsha": "612b46fee743e2dca4cf67495bc65550ef6883d1", "size": 15935, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/mr_uplift_one_response_example.ipynb", "max_stars_repo_name": "stephenfenech/mr_uplift", "max_stars_repo_head_hexsha": "df95abd88e317f24f7857ba8f79aff0b93a22d1b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-04-22T16:57:55.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-02T00:21:13.000Z", "max_issues_repo_path": "examples/mr_uplift_one_response_example.ipynb", "max_issues_repo_name": "stephenfenech/mr_uplift", "max_issues_repo_head_hexsha": "df95abd88e317f24f7857ba8f79aff0b93a22d1b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-05-01T18:15:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T07:26:18.000Z", "max_forks_repo_path": "examples/mr_uplift_one_response_example.ipynb", "max_forks_repo_name": "stephenfenech/mr_uplift", "max_forks_repo_head_hexsha": "df95abd88e317f24f7857ba8f79aff0b93a22d1b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-04-25T08:41:34.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-08T11:21:23.000Z", "avg_line_length": 37.4941176471, "max_line_length": 511, "alphanum_fraction": 0.5793536241, "converted": true, "num_tokens": 2734, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.6224593312018545, "lm_q2_score": 0.46490157137338844, "lm_q1q2_score": 0.2893823211917706}} {"text": "\n\n# \u30e1\u30e2\n\n\n\n# networkx \u3068 matplotlib\n\n\u30b9\u30bf\u30d6\u306e\u307f\u3002\n\n# \u30b5\u30a4\u30f3\u30fb\u30ab\u30fc\u30d6\n\n\n```python\n# sin\u30ab\u30fc\u30d6\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nx = np.arange(0,10,0.2)\ny = np.sin(x)\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot(x,y)\nplt.show\n\n```\n\n\n```python\n# sin\u30ab\u30fc\u30d6 # need kernel restart\n%matplotlib notebook \nimport numpy as np\nimport matplotlib.pyplot as plt\nx = np.arange(0,10,0.2)\ny = np.sin(x)\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot(x,y)\nplt.show\n```\n\n\n \n\n\n\n
\n\n\n\n\n\n \n\n\n\n\n \n\n\n\n
\n\n\n# nbviewer.jupyter.org\ngithub\u3067\u306f\u8868\u793a\u3055\u308c\u306a\u3044\u30b0\u30e9\u30d5\u304c\u8868\u793a\u3055\u308c\u308b \n\u3068\u304b \nbokeh\u306fjavascript\u306b\u4f9d\u5b58\u3057\u3066\u3044\u3066\u3001github\u306fjavascript\u306f\u52d5\u4f5c\u3057\u306a\u3044\u3002 \n\u306a\u308b\u307b\u3069\n\n# pandas\ngit clone https://github.com/practical-jupyter/sample-data.git\n\n\n```python\nimport os\nbase_url = 'https://raw.githubusercontent.com/practical-jupyter/sample-data/master/anime/'\nanime_csv = os.path.join(base_url, 'anime.csv')\nanime_csv\n```\n\n\n\n\n 'https://raw.githubusercontent.com/practical-jupyter/sample-data/master/anime/anime.csv'\n\n\n\n\n```python\nimport pandas as pd\nanime_csv = os.path.join(base_url, 'anime.csv')\npd.read_csv(anime_csv).head()\n\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
anime_idnamegenretypeepisodesratingmembers
032281Kimi no Na wa.Drama, Romance, School, SupernaturalMovie19.37200630
15114Fullmetal Alchemist: BrotherhoodAction, Adventure, Drama, Fantasy, Magic, Mili...TV649.26793665
228977Gintama\u00b0Action, Comedy, Historical, Parody, Samurai, S...TV519.25114262
39253Steins;GateSci-Fi, ThrillerTV249.17673572
49969Gintama&#039;Action, Comedy, Historical, Parody, Samurai, S...TV519.16151266
\n
\n\n\n\n\n```python\n# \u524d\u51e6\u7406\u306e\u7d50\u679c\nanime_master_csv = os.path.join(base_url, 'anime_master.csv')\npd.read_csv(anime_master_csv).head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
anime_idnamegenretypeepisodesratingmembers
032281Kimi no Na wa.Drama, Romance, School, SupernaturalMovie19.37200630
15114Fullmetal Alchemist: BrotherhoodAction, Adventure, Drama, Fantasy, Magic, Mili...TV649.26793665
228977Gintama\u00b0Action, Comedy, Historical, Parody, Samurai, S...TV519.25114262
39253Steins;GateSci-Fi, ThrillerTV249.17673572
49969Gintama'Action, Comedy, Historical, Parody, Samurai, S...TV519.16151266
\n
\n\n\n\n\n```python\n# \u30b8\u30e3\u30f3\u30eb\u5225\u306b\u52a0\u5de5\nanime_split_genre_csv = os.path.join(base_url, 'anime_split_genre.csv')\npd.read_csv(anime_split_genre_csv).head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
anime_idnamegenretypeepisodesratingmembers
020707\"0\"MusicMusic15.061170
125627\"Aesop\" no Ohanashi yori: Ushi to Kaeru, Yokub...KidsMovie15.00113
27669\"Bungaku Shoujo\" Kyou no Oyatsu: HatsukoiComedyOVA17.0614351
37669\"Bungaku Shoujo\" Kyou no Oyatsu: HatsukoiSchoolOVA17.0614351
47669\"Bungaku Shoujo\" Kyou no Oyatsu: HatsukoiFantasyOVA17.0614351
\n
\n\n\n\n\n```python\n# \u30e1\u30f3\u30d0\u6570\u4e0a\u4f4d10\nanime_genre_top10_csv = os.path.join(base_url, 'anime_genre_top10.csv')\npd.read_csv(anime_genre_top10_csv).head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
anime_idnamegenretypeepisodesratingmembers
07669\"Bungaku Shoujo\" Kyou no Oyatsu: HatsukoiComedyOVA17.0614351
17669\"Bungaku Shoujo\" Kyou no Oyatsu: HatsukoiSchoolOVA17.0614351
27669\"Bungaku Shoujo\" Kyou no Oyatsu: HatsukoiFantasyOVA17.0614351
38481\"Bungaku Shoujo\" MemoireSchoolOVA37.5418013
48481\"Bungaku Shoujo\" MemoireDramaOVA37.5418013
\n
\n\n\n\n\n```python\n# \u30af\u30ed\u30b9\u96c6\u8a08\nanime_genre_top10_pivoted_csv = os.path.join(base_url, 'anime_genre_top10_pivoted.csv')\npd.read_csv(anime_genre_top10_pivoted_csv).head()\n\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
genreMovieMusicONAOVASpecialTV
0Comedy7293127.020860.01477266.05614758.06659293.065420862.0
1Action10224960.077054.0524907.05793680.03412689.063364032.0
2Drama9034099.0100734.0188427.03043374.01915578.041011557.0
3Romance5245386.042811.0411331.03143167.02015820.040703388.0
4Supernatural5452779.09189.0192989.02696715.02336723.038956520.0
\n
\n\n\n\n\n```python\n# \u682a\u4fa1\u30c7\u30fc\u30bf\nanime_stock_price_csv = os.path.join(base_url, 'anime_stock_price.csv')\npd.read_csv(anime_stock_price_csv, index_col=0, parse_dates=['Date']).head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
TOEI ANIMATIONIG Port
Date
2015-01-013356.861201.51
2015-01-023356.861201.51
2015-01-053396.121218.44
2015-01-063361.771201.51
2015-01-073297.971202.51
\n
\n\n\n\n\n```python\n# \u9a30\u843d\u7387\nanime_stock_returns_csv = os.path.join(base_url, 'anime_stock_returns.csv')\npd.read_csv(anime_stock_returns_csv, index_col=0, parse_dates=['Date']).head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
TOEI ANIMATIONIG Port
Date
2015-01-011.0000001.000000
2015-01-021.0000001.000000
2015-01-051.0116951.014082
2015-01-061.0014631.000000
2015-01-070.9824571.000824
\n
\n\n\n\n\n```python\n# 4\u672c\u5024\nt4816_csv = os.path.join(base_url, '4816.csv')\npd.read_csv(t4816_csv, index_col=0, parse_dates=['Date']).head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
OpenHighLowCloseVolume
Date
2010-01-041600.01600.01580.01597.05600.0
2010-01-051597.01605.01590.01600.014800.0
2010-01-061600.01602.01579.01601.08300.0
2010-01-071600.01600.01590.01595.03700.0
2010-01-081599.01601.01595.01600.032300.0
\n
\n\n\n\n# \u6570\u5217 series\n\n\n```python\nx = symbols('x')\na = x**3\na\n```\n\n\n```python\nfrom sympy import *\ndegree(a,x)\n```\n", "meta": {"hexsha": "fe05ec89581854bda6247aec16f83b664f07f3e8", "size": 141941, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "mathwithpython03.ipynb", "max_stars_repo_name": "kalz2q/-yjupyternotebooks", "max_stars_repo_head_hexsha": "ba37ac7822543b830fe8602b3f611bb617943463", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-16T03:45:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-16T03:45:19.000Z", "max_issues_repo_path": "mathwithpython03.ipynb", "max_issues_repo_name": "kalz2q/-yjupyternotebooks", "max_issues_repo_head_hexsha": "ba37ac7822543b830fe8602b3f611bb617943463", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mathwithpython03.ipynb", "max_forks_repo_name": "kalz2q/-yjupyternotebooks", "max_forks_repo_head_hexsha": "ba37ac7822543b830fe8602b3f611bb617943463", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.2980155918, "max_line_length": 18970, "alphanum_fraction": 0.4610648086, "converted": true, "num_tokens": 5238, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5389832206876841, "lm_q2_score": 0.5350984286266115, "lm_q1q2_score": 0.28840907444608993}} {"text": "# Introduction to RLlib\n\nRLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. \nThe following introduction was adapted from [rllib_exercises](https://github.com/ray-project/tutorial/blob/master/rllib_exercises/rllib_colab.ipynb).\n\nFor more information about RLlib and its open source community:\n\n - [documentation](https://ray.readthedocs.io/en/latest/rllib.html)\n - [GitHub repo](https://github.com/ray-project/ray/tree/master/rllib#rllib-scalable-reinforcement-learning)\n - [project board](https://github.com/ray-project/ray/projects/6)\n - [Slack sign-up](https://forms.gle/9TSdDYUgxYs8SA9e8)\n - [Twitter](https://twitter.com/raydistributed)\n\n## Install Dependencies\n\nFirst, install the necessary dependencies before beginning the exercises:\n\n\n```python\n!pip install ray[rllib]\n!pip install ray[debug]\n!pip install ray[tune]\n!pip install pandas\n!pip install requests\n!pip install tensorflow\n```\n\n## RLlib: Markov Decision Processes\n\n**GOAL:** The goal of the exercise is to introduce the Markov Decision Process abstraction and to show its use in Python.\n\n**The key abstraction in reinforcement learning is the Markov Decision Process (MDP).**\nAn MDP models sequential interactions with an external environment. It consists of the following:\n\n- a **state space**\n- a set of **actions**\n- a **transition function** which describes the probability of being in a state $s'$ at time $t+1$ given that the MDP was in state $s$ at time $t$ and action $a$ was taken\n- a **reward function**, which determines the reward received at time $t$\n- a **discount factor** $\\gamma$\n\nMore details are available [here](https://en.wikipedia.org/wiki/Markov_decision_process).\n\n**NOTE:** Reinforcement learning algorithms are often applied to problems that don't strictly fit into the MDP framework. 
In particular, situations in which the state of the environment is not fully observed lead to violations of the MDP assumption. Nevertheless, RL algorithms can be applied anyway.\n\n### Policies\n\nA **policy** is a function that takes in a **state** and returns an **action**. A policy may be stochastic (i.e., it may sample from a probability distribution) or it can be deterministic.\n\nThe **goal of reinforcement learning** is to learn a **policy** for maximizing the cumulative reward in an MDP. That is, we wish to find a policy $\\pi$ which solves the following optimization problem\n\n\\begin{equation}\n\\arg\\max_{\\pi} \\sum_{t=1}^T \\gamma^t R_t(\\pi),\n\\end{equation}\n\nwhere $T$ is the number of steps taken in the MDP (this is a random variable and may depend on $\\pi$) and $R_t$ is the reward received at time $t$ (also a random variable which depends on $\\pi$).\n\nA number of algorithms are available for solving reinforcement learning problems. Several of the most widely known are [value iteration](https://en.wikipedia.org/wiki/Markov_decision_process#Value_iteration), [policy iteration](https://en.wikipedia.org/wiki/Markov_decision_process#Policy_iteration), and [Q learning](https://en.wikipedia.org/wiki/Q-learning).\n\n### RL in Python\n\nThe `gym` Python module provides MDP interfaces to a variety of simulators. For example, the CartPole environment interfaces with a simple simulator which simulates the physics of balancing a pole on a cart. The CartPole problem is described at https://gym.openai.com/envs/CartPole-v0. This example fits into the MDP framework as follows.\n- The **state** consists of the position and velocity of the cart as well as the angle and angular velocity of the pole that is balancing on the cart.\n- The **actions** are to decrease or increase the cart's velocity by one unit.\n- The **transition function** is deterministic and is determined by simulating physical laws.\n- The **reward function** is a constant 1 as long as the pole is upright, and 0 once the pole has fallen over. Therefore, maximizing the reward means balancing the pole for as long as possible.\n- The **discount factor** in this case can be taken to be 1.\n\nMore information about the `gym` Python module is available at https://gym.openai.com/.\n\n\n```python\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport gym\nimport numpy as np\n```\n\nThe code below illustrates how to create and manipulate MDPs in Python. An MDP can be created by calling `gym.make`. Gym environments are identified by names like `CartPole-v0`. A **catalog of built-in environments** can be found at https://gym.openai.com/envs.\n\n\n```python\nenv = gym.make('CartPole-v0')\nprint('Created env:', env)\n```\n\nReset the state of the MDP by calling `env.reset()`. This call returns the initial state of the MDP.\n\n\n```python\nstate = env.reset()\nprint('The starting state is:', state)\n```\n\nThe `env.step` method takes an action (in the case of the CartPole environment, the appropriate actions are 0 or 1, for moving left or right). It returns a tuple of four things:\n1. the new state of the environment\n2. a reward\n3. a boolean indicating whether the simulation has finished\n4. a dictionary of miscellaneous extra information\n\n\n```python\n# Simulate taking an action in the environment. 
Appropriate actions for\n# the CartPole environment are 0 and 1 (for moving left and right).\naction = 0\nstate, reward, done, info = env.step(action)\nprint(state, reward, done, info)\n```\n\nA **rollout** is a simulation of a policy in an environment. It alternates between choosing actions based (using some policy) and taking those actions in the environment.\n\nThe code below performs a rollout in a given environment. It takes **random actions** until the simulation has finished and returns the cumulative reward.\n\n\n```python\ndef random_rollout(env):\n state = env.reset()\n \n done = False\n cumulative_reward = 0\n\n # Keep looping as long as the simulation has not finished.\n while not done:\n # Choose a random action (either 0 or 1).\n action = np.random.choice([0, 1])\n \n # Take the action in the environment.\n state, reward, done, _ = env.step(action)\n \n # Update the cumulative reward.\n cumulative_reward += reward\n \n # Return the cumulative reward.\n return cumulative_reward\n \nreward = random_rollout(env)\nprint(reward)\nreward = random_rollout(env)\nprint(reward)\n```\n\n**EXERCISE:** Finish implementing the `rollout_policy` function below, which should take an environment *and* a policy. The *policy* is a function that takes in a *state* and returns an *action*. The main difference is that instead of choosing a **random action**, the action should be chosen **with the policy** (as a function of the state).\n\n\n```python\ndef rollout_policy(env, policy):\n state = env.reset()\n \n done = False\n cumulative_reward = 0\n\n # EXERCISE: Fill out this function by copying the 'random_rollout' function\n # and then modifying it to choose the action using the policy.\n raise NotImplementedError\n\n # Return the cumulative reward.\n return cumulative_reward\n\ndef sample_policy1(state):\n return 0 if state[0] < 0 else 1\n\ndef sample_policy2(state):\n return 1 if state[0] < 0 else 0\n\nreward1 = np.mean([rollout_policy(env, sample_policy1) for _ in range(100)])\nreward2 = np.mean([rollout_policy(env, sample_policy2) for _ in range(100)])\n\nprint('The first sample policy got an average reward of {}.'.format(reward1))\nprint('The second sample policy got an average reward of {}.'.format(reward2))\n\nassert 5 < reward1 < 15, ('Make sure that rollout_policy computes the action '\n 'by applying the policy to the state.')\nassert 25 < reward2 < 35, ('Make sure that rollout_policy computes the action '\n 'by applying the policy to the state.')\n```\n\n## RLlib: Proximal Policy Optimization\n\n**GOAL:** The goal of this exercise is to demonstrate how to use the proximal policy optimization (PPO) algorithm.\n\nTo understand how to use **RLlib**, see the documentation at http://rllib.io.\n\nPPO is described in detail in https://arxiv.org/abs/1707.06347. It is a variant of Trust Region Policy Optimization (TRPO) described in https://arxiv.org/abs/1502.05477\n\nPPO works in two phases. In one phase, a large number of rollouts are performed (in parallel). The rollouts are then aggregated on the driver and a surrogate optimization objective is defined based on those rollouts. We then use SGD to find the policy that maximizes that objective with a penalty term for diverging too much from the current policy.\n\n\n\n**NOTE:** The SGD optimization step is best performed in a data-parallel manner over multiple GPUs. 
This is exposed through the `num_gpus` field of the `config` dictionary (for this to work, you must be using a machine that has GPUs).\n\n\n```python\n# Be sure to install the latest version of RLlib.\n! pip install -U ray[rllib]\n```\n\n\n```python\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport gym\nimport ray\nfrom ray.rllib.agents.ppo import PPOTrainer, DEFAULT_CONFIG\nfrom ray.tune.logger import pretty_print\n```\n\n\n```python\n# Start up Ray. This must be done before we instantiate any RL agents.\nray.init(num_cpus=3, ignore_reinit_error=True, log_to_driver=False)\n```\n\nInstantiate a PPOTrainer object. We pass in a config object that specifies how the network and training procedure should be configured. Some of the parameters are the following.\n\n- `num_workers` is the number of actors that the agent will create. This determines the degree of parallelism that will be used.\n- `num_sgd_iter` is the number of epochs of SGD (passes through the data) that will be used to optimize the PPO surrogate objective at each iteration of PPO.\n- `sgd_minibatch_size` is the SGD batch size that will be used to optimize the PPO surrogate objective.\n- `model` contains a dictionary of parameters describing the neural net used to parameterize the policy. The `fcnet_hiddens` parameter is a list of the sizes of the hidden layers.\n\n\n```python\nconfig = DEFAULT_CONFIG.copy()\nconfig['num_workers'] = 1\nconfig['num_sgd_iter'] = 30\nconfig['sgd_minibatch_size'] = 128\nconfig['model']['fcnet_hiddens'] = [100, 100]\nconfig['num_cpus_per_worker'] = 0 # This avoids running out of resources in the notebook environment when this cell is re-executed\n\nagent = PPOTrainer(config, 'CartPole-v0')\n```\n\nTrain the policy on the `CartPole-v0` environment for 2 steps. The CartPole problem is described at https://gym.openai.com/envs/CartPole-v0.\n\n**EXERCISE:** Inspect how well the policy is doing by looking for the lines that say something like\n\n```\nepisode_len_mean: 22.262569832402235\nepisode_reward_mean: 22.262569832402235\n```\n\nThis indicates how much reward the policy is receiving and how many time steps of the environment the policy ran. The maximum possible reward for this problem is 200. The reward and trajectory length are very close because the agent receives a reward of one for every time step that it survives (however, that is specific to this environment).\n\n\n```python\nfor i in range(2):\n result = agent.train()\n print(pretty_print(result))\n```\n\n**EXERCISE:** The current network and training configuration are too large and heavy-duty for a simple problem like CartPole. Modify the configuration to use a smaller network and to speed up the optimization of the surrogate objective (fewer SGD iterations and a larger batch size should help).\n\n\n```python\nconfig = DEFAULT_CONFIG.copy()\nconfig['num_workers'] = 3\nconfig['num_sgd_iter'] = 30\nconfig['sgd_minibatch_size'] = 128\nconfig['model']['fcnet_hiddens'] = [100, 100]\nconfig['num_cpus_per_worker'] = 0\n\nagent = PPOTrainer(config, 'CartPole-v0')\n```\n\n**EXERCISE:** Train the agent and try to get a reward of 200. 
If it's training too slowly you may need to modify the config above to use fewer hidden units, a larger `sgd_minibatch_size`, a smaller `num_sgd_iter`, or a larger `num_workers`.\n\nThis should take around 20 or 30 training iterations.\n\n\n```python\nfor i in range(2):\n result = agent.train()\n print(pretty_print(result))\n```\n\nCheckpoint the current model. The call to `agent.save()` returns the path to the checkpointed model and can be used later to restore the model.\n\n\n```python\ncheckpoint_path = agent.save()\nprint(checkpoint_path)\n```\n\nNow let's use the trained policy to make predictions.\n\n**NOTE:** Here we are loading the trained policy in the same process, but in practice, this would often be done in a different process (probably on a different machine).\n\n\n```python\ntrained_config = config.copy()\n\ntest_agent = PPOTrainer(trained_config, 'CartPole-v0')\ntest_agent.restore(checkpoint_path)\n```\n\nNow use the trained policy to act in an environment. The key line is the call to `test_agent.compute_action(state)` which uses the trained policy to choose an action.\n\n**EXERCISE:** Verify that the reward received roughly matches up with the reward printed in the training logs.\n\n\n```python\nenv = gym.make('CartPole-v0')\nstate = env.reset()\ndone = False\ncumulative_reward = 0\n\nwhile not done:\n action = test_agent.compute_action(state)\n state, reward, done, _ = env.step(action)\n cumulative_reward += reward\n\nprint(cumulative_reward)\n```\n", "meta": {"hexsha": "cd15b2639b8598325ecf51bbddc1385222ceb772", "size": 21866, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "intro.ipynb", "max_stars_repo_name": "deanwampler/rllib_tutorials", "max_stars_repo_head_hexsha": "b17912a7412ea040f34e5b56e61b467a5fe0189d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 40, "max_stars_repo_stars_event_min_datetime": "2020-08-14T06:23:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T07:44:09.000Z", "max_issues_repo_path": "intro.ipynb", "max_issues_repo_name": "deanwampler/rllib_tutorials", "max_issues_repo_head_hexsha": "b17912a7412ea040f34e5b56e61b467a5fe0189d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-03-29T23:07:55.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-31T12:27:07.000Z", "max_forks_repo_path": "intro.ipynb", "max_forks_repo_name": "deanwampler/rllib_tutorials", "max_forks_repo_head_hexsha": "b17912a7412ea040f34e5b56e61b467a5fe0189d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-03-26T22:12:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-28T03:47:30.000Z", "avg_line_length": 33.9007751938, "max_line_length": 369, "alphanum_fraction": 0.6129150279, "converted": true, "num_tokens": 3131, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.546738151984614, "lm_q2_score": 0.5273165233795672, "lm_q1q2_score": 0.2883040615034961}} {"text": "```python\n# Author: Ciar\u00e1n O'Brien & Chloe Doyle\n# Lecture: Svetlana Hensman\n# Submitted: 07/12/18\n# This file is in response to the second assignment as set out per the classification goal\n\n```\n\n\n```python\n# Boiler plate imports\nimport matplotlib.pyplot as plt\nimport seaborn as sn\nimport numpy as np\nimport pandas as pd\nimport re\nimport time\nimport cardinality\nimport statistics\nfrom math import floor\nfrom statistics import mean, median\nfrom collections import Counter\nfrom sympy import pretty_print as pp,latex\nimport seaborn as sn\nfrom sklearn import preprocessing\nfrom sklearn.preprocessing import StandardScaler, LabelEncoder\nfrom sklearn.model_selection import train_test_split\n\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import tree\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom sklearn.gaussian_process.kernels import RBF\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.naive_bayes import GaussianNB\n```\n\n\n```python\ndata = pd.read_csv(\"data/trainingset.txt\",sep='\\,',header=None, encoding='utf-8')\n```\n\n c:\\users\\ciaran\\appdata\\local\\programs\\python\\python35\\lib\\site-packages\\ipykernel_launcher.py:1: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.\n \"\"\"Entry point for launching an IPython kernel.\n\n\n\n```python\ndata.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
|   | id | age | job | marital | education | default | balance | housing | loan | contact | day | month | duration | campaign | pdays | previous | poutcome | y |
|---|----|-----|-----|---------|-----------|---------|---------|---------|------|---------|-----|-------|----------|----------|-------|----------|----------|---|
| 0 | TR1 | 44 | \"JobCat9\" | \"single\" | \"secondary\" | \"no\" | 29 | \"yes\" | \"no\" | \"unknown\" | 5 | \"may\" | 0 | 1 | -1 | 0 | \"unknown\" | \"TypeA\" |
| 1 | TR2 | 31 | \"JobCat4\" | \"married\" | \"secondary\" | \"no\" | 2 | \"yes\" | \"yes\" | \"unknown\" | 5 | \"may\" | 0 | 1 | -1 | 0 | \"unknown\" | \"TypeA\" |
| 2 | TR3 | 42 | \"JobCat4\" | \"divorced\" | \"tertiary\" | \"yes\" | 2 | \"yes\" | \"no\" | \"unknown\" | 5 | \"may\" | 0 | 1 | -1 | 0 | \"unknown\" | \"TypeA\" |
| 3 | TR4 | 58 | \"JobCat2\" | \"married\" | \"primary\" | \"no\" | 121 | \"yes\" | \"no\" | \"unknown\" | 5 | \"may\" | 0 | 1 | -1 | 0 | \"unknown\" | \"TypeA\" |
| 4 | TR5 | 43 | \"JobCat9\" | \"single\" | \"secondary\" | \"no\" | 593 | \"yes\" | \"no\" | \"unknown\" | 5 | \"may\" | 0 | 1 | -1 | 0 | \"unknown\" | \"TypeA\" |
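\n\nThe ParserWarning printed above is triggered by `sep='\,'`: a separator longer than one character is treated as a regular expression, so pandas falls back to the slower Python engine. A minimal sketch of an alternative read, assuming the file really is plain comma-separated:\n\n\n```python\n# Hypothetical variant of the read above: a single-character separator\n# keeps the default C engine and avoids the ParserWarning.\nimport pandas as pd\n\ndata = pd.read_csv('data/trainingset.txt', sep=',', header=None, encoding='utf-8')\n```\n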
\n
\n\n\n\n\n```python\ncontinuous_data = data.select_dtypes(include = ['int64','float64'])\ncategorical_data = data.select_dtypes(include = ['object'])\n```\n\n\n```python\ncategorical_data.drop('id', axis=1, inplace=True)\ncategorical_data.head()\n```\n\n c:\\users\\ciaran\\appdata\\local\\programs\\python\\python35\\lib\\site-packages\\ipykernel_launcher.py:1: SettingWithCopyWarning: \n A value is trying to be set on a copy of a slice from a DataFrame\n \n See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \"\"\"Entry point for launching an IPython kernel.\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
|   | job | marital | education | default | housing | loan | contact | month | poutcome | y |
|---|-----|---------|-----------|---------|---------|------|---------|-------|----------|---|
| 0 | \"JobCat9\" | \"single\" | \"secondary\" | \"no\" | \"yes\" | \"no\" | \"unknown\" | \"may\" | \"unknown\" | \"TypeA\" |
| 1 | \"JobCat4\" | \"married\" | \"secondary\" | \"no\" | \"yes\" | \"yes\" | \"unknown\" | \"may\" | \"unknown\" | \"TypeA\" |
| 2 | \"JobCat4\" | \"divorced\" | \"tertiary\" | \"yes\" | \"yes\" | \"no\" | \"unknown\" | \"may\" | \"unknown\" | \"TypeA\" |
| 3 | \"JobCat2\" | \"married\" | \"primary\" | \"no\" | \"yes\" | \"no\" | \"unknown\" | \"may\" | \"unknown\" | \"TypeA\" |
| 4 | \"JobCat9\" | \"single\" | \"secondary\" | \"no\" | \"yes\" | \"no\" | \"unknown\" | \"may\" | \"unknown\" | \"TypeA\" |
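\n\nThe SettingWithCopyWarning raised above is pandas warning that `drop(..., inplace=True)` is being applied to a slice taken from another DataFrame. A minimal sketch of the usual remedy, assuming the intent is simply an independent table of the categorical columns without `id`:\n\n\n```python\n# Hypothetical variant: work on an explicit copy, so dropping 'id'\n# cannot alias (or warn about) the original 'data' frame.\ncategorical_data = data.select_dtypes(include=['object']).copy()\ncategorical_data = categorical_data.drop('id', axis=1)\n```\n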
\n
\n\n\n\n# Data exploration\n\nViewing all the unique values from each coloumn\n\n\n## Results \n\nColumn 12 only contain 1 unique value, it's no use to us.\nColumns 5,7,8 are only binary \n\n\n```python\nfor col in data.columns.values:\n print(col, data[col].unique())\n```\n\n 0 ['TR1' 'TR2' 'TR3' ... 'TR24316' 'TR24317' 'TR24318']\n 1 [44 31 42 58 43 57 51 45 40 37 52 46 34 49 50 60 54 30 53 33 55 47 59 41\n 27 56 26 48 24 32 29 36 28 35 22 21 23 25 61 20 19 18 66 83 70 68 65 64\n 69 62 75 71 67 76 85 63 90 73 78 80 94 72 17 74 86 79 95 82 81 77 16 84\n 87 93 88]\n 2 ['\"JobCat9\"' '\"JobCat4\"' '\"JobCat2\"' '\"JobCat7\"' '\"JobCat11\"' '\"JobCat6\"'\n '\"JobCat3\"' '\"JobCat8\"' '\"JobCat10\"' '\"JobCat1\"' '\"JobCat5\"' '\"unknown\"']\n 3 ['\"single\"' '\"married\"' '\"divorced\"']\n 4 ['\"secondary\"' '\"tertiary\"' '\"primary\"' '\"unknown\"']\n 5 ['\"no\"' '\"yes\"']\n 6 [ 29 2 121 ... 16353 5083 4655]\n 7 ['\"yes\"' '\"no\"']\n 8 ['\"no\"' '\"yes\"']\n 9 ['\"unknown\"' '\"cellular\"' '\"telephone\"']\n 10 [ 5 6 7 8 9 12 13 14 15 16 19 20 21 23 26 27 28 29 30 2 3 4 11 17\n 18 24 25 1 10 22 31]\n 11 ['\"may\"' '\"jun\"' '\"jul\"' '\"aug\"' '\"oct\"' '\"nov\"' '\"dec\"' '\"jan\"' '\"feb\"'\n '\"mar\"' '\"apr\"' '\"sep\"']\n 12 [0]\n 13 [ 1 2 3 5 4 6 8 7 9 10 13 11 12 14 32 18 22 15 17 25 21 19 63 26\n 28 16 50 38 23 24 37 27 29 30 41 20 31 33 35 34 39 36]\n 14 [ -1 151 86 147 176 174 170 195 188 196 172 118 119 171 131 123 159 186\n 111 115 116 173 166 164 110 96 103 150 175 104 193 181 185 154 138 145\n 132 126 180 129 101 167 117 109 97 130 168 125 105 102 26 179 182 28\n 183 165 127 112 124 187 190 113 162 152 134 169 189 8 120 144 191 184\n 177 99 133 93 92 155 91 100 156 106 198 128 153 160 107 90 197 136\n 139 157 122 178 135 30 98 163 121 10 141 158 192 31 94 199 137 108\n 268 247 253 226 244 245 231 258 223 265 246 250 240 204 205 254 266 259\n 241 261 239 251 225 161 237 248 255 262 227 234 224 206 238 260 2 270\n 232 252 228 235 256 5 273 242 214 249 208 272 271 207 202 269 201 229\n 210 264 216 217 230 76 200 263 73 203 221 267 222 77 82 6 209 257\n 274 243 7 212 215 233 213 276 9 275 211 1 279 12 280 88 194 277\n 236 84 85 219 41 294 329 307 331 64 314 323 332 333 326 335 313 312\n 325 327 328 140 330 316 310 289 57 295 336 339 301 315 337 300 143 334\n 340 319 146 17 74 322 148 299 344 320 149 321 342 324 318 317 346 345\n 282 281 343 305 338 14 303 15 347 304 302 341 348 306 349 285 287 350\n 25 278 81 87 79 70 13 83 37 80 63 22 355 19 4 89 35 351\n 362 365 309 358 366 363 356 364 288 357 293 360 297 359 367 353 352 296\n 361 368 290 283 298 308 66 371 370 284 369 286 292 354 50 373 374 372\n 291 311 59 95 40 29 43 114 20 69 60 56 55 78 391 34 27 44\n 67 32 393 65 395 388 399 49 389 412 385 434 394 440 68 461 462 422\n 430 442 403 457 459 454 379 428 392 410 475 477 478 54 474 463 479 45\n 46 142 495 58 48 518 75 378 218 33 544 435 433 436 558 469 616 561\n 553 555 585 480 667 626 449 633 426 61 53 460 71 670 551 404 651 686\n 384 425 376 504 578 674 386 450 745 514 417 424 776 381 396 439 415 456\n 458 481 791 531 792 413 535 784 455 491 431 446 472 437 782 414 828 524\n 562 761 492 775 493 760 432 655 427 749 779 842 484 489 62 409 444 485\n 503 772 526 528 826 508 547 541 38 550]\n 15 [ 0 3 4 2 1 6 5 10 7 8 14 11 15 9 12 37 16 27 17 38 18 24 51 23\n 20 13 29 19 21 30 22 58 25 28 26 40 55]\n 16 ['\"unknown\"' '\"failure\"' '\"other\"' '\"success\"']\n 17 ['\"TypeA\"' '\"TypeB\"']\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n# TODO \nLook at removing all the \"\" 
from the values entries\n\n## Optional\nset the columns from a regex apllied to the text file provided by Svetlana, datadescription.txt\n\n\n```python\n\n# with open('data/datadescription.txt', 'r') as myfile:\n# filedata=myfile.read().replace('\\n', '')\n```\n\n\n```python\n# expression = re.compile(\"((100)|[1-9]\\d? - \\b[^\\d\\W]+\\b)\")\n# regx = '(100)|[1-9]\\d? - \\b[^\\d\\W]+\\b'\n\n\n# match = re.findall(regx,filedata)\n# print(match)\n# #print(re.search(expression,str(dataFeatures[rows])))\n```\n\n\n```python\ndata.columns = [\"id\",\"age\",\"job\",\"marital\",\"education\",\"default\",\"balance\",\"housing\",\"loan\",\"contact\",\"day\",\"month\",\"duration\",\"campaign\",\"pdays\",\"previous\",\"poutcome\",\"y\"]\n```\n\n\n```python\n\n```\n\n\n```python\ndef label_encode(df, columns):\n for col in columns:\n le = LabelEncoder()\n col_values_unique = list(df[col].unique())\n le_fitted = le.fit(col_values_unique)\n \n col_values = list(df[col].values)\n le.classes_\n col_values_transformed = le.transform(col_values)\n df[col] = col_values_transformed\n \n# Encoding a unique label to each catergorical data entry\ndf_categorical_labled = categorical_data.copy(deep=True)\nto_be_encoded_cols = df_categorical_labled.columns.values\nlabel_encode(df_categorical_labled, to_be_encoded_cols)\ndisplay(df_categorical_labled.head())\n\n```\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
|   | job | marital | education | default | housing | loan | contact | month | poutcome | y |
|---|-----|---------|-----------|---------|---------|------|---------|-------|----------|---|
| 0 | 10 | 2 | 1 | 0 | 1 | 0 | 2 | 8 | 3 | 0 |
| 1 | 5 | 1 | 1 | 0 | 1 | 1 | 2 | 8 | 3 | 0 |
| 2 | 5 | 0 | 2 | 1 | 1 | 0 | 2 | 8 | 3 | 0 |
| 3 | 3 | 1 | 0 | 0 | 1 | 0 | 2 | 8 | 3 | 0 |
| 4 | 10 | 2 | 1 | 0 | 1 | 0 | 2 | 8 | 3 | 0 |
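\n\nAs a side note, the per-column encoding shown above can also be written more compactly; a minimal sketch, assuming each categorical column should simply be mapped to consecutive integer codes:\n\n\n```python\n# Hypothetical one-liner equivalent of label_encode: fit one LabelEncoder per column.\nfrom sklearn.preprocessing import LabelEncoder\n\ndf_categorical_labled = categorical_data.apply(lambda col: LabelEncoder().fit_transform(col))\n```\n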
\n
\n\n\n\n```python\n# Heatmap of the coeffecients \n# Shows how dependant each column is on each other \n# Needs the encoding above to work\nmy_cmap = sn.light_palette(\"Navy\", as_cmap=True)\ncorrMatt = df_categorical_labled.corr()\nmask = np.array(corrMatt)\nmask[np.tril_indices_from(mask)] = False\nfig,ax= plt.subplots()\nfig.set_size_inches(20,10)\nsn.heatmap(corrMatt, mask=mask,vmax=.8, square=True,annot=True,cmap=my_cmap)\n\n```\n\n\n```python\n# Heatmap of the coeffecients \n# Shows how dependant each column is on each other \n# Needs the encoding above to work\nmy_cmap = sn.light_palette(\"Navy\", as_cmap=True)\ncorrMatt = data.corr()\nmask = np.array(corrMatt)\nmask[np.tril_indices_from(mask)] = False\nfig,ax= plt.subplots()\nfig.set_size_inches(20,10)\nsn.heatmap(corrMatt, mask=mask,vmax=.8, square=True,annot=True,cmap=my_cmap)\n\n```\n\n# Results\n\nFurther away from 1, the higher the correlation \n\n\n```python\n#Here we'll split the data into our training and validation sets\n#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)\n```\n\n# Classification\nShe said we can build our classifier from sklearn librayr, they have a load of classifiers compared [here.](http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html)\n\nEach classifier take various different values, each are outlined in the documentation\n\n\n```python\ndef get_train_test(df, y_col, x_cols, ratio):\n \"\"\" \n This method transforms a dataframe into a train and test set, for this you need to specify:\n 1. the ratio train : test (usually 0.7)\n 2. the column with the Y_values\n \"\"\"\n mask = np.random.rand(len(df)) < ratio\n df_train = df[mask]\n df_test = df[~mask]\n \n Y_train = df_train[y_col].values\n Y_test = df_test[y_col].values\n X_train = df_train[x_cols].values\n X_test = df_test[x_cols].values\n return df_train, df_test, X_train, Y_train, X_test, Y_test\n```\n\n\n```python\ndict_classifiers = {\n \"Logistic Regression\": LogisticRegression(),\n \"Nearest Neighbors\": KNeighborsClassifier(),\n \"Linear SVM\": SVC(),\n# \"Gradient Boosting Classifier\": GradientBoostingClassifier(n_estimators=1000),\n \"Decision Tree\": tree.DecisionTreeClassifier(),\n# \"Random Forest\": RandomForestClassifier(n_estimators=1000),\n# \"Neural Net\": MLPClassifier(alpha = 1),\n \"Naive Bayes\": GaussianNB()\n}\n```\n\n\n```python\ndef batch_classify(X_train, Y_train, X_test, Y_test, no_classifiers = 5, verbose = True):\n \"\"\"\n This method, takes as input the X, Y matrices of the Train and Test set.\n And fits them on all of the Classifiers specified in the dict_classifier.\n The trained models, and accuracies are saved in a dictionary. The reason to use a dictionary\n is because it is very easy to save the whole dictionary with the pickle module.\n \n Usually, the SVM, Random Forest and Gradient Boosting Classifier take quiet some time to train. 
\n So it is best to train them on a smaller dataset first and \n decide whether you want to comment them out or not based on the test accuracy score.\n \"\"\"\n \n dict_models = {}\n for classifier_name, classifier in list(dict_classifiers.items())[:no_classifiers]:\n t_start = time.clock()\n classifier.fit(X_train, Y_train)\n t_end = time.clock()\n \n t_diff = t_end - t_start\n train_score = classifier.score(X_train, Y_train)\n test_score = classifier.score(X_test, Y_test)\n \n dict_models[classifier_name] = {'model': classifier, 'train_score': train_score, 'test_score': test_score, 'train_time': t_diff}\n if verbose:\n print(\"trained {c} in {f:.2f} s\".format(c=classifier_name, f=t_diff))\n return dict_models\n```\n\n\n```python\ndef display_dict_models(dict_models, sort_by='test_score'):\n cls = [key for key in dict_models.keys()]\n test_s = [dict_models[key]['test_score'] for key in cls]\n training_s = [dict_models[key]['train_score'] for key in cls]\n training_t = [dict_models[key]['train_time'] for key in cls]\n \n```\n\n\n```python\ny_col_bank = 'job'\nx_cols_glass = list(df_categorical_labled.columns.values)\ntrain_test_ratio = 0.7\ndf_train, df_test, X_train, Y_train, X_test, Y_test = get_train_test(df_categorical_labled, y_col_bank, x_cols_glass, train_test_ratio)\n\ndict_models = batch_classify(X_train, Y_train, X_test, Y_test, no_classifiers = 8)\ndisplay_dict_models(dict_models)\n```\n\n trained Logistic Regression in 1.98 s\n trained Nearest Neighbors in 0.09 s\n trained Decision Tree in 0.02 s\n trained Linear SVM in 3.73 s\n trained Naive Bayes in 0.06 s\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "717eb240ff544c210a404478262555746674c0ed", "size": 114310, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ML CA2 - Classifier - C14385336- C15765215.ipynb", "max_stars_repo_name": "Machine-Learning-CA/Classifier", "max_stars_repo_head_hexsha": "ab359d2b855e55935a2fa86f7aa6596b65832174", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ML CA2 - Classifier - C14385336- C15765215.ipynb", "max_issues_repo_name": "Machine-Learning-CA/Classifier", "max_issues_repo_head_hexsha": "ab359d2b855e55935a2fa86f7aa6596b65832174", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ML CA2 - Classifier - C14385336- C15765215.ipynb", "max_forks_repo_name": "Machine-Learning-CA/Classifier", "max_forks_repo_head_hexsha": "ab359d2b855e55935a2fa86f7aa6596b65832174", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 118.3333333333, "max_line_length": 52640, "alphanum_fraction": 0.8153967282, "converted": true, "num_tokens": 6408, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5467381519846138, "lm_q2_score": 0.5273165233795671, "lm_q1q2_score": 0.28830406150349597}} {"text": "```python\n#we may need some code in the ../python directory and/or matplotlib styles\nimport sys\nimport os\nsys.path.append('../python/')\n\n#set up matplotlib\nos.environ['MPLCONFIGDIR'] = '../mplstyles'\nprint(os.environ['MPLCONFIGDIR'])\nimport matplotlib as mpl\nfrom matplotlib import pyplot as plt\n#got smarter about the mpl config: see mplstyles/ directory\nplt.style.use('standard')\nprint(mpl.__version__) \nprint(mpl.get_configdir())\n\n\n#fonts\n# Set the font dictionaries (for plot title and axis titles)\ntitle_font = {'fontname':'Arial', 'size':'16', 'color':'black', 'weight':'normal',\n 'verticalalignment':'bottom'} # Bottom vertical alignment for more space\naxis_font = {'fontname':'Arial', 'size':'32'}\nlegend_font = {'fontname':'Arial', 'size':'22'}\n\n#fonts global settings\nmpl.rc('font',family=legend_font['fontname'])\n\n\n#set up numpy\nimport numpy as np\n```\n\n ../mplstyles\n 3.0.3\n /home/phys/villaa/analysis/misc/nrFano_paper2019/mplstyles\n\n\n# ER/NR Band Calculation\n\nIn a previous notebook `QEr_2D_joint.ipynb` we calculated the correct two-dimensional probability distribution for energy recoils of arbitrary average ionization yield, given the resolutions ($\\sigma_H$,$\\sigma_I$, and $\\sigma_N$) as functions of the true recoil energy. The variance on the number of e/h pairs created, $\\sigma_N$ included the effective Fano factor, $F$. \n\nThis was all done starting from the fundamental variables $\\delta H$, $\\delta I$, and N (see `QEr_2D_joint.ipynb` for all variable definitions). The final result was the probability distribution in the Q,$\\tilde{E}_r$ plane of measured yield and recoil energy, given a true recoil distribution $P(E_r)$. $P(E_r)$ was taken to be:\n\n\\begin{equation}\nP(E_r) = \\frac{1}{\\alpha}e^{-\\alpha E_r},\n\\end{equation}\n\nwhere $\\alpha$ is the decay constant and we take to be equal to about 1/100 keV$^{-1}$ for nuclear recoils.\n\nIn this note we will use this model to construct the variance of constant-$\\tilde{E}_r$ slices of this distribution. The motivation for this calculation is that these variances are often the ones measured in experiments like Edelweiss [REF] and SuperCDMS. \n\n## Ionization Yield Distributions at Fixed $\\tilde{E}_r$\n\nFirst, we want to construct bands for electron recoils with the parameters $F^{\\prime}$=0 (Fano factor) and $\\alpha^{\\prime}$ = 1/100000 keV$^{-1}$ (essentially flat true recoil distribution).\n\n**NOTE: The value of $F^{\\prime}$ is choosen as such because the measured resolutions are done with electron recoils and implicitly include the true Fano factor. Although this Fano factor (typically in germanium of order 0.13 with large variability in the measurements [REF]) is included note that it is not included correctly because it is not separated from the variance of $\\delta I$. Nevertheless, since the Fano contribution for electron recoils is small compared to the ionization resolutions in most published work, it is safely parameterized inside the $\\delta I$ resolution for electron recoils.**\n\nNext we will construct a nuclear recoil band with the parameters $F$=15 (this appears to be a plausible lower limit for measurements above 10 keV, see [REF] Dougherty, and the notebook constructing the Fano factor from that) and $\\alpha$=1/100 keV$^{-1}$. 
\n\nWe are constructing the distribution that results from the following:\n\n\\begin{equation}\nP(Q,\\tilde{E}_r=E) = \\int_0^{\\infty} dE_r P(Q,\\tilde{E}_r=E|E_r)P(E_r)\n\\end{equation}\n\nthe following two cells set all the parameters and get the correct resolutions. I am using detector GGA1 here to compare with because in Fig. 5 of the Edelweiss paper [REF] those bands are actually plotted and we have digitized versions of the 90% (1.645$\\sigma$) containment bands. \n\n\n```python\n#constants\nV=4.0 #volts\neps = 3.0/1000 #keV per pair, I usually use 3.3 for the numerator, but Edw. uses 3.\nFWHM_to_SIG = 1 / (2*np.sqrt(2*np.log(2)))\n\n#yield models\na=0.16\nb=0.18\nQbar = lambda Er: a*Er**b\nQer = lambda Er: 1\n```\n\n\n```python\n#getting the resolutions\nimport EdwRes as er\n\n#these happen to be the parameters for GGA1 detector (Edelweiss)\naH=0.02\nfh2 = er.get_heatRes_func(1.3, 3.5,aH*FWHM_to_SIG)\nsigH = lambda x:fh2(x)\n\nfi2 = er.get_ionRes_func(1.3, 1.3, 2.8)\nsigI = lambda x:fi2(x)\n\n#new resolution functions \nEhee = lambda Er: ((1+(V/(1000*eps))*Qbar(Er))*Er)/(1+(V/(1000*eps)))\nEIee = lambda Er: Qbar(Er)*Er\n\n\nsigH_NR = lambda Er: sigH(Ehee(Er))\n\nsigI_NR = lambda Er: sigI(EIee(Er))\n\n#check sigI @ 122keV\nprint('FWHM @ 122 keV: {}'.format(2.355*sigI(122)))\n```\n\n FWHM @ 122 keV: 2.8002139755495987\n\n\nIn the next two cells, I set up the proability distributions for the afromentioned parameters with the functions developed as a result of the `QEr_2D.ipynb` notebook. \n\nAs a test energy I choose 10 keV because the distributions begin to look wide and clearly non-Gaussian there. At that energy we take a slice of the $P(Q,\\tilde{E}_r)$ distribution, and calculate the integral so that we can properly normalize it. This is the ionization yield distribution that would be observed at a single measured recoil energy, $\\tilde{E}_r$. \n\n\n```python\nimport prob_dist as pd\n\n#set up ER distribution\nFp = 0.0001\nalphap=1/100000\nPer = pd.QEr_v2_2D_fast(sigH,sigI,V,eps,Fp,Qer)\ner_band = pd.expband_2D(Per,alphap,3)\n\n#set up NR distribution\nF = 15\nalpha=1/100.0\nPnr = pd.QEr_v2_2D_fast(sigH_NR,sigI_NR,V,eps,F,Qbar)\nnr_band = pd.expband_2D(Pnr,alpha,1.5)\n```\n\n\n```python\n#test the dists at a certain energy\nEtest=10\n\nfrom scipy import integrate\nernorm = integrate.quad(er_band,0,4,args=(Etest,))[0]\nnrnorm = integrate.quad(nr_band,0,4,args=(Etest,))[0]\nprint(ernorm)\nprint(nrnorm)\n\nX=np.arange(0,1.5,0.01)\nQdist_er = lambda Q: (1/ernorm)*er_band(Q,Etest)\nQdist_nr = lambda Q: (1/nrnorm)*nr_band(Q,Etest)\nQerv = np.vectorize(Qdist_er)\nQnrv = np.vectorize(Qdist_nr)\nYer = Qerv(X)\nYnr = Qnrv(X)\n```\n\n 100038.55775424647\n 88.38426954941066\n\n\nThe next two cells show the ER and NR distributions for the above parameters on a logarithmic and linear vertical scale. It can be seen on the logarithmic plots that the distributions are not Gaussian. 
\n\n\n```python\n#set up a 1d plot\nfig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)\nax1 = axes\n\n\n\nax1.plot(X,Yer,color='orange',linestyle='-',label='{} keV ER band'.format(Etest))\nax1.plot(X,Ynr,color='steelblue',linestyle='-',label='{} keV NR band'.format(Etest))\n\nax1.axvline(Qbar(Etest), color='r', linestyle='--', lw=2, alpha=0.8,label='predicted centers')\nax1.axvline(1, color='r', linestyle='--', lw=2, alpha=0.8)\n\nymin = 1e-4\nmax1 = np.max(Yer)\nmax2 = np.max(Ynr)\nymax = 1.01*np.max(np.asarray([max1,max2]))\n\nax1.set_yscale('linear')\nax1.set_yscale('log')\nax1.set_xlim(0, 1.5) \nax1.set_ylim(ymin,ymax)\nax1.set_xlabel(r'ionization yield',**axis_font)\nax1.set_ylabel('PDF',**axis_font)\nax1.grid(True)\n#ax1.yaxis.grid(True,which='minor',linestyle='--')\nax1.legend(loc=4,prop={'size':22})\n\nfor axis in ['top','bottom','left','right']:\n ax1.spines[axis].set_linewidth(2)\n\nplt.tight_layout()\n#plt.savefig('figures/figure.png')\nplt.show()\n```\n\n\n```python\n#set up a 1d plot\nfig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)\nax1 = axes\n\n\n\nax1.plot(X,Yer,color='orange',linestyle='-',label='{} keV ER band'.format(Etest))\nax1.plot(X,Ynr,color='steelblue',linestyle='-',label='{} keV NR band'.format(Etest))\n\nax1.axvline(Qbar(Etest), color='r', linestyle='--', lw=2, alpha=0.8,label='predicted centers')\nax1.axvline(1, color='r', linestyle='--', lw=2, alpha=0.8)\n\nymin = 1e-4\nmax1 = np.max(Yer)\nmax2 = np.max(Ynr)\nymax = 1.01*np.max(np.asarray([max1,max2]))\n\nax1.set_yscale('linear')\nax1.set_yscale('linear')\nax1.set_xlim(0, 1.5) \nax1.set_ylim(ymin,ymax)\nax1.set_xlabel(r'ionization yield',**axis_font)\nax1.set_ylabel('PDF',**axis_font)\nax1.grid(True)\n#ax1.yaxis.grid(True,which='minor',linestyle='--')\nax1.legend(loc=1,prop={'size':22})\n\nfor axis in ['top','bottom','left','right']:\n ax1.spines[axis].set_linewidth(2)\n\nplt.tight_layout()\n#plt.savefig('figures/figure.png')\nplt.show()\n```\n\n## The 68.27% Symmetric Containment Region\n\nSince these distributions are not quite Gaussian for all combinations of parameters, we must approximate the widths, $\\sigma_Q(\\tilde{E}_r)$. We define these to be the symmetric regions about the expected yield centers (note, it may be true that in some cases the distribution can be skewed so that neither the mean nor the mode are at the expected yield centers) that contain 68.27% of the distribution. \n\nFor this we can use the optimization libraries in `scipy.optimize`. We define a numerical function that will be zero when the size of the aformentioned region is supplied as an argument. 
\n\n\n```python\nimport scipy.optimize as so\n\nQint_er = lambda a: integrate.quad(Qerv,Qer(Etest)-a,Qer(Etest)+a,limit=100)[0]\nQint_nr = lambda a: integrate.quad(Qnrv,Qbar(Etest)-a,Qbar(Etest)+a,limit=100)[0]\n\nsiger_zero = lambda a: Qint_er(a) - 0.6827 #one sigma\nsignr_zero = lambda a: Qint_nr(a) - 0.6827 #one sigma\n\nsiger_90 = lambda a: Qint_er(a) - 0.9 #90%\nsignr_90 = lambda a: Qint_nr(a) - 0.9 #90%\n\nsigQer = so.brentq(siger_zero,0,1,rtol=0.001,maxiter=100)\nprint(sigQer)\nsigQnr = so.brentq(signr_zero,0,1,rtol=0.001,maxiter=100)\nprint(sigQnr)\n\nsigQer90 = so.brentq(siger_90,0,1,rtol=0.001,maxiter=100)\nprint('1.645xone sigma: {}; 90\\%:{}'.format(1.645*sigQer,sigQer90))\nsigQnr90 = so.brentq(signr_90,0,1,rtol=0.001,maxiter=100)\nprint('1.645xone sigma: {}; 90\\%:{}'.format(1.645*sigQnr,sigQnr90))\n```\n\n 0.22478180172240245\n 0.11474735445072594\n 1.645xone sigma: 0.36976606383335203; 90\\%:0.3698908536745943\n 1.645xone sigma: 0.18875939807144418; 90\\%:0.1840166980484885\n\n\n\n```python\nimport time\nstart = time.time()\n#root = pd.sigrootEdw(10,10,V,eps,(1/100),Qbar)\nroot = pd.sigrootEdw(Fp,7,V,eps,alphap,Qer)\nend = time.time()\nprint('sigma: {}'.format(root))\nprint('{} s'.format(end - start))\n```\n\n sigma: 0.3220219209327321\n 9.04426646232605 s\n\n\n\n```python\nimport fano_calc as fc\n\n(nrsigma,nrE) = fc.calcQWidth(10,F,V,eps,alpha,Qbar,aH)\n(ersigma,erE) = fc.calcQWidth(10,Fp,V,eps,alphap,Qer,aH)\n```\n\nIn the Edelweiss paper [REF], a Gaussian approximation was used wherein the width in ionization yield could be written as:\n\n\\begin{equation}\n\\sigma_Q(\\tilde{E}_r) \\simeq \\frac{1}{\\tilde{E}_r} \\sqrt{\\left(1+\\frac{V}{\\epsilon}\\bar{Q}\\right)^2\\sigma_I^2 + \\left(1+\\frac{V}{\\epsilon}\\right)^2\\bar{Q}^2\\sigma_H^2}.\n\\end{equation}\n\nIt is therefore useful to compare this function to the values obtained from the more fundamental derivation that we are seeking. \n\n\n```python\n#make functions for analytical bands\nsigQer = lambda Etr: (1/Etr)*np.sqrt((1+(V/(1000*eps))*Qer(Etr))**2*sigI(Etr)**2 + (1+(V/(1000*eps)))**2*Qer(Etr)**2 \\\n *sigH(Etr)**2)\n \nsigQnr = lambda Etr: (1/Etr)*np.sqrt((1+(V/(1000*eps))*Qbar(Etr))**2*sigI_NR(Etr)**2 + (1+(V/(1000*eps)))**2*Qbar(Etr)**2 \\\n *sigH_NR(Etr)**2)\n\nprint(sigQer(10))\nprint(sigQnr(10))\nsigQerv = np.vectorize(sigQer)\nsigQnrv = np.vectorize(sigQnr)\n```\n\n 0.2246462352359262\n 0.10795032751310846\n\n\n## Comparing Modeled and Edelweiss Published Bands\n\nSince we can now calculate the 68.27% containments at a given recoil energy, we can find what they are for all recoil energy and compare to the bands published in the Edelweiss paper [REF]. We compare for detector GGA1, because that is the detector that the Edw. paper shows the bands from. 
\n\n\n```python\nimport pandas as pd\nband_data = pd.read_csv(\"data/edelweiss_bands_GGA1.csv\")\nprint (band_data.head(10))\n```\n\n x Curve1 curve2 curve3 curve4\n 0 1.667 -6.44581 -0.08957 1.49300 -0.64478\n 1 1.802 -6.06069 -0.08789 1.44296 -0.62392\n 2 1.804 -6.05415 -0.08786 1.46798 -0.62356\n 3 1.938 -5.68168 -0.08613 1.41795 -0.60321\n 4 2.075 -5.31531 -0.08430 1.39293 -0.58303\n 5 2.212 -4.96139 -0.08240 1.36791 -0.56335\n 6 2.883 -3.39678 -0.07177 1.11174 -0.47381\n 7 3.158 -2.83384 -0.06656 1.10052 -0.44031\n 8 3.295 -2.57110 -0.06372 1.06171 -0.42438\n 9 4.518 -0.66571 -0.01776 0.75550 -0.30031\n\n\n\n```python\n# path 0: 68.75 keV inelastic scattering\n# path 1: average NR line\n# path 2: 13.26 keV inelastic\n# path 3: ionization threshold\n# path 4: lower nuclear recoil band\n# path 5: upper nuclear recoil band\n# path 6: upper and lower electron recoil band\nimport pandas as pd\nband_data_svg = pd.read_csv(\"data/edelweiss_band_GGA1_allCurveData.txt\", skiprows=1, header=None, delim_whitespace=True)\nprint (band_data_svg.head(5))\nlist(band_data_svg.columns.values)\n```\n\n 0 1 2 3 4 5 6 7 \\\n 0 69.4986 0.991839 1.02904 0.159922 14.2519 0.944224 2.10350 1.49896 \n 1 69.8626 0.987244 1.31515 0.167136 14.3933 0.936084 2.11548 1.48548 \n 2 70.2266 0.982648 1.60126 0.174349 14.5347 0.927944 2.12746 1.47200 \n 3 70.5907 0.978053 2.05730 0.181087 14.6761 0.919803 2.13943 1.45851 \n 4 70.9944 0.973552 2.45114 0.187750 14.8725 0.911724 2.15141 1.44503 \n \n 8 9 10 11 12 13 \n 0 11.0197 -0.002028 1.89725 1.50072 9.99176 1.50730 \n 1 11.3235 0.005082 1.92804 1.48860 10.39770 1.48881 \n 2 11.6309 0.012183 1.95883 1.47648 10.78450 1.47030 \n 3 11.9934 0.019148 1.98962 1.46436 11.35520 1.44694 \n 4 12.3254 0.026192 2.02041 1.45224 12.01670 1.42316 \n\n\n\n\n\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]\n\n\n\n\n```python\n#set up a 1d plot\nfig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)\nax1 = axes\n\n\n#X = np.arange(0.1,100,0.1)\n\nerQ = Qer(erE)\nnrQ = Qbar(nrE)\n\n# plot the data-theifed bands\n#ax1.plot(band_data.x[band_data.x>7.82], band_data.Curve1[band_data.x>7.82], 'b-',label='')\n#ax1.plot(band_data.x, band_data.curve2, 'b-',label='')\n#ax1.plot(band_data.x, band_data.curve3, 'r-',label='')\n#ax1.plot(band_data.x, band_data.curve4, 'r-',label='')\nax1.plot(nrE,nrQ+1.645*nrsigma,'r-',linewidth=2,label='NR band model (F={:2.1f}; a$_H$={})'.format(F,aH))\nax1.plot(erE,erQ+1.645*ersigma,'b-',linewidth=2,label='ER band model (F={:2.1f}; a$_H$={})'.format(Fp,aH))\nax1.plot(nrE,nrQ-1.645*nrsigma,'r-',linewidth=2)\nax1.plot(erE,erQ-1.645*ersigma,'b-',linewidth=2)\n\nax1.plot(nrE,nrQ+1.645*sigQnrv(nrE),color='orange',linestyle='--',linewidth=3,label='NR model Edw. (a$_H$={})'.format(aH))\nax1.plot(erE,erQ+1.645*sigQerv(erE),color='steelblue',linestyle='--',linewidth=3,label='ER model Edw. (a$_H$={})'.format(aH))\nax1.plot(nrE,nrQ-1.645*sigQnrv(nrE),color='orange',linestyle='--',linewidth=3)\nax1.plot(erE,erQ-1.645*sigQerv(erE),color='steelblue',linestyle='--',linewidth=3)\n\n#svg-extracted\n# plot the svg? electron-recoil bands\nax1.plot(band_data_svg[12][band_data_svg[13]>1], band_data_svg[13][band_data_svg[13]>1], 'b:', linewidth=2, \\\n label='Edw. paper ER')\nax1.plot(band_data_svg[12][band_data_svg[13]<1], band_data_svg[13][band_data_svg[13]<1], 'b:', linewidth=2,label='')\n\n# nuclear recoil bands\nax1.plot(band_data_svg[8], band_data_svg[9], 'r:', linewidth=2, label=\"Edw. 
paper NR\")\nax1.plot(band_data_svg[10], band_data_svg[11], 'r:', linewidth=2, label='')\n\n#ax1.plot(X,ynr_muv(X),'r--',label='NR mu')\n#ax1.plot(X,ynr_muv(X)+3*ynr_sigv(X),'r-',label='NR 3$\\sigma$')\n#ax1.plot(X,ynr_muv(X)-3*ynr_sigv(X),'r-',label=None)\n\n#ax1.plot(X,yer_muv(X),color='orange',linestyle='--',label='ER mu')\n#ax1.plot(X,yer_muv(X)+3*yer_sigv(X),color='orange',linestyle='-',label='ER 3$\\sigma$')\n#ax1.plot(X,yer_muv(X)-3*yer_sigv(X),color='orange',linestyle='-',label=None)\n\n\n\n#ax1.axvline(t(t_test[idx]), color='k', linestyle='-', lw=2, alpha=0.8,label=None)\n\n\nymin = 0\nymax = 1.7\n\n\n\nax1.set_yscale('linear')\n#ax1.set_yscale('linear')\nax1.set_xlim(0, 100) \nax1.set_ylim(ymin,ymax)\nax1.set_xlabel(r'recoil energy [keV]',**axis_font)\nax1.set_ylabel('ionization yield',**axis_font)\nax1.grid(True)\nax1.yaxis.grid(True,which='minor',linestyle='--')\n#ax1.legend(loc=1,prop={'size':22})\nlgd = ax1.legend(bbox_to_anchor=(1.04,1),loc='upper left',borderaxespad=0,prop={'size':22})\n#ax1.legend(bbox_to_anchor=(1.04,1),borderaxespad=0,prop={'size':22})\n\nfor axis in ['top','bottom','left','right']:\n ax1.spines[axis].set_linewidth(2)\n\n#plt.tight_layout()\n#plt.savefig('figures/GGA1_bandModel.png')\nplt.savefig('figures/GGA1_bandModel.png', bbox_extra_artists=(lgd,), bbox_inches='tight')\nplt.show()\n```\n\n# Matching Published and Calculated Widths\n\nIt is obvious from the plot above that, while our calculation using the fully correct distribution in the ionization-recoil plane matches very well with the Edw. paper model, it does not match with the bands extracted from Fig. 5 of the Edw. paper. I should plot these more carefully and look for discrepancies. \n\n\n```python\n#set up a 1d plot\nfig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)\nax1 = axes\n\n\n\n\nX=np.arange(0.1,160,0.1)\n\n\n#ax1.plot(X,sigQnrv(X),color='r',linestyle=\"--\",linewidth=2,label='single-scatter yield model (aH={:1.3})'.format(aH))\n#ax1.plot(X,1.645*sigQerv(X),color='b',linestyle=\"--\",linewidth=2,label='ER model Edw. (a$_H$={})'.format(aH))\nax1.plot(X,2.2*sigQerv(X),color='b',linestyle=\"--\",linewidth=2,label='ER model Edw. (a$_H$={})'.format(aH))\nax1.plot(band_data_svg[12][band_data_svg[13]>1], band_data_svg[13][band_data_svg[13]>1]-1.0, 'b:', linewidth=2, \\\n label='Edw. paper ER band plot')\n\n\n\n\n\nymin = 0.04\nymax = 0.2\n\n\n\nax1.set_yscale('linear')\n#ax1.set_yscale('log')\nax1.set_xlim(0, 160) \nax1.set_ylim(ymin,ymax)\nax1.set_xlabel(r'recoil energy [keV]',**axis_font)\nax1.set_ylabel('ionization yield width',**axis_font)\nax1.grid(True)\nax1.yaxis.grid(True,which='minor',linestyle='--')\n#ax1.legend(loc=1,prop={'size':22})\n#ax1.legend(loc='upper right', bbox_to_anchor=(1, 0.5),prop={'size':22})\nlgd = ax1.legend(bbox_to_anchor=(1.04,1),loc=\"upper left\",borderaxespad=0,prop={'size':22})\n\nfor axis in ['top','bottom','left','right']:\n ax1.spines[axis].set_linewidth(2)\n\n#plt.tight_layout()\n#plt.savefig('figures/figure.png')\n#plt.savefig('figures/figure.png', bbox_extra_artists=(lgd,), bbox_inches='tight')\nplt.show()\n```\n\nSo, by adjusting $a_H$ to 0.02 **_and_** increasing the ER model do report 2.2$\\sigma$ we get close. I can keep playing this game until I get the curve right, but why in the world would Edw. 
have plotted some arbitrary value like 2.2$\\sigma$?\n\n\n```python\n\n```\n", "meta": {"hexsha": "82cb238bb978ff76b89d82f79256b7d8fec2931e", "size": 356462, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "analysis_notebooks/ERNR_bands.ipynb", "max_stars_repo_name": "villano-lab/nrFano_paper2019", "max_stars_repo_head_hexsha": "f44565bfb3e45b2dfbe2a73cba9f620a7120abd7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-04-06T17:27:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T20:38:54.000Z", "max_issues_repo_path": "analysis_notebooks/ERNR_bands.ipynb", "max_issues_repo_name": "villano-lab/nrFano_paper2019", "max_issues_repo_head_hexsha": "f44565bfb3e45b2dfbe2a73cba9f620a7120abd7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "analysis_notebooks/ERNR_bands.ipynb", "max_forks_repo_name": "villano-lab/nrFano_paper2019", "max_forks_repo_head_hexsha": "f44565bfb3e45b2dfbe2a73cba9f620a7120abd7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 477.1914323963, "max_line_length": 131996, "alphanum_fraction": 0.935788387, "converted": true, "num_tokens": 6399, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5698526660244838, "lm_q2_score": 0.5039061705290805, "lm_q1q2_score": 0.2871522747021847}} {"text": "```python\nimport pandas as pd\nimport ms3 \nfrom ms3.utils import *\nimport os \nfrom ms3 import Score\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics.pairwise import cosine_similarity\nimport numpy as np\nfrom scipy import stats\nimport pretty_midi\nimport statistics\nfrom scipy.spatial import distance\nimport torch.nn\nfrom torch import nn\nfrom torch.optim import SGD\n# include directory \nhome_dir = '/Users/Pushkar/PycharmProjects/Tresillo'\n```\n\n# Towards a Rhythmical Definition of the Tressilio beat and a Tracing of it in Popular Music \n\n## 1) Introduction & Research Question \n\n### 1.1) Research question: \n\"Can we compute to which extent a defined rhythm, which we refer to as Tresillo rhythm, is used in a given pop song and if so can we measure the intensity of Tresillo rhythm use in top-20-billboard songs over the past twenty years?\" \n\n**Discussion:** \nIn our project we would like to discuss the use of a rhythm, which we refer to as 'Tresillo rhythm', in the popular music of the last 20 years. We define this rhythm in our project, given secondary literature, and thus obtain a precise notation and formulization of the Tresillo rhythm. Given this definition we can then compute the similarity between the Tresillo rhythm and the rhythm of a given pop song. Thus, we hope to obtain a similarity coefficient which measures how similar the rythm of a given pop song is to our self defined Tresillo rythm. Given the computed similarity coefficients, we hope to measure the use of the Tresillo rhythm in the top 20 billboard songs of the past 20 years (1999-2019).\n\n\n### 1.2) Assumptions\n\n- Pop songs found in the billboard charts mostly come with a simple melodic and rythmic structure without a lot of variation troughout the song. 
We assume that for most pop songs present in the billboard charts, one can identify one dominant rythm per song.\n- For our presntation of songs in rythm space, we also assume that the majority of the songs in the billboard charts is in the time signature of 4/4. Our data exploration justifies this assumption.(There are 420 songs with 4/4, 6 with 3/4 and 16 with changing time signature). \n- We assume that a time sample longer than 30 seconds is suitable to present the main rythm of a pop song. This assumptions is related to the difficultys of finding puplic available midi data for recent pop music. While we couldn't find a data source providing the midi data of the full songs, we managed to find midi data for shorter samples of the songs.\n \n\n\n```python\n#Some numbers regarding our song length assumption, not adressed in the former notebooks\nlist_midis = os.listdir(r\"C:/Users/Florian/Documents/GitHub/Tresillo/dataset/project_midi/billboard\")\nlength_midis = []\nfor el in list_midis:\n midi_data = pretty_midi.PrettyMIDI(\"C:/Users/Florian/Documents/GitHub/Tresillo/dataset/project_midi/billboard\" +'/'+el)\n length_midis.append(midi_data.get_end_time())\n\naverage_midi_length = (sum(length_midis)/len(length_midis))\nplt.figure(1)\nplt.hist(length_midis, 20)\nmidi_length_standart_dev = statistics.stdev(length_midis)\n\nshort_midis = []\nfor songs in length_midis:\n if songs < 60:\n short_midis.append(songs)\n \nplt.figure(2)\nplt.hist(short_midis, 20)\n\ncounter_30 = 0\ncounter_40 = 0\ncounter_30 = sum (i < 30 for i in length_midis)\ncounter_40 = sum (i < 40 for i in length_midis)\nprint(\"Samples shorter than 30 sec: \" , counter_30)\nprint(\"Samples shorter 40 sec: \" , counter_40)\nprint (\"Average sample length\" , average_midi_length)\nprint(\"Standart deveiation sample length\" , midi_length_standart_dev)\n```\n\n### 1.3) Data Representation\nInitially our data is represented in the MIDI file format. The representation of music in the MIDI format has the advantage, that often several voices of different instruments are represented in such files. In contrast to musescore, where often only the voice of one instrument (mostly piano) is notated. \nHowever, to obtain a list of onsets of every musical event in a given song, we have to convert our MIDI (.midi) files to Musescore (.mscx) files. \nTo convert and further analyze our files, we will use the [ms3](https://pypi.org/project/ms3/) python library. \nTo convert a directory of .midi files to .mscx files we use following command:\n\n\n\n```python\npath_midi = '/home/nulpe/Desktop/Tresillo/dataset/project_midi/tresillo/'\ntarget = '/home/nulpe/Desktop/Tresillo/dataset/project_mscx/mscx_tresillos_billboard/'\n\ndir_list = os.listdir(path_midi)\n\nfor el in dir_list:\n convert(path_midi+el, target+el[:-4]+'.mscx', MS='musescore3')\n```\n\n**TA instructions**: Precision of Research Question\n\n- State the final version of your research question as you understand it now.\n\n- Make all assumptions and hypotheses explicit.\n\n- In case Milestone 2 did not include the final data representation that you are analyzing, present the representation you finally used and the necessary steps to get to it.\n\n**Thoughts Aurel:** \n - Reasearch Question: Move to a fuzzy definition of Tresillio-ness: E.g.: \n Are pop songs increasingly using a rythm pattern which is similar to a rythm pattern which we reffer to as 'Tresillio rythm'? \n \n - Assumptions are very important for them. 
Here we have to note our definition(s) of the Tresillio rythm pattern and how we derive them (incl sources). Furthermore, we have to discuss cases where there are Rythms which are similar to the Tresillio rythm but not equivalent (e.g.: reggaeton) and how we deal with them computationally. \n \n - Here we have to discuss the conversion of our MIDI files to the musescore3 file format. Furthermore, we have to discuss what the musescore3 format offers us, and why it is the better choice for our analysis. \n \n **Toughts Florian:**\n -Assumptions:\n \n -30 second piece of pop song is suitable to identfy the main rythm\n \n -Main rythm can be identified by counting the onsets\n \n -the great majority of pop songs comes in 4/4 (not shure if we really need that assumption)\n \n\n\n \n\n## 2) Methods\n\n### 2.1) Definition of the Tresillo rhythm\nWe notate a clean version of a rythm wich we from now on use as our defintion for the clean tresillo rythm, in the following context also called vanilla tresillo. \n\n\n\nThe rhythm pattern consists of a dotted eighth note, followed by a sixteenth note, an eighth rest and an eighth note and is repeated two times in a 4/4 bar. The rytm pattern is beeing used as a own rythm or as a rythmic part of a more complex rythm, for example the \"clave\" pattern or the ragaetton rythm Floyd, 1999).\nBy notating the rythm in MuseScore 3 and saving it as a .mscx file, we can use it our data processing pipeline described in the following sections.\n\n### 2.2) Rhythm histograms and vectors \n\nTo be able to measure the similarity between rhythm we must have a clear definition and thus following representation of rhythm. In general, one can define rhythm as \"a series of onsets and durations of musical events.\u201d (Rohrmeier, 2020). In our specific case however, we are interested in the dominat and repeating rhythm of a given song. Therefore we prefer a narrower definition of rhythm as \u201crepeated recurrences in alternate heavy and light beats\u201d (Chin and Wu, 1992). To furthermore simplify our data, we assume that the main rhythm of a song can be defined by the onsets of its musical events (notes). \nTo obtain a representation of the dominate rhythm of a song, we preceed to aggregate all musical onset to one bar. Collapsing all musical onsets to one bar and thus obtaining onset 'histograms' is a common pratice and has been used beside others to analyze western classical music (Palmer and Krumhansl, 1990) and american folk music (Huron and Ommen, 2006). \nWith the onset histogram of a song we can compile a n dimensional vector for each song, which we refer to as a 'rhythm vector'. Given that the number of songs with meters other than 4/4 is negible, we only consider songs with a 4/4 meter in our analysis. Given that we only consider songs with 4/4 meters, we obtain for each song a 16 dimensional vector. \n\n\n\n\n### 2.3) Evaluating different similarity measures \nIn this section we try out different similarity measures to choose the best one. 
A good similarity measure would have a high similarity for all songs which have a $tresillo^+$ pattern and a low similarity for songs which do not have such a pattern.\nTo measure this, we use 'Similarity Goodness' $S^*$, which is the ratio of mean similarity in songs which have tresillo and mean similarity of songs which don't.\n\\begin{equation}\nS^* = \\frac{\\frac{\\sum{\\text{similarity(Songs with tresillo)}}}{n}}{\\frac{\\sum{\\text{similarity(Songs without tresillo)}}}{m}}\n\\end{equation}\n`n` and `m` denote the number of songs in each of respective categories. \n\nIn each of the subsections we calculate relavant statistics by dividing the similarity methods based on the following techniques\n* Template based center vs Centroid computed from known tresillo songs\n* Cosine similarity vs Euclidean Distance\n* Parameterized vs non parameterized distance function\n\nEach section contains the mean similarity/distance and it's standard deviation.\n\nAt the end of the section we compare the $S^*$ (Model Goodness) for all of the above subsections.\n\n_NOTE: Both the datasets used to evaluate $S^*$ are from the validation/test set and not used as part of training/finding the centroid._\n\n\n\n```python\n# Collapse all voices into a single voice by taking the mean over the respective beat.\ndef collapse_normalize_vectors(df_rythm_vectors):\n rythm_vector_collaped = df_rythm_vectors.groupby(['song_artist']).agg(['sum'])\n rythm_vector_collaped.columns = rythm_vector_collaped.columns.droplevel(1)\n rythm_vector_collaped = rythm_vector_collaped.drop(['instrument', 'level_1'],axis=1)\n rythm_vector_collaped[\"sum\"] = rythm_vector_collaped.sum(axis=1)\n rythm_vector_collaped = rythm_vector_collaped.loc[:,\"0\":\"15\"].div(rythm_vector_collaped[\"sum\"], axis=0)\n return rythm_vector_collaped\n```\n\n\n```python\ndf_tresillo_not_billb = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_tresillo_not_billboard.csv')\ndf_not_tre_validation = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_not_tresillo_validation.csv')\ndf_rythm_vectors_train = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_no_tresillos_4_4.csv')\ndf_rythm_vectors = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_tresillos_billboard.csv')\n\n\ntresillo_train = collapse_normalize_vectors(df_tresillo_not_billb)\ntresillo_test = collapse_normalize_vectors(df_rythm_vectors)\ndf_not_tre_validation = collapse_normalize_vectors(df_not_tre_validation)\nnon_tresillio_vectors = collapse_normalize_vectors(df_rythm_vectors_train)\n\nnp_tresillo_train = tresillo_train.to_numpy()\nnp_tresillo_test = tresillo_test.to_numpy()\nnp_not_tre_validation = df_not_tre_validation.to_numpy()\nnp_non_tresillo = non_tresillio_vectors.to_numpy()\n\ndf_synt_tresillo = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_tresillio.csv')\ndf_synt_tresillo = df_synt_tresillo[df_synt_tresillo['song_artist']!='Raggetone'].drop(['instrument','level_1'], axis=1)\ndf_synt_tresillo.index = df_synt_tresillo.song_artist\ndf_synt_tresillo.drop(['song_artist'], axis=1)\ndf_synt_tresillo[\"sum\"] = df_synt_tresillo.sum(axis=1)\ndf_synt_tresillo = df_synt_tresillo.loc[:,\"0\":\"15\"].div(df_synt_tresillo[\"sum\"], axis=0)\nvanilla_tresillo_vector = np.asarray(df_synt_tresillo.loc['Vanilla_Tresillo',\"0\":\"15\"]).reshape(-1, 1).T\n```\n\n#### 2.3.1) Rhythm similarity measured with cosine simularity, using tresillo template as centre.\nGiven the 16 dimensional rhythm vectors we obtain following the methode described 
above, we can now compute simple similarity metrics. \nIn rhythm analysis a common similarity metric which is used to calculate the similarity between two rhythm vectors is the cosine distance (see: Panteli et al., 2014; Parry and Essa, 2003). The cosine similarity metric is scale invariante, which as such is interesting for rhythm similarity given that thus only relative frequencies of onsets are important and not absolute frequencies. \nThe cosine distance between two vectors A and B is defined as following: \n\\begin{equation}\n\\cos ({\\bf A},{\\bf B})= {{\\bf A}*{\\bf B} \\over \\|{\\bf A}\\| \\|{\\bf B}\\|} = \\frac{ \\sum_{i=1}^{n}{{\\bf A}_i{\\bf B}_i} }{ \\sqrt{\\sum_{i=1}^{n}{({\\bf A}_i)^2}} \\sqrt{\\sum_{i=1}^{n}{({\\bf B}_i)^2}} }\n\\end{equation}\n\nGiven the definition of the cosine similarity we can now compute the similarity between our self defined Tresillo rhythm and the billboard songs. \nFirst, however we will validate this similarity metric by testing it on our self compiled list of songs which do comprise a Tresillo rhythm and songs which do not comprise a Tresillo rhythm. We then compute the mean 'Tresillo-ness' (similarity to Tresillo rhythm) of both samples. By employing the Bootstrapping method we can also obtain a measurement of uncertainty, as provided by 2.5% and 97.5% confidence intervals.\n\n\n```python\nmodel_goodness_1 = 1\n```\n\n#### 2.3.2) Rhythm similarity measured with inverse euclidean distance, using tresillo template as centre. \n\nIn this section the similarity will be calculated by calculating the Euclidean distance between two points. We will go with the convention of \"high value -> high similarity\" and hence use the inverse distance.\n\n\n```python\nmodel_goodness_2 = 1\n```\n\n#### 2.3.3) Rhythm similarity measured with cosine simularity, using the centroid of tresillo songs as centre.\n\n\n```python\ncentroid = tresillo_train.sum(axis=0)/tresillo_train.shape[0]\ncentroid = np.array(centroid).reshape(1,-1)\n```\n\n\n```python\nsim_present_train = cosine_similarity(tresillo_train, centroid)\n```\n\nSimilarity in test set. 
Tresillo Present\n\n\n```python\nsim_present = cosine_similarity(np_tresillo_test, centroid)\nprint(f\"Mean Similarity +- Standard Dev: {round(np.mean(sim_present), 4)} +- {round(np.std(sim_present), 4)}\")\n```\n\n Mean Similarity +- Standard Dev: 0.9433 +- 0.0236\n\n\nSimilarity for songs without tresillo\n\n\n```python\nsim_not_present = cosine_similarity(np_non_tresillo, np.array(centroid).reshape(1,-1))\nprint(f\"Mean Similarity +- Standard Dev: {round(np.mean(sim_not_present), 4)} +- {round(np.std(sim_not_present), 4)}\")\n```\n\n Mean Similarity +- Standard Dev: 0.924 +- 0.025\n\n\n$S^*$ Model Goodness\n\n\n```python\nmodel_goodness_3 = round(np.mean(sim_present) / np.mean(sim_not_present), 5)\nmodel_goodness_3\n```\n\n\n\n\n 1.02092\n\n\n\n#### 2.3.4) Rhythm similarity measured with inverse euclidean distance, using the centroid of tresillo songs as centre.\n\nTraining Tresillo songs mean distance from the centroid\n\n\n```python\ndst = [distance.euclidean(point, centroid) for point in np_tresillo_train]\nprint(f\"Mean Similarity +- Standard Dev: {round(np.mean(dst), 4)} +- {round(np.std(dst), 4)}\")\n```\n\n Mean Similarity +- Standard Dev: 0.1123 +- 0.0362\n\n\nTest Tresillo songs mean distance from the centroid\n\n\n```python\ndst_test = [distance.euclidean(point, centroid) for point in np_tresillo_test]\nprint(f\"Mean Similarity +- Standard Dev: {round(np.mean(dst_test), 4)} +- {round(np.std(dst_test), 4)}\")\n```\n\n Mean Similarity +- Standard Dev: 0.1116 +- 0.028\n\n\nNot Tresillo songs mean distance from the centroid\n\n\n```python\ndst_test_nt_tresillo = [distance.euclidean(point, centroid) for point in np_non_tresillo]\nprint(f\"Mean Similarity +- Standard Dev: {round(np.mean(dst_test_nt_tresillo), 4)} +- {round(np.std(dst_test_nt_tresillo), 4)}\")\n```\n\n Mean Similarity +- Standard Dev: 0.1384 +- 0.0266\n\n\n$S^*$ Model Goodness for inverse distance\n\n\n```python\n# Since bigger distance implies low similarity\nmodel_goodness_4 = round(1 / (np.mean(dst_test) / np.mean(dst_test_nt_tresillo)), 4)\nmodel_goodness_4\n```\n\n\n\n\n 1.2392\n\n\n\n#### 2.3.5) Rhythm similarity measured with parameterized cosine simularity, using the centroid of tresillo songs as centre.\n\n\n```python\nclass ParameterizedDistance(nn.Module):\n def __init__(self, theta, device, distance_function):\n super().__init__()\n self.device = device\n if len(theta.shape) == 1:\n theta = theta.reshape(1, -1)\n self.theta = torch.nn.Parameter(torch.from_numpy(theta), requires_grad=True).to(device)\n self.distance_function = distance_function\n\n def forward(self, tresillo_vectors, not_tresillo_vectors, vanila_tresillo_vector):\n assert isinstance(tresillo_vectors, np.ndarray)\n assert isinstance(not_tresillo_vectors, np.ndarray)\n assert isinstance(vanila_tresillo_vector, np.ndarray)\n assert self.theta.shape[1] == tresillo_vectors.shape[1] == not_tresillo_vectors.shape[1] == \\\n vanila_tresillo_vector.shape[1]\n\n not_tresillo_vectors = torch.from_numpy(not_tresillo_vectors).to(self.device)\n tresillo_vectors = torch.from_numpy(tresillo_vectors).to(self.device)\n vanila_tresillo_vector = torch.from_numpy(vanila_tresillo_vector).to(self.device)\n\n parameterized_vector_not_tresillo = self.theta * not_tresillo_vectors\n parameterized_vector_tresillo = self.theta * tresillo_vectors\n parameterized_vector_vanilla = self.theta * vanila_tresillo_vector\n\n cosine_similarity_not_t = torch.mean(\n self.distance_function(parameterized_vector_not_tresillo, parameterized_vector_vanilla))\n cosine_similarity_t = 
torch.mean(\n self.distance_function(parameterized_vector_tresillo, parameterized_vector_vanilla))\n\n assert cosine_similarity_t.cpu().detach().numpy() != 0, \"0 Similarity between Tresillo set and Vanilla-Tresillo Beat\"\n\n return cosine_similarity_not_t / cosine_similarity_t\n \n def similarity(self, x, y):\n assert isinstance(x, np.ndarray)\n assert isinstance(y, np.ndarray)\n assert self.theta.shape[1] == x.shape[1] == y.shape[1]\n \n x = torch.from_numpy(x).to(self.device)\n y = torch.from_numpy(y).to(self.device)\n\n x = self.theta * x\n y = self.theta * y\n return self.distance_function(x, y)\n```\n\n\n```python\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nmodel = ParameterizedDistance(np.random.rand(1, 16), device, torch.cosine_similarity)\noptim = SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)\noptim.zero_grad()\nmodel.train()\nprev_validation_ratio = float('inf')\nfor i in range(100):\n loss = model(np_tresillo_train, np_non_tresillo, centroid)\n loss.backward()\n optim.step()\n with torch.no_grad():\n validation_ratio = model(np_tresillo_test, np_not_tre_validation, centroid)\n if validation_ratio > prev_validation_ratio:\n break\n prev_validation_ratio = validation_ratio\n```\n\n\n```python\nmodel_goodness_5 = 1/validation_ratio\nmodel_goodness_5 = model_goodness_5.cpu().detach().item()\nmodel_goodness_5\n```\n\n\n\n\n 1.040989750239579\n\n\n\n\n```python\nsimilarity = model.similarity(np_tresillo_test, vanilla_tresillo_vector).cpu().detach().numpy()\nprint(f\"Mean Similarity +- Standard Dev: {round(np.mean(similarity), 4)} +- {round(np.std(similarity), 4)}\")\n```\n\n Mean Similarity +- Standard Dev: 0.7023 +- 0.182\n\n\n\n```python\nsimilarity_not_tre = model.similarity(np_not_tre_validation, vanilla_tresillo_vector).cpu().detach().numpy()\nprint(f\"Mean Similarity +- Standard Dev: {round(np.mean(similarity_not_tre), 4)} +- {round(np.std(similarity_not_tre), 4)}\")\n```\n\n Mean Similarity +- Standard Dev: 0.4663 +- 0.0735\n\n\n\n```python\nmodel.theta.cpu().detach().numpy()\n```\n\n\n\n\n array([[0.72976688, 0.69940337, 0.65299366, 0.92980249, 0.62651748,\n 0.77724899, 0.03002122, 0.50504894, 0.15016441, 0.72617848,\n 0.66931015, 0.43817627, 0.8624732 , 0.53485404, 0.09186637,\n 0.22150938]])\n\n\n\n#### 2.3.6) Rhythm similarity measured with parameterized cosine simularity, using tresillo template as centre.\n\n\n```python\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nmodel = ParameterizedDistance(np.random.rand(1, 16), device, torch.cosine_similarity)\noptim = SGD(model.parameters(), lr=1e-3, weight_decay=1e-4)\noptim.zero_grad()\nmodel.train()\nprev_validation_ratio = float('inf')\nfor i in range(200):\n loss = model(np_tresillo_train, np_non_tresillo, vanilla_tresillo_vector)\n loss.backward()\n optim.step()\n with torch.no_grad():\n validation_ratio = model(np_tresillo_test, np_not_tre_validation, vanilla_tresillo_vector)\n if validation_ratio > prev_validation_ratio:\n break\n prev_validation_ratio = validation_ratio\n```\n\n\n```python\nmodel_goodness_6 = 1/validation_ratio\nmodel_goodness_6 = model_goodness_6.cpu().detach().item()\nmodel_goodness_6\n```\n\n\n\n\n 2.4386116929263775\n\n\n\n\n```python\nsimilarity = model.similarity(np_tresillo_test, vanilla_tresillo_vector).cpu().detach().numpy()\nprint(f\"Mean Similarity +- Standard Dev: {round(np.mean(similarity), 4)} +- {round(np.std(similarity), 4)}\")\n```\n\n Mean Similarity +- Standard Dev: 0.5919 +- 0.2708\n\n\n\n```python\nsimilarity_not_tre = 
model.similarity(np_not_tre_validation, vanilla_tresillo_vector).cpu().detach().numpy()\nprint(f\"Mean Similarity +- Standard Dev: {round(np.mean(similarity_not_tre), 4)} +- {round(np.std(similarity_not_tre), 4)}\")\n```\n\n Mean Similarity +- Standard Dev: 0.2427 +- 0.1563\n\n\n\n```python\nmodel.theta.cpu().detach().numpy()\n```\n\n\n\n\n array([[-0.12385222, 0.67118366, 0.64942834, 1.06137132, 1.18011032,\n 0.10959411, 0.0499411 , 0.4357717 , -0.12146487, 0.68657846,\n 0.58290389, 1.05967711, 1.0022932 , 0.85658089, 0.12920415,\n 0.37872593]])\n\n\n\nThe parameters of the similarity function hint that it is better to not consider the value of 8th beat. It gives high importance to 3rd, 5th, 11th and 12th beat out of which 3rd and 11th are part of the tresillo peaks.\n\n\n```python\nimport plotly.express as px\ndf_goodness = pd.DataFrame(columns=[\"Cosine with template\", \"Euclidean with template\", \"Cosine with centroid\", \"Euclidean with template\", \n \"Parameterized with template\", \"Parameterized with centroid\"])\ndf_goodness.append({\"Cosine with template\": model_goodness_1, \"Euclidean with template\": model_goodness_2, \n \"Cosine with centroid\": model_goodness_3, \"Euclidean with template\": model_goodness_4, \n \"Parameterized with template\": model_goodness_5, \"Euclidean with template\": model_goodness_6}, ignore_index=True)\nfig = px.bar(data_canada, x='year', y='pop')\nfig.show()\n```\n\n\n```python\n\n```\n\n### 2.4) Reducing noise: (pushkar)\n\n**Thoughts Pushkar**: \n* I don't think this is needed if the results from the previous section are reasonable.\n* This was a reserved approach in mind incase the parameterized similarity was still not good enough.\n* As per our research question, Clustering is not `required`, as we have already created a method with reasonable confidence, which does the task. We can have a section at the end, if needed, in which we can do such exploratory analysis.\n* Many of the points in this section should be covered/answered from the previous section.\n\n**TA instructions**: \n- How did you deal with the problems you mentioned in Milestone 2?\n\n- Which methods did you use to obtain the final results? Provide self-contained explanations and make sure to cite relevant literature where appropriate.\n\n- Explain your core calculations using equations.\n\n- Do not describe all the methods you tried out but only those that lead to the results: the final analysis is not an exploratory analysis anymore.\n\n- Specify any adjustments you made to pre-existing methods\n\n\n**Thoughts Aurel**: \n - Talk about how we got to the bar representation of our music. Furthermore, also discuss how we get to a 'perfect Tresillio Histogram'\n \n - Here I propose we try out several things and compare the results of several methods:\n \n a) A first big topics is how we define the perfect Tresillo: \n 1. Given predefined rythm patterns (by Florian)\n 2. Given songs with high Tresillio-ness\n - All instruments collapsed\n - Only key instruments\n - Certain instruments \n \n a) The second big question is how do we measure Tresillio-ness in the pop songs, I would suggest three approaches. All aproaches require that we first obtain 16 dimensional Rythm vectors of each song of interest: \n 1. Compare our vanilla self defined Tresillio rythm vector with all vectors of our pop songs. Measure distance or simularity with some commonly used distance measure in the literature\n 2. Very similar to 1) but this time use the Tresillio rythm vector as defined by our songs\n 3. 
Prior clustering of the rythm vectors. Obtaining centroid and measuring with it Tresillio ness in the charts (method as proposed by Pushkar)\n \n- Equations should be included in the prior part\n- Discuss critically any outliers, problems and limitations of our methodology. Extra focus on the question how we deal with related but not the same rythm (e.g.: Reggaeton)\n\n\n**Toughts Florian**\n- Following the last paper discussion one measuurement we could use is the Cosin distance \n- Not sure about an instrument selection. We can mention that a reduction seems not favorable as the tresillo rythm is presented with different instruments throughout the dataset\n\n\n\n## 3) Final Results\n\n### 3.1) Onset histograms and rhythm vectors \nIn this first part we will use onset histograms to compute rhythm vectors. \nTo obtain the onset histogram of a given song, we use the notes representation provided by the [ms3](https://pypi.org/project/ms3/) libary and colapse all musical onsets to one bar. In the example below we will compile the histograms for our self defined 'Vanilla Tresillo' and for the example song 'shape of you' by Ed Sheran. Then we will proceed to compute the rhythm vectors for both songs\n\n\n```python\n#paths to both examples\nshape_of_you = home_dir+'dataset/project_mscx/mscx_tresillos_billboard/Shape of you-Ed Sheran.mscx'\nvanilla_tresillo = home_dir+'dataset/project_mscx/mscx_tresillos/Vanilla_Tresillo.mscx'\n\n\n# get the note scores of both examples \ndf_shape_of_you = Score(shape_of_you).mscx.notes\ndf_vanilla_tresillo = Score(vanilla_tresillo).mscx.notes\n\n#calculate quarter note position for each note\ndf_shape_of_you['quarter_beats'] = (df_shape_of_you.mc_onset*16).astype('int32')\ndf_vanilla_tresillo['quarter_beats'] = (df_vanilla_tresillo.mc_onset*16).astype('int32')\n\n\nfig, ax = plt.subplots(1,2, figsize=(12,3))\n\nax[0].hist(df_shape_of_you['quarter_beats'], bins=16)\nax[1].hist(df_vanilla_tresillo['quarter_beats'], bins=16)\nax[0].set_xlabel('quarter_beats')\nax[1].set_xlabel('quarter_beats')\nax[0].xaxis.set_ticks(np.arange(0, 16, 1))\nax[1].xaxis.set_ticks(np.arange(0, 16, 1))\nax[0].set_ylabel('count')\nax[0].set_title('Shape of you')\nax[1].set_title('Vanilla Tresillo')\n\n#mention somewhere that we are working with quarter notes \n```\n\nIn a next step we want to compile rhythm of each song given this notion of histograms. E.g: every dimension incorporates the absolute frequency of onsets on one given quarter note. This is done as follows:\n\n\n```python\nrhythm_vector_shape_you = df_shape_of_you.groupby(['quarter_beats'])['mn'].agg(['count'])\nrhythm_vector_shape_you = rhythm_vector_shape_you.reindex(list(range(0,16)),fill_value=0).T\nrhythm_vector_shape_you \n```\n\n\n\n\n
(Output: the rhythm vector of 'Shape of you', a single "count" row of onset totals indexed by quarter_beats 0 to 15.)
\n\n\n\nIn the assumption that the rhythm vectors of distinct voices within a song might include information we want to preserve, we compiled one rhythm vector per instrument in a song as follows:\n\n\n```python\n# Define instruments \nshape_of_you_score = Score(shape_of_you)\ninstrument_dict = {}\nfor key in shape_of_you_score.mscx.metadata['parts']:\n for staff in shape_of_you_score.mscx.metadata['parts'][key].keys():\n instrument_dict[staff] = key\n\n\n#staff to voice/instruments \ndf_shape_of_you['instrument'] = [instrument_dict[el] if el in instrument_dict else 'na' for el in df_shape_of_you.staff]\n\n#compute rhythm vectors per voice\nrhythm_vector_shape_you_instruments = df_shape_of_you.groupby(['instrument','quarter_beats'])['mn'].agg(['count'])\nrhythm_vector_shape_you_instruments = rhythm_vector_shape_you_instruments.groupby(level=0).apply(lambda x: x.reset_index(level = 0).drop(['instrument'],axis=1).reindex(list(range(0,16)),fill_value=0).T)\nrhythm_vector_shape_you_instruments = rhythm_vector_shape_you_instruments.reset_index()\nrhythm_vector_shape_you_instruments\n```\n\n\n\n\n
(Output: one 16-dimensional rhythm vector per instrument of 'Shape of you' (Grand Piano, Marimba (untitled), Melodic Drum, Overdrive Gtr, Percussion, Tenor Sax, Woodblock, Xylophone), each a "count" row indexed by quarter_beats 0 to 15.)
\n\n\n\nIf we want to compile rhythm vectors (per voice) for all mscx files in one directory we can use following loop:\n\n\n```python\ndef rythm_vectors(in_dir, out_dir):\n list_sheet_music = os.listdir(in_dir)\n df_rythm_vectors =[]\n\n for idx, el in enumerate(list_sheet_music):\n if el[-4:] == 'mscx':\n\n \n #Get notes with onsets\n s = Score(dir_sheet_music+el)\n df = s.mscx.notes\n\n # Define instruments \n instrument_dict = {}\n for key in s.mscx.metadata['parts']:\n for staff in s.mscx.metadata['parts'][key].keys():\n instrument_dict[staff] = key\n\n\n #staff to instruments \n df['instrument'] = [instrument_dict[el] if el in instrument_dict else 'na' for el in df.staff]\n\n\n # define quarter beat \n df['quarter_beats'] = (df.mc_onset*16).astype('int32')\n\n\n #make rythm matrix & data frame\n df_histogram = df.groupby(['instrument','quarter_beats'])['mn'].agg(['count'])\n df_histogram = df_histogram.groupby(level=0).apply(lambda x: x.reset_index(level = 0).drop(['instrument'],axis=1).reindex(list(range(0,16)),fill_value=0).T)\n df_histogram = df_histogram.reset_index()\n\n df_histogram.insert(loc=0, column='song_artist', value=el[:-5])\n\n #concat to big rythm vector df\n if len(df_rythm_vectors) == 0: df_rythm_vectors = df_histogram\n\n df_rythm_vectors = pd.concat([df_rythm_vectors,df_histogram], axis=0)\n\n df_rythm_vectors.to_csv(out_dir, index = False)\n\n \ndir_sheet_music = home_dir + '/dataset/project_mscx/mscx_billboard/'\nout_dir = home_dir + '/dataset/rythm_vectors/rythm_vectors_billboard.csv'\nrythm_vectors(dir_sheet_music, out_dir)\n```\n\n### 3.2) Tresilio-ness with Cosine Similarity\nIn a first naive analysis we will employ the cosine similarity measurement to assess if the vector of a given song is similar to our defined Tresillo rhythm. We will need some helper functions. \nFollowing two helper functions will help us to collapse all rhythms of all instruments to one rhythm vector per song. \nThe second function calculates the cosine similarity between a pandas data frame of rhythm vectors and one single rhythm vector. \nThe third function calculates the 2.5% and the 97.5% confidence intervals of the distribution of a mean of a given data set. 
This function will allow us to assess, how big the uncerntaingty is in our data set and if the means of two distributions are indeed significantly different.\n\n\n```python\ndef calc_cosine_sim(rythm_vectors, tresillo_vector):\n rythm_vectors['cosine_sim_tresillo'] = cosine_similarity(rythm_vectors.loc[:,\"0\":\"15\"],tresillo_vector)\n return rythm_vectors\n\ndef bootstrap_CI(data, nbr_draws):\n means = np.zeros(nbr_draws)\n data = np.array(data)\n\n for n in range(nbr_draws):\n indices = np.random.randint(0, len(data), len(data))\n data_tmp = data[indices] \n means[n] = np.nanmean(data_tmp)\n return [np.nanpercentile(means, 2.5),np.nanpercentile(means, 97.5)]\n```\n\nNow lets read in the rhythm vector of our vanilla Tresillo.\n\n\n```python\ndf_synt_tresillo = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_tresillio.csv') # read in all defined 'tresillos' and variations\ndf_vanilla_tresillo = df_synt_tresillo[df_synt_tresillo['song_artist']=='Vanilla_Tresillo'].loc[: ,\"0\":\"15\"] # only use the 16d vector of our vanilla_tresillo\nvector_vanilla_tresillo = np.asarray(df_vanilla_tresillo)\nvector_vanilla_tresillo\n```\n\n\n\n\n array([[20, 0, 0, 20, 0, 0, 20, 0, 20, 0, 0, 20, 0, 0, 20, 0]])\n\n\n\nWe will now calculate the cosine similarity between our vanilla Tresillo vector and between a hand selected set of Tresillo songs. \nThis set of songs has been selected by us ourself and none of those 'validation' songs are included in the billboard data sets.\n\n\n```python\n\ntresillo_test_set_vectors = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_tresillo_not_billboard.csv')\n#tresillo_test_set_vectors = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_tresillos_billboard.csv')\ntresillo_test_set_vectors = collapse_normalize_vectors(tresillo_test_set_vectors)\ntresillo_test_set_vectors = calc_cosine_sim(tresillo_test_set_vectors, vector_vanilla_tresillo)\nprint(tresillo_test_set_vectors['cosine_sim_tresillo'])\n```\n\n song_artist\n Attention Charlie Puth 0.613390\n Chandelier Sia 0.605698\n Cold Water Major Lazer 0.896750\n Hips Don't Lie Shakira 0.819003\n I Don't Care Ed Sheeran & Justin Bieber 0.876542\n One Dance - Drake & Wizkid, Kyla Reid 0.717454\n Rockabye Baby Clean Bandit 0.880677\n Sorry Justin Bieber 0.853424\n Titanium David Guetta 0.652046\n Name: cosine_sim_tresillo, dtype: float64\n\n\nLet's calculate the mean Tresillo-ness and also the 2.5% and 97.5% confidence intervals of the mean as obtained by bootstrapping.\n\n\n```python\nprint('mean Tresillo-ness in the test set: ', tresillo_test_set_vectors['cosine_sim_tresillo'].mean())\nlower_tresillo_ci, upper_tresillo_ci = bootstrap_CI(tresillo_test_set_vectors['cosine_sim_tresillo'], 100)\nprint('tresillo upper and lower ci on 100 draws: ', lower_tresillo_ci, upper_tresillo_ci)\n```\n\n mean Tresillo-ness in the test set: 0.7683315382006799\n tresillo upper and lower ci on 100 draws: 0.6808770780048731 0.8481138158323536\n\n\nLet's do exact same thing, but with a comparison data set of songs which we kno to not include any Tresillo songs. 
\nAlso calculating the mean and the confidence intervals of the not Tresillo songs.\n\n\n```python\nnon_tresillio_vectors = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_not_tresillo_validation.csv')\n#non_tresillio_vectors = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_no_tresillos_4_4.csv')\n\nnon_tresillio_vectors = collapse_normalize_vectors(non_tresillio_vectors)\nnon_tresillio_vectors = calc_cosine_sim(non_tresillio_vectors, vector_vanilla_tresillo)\nprint(non_tresillio_vectors['cosine_sim_tresillo'])\n\nprint('mean Tresillo-ness in the test set: ', non_tresillio_vectors['cosine_sim_tresillo'].mean())\nlower_non_tresillo_ci, upper_non_tresillo_ci = bootstrap_CI(non_tresillio_vectors['cosine_sim_tresillo'], 100)\nprint('tresillo upper and lower ci on 100 draws: ', lower_non_tresillo_ci, upper_non_tresillo_ci)\n```\n\n song_artist\n Dress You Up 0.666450\n Fear of the Dark 0.689443\n Folsom Prison Blues 0.657568\n Immigrant Song 0.584275\n Jailhouse Rock 0.593940\n One Step Closer 0.523399\n Sweet Child O'Mine 0.724700\n Sweet Home Alabama 0.534338\n Name: cosine_sim_tresillo, dtype: float64\n mean Tresillo-ness in the test set: 0.6217641548092555\n tresillo upper and lower ci on 100 draws: 0.5792063648094466 0.6717705312519476\n\n\nAlready looking at the means of the two samples and their confidence intervalls, they seem to be significantly different. \nHowever let's also calculate the t-test statistics to ensure that the distributions are actually different.\n\n\n```python\nttest_pvalue = stats.ttest_ind(tresillo_test_set_vectors['cosine_sim_tresillo'], non_tresillio_vectors['cosine_sim_tresillo']).pvalue\nprint('p value that the means are the same: ', ttest_pvalue)\n```\n\n p value that the means are the same: 0.009618772484088895\n\n\n**Discussion:** \nLooking at the the Tresillio cosine similarity metrics of both data sets and comparing the distributions of the means whit each others (confidence intervals and t-test), we see that their mean Tresillo-ness is indeed significantly diffferent. \nHowever, we also see that 1) this Tresillo-ness measurement seems to be quite noisy 2) The distribution of the two samples are not as far apart as we would like them to be. \nThe noisiness of this metric, can be especially seen, if one looks at the individual cosine Tresillo-ness values of the songs. Here we see that there is great variance in Tresillo-ness in the hand selected Tresillo data set. Values can be as small as 0.6, which corresponds to the value of a none Tresillo song. \nFurthermore, it seems that also songs wich we classified to have no Tresillo, have a cosine Tresillo-ness up to 0.72. \nIn general we can state that the cosine similarity methods seems to identify a ceratin Tresillo-ness, however it is questionable how robust this method is.\n\n### 3.3) Tresilio-ness over Time\n\nNow that we explored several ways to compute Tresillo-ness, we can calculate the Tresilo-ness of ou billboard data set. \nFirst let us load the rhythm vectors of our billboard songs and let's also merge it to the metadata of the billboard songs. 
\nThe metadata of the billboard songs includes time signatures with which we can Tresillo-ness over time.\n\n\n```python\ndf_billboard = pd.read_csv(home_dir + '/dataset/rythm_vectors/rythm_vectors_billboard.csv')\ndf_billboard_meta = pd.read_csv(home_dir + '/dataset/billboard_data_sets/billboard_1999-2019_unique_top_20_selection.csv')\n\n#calculate cosine simularity\ndf_billboard = collapse_normalize_vectors(df_billboard)\ndf_billboard_sim = calc_cosine_sim(df_billboard, vector_vanilla_tresillo)\n\n#prepare for merge with meta data\ndf_billboard_sim['song_artist'] = df_billboard_sim.index\ndf_billboard_sim['song'] = df_billboard_sim.song_artist.apply(lambda x: x.split('_')[0])\ndf_billboard_sim['artist'] = df_billboard_sim.song_artist.apply(lambda x: x.split('_')[1][:-1])\ndf_billboard_sim = df_billboard_sim.drop(['song_artist'], axis=1)\n\n#merge data frames\ndf_billboard_merged = df_billboard_sim.merge(df_billboard_meta, left_on=['song','artist'], right_on=['Name', 'Artists'], how='left')\ndf_billboard_reduced = df_bilboard_merged[['Name', 'Artists', 'Peak.position', 'Week', 'Genre', 'cosine_sim_tresillo']]\n#let's look at some songs with very high tresillo-ness\ndf_billboard_reduced[df_billboard_reduced['cosine_sim_tresillo']>0.85]\n\n```\n\n\n\n\n
|     | Name | Artists | Peak.position | Week | Genre | cosine_sim_tresillo |
|-----|------|---------|---------------|------|-------|---------------------|
| 89  | Cheap Thrills | Sia | 19.0 | 2016-06-18 | Jamaica,Remix,Australia,Rap,Synth-Pop,Pop | 0.908945 |
| 293 | Rather Be | Clean Bandit | 19.0 | 2014-08-08 | Electro-Pop,Deep House,Eurodance,Dance-Pop,Cla... | 0.861075 |
| 319 | Shape Of You | Ed Sheeran | 1.0 | 2017-01-28 | Tropical House,Dancehall,Pop,UK | 0.915565 |
| 387 | Treat You Better | Shawn Mendes | 20.0 | 2016-07-30 | Canada,Pop | 0.883772 |
| 389 | Turn Me On | Kevin Lyttle | 20.0 | 2004-06-23 | Pop | 0.937243 |
| 423 | Where Have You Been | Rihanna | 18.0 | 2012-05-25 | House,Electro-Pop,Pop | 0.947448 |
\n\n\n\nLet's plot weekly cosine Tresillo-ness \n\n\n```python\ndf_bilboard_merged_weekly = df_bilboard_merged.groupby(['Week'])['cosine_sim_tresillo'].agg(['mean']).reset_index()\nplt.plot(pd.to_datetime(df_bilboard_merged_weekly.Week), df_bilboard_merged_weekly['mean'])\nplt.title('Weekly Cosine Tresillo-ness in the Top 20 Billboards')\nplt.xlabel('Time')\nplt.ylabel('Cosine Tresillo-ness')\n```\n\nThe time trend above is very noisy and hard to read. By calculating a 4 weeks moveing average we might get smoother results.\n\n\n```python\ndf_bilboard_merged_weekly['rolling_mean'] = df_bilboard_merged_weekly.iloc[:,1].rolling(window=4).mean()\n\nplt.plot(pd.to_datetime(df_bilboard_merged_weekly.Week), df_bilboard_merged_weekly['rolling_mean'])\nplt.title('4 Weeks Moving Average Cosine Tresillo-ness in the Top 20 Billboards')\nplt.xlabel('Time')\nplt.ylabel('Cosine Tresillo-ness')\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n**TA instructions**: \n- Present your results in relation to your research question.\n- Present them in a logical order that does not have to be the order in which you achieved them.\n\n**Thoughts Aurel:** \n\n 4. Discussion of the clustering method, either k-mean clustering or something we dont have to set the cluster number\n 5. Finding Tresillio-ness in the pop charts with all three methods a) Vanilla Tresillio-ness b) Tresillio songs vector c) \n \n \n\n## 4) Outlook on final interpretation\n\nPoints to discuss as stated by TAs: \n- Interpreting your results is the final step that you will do in preparing Milestone 4 (your presentations). Please end your submission by giving a first,preliminary outlook on this final step: what aspects of your results do you find interesting with respect to your hypotheses and previous literature? What do you think might be the main points to elaborate upon in the discussion? \n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## References\n\n- Chin, F. and Wu, S. (1992). An efficient algorithm for rhythm-finding.Computer MusicJournal, 16(2):35\u201344.\n- Dixon, S., Gouyon, F., Widmer, G., et al. (2004). Towards characterisation of music viarhythmic patterns. InISMIR.\n- Huron, D. and Ommen, A. (2006). An empirical study of syncopation in american popularmusic, 1890\u20131939.Music Theory Spectrum, 28(2):211\u2013231.\n- Palmer, C. and Krumhansl, C. L. (1990). Mental representations for musical meter.Journalof Experimental Psychology: Human Perception and Performance, 16(4):728.\n- Panteli, M., Bogaards, N., Honingh, A. K., et al. (2014). Modeling rhythm similarity forelectronic dance music. InISMIR, pages 537\u2013542.\n- Parry, M. and Essa, I. (2003). Rhythmic similarity through elaboration.\n- Pohle, T., Schnitzer, D., Schedl, M., Knees, P., and Widmer, G. (2009). On rhythm andgeneral music similarity. InISMIR, pages 525\u2013530. \n- Rohrmeier, M. (2020). Towards a formalization of musical rhythm. InProc. of the 21st Int.Society for Music Information Retrieval Conf\n- Floyd, Samuel A. 
\"Black music in the circum-Caribbean.\" American Music (1999): 5-7.\n\n\n```python\n\n```\n", "meta": {"hexsha": "dcd4fb9b56bf5c068766b42457ccc252bfeb7fe7", "size": 200232, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/milestone_3_sugessted_structure-Pushkar.ipynb", "max_stars_repo_name": "pushkarjajoria/Tresillo", "max_stars_repo_head_hexsha": "9b72373746192d00d82c6e9c2de19d70648108eb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/milestone_3_sugessted_structure-Pushkar.ipynb", "max_issues_repo_name": "pushkarjajoria/Tresillo", "max_issues_repo_head_hexsha": "9b72373746192d00d82c6e9c2de19d70648108eb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/milestone_3_sugessted_structure-Pushkar.ipynb", "max_forks_repo_name": "pushkarjajoria/Tresillo", "max_forks_repo_head_hexsha": "9b72373746192d00d82c6e9c2de19d70648108eb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 97.5789473684, "max_line_length": 43848, "alphanum_fraction": 0.8046266331, "converted": true, "num_tokens": 13472, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5312093733737563, "lm_q2_score": 0.5389832206876841, "lm_q1q2_score": 0.28631293892047366}} {"text": "

Chapter 1: Restricted Hartree Fock (RHF) for Simple Atom (He and Be) using Slater Type Orbital (STO)\n with Double Zeta\u00a0Basis in Python

\n
Integral calculated by sympy
\n\n

\n SIMUC PROJECT\n

\n\n

\n By Jinze (Richard) Xue   / \n Source code\n

\n\n\n1. It will be better to understand if you have read Levine, Quantum Chemistry, 7th Edition, chapter 11 and 14. \nThere is an example calculation of Helium atom at page 412-414.\n\n1. For well-organized and documented python file, please find in github: https://github.com/yueyericardo/simuc/tree/master/notebooks/pchem/hartree-fock/sto\n\n1. This notebook is self-contained. \nHowever, if you wanna try in other place, you should put `hf.py` file in the same directory where you wanna import it. \n\n### Chapters\n- Chapter 1: Restricted Hartree Fock (RHF) for Simple Atom (He and Be) using Slater Type Orbital (STO) with Double Zeta Basis in Python\n- Chapter 2: Slater Type Orbital (STO) VS Gaussian Type Orbital (GTO) \u3010WIP\u3011 \n Interactive tool \u3010WIP\u3011 \n- Chapter 3: Restricted Hartree Fock (RHF) for Simple diatomic molecule (H2 and HeH+) using Gaussian Type Orbital (GTO) with STO-3G Basis in Python \u3010Code is on [github](https://github.com/yueyericardo/simuc/tree/master/notebooks/pchem/hartree-fock/gto)\u3011\u3010Document WIP\u3011 \n- Chapter 4: Restricted Hartree Fock (RHF) for Polyatomic Molecule using Simple and Powerful Package: PSI4\u3010WIP\u3011 \n\n### Overview\n1. [Introduction](#1.-Introduction)\n1. [Born\u2013Oppenheimer approximation](#2.-Born\u2013Oppenheimer-approximation)\n2. [Hartree fock approximation](#3.-Hartree-fock-approximation)\n3. [Why build matrix and how to solve secular equation?](#4.-Why-build-matrix-and-how-to-solve-secular-equation?)\n4. [How to build matrix](#5.-How-to-build-matrix)\n5. [Secular Equation](#6.-Secular-Equation)\n6. [Total Energy](#7.-Total-Energy)\n7. [Utils](#8.-Utils)\n8. [Run Hartree Fock](#9.-Run-Hartree-Fock)\n9. [Test](#10.-Test)\n10. [Excise - Plot the charge density of orbitals](#11.-Excise---Plot-the-charge-density-of-orbitals)\n10. [Limitations](#12.-Limitations)\n11. [Reference](#13.-Reference)\n\n\n---\n\n### 1. Introduction\nIn this notebook, we will use Helium Atom as an example to explain Restricted Hartree Fock (RHF) method. \n(Restricted means only for closed shell molecule, orbitals are either doubly occupied or empty)\n\nHelium has two electrons on 1s orbital. \n\nOne-electron wavefunction of 1s orbital could be written as $^*$\n$$\\chi_{1 \\mathrm{s}}^{\\mathrm{STO}}=\\left(\\frac{\\zeta^{3}}{\\pi}\\right)^{1 / 2} \\exp (-\\zeta r)$$\n

\n [*] \n For simplicity, there is no angular part since it's an s orbital.\n
\n $\\zeta$ is a constant related to the effective charge of the nucleus, the nuclear charge being partly shielded by electrons.\n

\nWith double zeta, we are saying we will buil two $\\chi_{1 \\mathrm{s}}^{\\mathrm{STO}}$ function. And the final 1s orbital of Helium will be a linear combination of these two STO. $$\\phi_1 = c_{11} \\chi_1 + c_{21} \\chi_2 \\quad \\text{(occupied orbital)}\\;\\; \\tag {Eq1}$$ $$\\phi_2 = c_{12} \\chi_1 + c_{22} \\chi_2 \\quad \\text{(unoccupied orbital)}$$\n\n

The goal of hartree fock is using Self-Consistent Field method (SCF) to optimize coefficients to get close to the real wavefunction.

\n\nFor Helium, from reference [[1]](https://www.sciencedirect.com/science/article/pii/S0092640X74800161), two double zeta are 1.45363, 2.91093. \nWe could build these two STO with each zeta using sympy like below.\n\n\n```python\nimport hf\nimport sympy as sp\nimport numpy as np\nimport scipy.linalg\nfrom sympy import oo\nfrom sympy import diff\nimport time\nimport matplotlib\nfrom matplotlib import pyplot as plt\nfrom IPython.display import Math\nsp.init_printing()\n%matplotlib inline\n```\n\n\n```python\nr, r1, r2, zeta = sp.symbols(\"r, r1, r2, zeta\")\nn = sp.Symbol('n', integer=True)\n```\n\n\n```python\ndef STO(zeta, n, r=r):\n \"\"\"\n Define a Slater Type Orbital function using sympy.\n\n INPUT:\n zeta: zeta for the STO.\n n: principle quantum number for the STO.\n \"\"\"\n f = r ** (n - 1) * sp.exp(-zeta * r)\n # normalization\n N = sp.sqrt(1 / sp.integrate(4 * sp.pi * f * f * r * r, (r, 0, +oo)))\n return N * f\n```\n\n\n```python\nf1s_1 = hf.STO(zeta=1.45363, n=1)\nf1s_2 = hf.STO(zeta=2.91093, n=1)\n\ndisplay(Math('$\\chi_1 :'))\ndisplay(f1s_1)\ndisplay(Math('$\\chi_2 :'))\ndisplay(f1s_2)\n```\n\n Note: More detail about this will be covered at part 2, but I want to point out the main idea at beginning. So it could remind you what's our final goal, when you feel distracted.\n\nBy using Roothan equations below , which could be solved self-consistently for the orbital coefficient matrix **C** and orbital energy eigenvalues $\\epsilon_i$ by iterations, we could finally \n1. get close to the real wavefunction (using improved Coefficients for Eq1).\n2. get close to correct Molecular orbital energies (using $\\epsilon_i$), to get the real total energy\n\n$${\\mathbf {F}}{\\mathbf {C}}={\\mathbf {S}}{\\mathbf {C}}{\\mathbf {\\epsilon }}$$\n

\n Note: \n F (Fock matrix), S (Overlap matrix) are inputs.
\n The S matrix is fixed; the F matrix changes every iteration because of the improved C.
\n C (Coefficient matrix) and $\\epsilon_i$ (eigenvalues) are results.\n
\n

\n\n---\n\n### 2. Born\u2013Oppenheimer approximation\nUsing Born\u2013Oppenheimer approximation, molecular Hamiltonian could be expressed as \n\n$${\\displaystyle H=H_{\\text{e}}+T_{\\text{n}}}$$\n

\n Note: \n e (electron), n (nuclear)\n
\n

\nWhere\n\n$${\\displaystyle H_{\\text{e}}=-\\sum _{i}{{\\frac {1}{2}}\\nabla _{i}^{2}}-\\sum _{i,\\alpha}{\\frac {Z_{\\alpha}}{r_{i\\alpha}}}+\\sum _{i>j}{\\frac {1}{r_{ij}}}+\\sum _{\\beta>\\alpha}{\\frac {Z_{\\alpha}Z_{\\beta}}{R_{\\alpha \\beta}}}\\quad {\\text{and}}\\quad T_{\\text{n}}=-\\sum _{\\alpha}{{\\frac {1}{2M_{\\alpha}}}\\nabla _{\\alpha}^{2}}}$$\n

\n Note: \n i, j (electron), $\\alpha$, $\\beta$ (nuclear)\n
\n

\n\n$H_e$: \n1. sum of kinetic-energy operators for each electron\n2. sum of nuclear\u2013electronic Coulombic attraction terms\n3. sum of electron-electron repulsion energy\n4. sum of nuclear-nuclear repulsion energy ($V_{NN}$)\n\n$T_n$: \n1. sum of kinetic-energy operators for each nuclear\n\nClassically, during the time of a cycle of electronic motion, the change in nuclear configuration is negligible. Thus, considering the nuclei as fixed, we omit the nuclear kinetic-energy terms $T_n$. So commonly when we say hartree fock energy, we are only talking about $H_e$ term.\n\n$${\\displaystyle H_{\\text{e}}=-\\sum _{i}{{\\frac {1}{2}}\\nabla _{i}^{2}}-\\sum _{i,\\alpha}{\\frac {Z_{\\alpha}}{r_{i\\alpha}}}+\\sum_{i} \\sum_{j>i}{\\frac {1}{r_{ij}}} + V_{NN}}$$\n\nThe first 3 terms together is purely electronic Hamiltonian. \n\n---\n\n### 3. Hartree fock approximation\n\nBecause of the inter-electronic repulsion term $\\frac{1}{r_{i j}}$, the Schr\u00f6dinger equation for a molecule wavefunction is not separable. So the true wave function cannot be written as the product of n one-electron functions. \n\nThe essence of hartree-fock approximation is to treat electron-electron repulsion in an average way, so this complicated many-electron problem could be solved as one-electron problem. \n\nThen the molecular wavefunction could be written as a product of all one-electron wavefunctions. \nThe functions chosen to represent each electron is based on the hydrogen-like atomic wavefunction. ( Note: This is the reason why we use Slater type Orbital (STO), because it's transformed from exact wavefunction for an electron around a hydrogen atom)\n\n$$\n\\Psi({r_1 r_2})=\\phi_{1}\\left(\\boldsymbol{r}_{1}\\right) \\phi_{2}\\left(\\boldsymbol{r}_{2}\\right)\n$$\n\nHowever this product does not satisfy antisymmetric requirements (which means if you swap electrons the sign of the wavefunction should invert). This problem can be overcome by taking a linear combination of both products:\n$$\n\\begin{aligned} \\Psi\\left(\\mathbf{r}_{1}, \\mathbf{r}_{2}\\right) &=\\frac{1}{\\sqrt{2}}\\left\\{\\phi_{1}\\left(\\mathbf{r}_{1}\\right) \\phi_{2}\\left(\\mathbf{r}_{2}\\right)-\\phi_{1}\\left(\\mathbf{r}_{2}\\right) \\phi_{2}\\left(\\mathbf{r}_{1}\\right)\\right\\} \\\\ &=\\frac{1}{\\sqrt{2}}\\left|\\begin{array}{ll}{\\phi_{1}\\left(\\mathbf{r}_{1}\\right)} & {\\phi_{2}\\left(\\mathbf{r}_{1}\\right)} \\\\ {\\phi_{1}\\left(\\mathbf{r}_{2}\\right)} & {\\phi_{2}\\left(\\mathbf{r}_{2}\\right)}\\end{array}\\right| \\end{aligned}\n$$\n\nIn this way, the Schr\u00f6dinger equation would then be separated into n one-electron hydrogenlike equations.\n\nSuppose there is an operator called Fock operator $\\hat F$, the eigenvalue corresponding to $\\hat F$ on a one-electron wavefunction is the energy related to this electron.\n\nEach electron energy $\\varepsilon_{i}$ will include \n1. kinetic-energy for this electron\n2. sum of coulombic attraction between this electron and all nuclears\n3. the potential of this electron interacting with an averaged distribution of other electrons, (which is calculated by treating all of the other electrons within the molecule as a smooth distribution of negative charge, and this is the major simplification inherent in the Hartree\u2013Fock method). \n(What does this means? explained at below )\n\n Note: If we take$\\sum_{i}^{n} {\\varepsilon_{i}}$, we will count each interelectronic repulsion twice, which needs to be subtracted when calculating total energy of molecule. 
\n\nWhat's the meaing of the potential of this electron interacting with an averaged distribution of other electrons?\nsuppose we want to find the electron-electron repulsion potential ($\\text{Vee}$) of electron 1 with electron 2.\n\n\n$$\\text{Vee}_{1 2}=\\left\\langle\\phi (1)^*|\\frac{1}{r_{1 2}} | \\phi (1)\\right\\rangle$$\nThis is not solvable, because we don't know the location of electron (2).\n\nAnd hartree fock simplify this to: \n(recall that $\\left|\\phi(2)\\right|^{2}$ is the probability density of electron (2)\n\n$$\\text{Vee}_{1 2}=\\left\\langle\\phi (1)^*|\\;\\;\\; \\int \\frac{\\left|\\phi(2)\\right|^{2}}{r_{12}} d v_{2} \\;\\;\\; | \\phi (1)\\right\\rangle \\tag{Eq2}$$\n\nBy using the probability density of electron (2) $\\left|\\phi(2)\\right|^{2}$, \n\n$$\\text{infinitesimal charge density} * \\text{infinitesimal volume} = \\text{infinitesimal charge}$$\n\nand integrate over all space, we could get the repulsion energy above. \nIt could be also rewritten as, which is more commonly used: \n\n$$\n\\text{Vee}_{1 2}= \\int \\int \\frac{\\left|\\phi(2)\\right|^{2} \\phi (1)^* \\phi (1)}{r_{12}} d v_{2}d v_{1}\n$$\n$$\n= \\int \\int \\frac{\\phi(2)^*\\phi(2) \\phi (1)^* \\phi (1)}{r_{12}} d v_{2}d v_{1}\n$$\n\n

Fock operator

\nThe operator corresponding to one electron energy $\\varepsilon_{i}$ is fock operator. (For the restricted case which assumes closed-shell orbitals and single- determinantal wavefunctions)\n\n$$\\hat{F}(i)=-\\frac{1}{2} \\nabla_{i}^{2}-\\sum_{\\alpha} \\frac{Z_{\\alpha}}{r_{1 \\alpha}} + \\sum_{j=1}^{n / 2}\\left[2 \\hat{J}_{j}(i)-\\hat{K}_{j}(i)\\right]$$\n\nwhere:\n\n- ${\\displaystyle {\\hat {F}}(i)}$ is the Fock operator for the i-th electron in the system, \n- $-\\frac{1}{2} \\nabla_{i}^{2}-\\sum_{\\alpha} \\frac{Z_{\\alpha}}{r_{1 \\alpha}} $ are kinetics energy and sum of nuclear-electron attraction respectively. \nThese two terms are often considered as the core terms, and refered as $\\hat{H}^{\\mathrm{core}}_i$\n\nSo Fock operator could be rewritten as \n$$\\hat{F}(i)=\\hat{H}_{core}(i)+\\sum_{j=1}^{n / 2}\\left[2 \\hat{J}_{j}(i)-\\hat{K}_{j}(i)\\right]$$\n\n- ${\\displaystyle n}$ is the number of electrons and ${\\displaystyle {\\frac {n}{2}}}$ is the number of occupied orbitals in the closed-shell system, \n- ${\\displaystyle {\\hat {J}}_{j}(i)}$ is the Coulomb operator, defining the repulsive force between the j-th and i-th electrons in the system, (explained at Eq2)\n$$\n\\hat{J}_{j}(1) f(1) =f(1) \\int\\left|\\phi_{j}(2)\\right|^{2} \\frac{1}{r_{12}} d v_{2}\n$$\n\n\n

\n Note: \n ${\\displaystyle f(1)}$, ${\\displaystyle f(2)}$ are the one-electron wavefunctions acted upon by the exchange operator as functions of the electron positions,
and ${\\displaystyle \\phi _{j}(1)}$ and ${\\displaystyle \\phi _{j}(2)}$ are the one-electron wavefunction of the jth electron as functions of the positions of the electrons.\n
\n

\n \n- ${\\displaystyle {\\hat {K}}_{j}(i)}$ is the exchange operator, defining the quantum effect produced by exchanging two electrons. \n\n$$\\hat{K}_{j}(1) f(1) =\\phi_{j}(1) \\int \\frac{\\phi_{j}^{*}(2) f(2)}{r_{12}} d v_{2}$$\n\nThe Coulomb operator is multiplied by two since there are two electrons in each occupied orbital. The exchange operator is not multiplied by two since it has a non-zero result only for electrons which have the same spin as the i-th electron.\n\n

Roothaan equation

\nThe eigenvalues of the Fock operator are the molecular orbital energies. Electrons fill these orbitals starting from the lowest level, and since we are dealing with a closed-shell molecule, each orbital holds either 2 or 0 electrons. \n\nThe Roothaan equations are a representation of the Hartree–Fock equation in a non-orthonormal basis set (molecular orbitals represented by linear combinations of atomic orbitals), which can be solved in a matrix form that computers handle efficiently.\n\n$${\mathbf {F}}{\mathbf {C}}={\mathbf {S}}{\mathbf {C}}{\mathbf {\epsilon }}$$\n

\n Note: \n F (Fock matrix), S (Overlap matrix) are inputs.
\n The S matrix is fixed; the F matrix changes every iteration because of the improved C.
\n C (Coefficient matrix) and $\\epsilon_i$ (eigenvalues) are results.\n
\n

\n\n\n

Why iterations?

\n\nSince the Fock operator depends on the orbitals used to construct the corresponding Fock matrix, the eigenfunctions of the Fock operator are in turn new orbitals, which can be used to construct a new Fock operator. In this way, the Hartree\u2013Fock orbitals are optimized iteratively until the change in total electronic energy falls below a predefined threshold. In this way, a set of self-consistent one-electron orbitals is calculated. The Hartree\u2013Fock electronic wave function is then the Slater determinant constructed from these orbitals. Following the basic postulates of quantum mechanics, the Hartree\u2013Fock wave function can then be used to compute any desired chemical or physical property within the framework of the Hartree\u2013Fock method and the approximations employed.\n\n---\n\n### 4. Why build matrix and how to solve secular equation?\nBecause in this way, computer could solve it efficiently. \n\n
\n\n Why build matrix? Click to uncollapse \n(This part is from Calculating Orbital Energies and Expansion Coefficients - Chemistry LibreTexts\n\n\nSolving secular equation is actually calculating orbital energies and coefficients based on **variation principle**, which states that any approximate wavefunction must have a higher energy than the true wavefunction. \n(This part is from [Calculating Orbital Energies and Expansion Coefficients - Chemistry LibreTexts](https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Symmetry_(Vallance)/20%3A_Calculating_Orbital_Energies_and_Expansion_Coefficients) .)\n\nLet's ignore the coulomb $\\hat J$ and exchange $\\hat K$ operator in fock operator, what's left is one electron Hamiltonian core operator $\\hat H_{core}$. So how to calculate eigenvalue of Hamiltonian core operator and it's corresponding wavefunction of Helium with double zeta basis? \n(Fock operator could be used as the same way.)\n\n$$\nE=\\frac{\\langle\\phi|\\hat{H}| \\phi\\rangle}{\\langle\\phi | \\phi\\rangle}\n$$\nWhere \n$$\\phi = c_{1} \\chi_1 + c_{2} \\chi_2 \\quad \\text{(unnormalized)} $$\nPlug in $\\phi$ and expand\n\n$$\n\\begin{aligned} E &=\\frac{\\left\\langle c_{1} \\chi_{1}+c_{2} \\chi_{2}|\\hat{H}| c_{1} \\chi_{1}+c_{2} \\chi_{2}\\right\\rangle}{\\left\\langle c_{1} \\chi_{1}+c_{2} \\chi_{2} | c_{1} \\chi_{1}+c_{2} \\chi_{2}\\right\\rangle} \\\\ &=\\frac{\\left\\langle c_{1} \\chi_{1}|\\hat{H}| c_{1} \\chi_{1}\\right\\rangle+\\left\\langle c_{1} \\chi_{1}|\\hat{H}| c_{2} \\chi_{2}\\right\\rangle+\\left\\langle c_{2} \\chi_{2}|\\hat{H}| c_{1} \\chi_{1}\\right\\rangle+\\left\\langle c_{2} \\chi_{2}|\\hat{H}| c_{2} \\chi_{2}\\right\\rangle}{\\left\\langle c_{1} \\chi_{1}| c_{1}\\chi_{1}\\right\\rangle+\\left\\langle c_{1} \\chi_{1}| c_{2} \\chi_{2}\\right\\rangle+\\left\\langle c_{2} \\chi_{2} | c_{1} \\chi_{1}\\right\\rangle+\\left\\langle c_{2} \\chi_{2} | c_{2} \\chi_{2}\\right\\rangle} \\\\ &=\\frac{c_{1}^{2}\\left\\langle\\chi_{1}|\\hat{H}| \\chi_{1}\\right\\rangle+ c_{1} c_{2}\\left\\langle\\chi_{1}|\\hat{H}| \\chi_{2}\\right\\rangle+ c_{2} c_{1}\\left\\langle\\chi_{2}|\\hat{H}| \\chi_{1}\\right\\rangle+ c_{2}^{2}\\left\\langle\\chi_{2}|\\hat{H}| \\chi_{2}\\right\\rangle}{c_{1}^{2}\\left\\langle\\chi_{1} | \\chi_{1}\\right\\rangle+ c_{1} c_{2}\\left\\langle\\chi_{1} | \\chi_{2}\\right\\rangle+ c_{2} c_{1}\\left\\langle\\chi_{2} | \\chi_{1}\\right\\rangle+ c_{2}^{2}\\left\\langle \\chi_{2}|\\chi_{2}\\right\\rangle} \\end{aligned}\n$$\n\nIf define\n$$\nH_{i j}=\\left\\langle\\chi_{i}|\\hat{H}| \\chi_{j}\\right\\rangle\n\\quad and \\quad \nS_{i j}=\\left\\langle\\chi_{i} | \\chi_{j}\\right\\rangle \n\\tag {Eq3}\n$$\n

\n Note: This is where H matrix (same to Fock matrix) and S matrix comes from! We will come build these matrix at part 5.\n
\n

\nand note that $H_{ij} = H_{j}$ and $S_{ij} = S_{ji}$ , \n\n$$\nE=\\frac{c_{1}^{2} H_{11}+2 c_{1} c_{2} H_{12}+c_{2}^{2} H_{22}}{c_{1}^{2} S_{11}+2 c_{1} c_{2} S_{12}+c_{2}^{2} S_{22}}\n$$\n\n\n$$\nE\\left(c_{1}^{2} S_{11}+2 c_{1} c_{2} S_{12}+c_{2}^{2} S_{22}\\right)=c_{1}^{2} H_{11}+2 c_{1} c_{2} H_{12}+c_{2}^{2} H_{22}\n$$\n\nTo minimize the energy with respect to c1 and c2, we require\n\n$$\n\\frac{\\partial E}{\\partial c_{1}}=0\n\\quad and \\quad\n\\frac{\\partial E}{\\partial c_{2}}=0\n$$\n\nIf we differentiate the above equation through separately by c1 and c2 and apply this condition, we will end up with two equations in the two unknowns c1 and c2 , which we can solve to determine the coefficients and the energy.\n\n$$\n\\begin{array}{l}{E\\left(2 c_{1} S_{11}+2 c_{2} S_{12}\\right)=2 c_{1} H_{11}+2 c_{2} H_{12}} \\\\ {E\\left(2 c_{1} S_{12}+2 c_{2} S_{22}\\right)=2 c_{1} H_{12}+2 c_{2} H_{22}}\\end{array}\n$$\n\nThese are normally rewritten slightly, in the form\n$$\n\\begin{array}{l}{c_{1}\\left(H_{11}-E S_{11}\\right)+c_{2}\\left(H_{12}-E S_{12}\\right)=0} \\\\ {c_{1}\\left(H_{12}-E S_{12}\\right)+c_{2}\\left(H_{22}-E S_{22}\\right)=0}\\end{array} \\tag{Eq4}\n$$\nWrite this in matrix form gives\n$$\n\\left(\\begin{array}{cc}{H_{11}-E S_{11}} & {H_{12}-E S_{12}} \\\\ {H_{12}-E S_{12}} & {H_{22}-E S_{22}}\\end{array}\\right)\\left(\\begin{array}{c}{c_{1}} \\\\ {c_{2}}\\end{array}\\right)=\\left(\\begin{array}{l}{0} \\\\ {0}\\end{array}\\right)\n$$\nFor the equations to have a solution, the determinant of the matrix must be equal to zero. Which means\n$$\n\\left(H_{11}-E\\right)\\left(H_{22}-E\\right)-\\left(H_{12}-E S_{12}\\right)^{2}=0\n$$\nNow, there is only one unkown variable E, solve the equation will commonly give us two E (eigenvalue). \nAnd put this two E back to Eq4 will give us two set of (c1, c2), which corresponding to \n$$\\phi_1 = c_{11} \\chi_1 + c_{21} \\chi_2 \\quad (\\varepsilon_{1})$$\n$$\\phi_2 = c_{12} \\chi_1 + c_{22} \\chi_2 \\quad (\\varepsilon_{2})$$\n

\n\nThanks for powerful scipy, secular equation could be solved simply by calling \n`eigenvalue, C = scipy.linalg.eigh(H, S)`\n\n\n```python\n# H and S are calculated from next part\nH = np.array([[-1.85073991, -1.88346692], \n [-1.88346692, -1.58510327]])\nS = np.array([[1. , 0.83752358],\n [0.83752358, 1. ]])\n\ne, Co = scipy.linalg.eigh(H, S)\n\nprint(e)\nprint(Co)\n```\n\n [-1.97961968 1.03859384]\n [[-0.66167682 1.70635833]\n [-0.37818627 -1.79065634]]\n\n\nLet's test if whether the eigenvalue and coefficient satisfy Eq4. \nNote: `[-0.66167682, -0.37818627]` is the eigenvector corresponding to eigenvalue `-1.97961968` \n`[1.70635833, -1.79065634]` is the eigenvector corresponding to eigenvalue `1.03859384` \nThe result below $e^{-17}$ and $e^{-16}$ are so close to 0, there is no much difference with 0.\n\n\n```python\ntmp1 = Co[0, 0] * (H[0, 0] - e[0] * S[0, 0]) + Co[1, 0] * (H[0, 1] - e[0] * S[0, 1])\ntmp2 = Co[0, 1] * (H[0, 1] - e[1] * S[0, 1]) + Co[1, 1] * (H[1, 1] - e[1] * S[1, 1])\nprint(tmp1)\nprint(tmp2)\n```\n\n 2.7755575615628914e-17\n 8.881784197001252e-16\n\n\n### 5. How to build matrix\n\nNote: To avoid confusion, letters `r`, `s`, `t`, `u` are used to label matrix element and the basis functions $\\chi$, and the letters `i`, `j` are used to label the MOs $\\phi$.\n\nBack to Eq3, $H_{rs}$ in H matrix and $S_{rs}$ in S matrix are defined as \n$$\nH_{r s}=\\left\\langle\\chi_{r}|\\hat{H}| \\chi_{s}\\right\\rangle\n\\quad and \\quad \nS_{r s}=\\left\\langle\\chi_{r} | \\chi_{s}\\right\\rangle \n$$\n\n

(a) Hamiltonian core matrix H

\n\nWhere Hamiltonian core operator $\\hat H$ is \n$$\\hat{H}^{\\mathrm{core}}_i \\equiv-\\frac{1}{2} \\nabla_{i}^{2}-\\sum_{\\alpha} \\frac{Z_{\\alpha}}{r_{i \\alpha}}$$\n\n$$H_{r s}=\\left\\langle\\chi_{r}|\\hat{H}| \\chi_{s}\\right\\rangle\n\\quad $$\n\n$$=\\int_{0}^\\infty \\chi_r \\hat{H} \\chi_s \\; 4\\pi r^2dr$$\n\n$$= \\int_{0}^\\infty \\chi_r ((-\\dfrac{1}{2}) \\nabla^2 -\\sum_{\\alpha} \\frac{Z_{\\alpha}}{r_{i \\alpha}})\\chi_s \\; 4\\pi r^2 dr$$\n\n\nWhere $\n{\\text { Laplace operator: } \\nabla^{2}} {=\\frac{1}{r^{2}} \\frac{\\partial}{\\partial r}\\left(r^{2} \\frac{\\partial f}{\\partial r}\\right)=\\frac{1}{r} \\frac{\\partial^{2}}{\\partial r^{2}}(r f)}$\n(For wavefunction which only has radial part.) \n\nAnd since we are calculating atom, there is only one nuclear $\\alpha$\n\n$$\\therefore H_{rs}= \\int_{0}^\\infty \\chi_r ((-\\dfrac{1}{2}) \\dfrac{1}{r} \\dfrac{\\partial}{\\partial r} \\dfrac{\\partial}{\\partial r} r \\chi_s - \\dfrac{Z_{\\alpha}}{r} \\chi_s )4\\pi r^2 dr$$ \n\nLet's write this in sympy. \nFor easy reading, the code below seperate equation above into\n$ H_{rs}= \\int_{0}^\\infty \\chi_r (T - V )4\\pi r^2 dr$ \n\n\n```python\ndef H_int(fr, fs, Z):\n \"\"\"\n Compute H_core integral between two STO functions.\n H_core = electron kinetics energy + electron nuclear potential energy\n\n INPUT:\n Z: Nuclear charge\n \"\"\"\n T = - ((1 / 2) * (1 / r) * diff(diff(r * fs, r), r))\n V = - (Z / r) * fs\n return sp.integrate(fr * (T + V) * 4 * sp.pi * r * r, (r, 0, +oo))\n```\n\n`H_int` function calculate $H_{rs}$ element in H matrix. To build H matrix, we just need to go over `r` from (1, 2, 3 ... num_bfs), and go over `s` also from (1, 2, 3 ... num_bfs), where `num_bfs` is number of basis functions, we will see this more at below.\n\n\n```python\ndef H_matrix(bfs, Z):\n \"\"\"\n Compute the core hamiltonian matrix H.\n H_core = electron kinetics energy + electron nuclear potential energy\n\n INPUT:\n bfs: basis functions\n Z: nuclear charge\n OUTPUT:\n H: core hamiltonian matrix\n \"\"\"\n num_bfs = len(bfs)\n H = np.zeros((num_bfs, num_bfs))\n\n for r in range(num_bfs):\n for s in range(num_bfs):\n H[r, s] = H_int(bfs[r], bfs[s], Z)\n\n return H\n```\n\n
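As a quick sanity check (an illustrative sketch, not part of the original notebook), building H for the helium double-zeta basis defined above with nuclear charge Z = 2 should approximately reproduce the H matrix quoted in section 4:

```python
# Illustrative usage sketch (assumes f1s_1, f1s_2 and H_matrix are defined above).
# For helium the nuclear charge is Z = 2.
H = H_matrix([f1s_1, f1s_2], Z=2)
print(np.round(H, 8))
# Expected (values quoted in section 4):
# [[-1.85073991 -1.88346692]
#  [-1.88346692 -1.58510327]]
```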

(b) Overlap matrix S

\n\nIf you understand H matrix, S (Overlap) matrix will be very easy.\n\n$$S_{rs} = \\int_{0}^\\infty \\chi_r^* \\chi_s \\; 4 \\pi r^2dr$$\n\n\n```python\ndef S_int(fr, fs):\n \"\"\"\n Compute overlap integral between two STO functions.\n \"\"\"\n return sp.integrate(fr * fs * 4 * sp.pi * r * r, (r, 0, +oo))\n```\n\n\n```python\ndef S_matrix(bfs):\n \"\"\"\n Compute overlap matrix S.\n\n INPUT:\n fs: basis functions\n OUTPUT:\n S: Overlap matrix\n \"\"\"\n num_bfs = len(bfs)\n S = np.zeros((num_bfs, num_bfs))\n\n for r in range(num_bfs):\n for s in range(num_bfs):\n S[r, s] = S_int(bfs[r], bfs[s])\n\n return S\n```\n\n
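Similarly (again an illustrative sketch, not from the original notebook), the overlap matrix for the two helium STOs should have ones on the diagonal, since each basis function is normalized, and the off-diagonal value quoted in section 4:

```python
# Illustrative usage sketch (assumes f1s_1, f1s_2 and S_matrix are defined above).
S = S_matrix([f1s_1, f1s_2])
print(np.round(S, 8))
# Expected (values quoted in section 4):
# [[1.         0.83752358]
#  [0.83752358 1.        ]]
```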

(c) Fock matrix F

\n

Fock matrix F = H matrix + G matrix

\n\nThe Fock matrix can be generated similarly to H, where the Fock operator is \n$$\hat{F}(i)=\hat{H}_{core}(i)+\sum_{j=1}^{n / 2}\left[2 \hat{J}_{j}(i)-\hat{K}_{j}(i)\right]$$\n\nThe second term is often denoted as $\hat G$.\n\n$$\hat{F}(i)=\hat{H}_{core}(i)+\hat G$$\n\nBecause we already have the **H matrix** above, we only need to build the **G matrix**; adding them together gives the **Fock matrix**. \n\n

G matrix

\n\nThe defination of $J$ (Coulomb operator) and $K$ (exchange operator) are. (1) (2) below simple means they are different electron.\n\n$$\n\\begin{aligned} \\hat{J}_{j}(1) f(1) &=f(1) \\int\\left|\\phi_{j}(2)\\right|^{2} \\frac{1}{r_{12}} d v_{2} \\\\ \\hat{K}_{j}(1) f(1) &=\\phi_{j}(1) \\int \\frac{\\phi_{j}^{*}(2) f(2)}{r_{12}} d v_{2} \\end{aligned}\n$$\n\n

\n Note: \n ${\\displaystyle f(1)}$, ${\\displaystyle f(2)}$ are the one-electron wavefunctions acted upon by the exchange operator as functions of the electron positions,
and ${\\displaystyle \\phi _{j}(1)}$ and ${\\displaystyle \\phi _{j}(2)}$ are the one-electron wavefunction of the jth electron as functions of the positions of the electrons.\n
\n

\n\n\nLet's see the result first, we will walk through an example ($G_{12}$ of helium) below explain how to get here. (Basically, it's just expand $\\phi_i$ into linear combination of the basis functions.)\n$$\n\\left\\langle\\chi_{r}(1) | \\hat{J}_{j}(1) \\chi_{s}(1)\\right\\rangle=\\sum_{t} \\sum_{u} c_{t j}^{*} c_{u j} \\iint \\frac{\\chi_{r}^{*}(1) \\chi_{s}(1) \\chi_{t}^{*}(2) \\chi_{u}(2)}{r_{12}} d v_{1} d v_{2}\n$$\n$$\n\\left\\langle\\chi_{r}(1) | \\hat{K}_{j}(1) \\chi_{s}(1)\\right\\rangle=\\sum_{t} \\sum_{u} c_{t j}^{*} c_{u j} \\iint \\frac{\\chi_{r}^{*}(1) \\chi_{u}(1) \\chi_{t}^{*}(2) \\chi_{s}(2)}{r_{12}} d v_{1} d v_{2}\n$$\nIf define\n\n$$\n(r s | t u) \\equiv \\iint \\frac{\\chi_{r}^{*}(1) \\chi_{s}(1) \\chi_{t}^{*}(2) \\chi_{u}(2)}{r_{12}} d v_{1} d v_{2}\n$$\n\nIt could be rewritten as\n$$\n\\left\\langle\\chi_{r}(1) | \\hat{J}_{j}(1) \\chi_{s}(1)\\right\\rangle=\\sum_{t=1}^{b} \\sum_{u=1}^{b} c_{t j}^{*} c_{u j}(r s | t u)\n$$\n\n$$\n\\left\\langle\\chi_{r}(1) | \\hat{K}_{j}(1) \\chi_{s}(1)\\right\\rangle=\\sum_{t=1}^{b} \\sum_{u=1}^{b} c_{t j}^{*} c_{u j}(r u | t s)\n$$\nAnd final $G_{rs}$ will be\n$$\nG_{r s}=\\sum_{t=1}^{b} \\sum_{u=1}^{b} \\sum_{j=1}^{n / 2} c_{t j}^{*} c_{u j}[2(r s | t u)-(r u | t s)]\n$$\n\n

\n\n Example:
\nLet's take the $G_{12}$ element of the G matrix (for the helium atom) as an example to see how this comes out. (click to uncollapse)\n
\n\n\n$$G_{r s}=\\left\\langle\\chi_{r}|\\hat{G}| \\chi_{s}\\right\\rangle$$\n$$G_{1 2}=\\left\\langle\\chi_{1}|\\hat{G}| \\chi_{2}\\right\\rangle$$\n\n$$G_{1 2}= \\sum_{j=1}^{n / 2} \\left\\langle\\chi_{1}|2 \\hat{J}_{j}(i)-\\hat{K}_{j}(i)| \\chi_{2}\\right\\rangle$$\n\nFor helium atom, num of electron (n) is 2, there is only one 1s orbital.\n\n$$G_{1 2}= \\left\\langle\\chi_{1}|2 \\hat{J}_{j}(i)-\\hat{K}_{j}(i)| \\chi_{2}\\right\\rangle$$\n$$G_{1 2}= 2\\left\\langle\\chi_{1}| \\hat{J}_{j}(i)| \\chi_{2}\\right\\rangle - \\left\\langle\\chi_{1}|\\hat{K}_{j}(i)| \\chi_{2}\\right\\rangle$$\n\nLet's do $\\left\\langle\\chi_{1}| \\hat{J}_{j}(i)| \\chi_{2}\\right\\rangle$ first \n\n$$\\left\\langle\\chi_{1}(1)| \\hat{J}_{1}| \\chi_{2}(1)\\right\\rangle $$ \n$$= \\left\\langle\\chi_{1}(1)| \\;\\;\\; \\int \\frac{\\left|\\phi_1(2)\\right|^{2}}{r_{12}} d v_{2} \\;\\;\\; | \\chi_{2}(1)\\right\\rangle $$ \n$$\n= \\int \\int \\frac{\\phi_1(2)^*\\phi_1(2) \\chi (1)^* \\chi (1)}{r_{12}} d v_{2}d v_{1}\n$$\n\n$$\n= \\int \\int \\frac{[c_{11}\\chi_{1}^*(2) + c_{21}\\chi_{2}^*(2)][c_{11}\\chi_{1}(2) + c_{21}\\chi_{2}(2)] \\chi (1)^* \\chi (1)}{r_{12}} d v_{2}d v_{1}\n$$\n\n\n$$\n= \\int \\int \\frac{c_{11}\\chi_{1}^*(2)c_{11}\\chi_{1}(2) \\chi (1)^* \\chi (1)}{r_{12}} d v_{2}d v_{1}\n+ \\int \\int \\frac{c_{11}\\chi_{1}^*(2)c_{21}\\chi_{2}(2) \\chi (1)^* \\chi (1)}{r_{12}} d v_{2}d v_{1}\n$$\n$$\n+ \\int \\int \\frac{c_{21}\\chi_{2}^*(2)c_{11}\\chi_{1}(2) \\chi (1)^* \\chi (1)}{r_{12}} d v_{2}d v_{1}\n+ \\int \\int \\frac{c_{21}\\chi_{2}^*(2)c_{21}\\chi_{2}(2) \\chi (1)^* \\chi (1)}{r_{12}} d v_{2}d v_{1}\n$$\n\nUse \n$\n(r s | t u) \\equiv \\iint \\frac{\\chi_{r}^{*}(1) \\chi_{s}(1) \\chi_{t}^{*}(2) \\chi_{u}(2)}{r_{12}} d v_{1} d v_{2}\n$ to simplify the representation.\n\n$$\n\\left\\langle\\chi_{1}| \\hat{J}_{j}(i)| \\chi_{2}\\right\\rangle\n= c_{11}c_{11}(1 2 | 1 1) + c_{11}c_{21}(1 2 | 1 2) + c_{21}c_{11}(1 2 | 2 1) + c_{21}c_{21}(1 2 | 2 2)\n$$\n$$\n=\\sum_{t=1}^{2} \\sum_{u=1}^{2} \\sum_{j=1}^{2 / 2} c_{t j}^{*} c_{u j} (1 \\;2 | t \\;u)\n$$\n\nSimilar $\\left\\langle\\chi_{1}| \\hat{K}_{j}(i)| \\chi_{2}\\right\\rangle$ could also be inferred, \n$$\n\\left\\langle\\chi_{1}| \\hat{K}_{j}(i)| \\chi_{2}\\right\\rangle\n= c_{11}c_{11}(1 1 | 1 2) + c_{11}c_{21}(1 2 | 1 2) + c_{21}c_{11}(1 1 | 2 2) + c_{21}c_{21}(1 2 | 2 2)\n$$\n$$\n=\\sum_{t=1}^{2} \\sum_{u=1}^{2} \\sum_{j=1}^{2 / 2} c_{t j}^{*} c_{u j} (1 \\;u | t \\;2)\n$$\n\nSo $G_{1 2}$ is \n$$G_{1 2}= \\left\\langle\\chi_{1}|2 \\hat{J}_{j}(i)-\\hat{K}_{j}(i)| \\chi_{2}\\right\\rangle$$\n$$\n=\\sum_{t=1}^{2} \\sum_{u=1}^{2} \\sum_{j=1}^{2 / 2} c_{t j}^{*} c_{u j}[2(1 \\;2 | t \\;u)-(1\\; u | t \\;2)]\n$$\n\n
\n\nBuild G matrix in python\n\nrecall $G_{rs}$ is\n$$\nG_{r s}=\\sum_{t=1}^{b} \\sum_{u=1}^{b} \\sum_{j=1}^{n / 2} c_{t j}^{*} c_{u j}[2(r s | t u)-(r u | t s)]\n$$\n\nIf define $P_{rs}$ (density matrix) as \n$$\nP_{t u} \\equiv 2 \\sum_{j=1}^{n / 2} c_{t j}^{*} c_{u j}, \\quad t=1,2, \\ldots, b, \\quad u=1,2, \\ldots, b\n$$\n\nThen $G_{rs}$ could be simplified again as \n$$\nG_{r s}=\\sum_{t=1}^{b} \\sum_{u=1}^{b} P_{t u}\\left[(r s | t u)-\\frac{1}{2}(r u | t s)\\right]\n$$\n\nIf we built a matrix **R** which is 4 dimension, and save all the posible $(r s | t u)$, then we could build G matrix like below.\n\n\n```python\ndef G_matrix(P, R):\n \"\"\"\n Compute G matrix.\n G = coulombic repulsion energy + exchange energy\n\n INPUT:\n P: density matrix\n R: electron repulsion matrix\n OUTPUT:\n G: repulsion matrix\n \"\"\"\n num_bfs = P.shape[0]\n G = np.zeros((num_bfs, num_bfs))\n\n for r in range(num_bfs):\n for s in range(num_bfs):\n g = 0\n for t in range(num_bfs):\n for u in range(num_bfs):\n int1 = R[r, s, t, u]\n int2 = R[r, u, t, s]\n g += P[t, u] * (int1 - 0.5 * int2)\n G[r, s] = g\n\n return G\n```\n\n

P matrix

\n\nAnd also P (density) matrix\n$$\nP_{t u} \\equiv 2 \\sum_{j=1}^{n / 2} c_{t j}^{*} c_{u j}, \\quad t=1,2, \\ldots, b, \\quad u=1,2, \\ldots, b\n$$\n\n\n```python\ndef P_matrix(Co, N):\n \"\"\"\n Compute density matrix P.\n\n INPUT:\n Co: coefficents matrix\n N: num of electrons\n OUTPUT:\n P: repulsion matrix\n \"\"\"\n P = np.zeros([Co.shape[0], Co.shape[0]])\n\n for t in range(Co.shape[0]):\n for u in range(Co.shape[0]):\n for j in range(int(N/2)):\n P[t, u] += 2 * Co[t, j] * Co[u, j]\n return P\n```\n\n
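To make the initialization step described in section 9 concrete, here is a minimal sketch (assuming the H, S and basis functions built above; this is not the author's exact driver code) of how an initial-guess density matrix can be formed for helium by first ignoring electron-electron repulsion:

```python
# Illustrative sketch: initial guess for helium (N = 2 electrons).
# Solve the secular equation with the core Hamiltonian alone (F = H on the first pass),
# then build the density matrix from the resulting coefficients.
e, Co = scipy.linalg.eigh(H, S)
P_init = P_matrix(Co, N=2)
print(P_init)
```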

R matrix

\n\nR matrix (4 dimenstion), which is the most computation-expensive part in hartree-fock method. (In principle, there are a lot of elements in R matrix are equal by symmetry, but for code simplicity, we didn't implemented this.)\n\n\n```python\ndef R_matrix(bfs):\n \"\"\"\n Compute the electron repulsion integral matrix R.\n\n INPUT:\n fs: basis functions\n OUTPUT:\n R: repulsion matrix\n \"\"\"\n start = time.time()\n num_bfs = len(bfs)\n R = np.zeros((num_bfs, num_bfs, num_bfs, num_bfs))\n\n for r in range(num_bfs):\n for s in range(num_bfs):\n for t in range(num_bfs):\n for u in range(num_bfs):\n R[r, s, t, u] = R_int([bfs[r], bfs[s], bfs[t], bfs[u]])\n\n stop = time.time()\n print('time Repu: {:.1f} s'.format(stop-start))\n return R\n```\n\nRecall the element $R_{rstu}$ in R matrix is \n$$\n(r s | t u) \\equiv \\iint \\frac{\\chi_{r}^{*}(1) \\chi_{s}(1) \\chi_{t}^{*}(2) \\chi_{u}(2)}{r_{12}} d v_{1} d v_{2}\n$$\n\nFor 1s or 2s orbital which only has radial part, this could be calculated approximately in sympy. However, for the orbitals which has angular part, how to solve two-electron Repulsion integral of Slater Type Orbital (STO) is still slow and difficult. This is the main reason why Gaussian Type Orbital (GTO) is more frequently used, and we will try GTO in next chapter.\n\n$$(rs|tu) = \\int_{0}^\\infty \\int_{0}^\\infty \\dfrac{\\chi_r^*(1) \\chi_s(1) \\chi_t^*(2) \\chi_u(2)}{r_{12}} \\; 4 \\pi r_1^2dr_1\\; 4 \\pi r_2^2dr_2 $$\n\n$$(rs|tu) = \\int_{0}^\\infty \\chi_r^*(1) \\chi_s(1) \\; 4 \\pi r_1^2dr_1\\int_{0}^\\infty \\frac{ \\chi_t^*(2) \\chi_u(2)}{r_{12}}\\; 4 \\pi r_2^2dr_2 $$\n\n$r_{12}$ here is tricky to deal with, from problem 9.14 in quantum_chemistry by levine\n\n$$(rs|tu) = \\int_{0}^\\infty \\chi_r^*(1) \\chi_s(1) \\; 4 \\pi r_1^2dr_1\\int_{0}^\\infty \\frac{ \\chi_t^*(2) \\chi_u(2)}{r_{>}}\\; 4 \\pi r_2^2dr_2 $$\n

\n Note: \n $r_{>}$ is the larger of $r_1$ and $r_2$\n
\n

\n$$(rs|tu) = \\int_{0}^\\infty \\chi_r^*(1) \\chi_s(1) \\; 4 \\pi r_1^2dr_1(\\int_{0}^{r_1} \\frac{ \\chi_t^*(2) \\chi_u(2)}{r_{1}}\\; 4 \\pi r_2^2dr_2 + \\int_{r_1}^\\infty \\frac{ \\chi_t^*(2) \\chi_u(2)}{r_{2}}\\; 4 \\pi r_2^2dr_2) $$\n\nLet$ \\; B= \\int_{0}^{r_1} \\frac{ \\chi_t^*(2) \\chi_u(2)}{r_{1}}\\; 4 \\pi r_2^2dr_2 + \\int_{r_1}^\\infty \\frac{ \\chi_t^*(2) \\chi_u(2)}{r_{2}}\\; 4 \\pi r_2^2dr_2$\n\n$$(rs|tu) = \\int_{0}^\\infty \\chi_r^*(1) \\chi_s(1) B \\; 4 \\pi r_1^2 dr_1 $$\n\nSo the element $R_{rstu}$ in R matrix could be calculated by\n\n\n```python\ndef R_int(four_bfs):\n \"\"\"\n Compute electron-electron repulsion integral.\n\n INPUT:\n four_bfs: an array contain 4 basis functions\n \"\"\"\n f1, f2, f3, f4 = four_bfs\n\n f1 = f1.subs(r, r1)\n f2 = f2.subs(r, r1)\n f3 = f3.subs(r, r2)\n f4 = f4.subs(r, r2)\n\n B = (1 / r1) * sp.integrate(f3 * f4 * 4 * sp.pi * r2 * r2, (r2, 0, r1)) + sp.integrate((1 / r2) * f3 * f4 * 4 * sp.pi * r2 * r2, (r2, r1, +oo))\n return sp.integrate(f1 * f2 * 4 * sp.pi * r1 * r1 * B, (r1, 0, +oo))\n```\n\nThis is basically all the matrixes we will need to use, and we already solved the most difficult part of hartree fock!! \n\n### 6. Secular Equation\n\n**Fock matrix** is the sum of **H matrix** and **G matrix**. \nF = H + G\n\nThen, Roothan equation could be solved simply by calling \n`eigenvalue, C = scipy.linalg.eigh(F, S)` \neigenvalues are MO orbital energies.\n\n$${\\mathbf {F}}{\\mathbf {C}}={\\mathbf {S}}{\\mathbf {C}}{\\mathbf {\\epsilon }}$$\n

\n Note: \n F (Fock matrix) and S (overlap matrix) are the inputs.
\n The S matrix is fixed, while the F matrix changes every iteration because of the improved C.
\n C (coefficient matrix) and $\epsilon_i$ (eigenvalues) are the results.\n
\n

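As an aside (not part of the original notebook), solving the generalized eigenvalue problem $FC = SC\epsilon$ with `scipy.linalg.eigh(F, S)` is equivalent to first symmetrically orthogonalizing the basis (Löwdin orthogonalization) and then solving an ordinary eigenvalue problem. A minimal sketch, assuming `F` and `S` are the real symmetric NumPy arrays built above:\n\n\n```python\nimport numpy as np\n\ndef secular_eqn_lowdin(F, S):\n    \"\"\"\n    Illustrative equivalent of scipy.linalg.eigh(F, S) via Lowdin orthogonalization.\n    \"\"\"\n    # S^(-1/2) from the eigendecomposition of the overlap matrix\n    s_val, s_vec = np.linalg.eigh(S)\n    S_inv_sqrt = s_vec @ np.diag(s_val**-0.5) @ s_vec.T\n    # transform F into the orthogonal basis and diagonalize\n    ei, Cp = np.linalg.eigh(S_inv_sqrt @ F @ S_inv_sqrt)\n    # back-transform the coefficients to the original basis\n    C = S_inv_sqrt @ Cp\n    return ei, C\n```\n\nThe eigenvalues agree with `scipy.linalg.eigh(F, S)`; individual columns of `C` may differ by a sign (or by a rotation within a degenerate block).\n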
\n\nWe can wrap `scipy.linalg.eigh` into a function\n\n\n```python\ndef secular_eqn(F, S):\n    \"\"\"\n    Solve the secular equation; return the MO energies (eigenvalues) and improved coefficients (eigenvectors)\n\n    INPUT:\n    F: fock matrix or h_core matrix\n    S: overlap matrix\n    OUTPUT:\n    ei: eigenvalues\n    C: eigenvectors\n    \"\"\"\n    ei, C = scipy.linalg.eigh(F, S)\n    return ei, C\n```\n\n### 7. Total Energy\nFinally, the Hartree-Fock total energy is \n$$\nE_{\mathrm{HF}}=2 \sum_{i=1}^{n / 2} \varepsilon_{i}-\sum_{i=1}^{n / 2} \sum_{j=1}^{n / 2}\left(2 J_{i j}-K_{i j}\right)+V_{N N}\n$$\n

\n Note: \n i and j index the occupied molecular orbitals (r, s, t, u are reserved for basis-function matrix elements)\n
\n

\n\nIn the expression above, the terms are \n1. The energy of all the electrons (in the closed-shell case, every orbital is doubly occupied)\n2. Recall that when we build the F operator and calculate the electron energies, the electron-electron repulsion is counted twice when summing over all electrons. This term removes the double count.\n3. The nuclear-nuclear repulsion term, which is not included in the F operator. (Because we are calculating an atom, this term is always 0.)\n\nThe sum of the orbital energies over the $n/2$ occupied orbitals can also be written as \n$$\n\sum_{i=1}^{n / 2} \varepsilon_{i}=\sum_{i=1}^{n / 2} H_{i i}^{\mathrm{core}}+\sum_{i=1}^{n / 2} \sum_{j=1}^{n / 2}\left(2 J_{i j}-K_{i j}\right)\n$$\n\nSo $E_{\mathrm{HF}}$ can be simplified as \n$$\nE_{\mathrm{HF}}=\sum_{i=1}^{n / 2} \varepsilon_{i}+\sum_{i=1}^{n / 2} H_{i i}^{\mathrm{core}}+V_{N N}\n$$ \n

\n Note: \n i and j index the occupied molecular orbitals (r, s, t, u are reserved for basis-function matrix elements)
\n $H_{ii}^{\text{core}}$ here is not an element of the H matrix over basis functions; it is the molecular-orbital matrix element $H_{i i}^{\text {core }}=\left\langle\phi_{i}\left|\hat{H}^{\text {core }}\right| \phi_{i}\right\rangle$\n
\n

\n\nExpand $\\phi_{i}$ into basis functions and simplify, we could calculate total energy using the matrix we have built before.\n$$\nE_{\\mathrm{HF}}=\\sum_{i=1}^{n / 2} \\varepsilon_{i}+\\frac{1}{2} \\sum_{r=1}^{b} \\sum_{s=1}^{b} P_{r s} H_{r s}^{\\mathrm{core}}+V_{N N}\n$$\n\n\n```python\ndef energy_tot(e, N, P, H, Vnn=0):\n \"\"\"\n Compute the total energy.\n\n INPUT:\n e: MO energies\n N: num of electrons\n P: density matrix\n H: h_core matrix\n Vnn: nuclear nuclear repulsion energy, for atom is 0\n \"\"\"\n e_tot = 0\n\n for i in range(int(N/2)):\n e_tot += e[i].real\n\n e_tot = e_tot + 0.5 * (P * H).sum() + Vnn\n return e_tot\n```\n\n### 8. Utils\n\nAnd also some utils function to print information and compare our result with reference\n\n\n```python\ndef print_info(S, H, e, Co, P, hf_e, start, stop, delta_e=0, verbose=False):\n \"\"\"\n Print information while doing SCF interations.\n \"\"\"\n if(verbose):\n # overlap\n print('Overlap:')\n print(S)\n\n # hamiltonian\n print('Core hamiltonian:')\n print(H)\n\n # Co\n print('Coefficients:')\n print(Co)\n\n # density\n print('Density matrix:')\n print(P)\n\n # MOs\n print('MO energies:')\n message = ', '\n m_list = ['e{} = {:0.3f}'.format(i+1, x) for i, x in enumerate(e)]\n message = message.join(m_list)\n print(message)\n\n print('HF energy: {:0.5f} (hartree) = {:0.5f} (eV)'.format(hf_e, hf_e*27.211))\n if delta_e != 0:\n print('dE : {:.2e}'.format(delta_e))\n print('time used: {:.1f} s'.format(stop-start))\n\n\ndef compare(cal, ref, tol=1.0e-4):\n \"\"\"\n Compare calculated result with reference data.\n \"\"\"\n delta = np.abs(ref - cal)\n if delta < tol:\n message = '\\33[32m' + 'PASSED' + '\\x1b[0m'\n else:\n message = '\\033[91m' + 'FAILED' + '\\033[0m'\n print('-' * 32, message, '-' * 33)\n print('cal: {:.7f}, ref: {:.7f}\\n\\n'.format(cal, ref))\n```\n\n### 9. Run Hartree Fock\n\nSteps to run hartree fock\n\n1. Initialization\n - Let Fock matrix = H_core matrix, without considering electron repulsion\n - Solve secular equation with H and S to get initial Co (means initial guessed molecular orbitals) and build inital P (density) matrix\n - Prepare Repulsion matrix R (take time)\n2. 
Iteration\n - Using P matrix and R matrix to calculate G matrix\n - F matrix = H matrix + G matrix\n - Solve secular equation with F and S to get improved Co (means improved molecular orbitals)\n - Using improved Co to build improved P matrix\n - check whether converged (the change of total energy smaller than converge requirement)\n\n\n```python\ndef run_hf(bfs, Z):\n \"\"\"\n Run restricted hartree fock for a single atom.\n\n INPUT:\n bfs: basis functions\n Z: nuclear charge of the atom\n \"\"\"\n print('------------------------------', \"Initialization\", '------------------------------')\n print('-------------------------', \"Ignore repulsion integral\", '------------------------')\n N = Z # num of electron = nuclear charege (since it's atom)\n start = time.time()\n\n # initialization\n H = H_matrix(bfs, Z)\n S = S_matrix(bfs)\n e, Co = secular_eqn(H, S)\n P = P_matrix(Co, N)\n Vnn = 0 # A single atom does not have nuclear repulsion\n hf_e = energy_tot(e, N, P, H, Vnn)\n\n stop = time.time()\n print_info(S, H, e, Co, P, hf_e, start, stop, verbose=verbose)\n print('-----------', \"Caculating Electron Repulsion Integral (takes time)\", '------------')\n R = R_matrix(bfs)\n delta_e = 1\n ITER = 0\n previous_e = hf_e\n\n # Iterations\n while(delta_e > E_conv and ITER < MAXITER):\n print('------------------------------', \"Iteration\", ITER + 1, '------------------------------')\n start = time.time()\n\n # important scf steps\n G = G_matrix(P, R)\n F = H + G\n e, Co = secular_eqn(F, S)\n P = P_matrix(Co, N)\n hf_e = energy_tot(e, N, P, H, Vnn)\n\n delta_e = np.abs(hf_e - previous_e)\n previous_e = hf_e\n ITER += 1\n stop = time.time()\n print_info(S, H, e, Co, P, hf_e, start, stop, delta_e, verbose=verbose)\n\n return hf_e\n```\n\nSet converge criterion\n\n\n```python\nMAXITER = 40 # Maximum SCF iterations\nE_conv = 1.0e-6 # Energy convergence criterion\nverbose = False # whether to print matrix information while iterating\n```\n\n### 10. Test\n\n

Run Hartree-Fock for Helium

\n\n\n```python\ndef test1():\n # Use 2 Slater Type ourbital to represent Helium 1s orbital.\n # The final Helium 1s orbital is a linear combination of these two STO.\n f1s_1 = STO(zeta=1.45363, n=1)\n f1s_2 = STO(zeta=2.91093, n=1)\n\n # all basis functions\n fs = [f1s_1, f1s_2]\n\n # nuclear charge of He\n Z = 2\n\n # run hartree fock\n hf_e = run_hf(fs, Z)\n\n # compare result with reference\n ref_hf_e = -2.8616726\n compare(hf_e, ref_hf_e)\n```\n\n\n```python\ntest1()\n```\n\n ------------------------------ Initialization ------------------------------\n ------------------------- Ignore repulsion integral ------------------------\n HF energy: -3.95924 (hartree) = -107.73486 (eV)\n time used: 0.2 s\n ----------- Caculating Electron Repulsion Integral (takes time) ------------\n time Repu: 3.2 s\n ------------------------------ Iteration 1 ------------------------------\n HF energy: -2.78457 (hartree) = -75.77099 (eV)\n dE : 1.17e+00\n time used: 0.0 s\n ------------------------------ Iteration 2 ------------------------------\n HF energy: -2.85860 (hartree) = -77.78532 (eV)\n dE : 7.40e-02\n time used: 0.0 s\n ------------------------------ Iteration 3 ------------------------------\n HF energy: -2.86152 (hartree) = -77.86490 (eV)\n dE : 2.92e-03\n time used: 0.0 s\n ------------------------------ Iteration 4 ------------------------------\n HF energy: -2.86167 (hartree) = -77.86877 (eV)\n dE : 1.42e-04\n time used: 0.0 s\n ------------------------------ Iteration 5 ------------------------------\n HF energy: -2.86167 (hartree) = -77.86896 (eV)\n dE : 7.01e-06\n time used: 0.0 s\n ------------------------------ Iteration 6 ------------------------------\n HF energy: -2.86167 (hartree) = -77.86897 (eV)\n dE : 3.45e-07\n time used: 0.0 s\n -------------------------------- \u001b[32mPASSED\u001b[0m ---------------------------------\n cal: -2.8616726, ref: -2.8616726\n \n \n\n\n
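As noted earlier when building the R matrix, many of its elements are related by permutational symmetry, and the repulsion-integral step is where most of the runtime above goes. The sketch below is not part of the original notebook; it only illustrates how `R_matrix` could reuse each unique integral, assuming real basis functions and the same `bfs` and `R_int` as above.\n\n\n```python\nimport numpy as np\n\ndef R_matrix_symm(bfs):\n    \"\"\"\n    Illustrative variant of R_matrix exploiting the 8-fold symmetry\n    (rs|tu) = (sr|tu) = (rs|ut) = (tu|rs) = ... valid for real orbitals.\n    \"\"\"\n    n = len(bfs)\n    R = np.zeros((n, n, n, n))\n    for r in range(n):\n        for s in range(r + 1):\n            for t in range(n):\n                for u in range(t + 1):\n                    # evaluate only the canonical quadruple once\n                    if (r, s) < (t, u):\n                        continue\n                    val = R_int([bfs[r], bfs[s], bfs[t], bfs[u]])\n                    # fill all 8 symmetry-equivalent positions\n                    for i, j in ((r, s), (s, r)):\n                        for k, l in ((t, u), (u, t)):\n                            R[i, j, k, l] = val\n                            R[k, l, i, j] = val\n    return R\n```\n\nFor helium (2 basis functions) this changes little, but for beryllium (4 basis functions, below) it would reduce the 256 calls to `R_int` to 55 unique integrals.\n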

Run Hartree-Fock for Beryllium

\n\n\n```python\ndef test2():\n \"\"\"\n Test of Be (1s, 2s)\n \"\"\"\n # Use 2 STO to represent Be 1s orbital and another 2 STO for 2s orbital\n # The final 1s orbital is a linear combination of these 4 STO.\n # Same for 2s orbital.\n f1s_1 = STO(zeta=5.59108, n=1)\n f1s_2 = STO(zeta=3.35538, n=1)\n f2s_1 = STO(zeta=1.01122, n=2)\n f2s_2 = STO(zeta=0.61000, n=2)\n\n # all basis functions\n fs = [f1s_1, f1s_2, f2s_1, f2s_2]\n\n # nuclear charge of Be\n Z = 4\n\n # run hartree fock\n hf_e = run_hf(fs, Z)\n\n # compare result with reference\n ref_hf_e = -14.572369\n compare(hf_e, ref_hf_e)\n```\n\n\n```python\ntest2()\n```\n\n ------------------------------ Initialization ------------------------------\n ------------------------- Ignore repulsion integral ------------------------\n HF energy: -19.51846 (hartree) = -531.11686 (eV)\n time used: 1.0 s\n ----------- Caculating Electron Repulsion Integral (takes time) ------------\n time Repu: 56.9 s\n ------------------------------ Iteration 1 ------------------------------\n HF energy: -14.28744 (hartree) = -388.77544 (eV)\n dE : 5.23e+00\n time used: 0.0 s\n ------------------------------ Iteration 2 ------------------------------\n HF energy: -14.50425 (hartree) = -394.67518 (eV)\n dE : 2.17e-01\n time used: 0.0 s\n ------------------------------ Iteration 3 ------------------------------\n HF energy: -14.55105 (hartree) = -395.94867 (eV)\n dE : 4.68e-02\n time used: 0.0 s\n ------------------------------ Iteration 4 ------------------------------\n HF energy: -14.56616 (hartree) = -396.35976 (eV)\n dE : 1.51e-02\n time used: 0.0 s\n ------------------------------ Iteration 5 ------------------------------\n HF energy: -14.57061 (hartree) = -396.48075 (eV)\n dE : 4.45e-03\n time used: 0.0 s\n ------------------------------ Iteration 6 ------------------------------\n HF energy: -14.57187 (hartree) = -396.51521 (eV)\n dE : 1.27e-03\n time used: 0.0 s\n ------------------------------ Iteration 7 ------------------------------\n HF energy: -14.57223 (hartree) = -396.52493 (eV)\n dE : 3.57e-04\n time used: 0.0 s\n ------------------------------ Iteration 8 ------------------------------\n HF energy: -14.57233 (hartree) = -396.52766 (eV)\n dE : 1.01e-04\n time used: 0.0 s\n ------------------------------ Iteration 9 ------------------------------\n HF energy: -14.57236 (hartree) = -396.52843 (eV)\n dE : 2.83e-05\n time used: 0.0 s\n ------------------------------ Iteration 10 ------------------------------\n HF energy: -14.57237 (hartree) = -396.52865 (eV)\n dE : 7.94e-06\n time used: 0.0 s\n ------------------------------ Iteration 11 ------------------------------\n HF energy: -14.57237 (hartree) = -396.52871 (eV)\n dE : 2.23e-06\n time used: 0.0 s\n ------------------------------ Iteration 12 ------------------------------\n HF energy: -14.57237 (hartree) = -396.52872 (eV)\n dE : 6.27e-07\n time used: 0.0 s\n -------------------------------- \u001b[32mPASSED\u001b[0m ---------------------------------\n cal: -14.5723687, ref: -14.5723690\n \n \n\n\n### 11. 
Excise - Plot the charge density of orbitals\nWrite a function based on the code of `run_hf()`, plot the charge density $4 \\pi r^2 |\\phi_i|^2$ of all the orbitals (final iteration).\n\nFor example, Beryllium:\nNote: You only need to plot the final one (most bottom) \nHint see below.\n\n\nHint: convert a sympy expression to python function by using [sp.lambdify()](https://docs.sympy.org/latest/modules/numeric-computation.html#lambdify) \nFor example, to plot $\\chi_1$, which is the 1st wavefunction of Beryllium Basis set.\n\n\n```python\nf1s_1 = STO(zeta=5.59108, n=1)\ndisplay(f1s_1)\nf = sp.lambdify(r, f1s_1, \"numpy\")\n```\n\n\n```python\nx = np.linspace(0, 5, 300)\ny = f(x)\nplt.figure(figsize=(16,2))\nplt.plot(x, y, label='$\\chi_{}$'.format(1))\nplt.legend()\nplt.xlim(0, x[-1])\nplt.xlabel('r')\nplt.ylabel('$\\chi_1$')\nplt.title('Wavefunction of $\\chi_1$')\nplt.show()\n```\n\nCode below is example solution\n\n\n```python\ndef run_hf(bfs, Z):\n \"\"\"\n Run restricted hartree fock for a single atom.\n\n INPUT:\n bfs: basis functions\n Z: nuclear charge of the atom\n \"\"\"\n print('------------------------------', \"Initialization\", '------------------------------')\n print('-------------------------', \"Ignore repulsion integral\", '------------------------')\n N = Z # num of electron = nuclear charege (since it's atom)\n start = time.time()\n\n # initialization\n H = H_matrix(bfs, Z)\n S = S_matrix(bfs)\n e, Co = secular_eqn(H, S)\n P = P_matrix(Co, N)\n Vnn = 0 # A single atom does not have nuclear repulsion\n hf_e = energy_tot(e, N, P, H, Vnn)\n\n stop = time.time()\n print_info(S, H, e, Co, P, hf_e, start, stop, verbose=verbose)\n print('-----------', \"Caculating Electron Repulsion Integral (takes time)\", '------------')\n R = R_matrix(bfs)\n delta_e = 1\n ITER = 0\n previous_e = hf_e\n\n densities = [] # [[d1, d2], [d1, d2], [d1, d2]]\n # plot \n x = np.linspace(0, 5, 300)\n tmp_density = get_density(bfs, Co, x)\n densities.append(tmp_density)\n\n # Iterations\n while(delta_e > E_conv and ITER < MAXITER):\n print('------------------------------', \"Iteration\", ITER + 1, '------------------------------')\n start = time.time()\n\n # important scf steps\n G = G_matrix(P, R)\n F = H + G\n e, Co = secular_eqn(F, S)\n P = P_matrix(Co, N)\n hf_e = energy_tot(e, N, P, H, Vnn)\n\n delta_e = np.abs(hf_e - previous_e)\n previous_e = hf_e\n ITER += 1\n stop = time.time()\n print_info(S, H, e, Co, P, hf_e, start, stop, delta_e, verbose=verbose)\n\n # plot\n tmp_density = get_density(bfs, Co, x)\n densities.append(tmp_density)\n\n plot_density(densities, x)\n\n\n return hf_e\n```\n\n\n```python\ndef get_density(bfs, Co, x):\n r = sp.Symbol('r')\n density = []\n \n # all orbitals\n for i in range(Co.shape[0]):\n tmp_orbital = 0\n # all basis functions\n for j, f in enumerate(bfs):\n tmp_orbital += Co[j][i] * bfs[j]\n tmp_d_function = tmp_orbital * tmp_orbital * r * r * 4 * np.pi\n tmp_f = sp.lambdify(r, tmp_d_function, \"numpy\")\n tmp_d = tmp_f(x)\n density.append(tmp_d)\n return density\n```\n\n\n```python\ndef plot_density(densities, x):\n num_orbitals = len(densities[0])\n num_iterations = len(densities)\n\n for i in range(num_orbitals):\n plt.figure(figsize=(16,2))\n for j, d in enumerate(densities):\n if j == 0 or j == (num_iterations - 1):\n plt.plot(x, d[i], label='iteration {}'.format(j+1))\n else:\n plt.plot(x, d[i])\n plt.legend()\n plt.xlim(0, x[-1])\n plt.xlabel('r')\n plt.ylabel('$4 \\pi r^2 |\\phi_{}|^2$'.format(i+1))\n plt.title('charge density of 
$\\phi_{}$'.format(i+1))\n plt.show()\n \n plt.figure(figsize=(16,2))\n for i, d in enumerate(densities[-1]):\n plt.plot(x, d, label='$\\phi_{}$'.format(i+1))\n plt.legend()\n plt.xlim(0, x[-1])\n plt.xlabel('r')\n plt.ylabel('$4 \\pi r^2 |\\phi_i|^2$')\n plt.title('charge density of all orbitals $\\phi_i$ (final iteration)')\n plt.show()\n```\n\n\n```python\ntest1()\n```\n\n\n```python\ntest2()\n```\n\n### 12. Limitations\nLimitations of this implementation: \n1. Because of the coordinate system (Spherical coordinate system), this implementation can only deal with atom.\n1. Because angular part is not included in current STO function, only atoms which only have s orbital could be represented by STO.\n1. For restricted hartree fock, could only run closed shell atom.\n1. Integral calculated by sympy, which is slow but easy to understand.\n\n\n### 13. Reference\n[1] Levine, Quantum Chemistry, 7th Edition, chapter 14 \n[2] Wikipedia\n[Hartree\u2013Fock method](https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method#Hartree%E2%80%93Fock_algorithm), \n[Fock matrix](https://en.wikipedia.org/wiki/Fock_matrix), \n[Roothaan equations](https://en.wikipedia.org/wiki/Roothaan_equations), \n[Coulomb operator](https://en.wikipedia.org/wiki/Coulomb_operator), \n[Exchange operator](https://en.wikipedia.org/wiki/Exchange_operator) \n[3] Clementi, Enrico, and Carla Roetti. [Roothaan-Hartree-Fock atomic wavefunctions: Basis functions and their coefficients for ground and certain excited states of neutral and ionized atoms, Z\u2264 54.](https://www.sciencedirect.com/science/article/pii/S0092640X74800161) Atomic data and nuclear data tables 14.3-4 (1974): 177-478. \n[4] Acosta C R. [Restricted closed shell Hartree Fock Roothaan matrix method applied to Helium atom using Mathematica[J].](https://files.eric.ed.gov/fulltext/EJ1051495.pdf) European Journal of Physics Education, 2017, 5(1): 1-14. 
\n[5] Jacob Martin, [Simple Quantum Chemistry: Hartree-Fock in Python](http://nznano.blogspot.com/2018/03/simple-quantum-chemistry-hartree-fock.html), 2018 \n[6] Claire Vallance, [Calculating Orbital Energies and Expansion Coefficients](https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Symmetry_(Vallance)/20%3A_Calculating_Orbital_Energies_and_Expansion_Coefficients), 2019\n", "meta": {"hexsha": "518e8e2553290b9e5c796d23f37a44670ce74e1a", "size": 339829, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/pchem/hartree-fock/sto/1_Restricted_Hartree_Fock_for_Atom.ipynb", "max_stars_repo_name": "yueyericardo/data_visualization_chem", "max_stars_repo_head_hexsha": "0b69433512a59243502093e0f21c22287423f2f5", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2019-09-27T04:36:43.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-02T05:45:14.000Z", "max_issues_repo_path": "notebooks/pchem/hartree-fock/sto/1_Restricted_Hartree_Fock_for_Atom.ipynb", "max_issues_repo_name": "yueyericardo/data_visualization_chem", "max_issues_repo_head_hexsha": "0b69433512a59243502093e0f21c22287423f2f5", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-12-09T01:18:52.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-09T01:18:52.000Z", "max_forks_repo_path": "notebooks/pchem/hartree-fock/sto/1_Restricted_Hartree_Fock_for_Atom.ipynb", "max_forks_repo_name": "yueyericardo/data_visualization_chem", "max_forks_repo_head_hexsha": "0b69433512a59243502093e0f21c22287423f2f5", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-11-14T04:43:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-22T13:45:08.000Z", "avg_line_length": 155.1730593607, "max_line_length": 48708, "alphanum_fraction": 0.8493948427, "converted": true, "num_tokens": 18327, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5273165233795672, "lm_q2_score": 0.5428632831725052, "lm_q1q2_score": 0.28626077915294296}} {"text": "# Not operator modeling and characterization\nIn this notebook we will construct a genetic network to model a Not operator, a device that is repressed by an input, upload the simulated data to Flapjack, and then show how to characterize the operator based on this data. The GeneticNetwork will be a signal inverter, requiring both a Receiver and a Not operator.\n\nNOTE: In order to run this notebook, and characterize the Not operator, you must first run Receiver1.ipynb to generate data for the Receiver used in the inverter network.\n\n## Import required packages\n\n\n```python\nfrom loica import *\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport getpass\n```\n\n## Make a connection to Flapjack\nNote here you should specify which instance of Flapjack you will use, whether it is local or the public instance for example.\n\n\n```python\nfrom flapjack import *\n#fj = Flapjack(url_base='flapjack.rudge-lab.org:8000')\nfj = Flapjack(url_base='localhost:8000')\nfj.log_in(username=input('Flapjack username: '), password=getpass.getpass('Password: '))\n```\n\n## Get or create Flapjack objects\nTo associate with the components of the genetic network and the simulated data with Flapjack we need the Ids of the appropriate objects. 
Note that if the objects already exist you will be prompted and can simply hit return to use the existing objects.\n\n\n```python\nreceiver_vector = fj.get('vector', name='receiver1')\n```\n\n\n```python\nstudy = fj.create('study', name='Loica testing', description='Test study for demonstrating Loica')\n```\n\n\n```python\ndna = fj.create('dna', name='not')\nvector = fj.create('vector', name='not', dnas=dna.id)\n```\n\n\n```python\nsfp = fj.create('signal', name='SFP', color='green', description='Simulated fluorescent protein')\n```\n\n## Create the network with measurable reporter\nFirst we create a GeneticNetwork object and associate it with a Flapjack Vector (collection of DNA). The connection to Flapjack is optional, but we will use it here to upload data and characterize our components.\n\n\n```python\nnetwork = GeneticNetwork(vector=vector.id[0])\n```\n\n\n```python\nreporter = Reporter(name='SFP', color='green', degradation_rate=0, init_concentration=0, signal_id=sfp.id[0])\n```\n\n\n```python\nnetwork.add_reporter(reporter)\n```\n\n## Create the Not operator\nThe Not operator is a device which is repressed by a single repressor $r$, and produces an output expression rate $\\phi(r)$ modeled as follows:\n\n\\begin{equation}\n \\phi(r)\n =\n \\frac\n {\n \\alpha_0 + \\alpha_1 (\\frac{r}{K})^n\n }\n {\n 1 + (\\frac{r}{K})^n\n }\n\\end{equation}\n\n\n```python\nrepressor = Regulator('LacI')\nnot_ = Hill1(input=repressor, output=reporter, alpha=[1,0], K=1, n=2)\n```\n\n## Create the Receiver operator\nThe receiver operator responds to a signal $s$ to produce an output expression rate $\\phi(s)$ modeled as follows:\n\n\\begin{equation}\n \\phi(s)\n =\n \\frac\n {\n \\alpha_0 + \\alpha_1 (\\frac{s}{K})^n\n }\n {\n 1 + (\\frac{s}{K})^n\n }\n\\end{equation}\n\nHere we must create a Supplement object to represent the signal, in this case modeling an acyl-homoserine lactone (AHL). The Receiver drives the repressor, which then is the input to the Not operator.\n\n\n```python\nahl = Supplement(name='AHL1')\nrec = Receiver(input=ahl, output=repressor, alpha=[0,100], K=1, n=2)\n```\n\n## Add the Operators and Regulator to the GeneticNetwork\nAdding the two Operators and the Regulator effectively forms an inverter circuit, as can be seen from the graph visualization.\n\n\n```python\nnetwork.add_operators([rec,not_])\nnetwork.add_regulator(repressor)\n```\n\n## Draw the GeneticNetwork as a graph\nWe can now make a visual representation of our GeneticNetwork to check it is wired up correctly.\n\n\n```python\nplt.figure(figsize=(3,3), dpi=150)\nnetwork.draw()\n```\n\n## Simulate the GeneticNetwork\nIn order to simulate the GeneticNetwork behaviour we need to specify the growth conditions in which it will operate. To do this we create a SimulatedMetabolism object which specifies growth functions.\n\n\n```python\ndef growth_rate(t):\n return gompertz_growth_rate(t, 0.05, 1, 1, 1)\n\ndef biomass(t):\n return gompertz(t, 0.05, 1, 1, 1)\n \nmetab = SimulatedMetabolism(biomass, growth_rate)\n\nmedia = fj.create('media', name='loica', description='Simulated loica media')\nstrain = fj.create('strain', name='loica', description='Loica test strain')\n```\n\nNow we can create Samples that contain our GeneticNetwork driven by the SimulatedMetabolism. We also need to specify the Media and Strain, in order to link to the Flapjack data model. 
To test the inverter behaviour we must also add the signal (ahl) at a range of concentrations.\n\n\n```python\n# Create list of samples \nsamples = []\nconcs = np.append(0, np.logspace(-4, 2, 18))\nfor conc in concs:\n for _ in range(1):\n sample = Sample(genetic_network=network, \n metabolism=metab,\n media=media.id[0],\n strain=strain.id[0])\n # Add AHL to samples at given concentration\n sample.add_supplement(ahl, conc)\n samples.append(sample)\n```\n\nGiven our Samples, we can now create an Assay which will simulate an experiment containing them. We need to specify the biomass signal in order to link to the Flapjack data model for later upload. Running the assay will simulate the behaviour of the GeneticNetwork.\n\n\n```python\nbiomass_signal = fj.create('signal', name='SOD', description='Simulated OD', color='black')\n```\n\n\n```python\nassay = Assay(samples, \n n_measurements=100, \n interval=0.24,\n name='Loica inverter',\n description='Simulated inverter generated by loica',\n biomass_signal_id=biomass_signal.id[0]\n )\nassay.run()\n```\n\n## Upload simulated data to Flapjack\n\n\n```python\nassay.upload(fj, study.id[0])\n```\n\nNow we can check that the simulation worked by plotting an induction curve using the PyFlapjack package to connect to the Flapjack API. This also allows us to see if we have covered the dynamic range of the inverter, in order to correctly characterize the Not operator.\n\n\n```python\nahl1_id = fj.get('chemical', name='AHL1').id[0]\nfig = fj.plot(study=study.id, \n vector=vector.id,\n signal=sfp.id,\n type='Induction Curve',\n analyte=ahl1_id,\n function='Mean Expression',\n biomass_signal=biomass_signal.id[0],\n normalize='None',\n subplots='Signal',\n markers='Vector',\n plot='All data points'\n )\nfig\n```\n\n## Characterize the Not operator from the uploaded data\n\n\n```python\nnot_.characterize(\n fj,\n receiver=receiver_vector.id,\n inverter=vector.id,\n media=media.id,\n strain=strain.id,\n signal=sfp.id,\n biomass_signal=biomass_signal.id,\n gamma=0\n)\n```\n\n\n```python\nnot_.a, not_.b, not_.K, not_.n\n```\n\n\n```python\nnot_.a_A, not_.b_A, not_.K_A, not_.n_A\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "9785ef1aad129c4093c0b7b499644c6e4903768a", "size": 12839, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Not.ipynb", "max_stars_repo_name": "RudgeLab/LOICA", "max_stars_repo_head_hexsha": "ea7f203ccd9642a6793537184dbccc764521f6fc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-16T22:00:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-16T22:00:24.000Z", "max_issues_repo_path": "notebooks/Not.ipynb", "max_issues_repo_name": "RudgeLab/LOICA", "max_issues_repo_head_hexsha": "ea7f203ccd9642a6793537184dbccc764521f6fc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 18, "max_issues_repo_issues_event_min_datetime": "2021-12-03T13:26:47.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-23T00:57:46.000Z", "max_forks_repo_path": "notebooks/Not.ipynb", "max_forks_repo_name": "RudgeLab/LOICA", "max_forks_repo_head_hexsha": "ea7f203ccd9642a6793537184dbccc764521f6fc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.1437632135, "max_line_length": 324, "alphanum_fraction": 0.5541708856, "converted": true, "num_tokens": 1743, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
NO", "lm_q1_score": 0.600188359260205, "lm_q2_score": 0.476579651063676, "lm_q1q2_score": 0.2860375588287087}} {"text": "##### Copyright 2020 The TensorFlow Authors.\n\n\n```\n#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n```\n\n# \u3053\u3093\u306b\u3061\u306f\u3001\u591a\u304f\u306e\u4e16\u754c\n\n\n \n \n \n \n
View on TensorFlow.org | Run in Google Colab | View source on GitHub | Download notebook
\n\n\u3053\u306e\u30c1\u30e5\u30fc\u30c8\u30ea\u30a2\u30eb\u3067\u306f\u3001\u53e4\u5178\u7684\u306a\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u304c\u91cf\u5b50\u30d3\u30c3\u30c8\u30fb\u30ad\u30e3\u30ea\u30d6\u30ec\u30fc\u30b7\u30e7\u30f3\u30a8\u30e9\u30fc\u306e\u8a02\u6b63\u3092\u5b66\u7fd2\u3059\u308b\u65b9\u6cd5\u3092\u7d39\u4ecb\u3057\u307e\u3059\u3002Circq \u306f NISQ\uff08\u30ce\u30a4\u30ba\u306e\u591a\u3044\u4e2d\u9593\u30b9\u30b1\u30fc\u30eb\u91cf\u5b50\uff09\u56de\u8def\u3092\u4f5c\u6210\u3001\u7de8\u96c6\u3001\u547c\u3073\u51fa\u3059\u305f\u3081\u306e Python \u30d5\u30ec\u30fc\u30e0\u30ef\u30fc\u30af\u3067\u3042\u308a\u3001\u3053\u3053\u3067\u306f Cirq \u304c TensorFlow Quantum \u3068\u3069\u306e\u3088\u3046\u306b\u3084\u308a\u53d6\u308a\u3059\u308b\u304b\u3092\u793a\u3057\u307e\u3059\u3002\n\n## \u30bb\u30c3\u30c8\u30a2\u30c3\u30d7\n\n\n```\n!pip install tensorflow==2.1.0\n```\n\nTensorFlow Quantum \u3092\u30a4\u30f3\u30b9\u30c8\u30fc\u30eb\u3057\u307e\u3059\u3002\n\n\n```\n!pip install tensorflow-quantum\n```\n\n\u6b21\u306b\u3001TensorFlow \u3068\u30e2\u30b8\u30e5\u30fc\u30eb\u306e\u4f9d\u5b58\u95a2\u4fc2\u3092\u30a4\u30f3\u30dd\u30fc\u30c8\u3057\u307e\u3059\u3002\n\n\n```\nimport tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit\n```\n\n## 1. \u57fa\u672c\n\n### 1.1 Cirq \u3068\u30d1\u30e9\u30e1\u30fc\u30bf\u5316\u3055\u308c\u305f\u91cf\u5b50\u56de\u8def\n\nTensorFlow Quantum (TFQ) \u306b\u3064\u3044\u3066\u8aac\u660e\u3059\u308b\u524d\u306b\u3001Circq \u306e\u57fa\u672c\u3092\u3044\u304f\u3064\u304b\u898b\u3066\u307f\u307e\u3057\u3087\u3046\u3002Cirq \u306f\u3001Google \u306e\u91cf\u5b50\u30b3\u30f3\u30d4\u30e5\u30fc\u30c6\u30a3\u30f3\u30b0\u7528\u306e Python \u30e9\u30a4\u30d6\u30e9\u30ea\u3067\u3001\u9759\u7684\u30b2\u30fc\u30c8\u3084\u30d1\u30e9\u30e1\u30fc\u30bf\u5316\u3055\u308c\u305f\u30b2\u30fc\u30c8\u306a\u3069\u306e\u56de\u8def\u306e\u5b9a\u7fa9\u306b\u4f7f\u7528\u3057\u307e\u3059\u3002\n\nCirq \u306f\u3001SymPy \u30b7\u30f3\u30dc\u30eb\u3092\u4f7f\u7528\u3057\u3066\u81ea\u7531\u30d1\u30e9\u30e1\u30fc\u30bf\u3092\u8868\u3057\u307e\u3059\u3002\n\n\n```\na, b = sympy.symbols('a b')\n```\n\n\u6b21\u306e\u30b3\u30fc\u30c9\u306f\u3001\u4e0a\u8a18\u306e\u30d1\u30e9\u30e1\u30fc\u30bf\u3092\u4f7f\u7528\u3057\u3066 2 \u3064\u306e\u91cf\u5b50\u30d3\u30c3\u30c8\u56de\u8def\u3092\u4f5c\u6210\u3057\u307e\u3059\u3002\n\n\n```\n# Create two qubits\nq0, q1 = cirq.GridQubit.rect(1, 2)\n\n# Create a circuit on these qubits using the parameters you created above.\ncircuit = cirq.Circuit(\n cirq.rx(a).on(q0),\n cirq.ry(b).on(q1), cirq.CNOT(control=q0, target=q1))\n\nSVGCircuit(circuit)\n```\n\n\u56de\u8def\u3092\u8a55\u4fa1\u3059\u308b\u306b\u306f\u3001`cirq.Simulator`\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30fc\u30b9\u3092\u4f7f\u7528\u3057\u307e\u3059\u3002\u56de\u8def\u5185\u306e\u81ea\u7531\u30d1\u30e9\u30e1\u30fc\u30bf\u3092\u7279\u5b9a\u306e\u6570\u5024\u306b\u7f6e\u304d\u63db\u3048\u308b\u306b\u306f\u3001`cirq.ParamResolver`\u30aa\u30d6\u30b8\u30a7\u30af\u30c8\u3092\u6e21\u3057\u307e\u3059\u3002\u4ee5\u4e0b\u306e\u30b3\u30fc\u30c9\u306f\u3001\u30d1\u30e9\u30e1\u30fc\u30bf\u5316\u3055\u308c\u305f\u56de\u8def\u306e\u751f\u306e\u72b6\u614b\u30d9\u30af\u30c8\u30eb\u51fa\u529b\u3092\u8a08\u7b97\u3057\u307e\u3059\u3002\n\n\n```\n# Calculate a state vector with a=0.5 and b=-0.5.\nresolver = 
cirq.ParamResolver({a: 0.5, b: -0.5})\noutput_state_vector = cirq.Simulator().simulate(circuit, resolver).final_state\noutput_state_vector\n```\n\n\u72b6\u614b\u30d9\u30af\u30c8\u30eb\u306f\u3001\u30b7\u30df\u30e5\u30ec\u30fc\u30b7\u30e7\u30f3\u306e\u5916\u304b\u3089\u76f4\u63a5\u30a2\u30af\u30bb\u30b9\u3059\u308b\u3053\u3068\u306f\u3067\u304d\u307e\u305b\u3093\uff08\u4e0a\u8a18\u306e\u8907\u7d20\u6570\u51fa\u529b\u306b\u6ce8\u610f\u3057\u3066\u304f\u3060\u3055\u3044\uff09\u3002\u7269\u7406\u7684\u306b\u73fe\u5b9f\u7684\u306b\u3059\u308b\u306b\u306f\u3001\u72b6\u614b\u30d9\u30af\u30c8\u30eb\u3092\u53e4\u5178\u7684\u30b3\u30f3\u30d4\u30e5\u30fc\u30bf\u304c\u7406\u89e3\u3067\u304d\u308b\u5b9f\u6570\u306b\u5909\u63db\u3059\u308b\u6e2c\u5b9a\u5024\u3092\u6307\u5b9a\u3059\u308b\u5fc5\u8981\u304c\u3042\u308a\u307e\u3059\u3002Cirq \u306f\u3001Pauli \u6f14\u7b97\u5b50 $\\hat{X}$, $\\hat{Y}$ \u304a\u3088\u3073 $\\hat{Z}$ \u306e\u7d44\u307f\u5408\u308f\u305b\u3092\u4f7f\u7528\u3057\u3066\u6e2c\u5b9a\u5024\u3092\u6307\u5b9a\u3057\u307e\u3059\u3002\u4f8b\u3068\u3057\u3066\u3001\u6b21\u306e\u30b3\u30fc\u30c9\u306f\u3001\u30b7\u30df\u30e5\u30ec\u30fc\u30b7\u30e7\u30f3\u3057\u305f\u72b6\u614b\u30d9\u30af\u30c8\u30eb\u3067 $\\hat{Z}_0$ \u3068 $\\frac{1}{2}\\hat{Z}_0 + \\hat{X}_1$ \u3092\u6e2c\u5b9a\u3057\u307e\u3059\u3002\n\n\n```\nz0 = cirq.Z(q0)\n\nqubit_map={q0: 0, q1: 1}\n\nz0.expectation_from_wavefunction(output_state_vector, qubit_map).real\n```\n\n\n```\nz0x1 = 0.5 * z0 + cirq.X(q1)\n\nz0x1.expectation_from_wavefunction(output_state_vector, qubit_map).real\n```\n\n### 1.2 \u30c6\u30f3\u30bd\u30eb\u3068\u3057\u3066\u306e\u91cf\u5b50\u56de\u8def\n\nTensorFlow Quantum (TFQ) \u306f\u3001Cirq \u30aa\u30d6\u30b8\u30a7\u30af\u30c8\u3092\u30c6\u30f3\u30bd\u30eb\u306b\u5909\u63db\u3059\u308b\u95a2\u6570\u3067\u3042\u308b`tfq.convert_to_tensor`\u3092\u63d0\u4f9b\u3057\u307e\u3059\u3002\u3053\u308c\u306b\u3088\u308a\u3001Cirq \u30aa\u30d6\u30b8\u30a7\u30af\u30c8\u3092\u91cf\u5b50\u30ec\u30a4\u30e4\u30fc\u304a\u3088\u3073\u91cf\u5b50\u6f14\u7b97\u306b\u9001\u4fe1\u3067\u304d\u307e\u3059\u3002\u3053\u306e\u95a2\u6570\u306f\u3001Cirq Circuits \u3068 Cirq Paulis \u306e\u30ea\u30b9\u30c8\u307e\u305f\u306f\u914d\u5217\u3067\u547c\u3073\u51fa\u3059\u3053\u3068\u304c\u3067\u304d\u307e\u3059\u3002\n\n\n```\n# Rank 1 tensor containing 1 circuit.\ncircuit_tensor = tfq.convert_to_tensor([circuit])\n\nprint(circuit_tensor.shape)\nprint(circuit_tensor.dtype)\n```\n\n\u3053\u308c\u306f\u3001Cirq \u30aa\u30d6\u30b8\u30a7\u30af\u30c8\u3092`tf.string`\u30c6\u30f3\u30bd\u30eb\u3068\u3057\u3066\u30a8\u30f3\u30b3\u30fc\u30c9\u3057\u3001`tfq`\u6f14\u7b97\u306f\u5fc5\u8981\u306b\u5fdc\u3058\u3066\u30c7\u30b3\u30fc\u30c9\u3057\u307e\u3059\u3002\n\n\n```\n# Rank 1 tensor containing 2 Pauli operators.\npauli_tensor = tfq.convert_to_tensor([z0, z0x1])\npauli_tensor.shape\n```\n\n### 1.3 \u30d0\u30c3\u30c1\u56de\u8def\u30b7\u30df\u30e5\u30ec\u30fc\u30b7\u30e7\u30f3\n\nTFQ 
\u306f\u3001\u671f\u5f85\u5024\u3001\u30b5\u30f3\u30d7\u30eb\u3001\u304a\u3088\u3073\u72b6\u614b\u30d9\u30af\u30c8\u30eb\u3092\u8a08\u7b97\u3059\u308b\u305f\u3081\u306e\u30e1\u30bd\u30c3\u30c9\u3092\u63d0\u4f9b\u3057\u307e\u3059\u3002\u307e\u305a\u3001*\u671f\u5f85\u5024*\u304b\u3089\u898b\u3066\u3044\u304d\u307e\u3057\u3087\u3046\u3002\n\n\u671f\u5f85\u5024\u3092\u8a08\u7b97\u3059\u308b\u305f\u3081\u306e\u6700\u9ad8\u30ec\u30d9\u30eb\u306e\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30fc\u30b9\u306f\u3001`tf.keras.Layer`\u3067\u3042\u308b`tfq.layers.Expectation`\u30ec\u30a4\u30e4\u30fc\u3067\u3059\u3002\u6700\u3082\u5358\u7d14\u306a\u5f62\u5f0f\u3067\u306f\u3001\u3053\u306e\u30ec\u30a4\u30e4\u30fc\u306f\u3001\u591a\u304f\u306e`cirq.ParamResolvers`\u3067\u30d1\u30e9\u30e1\u30fc\u30bf\u5316\u3055\u308c\u305f\u56de\u8def\u3092\u30b7\u30df\u30e5\u30ec\u30fc\u30c8\u3059\u308b\u3053\u3068\u3068\u540c\u7b49\u3067\u3059\u304c\u3001TFQ \u3067\u306f TensorFlow \u30bb\u30de\u30f3\u30c6\u30a3\u30af\u30b9\u306b\u5f93\u3063\u305f\u30d0\u30c3\u30c1\u51e6\u7406\u304c\u53ef\u80fd\u3067\u3042\u308a\u3001\u56de\u8def\u306f\u52b9\u7387\u7684\u306a C++ \u30b3\u30fc\u30c9\u3092\u4f7f\u7528\u3057\u3066\u30b7\u30df\u30e5\u30ec\u30fc\u30c8\u3055\u308c\u307e\u3059\u3002\n\n`a`\u3068`b`\u30d1\u30e9\u30e1\u30fc\u30bf\u306e\u4ee3\u308f\u308a\u306b\u5024\u306e\u30d0\u30c3\u30c1\u3092\u4f5c\u6210\u3057\u307e\u3059\u3002\n\n\n```\nbatch_vals = np.array(np.random.uniform(0, 2 * np.pi, (5, 2)), dtype=np.float32)\n```\n\nCirq \u306e\u30d1\u30e9\u30e1\u30fc\u30bf\u5024\u306b\u5bfe\u3059\u308b\u30d0\u30c3\u30c1\u56de\u8def\u306e\u5b9f\u884c\u306b\u306f\u3001\u30eb\u30fc\u30d7\u304c\u5fc5\u8981\u3067\u3059\u3002\n\n\n```\ncirq_results = []\ncirq_simulator = cirq.Simulator()\n\nfor vals in batch_vals:\n resolver = cirq.ParamResolver({a: vals[0], b: vals[1]})\n final_state = cirq_simulator.simulate(circuit, resolver).final_state\n cirq_results.append(\n [z0.expectation_from_wavefunction(final_state, {\n q0: 0,\n q1: 1\n }).real])\n\nprint('cirq batch results: \\n {}'.format(np.array(cirq_results)))\n```\n\nTFQ \u3067\u306f\u540c\u3058\u6f14\u7b97\u304c\u7c21\u7565\u5316\u3055\u308c\u3066\u3044\u307e\u3059\u3002\n\n\n```\ntfq.layers.Expectation()(circuit,\n symbol_names=[a, b],\n symbol_values=batch_vals,\n operators=z0)\n```\n\n## 2. 
\u91cf\u5b50\u53e4\u5178\u30cf\u30a4\u30d6\u30ea\u30c3\u30c9\u306e\u6700\u9069\u5316\n\n\u4ee5\u4e0a\u306f\u57fa\u672c\u306e\u8aac\u660e\u3067\u3057\u305f\u3002\u6b21\u306b\u3001TensorFlow Quantum \u3092\u4f7f\u7528\u3057\u3066*\u91cf\u5b50\u53e4\u5178\u30cf\u30a4\u30d6\u30ea\u30c3\u30c9\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8*\u3092\u69cb\u7bc9\u3057\u307e\u3057\u3087\u3046\u3002\u53e4\u5178\u7684\u306a\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u3092\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u3057\u3066\u30011 \u3064\u306e\u91cf\u5b50\u30d3\u30c3\u30c8\u3092\u5236\u5fa1\u3057\u307e\u3059\u3002\u30b3\u30f3\u30c8\u30ed\u30fc\u30eb\u306f\u3001`0`\u307e\u305f\u306f`1`\u306e\u72b6\u614b\u306e\u91cf\u5b50\u30d3\u30c3\u30c8\u3092\u6b63\u3057\u304f\u6e96\u5099\u3059\u308b\u3088\u3046\u306b\u6700\u9069\u5316\u3055\u308c\u3001\u30b7\u30df\u30e5\u30ec\u30fc\u30c8\u3055\u308c\u305f\u7cfb\u7d71\u7684\u306a\u30ad\u30e3\u30ea\u30d6\u30ec\u30fc\u30b7\u30e7\u30f3\u30a8\u30e9\u30fc\u3092\u514b\u670d\u3057\u307e\u3059\u3002\u4ee5\u4e0b\u306e\u56f3\u306f\u3001\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\u3092\u793a\u3057\u3066\u3044\u307e\u3059\u3002\n\n \n\n\u3053\u308c\u306f\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u304c\u306a\u304f\u3066\u3082\u7c21\u5358\u306b\u89e3\u6c7a\u3067\u304d\u308b\u554f\u984c\u3067\u3059\u304c\u3001\u30c6\u30fc\u30de\u306f TFQ \u3092\u4f7f\u7528\u3057\u3066\u89e3\u6c7a\u3067\u304d\u308b\u5b9f\u969b\u306e\u91cf\u5b50\u5236\u5fa1\u306e\u554f\u984c\u3068\u4f3c\u3066\u3044\u307e\u3059\u3002\u3053\u308c\u306f\u3001`tf.keras.Model`\u5185\u306e`tfq.layers.ControlledPQC` (Parametrized Quantum Circuit) \u30ec\u30a4\u30e4\u30fc\u3092\u4f7f\u7528\u3057\u305f\u91cf\u5b50\u53e4\u5178\u8a08\u7b97\u306e\u30a8\u30f3\u30c9\u30c4\u30fc\u30a8\u30f3\u30c9\u306e\u4f8b\u3092\u793a\u3057\u3066\u3044\u307e\u3059\u3002\n\n\u3053\u306e\u30c1\u30e5\u30fc\u30c8\u30ea\u30a2\u30eb\u306e\u5b9f\u88c5\u3067\u306f\u3001\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\u306f 3 \u3064\u306e\u90e8\u5206\u306b\u5206\u304b\u308c\u3066\u3044\u307e\u3059\u3002\n\n- *\u5165\u529b\u56de\u8def*\u307e\u305f\u306f*\u30c7\u30fc\u30bf\u30dd\u30a4\u30f3\u30c8\u56de\u8def*\uff1a\u6700\u521d\u306e 3 \u3064\u306e $R$ \u30b2\u30fc\u30c8\u3002\n- *\u5236\u5fa1\u56de\u8def*\uff1a\u305d\u306e\u4ed6\u306e 3 \u3064\u306e $R$ \u30b2\u30fc\u30c8\u3002\n- *\u30b3\u30f3\u30c8\u30ed\u30fc\u30e9*\uff1a\u5236\u5fa1\u56de\u8def\u306e\u30d1\u30e9\u30e1\u30fc\u30bf\u3092\u8a2d\u5b9a\u3059\u308b\u53e4\u5178\u7684\u306a\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3002\n\n### 2.1 \u5236\u5fa1\u56de\u8def\u306e\u5b9a\u7fa9\n\n\u4e0a\u306e\u56f3\u306b\u793a\u3059\u3088\u3046\u306b\u3001\u5b66\u7fd2\u53ef\u80fd\u306a\u30b7\u30f3\u30b0\u30eb\u30d3\u30c3\u30c8\u30ed\u30fc\u30c6\u30fc\u30b7\u30e7\u30f3\u3092\u5b9a\u7fa9\u3057\u307e\u3059\u3002\u3053\u308c\u306f\u3001\u5236\u5fa1\u56de\u8def\u306b\u5bfe\u5fdc\u3057\u307e\u3059\u3002\n\n\n```\n# Parameters that the classical NN will feed values into.\ncontrol_params = sympy.symbols('theta_1 theta_2 theta_3')\n\n# Create the parameterized circuit.\nqubit = cirq.GridQubit(0, 0)\nmodel_circuit = cirq.Circuit(\n cirq.rz(control_params[0])(qubit),\n cirq.ry(control_params[1])(qubit),\n cirq.rx(control_params[2])(qubit))\n\nSVGCircuit(model_circuit)\n```\n\n### 2.2 \u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\n\n\u6b21\u306b\u3001\u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3092\u5b9a\u7fa9\u3057\u307e\u3059\u3002 \n\n\n```\n# The classical 
neural network layers.\ncontroller = tf.keras.Sequential([\n tf.keras.layers.Dense(10, activation='elu'),\n tf.keras.layers.Dense(3)\n])\n```\n\n\u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\u306b\u30b3\u30de\u30f3\u30c9\u306e\u30d0\u30c3\u30c1\u3092\u4e0e\u3048\u308b\u3068\u3001\u5236\u5fa1\u3055\u308c\u305f\u56de\u8def\u306e\u5236\u5fa1\u4fe1\u53f7\u306e\u30d0\u30c3\u30c1\u304c\u51fa\u529b\u3055\u308c\u307e\u3059\u3002\n\n\u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\u306f\u30e9\u30f3\u30c0\u30e0\u306b\u521d\u671f\u5316\u3055\u308c\u308b\u305f\u3081\u3001\u3053\u308c\u3089\u306e\u51fa\u529b\u306f\u307e\u3060\u6709\u7528\u3067\u306f\u3042\u308a\u307e\u305b\u3093\u3002\n\n\n```\ncontroller(tf.constant([[0.0],[1.0]])).numpy()\n```\n\n### 2.3 \u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\u3092\u56de\u8def\u306b\u63a5\u7d9a\u3059\u308b\n\n`tfq`\u3092\u4f7f\u7528\u3057\u3066\u3001\u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\u3092 1 \u3064\u306e`keras.Model`\u3068\u3057\u3066\u5236\u5fa1\u56de\u8def\u306b\u63a5\u7d9a\u3057\u307e\u3059\u3002\n\n\u3053\u306e\u30b9\u30bf\u30a4\u30eb\u306e\u30e2\u30c7\u30eb\u5b9a\u7fa9\u306e\u8a73\u7d30\u306b\u3064\u3044\u3066\u306f\u3001[Keras Functional API \u30ac\u30a4\u30c9](https://www.tensorflow.org/guide/keras/functional)\u3092\u3054\u89a7\u304f\u3060\u3055\u3044\u3002\n\n\u307e\u305a\u3001\u30e2\u30c7\u30eb\u3078\u306e\u5165\u529b\u3092\u5b9a\u7fa9\u3057\u307e\u3059\u3002 \n\n\n```\n# This input is the simulated miscalibration that the model will learn to correct.\ncircuits_input = tf.keras.Input(shape=(),\n # The circuit-tensor has dtype `tf.string` \n dtype=tf.string,\n name='circuits_input')\n\n# Commands will be either `0` or `1`, specifying the state to set the qubit to.\ncommands_input = tf.keras.Input(shape=(1,),\n dtype=tf.dtypes.float32,\n name='commands_input')\n\n```\n\n\u6b21\u306b\u3001\u3053\u308c\u3089\u306e\u5165\u529b\u306b\u6f14\u7b97\u3092\u9069\u7528\u3057\u3066\u3001\u8a08\u7b97\u3092\u5b9a\u7fa9\u3057\u307e\u3059\u3002\n\n\n```\ndense_2 = controller(commands_input)\n\n# TFQ layer for classically controlled circuits.\nexpectation_layer = tfq.layers.ControlledPQC(model_circuit,\n # Observe Z\n operators = cirq.Z(qubit))\nexpectation = expectation_layer([circuits_input, dense_2])\n```\n\n\u6b21\u306b\u3001\u3053\u306e\u8a08\u7b97\u3092`tf.keras.Model`\u3068\u3057\u3066\u30d1\u30c3\u30b1\u30fc\u30b8\u5316\u3057\u307e\u3059\u3002\n\n\n```\n# The full Keras model is built from our layers.\nmodel = tf.keras.Model(inputs=[circuits_input, commands_input],\n outputs=expectation)\n```\n\n\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\u306f\u3001\u4ee5\u4e0b\u306e\u30e2\u30c7\u30eb\u306e\u30d7\u30ed\u30c3\u30c8\u3067\u793a\u3055\u308c\u3066\u3044\u307e\u3059\u3002\u3053\u306e\u30e2\u30c7\u30eb\u30d7\u30ed\u30c3\u30c8\u3092\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\u56f3\u3068\u6bd4\u8f03\u3057\u3066\u3001\u6b63\u78ba\u3055\u3092\u78ba\u8a8d\u3057\u307e\u3059\u3002\n\n\u6ce8\u610f: `graphviz`\u30d1\u30c3\u30b1\u30fc\u30b8\u306e\u30b7\u30b9\u30c6\u30e0\u30a4\u30f3\u30b9\u30c8\u30fc\u30eb\u304c\u5fc5\u8981\u306b\u306a\u308b\u5834\u5408\u304c\u3042\u308a\u307e\u3059\u3002\n\n\n```\ntf.keras.utils.plot_model(model, show_shapes=True, dpi=70)\n```\n\n\u3053\u306e\u30e2\u30c7\u30eb\u306f\u3001\u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\u306e\u30b3\u30de\u30f3\u30c9\u3068\u3001\u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\u304c\u51fa\u529b\u3092\u4fee\u6b63\u3057\u3088\u3046\u3068\u3057\u3066\u3044\u308b\u5165\u529b\u56de\u8def\u306e 2 
\u3064\u306e\u5165\u529b\u3092\u53d7\u3051\u53d6\u308a\u307e\u3059\u3002 \n\n### 2.4 \u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\n\n\u30e2\u30c7\u30eb\u306f\u3001\u30b3\u30de\u30f3\u30c9\u3054\u3068\u306b $\\hat{Z}$ \u306e\u6b63\u3057\u3044\u6e2c\u5b9a\u5024\u306e\u51fa\u529b\u3092\u8a66\u884c\u3057\u307e\u3059\u3002\u30b3\u30de\u30f3\u30c9\u3068\u6b63\u3057\u3044\u5024\u306e\u5b9a\u7fa9\u306f\u4ee5\u4e0b\u306e\u3068\u304a\u308a\u3067\u3059\u3002\n\n\n```\n# The command input values to the classical NN.\ncommands = np.array([[0], [1]], dtype=np.float32)\n\n# The desired Z expectation value at output of quantum circuit.\nexpected_outputs = np.array([[1], [-1]], dtype=np.float32)\n```\n\n\u3053\u308c\u306f\u3001\u3053\u306e\u30bf\u30b9\u30af\u306e\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u5168\u4f53\u3067\u306f\u3042\u308a\u307e\u305b\u3093\u3002\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u5185\u306e\u5404\u30c7\u30fc\u30bf\u30dd\u30a4\u30f3\u30c8\u306b\u3082\u5165\u529b\u56de\u8def\u304c\u5fc5\u8981\u3067\u3059\u3002\n\n### 2.4 \u5165\u529b\u56de\u8def\u306e\u5b9a\u7fa9\n\n\u4ee5\u4e0b\u306e\u5165\u529b\u56de\u8def\u306f\u3001\u30e2\u30c7\u30eb\u304c\u4fee\u6b63\u3059\u308b\u3053\u3068\u3092\u5b66\u7fd2\u3059\u308b\u305f\u3081\u306e\u30e9\u30f3\u30c0\u30e0\u306a\u8aa4\u6821\u6b63\u3092\u5b9a\u7fa9\u3057\u307e\u3059\u3002\n\n\n```\nrandom_rotations = np.random.uniform(0, 2 * np.pi, 3)\nnoisy_preparation = cirq.Circuit(\n cirq.rx(random_rotations[0])(qubit),\n cirq.ry(random_rotations[1])(qubit),\n cirq.rz(random_rotations[2])(qubit)\n)\ndatapoint_circuits = tfq.convert_to_tensor([\n noisy_preparation\n] * 2) # Make two copied of this circuit\n```\n\n\u56de\u8def\u306b\u306f 2 \u3064\u306e\u30b3\u30d4\u30fc\u304c\u3042\u308a\u307e\u3059\uff08\u30c7\u30fc\u30bf\u30dd\u30a4\u30f3\u30c8\u3054\u3068\u306b 1 \u3064\u305a\u3064\uff09\u3002\n\n\n```\ndatapoint_circuits.shape\n```\n\n### 2.5 \u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\n\n\u5b9a\u7fa9\u3055\u308c\u305f\u5165\u529b\u3092\u4f7f\u7528\u3057\u3066\u3001`tfq`\u30e2\u30c7\u30eb\u306e\u30c6\u30b9\u30c8\u30e9\u30f3\u3092\u5b9f\u884c\u3057\u307e\u3059\u3002\n\n\n```\nmodel([datapoint_circuits, commands]).numpy()\n```\n\n\u6b21\u306b\u3001\u6a19\u6e96\u306e\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u30d7\u30ed\u30bb\u30b9\u3092\u5b9f\u884c\u3057\u3066\u3001\u3053\u308c\u3089\u306e\u5024\u3092`expected_outputs`\u306b\u5411\u3051\u3066\u8abf\u6574\u3057\u307e\u3059\u3002\n\n\n```\noptimizer = tf.keras.optimizers.Adam(learning_rate=0.05)\nloss = tf.keras.losses.MeanSquaredError()\nmodel.compile(optimizer=optimizer, loss=loss)\nhistory = model.fit(x=[datapoint_circuits, commands],\n y=expected_outputs,\n epochs=30,\n verbose=0)\n```\n\n\n```\nplt.plot(history.history['loss'])\nplt.title(\"Learning to Control a Qubit\")\nplt.xlabel(\"Iterations\")\nplt.ylabel(\"Error in Control\")\nplt.show()\n```\n\n\u3053\u306e\u30d7\u30ed\u30c3\u30c8\u304b\u3089\u3001\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u304c\u4f53\u7cfb\u7684\u306a\u30ad\u30e3\u30ea\u30d6\u30ec\u30fc\u30b7\u30e7\u30f3\u30a8\u30e9\u30fc\u3092\u8a02\u6b63\u3059\u308b\u3053\u3068\u3092\u5b66\u7fd2\u3057\u305f\u3053\u3068\u304c\u308f\u304b\u308a\u307e\u3059\u3002\n\n### 2.6 
\u51fa\u529b\u306e\u78ba\u8a8d\n\n\u6b21\u306b\u3001\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u6e08\u307f\u30e2\u30c7\u30eb\u3092\u4f7f\u7528\u3057\u3066\u3001\u91cf\u5b50\u30d3\u30c3\u30c8\u30fb\u30ad\u30e3\u30ea\u30d6\u30ec\u30fc\u30b7\u30e7\u30f3\u30a8\u30e9\u30fc\u3092\u4fee\u6b63\u3057\u307e\u3059\u3002Cirq \u3092\u4f7f\u7528\u3059\u308b\u5834\u5408\u306f\u4ee5\u4e0b\u306e\u3068\u304a\u308a\u3067\u3059\u3002\n\n\n```\ndef check_error(command_values, desired_values):\n \"\"\"Based on the value in `command_value` see how well you could prepare\n the full circuit to have `desired_value` when taking expectation w.r.t. Z.\"\"\"\n params_to_prepare_output = controller(command_values).numpy()\n full_circuit = noisy_preparation + model_circuit\n\n # Test how well you can prepare a state to get expectation the expectation\n # value in `desired_values`\n for index in [0, 1]:\n state = cirq_simulator.simulate(\n full_circuit,\n {s:v for (s,v) in zip(control_params, params_to_prepare_output[index])}\n ).final_state\n expectation = z0.expectation_from_wavefunction(state, {qubit: 0}).real\n print(f'For a desired output (expectation) of {desired_values[index]} with'\n f' noisy preparation, the controller\\nnetwork found the following '\n f'values for theta: {params_to_prepare_output[index]}\\nWhich gives an'\n f' actual expectation of: {expectation}\\n')\n\n\ncheck_error(commands, expected_outputs)\n```\n\n\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u4e2d\u306e\u640d\u5931\u95a2\u6570\u306e\u5024\u304b\u3089\u3001\u30e2\u30c7\u30eb\u306e\u5b66\u7fd2\u304c\u3069\u308c\u307b\u3069\u9032\u3093\u3067\u3044\u308b\u304b\u304c\u5927\u307e\u304b\u306b\u5206\u304b\u308a\u307e\u3059\u3002\u640d\u5931\u304c\u5c0f\u3055\u3044\u307b\u3069\u3001\u4e0a\u8a18\u306e\u30bb\u30eb\u306e\u671f\u5f85\u5024\u306f`Desired_values`\u306b\u8fd1\u304f\u306a\u308a\u307e\u3059\u3002\u30d1\u30e9\u30e1\u30fc\u30bf\u5024\u306b\u95a2\u5fc3\u304c\u306a\u3044\u5834\u5408\u306f\u3001`tfq`\u3092\u4f7f\u7528\u3057\u3066\u4e0a\u8a18\u304b\u3089\u306e\u51fa\u529b\u3092\u3044\u3064\u3067\u3082\u78ba\u8a8d\u3067\u304d\u307e\u3059\u3002\n\n\n```\nmodel([datapoint_circuits, commands])\n```\n\n## 3 \u3055\u307e\u3056\u307e\u306a\u6f14\u7b97\u5b50\u306e\u56fa\u6709\u72b6\u614b\u306e\u6e96\u5099\u306b\u3064\u3044\u3066\u5b66\u3076\n\n1 \u3068 0 \u306b\u5bfe\u5fdc\u3059\u308b $\\pm \\hat{Z}$ \u56fa\u6709\u72b6\u614b\u306e\u9078\u629e\u306f\u4efb\u610f\u3067\u3057\u305f\u30021 \u3092 $+ \\hat{Z}$ \u56fa\u6709\u72b6\u614b\u306b\u5bfe\u5fdc\u3055\u305b\u30010 \u3092 $-\\hat{X}$ \u56fa\u6709\u72b6\u614b\u306b\u5bfe\u5fdc\u3055\u305b\u308b\u3053\u3068\u3082\u7c21\u5358\u306b\u3067\u304d\u307e\u3059\u3002\u305d\u306e\u305f\u3081\u306b\u306f\u3001\u6b21\u306e\u56f3\u306b\u793a\u3059\u3088\u3046\u306b\u3001\u30b3\u30de\u30f3\u30c9\u3054\u3068\u306b\u7570\u306a\u308b\u6e2c\u5b9a\u6f14\u7b97\u5b50\u3092\u6307\u5b9a\u3057\u307e\u3059\u3002\n\n \n\n\u3053\u308c\u306b\u306f\u3001tfq.layers.Expectation\u3092\u4f7f\u7528\u3059\u308b\u5fc5\u8981\u304c\u3042\u308a\u307e\u3059\u3002\u3053\u308c\u3067\u3001\u5165\u529b\u306f\u3001\u56de\u8def\u3001\u30b3\u30de\u30f3\u30c9\u3001\u304a\u3088\u3073\u6f14\u7b97\u5b50\u306e 3 \u3064\u306e\u30aa\u30d6\u30b8\u30a7\u30af\u30c8\u3092\u542b\u3080\u3088\u3046\u306b\u306a\u308a\u307e\u3057\u305f\u3002\u51fa\u529b\u306f\u671f\u5f85\u5024\u306e\u307e\u307e\u3067\u3059\u3002\n\n### 3.1 
\u65b0\u3057\u3044\u30e2\u30c7\u30eb\u306e\u5b9a\u7fa9\n\n\u3053\u306e\u30bf\u30b9\u30af\u3092\u5b9f\u884c\u3059\u308b\u305f\u3081\u306e\u30e2\u30c7\u30eb\u3092\u898b\u3066\u307f\u307e\u3057\u3087\u3046\u3002\n\n\n```\n# Define inputs.\ncommands_input = tf.keras.layers.Input(shape=(1),\n dtype=tf.dtypes.float32,\n name='commands_input')\ncircuits_input = tf.keras.Input(shape=(),\n # The circuit-tensor has dtype `tf.string` \n dtype=tf.dtypes.string,\n name='circuits_input')\noperators_input = tf.keras.Input(shape=(1,),\n dtype=tf.dtypes.string,\n name='operators_input')\n```\n\n\u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u306f\u6b21\u306e\u3068\u304a\u308a\u3067\u3059\u3002\n\n\n```\n# Define classical NN.\ncontroller = tf.keras.Sequential([\n tf.keras.layers.Dense(10, activation='elu'),\n tf.keras.layers.Dense(3)\n])\n```\n\n`tfq`\u3092\u4f7f\u7528\u3057\u3066\u3001\u56de\u8def\u3068\u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\u3092 1 \u3064\u306e`keras.Model`\u306b\u7d50\u5408\u3057\u307e\u3059\u3002\n\n\n```\ndense_2 = controller(commands_input)\n\n# Since you aren't using a PQC or ControlledPQC you must append\n# your model circuit onto the datapoint circuit tensor manually.\nfull_circuit = tfq.layers.AddCircuit()(circuits_input, append=model_circuit)\nexpectation_output = tfq.layers.Expectation()(full_circuit,\n symbol_names=control_params,\n symbol_values=dense_2,\n operators=operators_input)\n\n# Contruct your Keras model.\ntwo_axis_control_model = tf.keras.Model(\n inputs=[circuits_input, commands_input, operators_input],\n outputs=[expectation_output])\n```\n\n### 3.2 \u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\n\n`model_circuit`\u306b\u63d0\u4f9b\u3059\u308b\u5404\u30c7\u30fc\u30bf\u30dd\u30a4\u30f3\u30c8\u306b\u5bfe\u3057\u3066\u6e2c\u5b9a\u3059\u308b\u6f14\u7b97\u5b50\u3082\u542b\u3081\u307e\u3059\u3002\n\n\n```\n# The operators to measure, for each command.\noperator_data = tfq.convert_to_tensor([[cirq.X(qubit)], [cirq.Z(qubit)]])\n\n# The command input values to the classical NN.\ncommands = np.array([[0], [1]], dtype=np.float32)\n\n# The desired expectation value at output of quantum circuit.\nexpected_outputs = np.array([[1], [-1]], dtype=np.float32)\n```\n\n### 3.3\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\n\n\u65b0\u3057\u3044\u5165\u529b\u3068\u51fa\u529b\u3092\u4f7f\u7528\u3057\u3001keras \u3067\u3082\u3046\u4e00\u5ea6\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u3057\u307e\u3059\u3002\n\n\n```\noptimizer = tf.keras.optimizers.Adam(learning_rate=0.05)\nloss = tf.keras.losses.MeanSquaredError()\n\ntwo_axis_control_model.compile(optimizer=optimizer, loss=loss)\n\nhistory = two_axis_control_model.fit(\n x=[datapoint_circuits, commands, operator_data],\n y=expected_outputs,\n epochs=30,\n verbose=1)\n```\n\n\n```\nplt.plot(history.history['loss'])\nplt.title(\"Learning to Control a Qubit\")\nplt.xlabel(\"Iterations\")\nplt.ylabel(\"Error in 
Control\")\nplt.show()\n```\n\n\u640d\u5931\u95a2\u6570\u306f\u30bc\u30ed\u306b\u4f4e\u4e0b\u3057\u307e\u3057\u305f\u3002\n\n`controller`\u306f\u30b9\u30bf\u30f3\u30c9\u30a2\u30ed\u30f3\u30e2\u30c7\u30eb\u3068\u3057\u3066\u5229\u7528\u3067\u304d\u307e\u3059\u3002\u30b3\u30f3\u30c8\u30ed\u30fc\u30e9\u3092\u547c\u3073\u51fa\u3057\u3001\u5404\u30b3\u30de\u30f3\u30c9\u4fe1\u53f7\u306b\u5bfe\u3059\u308b\u5fdc\u7b54\u3092\u78ba\u8a8d\u3057\u307e\u3059\u3002\u591a\u5c11\u624b\u9593\u304c\u304b\u304b\u308a\u307e\u3059\u304c\u3001\u3053\u308c\u3089\u306e\u51fa\u529b\u3092`random_rotations`\u306e\u5185\u5bb9\u3068\u6bd4\u8f03\u3057\u307e\u3059\u3002\n\n\n```\ncontroller.predict(np.array([0,1]))\n```\n\n\u6210\u529f: \u6700\u521d\u306e\u30e2\u30c7\u30eb\u306e`check_error`\u95a2\u6570\u3092\u3001\u3053\u306e\u65b0\u3057\u3044\u30e2\u30c7\u30eb\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\u3067\u52d5\u4f5c\u3059\u308b\u3088\u3046\u306b\u9069\u5408\u3055\u305b\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u304b\u3069\u3046\u304b\u3092\u78ba\u8a8d\u3057\u307e\u3059\u3002\n", "meta": {"hexsha": "1e4e68b7e9d6cced874a6af17a9cba15832028e9", "size": 31110, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "site/ja/quantum/tutorials/hello_many_worlds.ipynb", "max_stars_repo_name": "seunggabi/docs-l10n", "max_stars_repo_head_hexsha": "e25b1cf2e3fd5979aaa4cd57fa343b2d2a94e1fc", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-12T18:02:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-18T19:32:41.000Z", "max_issues_repo_path": "site/ja/quantum/tutorials/hello_many_worlds.ipynb", "max_issues_repo_name": "seunggabi/docs-l10n", "max_issues_repo_head_hexsha": "e25b1cf2e3fd5979aaa4cd57fa343b2d2a94e1fc", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "site/ja/quantum/tutorials/hello_many_worlds.ipynb", "max_forks_repo_name": "seunggabi/docs-l10n", "max_forks_repo_head_hexsha": "e25b1cf2e3fd5979aaa4cd57fa343b2d2a94e1fc", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.8055555556, "max_line_length": 392, "alphanum_fraction": 0.4986820958, "converted": true, "num_tokens": 5959, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
NO", "lm_q1_score": 0.5851011686727231, "lm_q2_score": 0.4882833952958347, "lm_q1q2_score": 0.2856951852310781}} {"text": "# \u9891\u7387\u5206\u6790 (4)\uff1a\u975e\u8c10\u6027\u9891\u7387\u77eb\u6b63 VPT2\n\n\u5728\u672c\u6587\u6863\u4e2d\uff0c\u6211\u4eec\u5c06\u4f1a\u8ba8\u8bba\u975e\u8c10\u6027\u9891\u7387\u77eb\u6b63\u6a21\u578b VPT2\u3002\n\n\u5728\u91cf\u5316\u8ba1\u7b97\u4e2d\uff0c\u901a\u5e38\u5206\u5b50\u632f\u52a8\u7684\u9891\u7387\u901a\u8fc7 Hessian \u77e9\u9635\u7ed9\u51fa\u3002\u4f46\u6211\u4eec\u4e5f\u77e5\u9053\uff0cHessian \u77e9\u9635\u662f\u80fd\u91cf\u5bf9\u539f\u5b50\u6838\u5750\u6807\u7684\u4e24\u6b21\u5bfc\u6570\uff1b\u4ee5\u8fd9\u79cd\u65b9\u5f0f\u6c42\u5f97\u7684\u9891\u7387\uff0c\u76f8\u5f53\u4e8e\u6c42\u53d6\u629b\u7269\u7ebf\u52bf\u51fd\u6570\u7684\u9891\u7387\u3002\u4f46\u771f\u5b9e\u7684\u52bf\u51fd\u6570\u5e76\u975e\u629b\u7269\u7ebf (\u8b6c\u5982 Morse\u3001L-J \u52bf\u51fd\u6570\u7b49)\uff0c\u56e0\u6b64\u771f\u5b9e\u7684\u9891\u7387\u4f1a\u4e0e Hessian \u77e9\u9635\u6240\u5f97\u5230\u7684\u503c\u6709\u5c11\u8bb8\u533a\u522b\u3002\u8fd9\u4e5f\u4f1a\u5f71\u54cd\u5230\u70ed\u529b\u5b66\u77eb\u6b63\u4e2d\uff0c\u9891\u7387\u914d\u5206\u51fd\u6570\u6240\u4ea7\u751f\u7684\u8d21\u732e\u503c\u3002\n\n\u4e3a\u4e86\u83b7\u5f97\u66f4\u771f\u5b9e\u7684\u5206\u5b50\u632f\u52a8\u4fe1\u606f\uff0c\u90a3\u4e48\u5c31\u9700\u8981\u6c42\u89e3 Hamiltonian \u53d7\u5206\u5b50\u632f\u52a8\u5fae\u6270\u540e\u7684\u672c\u5f81\u6ce2\u51fd\u6570\uff0c\u5e76\u8fdb\u800c\u8fdb\u884c\u5206\u6790\u3002\u8fd1\u4f3c\u7684\u505a\u6cd5\u53ef\u4ee5\u662f\u5fae\u6270\u7406\u8bba\u7684\u77eb\u6b63\u3002\u8fd9\u7bc7\u6587\u6863\u662f\u5176\u4e2d\u4e00\u79cd\u8fd1\u4f3c\u65b9\u6cd5\uff0c\u79f0\u4e3a Vibrational Perturbation Theory to the Second-Order (VPT2)\u3002\n\n\u6211\u4eec\u4e3b\u8981\u4f1a\u4f9d\u636e V. Barone [^Barone.JCP.2005] \u6240\u63d0\u4f9b\u7684\u7b97\u6cd5\u4e0e\u516c\u5f0f\u8fdb\u884c\u5b9e\u73b0\u3002\u6211\u4eec\u4e0d\u5bf9\u516c\u5f0f\u4e0e\u5176\u6b63\u786e\u6027\u4f5c\u63a8\u6f14\u4e0e\u68c0\u67e5\u3002\u6211\u4eec\u4e3b\u8981\u4e0e Gaussian 16 rev. 
:::{note}

Running this document from start to finish takes roughly 5-10 minutes on a 4-core CPU.

:::

::: {warning}

In this document we use an asymmetric ammonia molecule that is not at an equilibrium geometry.

- **The frequency analysis has no physical meaning**: in principle a frequency analysis must be performed at an equilibrium geometry, the special exception being a transition state. In any other situation the frequency analysis is physically meaningless; one should first optimize to a stable structure and then analyze it.
- **Symmetry is not considered**: symmetric molecules can give degenerate frequencies, and in some cases the degeneracy produces numerical singularities. Molecules without degeneracies are less prone to such singularities and are easier to analyze.
- **Non-linear molecules only**: linear and non-linear molecules often have to be treated with different methods. We only consider non-linear molecules, i.e. the $3N-6$ case.

:::

:::{danger}

If this document has the fortune (or misfortune) of helping the reader, the reader should also be aware that **the implementation in this document is not guaranteed to be correct!**

The systems used for the calculations here are rather special (ammonia with an imaginary frequency, and acetaldehyde at its minimum-energy structure), so **there is no guarantee that the procedure carries over to other molecules**.

Moreover, **different versions of Gaussian have different bugs and usage quirks** (these may of course be my own mistakes, but that is for the reader to judge and take responsibility for). At the relevant places we point out the potential problems of each version and, where possible, show how to reproduce the Gaussian results. For the input file {download}`anharm.gjf`, the Gaussian versions examined are 16 rev. B01 {download}`anharm-G16B01.log`, 09 rev. D01 {download}`anharm-G09D01.log`, and 09 rev. C01 {download}`anharm-G09C01.log`.

Readers should take particular note that, when computing properties, **we work in the Eckart orientation most of the time**.

Since VPT2 theory itself is not well suited to anything beyond small molecules (frequency resonances easily produce wildly unreasonable anharmonic corrections), the procedure implemented here **must be used with great caution**, even though it may look applicable to larger molecules and the code may appear to run without error. VPT2 can be applied in more than one way; Gaussian also implements variants that can partly handle resonance effects, which this document is entirely unable to treat.

:::

The VPT2 analysis is somewhat more involved. Although we compare against the results of Gaussian 16 throughout, all quantum chemistry calculations are performed with PySCF. For this we need the basic Hessian-based [frequency analysis](freq_1.ipynb) as well as [numerical derivatives](https://py-xdh.readthedocs.io/zh_CN/latest/numdiff/nuc_grad.html); these two capabilities are implemented in {download}`freqanal.py` and {download}`deriv_numerical.py`, respectively.


```python
%matplotlib notebook

from freqanal import FreqAnal
from deriv_numerical import NumericDiff, NucCoordDerivGenerator

import itertools
import numpy as np
from matplotlib import pyplot as plt
from opt_einsum import contract as einsum
from pyscf import gto, scf, hessian, lib, data

np.set_printoptions(6, linewidth=150, suppress=True)
```

We also need to perform unit conversions; these have already appeared in several of the earlier documents.


```python
from scipy.constants import physical_constants
# https://docs.scipy.org/doc/scipy/reference/constants.html
E_h = physical_constants["Hartree energy"][0]
a_0 = physical_constants["Bohr radius"][0]
N_A = physical_constants["Avogadro constant"][0]
c_0 = physical_constants["speed of light in vacuum"][0]
e_c = physical_constants["elementary charge"][0]
e_0 = physical_constants["electric constant"][0]
F = physical_constants["Faraday constant"][0]
k_B = physical_constants["Boltzmann constant"][0]
R = physical_constants["molar gas constant"][0]
h = physical_constants["Planck constant"][0]
hbar = physical_constants["reduced Planck constant"][0]
amu = physical_constants["atomic mass constant"][0]
```
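The sections below repeatedly convert between a vibrational wavenumber in $\mathrm{cm^{-1}}$ and the force constant $\lambda_i$ in $\mathrm{\mathit{E}_h \ Bohr^{-2} \ amu^{-1}}$, so a minimal sketch of this conversion is given here, assuming the constants defined in the cell above. The helper names are ours and are not part of `freqanal.py`; the later cells write the conversion out explicitly instead of calling helpers like these.

```python
# Minimal sketch of the wavenumber <-> force-constant conversion used (written
# out explicitly) later in this document. Assumes the constants defined above;
# the function names are ours, not part of freqanal.py. Valid for real
# (positive) frequencies only.
def wavenumber_to_lambda(omega_cm):
    """omega in cm^-1  ->  lambda in E_h Bohr^-2 amu^-1"""
    return (2 * np.pi * c_0 * omega_cm * 100)**2 / (E_h / (a_0**2 * amu))

def lambda_to_wavenumber(lambd_au):
    """lambda in E_h Bohr^-2 amu^-1  ->  omega in cm^-1"""
    return np.sqrt(lambd_au * E_h / (a_0**2 * amu)) / (2 * np.pi * c_0) / 100

print(lambda_to_wavenumber(wavenumber_to_lambda(1000.0)))   # ~1000.0 (round trip)
```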
Apart from the infrared spectrum plotted at the end of the document, the molecule we use is an ammonia molecule with no symmetry, with the STO-3G basis set.


```python
mol = gto.Mole(atom="""
N 0.000000 0.000000 0.000000
H 0.000000 0.000000 0.940000
H 1.006874 0.000000 -0.260395
H -1.037114 -0.277894 -0.640054
""", basis="STO-3G", verbose=0).build()
```

## Numerical Third- and Fourth-Order Nuclear Coordinate Derivatives

### Review of Second Derivatives and the Frequency Values

:::{admonition} Notation

**Indices**

- $A, B$: atomic (nuclear) indices;
- $\alpha, \beta, \gamma$: Cartesian directions (these may be $x, y, z$, or the principal axes of the moment of inertia; this document mostly uses the latter);
- $i, j, k$: normal-mode indices.

**Quantities**

- $A_\alpha$: Cartesian component $\alpha$ of the coordinate of atom $A$;
- $Q_i$: one unit of normal mode $i$;
- $Q_{A_\alpha, i}$: the normal-mode transformation matrix, i.e. the component of mode $i$ along the atomic coordinate component $A_\alpha$.

Note that these conventions conflict with, or differ from, those used in quantum chemistry programs and in the earlier documents of this series. Readers should keep the differences in mind.

:::

As we know, the Hessian matrix is the essential ingredient of a frequency analysis; it is the second derivative with respect to the nuclear coordinates,

$$
\Phi_{A_\alpha B_\beta} = \frac{\partial^2 E}{\partial A_\alpha \partial B_\beta}
$$

In the atomic-coordinate representation this is an $(A_\alpha, B_\beta): (3 n_\mathrm{atm}, 3 n_\mathrm{atm})$ matrix. Depending on the program it may instead be stored as a tensor: in PySCF, the default Hessian output is a tensor of dimension $(A, B, \alpha, \beta): (n_\mathrm{atm}, n_\mathrm{atm}, 3, 3)$, but its size is the same as $(A_\alpha, B_\beta)$, and the two coincide after a transpose.


```python
natm = mol.natm
nhess = natm * 3

mf = scf.RHF(mol).run()
mf_hess = mf.Hessian().run()
```
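As a quick, purely illustrative check (not required by the workflow), the PySCF Hessian can be flattened into the $(3 n_\mathrm{atm}, 3 n_\mathrm{atm})$ matrix just described and tested for symmetry; the `.de` attribute and the `swapaxes(1, 2)` reshuffle are the same ones used by the cells below.

```python
# Illustrative check of the Hessian layout described above; not required later.
# mf_hess.de has shape (natm, natm, 3, 3); swapping the middle axes and
# reshaping gives the (3*natm, 3*natm) matrix Phi_{A_alpha, B_beta}, which
# should be numerically symmetric.
hess_matrix = mf_hess.de.swapaxes(1, 2).reshape(nhess, nhess)
print(hess_matrix.shape)                           # (12, 12)
print(np.abs(hess_matrix - hess_matrix.T).max())   # close to zero
```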
We also need to create the frequency-analysis instance `fa`; the variable name is just the initials of `FreqAnal` and has nothing to do with the ♂fa♂ instrument. The code cell below prints the harmonic vibrational frequencies $\omega_i$ in $\mathrm{cm^{-1}}$.

If we want better agreement with the Gaussian results, the atomic masses must take the same values as Gaussian uses, and plain integer values are not really suitable. For this reason we do not use PySCF's own `mol.atom_mass_list`, but a small function `get_atom_mass_list` instead.


```python
def get_atom_mass_list(mol):
    d = {1: 1.00782504, 6: 12., 7: 14.0030740, 8: 15.9949146}
    return np.array([d[m] for m in mol.atom_charges()])
```


```python
fa = FreqAnal()
fa.mol_weights = get_atom_mass_list(mol)
fa.mol_coords = mol.atom_coords()
fa.natm = mol.natm
fa.mol_hess = mf_hess.de.swapaxes(1, 2)
nvib = fa.freq.size
fa.freq
```



    
array([-969.746082, 1680.3876 , 1931.786797, 2059.643873, 3874.822068, 5095.777567])\n\n\n\n:::{admonition} \u8bb0\u53f7\u5b9a\u4e49\n\n\u6211\u4eec\u4e4b\u540e\u4f1a\u7ecf\u5e38\u4f5c\u5355\u4f4d\u6362\u7b97\u4e0e\u91cf\u7eb2\u5206\u6790\uff0c\u56e0\u6b64\u8fd9\u91cc\u5c3d\u91cf\u5c06\u76ee\u524d\u4e0e\u5c06\u6765\u9700\u8981\u7684\u7269\u7406\u91cf\u4e0e\u91cf\u7eb2\u5217\u51fa\uff1a\n\n- $Q_i$ \u7b80\u632f\u6a21\u5f0f\uff0c\u5355\u4f4d $\\mathrm{Bohr \\ amu^{1/2}}$\uff1b\n- $Q_{A_\\alpha i}$ \u7b80\u632f\u6a21\u91cf\u5230\u5206\u5b50\u5750\u6807\u8868\u8c61\u7684\u8f6c\u6362\u77e9\u9635\uff0c\u5355\u4f4d $\\mathrm{amu^{-1/2}}$\uff0c\u7a0b\u5e8f\u8c03\u7528 `fa.q`\uff1b\n- $Q_{A_\\alpha i}^\\mathrm{norm}$ \u5f52\u4e00\u5316\u7684\u7b80\u632f\u6a21\u91cf\u8f6c\u6362\u77e9\u9635\uff0c\u65e0\u91cf\u7eb2\uff0c\u7a0b\u5e8f\u8c03\u7528 `fa.qnorm`\uff1b\n- $\\omega_i$ \u7b80\u632f\u632f\u52a8\u9891\u7387\uff0c\u5355\u4f4d $\\mathrm{cm^{-1}}$\uff0c\u7a0b\u5e8f\u8c03\u7528 `fa.freq`\uff1b\n- $\\nu_i$ \u975e\u8c10\u632f\u52a8\u9891\u7387\uff0c\u5355\u4f4d $\\mathrm{cm^{-1}}$\uff1b\n- $\\lambda_i$ \u4e8c\u9636\u529b\u5e38\u6570\uff0c\u5355\u4f4d $\\mathrm{\\mathit{E}_h \\ Bohr^{-2} \\ amu^{-1}}$\uff1b\n- $\\Phi_{A_\\alpha B_\\beta}$ \u5750\u6807\u8868\u793a\u7684 Hessian \u77e9\u9635\uff0c\u5355\u4f4d $\\mathrm{\\mathit{E}_h \\ Bohr^{-2}}$\uff1b\n- $\\Phi_{ij}$ \u7b80\u632f\u8868\u793a\u7684 Hessian \u77e9\u9635\uff0c\u5355\u4f4d $\\mathrm{\\mathit{E}_h \\ Bohr^{-2} \\ amu^{-1}}$\uff1b\n- $\\Phi_{ijk}$ \u7b80\u632f\u8868\u793a\u7684\u4e09\u9636\u5bfc\u6570\uff0c\u5355\u4f4d $\\mathrm{\\mathit{E}_h \\ Bohr^{-3} \\ amu^{-3/2}}$\uff1b\n- $\\Phi_{ijkl}$ \u7b80\u632f\u8868\u793a\u7684\u56db\u9636\u5bfc\u6570\uff0c\u5355\u4f4d $\\mathrm{\\mathit{E}_h \\ Bohr^{-4} \\ amu^{-2}}$\uff1b\n- $I_\\alpha$ \u8f6c\u52a8\u60ef\u91cf\uff0c\u5355\u4f4d $\\mathrm{amu \\ Bohr^{2}}$\uff1b\n- $\\varpi_\\alpha$ \u8f6c\u52a8\u5e38\u6570\uff0c\u5355\u4f4d $\\mathrm{cm^{-1}}$\uff1b\n- $\\zeta_{ij}^\\alpha$ Coriolis \u8026\u5408\u5e38\u6570\uff0c\u65e0\u91cf\u7eb2\uff1b\n- $\\theta_i (T)$ \u6e29\u5ea6\u77eb\u6b63\u7cfb\u6570\uff0c\u65e0\u91cf\u7eb2\uff1b\n- $\\boldsymbol{\\mu}$ \u5076\u6781\u77e9\uff0c\u5355\u4f4d $\\mathrm{Debye}$\uff1b\n\n:::\n\n\u4e3a\u4e86\u540e\u6587\u7684\u4fbf\u5229\uff0c\u6211\u4eec\u4e5f\u4f1a\u7ecf\u5e38\u4f7f\u7528\u5728\u7b80\u632f\u6a21\u5f0f\u8868\u793a\u4e0b\u7684 Hessian \u77e9\u9635\uff1a\n\n$$\n\\Phi_{ij} = \\frac{\\partial^2 E}{\\partial Q_i \\partial Q_j}\n$$\n\n\u4ece\u5b9a\u4e49\u4e0a\uff0c\u7b80\u632f\u6a21\u5f0f\u662f\u5206\u5b50\u632f\u52a8\u7684 Hessian \u77e9\u9635\u7684\u672c\u5f81\u6a21\u5f0f\uff0c\u56e0\u6b64 $\\Phi_{ij}$ \u5b9e\u9645\u4e0a\u662f\u5bf9\u89d2\u77e9\u9635 $\\lambda_i \\delta_{ij}$\u3002\u5176\u4e2d\uff0c$\\lambda_i$ \u662f\u7b80\u632f\u6a21\u5f0f $i$ \u7684\u632f\u52a8\u5f3a\u5ea6\u3002\u5750\u6807\u8868\u793a\u4e0e\u7b80\u632f\u8868\u793a\u7684\u6362\u7b97\u5173\u7cfb\u662f\n\n$$\n\\Phi_{ij} = \\sum_{A_\\alpha B_\\beta} Q_{A_\\alpha i} \\Phi_{A_\\alpha B_\\beta} Q_{B_\\beta j}\n$$\n\n\u6211\u4eec\u7528 `deriv_2` \u8868\u793a $\\Phi_{ij}$\u3002\n\n\n```python\nderiv_2 = einsum(\"Pi, PQ, Qj -> ij\", fa.q, fa.mol_hess.reshape(nhess, nhess), fa.q)\nderiv_2\n```\n\n\n\n\n array([[-0.035588, -0. , 0. , 0. , -0. , -0. ],\n [-0. , 0.106859, 0. , -0. , 0. , 0. ],\n [ 0. , 0. , 0.141224, -0. , 0. , 0. ],\n [ 0. , -0. , -0. , 0.160537, -0. , -0. ],\n [ 0. , 0. , -0. , -0. , 0.568192, -0. ],\n [-0. , -0. , -0. , -0. , -0. 
, 0.982681]])\n\n\n\n\u6211\u4eec\u6ce8\u610f\u5230\u8fd9\u662f\u4e00\u4e2a\u5bf9\u79f0\u77e9\u9635\u3002\u5b9e\u9645\u4e0a\uff0c\u7b80\u632f\u6a21\u5f0f\u4ece\u5b9a\u4e49\u4e0a\u5c31\u662f\u5728 Hessian \u77e9\u9635\u53bb\u9664\u4e86\u5e73\u52a8\u3001\u8f6c\u52a8\u8d21\u732e\u4e4b\u540e\u7684\u672c\u5f81\u632f\u52a8\u6a21\u5f0f\uff1b\u56e0\u6b64\uff0c\u5728\u7b80\u632f\u6a21\u5f0f\u8868\u793a\u4e0b\u7684 Hessian \u77e9\u9635\u81ea\u7136\u5e94\u5f53\u662f\u5bf9\u89d2\u5316\u7684\u3002\u5176\u5bf9\u89d2\u5143\u6211\u4eec\u4e4b\u540e\u4e5f\u7ecf\u5e38\u7528\u5230\uff0c\u56e0\u6b64\u4ee5 `lambd` $\\lambda_i = \\Phi_{ij} \\delta_{ij}$ \u8868\u793a\uff0c\u7269\u7406\u91cf\u4e0a\u79f0\u4e3a\u529b\u5e38\u6570\u3002\n\n\n```python\nlambd = deriv_2.diagonal()\nlambd\n```\n\n\n\n\n array([-0.035588, 0.106859, 0.141224, 0.160537, 0.568192, 0.982681])\n\n\n\n\u529b\u5e38\u6570 $\\lambda_i$ \u5b83\u4e0e\u9891\u7387 $\\omega_i$ \u4e4b\u95f4\u7684\u5173\u7cfb\u662f\n\n$$\n\\omega_i = \\frac{\\sqrt{\\lambda_i}}{2 \\pi c_0} = \\frac{1}{2 \\pi c_0} \\sqrt{\\frac{k_i}{m}}, \\quad \\lambda = (2 \\pi c_0 \\omega_i)^2\n$$\n\n\u5b83\u4e0e\u6211\u4eec\u7ecf\u5e38\u770b\u5230\u7684\u53e6\u4e00\u4e2a\u529b\u5e38\u6570 $k_i$ \u4e0d\u540c\u3002\u4ece\u91cf\u7eb2\u4e0a\uff0c$\\lambda_i$ \u662f $\\mathrm{[T]^{-2}}$\u3002\u5982\u679c\u8003\u8651\u5230\u4e00\u4e9b\u6bd4\u8f83\u9ebb\u70e6\u7684\u5355\u4f4d\u6362\u7b97\uff0c\u6211\u4eec\u53ef\u4ee5\u901a\u8fc7\u9891\u7387 $\\omega_i$ \u7ed9\u51fa\u529b\u5e38\u6570 $\\lambda_i$\u3002\n\n\u5355\u4f4d\u6362\u7b97\u7684\u539f\u5219\u5982\u4e0b\u3002\u6211\u4eec\u53d1\u73b0\uff0c\u5bf9\u4e8e\u7b49\u5f0f\u5de6\u8fb9\u7684\u5f85\u6c42\u91cf\u5b8c\u5168\u662f\u539f\u5b50\u5355\u4f4d\uff0c\u4f46\u53f3\u8fb9\u5219\u5b8c\u5168\u662f\u7ecf\u8fc7\u53d8\u52a8\u7684 SI \u5355\u4f4d\u5236 (\u6ce2\u6570\u4ee5 $\\mathrm{cm^{-1}}$ \u800c\u975e\u4ee5 $\\mathrm{m^{-1}}$ \u8868\u793a)\u3002\u6211\u4eec\u4ee5 SI \u5355\u4f4d\u5236\u4f5c\u4e3a\u4e2d\u95f4\u5355\u4f4d\uff0c\n- \u5bf9\u7b49\u5f0f\u5de6\u8fb9\uff0c$\\lambda_i$ \u7684\u5355\u4f4d\u662f $\\mathrm{\\mathit{E}_h \\ Bohr^{-2} \\ amu^{-1}}$\uff0c\u6211\u4eec\u8981\u5bf9\u5e94\u5730\u9664\u4ee5\u8fd9\u90e8\u5206\u5355\u4f4d\u7684 SI \u8f6c\u6362\uff1b\n- \u5bf9\u7b49\u5f0f\u53f3\u8fb9\uff0c$\\omega_i$ \u7684\u5355\u4f4d\u662f $\\mathrm{cm^{-1}}$\uff0c\u6211\u4eec\u8981\u4e58\u4ee5\u8fd9\u90e8\u5206\u7684 SI \u8f6c\u6362 ($100$)\u3002\n\n\n```python\n(\n (2 * np.pi * c_0 * fa.freq)**2 # equation\n / (E_h * a_0**-2 * amu**-1) * 100**2 # unit conversion\n)\n```\n\n\n\n\n array([0.035588, 0.106859, 0.141224, 0.160537, 0.568192, 0.982681])\n\n\n\n\u6211\u4eec\u6700\u540e\u8ba8\u8bba\u4e00\u4e0b\u7b80\u632f\u6a21\u91cf\u77e9\u9635 $Q_{A_\\alpha i}$ \u4e0e\u5176\u5f52\u4e00\u5316\u5f62\u5f0f $Q_{A_\\alpha i}^\\mathrm{norm}$ \u4e4b\u95f4\u7684\u5173\u7cfb\u3002\u5728 Gaussian \u7684\u8f93\u51fa\u4e2d\uff0c\u6211\u4eec\u4e00\u822c\u53ea\u4f1a\u770b\u5230 $Q_{A_\\alpha i}^\\mathrm{norm}$\uff0c\u5b83\u5177\u6709\u4e0b\u8ff0\u9650\u5236\uff1a\n\n$$\n\\sum_{A_\\alpha} (Q_{A_\\alpha i}^\\mathrm{norm})^2 = 1 , \\ \\forall i\n$$\n\n\u6b63\u56e0\u4e3a\u8fd9\u79cd\u5f52\u4e00\u5316\u6761\u4ef6\uff0c$Q_{A_\\alpha i}^\\mathrm{norm}$ \u5b9e\u9645\u4e0a\u662f\u6ca1\u6709\u91cf\u7eb2\u7684\u3002\u4f46\u6211\u4eec\u4ee5\u540e\u4e00\u76f4\u8981\u9047\u5230\u4e0e\u5904\u7406\u7684\u7b80\u632f\u6a21\u91cf\uff0c\u662f\u6240\u8c13\u5206\u5b50\u8d28\u91cf\u6298\u5408\u8fc7\u7684\u6a21\u91cf\uff0c\u5176\u91cf\u7eb2\u662f 
$\\mathrm{[M]^{-1/2}}$\u3002\u5b83\u4e5f\u53ef\u4ee5\u4ece\u5f52\u4e00\u5316\u7684\u7b80\u632f\u6a21\u91cf\u5bfc\u51fa\uff0c\u5176\u4e2d $m_\\mathrm{A}$ \u662f $A$ \u539f\u5b50\u7684\u8d28\u91cf\uff1a\n\n$$\nQ_{A_\\alpha i} = Q_{A_\\alpha i}^\\mathrm{norm} \\left( \\sum_{A_\\alpha} \\big( Q_{A_\\alpha i}^\\mathrm{norm} \\sqrt{m_\\mathrm{A}} \\big)^2 \\right)^{-1/2}\n$$\n\n\n```python\nnp.allclose(fa.qnorm / np.linalg.norm(fa.qnorm * np.sqrt(np.repeat(get_atom_mass_list(mol), 3))[:, None], axis=0), fa.q)\n```\n\n\n\n\n True\n\n\n\n### \u6570\u503c\u4e09\u9636\u5bfc\u6570\n\n\u6211\u4eec\u4ee5\u540e\u4e00\u76f4\u4f1a\u4f7f\u7528\u7b80\u632f\u5750\u6807\u8868\u793a\u4e0b\u7684\u80fd\u91cf\u5bfc\u6570\uff1b\u5e76\u4e14\u6211\u4eec\u6ce8\u610f\u5230\u7b80\u632f\u6a21\u5f0f\u7684\u6570\u91cf\u662f $3N-6$\uff0c\u6bd4\u539f\u5b50\u5750\u6807\u5206\u91cf\u7684 $3N$ \u6570\u91cf\u8981\u5c11\u4e00\u4e9b\u3002\u56e0\u6b64\uff0c\u5728\u6c42\u53d6\u6570\u503c\u5bfc\u6570\u65f6\uff0c\u6211\u4eec\u4f1a\u5e0c\u671b\u4f7f\u7528\u7b80\u632f\u6a21\u5f0f\u4f5c\u4e3a\u5b9e\u9645\u88ab\u6c42\u5bfc\u91cf\u3002\n\n\u6211\u4eec\u9700\u8981\u7684\u4e09\u9636\u5bfc\u6570\u91cf\u662f\n\n$$\n\\Phi_{ijk} = \\frac{\\partial^3 E}{\\partial Q_i \\partial Q_j \\partial Q_k}\n$$\n\n\u4f46\u5728\u6c42\u53d6\u6570\u503c\u5bfc\u6570\u65f6\uff0c\u6211\u4eec\u53ef\u4ee5\u9009\u62e9\u4e00\u4e2a\u91cf\u4f5c\u4e3a\u6570\u503c\u504f\u79fb\u91cf\uff0c\u8b6c\u5982 $Q_k$\uff1b\u800c\u5269\u4e0b\u7684\u91cf\u5219\u662f\u89e3\u6790\u5730\u8ba1\u7b97\u3002\u4f46\u6211\u4eec\u4e5f\u6ce8\u610f\u5230\uff0c\u7a0b\u5e8f\u4e0a\u9ed8\u8ba4\u7ed9\u51fa\u7684\u662f\u539f\u5b50\u5750\u6807\u8868\u793a\u7684 $\\Phi_{A_\\alpha B_\\beta}$\uff0c\u8fd8\u9700\u8981\u6211\u4eec\u4f5c\u8f6c\u6362\u5f97\u5230 $\\Phi_{ij}$\u3002\u56e0\u6b64\uff0c\u6570\u503c\u5bfc\u6570\u7684\u6c42\u53d6\u65b9\u6cd5\u662f\n\n$$\n\\Phi_{ijk} = \\sum_{A_\\alpha B_\\beta} Q_{A_\\alpha i} Q_{B_\\beta j} \\Phi_{A_\\alpha B_\\beta k} = \\sum_{A_\\alpha B_\\beta} Q_{A_\\alpha i} Q_{B_\\beta j} \\frac{\\Phi_{A_\\alpha B_\\beta} (\\delta Q_k) - \\Phi_{A_\\alpha B_\\beta} (-\\delta Q_k)}{2 \\delta Q_k} \n$$\n\n\u4e0b\u9762\u7684\u7a0b\u5e8f\u5c31\u662f\u4ee5 $0.01 \\ \\mathrm{\\mathring{A}} \\ \\mathrm{amu}^{1/2}$ \u4e3a\u504f\u79fb\u5927\u5c0f\uff0c\u6c42\u53d6\u5bf9\u7b80\u632f\u5750\u6807\u5bfc\u6570\u7684\u7a0b\u5e8f\uff1a\n\n\n```python\nclass ModeDerivGenerator(NucCoordDerivGenerator):\n\n def __init__(self, mol, mf_func, q, interval=3e-3):\n self.q = q\n super(ModeDerivGenerator, self).__init__(mol, mf_func, stencil=3, interval=interval)\n \n def init_objects(self):\n nmode = self.q.shape[1]\n self.objects = np.empty((nmode, 2), dtype=object)\n \n def perform_mf(self):\n natm = self.mol.natm\n nmode = self.q.shape[1]\n for nq, norm in enumerate(self.q.T):\n norm = norm.reshape(natm, 3)\n movelist = [(A, t, norm[A, t]) for A in range(natm) for t in range(3)]\n self.objects[nq, 1] = self.mf_func(self.move_mol(movelist))\n movelist = [(A, t, - norm[A, t]) for A in range(natm) for t in range(3)]\n self.objects[nq, 0] = self.mf_func(self.move_mol(movelist))\n```\n\n\u6211\u4eec\u5c06 Hessian \u5bf9\u7b80\u632f\u5750\u6807\u7684\u5bfc\u6570\u5b9e\u4f8b\u5b9a\u4e3a `num_hess`\uff1a\n\n\n```python\nnum_hess = ModeDerivGenerator(mol, lambda mol: scf.RHF(mol).run().Hessian().run(), fa.q, interval=0.01)\n```\n\n\u4f7f\u7528 `NumericDiff` \u76f4\u63a5\u6c42\u5bfc\u7684\u7ed3\u679c\u662f $\\Phi_{A_\\alpha B_\\beta 
k}$\u3002\u6211\u4eec\u9700\u8981\u518d\u5bf9\u5176\u8fdb\u884c\u8f6c\u6362\u518d\u5f97\u5230 $\\Phi_{ijk}$ `deriv_3`\uff1a\n\n\n```python\ntmp_3 = NumericDiff(num_hess, lambda mf: mf.de.swapaxes(1, 2).reshape(nhess, nhess)).derivative\nderiv_3 = einsum(\"Ai, Bj, kAB -> ijk\", fa.q, fa.q, tmp_3)\n```\n\n\u4e25\u683c\u5730\u6765\u8bf4\uff0c$\\Phi_{ijk}$ \u5e94\u5f53\u662f\u5177\u6709\u516d\u91cd\u5bf9\u79f0\u6027\u7684\u5f20\u91cf\u3002\u56e0\u6b64\uff0c\u6211\u4eec\u8fd8\u53ef\u4ee5\u5bf9\u5176\u8fdb\u884c\u5bf9\u79f0\u5316\uff0c\u4f7f\u5f97\u7ed3\u679c\u5b9a\u6027\u4e0a\u66f4\u6b63\u786e\uff1a\n\n\n```python\nderiv_3 = 1/3 * (deriv_3 + deriv_3.transpose(1, 2, 0) + deriv_3.transpose(2, 0, 1))\n```\n\n### \u6570\u503c\u56db\u9636\u5bfc\u6570\n\n\u6211\u4eec\u901a\u8fc7\u4e09\u70b9\u5dee\u5206\u7684\u65b9\u6cd5\uff0c\u7ed9\u51fa\u4e86\u4e09\u9636\u5bfc\u6570\u3002\u5c3d\u7ba1\u6211\u4eec\u65e0\u6cd5\u6c42\u51fa\u5168\u90e8\u7684\u56db\u9636\u5bfc\u6570\uff0c\u4f46\u5229\u7528\u521a\u624d\u6c42\u53d6\u4e09\u9636\u5bfc\u6570\u7684\u6570\u636e\uff0c\u6211\u4eec\u53ef\u4ee5\u6c42\u53d6 $\\Phi_{ijkk}$ \u5f62\u5f0f\u7684\u56db\u9636\u5bfc\u6570\uff1a\n\n$$\n\\Phi_{ijkk} = \\frac{\\partial^2}{\\partial Q_k^2} \\frac{\\partial^2 E}{\\partial Q_i \\partial Q_j} = \\frac{\\Phi_{ij} (\\delta Q_k) - 2 \\Phi_{ij} (0) + \\Phi_{ij} (-\\delta Q_k)}{(\\delta Q_k)^2}\n$$\n\n\u6ce8\u610f\u5230\u5229\u7528\u73b0\u6709\u7684\u6570\u636e\uff0c\u6211\u4eec\u53ea\u80fd\u6c42\u53d6 $\\Phi_{ijkk}$\uff0c\u800c\u4e0d\u80fd\u6c42\u53d6\u66f4\u4e00\u822c\u7684 $\\Phi_{ijkl}$\u3002\u8be5\u5f20\u91cf\u4e5f\u5177\u6709\u5bf9\u79f0\u6027\uff1b\u6211\u4eec\u53ef\u4ee5\u5bf9 $i, j$ \u89d2\u6807\u8fdb\u884c\u5bf9\u79f0\u5316\u3002\u6211\u4eec\u5c06\u4f1a\u7528 `deriv_4` \u8868\u793a $\\Phi_{ijkk}$\uff0c\u7ef4\u5ea6\u662f $(i, j, k)$\n\n\n```python\nhess_gathered = np.array([[mf.de.swapaxes(1, 2).reshape(nhess, nhess) for mf in l] for l in num_hess.objects])\ntmp_4 = einsum(\"ksAB, Ai, Bj -> skij\", hess_gathered, fa.q, fa.q)\nderiv_4 = (tmp_4[0] + tmp_4[1] - 2 * deriv_2) / (num_hess.interval**2) # finite diff\nderiv_4 = (deriv_4 + deriv_4.swapaxes(-1, -2)) / 2 # symmetrize\nderiv_4 = einsum(\"kij -> ijk\", deriv_4) # subscript transform\n```\n\n## \u975e\u8c10\u9891\u7387\u77eb\u6b63\n\n\u5728\u6c42\u53d6\u9891\u7387\u77eb\u6b63\u524d\uff0c\u6211\u4eec\u9700\u8981\u5148\u83b7\u5f97\u77e9\u9635 $\\xi_{ij}$ `xmat`\u3002\u8be5\u77e9\u9635\u4e0e\u9891\u7387\u76f8\u540c\u5355\u4f4d\uff0c\u5373 $\\mathrm{cm}^{-1}$\u3002\u8fd9\u91cc\u7684\u516c\u5f0f\u4e3b\u8981\u53c2\u8003 Barone eq. 
(36-38)\uff0c\u4f46\u5728\u4e0e\u4e00\u4e9b\u7cfb\u6570\u6709\u5173\u7684\u95ee\u9898\u4e0a\u4e0d\u592a\u76f8\u540c\u3002\u8be5\u77e9\u9635\u5bf9\u975e\u8c10\u77eb\u6b63\u540e\u7684\u9891\u7387\u81f3\u5173\u91cd\u8981\u3002\n\n### \u56db\u9636\u5bfc\u6570\u5bf9 $\\xi_{ij}$ \u7684\u8d21\u732e\n\n\u82e5\u9664\u5f00\u5355\u4f4d\u6362\u7b97\uff0c\u56db\u9636\u5bfc\u6570\u5bf9\u77e9\u9635 $\\xi_{ij}$ \u7684\u8d21\u732e\u662f\n\n$$\n\\begin{align}\n\\xi_{ii} &\\leftarrow \\frac{\\hbar}{2 \\pi c_0} \\frac{1}{16 \\lambda_i} \\Phi_{iiii} \\\\\n\\xi_{ij} &\\leftarrow \\frac{\\hbar}{2 \\pi c_0} \\frac{1}{4 \\sqrt{|\\lambda_i \\lambda_j|}} \\Phi_{iijj} \\quad (i \\neq j)\n\\end{align}\n$$\n\n\u4e4b\u6240\u4ee5\u8868\u8fbe\u5f0f\u4e2d\u9700\u8981\u5f15\u5165\u7edd\u5bf9\u503c\uff0c\u662f\u56e0\u4e3a\u6211\u4eec\u540c\u65f6\u5904\u7406\u5b9e\u9891\u4e0e\u865a\u9891\u3002\u865a\u9891\u4e0b\uff0c$\\lambda_i$ \u5c0f\u4e8e\u96f6\uff0c\u65e0\u6cd5\u5f00\u6839\u53f7\u3002\u4f9d\u636e Miller, et al. [^Miller.CPL.1990] \u7684\u8868\u8fbe\u5f0f\uff0c\u5b9e\u9645\u4e0a\u5bf9\u4e8e\u6709\u4e14\u4ec5\u6709\u4e00\u4e2a\u865a\u9891\u7684 $Q_i$\uff0c\u5176\u5bf9\u5e94\u7684 $\\xi_{ij}$ \u4e5f\u5e94\u5f53\u662f\u865a\u6570\u3002\u5f53\u524d\u7684\u5206\u5b50\u4e5f\u6070\u597d\u662f\u6709\u4e14\u4ec5\u6709\u4e00\u4e2a\u865a\u9891\uff0c\u5e76\u4e14\u5b9e\u9645\u4e0a\u865a\u6570\u7684 $\\xi_{ij}$ \u53ef\u4ee5\u4e0e\u865a\u6570\u7684 $\\omega_i$ \u76f8\u52a0\u5f97\u5230\u975e\u8c10\u9891\u7387 $\\nu_i$\uff1b\u56e0\u6b64\u5c31\u8fd9\u79cd\u8fc7\u6e21\u6001\u60c5\u5f62\uff0c\u5728\u7a0b\u5e8f\u5b9e\u73b0\u4e0a\uff0c\u53ef\u4ee5\u5bf9\u865a\u9891\u4e0e\u5b9e\u9891\u4e00\u89c6\u540c\u4ec1\u3002\n\n\n```python\nxmat_4 = np.zeros((nvib, nvib))\nfor i in range(nvib):\n xmat_4[i, i] = deriv_4[i,i,i] / (16 * lambd[i])\n for j in range(nvib):\n if i == j: continue\n xmat_4[i, j] = deriv_4[i,i,j] / (4 * np.sqrt(np.abs(lambd[i] * lambd[j])))\nxmat_4 *= (hbar / (2 * np.pi * c_0)) * (a_0**-2 * amu**-1) / (100)\nxmat_4\n```\n\n\n\n\n array([[ -59.929105, 17.359227, 2.022484, -26.747814, -89.778445, -133.911891],\n [ 17.360842, 11.4939 , 18.955104, -6.407765, -6.83072 , -112.008037],\n [ 2.024055, 18.952477, 4.4002 , 42.205352, -50.906768, -38.849444],\n [ -26.737152, -6.40661 , 42.197109, 45.874797, -16.574222, -5.321896],\n [ -89.725547, -6.828563, -50.876096, -16.565645, 62.64625 , 1.480829],\n [-133.825551, -111.932215, -38.826908, -5.319965, 1.480938, 70.703952]])\n\n\n\n### \u4e09\u9636\u5bfc\u6570\u5bf9 $\\xi_{ij}$ \u7684\u8d21\u732e\n\n\u4e09\u9636\u5bfc\u6570\u5bf9 $\\xi_{ij}$ \u7684\u8d21\u732e\u6838\u7b97\u76f8\u5bf9\u590d\u6742\u3002\u6211\u4eec\u9996\u5148\u5904\u7406\u5bf9\u89d2\u7ebf\u7684\u8d21\u732e\uff1a\n\n$$\n\\xi_{ii} \\leftarrow - \\frac{\\hbar}{2 \\pi c_0} \\frac{1}{16 \\lambda_i} \\left( \\frac{5}{3 \\lambda_i} \\Phi_{iii}^2 + \\frac{8 \\lambda_i - 3 \\lambda_j}{\\lambda_j (4 \\lambda_i - \\lambda_j)} \\Phi_{iij}^2 \\right)\n$$\n\n\n```python\nxmat_3 = np.zeros((nvib, nvib))\n```\n\n\n```python\nfor i in range(nvib):\n xmat_3[i, i] -= deriv_3[i,i,i]**2 * 5 / (3 * lambd[i])\n for j in range(nvib):\n if i == j: continue\n xmat_3[i, i] -= deriv_3[i,i,j]**2 * (8 * lambd[i] - 3 * lambd[j]) / (lambd[j] * (4 * lambd[i] - lambd[j]))\n xmat_3[i, i] /= 16 * lambd[i]\n```\n\n\u968f\u540e\u5904\u7406\u975e\u5bf9\u89d2\u7ebf\u7684\u8d21\u732e ($i \\neq j$)\uff1a\n\n$$\n\\begin{align}\n\\xi_{ij} &\\leftarrow - \\frac{\\hbar}{2 \\pi c_0} \\frac{1}{4 \\sqrt{|\\lambda_i \\lambda_j|}} \\left( \\frac{2}{4 \\lambda_i - \\lambda_j} 
\\Phi_{iij}^2 + \\frac{2}{4 \\lambda_j - \\lambda_i} \\Phi_{ijj}^2 + \\frac{\\Phi_{iii} \\Phi_{ijj}}{\\lambda_i} + \\frac{\\Phi_{jjj} \\Phi_{iij}}{\\lambda_j} \\right) \\\\\n&\\quad + \\frac{\\hbar}{2 \\pi c_0} \\frac{1}{4 \\sqrt{|\\lambda_i \\lambda_j|}} \\sum_{k \\notin \\{i, j\\}} \\left[ \\frac{2 (\\lambda_i + \\lambda_j - \\lambda_k)}{\\Delta_{ijk}} \\Phi_{ijk}^2 - \\frac{\\Phi_{iik} \\Phi_{jjk}}{\\lambda_k} \\right]\n\\end{align}\n$$\n\n\n```python\nfor i in range(nvib):\n for j in range(nvib):\n if i == j: continue\n xmat_3[i, j] -= deriv_3[i,i,j]**2 * 2 / (4 * lambd[i] - lambd[j])\n xmat_3[i, j] -= deriv_3[i,j,j]**2 * 2 / (4 * lambd[j] - lambd[i])\n xmat_3[i, j] -= deriv_3[i,i,i] * deriv_3[i,j,j] / lambd[i]\n xmat_3[i, j] -= deriv_3[j,j,j] * deriv_3[i,i,j] / lambd[j]\n for k in range(nvib):\n if len(set([i, j, k])) != 3: continue\n delta_ijk = lambd[i]**2 + lambd[j]**2 + lambd[k]**2 - 2 * (lambd[i]*lambd[j] + lambd[j]*lambd[k] + lambd[k]*lambd[i])\n xmat_3[i, j] += deriv_3[i,j,k]**2 * 2 * (lambd[i] + lambd[j] - lambd[k]) / delta_ijk\n xmat_3[i, j] -= deriv_3[i,i,k] * deriv_3[j,j,k] / lambd[k]\n xmat_3[i, j] /= 4 * np.sqrt(np.abs(lambd[i] * lambd[j]))\n```\n\n\n```python\nxmat_3 *= (hbar / (2 * np.pi * c_0)) * (a_0**-2 * amu**-1) / (100)\nxmat_3\n```\n\n\n\n\n array([[ 6.436059, -17.659895, -12.843587, -17.730696, 49.859864, 86.717056],\n [ -17.659895, -18.294258, -35.959998, -46.395273, 2.625465, 94.139235],\n [ -12.843587, -35.959998, -65.430162, -108.220206, 228.295265, 26.274758],\n [ -17.730696, -46.395273, -108.220206, -115.728507, -19.13443 , -3.494852],\n [ 49.859864, 2.625465, 228.295265, -19.13443 , -111.055075, -10.265471],\n [ 86.717056, 94.139235, 26.274758, -3.494852, -10.265471, -112.711441]])\n\n\n\n### Coriolis \u77eb\u6b63\u5bf9 $\\xi_{ij}$ \u7684\u8d21\u732e\n\nCoriolis \u77eb\u6b63\u7684\u8d21\u732e\u65b9\u5f0f\u662f (\u5141\u8bb8 $i = j$)\n\n$$\n\\xi_{ij} \\leftarrow \\sum_\\alpha \\frac{\\lambda_i + \\lambda_j}{\\sqrt{|\\lambda_i \\lambda_j|}} \\varpi_\\alpha (\\zeta_{ij}^\\alpha)^2\n$$\n\n\u8fd9\u91cc\u5f15\u5165\u4e86\u4e00\u4e9b\u7279\u6b8a\u7684\u8bb0\u53f7\u3002\u88ab\u6c42\u548c\u7684\u89d2\u6807 $\\alpha$ \u5c3d\u7ba1\u662f $x, y, z$ \u4e09\u4e2a\u53d6\u5411\u4e4b\u4e00\uff0c\u4f46\u5728\u6c42\u53d6 Coriolis \u77eb\u6b63\u8d21\u732e\u65f6\uff0c\u5b83\u5fc5\u987b\u662f\u8f6c\u52a8\u60ef\u91cf\u4e3b\u8f74\u3002\n\n\u6211\u4eec\u5728\u901a\u8fc7 Hessian \u77e9\u9635\u5f97\u5230\u5206\u5b50\u632f\u52a8\u9891\u7387\u65f6\uff0c\u9700\u8981\u53bb\u9664\u8f6c\u52a8\u7684\u4e09\u4e2a\u81ea\u7531\u5ea6\u8d21\u732e\u3002\u90a3\u65f6\u6211\u4eec\u5df2\u7ecf\u8ba1\u7b97\u8fc7\u4e0e\u8f6c\u52a8\u6709\u5173\u7684\u5404\u79cd\u4e2d\u95f4\u53d8\u91cf\u3002$\\varpi_\\alpha$ `rot_wavenum` \u662f\u4ee5 $\\mathrm{cm^{-1}}$ \u4e3a\u5355\u4f4d\u7684\u8f6c\u52a8\u5e38\u6570\uff1a\n\n$$\n\\varpi_\\alpha = \\frac{h}{8 \\pi^2 c_0 I_\\alpha}\n$$\n\n\n```python\nrot_wavenum = h / (8 * np.pi**2 * c_0 * fa.rot_eig) * (a_0**-2 * amu**-1) / (100)\nrot_wavenum\n```\n\n\n\n\n array([13.875725, 7.153573, 4.775983])\n\n\n\n\u800c $\\zeta_{ij}^\\alpha$ `zeta` Coriolis \u8026\u5408\u5e38\u6570\u901a\u8fc7\u4e0b\u5f0f\u8fdb\u884c\u8ba1\u7b97 (Barone, eq (11)\uff0c\u7ef4\u5ea6 $(\\alpha, i, j)$\uff0c\u65e0\u91cf\u7eb2)\uff1a\n\n$$\n\\begin{align}\n\\zeta_{ij}^\\alpha &= \\sum_k (L_{A_\\beta i} L_{A_\\gamma j} - L_{A_\\gamma i} L_{A_\\beta j}) \\\\\nL_{A_\\alpha i} &= \\sum_\\beta Q_{A_\\beta i} R_{\\beta \\alpha} \\sqrt{m_A}\n\\end{align}\n$$\n\n\u5176\u4e2d\uff0c$R_{\\beta \\alpha}$ 
\u8f6c\u52a8\u7684\u672c\u5f81\u5411\u91cf\uff0c\u65e0\u91cf\u7eb2\u3002\u5176\u672c\u5f81\u503c\u662f\u8f6c\u52a8\u60ef\u91cf\uff0c\u8fd9\u5728 [\u9891\u7387\u5206\u6790 (1)](freq_1.ipynb) \u4e2d\u6709\u6240\u63d0\u53ca\u3002\u540c\u65f6\uff0c\u6c42\u53d6 $\\zeta_{ij}^\\alpha$ \u65f6\uff0c\u5176\u8868\u8fbe\u5f0f\u7684\u5206\u91cf $\\alpha, \\beta, \\gamma$ \u5fc5\u987b\u6309\u7167\u53f3\u624b\u5750\u6807\u7cfb\u6392\u5e8f\u3002\n\n\n```python\nq_ = einsum(\"A\u03b2i, \u03b2\u03b1, A -> A\u03b1i\", fa.q.reshape(natm, 3, nvib), fa.rot_vec, np.sqrt(get_atom_mass_list(mol)))\nzeta_x = einsum(\"Ai, Aj -> ij\", q_[:, 1, :], q_[:, 2, :]) - einsum(\"Ai, Aj -> ij\", q_[:, 2, :], q_[:, 1, :])\nzeta_y = einsum(\"Ai, Aj -> ij\", q_[:, 2, :], q_[:, 0, :]) - einsum(\"Ai, Aj -> ij\", q_[:, 0, :], q_[:, 2, :])\nzeta_z = einsum(\"Ai, Aj -> ij\", q_[:, 0, :], q_[:, 1, :]) - einsum(\"Ai, Aj -> ij\", q_[:, 1, :], q_[:, 0, :])\nzeta = np.array([zeta_x, zeta_y, zeta_z])\n```\n\n\u540c\u65f6\uff0c\u8f6c\u52a8\u672c\u5f81\u5411\u91cf\u8fd8\u53ef\u4ee5\u7528\u6765\u5c06\u5206\u5b50\u65cb\u8f6c\u5230\u8f6c\u52a8\u4e3b\u8f74\u5750\u6807\u7cfb\uff0c\u4e14\u8be5\u5750\u6807\u7cfb\u7684 $I_x < I_y < I_z$\u3002\u8be5\u5750\u6807\u7cfb\u5728 Gaussian \u4e2d\u79f0\u4e3a Eckart Orientation\u3002\u6211\u4eec\u53ef\u4ee5\u8f93\u51fa\u8be5\u5750\u6807\u7cfb\u4e0b\u4ee5 Angstrom \u4e3a\u5355\u4f4d\u7684\u5206\u5b50\u6784\u578b\u3002\n\n\n```python\nfa.centered_coord @ fa.rot_vec * a_0 * 1e10\n```\n\n\n\n\n array([[ 0.003095, -0.002241, -0.016268],\n [ 0.384988, 0.851931, 0.073996],\n [ 0.805212, -0.657419, 0.078348],\n [-1.233207, -0.163374, 0.07369 ]])\n\n\n\n\u81f3\u6b64\uff0c\u6211\u4eec\u5c31\u53ef\u4ee5\u5c06 Coriolis \u77eb\u6b63\u7684\u8d21\u732e\u8ba1\u7b97\u51fa\u6765\u3002\n\n$$\n\\xi_{ij} \\leftarrow \\sum_\\alpha \\frac{\\lambda_i + \\lambda_j}{\\sqrt{|\\lambda_i \\lambda_j|}} \\varpi_\\alpha (\\zeta_{ij}^\\alpha)^2\n$$\n\n\n```python\nxmat_coriol = np.zeros((nvib, nvib))\nfor i in range(nvib):\n for j in range(nvib):\n for ixyz in range(3):\n xmat_coriol[i, j] += zeta[ixyz, i, j]**2 * rot_wavenum[ixyz]\n xmat_coriol[i, j] *= (lambd[i] + lambd[j]) / np.sqrt(np.abs(lambd[i] * lambd[j]))\nxmat_coriol\n```\n\n\n\n\n array([[-0. , 6.485503, 2.854787, 7.462297, 14.215879, 25.469408],\n [ 6.485503, 0. , 0.862218, 2.938909, 0.906227, 8.749904],\n [ 2.854787, 0.862218, 0. , 0.883756, 5.835533, 3.491808],\n [ 7.462297, 2.938909, 0.883756, 0. , 1.970819, 0.474133],\n [14.215879, 0.906227, 5.835533, 1.970819, 0. , 0.009302],\n [25.469408, 8.749904, 3.491808, 0.474133, 0.009302, 0. ]])\n\n\n\n:::{danger}\n\n\u5bf9\u4e8e Gaussian 09 rev. 
D01 \u53ca C01\uff0c\u5176\u8f93\u51fa\u7684 Coriolis \u8d21\u732e\u662f\u9519\u8bef\u7684\u3002\u5176\u539f\u56e0\u5728\u4e8e\u6ca1\u6709\u5c06\u5206\u5b50\u5750\u6807\u9884\u5148\u8f6c\u7f6e\u5230 Eckart Orientation \u4e0b\uff0c\u800c\u4ecd\u7136\u5728\u8f93\u5165\u5750\u6807\u4e0b\u8fdb\u884c\u8ba1\u7b97\u3002\u5c06\u8fd9\u79cd\u5750\u6807\u4e0e\u8f6c\u52a8\u60ef\u91cf\u4e3b\u8f74\u7684\u8f6c\u52a8\u9891\u7387\u4f5c\u4e58\u79ef\u6ca1\u6709\u610f\u4e49\u3002\u4e5f\u56e0\u6b64\uff0c\u8be5\u7248\u672c\u7684 Gaussian \u5b9e\u9645\u4e0a\u65e0\u6cd5\u6b63\u786e\u5730\u8f93\u51fa\u9891\u7387\u7684\u77eb\u6b63\uff1b\u9664\u975e\u8f93\u5165\u5750\u6807\u4e0e Eckart Orientation \u4e00\u81f4\u3002\n\n\u5982\u679c\u8981\u91cd\u590d\u8be5\u7248\u672c\u7684\u7ed3\u679c\uff0c**\u5bf9\u4e8e\u5f53\u524d\u7684\u5206\u5b50**\uff0c\u53ef\u4ee5\u4f7f\u7528\u540e\u9762\u4e00\u4e2a\u88ab\u6298\u53e0\u7684 code cell \u4ee3\u7801\u3002\n\n\u6211\u4eec\u4e4b\u524d\u53f7\u79f0\u6b63\u786e\u7684\u5b9e\u73b0\u65b9\u5f0f\uff0c\u4e0e Gaussian 16 rev. B01 \u7684\u7ed3\u679c\u5e94\u5f53\u4e00\u81f4\u3002\n\n:::\n\n\n```python\ndef xmat_coriol_G09():\n # -- What makes different --\n q_ = einsum(\"A\u03b1i, A -> A\u03b1i\", fa.q.reshape(natm, 3, nvib), np.sqrt(get_atom_mass_list(mol)))\n rot_in_G09 = rot_wavenum[0], rot_wavenum[2], rot_wavenum[1]\n # -- The rest are merely the same --\n zeta = np.zeros((3, nvib, nvib))\n for \u03b1, \u03b2, \u03b3 in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:\n zeta[\u03b1] = einsum(\"Ai, Aj -> ij\", q_[:, \u03b2, :], q_[:, \u03b3, :]) - einsum(\"Ai, Aj -> ij\", q_[:, \u03b3, :], q_[:, \u03b2, :])\n xmat_coriol = einsum(\"\u03b1ij, \u03b1, ij -> ij\", zeta**2, rot_in_G09, (lambd[:, None] + lambd[None, :]) / np.sqrt(np.abs(lambd[:, None] * lambd[None, :])))\n return xmat_coriol\nxmat_coriol_G09()\n```\n\n\n\n\n array([[ 0. , 4.90998 , 3.505416, 9.864751, 10.909095, 26.892735],\n [ 4.90998 , 0. , 0.933722, 3.050355, 0.933074, 9.206211],\n [ 3.505416, 0.933722, 0. , 0.902214, 5.887021, 3.409958],\n [ 9.864751, 3.050355, 0.902214, 0. , 1.987215, 0.43257 ],\n [10.909095, 0.933074, 5.887021, 1.987215, 0. , 0.009969],\n [26.892735, 9.206211, 3.409958, 0.43257 , 0.009969, 0. ]])\n\n\n\n\n```python\n! 
head -n 9725 anharm-G09C01.log | tail -n 10\n```\n\n Coriolis contributions to Xmat \r\n 1 2 3 4 5 \r\n 1 0.000000D+00\r\n 2 0.268927D+02 0.000000D+00\r\n 3 0.109091D+02 0.996906D-02 0.000000D+00\r\n 4 0.986475D+01 0.432572D+00 0.198722D+01 0.000000D+00\r\n 5 0.350541D+01 0.340996D+01 0.588702D+01 0.902213D+00 0.000000D+00\r\n 6 0.490997D+01 0.920621D+01 0.933076D+00 0.305035D+01 0.933724D+00\r\n 6 \r\n 6 0.000000D+00\r\n\n\n### \u603b $\\xi_{ij}$ \u77e9\u9635\u4e0e\u975e\u8c10\u9891\u7387\u7684\u5bfc\u51fa\n\n\u6211\u4eec\u9996\u5148\u5c06\u56db\u9636\u5bfc\u6570\u3001\u4e09\u9636\u5bfc\u6570\u3001Coriolis \u7684\u8d21\u732e\u76f8\u52a0\uff0c\u5f97\u5230\u603b $\\xi_{ij}$ \u77e9\u9635\u3002\n\n\n```python\nxmat = xmat_4 + xmat_3 + xmat_coriol\nxmat\n```\n\n\n\n\n array([[-53.493046, 6.184835, -7.966316, -37.016213, -25.702702, -21.725427],\n [ 6.18645 , -6.800358, -16.142676, -49.864129, -3.299028, -9.118898],\n [ -7.964745, -16.145302, -61.029962, -65.131098, 183.22403 , -9.082879],\n [-37.005551, -49.862974, -65.139341, -69.85371 , -33.737833, -8.342616],\n [-25.649804, -3.296871, 183.254702, -33.729256, -48.408825, -8.77534 ],\n [-21.639087, -9.043076, -9.060343, -8.340685, -8.775231, -42.007489]])\n\n\n\n\u8fd9\u53ef\u4ee5\u4e0e Gaussian \u7684\u8f93\u51fa\u4f5c\u6bd4\u5bf9\u3002\u9700\u8981\u6ce8\u610f\uff0c**\u6211\u4eec\u7684\u7ed3\u679c\u4e0e Gaussian \u5e76\u4e0d\u5b8c\u5168\u4e00\u81f4**\uff0c\u4e14\u4e0d\u4ec5\u662f\u7531\u4e8e\u7b80\u632f\u6a21\u5f0f\u7684\u6392\u5e8f\u4e0d\u540c\u6240\u4ea7\u751f\u3002\u8fd9\u662f\u7531\u4e8e Fermi \u5171\u632f\u6240\u81f4\uff0c\u6211\u4eec\u540e\u9762\u4f1a\u63d0\u53ca\u3002**\u6211\u4eec\u7684\u7a0b\u5e8f\u65e0\u6cd5\u5904\u7406 Fermi \u5171\u632f**\uff0c\u4f46 Gaussian \u4f1a\u6709\u4e00\u4e9b\u529e\u6cd5\u3002\n\n\n```python\n! head -n 14013 anharm-G16B01.log | tail -n 11\n```\n\n Total Anharmonic X Matrix (in cm^-1)\r\n ------------------------------------\r\n 1 2 3 4 5\r\n 1 -0.534804D+02\r\n 2 -0.216582D+02 -0.420071D+02\r\n 3 -0.256619D+02 -0.877403D+01 -0.484077D+02\r\n 4 -0.370099D+02 -0.834034D+01 -0.337307D+02 -0.698507D+02\r\n 5 -0.797061D+01 -0.906609D+01 -0.296660D+02 -0.651376D+02 -0.780405D+01\r\n 6 0.617458D+01 -0.905380D+01 -0.329760D+01 -0.498632D+02 -0.161479D+02\r\n 6\r\n 6 -0.680891D+01\r\n\n\n\u4f9d\u636e Barone eq. (43)\uff0c\u975e\u8c10\u9891\u7387\u5f88\u5bb9\u6613\u5730\u901a\u8fc7\u4e0b\u5f0f\u7ed9\u51fa (\u5355\u4f4d $\\mathrm{cm^{-1}}$)\uff1a\n\n$$\n\\nu_i = \\omega_i + 2 \\xi_{ii} + \\frac{1}{2} \\sum_{j \\neq i} \\xi_{ij} = \\omega_i + \\frac{3}{2} \\xi_{ii} + \\frac{1}{2} \\sum_j \\xi_{ij}\n$$\n\n\n```python\nfa.freq + 1.5 * xmat.diagonal() + 0.5 * xmat.sum(axis=-1)\n```\n\n\n\n\n array([-1119.845085, 1630.667744, 1852.176877, 1822.892296, 3833.906134, 4983.333379])\n\n\n\n\u5f53\u6211\u4eec\u4e0e Gaussian \u7684\u975e\u8c10\u9891\u7387\u8fdb\u884c\u6bd4\u5bf9\u65f6\uff0c\u4f1a\u53d1\u73b0\u6211\u4eec $\\mathrm{3834 \\ cm^{-1}}$ \u6ce2\u6570\u7684\u7ed3\u679c\u4e0e Gaussian \u7684 $\\mathrm{3748 \\ cm^{-1}}$ \u76f8\u5dee\u5f88\u5927\u3002\n\n\n```python\n! 
head -n 14193 anharm-G16B01.log | tail -n 9\n```\n\n Fundamental Bands\r\n -----------------\r\n Mode(n) Status E(harm) E(anharm) Aa(x) Ba(y) Ca(z)\r\n 1(1) active -969.747 -1119.771 14.263374 7.063101 4.568200\r\n 2(1) active 5095.778 4983.317 14.043335 6.985596 4.575906\r\n H 3(1) active 3874.822 3747.756 14.108724 6.929780 4.566570\r\n 4(1) active 2059.644 1822.902 14.182793 6.771150 4.581152\r\n 5(1) active 1931.787 1852.185 14.252150 7.053448 4.510857\r\n 6(1) active 1680.388 1630.676 14.460822 7.057768 4.466441\r\n\n\n\u8fd9\u662f\u6e90\u4e8e Gaussian \u6709\u5bf9\u8fd1\u7b80\u5e76\u500d\u9891\u632f\u52a8\u6a21\u5f0f (Fermi \u5171\u632f) \u7684\u5904\u7406\u3002\u5728\u6211\u4eec\u7684\u95ee\u9898\u4e2d\uff0c$\\omega_4 - 2 \\omega_2 = 3834 - 2 \\times 1852 = 11 (\\mathrm{cm^{-1}})$ \u88ab\u8ba4\u4e3a\u662f\u500d\u9891\u8fd1\u7b80\u5e76\u7684\uff1a\n\n\n```python\nfa.freq[4] - 2 * fa.freq[2]\n```\n\n\n\n\n 11.248473679877407\n\n\n\n\n```python\n! head -n 13948 anharm-G16B01.log | tail -n 5\n```\n\n Fermi resonances\r\n ----------------\r\n \r\n I J + K Freq. Diff. Red. Cubic Const. PT2-Variat.Diff.\r\n 3 5 5 11.248 138.473 1009.207\r\n\n\n\u5b83\u5bf9\u4e0b\u8ff0\u4e09\u9636\u5bfc\u6570\u8d21\u732e\u9879 $\\xi_{42} = \\xi_{24}$ \u4f1a\u6709\u975e\u5e38\u5927\u7684\u8d21\u732e (\u6ce8\u610f\u5230 $\\lambda_i \\propto \\omega_i^2$\uff0c\u56e0\u6b64\u4e0b\u5f0f\u4e2d\u5206\u6bcd\u6709 $\\omega_4 - 2 \\omega_2$ \u7684\u56e0\u5f0f)\uff1a\n\n$$\n\\xi_{24} \\leftarrow - \\frac{\\hbar}{2 \\pi c_0} \\frac{1}{4 \\sqrt{|\\lambda_2 \\lambda_4|}} \\frac{2}{4 \\lambda_2 - \\lambda_4} \\Phi_{224}^2 \n$$\n\n\u8fd9\u7c7b\u8868\u8fbe\u5f0f\u53ef\u4ee5\u62c6\u5206\u4e3a\u5171\u632f\u9879\u4e0e\u975e\u5171\u632f\u9879\uff1a\n\n$$\n\\frac{1}{4 \\lambda_i - \\lambda_j} = \\frac{1}{(2 \\pi c_0)^2} \\left( \\underset{\\text{non-resonant}}{\\frac{1}{4 \\omega_i (2 \\omega_i + \\omega_j)}} + \\underset{\\text{resonant}}{\\frac{1}{4 \\omega_i (2 \\omega_i - \\omega_j)}} \\right)\n$$\n\n\u5728 Gaussian \u4e2d\uff0c\u5f53\u88ab\u5224\u65ad\u4e3a Fermi \u5171\u632f\u65f6\uff0c\u5c31\u4f1a\u76f4\u63a5\u6392\u9664\u5171\u632f\u9879\uff0c\u800c\u53ea\u4fdd\u7559\u975e\u5171\u632f\u9879\uff0c\u4ee5\u907f\u514d\u9891\u7387\u77eb\u6b63\u7684\u5947\u70b9\u3002\u540c\u65f6\u4f1a\u5728\u975e\u8c10\u9891\u7387\u8f93\u51fa\u7684\u90e8\u5206\u7528\u8bb0\u53f7 `H` \u8868\u793a\u8fd9\u79cd Fermi \u5171\u632f\u6548\u5e94\u3002\n\n## \u7b80\u632f\u5750\u6807\u5e73\u5747\u504f\u79fb\n\n\u4e3a\u4e86\u8ba1\u7b97\u5206\u5b50\u7684\u5404\u79cd\u6027\u8d28 (\u8b6c\u5982\u5076\u6781\u3001\u7ea2\u5916\u7b49\u5149\u8c31\u6027\u8d28) \u7684\u975e\u8c10\u77eb\u6b63\u77eb\u6b63\uff0c\u6211\u4eec\u9700\u8981\u7ed9\u51fa\u7b80\u632f\u5750\u6807\u7684\u5e73\u5747\u504f\u79fb\u91cf $\\langle Q_i \\rangle$ \u4e0e\u4e8c\u9636\u5e73\u5747\u504f\u79fb\u91cf $\\langle Q_i Q_j \\rangle$\u3002\n\n\u6211\u4eec\u5728\u8fd9\u4e00\u8282\u4e2d\uff0c\u4e3a\u4e86\u4e0e Gaussian \u7684\u7ed3\u679c\u4f5c\u6bd4\u5bf9\uff0c\u6682\u65f6\u4e0d\u8003\u8651\u8f6c\u52a8\u77eb\u6b63\u6548\u5e94\u3002\n\n### \u6e29\u5ea6\u77eb\u6b63\u6548\u5e94\n\n\u6211\u4eec\u73b0\u5728\u8981\u8003\u8651\u6e29\u5ea6\u5bf9\u5206\u5b50\u632f\u52a8\u7684\u5f71\u54cd\u3002\u4e00\u822c\u6765\u8bf4\uff0c\u6e29\u5ea6\u8d8a\u9ad8\uff0c\u5206\u5b50\u7684\u632f\u52a8\u4f1a\u8d8a\u5267\u70c8\uff0c\u5176\u5404\u79cd\u8c31\u5b66\u6027\u8d28\u4e0e $0 \\ K$ 
\u4e0b\u7684\u7ed3\u679c\u4f1a\u504f\u5dee\u8d8a\u5927\u3002\u8fd9\u79cd\u6e29\u5ea6\u6548\u5e94\u662f\u901a\u8fc7\u8fd1\u4f3c\u7684\u914d\u5206\u51fd\u6570\u6240\u7ed9\u51fa (\u7c7b\u4f3c\u4e8e softmax \u7684\u6548\u679c)\u3002\u865a\u9891\u88ab\u8ba4\u4e3a\u4e0d\u53c2\u4e0e\u914d\u5206\uff0c\u56e0\u6b64\u5176\u503c\u7f6e\u4e3a\u96f6\u3002\u8be5\u914d\u5206\u51fd\u6570\u662f (Barone, eq. (59))\n\n$$\n\\theta_i (T) = \\left\\{\n\\begin{alignat}{3}\n& \\cot \\left( \\frac{h \\omega_i c_0}{2 k_\\mathrm{B} T} \\right) \\quad &&\\omega_i \\in \\mathbb{R} \\\\\n& 0 && \\omega_i \\in \\mathbb{I}\n\\end{alignat}\n\\right.\n$$\n\n\u5728\u540e\u7eed\u7684\u6587\u6863\u4e2d\uff0c\u6211\u4eec\u59cb\u7ec8\u4f1a\u4f7f\u7528 $T_{2500} = 2500 \\ \\mathrm{K}$\u3002\n\n\n```python\ndef get_coef_temp(T, calc_imag=False):\n coef_temp = np.tanh(h * fa.freq * 100 * c_0 / (2 * k_B * T))**-1\n if not calc_imag: coef_temp[coef_temp < 0] = 0\n else: coef_temp = np.abs(coef_temp)\n return coef_temp\n```\n\n\u663e\u7136\uff0c\u5728 $0 \\ \\mathrm{K}$ \u65f6\uff0c\u5b9e\u6570\u9891\u7387\u7684 $\\theta_i (0) = 1$\u3002\u5728 $2500 \\ \\mathrm{K}$ \u65f6\uff0c\u6e29\u5ea6\u5bf9\u7b80\u632f\u5750\u6807\u5e73\u5747\u504f\u79fb\u7684\u77eb\u6b63\u6bd4\u4f8b `coef_temp` \u662f\n\n\n```python\nT = 2500\ncoef_temp = get_coef_temp(T)\ncoef_temp\n```\n\n\n\n\n array([0. , 2.226801, 1.980529, 1.88035 , 1.240967, 1.1125 ])\n\n\n\n### \u4e00\u9636\u7b80\u632f\u5750\u6807\u5e73\u5747\u504f\u79fb\u7684\u632f\u52a8\u8d21\u732e\n\n\u6211\u4eec\u5047\u5b9a\u865a\u9891\u5bf9\u8c31\u5b66\u6027\u8d28\u7684\u8ba1\u7b97\u4e0d\u4ea7\u751f\u8d21\u732e\u3002\u5bf9\u4e8e\u5b9e\u6570\u7684\u7b80\u632f\u6a21\u5f0f $i$\uff0c\u4e00\u9636\u7684\u7b80\u632f\u5750\u6807\u5e73\u5747\u504f\u79fb $\\langle Q_i \\rangle^\\mathrm{vib}$ \u4e3a (Barone, eq. (56))\uff1a\n\n$$\n\\langle Q_i \\rangle^\\mathrm{vib} (T) = - \\frac{\\hbar}{4 \\lambda_i} \\sum_j \\frac{\\Phi_{ijj}}{\\sqrt{|\\lambda_j|}} \\theta_j (T)\n$$\n\n\u6211\u4eec\u5c06\u5e38\u7528\u7684\u8f6c\u6362\u7cfb\u6570\u7f6e\u4e8e\u53d8\u91cf `Q_scale`\uff0c\u5e76\u4ee4 $\\langle Q_i \\rangle^\\mathrm{vib} (0)$ \u4e3a `Q_1`\uff0c$\\langle Q_i \\rangle^\\mathrm{vib} (T_{2500})$ \u4e3a `QT_1`\u3002\n\n\n```python\nQ_scale = hbar / np.sqrt(E_h * a_0**2 * amu)\n```\n\n\n```python\nQ_1, QT_1 = np.zeros(nvib), np.zeros(nvib)\nfor i in range(nvib):\n if fa.freq[i] <= 0: continue\n for j in range(nvib):\n val = - deriv_3[i,j,j] / (4 * lambd[i] * np.sqrt(np.abs(lambd[j]))) * Q_scale\n Q_1[i] += val\n QT_1[i] += val * coef_temp[j]\nprint(\" (0 K) \", Q_1)\nprint(\" (2500 K)\", QT_1)\n```\n\n (0 K) [ 0. 0.014627 -0.03185 -0.059824 0.01557 0.00532 ]\n (2500 K) [ 0. 0.026619 -0.055617 -0.11182 0.022404 0.005395]\n\n\nGaussian \u7684\u7ed3\u679c\u663e\u5f0f\u5982\u4e0b\u3002\u4e4b\u6240\u4ee5\u4f1a\u6709\u6b63\u8d1f\u53f7\u7684\u5dee\u5f02\uff0c\u662f\u56e0\u4e3a\u6211\u4eec\u7684\u7b80\u632f\u5750\u6807\u65b9\u5411\u4f1a\u4e0e Gaussian \u6709\u4e9b\u5fae\u533a\u522b\u3002\n\n\n```python\n! head -n 13347 anharm-G16B01.log | tail -n 8\n```\n\n Average Normal Coordinates (in amu^1/2.bohr)\r\n --------------------------------------------\r\n Mode (0) (******)\r\n 2 0.005318 0.005392\r\n 3 0.015568 0.022402\r\n 4 -0.059824 -0.111821\r\n 5 0.031847 0.055613\r\n 6 -0.014626 -0.026617\r\n\n\n:::{danger}\n\n\u5bf9\u4e8e\u8fd9\u91cc\u7684\u5b9e\u73b0\uff0c\u5173\u4e8e\u5982\u4f55\u5904\u7406\u865a\u9891\uff0cGaussian 09 rev. D01 \u4e0e Gaussian 16 rev. B01 \u5e94\u662f\u4e00\u81f4\u7684\uff0c\u4f46\u4e0e Gaussian 09 rev. 
C01 \u4e0d\u4e00\u81f4\u3002\u4e24\u8fb9\u4f3c\u4e4e\u90fd\u6709\u4e0d\u5408\u7406\u4e4b\u5904\u3002\n\n\u6211\u4eec\u4e4b\u524d\u91c7\u7528\u7684\u662f\u4e0e Gaussian 16 rev. B01 \u76f8\u4f3c\u7684\u505a\u6cd5\u3002\n\n- \u5bf9\u4e8e Gaussian 09 rev. C01\uff0c\u5b83\u5728\u8ba1\u7b97\u6e29\u5ea6\u77eb\u6b63\u7cfb\u6570\u65f6\uff0c\u5e76\u6ca1\u6709\u6392\u9664\u865a\u9891\u90e8\u5206\u3002\u540c\u65f6\uff0c\u8be5\u7248\u672c\u8fd8\u53ea\u80fd\u8ba1\u7b97\u5ba4\u6e29\u7684\u6e29\u5ea6\u6548\u5e94\u3002\u4f7f\u7528\u540e\u4e00\u4e2a\u88ab\u6298\u53e0\u7684 code cell \u5373\u53ef\u7ed9\u51fa\u8be5\u7248\u672c\u7684\u7ed3\u679c\u3002\n\n- \u5bf9\u4e8e Gaussian 16 rev. B01\uff0c\u5c3d\u7ba1\u5728 $2500 \\ \\mathrm{K}$ \u65f6\u6392\u9664\u4e86\u865a\u9891\u7684\u8d21\u732e\uff0c\u4f46\u5728 $0 \\ \\mathrm{K}$ \u4e0b\u4ecd\u7136\u8003\u8651\u4e86\u865a\u9891\uff1b\u56e0\u6b64\u5c3d\u7ba1\u8bf4 $T_{2500}$ \u662f\u5f88\u9ad8\u7684\u6e29\u5ea6\uff0c\u4f46 $\\langle Q_i \\rangle^\\mathrm{vib} (0)$ \u4e0e $\\langle Q_i \\rangle^\\mathrm{vib} (T_{2500})$\uff0c\u751a\u81f3\u6781\u7aef\u4e00\u4e9b $\\langle Q_i \\rangle^\\mathrm{vib} (1 \\ \\mathrm{K})$ \u7684\u5dee\u8ddd\u8fd8\u662f\u4f1a\u975e\u5e38\u5927\u3002\u8fd9\u79cd\u505a\u6cd5\u4f3c\u4e4e\u4e5f\u4e0d\u5408\u7406\u3002\n\n:::\n\n\n```python\ndef print_Q_1_G09C01():\n coef_temp_G09C01 = np.abs(get_coef_temp(298, calc_imag=True))\n Q_1, QT_1 = np.zeros(nvib), np.zeros(nvib)\n for i in range(nvib):\n if fa.freq[i] <= 0: continue\n for j in range(nvib):\n val = - deriv_3[i,j,j] / (4 * lambd[i] * np.sqrt(np.abs(lambd[j]))) * Q_scale\n Q_1[i] += val\n QT_1[i] += val * coef_temp_G09C01[j]\n print(\" (0 K) \", Q_1)\n print(\" (298 K) \", QT_1)\nprint_Q_1_G09C01()\n```\n\n (0 K) [ 0. 0.014627 -0.03185 -0.059824 0.01557 0.00532 ]\n (298 K) [ 0. 0.014657 -0.031826 -0.059851 0.015479 0.005218]\n\n\n### \u4e8c\u9636\u7b80\u632f\u5750\u6807\u5e73\u5747\u504f\u79fb\u7684\u632f\u52a8\u8d21\u732e\n\n\u5bf9\u4e8e\u5b9e\u6570\u7684\u7b80\u632f\u6a21\u5f0f $i$\uff0c\u4e00\u9636\u7684\u7b80\u632f\u5750\u6807\u5e73\u5747\u504f\u79fb $\\langle Q_i Q_j \\rangle$ \u53ea\u6709\u5728 $i = j$ \u65f6\u975e\u96f6\u3002\u56e0\u6b64\uff0c\u6211\u4eec\u53ea\u9700\u8981\u8003\u8651 $\\langle Q_i^2 \\rangle (T)$ (Barone, eq. (56))\uff1a\n\n$$\n\\langle Q_i^2 \\rangle (T) = \\frac{\\hbar}{2 \\sqrt{\\lambda_i}}\n$$\n\n\u6211\u4eec\u4ee4 $\\langle Q_i^2 \\rangle (0)$ \u4e3a `Q_2`\uff0c$\\langle Q_i^2 \\rangle (T_{2500})$ \u4e3a `QT_2`\u3002\n\n\n```python\nQ_2, QT_2 = np.zeros(nvib), np.zeros(nvib)\nfor i in range(nvib):\n if fa.freq[i] <= 0: continue\n val = 1 / (2 * np.sqrt(lambd[i])) * Q_scale\n Q_2[i] += val\n QT_2[i] += val * coef_temp[i]\nprint(Q_2)\nprint(QT_2)\n```\n\n [0. 0.035825 0.031163 0.029228 0.015536 0.011814]\n [0. 0.079775 0.061719 0.054959 0.01928 0.013143]\n\n\nGaussian \u7684\u7ed3\u679c\u663e\u5f0f\u5982\u4e0b\u3002\n\n\n```python\n! head -n 13338 anharm-G16B01.log | tail -n 8\n```\n\n Mean Square Amplitudes of Normal Coordinates (in amu.bohr^2)\r\n -----------------------------------------------------------\r\n Mode (0) (******) (******) class.\r\n 2 0.011814 0.013143 0.318060\r\n 3 0.015536 0.019280 0.550081\r\n 4 0.029228 0.054959 1.946911\r\n 5 0.031163 0.061719 2.213156\r\n 6 0.035825 0.079775 2.924904\r\n\n\n:::{danger}\n\nBarone \u6587\u7ae0\u7684 eq. 
(57) \u516c\u5f0f\u5f88\u6709\u53ef\u80fd\u5dee\u4e86\u4e00\u4e2a\u6b63\u8d1f\u53f7\u3002\u56e0\u6b64\u4e0a\u9762\u7684\u5b9e\u73b0\u65b9\u5f0f\u4e0e Barone \u539f\u6587\u5e76\u4e0d\u5b8c\u5168\u4e00\u81f4\u3002\u5728 Harding et al. [^Gauss.JCP.2008] \u7684 eq. (3) \u4e2d\uff0c\u5c31\u6ca1\u6709\u51fa\u73b0\u8d1f\u53f7\uff1b\u56e0\u6b64\u591a\u534a\u4e0e Gaussian \u7684\u5b9e\u73b0\u662f\u4e00\u81f4\u7684\u3002\u5728\u4f5c\u8fd9\u90e8\u5206 Double Check \u540e\uff0c\u51b3\u5b9a\u5728\u8fd9\u7bc7\u6587\u6863\u7684\u7a0b\u5e8f\u5b9e\u73b0\u4e2d\u53bb\u9664\u8d1f\u53f7\u3002\n\n:::\n\n## \u975e\u8c10\u8c31\u5b66\u6027\u8d28\uff1a\u5076\u6781\u77e9\n\n\u73b0\u5728\u6211\u4eec\u8003\u8651\u5728\u975e\u8c10\u8fd1\u4f3c\u4e0b\uff0c\u76f8\u5bf9\u6765\u8bf4\u6700\u7b80\u5355\u7684\u5149\u8c31\u6027\u8d28\uff0c\u5373\u5076\u6781\u77e9\u5f3a\u5ea6\u3002\n\n\u6211\u4eec\u5728\u83b7\u5f97\u4e86\u7b80\u632f\u5750\u6807\u5e73\u5747\u504f\u79fb\u91cf $\\langle Q_i \\rangle$ \u4e0e $\\langle Q_i^2 \\rangle$\uff0c\u4e14\u5f97\u5230\u5e73\u8861\u6784\u578b\u4e0b\u76ee\u6807\u6027\u8d28 $P$ \u5bf9\u7b80\u632f\u5750\u6807\u7684\u5bfc\u6570\u540e\uff0c\u5c31\u53ef\u4ee5\u6c42\u53d6\u7279\u5b9a\u6e29\u5ea6\u975e\u8c10\u8fd1\u4f3c\u4e0b\u7684\u76ee\u6807\u6027\u8d28\u7684\u503c\uff1a\n\n$$\n\\langle P \\rangle^\\mathrm{anharm} (T) \\simeq P + \\sum_i \\frac{\\partial P}{\\partial Q_i} \\cdot \\langle Q_i \\rangle (T) + \\frac{1}{2} \\sum_{i} \\frac{\\partial^2 P}{\\partial Q_i^2} \\cdot \\langle Q_i^2 \\rangle (T)\n$$\n\n\u5c3d\u7ba1\u6c42\u53d6\u7684\u65b9\u5f0f\u5e76\u4e0d\u590d\u6742\uff0c\u4f46\u5728\u83b7\u53d6\u7a0b\u5e8f\u8f93\u51fa\u65f6\uff0c\u4ecd\u7136\u5fc5\u987b\u8981\u683c\u5916\u5f53\u5fc3\u3002\n\n### \u5076\u6781\u77e9\u7684\u9ad8\u9636\u5bfc\u6570\n\n\u6211\u4eec\u77e5\u9053\uff0c\u4e3a\u83b7\u5f97\u7ea2\u5916\u5149\u8c31\u6570\u636e\uff0c\u6211\u4eec\u9700\u8981\u8ba1\u7b97\u5076\u6781\u5728\u539f\u5b50\u5750\u6807\u4e0b\u7684\u5bfc\u6570 $\\partial \\boldsymbol{\\mu} / \\partial A_\\alpha$\u3002\u5728 Gaussian \u4e2d\uff0c\u53ea\u8981\u8fdb\u884c\u9891\u7387\u5206\u6790 (\u6c42\u53d6\u80fd\u91cf\u7684\u4e8c\u9636\u5bfc\u6570)\uff0c\u5c31\u53ef\u4ee5\u83b7\u5f97\u5076\u6781\u77e9\u7684\u539f\u5b50\u5750\u6807\u5bfc\u6570\u3002\n\n\u5728 PySCF \u4e2d\uff0c\u5c3d\u7ba1\u6ca1\u6709\u7279\u5b9a\u7684\u51fd\u6570\u53bb\u5b8c\u6210\u5076\u6781\u77e9\u7684\u539f\u5b50\u5750\u6807\u5bfc\u6570\u7684\u529f\u80fd\uff1b\u4f46\u5bf9\u4e8e\u5f00\u58f3\u5c42\u7684\u81ea\u6d3d\u573a\u8ba1\u7b97\u65b9\u6cd5\uff0c\u5982\u679c\u6211\u4eec\u6709\u4e86 Hessian \u8ba1\u7b97\u7684\u5b9e\u4f8b\uff0c\u90a3\u4e48\u5176\u6c42\u53d6\u5b9e\u9645\u4e0a\u53ea\u9700\u8981 15 \u884c\u4ee3\u7801\u5de6\u53f3\u3002\u4e0b\u8ff0 `get_dipderiv` \u901a\u8fc7\u5e26\u5165\u95ed\u58f3\u5c42 Hessian \u8ba1\u7b97\u5b9e\u4f8b\uff0c\u7ed9\u51fa\u5076\u6781\u5728\u539f\u5b50\u5750\u6807\u4e0b\u7684\u5bfc\u6570 $\\partial \\boldsymbol{\\mu} / \\partial A_\\alpha$ (\u7ef4\u5ea6\uff1a$(A, \\alpha, \\gamma)$\uff0c\u5176\u4e2d $A_\\alpha$ \u662f\u539f\u5b50\u5750\u6807\uff0c$\\gamma$ \u662f\u5076\u6781\u65b9\u5411)\u3002\n\n\n```python\ndef get_dipderiv(mf_hess):\n mf, mol = mf_hess.base, mf_hess.mol\n natm, nao, nocc = mol.natm, mol.nao, mol.nelec[0]\n H_2_ao = np.zeros((3, natm, 3, nao, nao))\n int1e_irp = mol.intor(\"int1e_irp\").reshape((3, 3, nao, nao))\n for A in range(natm):\n _, _, p0, p1 = mol.aoslice_by_atom()[A]\n H_2_ao[:, A, :, :, p0:p1] = int1e_irp[:, :, :, p0:p1]\n H_2_ao += H_2_ao.swapaxes(-1, -2)\n dipderiv_skeleton = 
einsum(\"rAtuv, uv -> rAt\", H_2_ao, mf.make_rdm1())\n dipderiv_nuc = einsum(\"rt, A -> rAt\", np.eye(3), mol.atom_charges())\n h1ao = mf_hess.make_h1(mf.mo_coeff, mf.mo_occ)\n mo1 = np.array(mf_hess.solve_mo1(mf.mo_energy, mf.mo_coeff, mf.mo_occ, h1ao)[0])\n dipderiv_U = 4 * einsum(\"Atui, ruv, vi -> rAt\", mo1, - mol.intor(\"int1e_r\"), mf.mo_coeff[:, :nocc])\n return (dipderiv_skeleton + dipderiv_nuc + dipderiv_U).reshape(3, 3 * natm).T\n```\n\n\u5728\u53ef\u4ee5\u89e3\u6790\u5730\u8ba1\u7b97\u5076\u6781\u7684\u6838\u5750\u6807\u5bfc\u6570\u4e4b\u540e\uff0c\u4eff\u7167\u5148\u524d\u6c42\u53d6\u80fd\u91cf\u5bf9\u7b80\u632f\u6a21\u91cf\u7684\u9ad8\u9636\u5bfc\u6570\uff0c\u6211\u4eec\u5f88\u5bb9\u6613\u5730\u7ed9\u51fa\u5076\u6781\u77e9\u5bf9\u7b80\u632f\u5750\u6807\u7684\u6570\u503c\u4e8c\u9636\u5bfc\u6570\u3001\u4ee5\u53ca\u4e00\u90e8\u5206\u6570\u503c\u4e09\u9636\u5bfc\u6570\u3002\u7a0b\u5e8f\u8ba1\u7b97\u6240\u4f7f\u7528\u7684\u5355\u4f4d\u662f\u5076\u6781\u4f7f\u7528 $\\mathrm{Debye}$\uff0c\u7b80\u632f\u5750\u6807\u4f7f\u7528\u539f\u5b50\u5355\u4f4d $\\mathrm{Bohr \\ amu^{1/2}}$\uff0c\u6240\u6709\u5076\u6781\u53d6\u5411\u5747\u53d8\u6362\u5230 Eckart Orientation\u3002\u9700\u8981\u6ce8\u610f\u5230\u6211\u4eec\u65e0\u6cd5\u6c42\u53d6\u5b8c\u6574\u7684\u4e09\u9636\u5bfc\u6570\uff0c\u56e0\u6b64\u4e09\u9636\u5bfc\u6570\u91cf\u7684\u7ef4\u5ea6\u4e0e\u4e8c\u9636\u76f8\u540c\u3002\n\n- `dip_0`\uff1a$\\mu_\\gamma$\n- `dip_1`\uff1a$\\partial \\mu_\\gamma / \\partial Q_i$\uff0c\u7ef4\u5ea6 $(i, \\gamma)$\n- `dip_2`\uff1a$\\partial^2 \\mu_\\gamma / \\partial Q_i \\partial Q_j$\uff0c\u7ef4\u5ea6 $(i, j, \\gamma)$\n- `dip_3`\uff1a$\\partial^3 \\mu_\\gamma / \\partial Q_i^2 \\partial Q_j$\uff0c\u7ef4\u5ea6 $(i, j, \\gamma)$\n\n\u8fd9\u90e8\u5206\u7684\u7ed3\u679c\u53ef\u4ee5\u4e0e Gaussian 16 rev. B01 \u4f5c\u6838\u9a8c\u3002\n\n\n```python\ndip_0 = mf.dip_moment() @ fa.rot_vec\ndip_1 = einsum(\"A\u03b1, \u03b1\u03b3, Ai -> i\u03b3\", get_dipderiv(mf_hess), fa.rot_vec, fa.q) * data.nist.AU2DEBYE\ndip_tmp_2 = NumericDiff(num_hess, lambda mf_hess: get_dipderiv(mf_hess)).derivative * data.nist.AU2DEBYE\ndip_2 = einsum(\"iA\u03b1, \u03b1\u03b3, Aj -> ij\u03b3\", dip_tmp_2, fa.rot_vec, fa.q)\ndipderiv_gathered = np.array([[get_dipderiv(mf_hess) for mf_hess in l] for l in num_hess.objects]) * data.nist.AU2DEBYE\ndip_tmp_3 = einsum(\"i\u03c3A\u03b1, \u03b1\u03b3, Aj -> \u03c3ij\u03b3\", dipderiv_gathered, fa.rot_vec, fa.q)\ndip_3 = (dip_tmp_3[0] + dip_tmp_3[1] - 2 * dip_1) / (num_hess.interval**2)\n```\n\n Dipole moment(X, Y, Z, Debye): -0.01359, -0.52278, 0.07319\n\n\n:::{warning}\n\nGaussian 09 rev. D01 \u7248\u672c\u4e2d\uff0c\u5076\u6781\u77e9\u7684\u53d6\u5411\u5bf9\u5e94\u8f93\u5165\u5206\u5b50\u6240\u4f7f\u7528\u7684\u5750\u6807\u7cfb\uff1b\u4f46 Gaussian 16 rev. B01 \u7248\u672c\u4e2d\uff0c\u5076\u6781\u7684\u53d6\u5411\u5bf9\u5e94 Eckart Orientation\u3002\u672c\u6587\u6863\u4f7f\u7528\u540e\u8005\uff1b\u8bfb\u8005\u9700\u8981\u5c0f\u5fc3\u5730\u5904\u7406\u5750\u6807\u7cfb\u53d6\u5411\u3002\n\n:::\n\n### \u4e8c\u9636\u975e\u8c10\u6548\u5e94\u7684\u5076\u6781\u8d21\u732e\n\n\u77eb\u6b63\u9879 $\\partial^2 \\mu_\\gamma / \\partial Q_i^2 \\cdot \\langle Q_i^2 \\rangle (T)$ \u4ee5\u7ef4\u5ea6\u4e3a $(i, \\alpha)$ \u7684\u77e9\u9635\u8868\u793a\u5982\u4e0b\uff1a\n\n- `dip_anharm_2`\uff1a$T = 0$ \u7684\u60c5\u5f62\n- `dipT_anharm_2`\uff1a$T = T_{2500}$ \u7684\u60c5\u5f62\n\n\n```python\ndip_anharm_2 = 0.5 * dip_2.diagonal(0, 0, 1).T * Q_2[:, None]\ndip_anharm_2\n```\n\n\n\n\n array([[ 0. , 0. , -0. 
],\n [ 0.002213, -0.001139, -0.000712],\n [-0.004645, 0.001438, 0.000898],\n [ 0.008767, 0.001811, -0.000082],\n [-0.003475, 0.002666, 0.000128],\n [-0.002509, -0.005047, -0.000059]])\n\n\n\n\n```python\ndipT_anharm_2 = 0.5 * dip_2.diagonal(0, 0, 1).T * QT_2[:, None]\ndipT_anharm_2\n```\n\n\n\n\n array([[ 0. , 0. , -0. ],\n [ 0.004927, -0.002536, -0.001586],\n [-0.0092 , 0.002849, 0.001778],\n [ 0.016485, 0.003405, -0.000155],\n [-0.004312, 0.003309, 0.000159],\n [-0.002791, -0.005614, -0.000066]])\n\n\n\n### \u4e00\u9636\u632f\u52a8\u975e\u8c10\u6548\u5e94\u7684\u5076\u6781\u8d21\u732e\n\n\u77eb\u6b63\u9879 $\\partial \\mu_\\gamma / \\partial Q_i \\cdot \\langle Q_i \\rangle^\\mathrm{vib} (T)$ \u4ee5\u7ef4\u5ea6\u4e3a $(i, \\alpha)$ \u7684\u77e9\u9635\u8868\u793a\u5982\u4e0b\uff1a\n\n- `dip_anharm_1`\uff1a$T = 0$ \u7684\u60c5\u5f62\n- `dipT_anharm_1`\uff1a$T = T_{2500}$ \u7684\u60c5\u5f62\n\n\n```python\ndip_anharm_1 = dip_1 * Q_1[:, None]\ndip_anharm_1\n```\n\n\n\n\n array([[-0. , -0. , -0. ],\n [-0.000133, 0.004004, -0.000545],\n [-0.004411, 0.005262, 0.000769],\n [-0.013259, -0.008027, -0.000521],\n [ 0.002164, -0.000338, 0.000207],\n [ 0.000337, 0.000047, 0.000094]])\n\n\n\n\n```python\ndipT_anharm_1 = dip_1 * QT_1[:, None]\ndipT_anharm_1\n```\n\n\n\n\n array([[-0. , -0. , -0. ],\n [-0.000242, 0.007286, -0.000992],\n [-0.007703, 0.009188, 0.001343],\n [-0.024782, -0.015004, -0.000973],\n [ 0.003114, -0.000487, 0.000299],\n [ 0.000342, 0.000048, 0.000095]])\n\n\n\n:::{danger}\n\nGaussian 16 rev. B01 \u4e0e Gaussian 09 rev. D01 \u5728\u4e00\u9636\u632f\u52a8\u8ba1\u7b97\u4e0a\u90fd\u5b58\u5728\u95ee\u9898\uff0c\u4e14\u95ee\u9898\u5747\u4e3a\u8ba1\u7b97\u6027\u8d28\u65f6\u6240\u7528\u7684 $\\langle Q_i \\rangle^\\mathrm{vib} (T)$ \u4e0e\u5148\u524d\u8ba1\u7b97\u7684\u7ed3\u679c\u4e0d\u540c\u3002\n\n\u6211\u4eec\u56de\u987e\u5230\uff0c\u5bf9\u4e8e\u5b9e\u6570\u9891\u7387\uff0c\n\n$$\n\\langle Q_i \\rangle^\\mathrm{vib} (T) = - \\frac{\\hbar}{4 \\lambda_i} \\sum_j \\frac{\\Phi_{ijj}}{\\sqrt{|\\lambda_j|}} \\theta_j (T)\n$$\n\n\u4f46\u5728 Gaussian \u4e2d\u7684\u5b9e\u73b0\uff0c\u5f88\u6709\u53ef\u80fd\u662f\n\n$$\n\\langle Q_i \\rangle^\\mathrm{vib} (T) = - \\frac{\\hbar}{4 \\lambda_i} \\sum_j \\frac{\\Phi_{ijj}}{\\sqrt{|\\lambda_j|}} \\theta_j (298.15 \\ \\mathrm{K}) \\theta_j (T)\n$$\n\n\u6211\u8ba4\u4e3a\u8fd9\u663e\u7136\u6709\u4e9b\u95ee\u9898\u54c8\u3002\n\n\u4e0b\u9762\u88ab\u6298\u53e0\u7684 code cell \u53ef\u4ee5\u590d\u73b0 Gaussian \u8f93\u51fa\u7684\u7ed3\u679c\u3002\n\n:::\n\n\n```python\ndef dip_anharm_1_Gaussian():\n Q_1, QT_1, QTT_1 = np.zeros(nvib), np.zeros(nvib), np.zeros(nvib)\n for i in range(nvib):\n if fa.freq[i] <= 0: continue\n for j in range(nvib):\n val = - deriv_3[i,j,j] / (4 * lambd[i] * np.sqrt(np.abs(lambd[j]))) * Q_scale\n Q_1[i] += val\n QT_1[i] += val * get_coef_temp(298.15)[j]\n QTT_1[i] += val * get_coef_temp(298.15)[j] * get_coef_temp(T)[j]\n print(\"Gaussian 16 rev. B01\")\n print(\"Temperature: 0 K\")\n print(dip_1 * QT_1[:, None])\n print(\"Temperature: 2500 K\")\n print(dip_1 * QTT_1[:, None])\n print(\"=====\")\n print(\"Gaussian 09 rev. D01\")\n print(\"Temperature: 0 K\")\n print(dip_1 * QT_1[:, None] @ fa.rot_vec.T)\n print(\"Temperature: 2500 K\")\n print(dip_1 * QTT_1[:, None] @ fa.rot_vec.T)\ndip_anharm_1_Gaussian()\n```\n\n Gaussian 16 rev. B01\n Temperature: 0 K\n [[-0. -0. -0. 
]\n [-0.000119 0.003575 -0.000487]\n [-0.004617 0.005507 0.000805]\n [-0.013046 -0.007899 -0.000512]\n [ 0.002838 -0.000443 0.000272]\n [ 0.000674 0.000094 0.000188]]\n Temperature: 2500 K\n [[-0. -0. -0. ]\n [-0.000242 0.007287 -0.000993]\n [-0.007704 0.009189 0.001343]\n [-0.024786 -0.015006 -0.000973]\n [ 0.003113 -0.000487 0.000299]\n [ 0.000341 0.000048 0.000095]]\n =====\n Gaussian 09 rev. D01\n Temperature: 0 K\n [[ 0. 0. 0. ]\n [-0.001651 0.000601 0.003154]\n [-0.006357 -0.001267 0.003206]\n [-0.008541 -0.001726 -0.012527]\n [ 0.002775 0.000134 0.000776]\n [ 0.000591 -0.000082 0.000378]]\n Temperature: 2500 K\n [[ 0. 0. 0. ]\n [-0.003365 0.001224 0.006428]\n [-0.010607 -0.002114 0.00535 ]\n [-0.016227 -0.003279 -0.0238 ]\n [ 0.003045 0.000147 0.000851]\n [ 0.000299 -0.000042 0.000191]]\n\n\n\n```python\n! head -n 13449 anharm-G16B01.log | tail -n 20\n```\n\n ## Vibrational contributions to averages at 0K (Unit: Debye) ##\r\n ---------------------------------------------------------------------------\r\n Mode | P1(diag) P1(vib) P1(rot) P2 | P1(tot)+P2\r\n ---------------------------------------------------------------------------\r\n X 2 | 0.111D-02 0.674D-03 0.000D+00 -0.251D-02 | -0.183D-02\r\n X 3 | 0.316D-02 0.284D-02 0.000D+00 -0.348D-02 | -0.637D-03\r\n X 4 | -0.898D-02 -0.130D-01 0.000D+00 0.877D-02 | -0.428D-02\r\n X 5 | 0.664D-05 -0.462D-02 0.000D+00 -0.465D-02 | -0.926D-02\r\n X 6 | 0.226D-04 -0.119D-03 0.000D+00 0.221D-02 | 0.209D-02\r\n Y 2 | 0.156D-03 0.944D-04 0.000D+00 -0.505D-02 | -0.495D-02\r\n Y 3 | -0.494D-03 -0.443D-03 0.000D+00 0.267D-02 | 0.222D-02\r\n Y 4 | -0.544D-02 -0.790D-02 0.000D+00 0.181D-02 | -0.609D-02\r\n Y 5 | -0.793D-05 0.551D-02 0.000D+00 0.144D-02 | 0.694D-02\r\n Y 6 | -0.681D-03 0.357D-02 0.000D+00 -0.114D-02 | 0.244D-02\r\n Z 2 | 0.310D-03 0.188D-03 0.000D+00 -0.591D-04 | 0.129D-03\r\n Z 3 | 0.303D-03 0.272D-03 0.000D+00 0.128D-03 | 0.400D-03\r\n Z 4 | -0.352D-03 -0.512D-03 0.000D+00 -0.824D-04 | -0.595D-03\r\n Z 5 | -0.116D-05 0.805D-03 0.000D+00 0.898D-03 | 0.170D-02\r\n Z 6 | 0.927D-04 -0.487D-03 0.000D+00 -0.712D-03 | -0.120D-02\r\n ---------------------------------------------------------------------------\r\n\n\n\n```python\n! 
head -n 13479 anharm-G16B01.log | tail -n 20\n```\n\n ## Vibrational contributions to averages at 2500K (Unit: Debye) ##\r\n ---------------------------------------------------------------------------\r\n Mode | P1(diag) P1(vib) P1(rot) P2 | P1(tot)+P2\r\n ---------------------------------------------------------------------------\r\n X 2 | 0.111D-02 0.342D-03 -0.696D-05 -0.279D-02 | -0.246D-02\r\n X 3 | 0.316D-02 0.311D-02 0.252D-04 -0.431D-02 | -0.117D-02\r\n X 4 | -0.898D-02 -0.248D-01 -0.115D-03 0.165D-01 | -0.841D-02\r\n X 5 | 0.664D-05 -0.770D-02 0.656D-03 -0.920D-02 | -0.162D-01\r\n X 6 | 0.226D-04 -0.242D-03 0.104D-04 0.493D-02 | 0.470D-02\r\n Y 2 | 0.156D-03 0.478D-04 -0.974D-06 -0.561D-02 | -0.557D-02\r\n Y 3 | -0.494D-03 -0.487D-03 -0.394D-05 0.331D-02 | 0.282D-02\r\n Y 4 | -0.544D-02 -0.150D-01 -0.698D-04 0.340D-02 | -0.117D-01\r\n Y 5 | -0.793D-05 0.919D-02 -0.782D-03 0.285D-02 | 0.113D-01\r\n Y 6 | -0.681D-03 0.729D-02 -0.167D-03 -0.254D-02 | 0.458D-02\r\n Z 2 | 0.310D-03 0.950D-04 -0.194D-05 -0.658D-04 | 0.273D-04\r\n Z 3 | 0.303D-03 0.299D-03 0.242D-05 0.159D-03 | 0.460D-03\r\n Z 4 | -0.352D-03 -0.973D-03 -0.452D-05 -0.155D-03 | -0.113D-02\r\n Z 5 | -0.116D-05 0.134D-02 -0.114D-03 0.178D-02 | 0.301D-02\r\n Z 6 | 0.927D-04 -0.992D-03 0.228D-04 -0.159D-02 | -0.256D-02\r\n ---------------------------------------------------------------------------\r\n\n\n### \u4e00\u9636\u8f6c\u52a8\u975e\u8c10\u6548\u5e94\u7684\u5076\u6781\u8d21\u732e\n\n\u6211\u4eec\u73b0\u5728\u8003\u8651\u4e00\u9636\u8f6c\u52a8\u7684\u975e\u8c10\u6548\u5e94\u3002\u5b9e\u9645\u4e0a\uff0c\u4f9d\u636e Harding et al. [^Gauss.JCP.2008] eq. (4)\uff0c\u4e00\u9636\u7b80\u8c10\u6a21\u91cf\u7684\u5e73\u5747\u504f\u79fb\u4e0d\u6b62\u6709\u632f\u52a8\u7684\u8d21\u732e\uff0c\u4e5f\u8fd8\u6709\u8f6c\u52a8\u7684\u8d21\u732e\uff1a\n\n$$\n\\langle Q_i \\rangle (T) = \\langle Q_i \\rangle^\\mathrm{vib} (T) + \\langle Q_i \\rangle^\\mathrm{rot} (T) = - \\frac{\\hbar}{4 \\lambda_i} \\sum_j \\frac{\\Phi_{ijj}}{\\sqrt{|\\lambda_j|}} \\theta_j (T) + \\frac{k_\\mathrm{B} T}{2 \\lambda_i} \\sum_\\alpha \\frac{\\partial I_{\\alpha \\alpha}}{\\partial Q_i} \\frac{1}{I_{\\alpha \\alpha}}\n$$\n\n\u4e3a\u6b64\uff0c\u6211\u4eec\u8fd8\u9700\u8981\u6c42\u53d6\u8f6c\u52a8\u60ef\u91cf $I_{\\alpha \\beta}$ \u5728\u7b80\u632f\u5750\u6807\u4e0b\u7684\u5bfc\u6570 $\\partial I_{\\alpha \\beta}$ `deriv_inertia` (\u7ef4\u5ea6\uff1a$(i, \\alpha, \\beta)$)\u3002\u9700\u8981\u6ce8\u610f\u7684\u662f\uff0c\u5c3d\u7ba1\u6211\u4eec\u5df2\u7ecf\u5c06\u5206\u5b50\u8f6c\u52a8\u5230 Eckart Orientation \u4ee5\u4f7f\u5f97\u8f6c\u52a8\u60ef\u91cf\u77e9\u9635\u662f\u5bf9\u89d2\u77e9\u9635 $I_{\\alpha \\beta} = \\delta_{\\alpha \\beta} I_{\\alpha \\alpha}$\uff0c\u4f46\u8fd9\u4e0d\u610f\u5473\u7740\u5176\u7b80\u632f\u5750\u6807\u4e0b\u7684\u5bfc\u6570\u8fd8\u662f\u5bf9\u89d2\u7684\u3002\n\n\n```python\ndef get_fa(mf_hess):\n mol = mf_hess.mol\n fa = FreqAnal()\n fa.mol_weights = get_atom_mass_list(mol)\n fa.mol_coords = mol.atom_coords()\n fa.natm = mol.natm\n fa.mol_hess = mf_hess.de.swapaxes(1, 2)\n return fa\n```\n\n\n```python\nderiv_inertia = NumericDiff(num_hess, lambda mf_hess: get_fa(mf_hess).mom_inertia).derivative\nderiv_inertia = einsum(\"A\u03b3\u03b4, \u03b3\u03b1, \u03b4\u03b2 -> A\u03b1\u03b2\", deriv_inertia, fa.rot_vec, fa.rot_vec)\n```\n\n\u4f9d\u636e\u8fd9\u4e2a\u5bfc\u6570\uff0c\u5728\u5408\u7406\u7684\u5355\u4f4d\u8f6c\u6362\u4e0b\uff0c\u6211\u4eec\u80fd\u7ed9\u51fa $\\langle Q_i \\rangle^\\mathrm{vib} (T_{2500})$ 
`QTrot_1`\u3002\u540c\u65f6\uff0c\u6211\u4eec\u4e5f\u80fd\u770b\u5230\uff0c\u663e\u7136\u5730 $\\langle Q_i \\rangle^\\mathrm{vib} (0)$ `Qrot_1` \u4e3a\u96f6\uff0c\u56e0\u6b64\u5728 $0 \\ \\mathrm{K}$ \u4e0b\uff0c\u5206\u5b50\u65cb\u8f6c\u5bf9\u975e\u8c10\u6548\u5e94\u4e0d\u4ea7\u751f\u8d21\u732e\u3002\n\n\n```python\nQTrot_1 = (1/2 * T * k_B / lambd * (deriv_inertia.diagonal(0, -1, -2) / (fa.rot_eig)).sum(axis=-1)) * E_h**-1\nQTrot_1[lambd < 0] = 0\nQrot_1 = np.zeros_like(QTrot_1)\nQTrot_1\n```\n\n\n\n\n array([ 0. , 0.005706, -0.006855, -0.025615, 0.006727, 0.003773])\n\n\n\n\u77eb\u6b63\u9879 $\\partial \\mu_\\gamma / \\partial Q_i \\cdot \\langle Q_i \\rangle^\\mathrm{rot} (T)$ \u4ee5\u7ef4\u5ea6\u4e3a $(i, \\alpha)$ \u7684\u77e9\u9635\u8868\u793a\u5982\u4e0b\uff1a\n\n- `dip_anharm_rot`\uff1a$T = 0$ \u7684\u60c5\u5f62 (\u5b9e\u9645\u4e0a $0 \\ \\mathrm{K}$ \u4e0d\u4ea7\u751f\u8d21\u732e)\n- `dipT_anharm_rot`\uff1a$T = T_{2500}$ \u7684\u60c5\u5f62\n\n\n```python\ndipT_anharm_rot = QTrot_1[:, None] * dip_1\ndip_anharm_rot = np.zeros_like(dipT_anharm_rot)\ndipT_anharm_rot\n```\n\n\n\n\n array([[-0. , -0. , -0. ],\n [-0.000052, 0.001562, -0.000213],\n [-0.000949, 0.001132, 0.000166],\n [-0.005677, -0.003437, -0.000223],\n [ 0.000935, -0.000146, 0.00009 ],\n [ 0.000239, 0.000033, 0.000066]])\n\n\n\n\u65cb\u8f6c\u7684\u8d21\u732e\u4e00\u822c\u6765\u8bf4\u603b\u662f\u6bd4\u632f\u52a8\u5c0f\u4e0d\u5c11\u3002\u5728 Gaussian 09 \u7248\u672c\u4e2d\uff0c\u5b9e\u9645\u4e0a\u5c31\u6ca1\u6709\u8003\u8651\u8f6c\u52a8\u7684\u8d21\u732e\uff1b\u5c3d\u7ba1\u4e0d\u592a\u7cbe\u786e\uff0c\u4f46\u4f5c\u8fd9\u79cd\u5ffd\u7565\u4e5f\u5e94\u8ba4\u4e3a\u662f\u5408\u7406\u7684\u3002\n\n:::{danger}\n\nGaussian 16 rev. B01 \u7684\u8f6c\u52a8\u8d21\u732e\u8ba1\u7b97\u5f88\u53ef\u80fd\u662f\u9519\u8bef\u7684\u3002\u8be5\u9519\u8bef\u4e0d\u5bb9\u6613\u91cd\u73b0\u3002\n\n:::\n\n### \u7edf\u5408\u6240\u6709\u8d21\u732e\n\n\u56de\u987e\u5230\u975e\u8c10\u6027\u8d28\u5728\u7b80\u632f\u5750\u6807\u4e0b\u7684\u5c55\u5f00\u5f0f\n\n$$\n\\langle P \\rangle^\\mathrm{anharm} (T) \\simeq P + \\sum_i \\frac{\\partial P}{\\partial Q_i} \\cdot \\langle Q_i \\rangle (T) + \\frac{1}{2} \\sum_{i} \\frac{\\partial^2 P}{\\partial Q_i^2} \\cdot \\langle Q_i^2 \\rangle (T)\n$$\n\n\u5728\u6240\u6709\u8d21\u732e\u91cf\u90fd\u5df2\u7ecf\u6c42\u5f97\u7684\u60c5\u51b5\u4e0b\uff0c\u6211\u4eec\u5f88\u5bb9\u6613\u5730\u7ed9\u51fa $0 \\ \\mathrm{K}$ \u4e0e $T_{2500} = 2500 \\ \\mathrm{K}$ \u7684\u975e\u8c10\u77eb\u6b63\u4e0b\u7684\u5076\u6781\u77e9\u3002\n\n\u672a\u77eb\u6b63\u7684\u5076\u6781\u77e9\uff1a\n\n\n```python\nprint(\"Dipole/Debye (Uncorrected, Eckart Orientation): \", dip_0)\nprint(\"Dipole/Debye (Uncorrected, Input Orientation): \", dip_0 @ fa.rot_vec.T)\n```\n\n Dipole/Debye (Uncorrected, Eckart Orientation): [-0.059826 0.052123 0.522057]\n Dipole/Debye (Uncorrected, Input Orientation): [-0.013591 -0.522779 0.07319 ]\n\n\n$0 \\ \\mathrm{K}$ \u7684\u975e\u8c10\u77eb\u6b63\u5076\u6781\u77e9\uff1a\n\n\n```python\ndip_anharm = dip_0 + (dip_anharm_1 + dip_anharm_rot + dip_anharm_2).sum(axis=0)\nprint(\"Dipole/Debye (0 K, Eckart Orientation): \", dip_anharm)\nprint(\"Dipole/Debye (0 K, Input Orientation): \", dip_anharm @ fa.rot_vec.T)\n```\n\n Dipole/Debye (0 K, Eckart Orientation): [-0.074777 0.052801 0.522232]\n Dipole/Debye (0 K, Input Orientation): [-0.027333 -0.525138 0.067748]\n\n\n$T_{2500} = 2500 \\ \\mathrm{K}$ \u7684\u975e\u8c10\u77eb\u6b63\u5076\u6781\u77e9\uff1a\n\n\n```python\ndipT_anharm = dip_0 + (dipT_anharm_1 + 
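\n               # mode-wise sum of the first-order vibrational, first-order rotational, and second-order corrections defined above\n               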
dipT_anharm_rot + dipT_anharm_2).sum(axis=0)\nprint(\"Dipole/Debye (0 K, Eckart Orientation): \", dipT_anharm)\nprint(\"Dipole/Debye (0 K, Input Orientation): \", dipT_anharm @ fa.rot_vec.T)\n```\n\n Dipole/Debye (0 K, Eckart Orientation): [-0.089493 0.053711 0.521843]\n Dipole/Debye (0 K, Input Orientation): [-0.041028 -0.526894 0.062559]\n\n\n## \u975e\u8c10\u8c31\u5b66\u6027\u8d28\uff1a\u7ea2\u5916\u5149\u8c31\n\n\u5982\u679c\u4e0a\u9762\u7528\u4e8e\u8ba1\u7b97\u5076\u6781\u77e9\u7684\u8ba1\u7b97\u65b9\u6cd5\u4e5f\u53ef\u4ee5\u7528\u4e8e\u8ba1\u7b97\u5076\u6781\u77e9\u7684\u5750\u6807\u5bfc\u6570\uff0c\u90a3\u4e48\u7ea2\u5916\u5149\u8c31\u4e5f\u53ef\u4ee5\u901a\u8fc7\u7c7b\u4f3c\u7684\u65b9\u6cd5\u5b9e\u73b0\u3002\n\n:::{danger}\n\n\u8fd9\u53ea\u662f\u4f9d\u846b\u82a6\u753b\u74e2\uff0c\u7528\u8ba1\u7b97\u975e\u8c10\u5076\u6781\u7684\u65b9\u6cd5\u5957\u7528\u5230\u7ea2\u5916\u5149\u8c31\u4e2d\u3002\u5f88\u663e\u7136\u8fd9\u672a\u5fc5\u662f\u6b63\u786e\u7684\uff0c\u800c\u4e14\u4e8b\u5b9e\u4e0a\uff0c\u4e0d\u4ec5\u662f\u975e\u8c10\u632f\u52a8\u9891\u7387 (\u56e0\u4e3a Fermi \u6216 Darling-Dennsion \u5171\u632f)\uff0c**\u4e0b\u8ff0\u4ee3\u7801\u7684 IR \u5cf0\u5f3a\u5ea6\u7ed3\u679c\u4e5f\u4e0e Gaussian \u7684\u8f93\u51fa\u975e\u5e38\u4e0d\u540c**\u3002\n\n\u56e0\u6b64\uff0c\u5404\u4f4d\u770b\u5b98\u5c31\u5f53\u770b\u4e2a\u70ed\u95f9\u54c8 (\uff40\u30fb\u03c9\u30fb\u00b4)\n\n\u60f3\u8981\u4e86\u89e3\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u8003 Bloino \u4f5c\u4e3a\u7b2c\u4e00\u4f5c\u8005\u7684\u6587\u7ae0 ([^Bloino.JCP.2012], [^Bloino.JPCA.2015])\u3002\u7ea2\u5916\u5149\u8c31\u5e94\u5f53\u9700\u8981\u4f5c\u66f4\u4e25\u683c\u7684 RSPT \u4e8c\u9636\u5c55\u5f00\uff0c\u5e76\u4e14\u8003\u8651\u4e00\u90e8\u5206\u975e\u57fa\u9891\u632f\u52a8\u95f4\u7684\u76f8\u4e92\u8dc3\u8fc1\u5076\u6781\u77e9\u5bfc\u6570\u3002\u8fd9\u90e8\u5206\u5de5\u4f5c\u5e94\u5f53\u5df2\u7ecf\u5b9e\u73b0\u5728 Gaussian 16 rev. 
B01 \u4e2d\uff0c\u56e0\u6b64\u6211\u4eec\u80fd\u83b7\u5f97\u975e\u8c10\u7ea2\u5916\u4fe1\u606f\u3002\n\n\u5728\u4e0a\u9762\u5bf9 Gaussian \u7a0b\u5e8f\u7684\u5b9e\u73b0\u7ed3\u679c\u4f5c\u8ba8\u8bba\u4e4b\u540e\uff0c\u6211\u5176\u5b9e\u89c9\u5f97\u52a1\u5b9e\u770b\u5f85 Gaussian \u8f93\u51fa\u7684\u6001\u5ea6\u5e94\u5f53\u662f\u4fdd\u7559\u5730\u63a5\u53d7\u5b83\u3002\u5b83\u6240\u7ed9\u51fa\u7684\u5206\u5b50\u632f\u52a8\u9891\u7387\u8fd8\u662f\u76f8\u5bf9\u51c6\u786e\u7684\uff0c\u7279\u522b\u662f\u76f8\u6bd4\u4e0e\u672c\u6587\u6863\u800c\u8a00\uff0c\u5bf9\u9891\u7387\u5171\u632f\u7684\u5904\u7406\u591a\u5c11\u8ba9\u7ed3\u679c\u53d8\u597d\u4e00\u4e9b (\u5c3d\u7ba1\u5bf9\u4e8e Fermi \u5171\u632f\uff0c\u5982\u679c\u4e0d\u5e94\u7528 DCPT2 \u6216\u5176\u5b83\u7b80\u5e76\u5fae\u6270\u7406\u8bba\u800c\u5b8c\u5168\u53bb\u9664\u5171\u632f\u9879\uff0c\u4efb\u610f\u6027\u5176\u5b9e\u592a\u5f3a\uff1b\u8fd9\u5728 Gaussian \u4e2d\u6216\u8bb8\u5b9e\u73b0\u4e86\uff0c\u4f46\u6211\u8fd8\u4e0d\u6e05\u695a)\u3002\u4f46\u5728\u5076\u6781\u77e9 (\u6216\u53ef\u80fd\u5730\uff0c\u78c1\u77e9\u7b49) \u7684\u975e\u8c10\u5904\u7406\u4e0a\uff0c\u5927\u6982\u8fd8\u6709\u9700\u6539\u8fdb\u4e4b\u5904\u3002\n\n:::\n\n\u6211\u4eec\u8fd9\u91cc\u9009\u62e9\u4f7f\u7528\u76f8\u5bf9\u6765\u8bf4\u8f83\u4e3a\u771f\u5b9e\u7684\u4e59\u919b\u5206\u5b50 (CH3CHO) \u5206\u5b50\uff0c\u4f5c\u4e3a\u7ed8\u5236\u7ea2\u5916\u5149\u8c31\u7684\u5206\u5b50\u3002\u8ba1\u7b97\u65b9\u6cd5\u4ecd\u7136\u662f STO-3G\u3002\u7531\u4e8e\u6211\u4eec\u9700\u8981\u91cd\u65b0\u5199\u4e00\u904d\u8fd1\u4e4e\u5b8c\u6574\u7684\u4ee3\u7801\u6765\u7ed8\u5236\u7ea2\u5916\u5149\u8c31\uff0c\u56e0\u6b64\u8fd9\u91cc\u5c06\u5927\u90e8\u5206\u4ee3\u7801\u8fdb\u884c\u4e86\u6298\u53e0\u3002\u5176\u8f93\u5165\u5361 {download}`CH3CHO.gjf`\uff0c\u8f93\u51fa\u6587\u4ef6 {download}`CH3CHO.log`\u3002\u7ed8\u5236\u7ea2\u5916\u5149\u8c31\u65f6\uff0c\u4f7f\u7528 Lorentzian \u5c55\u5bbd\uff0c\u534a\u5cf0\u5bbd $30 \\ \\mathrm{cm^{-1}}$\u3002\n\n### \u5206\u5b50\u5b9a\u4e49\u4e0e\u539f\u5b50\u6838\u9ad8\u9636\u5bfc\u6570\n\n\n```python\nmol = gto.Mole(atom=\"\"\"\nC -0.852693 0.499814 -0.000294\nC 0.685096 0.670458 0.000342\nO 1.340323 -0.602319 -0.000432\nO -1.349649 -0.611839 0.000139\nH -1.450829 1.425645 -0.001000\nH 0.974215 1.253523 0.883979\nH 0.975003 1.254674 -0.882236\nH 0.581799 -1.242205 0.001322\n\"\"\", basis=\"STO-3G\", verbose=0).build()\n```\n\n\n```python\nnatm = mol.natm\nnhess = natm * 3\nmf = scf.RHF(mol).run()\nmf_hess = mf.Hessian().run()\n```\n\n\n```python\nfa = FreqAnal()\nfa.mol_weights = get_atom_mass_list(mol)\nfa.mol_coords = mol.atom_coords()\nfa.natm = mol.natm\nfa.mol_hess = mf_hess.de.swapaxes(1, 2)\nnvib = fa.freq.size\n```\n\n\n```python\nderiv_2 = einsum(\"Pi, PQ, Qj -> ij\", fa.q, fa.mol_hess.reshape(nhess, nhess), fa.q)\nlambd = deriv_2.diagonal()\n```\n\n\n```python\nnum_hess = ModeDerivGenerator(mol, lambda mol: scf.RHF(mol).run().Hessian().run(), fa.q, interval=0.01)\ntmp_3 = NumericDiff(num_hess, lambda mf: mf.de.swapaxes(1, 2).reshape(nhess, nhess)).derivative\nderiv_3 = einsum(\"Ai, Bj, kAB -> ijk\", fa.q, fa.q, tmp_3)\nderiv_3 = 1/3 * (deriv_3 + deriv_3.transpose(1, 2, 0) + deriv_3.transpose(2, 0, 1))\n```\n\n\n```python\nhess_gathered = np.array([[mf.de.swapaxes(1, 2).reshape(nhess, nhess) for mf in l] for l in num_hess.objects])\ntmp_4 = einsum(\"ksAB, Ai, Bj -> skij\", hess_gathered, fa.q, fa.q)\nderiv_4 = (tmp_4[0] + tmp_4[1] - 2 * deriv_2) / (num_hess.interval**2) # finite diff\nderiv_4 = (deriv_4 + deriv_4.swapaxes(-1, -2)) / 2 # 
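average with the axis-swapped copy to 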
symmetrize\nderiv_4 = einsum(\"kij -> ijk\", deriv_4) # subscript transform\n```\n\n### $\\xi_{ij}$ \u77e9\u9635\u751f\u6210\u4e0e\u975e\u8c10\u9891\u7387\u8ba1\u7b97\n\n\n```python\nxmat_4 = np.zeros((nvib, nvib))\nfor i in range(nvib):\n xmat_4[i, i] = deriv_4[i,i,i] / (16 * lambd[i])\n for j in range(nvib):\n if i == j: continue\n xmat_4[i, j] = deriv_4[i,i,j] / (4 * np.sqrt(np.abs(lambd[i] * lambd[j])))\nxmat_4 *= (hbar / (2 * np.pi * c_0)) * (a_0**-2 * amu**-1) / (100)\n```\n\n\n```python\nxmat_3 = np.zeros((nvib, nvib))\nfor i in range(nvib):\n xmat_3[i, i] -= deriv_3[i,i,i]**2 * 5 / (3 * lambd[i])\n for j in range(nvib):\n if i == j: continue\n xmat_3[i, i] -= deriv_3[i,i,j]**2 * (8 * lambd[i] - 3 * lambd[j]) / (lambd[j] * (4 * lambd[i] - lambd[j]))\n xmat_3[i, i] /= 16 * lambd[i]\nfor i in range(nvib):\n for j in range(nvib):\n if i == j: continue\n xmat_3[i, j] -= deriv_3[i,i,j]**2 * 2 / (4 * lambd[i] - lambd[j])\n xmat_3[i, j] -= deriv_3[i,j,j]**2 * 2 / (4 * lambd[j] - lambd[i])\n xmat_3[i, j] -= deriv_3[i,i,i] * deriv_3[i,j,j] / lambd[i]\n xmat_3[i, j] -= deriv_3[j,j,j] * deriv_3[i,i,j] / lambd[j]\n for k in range(nvib):\n if len(set([i, j, k])) != 3: continue\n delta_ijk = lambd[i]**2 + lambd[j]**2 + lambd[k]**2 - 2 * (lambd[i]*lambd[j] + lambd[j]*lambd[k] + lambd[k]*lambd[i])\n xmat_3[i, j] += deriv_3[i,j,k]**2 * 2 * (lambd[i] + lambd[j] - lambd[k]) / delta_ijk\n xmat_3[i, j] -= deriv_3[i,i,k] * deriv_3[j,j,k] / lambd[k]\n xmat_3[i, j] /= 4 * np.sqrt(np.abs(lambd[i] * lambd[j]))\nxmat_3 *= (hbar / (2 * np.pi * c_0)) * (a_0**-2 * amu**-1) / (100)\n```\n\n\n```python\nrot_wavenum = h / (8 * np.pi**2 * c_0 * fa.rot_eig) * (a_0**-2 * amu**-1) / (100)\nq_ = einsum(\"A\u03b2i, \u03b2\u03b1, A -> A\u03b1i\", fa.q.reshape(natm, 3, nvib), fa.rot_vec, np.sqrt(get_atom_mass_list(mol)))\nzeta_x = einsum(\"Ai, Aj -> ij\", q_[:, 1, :], q_[:, 2, :]) - einsum(\"Ai, Aj -> ij\", q_[:, 2, :], q_[:, 1, :])\nzeta_y = einsum(\"Ai, Aj -> ij\", q_[:, 2, :], q_[:, 0, :]) - einsum(\"Ai, Aj -> ij\", q_[:, 0, :], q_[:, 2, :])\nzeta_z = einsum(\"Ai, Aj -> ij\", q_[:, 0, :], q_[:, 1, :]) - einsum(\"Ai, Aj -> ij\", q_[:, 1, :], q_[:, 0, :])\nzeta = np.array([zeta_x, zeta_y, zeta_z])\n\nxmat_coriol = np.zeros((nvib, nvib))\nfor i in range(nvib):\n for j in range(nvib):\n for ixyz in range(3):\n xmat_coriol[i, j] += zeta[ixyz, i, j]**2 * rot_wavenum[ixyz]\n xmat_coriol[i, j] *= (lambd[i] + lambd[j]) / np.sqrt(np.abs(lambd[i] * lambd[j]))\n```\n\n\n```python\nxmat = xmat_4 + xmat_3 + xmat_coriol\nfreq_anharm = fa.freq + 1.5 * xmat.diagonal() + 0.5 * xmat.sum(axis=-1)\n```\n\n### \u6e29\u5ea6\u6548\u5e94\u77eb\u6b63\u7cfb\u6570\u8ba1\u7b97 (2500 K)\n\n\n```python\nT = 2500\ncoef_temp = get_coef_temp(T)\nQ_scale = hbar / np.sqrt(E_h * a_0**2 * amu)\n```\n\n\n```python\nQ_1, QT_1 = np.zeros(nvib), np.zeros(nvib)\nfor i in range(nvib):\n if fa.freq[i] <= 0: continue\n for j in range(nvib):\n val = - deriv_3[i,j,j] / (4 * lambd[i] * np.sqrt(np.abs(lambd[j]))) * Q_scale\n Q_1[i] += val\n QT_1[i] += val * coef_temp[j]\n```\n\n\n```python\nQ_2, QT_2 = np.zeros(nvib), np.zeros(nvib)\nfor i in range(nvib):\n if fa.freq[i] <= 0: continue\n val = 1 / (2 * np.sqrt(lambd[i])) * Q_scale\n Q_2[i] += val\n QT_2[i] += val * coef_temp[i]\n```\n\n\n```python\nderiv_inertia = NumericDiff(num_hess, lambda mf_hess: get_fa(mf_hess).mom_inertia).derivative\nderiv_inertia = einsum(\"A\u03b3\u03b4, \u03b3\u03b1, \u03b4\u03b2 -> A\u03b1\u03b2\", deriv_inertia, fa.rot_vec, fa.rot_vec)\nQTrot_1 = (1/2 * T * k_B / lambd * 
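\n    # rotational first-order average <Q_i>^rot(T): trace of (dI/dQ_i)/I over the principal axes, scaled by k_B*T/(2*lambda_i); E_h**-1 presumably converts the k_B*T factor to atomic units\n    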
(deriv_inertia.diagonal(0, -1, -2) / (fa.rot_eig)).sum(axis=-1)) * E_h**-1\nQTrot_1[lambd < 0] = 0\nQrot_1 = np.zeros_like(QTrot_1)\n```\n\n### \u5076\u6781\u77e9\u5bfc\u6570\u4e0e\u7ea2\u5916\u5f3a\u5ea6\u8ba1\u7b97\n\n\n```python\ndip_0 = mf.dip_moment() @ fa.rot_vec\ndip_1 = einsum(\"A\u03b1, \u03b1\u03b3, Ai -> i\u03b3\", get_dipderiv(mf_hess), fa.rot_vec, fa.q) * data.nist.AU2DEBYE\ndip_tmp_2 = NumericDiff(num_hess, lambda mf_hess: get_dipderiv(mf_hess)).derivative * data.nist.AU2DEBYE\ndip_2 = einsum(\"iA\u03b1, \u03b1\u03b3, Aj -> ij\u03b3\", dip_tmp_2, fa.rot_vec, fa.q)\ndipderiv_gathered = np.array([[get_dipderiv(mf_hess) for mf_hess in l] for l in num_hess.objects]) * data.nist.AU2DEBYE\ndip_tmp_3 = einsum(\"i\u03c3A\u03b1, \u03b1\u03b3, Aj -> \u03c3ij\u03b3\", dipderiv_gathered, fa.rot_vec, fa.q)\ndip_3 = (dip_tmp_3[0] + dip_tmp_3[1] - 2 * dip_1) / (num_hess.interval**2)\n```\n\n Dipole moment(X, Y, Z, Debye): -0.85484, 1.70251, 0.00242\n\n\n\n```python\nIR_scale = 1/3 * np.pi * F**2 * data.nist.AU2DEBYE**-2 * 1e-7\nIR_inten = (dip_1**2).sum(axis=-1) * IR_scale\n```\n\n\n```python\ndip_1_anharm = einsum(\"ij\u03b3, j -> j\u03b3\", dip_2, Q_1 + Qrot_1) + 0.5 * einsum(\"ij\u03b3, j -> j\u03b3\", dip_3, Q_2)\nIR_anharm = ((dip_1 + dip_1_anharm)**2).sum(axis=-1) * IR_scale\n```\n\n\n```python\ndip_1T_anharm = einsum(\"ij\u03b3, j -> j\u03b3\", dip_2, QT_1 + QTrot_1) + 0.5 * einsum(\"ij\u03b3, j -> j\u03b3\", dip_3, QT_2)\nIRT_anharm = ((dip_1 + dip_1T_anharm)**2).sum(axis=-1) * IR_scale\n```\n\n### \u7ea2\u5916\u5149\u8c31\u7ed8\u5236\n\n\n```python\ndef lorentzian_freq(omega, omega_n, gamma):\n return 100 * 0.5 / np.pi * gamma / ((omega - omega_n)**2 + 0.25 * gamma**2)\n```\n\n\n```python\ndef ir_plot(omega, gamma, freq, ir):\n val = 0\n assert(len(freq) == len(ir))\n for i in range(len(freq)):\n val += lorentzian_freq(omega, freq[i], gamma) * ir[i]\n return val\n```\n\n\n```python\nfreq_anharm_gau = [4098.800, 3570.604, 3446.746, 3471.841, 2088.048, 1787.600, 1725.346, 1593.897, 1530.767, 1407.142, 1297.110, 1206.649, 1019.526, 878.962, 795.699, 185.138, 258.388, 69.977]\nIR_anharm_gau = [2.37219675, 1.54485433, 9.30810815, 7.71760501, 22.32308331, 0.91999625, 21.74763306, 0.33568248, 39.89748639, 0.34998748, 8.84110221, 1.90263087, 8.44102685, 0.12421802, 12.58976516, 36.74388344, 9.13647155, 24.14833335]\n```\n\n\n```python\nfig, ax = plt.subplots(figsize=(7, 4))\nax.grid()\n\nx = np.arange(0, 4500, 1)\nax.set_xlim(0, 4500)\nax.set_ylim(-10, 180)\nax.plot(x, ir_plot(x, 30, fa.freq, IR_inten), label=\"Harmonic\")\nax.plot(x, ir_plot(x, 30, freq_anharm, IR_anharm), label=\"Anharmonic (0 K)\")\n# ax.plot(x, ir_plot(x, 30, freq_anharm, IRT_anharm), label=\"Anharmonic (2500 K)\")\nax.plot(x, ir_plot(x, 30, freq_anharm_gau, IR_anharm_gau), label=\"Anharmonic (Gaussian)\")\nax.set_ylabel(\"Molar Absorption Coefficient (L mol$^{-1}$ cm$^{-1}$)\")\nax.set_xlabel(\"Vibration Wavenumber (cm$^{-1}$)\")\nax.set_title(\"Acetaldehyde ($\\mathsf{CH_3CHO}$) Infared Spectrum (HF/STO-3G)\")\nax.legend(loc=\"upper right\")\n\nax.set_ylabel(\"IR Intensity (km mol$^{-1}$)\")\nax.legend(loc=\"upper right\")\n```\n\n\n \n\n\n\n\n\n\n\n\n\n \n\n\n\n## 
\u6587\u6863\u8865\u5145\u4fe1\u606f\n\n\u8fd9\u4efd\u6587\u6863\u7684\u4e00\u90e8\u5206\u5185\u5bb9\u4e0e\u989c\u6587\u6770\u6709\u8f83\u591a\u8ba8\u8bba\uff0c\u4e0e\u7533\u540c\u660a\u6709\u4e00\u90e8\u5206\u8ba8\u8bba\u3002\u6211\u4e5f\u7b97\u662f\u5bf9\u975e\u8c10\u77eb\u6b63\u6709\u4e9b\u597d\u5947\uff0c\u4e8e\u662f\u5c1d\u8bd5\u641e\u4e86\u4e00\u4e0b\u3002\n\n[^Barone.JCP.2005]: Anharmonic Vibrational Properties by a fully automated second-order perturbative approach, V. Barone, *J. Chem. Phys.* **2005**, *122*, 014108, doi: [10.1063/1.1824881](https://doi.org/10.1063/1.1824881).\n\n[^Miller.CPL.1990]: Ab initio calculation of anharmonic constants for a transition state, with application to semiclassical transition state tunneling probabilities, W. H. Miller, R. Hernandez, N. C. Handy, D. Jayatilaka, A. Willetts, *Chem. Phys. Lett.* **1990**, *172*, 62-68, doi: [10.1016/0009-2614(90)87217-F](https://doi.org/10.1016/0009-2614(90)87217-F).\n\n[^Gauss.JCP.2008]: Quantitative prediction of gas-phase 19F nuclear magnetic shielding constants, M. E. Harding, M. Lenhart, A. A. Auer, J. Gauss, *J. Chem. Phys.* **2008**, *128*, 244111, doi: [10.1063/1.2943145](https://doi.org/10.1063/1.2943145).\n\n[^Bloino.JCP.2012]: A second-order perturbation theory route to vibrational averages and transition properties of molecules: General formulation and application to infrared and vibrational circular dichroism spectroscopies, J. Bloino, V. Barone, *J. Chem. Phys.* **2012**, *136*, 124108, doi: [10.1063/1.3695210](https://doi.org/10.1063/1.3695210).\n\n[^Bloino.JPCA.2015]: A VPT2 Route to Near-Infrared Spectroscopy: The Role of Mechanical and Electrical Anharmonicity, *J. Phys. Chem. A* **2015**, *119*, 5269-5287, doi: [10.1021/jp509985u](https://doi.org/10.1021/jp509985u).\n", "meta": {"hexsha": "3739a329037d2936eeb92869c072499519b22b7b", "size": 272172, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "source/QC_Notes/Freq_Series/freq_4.ipynb", "max_stars_repo_name": "ajz34/ajz34.readthedocs.io", "max_stars_repo_head_hexsha": "73be05a73241c18b98fd0d4dbdc48c643278c3da", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-07-30T12:31:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-14T03:56:56.000Z", "max_issues_repo_path": "source/QC_Notes/Freq_Series/freq_4.ipynb", "max_issues_repo_name": "ajz34/ajz34.readthedocs.io", "max_issues_repo_head_hexsha": "73be05a73241c18b98fd0d4dbdc48c643278c3da", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/QC_Notes/Freq_Series/freq_4.ipynb", "max_forks_repo_name": "ajz34/ajz34.readthedocs.io", "max_forks_repo_head_hexsha": "73be05a73241c18b98fd0d4dbdc48c643278c3da", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-07-30T12:32:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-30T12:32:09.000Z", "avg_line_length": 70.1112828439, "max_line_length": 138099, "alphanum_fraction": 0.7316219156, "converted": true, "num_tokens": 27007, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
NO", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.399811640739795, "lm_q1q2_score": 0.2847587181682613}} {"text": "```python\nimport numpy as np\nimport pandas as pd\nimport pandasql as ps\nimport seaborn as sns\nimport scipy\nimport matplotlib.pyplot as plt\nimport scipy as sp\nfrom scipy import stats\npd.set_option(\"display.max_rows\", 200)\npd.set_option(\"display.max_columns\", 200)\n\n```\n\n# reading the data \n\n\n```python\nrow_df = pd.read_excel('chen_shalev_labeld_row_data.xlsx')\n```\n\n\n```python\ndf = row_df[:] # object copy\n```\n\n### good/bad flaging\n\n\n```python\ngood_placing = np.logical_or(df['label'] == 1 , df['label'] == 2)\nbad_placing = ~good_placing\n\n```\n\n\n```python\ndf.loc[bad_placing,'is_good_placing'] = 0\ndf.loc[good_placing,'is_good_placing'] = 1\n\n```\n\n# arab_not_arab_flaging\n\n\n```python\ndf = df[['\u05dc\u05e9\u05db\u05d4 \u05de\u05e7\u05d5\u05d3\u05d3\u05ea','\u05e9\u05e4\u05d5\u05ea','\u05d3\u05ea','label','is_good_placing']]\n```\n\n\n```python\ncontains = df['\u05e9\u05e4\u05d5\u05ea'].str.contains('\u05e2\u05e8\u05d1\u05d9\u05ea') & (df['\u05d3\u05ea'].str.contains('\u05d9\u05d4\u05d5\u05d3\u05d9')==False)\nnot_contains = [not x for x in contains]\ndf.loc[contains, 'is_arab'] = 1\ndf.loc[not_contains, 'is_arab'] = 0\n```\n\n\n```python\npd.set_option(\"display.max_rows\", 200)\ndf.head(10)\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\u05dc\u05e9\u05db\u05d4 \u05de\u05e7\u05d5\u05d3\u05d3\u05ea\u05e9\u05e4\u05d5\u05ea\u05d3\u05ealabelis_good_placingis_arab
0\u05d316\u05e2\u05d1\u05e8\u05d9\u05ea -\u05d1\u05e1\u05d9\u05e1\u05d9\u05ea, \u05e2\u05e8\u05d1\u05d9\u05ea -\u05e9\u05e4\u05ea \u05d0\u05dd\u05de\u05d5\u05e1\u05dc\u05de\u05d940.01.0
1\u05d316\u05e2\u05d1\u05e8\u05d9\u05ea -\u05d1\u05e1\u05d9\u05e1\u05d9\u05ea, \u05e2\u05e8\u05d1\u05d9\u05ea -\u05e9\u05e4\u05ea \u05d0\u05dd\u05de\u05d5\u05e1\u05dc\u05de\u05d930.01.0
2\u05d316\u05e2\u05d1\u05e8\u05d9\u05ea -\u05d2\u05d1\u05d5\u05d4\u05d4, \u05e2\u05e8\u05d1\u05d9\u05ea -\u05e9\u05e4\u05ea \u05d0\u05dd\u05de\u05d5\u05e1\u05dc\u05de\u05d921.01.0
3\u05d316\u05e2\u05d1\u05e8\u05d9\u05ea -\u05d1\u05e1\u05d9\u05e1\u05d9\u05ea, \u05e2\u05e8\u05d1\u05d9\u05ea -\u05e9\u05e4\u05ea \u05d0\u05dd\u05de\u05d5\u05e1\u05dc\u05de\u05d940.01.0
4\u05d316\u05e2\u05e8\u05d1\u05d9\u05ea -\u05e9\u05e4\u05ea \u05d0\u05dd\u05de\u05d5\u05e1\u05dc\u05de\u05d921.01.0
5\u05d316\u05e2\u05d1\u05e8\u05d9\u05ea -\u05d1\u05e1\u05d9\u05e1\u05d9\u05ea, \u05e2\u05e8\u05d1\u05d9\u05ea -\u05e9\u05e4\u05ea \u05d0\u05dd\u05de\u05d5\u05e1\u05dc\u05de\u05d940.01.0
6\u05d316\u05e2\u05d1\u05e8\u05d9\u05ea -\u05d1\u05e1\u05d9\u05e1\u05d9\u05ea, \u05e2\u05e8\u05d1\u05d9\u05ea -\u05e9\u05e4\u05ea \u05d0\u05dd\u05de\u05d5\u05e1\u05dc\u05de\u05d940.01.0
7\u05d316\u05e2\u05d1\u05e8\u05d9\u05ea -\u05d2\u05d1\u05d5\u05d4\u05d4, \u05e2\u05e8\u05d1\u05d9\u05ea -\u05e9\u05e4\u05ea \u05d0\u05dd\u05de\u05d5\u05e1\u05dc\u05de\u05d9 \u05e1\u05d5\u05e0\u05d940.01.0
8\u05d316\u05e2\u05d1\u05e8\u05d9\u05ea -\u05d1\u05e1\u05d9\u05e1\u05d9\u05ea, \u05e2\u05e8\u05d1\u05d9\u05ea -\u05e9\u05e4\u05ea \u05d0\u05dd\u05de\u05d5\u05e1\u05dc\u05de\u05d921.01.0
9\u05d316\u05e2\u05d1\u05e8\u05d9\u05ea -\u05d1\u05e1\u05d9\u05e1\u05d9\u05ea, \u05e2\u05e8\u05d1\u05d9\u05ea -\u05e9\u05e4\u05ea \u05d0\u05dd\u05de\u05d5\u05e1\u05dc\u05de\u05d940.01.0
\n
\n\n\n\n\n```python\ndf_ready = df[['\u05dc\u05e9\u05db\u05d4 \u05de\u05e7\u05d5\u05d3\u05d3\u05ea','is_arab','is_good_placing']]\ndf_ready.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\u05dc\u05e9\u05db\u05d4 \u05de\u05e7\u05d5\u05d3\u05d3\u05eais_arabis_good_placing
0\u05d3161.00.0
1\u05d3161.00.0
2\u05d3161.01.0
3\u05d3161.00.0
4\u05d3161.01.0
\n
\n\n\n\n## data manipulation - extracting count and proportions of each office and nation\n\n\n```python\nagg_data = ps.sqldf(f\"\"\"select a.office,\n a.is_arab,\n a.count_office_and_nation,\n b.count_office,\n a.count_office_and_nation * 1.0/ b.count_office proption_within_office,\n a.avg as proportion_of_good_placing\n \n from\n (SELECT \"\u05dc\u05e9\u05db\u05d4 \u05de\u05e7\u05d5\u05d3\u05d3\u05ea\" as office,\n is_arab,\n count(*) count_office_and_nation, \n AVG(is_good_placing) avg\n FROM df_ready \n group by 1,2) a\n \n left join (select \"\u05dc\u05e9\u05db\u05d4 \u05de\u05e7\u05d5\u05d3\u05d3\u05ea\" as office,\n count(*) count_office\n from df_ready\n group by 1) b\n on a.office = b.office\n \n inner join (SELECT office\n from\n (select \"\u05dc\u05e9\u05db\u05d4 \u05de\u05e7\u05d5\u05d3\u05d3\u05ea\" as office,\n is_arab,\n count(*) count_office_and_nation \n FROM df_ready \n group by 1,2\n having count_office_and_nation > 14)\n group by 1 \n having count(office) = 2 ) c\n on a.office = c.office\n \n ;\n \"\"\", globals())\n```\n\n\n```python\nagg_data.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
officeis_arabcount_office_and_nationcount_officeproption_within_officeproportion_of_good_placing
0\u05d040.0222025530.8695650.185135
1\u05d041.033325530.1304350.165165
2\u05d060.078710060.7823060.111817
3\u05d061.021910060.2176940.114155
4\u05d1100.010118720.0539530.099010
\n
\n\n\n\n### lets check if our data id taken from normal distribution\n\n\n```python\nflat_agg_data = ps.sqldf(f\"\"\"select *,\n not_arab_proportion - arab_proportion as diff\n from(\n SELECT office,\n max(case when is_arab = 1 then count_office_and_nation end ) arab_count_within_office,\n max(case when is_arab = 0 then count_office_and_nation end ) not_arab_count_within_office,\n\n max(case when is_arab = 1 then proportion_of_good_placing end ) arab_proportion,\n max(case when is_arab = 0 then proportion_of_good_placing end ) not_arab_proportion \n FROM agg_data \n group by 1 )\n \"\"\", globals())\n```\n\n\n```python\nflat_agg_data\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
officearab_count_within_officenot_arab_count_within_officearab_proportionnot_arab_proportiondiff
0\u05d0433322200.1651650.1851350.019970
1\u05d062197870.1141550.111817-0.002338
2\u05d11017711010.1095430.099010-0.010533
3\u05d13291617250.1419750.1495650.007590
4\u05d1510686440.0823970.0838510.001454
5\u05d1843390.0232560.1025640.079308
6\u05d2125027140.1960000.2313930.035393
7\u05d2114444820.1418920.124481-0.017411
8\u05d2322440.0909090.022727-0.068182
9\u05d2519831060.1770050.132075-0.044929
10\u05d31597830.2033900.174968-0.028422
11\u05d310402460.2000000.073171-0.126829
12\u05d31234480.0294120.0625000.033088
13\u05d3142923230.2326380.217391-0.015246
14\u05d315819180.1135530.000000-0.113553
15\u05d3162722290.2435710.3103450.066774
16\u05d318971120.0927840.1160710.023288
17\u05d3197912000.1645570.155000-0.009557
18\u05d3221644590.2481750.2881360.039960
19\u05d32421571700.2939270.235294-0.058633
20\u05d33699920.2360520.2934780.057427
21\u05d3450270.1600000.074074-0.085926
22\u05d371521470.2000000.153237-0.046763
23\u05d3892921750.1496230.1839080.034285
24\u05d410365300.1835620.2000000.016438
25\u05d42228060.1818180.1823820.000564
26\u05d4421221120.2082940.187500-0.020794
27\u05d472472110.2995950.175355-0.124240
28\u05d489517860.1787590.145038-0.033721
29\u05d49418170.2153110.058824-0.156487
\n
\n\n\n\n## checking for normality \n\n\n```python\na= flat_agg_data['diff'] \nsns.distplot(a, hist=True, kde=True, \n bins=15, color = 'darkblue', \n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 4}\n )\n\n```\n\n# Our data cdf and theoretical values cdf from normal distribution\n\n\n```python\ndef ks_plot_norm(data):\n length = len(data)\n plt.figure(figsize=(10, 5))\n plt.plot(np.sort(data), np.linspace(0, 1, len(data), endpoint=False))\n plt.plot(np.sort(stats.norm.rvs(loc=np.mean(data), scale=np.std(data), size=len(data))), np.linspace(0, 1, len(data), endpoint=False))\n plt.legend('top right')\n plt.legend(['Data', 'Theoretical Values'])\n plt.title('Comparing CDFs for KS-Test')\n```\n\n\n```python\nks_plot_norm(a)\n```\n\n###### looks like our data cdf is very close to cdf of normal distribution, now we just need to test it, we choose to use the Kolmogorov Smirnov test for normality\n\n# Kolmogorov Smirnov test for normality\n\n#### Normalization of data\n\n\n```python\na = (a - np.mean(a))/np.std(a)\n```\n\n\n```python\nsp.stats.kstest(a, 'norm')\n```\n\n\n\n\n KstestResult(statistic=0.1160630351595065, pvalue=0.8137852163264059)\n\n\n\n#### The P_value we get is much more bigger than 0.05 , that meaning we do not reject the null hypothesis which says that are data belongs to the normal distribution\n\n\n# t-test\n### since we have 30 samples and we saw that the data is normaly distributed we can use t-test\n\nWe will use the following formula to examine the difference between dependent pairs.\nIt is assumed that there are more similar characteristics to populations from the same offices\n\n\\begin{align}\nH_0: \\mu = 0 \\\\\nH_1: \\mu != 0 \\\\\n\\end{align}\n\n when our statistic is \n \n\n$$ t = \\frac{\\bar{D}}{\\sqrt{\\frac{s^2}{n}}}$$ \nwhen\n$$ \\bar{D} = \\frac{1}{n} \\sum{X-Y}$$ \n\n\n### calculation of accepted value estimator ( the average)\n \n$$ \\bar{D} = \\frac{1}{n} \\sum{X-Y}$$ \n\n\n```python\nD_bar = flat_agg_data['diff'].mean()\nD_bar\n```\n\n\n\n\n -0.01826748065950308\n\n\n\n### calculation of variance estimator\n \n$$ s^2 $$ \n\n\n```python\ns_2 = flat_agg_data['diff'].var()\ns_2\n```\n\n\n\n\n 0.00352159108043281\n\n\n\n### calculation of our statistic\n\n\n```python\nn = flat_agg_data.shape[0]\nstat = D_bar/((s_2/n)**0.5)\nstat\n```\n\n\n\n\n -1.6860475597295403\n\n\n\n## final check - do the statistic is less then the critical value?\n\n\n```python\nscipy.stats.t.ppf(0.95,29)\n```\n\n\n\n\n 1.6991270265334972\n\n\n\n\n```python\nstat <= scipy.stats.t.ppf(0.95,29)\n```\n\n\n\n\n True\n\n\n\n## same results but by taking a look over the p_value\n\n\n```python\nprint(f\"\"\"p_value equal to: {scipy.stats.t.pdf(stat,29)} \ndo our p_value is less then 0.05 ? answer:{ scipy.stats.t.pdf(stat,29) < 0.05}\"\"\")\n```\n\n p_value equal to: 0.09726958112890556 \n do our p_value is less then 0.05 ? 
answer:False\n\n\n### so we can say confidently that there is no diff between arabs and non arabs \n\n\n```python\n\n```\n", "meta": {"hexsha": "737517986e9ae36f262d72749de12950318613e7", "size": 80841, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "step_3_arab_discrimination.ipynb", "max_stars_repo_name": "chen890/Final-Research-Project---IES", "max_stars_repo_head_hexsha": "864b0d21b04e81755b969ed519519cd2e73850ab", "max_stars_repo_licenses": ["CECILL-B"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "step_3_arab_discrimination.ipynb", "max_issues_repo_name": "chen890/Final-Research-Project---IES", "max_issues_repo_head_hexsha": "864b0d21b04e81755b969ed519519cd2e73850ab", "max_issues_repo_licenses": ["CECILL-B"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "step_3_arab_discrimination.ipynb", "max_forks_repo_name": "chen890/Final-Research-Project---IES", "max_forks_repo_head_hexsha": "864b0d21b04e81755b969ed519519cd2e73850ab", "max_forks_repo_licenses": ["CECILL-B"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.3773148148, "max_line_length": 26372, "alphanum_fraction": 0.6702910652, "converted": true, "num_tokens": 6228, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.6113819591324418, "lm_q2_score": 0.4649015713733885, "lm_q1q2_score": 0.284232433510013}} {"text": "```python\n%matplotlib inline\n```\n\n\n\uc801\ub300\uc801 \uc608\uc81c \uc0dd\uc131(Adversarial Example Generation)\n====================================================\n\n**\uc800\uc790:** `Nathan Inkawhich `__\n**\ubc88\uc5ed:** `BONGMO KIM `__\n\n\uc774 \uae00\uc744 \uc77d\uace0 \uc788\ub2e4\uba74, \uc5ec\ub7ec\ubd84\uc740 \uc774\ubbf8 \uba38\uc2e0\ub7ec\ub2dd \ubaa8\ub378\uc774 \uc5bc\ub9c8\ub098 \ud6a8\uacfc\uc801\uc778\uc9c0 \uadf8 \uc9c4\uac00\ub97c \uc54c\uace0 \uc788\uc744 \uac83\uc785\ub2c8\ub2e4.\n\uba38\uc2e0 \ub7ec\ub2dd \uc5f0\uad6c\ub294 ML(Machine Learning) \ubaa8\ub378\uc744 \ub354\uc6b1 \ube60\ub974\uace0 \uc815\ud655\ud558\uba70 \ud6a8\uc728\uc801\uc774\uac8c \ud558\ub294 \ubc29\ud5a5\uc73c\ub85c \uc9c4\ud589 \ub418\uace0 \uc788\uc2b5\ub2c8\ub2e4.\n\uadf8\ub7ec\ub098 \ubaa8\ub378\uc744 \uc18d\uc774\ub824\ud558\ub294 \uc801\uc5d0 \ub300\ud55c \ubcf4\uc548\uacfc \uacac\uace0\ud568\uc740 \ubaa8\ub378\uc744 \uc124\uacc4\ud558\uace0 \ud6c8\ub828\ud560 \ub54c \uc885\uc885 \uac04\uacfc\ub418\ub294 \ubd80\ubd84\uc785\ub2c8\ub2e4.\n\n\uc774 \ud29c\ud1a0\ub9ac\uc5bc\uc740 ML \ubaa8\ub378\ub4e4\uc758 \ubcf4\uc548 \ucde8\uc57d\uc810\uc5d0 \ub300\ud55c \uc778\uc2dd\uc744 \ub192\uc774\uace0, \uc694\uc998 \ud654\ub450\uac00 \ub418\uace0\uc788\ub294 \uc801\ub300\uc801 \uba38\uc2e0 \ub7ec\ub2dd\uc5d0 \ub300\ud55c \ud1b5\ucc30\ub825\uc744 \uc81c\uacf5\ud560 \uac83\uc785\ub2c8\ub2e4.\n\uc774\ubbf8\uc9c0\uc5d0 \ub208\uce58\ucc4c \uc218 \uc5c6\ub294 \uc791\uc740 \ubcc0\ud654(perturbation)\ub97c \ucd94\uac00\ud558\uba74 \ubaa8\ub378 \uc131\ub2a5\uc774 \ud06c\uac8c \ub2ec\ub77c\uc9c8 \uc218 \uc788\ub2e4\ub294 \uc0ac\uc2e4\uc5d0 \ub180\ub784 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\uc774\ubc88 \ud29c\ud1a0\ub9ac\uc5bc\uc5d0\uc11c\ub294 \uc774\ubbf8\uc9c0 \ubd84\ub958\uae30\uc758 \uc608\uc81c\ub97c \ud1b5\ud574 \uc704 \ub0b4\uc6a9\uc5d0 \ub300\ud574 \uc0b4\ud3b4\ubcfc 
\uac83\uc785\ub2c8\ub2e4.\n\ud2b9\ud788 \uc6b0\ub9ac\ub294 \uac00\uc7a5 \ub9ce\uc774 \uc0ac\uc6a9\ub418\ub294 \uacf5\uaca9 \ubc29\ubc95 \uc911 \ud558\ub098\uc778 FGSM (Fast Gradient Sign Attack)\uc744 \uc774\uc6a9\ud574 MNIST \ubd84\ub958\uae30\ub97c \uc18d\uc5ec \ubcfc \uac83\uc785\ub2c8\ub2e4. \n \n\n\n\uc704\ud611 \ubaa8\ub378\n------------\n\n\uc0c1\ud669\uc5d0 \ub530\ub77c \ub2e4\uc591\ud55c \ubc94\uc8fc\uc758 \uc801\ub300\uc801 \uacf5\uaca9\uc774 \uc788\ub294\ub370 \uac01\uac01 \ubaa9\ud45c\uac00 \ub2e4\ub974\uace0 \uacf5\uaca9\uc790\uac00 \uc54c\uace0 \uc788\ub294 \uc815\ubcf4\n\ub300\ud55c \uac00\uc815\ub3c4 \ub2e4\ub985\ub2c8\ub2e4. \uadf8\ub7ec\ub098 \ubcf4\ud1b5 \uac00\uc7a5 \uc911\uc694\ud55c \ubaa9\ud45c\ub294 \uc785\ub825 \ub370\uc774\ud130\uc5d0 \ucd5c\uc18c\ud55c\uc758 \uc791\uc740 \ubcc0\ud654\ub97c\n\ucd94\uac00\ud558\uc5ec \uc774\uac83\uc774 \uc758\ub3c4\uc801\uc73c\ub85c \uc798\ubabb \ubd84\ub958\ub418\uac8c \ud558\ub294 \uac83\uc785\ub2c8\ub2e4. \uacf5\uaca9\uc790\uac00 \uac00\uc9c0\uace0 \uc788\ub294 \uc815\ubcf4\uc5d0 \ub300\ud55c\n\uac00\uc815\uc5d0\ub294 \uc5ec\ub7ec \uc885\ub958\uac00 \uc788\ub294\ub370, \ubcf4\ud1b5 **\ud654\uc774\ud2b8\ubc15\uc2a4** \uc640 **\ube14\ub799\ubc15\uc2a4** \ub450 \uac00\uc9c0\uac00 \uc788\uc2b5\ub2c8\ub2e4.\n*\ud654\uc774\ud2b8\ubc15\uc2a4* \uacf5\uaca9\uc740 \uacf5\uaca9\uc790\uac00 \ubaa8\ub378\uc5d0 \ub300\ud574 \uc544\ud0a4\ud14d\ucc98, \uc785\ub825, \ucd9c\ub825, \uac00\uc911\uce58\ub97c \ud3ec\ud568\ud55c \ubaa8\ub4e0 \uac83\uc744\n\uc54c\uace0 \uc788\uace0 \uc811\uadfc\ud560 \uc218 \uc788\ub2e4\uace0 \uac00\uc815\ud569\ub2c8\ub2e4. *\ube14\ub799\ubc15\uc2a4* \uacf5\uaca9\uc740 \uacf5\uaca9\uc790\uac00 \ubaa8\ub378\uc758 \uc785\ub825\uacfc \ucd9c\ub825\uc5d0\n\ub300\ud574\uc11c\ub9cc \uc811\uadfc \uac00\ub2a5\ud558\uace0 \ubaa8\ub378\uc758 \uac00\uc911\uce58\uc640 \uc544\ud0a4\ud14d\ucc98\uc5d0 \uad00\ud55c \ub0b4\uc6a9\uc740 \ubaa8\ub978\ub2e4\uace0 \uac00\uc815\ud569\ub2c8\ub2e4.\n\uacf5\uaca9\uc790\uc758 \ubaa9\ud45c\ub294 \uc624\ubd84\ub958 \ubc0f **\uc18c\uc2a4/\ud0c0\uac9f \uc624\ubd84\ub958** \ub97c \ud3ec\ud568\ud558\ub294 \uc5ec\ub7ec \uc720\ud615\uc774 \uc788\uc2b5\ub2c8\ub2e4.\n*\uc624\ubd84\ub958* \uc758 \ubaa9\ud45c\ub294 \uacf5\uaca9\uc790\uac00 \ucd9c\ub825\uc73c\ub85c \ub098\uc628 \ubd84\ub958 \uacb0\uacfc\uac00 \uc798\ubabb \ub418\ub3c4\ub85d \ud558\ub098 \uc0c8\ub85c\uc6b4 \ubd84\ub958 \uacb0\uacfc\uac00\n\uc5b4\ub5a4 \uac83\uc774 \ub098\uc624\ub294\uc9c0 \uc2e0\uacbd \uc4f0\uc9c0 \uc54a\ub294 \uac83\uc744 \uc758\ubbf8\ud569\ub2c8\ub2e4. 
*\uc18c\uc2a4/\ud0c0\uac9f \uc624\ubd84\ub958* \ub294 \uacf5\uaca9\uc790\uac00\n\uc6d0\ub798 \ud2b9\uc815 \uc18c\uc2a4 \ud074\ub798\uc2a4\uc758 \uc774\ubbf8\uc9c0\ub97c \ub2e4\ub978 \ud2b9\uc815 \ub300\uc0c1 \ud074\ub798\uc2a4\ub85c \ubd84\ub958\ud558\ub3c4\ub85d \ubcc0\uacbd\ud558\ub824\uace0 \ud568\uc744 \uc758\ubbf8\ud569\ub2c8\ub2e4.\n\n\n\uc774 \uacbd\uc6b0 FGSM \uacf5\uaca9\uc740 *\uc624\ubd84\ub958* \ub97c \ubaa9\ud45c\ub85c \ud558\ub294 \ud654\uc774\ud2b8 \ubc15\uc2a4 \uacf5\uaca9\uc785\ub2c8\ub2e4.\n\uc774\ub7f0 \ubc30\uacbd \uc815\ubcf4\ub97c \uac16\uace0 \uacf5\uaca9\uc5d0 \ub300\ud574 \uc790\uc138\ud788 \uc54c\uc544 \ubcf4\uaca0\uc2b5\ub2c8\ub2e4.\n\n\ube60\ub978 \ubcc0\ud654\ub3c4 \ubd80\ud638 \uacf5\uaca9\n-------------------------\n\n\uacf5\uaca9 \ubc29\ubc95\uc5d0 \uc788\uc5b4 \ucd08\uae30 \ubc29\uc2dd\uc774\uba74\uc11c \uac00\uc7a5 \uc720\uba85\ud55c \ubc29\uc2dd\uc740 *\ube60\ub978 \ubcc0\ud654\ub3c4 \ubd80\ud638 \uacf5\uaca9 (FGSM)* \uc774\ub77c\uace0 \ud558\uba70\n`\uc801\ub300\uc801 \uc608\uc81c\uc5d0 \ub300\ud55c \uc124\uba85\uacfc \ud65c\uc6a9 `__ \uc5d0\uc11c\n\uc774\uc548 \uac13\ud3a0\ub85c\uc6b0\uac00 \uae30\uace0\ud558\uc600\uc2b5\ub2c8\ub2e4.\n\uc774 \uacf5\uaca9\ubc95\uc740 \ub180\ub78d\ub3c4\ub85d \uac15\ub825\ud558\uc9c0\ub9cc \uc9c1\uad00\uc801\uc785\ub2c8\ub2e4. \ud559\uc2b5 \ubc29\uc2dd, *\ubcc0\ud654\ub3c4(gradients)* \ub97c \ud65c\uc6a9\ud558\uc5ec \uc2e0\uacbd\ub9dd\uc744 \uacf5\uaca9\ud558\ub3c4\ub85d\n\uc124\uacc4 \ub418\uc5c8\uc2b5\ub2c8\ub2e4. \uc544\uc774\ub514\uc5b4\ub294 \uac04\ub2e8\ud569\ub2c8\ub2e4. \uc5ed\uc804\ud30c \ubcc0\ud654\ub3c4\ub97c \uae30\ubc18\uc73c\ub85c \uac00\uc911\uce58\ub97c \uc870\uc815\ud558\uc5ec \uc190\uc2e4\uc744 \ucd5c\uc18c\ud654\ud558\uae30\ubcf4\ub2e4\ub294\n\uacf5\uaca9\uc774 \ub3d9\uc77c\ud55c \uc5ed\uc804\ud30c \ubcc0\ud654\ub3c4\ub97c \uae30\ubc18\uc73c\ub85c *\uc190\uc2e4\uc744 \ucd5c\ub300\ud654\ud558\ud558\ub294 \ubc29\ud5a5\uc73c\ub85c \uc785\ub825 \ub370\uc774\ud130\ub97c \uc870\uc815* \ud569\ub2c8\ub2e4.\n\ub2e4\uc2dc \ub9d0\ud574 \uacf5\uaca9\uc740 \uc785\ub825 \ub370\uc774\ud130\uc5d0\uc11c \uacc4\uc0b0\ub41c \uc190\uc2e4 \ubcc0\ud654\ub3c4\ub97c \uc0ac\uc6a9\ud558\uace0 \uc785\ub825 \ub370\uc774\ud130\ub97c \uc870\uc815\ud558\uc5ec \uc190\uc2e4\uc774 \ucd5c\ub300\uac00 \ub418\uac8c \ud569\ub2c8\ub2e4.\n\ucf54\ub4dc\ub85c \ub118\uc5b4\uac00\uae30 \uc804\uc5d0 \uc720\uba85\ud55c `FGSM `__ \ud310\ub2e4 \uc608\uc81c\ub97c\n\ubcf4\uace0 \uba87 \uac00\uc9c0 \ud45c\uae30\ubc95\uc744 \uc815\ub9ac\ud558\uaca0\uc2b5\ub2c8\ub2e4.\n\n.. 
figure:: /_static/img/fgsm_panda_image.png\n :alt: fgsm_panda_image\n\n\uadf8\ub9bc\uc73c\ub85c\ubd80\ud130, $\\mathbf{x}$ \ub294 \uc6d0\ubcf8 \uc785\ub825 \uc774\ubbf8\uc9c0\uac00 \"\ud310\ub2e4\" \ub85c \uc62c\ubc14\ub974\uac8c \ubd84\ub958\ub41c \uac83\uc744 \uc758\ubbf8\ud558\uace0,\n$y$ \ub294 $\\mathbf{x}$ \ub97c \uc704\ud55c \uc815\ub2f5 \ub77c\ubca8\uc774\uba70, $\\mathbf{\\theta}$ \ub294 \ubaa8\ub378\uc758\n\ud30c\ub77c\ubbf8\ud130\ub97c, $J(\\mathbf{\\theta}, \\mathbf{x}, y)$ \ub294 \ub124\ud2b8\uc6cc\ud06c\uc758 \ud559\uc2b5\uc744 \uc704\ud574\uc11c \uc0ac\uc6a9\ub418\ub294 \uc190\uc2e4\uc744 \ub098\ud0c0\ub0c5\ub2c8\ub2e4.\n\uacf5\uaca9\uc740 $\\nabla_{x} J(\\mathbf{\\theta}, \\mathbf{x}, y)$ \uacc4\uc0b0\uc744 \uc704\ud574 \uc785\ub825 \ub370\uc774\ud130\uc5d0 \ubcc0\ud654\ub3c4\ub97c \uc5ed\uc804\ud30c\ud569\ub2c8\ub2e4.\n\uadf8\ub7ec\uace0 \ub098\uc11c, \ubcc0\ud654\ub3c4\ub294 \uc190\uc2e4 \uac12\uc774 \ucd5c\ub300\ud654\ub418\ub294 \ubc29\ud5a5\uc73c\ub85c (\uc608\ub97c \ub4e4\uba74, $sign(\\nabla_{x} J(\\mathbf{\\theta}, \\mathbf{x}, y))$ )\n\uc791\uc740 \uc2a4\ud15d(step) \ub9cc\ud07c (\uadf8\ub9bc\uc5d0\uc11c\ub294 $\\epsilon$ \ud639\uc740 $0.007$) \uc785\ub825 \ub370\uc774\ud130\uc5d0 \uc801\uc6a9\ub429\ub2c8\ub2e4.\n\uacb0\uacfc\ub85c \ub098\uc624\ub294 \uc791\uc740 \ubcc0\ud654\ub41c \uc774\ubbf8\uc9c0( $x'$ )\ub294 \ud0c0\uac9f \ub124\ud2b8\uc6cc\ud06c\uc5d0 \uc758\ud574 \"\uae34\ud314\uc6d0\uc22d\uc774\"\ub85c *\uc624\ubd84\ub958* \ub418\ub098 \uc5ec\uc804\ud788 \uc721\uc548\uc73c\ub85c\ub294\n\ubd84\uba85\ud788 \"\ud310\ub2e4\" \uc785\ub2c8\ub2e4.\n\n\uc774\uc81c \ubcf8 \ud29c\ud1a0\ub9ac\uc5bc\uc758 \ub3d9\uae30\uac00 \uba85\ud655\ud574\uc9c0\uae38 \ubc14\ub77c\uba70, \uad6c\ud604\uc73c\ub85c \ub118\uc5b4\uac00 \ubcf4\uaca0\uc2b5\ub2c8\ub2e4.\n\n\n\n\n\n```python\nfrom __future__ import print_function\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# NOTE: \uc544\ub798\ub294 MNIST \ub370\uc774\ud130\uc14b\uc744 \ub0b4\ub824\ubc1b\uc744 \ub54c \"User-agent\" \uad00\ub828\ud55c \uc81c\ud55c\uc744 \ud478\ub294 \ucf54\ub4dc\uc785\ub2c8\ub2e4.\n# \ub354 \uc790\uc138\ud55c \ub0b4\uc6a9\uc740 https://github.com/pytorch/vision/issues/3497 \uc744 \ucc38\uace0\ud574\uc8fc\uc138\uc694.\nfrom six.moves import urllib\nopener = urllib.request.build_opener()\nopener.addheaders = [('User-agent', 'Mozilla/5.0')]\nurllib.request.install_opener(opener)\n```\n\n\uad6c\ud604\n--------------\n\n\uc774 \uc139\uc158\uc5d0\uc11c\ub294 \ud29c\ud1a0\ub9ac\uc5bc\uc758 \uc785\ub825 \ub9e4\uac1c \ubcc0\uc218\uc5d0 \ub300\ud574 \uc124\uba85\ud558\uace0 \uacf5\uaca9\uc911\uc778 \ubaa8\ub378\uc744\n\uc815\uc758\ud55c \ub2e4\uc74c \uacf5\uaca9\uc744 \ucf54\ub529\ud558\uace0 \uc77c\ubd80 \ud14c\uc2a4\ud2b8\ub97c \uc2e4\ud589\ud569\ub2c8\ub2e4.\n\n\uc785\ub825\n~~~~~~\n\n\uc774 \ud559\uc2b5\uc11c\uc5d0\ub294 \uc785\ub825\uc774 3 \uac1c\uc774\uba70 \ub2e4\uc74c\uacfc \uac19\uc774 \uc815\uc758\ub429\ub2c8\ub2e4:\n\n- **epsilons** - \uc2e4\ud589\uc5d0 \uc0ac\uc6a9\ud560 \uc5e1\uc2e4\ub860\uc758 \ub9ac\uc2a4\ud2b8\uc785\ub2c8\ub2e4. \uc5e1\uc2e4\ub860 0\uc758 \uac12\uc740 \uc6d0\ub798 \ud14c\uc2a4\ud2b8 \uc14b\uc758 \ubaa8\ub378 \uc131\ub2a5\uc744\n \ub098\ud0c0\ub0b4\ubbc0\ub85c \ubaa9\ub85d\uc5d0 \uc720\uc9c0\ud558\ub294 \uac83\uc774 \uc911\uc694\ud569\ub2c8\ub2e4. 
\ub610\ud55c \uc9c1\uad00\uc801\uc73c\ub85c \uc5e1\uc2e4\ub860\uc774 \ud074\uc218\ub85d \uc791\uc740 \ubcc0\ud654\uac00 \ub354 \ub208\uc5d0 \ub744\uc9c0\ub9cc\n \ubaa8\ub378 \uc815\ud655\ub3c4\ub97c \uc800\ud558 \uc2dc\ud0a4\ub294 \uce21\uba74\uc5d0\uc11c \ub354 \ud6a8\uacfc\uac00 \uc788\uc2b5\ub2c8\ub2e4. \uc5ec\uae30\uc11c \ub370\uc774\ud130\uc758 \ubc94\uc704\ub294 0-1 \uc774\uae30 \ub54c\ubb38\uc5d0\n \uc5e1\uc2e4\ub860\uc758 \uac12\uc740 1\uc744 \ucd08\uacfc\ud560 \uc218 \uc5c6\uc2b5\ub2c8\ub2e4.\n\n- **pretrained_model** - `pytorch/examples/mnist `__\n \ub97c \ud1b5\ud574 \ubbf8\ub9ac \ud559\uc2b5\ub41c MNIST \ubaa8\ub378\uc758 \uacbd\ub85c.\n \ud29c\ud1a0\ub9ac\uc5bc\uc744 \uac04\ud3b8\ud558\uac8c \ud558\ub824\uba74 `\uc5ec\uae30 `__ \uc5d0\uc11c \ubbf8\ub9ac \ud559\uc2b5\ub41c \ubaa8\ub378\uc744 \ub2e4\uc6b4\ub85c\ub4dc\ud558\uc138\uc694.\n\n- **use_cuda** - CUDA \ub97c \uc0ac\uc6a9\ud560\uc9c0 \ub9d0\uc9c0 \uc815\ud558\ub294 \uc774\uc9c4 \ud50c\ub798\uadf8.\n \ubcf8 \ud29c\ud1a0\ub9ac\uc5bc\uc5d0\uc11c\ub294 CPU \uc2dc\uac04\uc774 \uc624\ub798 \uac78\ub9ac\uc9c0 \uc54a\uc73c\ubbc0\ub85c CUDA\ub97c \uc9c0\uc6d0\ud558\ub294 GPU \uc758 \uc5ec\ubd80\ub294 \uc911\uc694\ud558\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\n\n\n\n\n\n```python\nepsilons = [0, .05, .1, .15, .2, .25, .3]\npretrained_model = \"data/lenet_mnist_model.pth\"\nuse_cuda=True\n```\n\n\uacf5\uaca9\uc744 \ubc1b\ub294 \ubaa8\ub378\n~~~~~~~~~~~~~~~~~~\n\n\uc55e\uc11c \ub9d0\ud55c\ub300\ub85c, \uacf5\uaca9\uc744 \ubc1b\ub294 \ubaa8\ub378\uc740 `pytorch/examples/mnist `__\n\uc640 \ub3d9\uc77c\ud55c MNIST \ubaa8\ub378\uc785\ub2c8\ub2e4. \ubcf8\uc778\uc758 MNIST \ubaa8\ub378\uc744 \ud559\uc2b5 \ubc0f \uc800\uc7a5\ud558\ub294 \ubc29\uc2dd\uc73c\ub85c \ud558\uac70\ub098 \uc81c\uacf5\ub41c \ubaa8\ub378\uc744 \ub2e4\uc6b4\ub85c\ub4dc \ud574 \uc0ac\uc6a9\ud558\ub294 \uc2dd\uc73c\ub85c \uc9c4\ud589\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\uc5ec\uae30\uc11c *Net* \uc815\uc758 \ubc0f \ud14c\uc2a4\ud2b8 \ub370\uc774\ud130 \ub85c\ub354\ub294 MNIST \uc608\uc81c\uc5d0\uc11c \ubcf5\uc0ac \ud558\uc600\uc2b5\ub2c8\ub2e4.\n\uc774 \uc139\uc158\uc758 \ubaa9\uc801\uc740 \ubaa8\ub378\uacfc \ub370\uc774\ud130 \ub85c\ub354\ub97c \uc815\uc758\ud55c \ub2e4\uc74c, \ubaa8\ub378\uc744 \ucd08\uae30\ud654\ud558\uace0 \ubbf8\ub9ac \ud559\uc2b5\ub41c \uac00\uc911\uce58\ub97c \uc77d\uc5b4\uc624\ub294 \uac83\uc785\ub2c8\ub2e4.\n\n\n\n\n\n```python\n# LeNet \ubaa8\ub378 \uc815\uc758\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = self.fc2(x)\n return F.log_softmax(x, dim=1)\n\n# MNIST \ud14c\uc2a4\ud2b8 \ub370\uc774\ud130\uc14b\uacfc \ub370\uc774\ud130\ub85c\ub354 \uc120\uc5b8\ntest_loader = torch.utils.data.DataLoader(\n datasets.MNIST('../data', train=False, download=True, transform=transforms.Compose([\n transforms.ToTensor(),\n ])),\n batch_size=1, shuffle=True)\n\n# \uc5b4\ub5a4 \ub514\ubc14\uc774\uc2a4\ub97c \uc0ac\uc6a9\ud560\uc9c0 \uc815\uc758\nprint(\"CUDA Available: \",torch.cuda.is_available())\ndevice = torch.device(\"cuda\" if (use_cuda and torch.cuda.is_available()) else \"cpu\")\n\n# \ubaa8\ub378 
\ucd08\uae30\ud654\ud558\uae30\nmodel = Net().to(device)\n\n# \ubbf8\ub9ac \ud559\uc2b5\ub41c \ubaa8\ub378 \uc77d\uc5b4\uc624\uae30\nmodel.load_state_dict(torch.load(pretrained_model, map_location='cpu'))\n\n# \ubaa8\ub378\uc744 \ud3c9\uac00 \ubaa8\ub4dc\ub85c \uc124\uc815\ud558\uae30. \ub4dc\ub86d\uc544\uc6c3 \ub808\uc774\uc5b4\ub4e4\uc744 \uc704\ud574 \uc0ac\uc6a9\ub428\nmodel.eval()\n```\n\nFGSM \uacf5\uaca9\n~~~~~~~~~~~\n\n\uc774\uc81c \uc6d0\ub798 \uc785\ub825\uc744 \uad50\ub780\uc2dc\ucf1c \uc801\ub300\uc801\uc778 \uc608\ub97c \ub9cc\ub4dc\ub294 \ud568\uc218\ub97c \uc815\uc758 \ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n``fgsm_attack`` \ud568\uc218\ub294 \uc785\ub825 \ud30c\ub77c\ubbf8\ud130\ub85c 3\uac00\uc9c0\ub97c \uac00\uc9d1\ub2c8\ub2e4. \uccab\ubc88\uc9f8\ub294 \uc6d0\ubcf8 *\uc774\ubbf8\uc9c0* ( $x$ ),\n\ub450\ubc88\uc9f8\ub294 *\uc5e1\uc2e4\ub860* \uc73c\ub85c \ud53d\uc140 \ub2e8\uc704\uc758 \uc791\uc740 \ubcc0\ud654\ub97c \uc8fc\ub294 \uac12\uc785\ub2c8\ub2e4 ( $\\epsilon$ ).\n\ub9c8\uc9c0\ub9c9\uc740 *data_grad* \ub85c \uc785\ub825 \uc601\uc0c1 ( $\\nabla_{x} J(\\mathbf{\\theta}, \\mathbf{x}, y)$ ) \uc5d0 \ub300\ud55c \ubcc0\ud654\ub3c4 \uc190\uc2e4 \uac12\uc785\ub2c8\ub2e4.\n\uc544\ub798 \uc2dd\uc5d0 \ub530\ub978 \uc791\uc740 \ubcc0\ud654\uac00 \uc801\uc6a9\ub41c \uc774\ubbf8\uc9c0\ub97c \uc0dd\uc131\ud569\ub2c8\ub2e4.\n\n\\begin{align}perturbed\\_image = image + epsilon*sign(data\\_grad) = x + \\epsilon * sign(\\nabla_{x} J(\\mathbf{\\theta}, \\mathbf{x}, y))\\end{align}\n\n\ub9c8\uc9c0\ub9c9\uc73c\ub85c \ub370\uc774\ud130\uc758 \uc6d0\ub798 \ubc94\uc704\ub97c \uc720\uc9c0\ud558\uae30 \uc704\ud574, \uc791\uc740 \ubcc0\ud654\uac00 \uc801\uc6a9\ub41c \uc774\ubbf8\uc9c0\uac00 $[0,1]$ \ubc94\uc704\ub85c \uc798\ub9bd\ub2c8\ub2e4.\n\n\n\n\n\n```python\n# FGSM \uacf5\uaca9 \ucf54\ub4dc\ndef fgsm_attack(image, epsilon, data_grad):\n # data_grad \uc758 \uc694\uc18c\ubcc4 \ubd80\ud638 \uac12\uc744 \uc5bb\uc5b4\uc635\ub2c8\ub2e4\n sign_data_grad = data_grad.sign()\n # \uc785\ub825 \uc774\ubbf8\uc9c0\uc758 \uac01 \ud53d\uc140\uc5d0 sign_data_grad \ub97c \uc801\uc6a9\ud574 \uc791\uc740 \ubcc0\ud654\uac00 \uc801\uc6a9\ub41c \uc774\ubbf8\uc9c0\ub97c \uc0dd\uc131\ud569\ub2c8\ub2e4\n perturbed_image = image + epsilon*sign_data_grad\n # \uac12 \ubc94\uc704\ub97c [0,1]\ub85c \uc720\uc9c0\ud558\uae30 \uc704\ud574 \uc790\ub974\uae30(clipping)\ub97c \ucd94\uac00\ud569\ub2c8\ub2e4\n perturbed_image = torch.clamp(perturbed_image, 0, 1)\n # \uc791\uc740 \ubcc0\ud654\uac00 \uc801\uc6a9\ub41c \uc774\ubbf8\uc9c0\ub97c \ub9ac\ud134\ud569\ub2c8\ub2e4\n return perturbed_image\n```\n\n\ud14c\uc2a4\ud305 \ud568\uc218\n~~~~~~~~~~~~~~~~\n\n\ub9c8\uc9c0\ub9c9\uc73c\ub85c \ubcf8 \ud29c\ud1a0\ub9ac\uc5bc\uc758 \ud575\uc2ec \uacb0\uacfc\ub294 ``\ud14c\uc2a4\ud2b8`` \ud568\uc218\uc5d0\uc11c \uc624\uac8c \ub429\ub2c8\ub2e4.\n\uc774 \ud14c\uc2a4\ud2b8 \uae30\ub2a5\uc744 \ud638\ucd9c \ud560 \ub54c\ub9c8\ub2e4 MNIST \ud14c\uc2a4\ud2b8 \uc14b\uc5d0\uc11c \uc804\uccb4 \ud14c\uc2a4\ud2b8 \ub2e8\uacc4\ub97c \uc218\ud589\ud558\uace0 \ucd5c\uc885 \uc815\ud655\ub3c4\ub97c \ubcf4\uace0\ud569\ub2c8\ub2e4.\n\uadf8\ub7ec\ub098 \uc774 \ud568\uc218\uc5d0\ub294 *\uc5e1\uc2e4\ub860* \uc785\ub825\ub3c4 \ud544\uc694\ud569\ub2c8\ub2e4. \uc774\ub294 ``\ud14c\uc2a4\ud2b8``` \ud568\uc218\uac00 $\\epsilon$ \ud06c\uae30\uc5d0 \ub530\ub77c \uacf5\uaca9\uc790\uc758 \uacf5\uaca9\uc744 \ubc1b\ub294 \ubaa8\ub378\uc758\n\uc815\ud655\ub3c4\uc744 \ubcf4\uace0\ud558\uae30 \ub54c\ubb38\uc785\ub2c8\ub2e4. 
\ub354 \uad6c\uccb4\uc801\uc73c\ub85c \ubcf4\uba74 \ud14c\uc2a4\ud2b8 \uc14b\uc758 \uac01\uac01\uc758 \uc0d8\ud50c\uc5d0\uc11c \ud14c\uc2a4\ud2b8 \ud568\uc218\ub294 \uc785\ub825 \ub370\uc774\ud130\uc5d0 \ub300\ud55c \uc190\uc2e4 \ubcc0\ud654\ub3c4( $data\\_grad$ )\ub97c \uacc4\uc0b0\ud558\uace0,\n``FGSM \uacf5\uaca9`` ($perturbed\\_data$) \uc744 \ubc1b\uc740 \uc791\uc740 \ubcc0\ud654\uac00 \uc801\uc6a9\ub41c \uc774\ubbf8\uc9c0\ub97c \ub9cc\ub4e4\uace0 \ub098\uc11c \uc791\uc740 \ubcc0\ud654\uac00 \uc801\uc6a9\ub41c \uc774\ubbf8\uc9c0\uac00 \uc801\ub300\uc801\uc778\uc9c0 \ud655\uc778\uc744 \ud569\ub2c8\ub2e4.\n\ucd94\uac00\ub85c \ubaa8\ub378\uc758 \uc815\ud655\ub3c4\ub97c \ud14c\uc2a4\ud2b8\ud558\uae30 \uc704\ud574\uc11c \ud14c\uc2a4\ud2b8 \ud568\uc218\ub294 \ub098\uc911\uc5d0 \uc2dc\uac01\ud654\ud558\uc5ec \ubcfc \uc218 \uc788\ub3c4\ub85d \uc131\uacf5\uc801\uc73c\ub85c \uc5bb\uc740 \uc801\ub300\uc801 \uc774\ubbf8\uc9c0\ub97c \uc800\uc7a5\ud558\uace0 \ubc18\ud658\ud569\ub2c8\ub2e4.\n\n\n\n\n\n```python\ndef test( model, device, test_loader, epsilon ):\n\n # \uc815\ud655\ub3c4 \uce74\uc6b4\ud130\n correct = 0\n adv_examples = []\n\n # \ud14c\uc2a4\ud2b8 \uc14b\uc758 \ubaa8\ub4e0 \uc608\uc81c\uc5d0 \ub300\ud574 \ub8e8\ud504\ub97c \ub3d5\ub2c8\ub2e4\n for data, target in test_loader:\n\n # \ub514\ubc14\uc774\uc2a4(CPU or GPU) \uc5d0 \ub370\uc774\ud130\uc640 \ub77c\ubca8 \uac12\uc744 \ubcf4\ub0c5\ub2c8\ub2e4\n data, target = data.to(device), target.to(device)\n\n # \ud150\uc11c\uc758 \uc18d\uc131 \uc911 requires_grad \ub97c \uc124\uc815\ud569\ub2c8\ub2e4. \uacf5\uaca9\uc5d0\uc11c \uc911\uc694\ud55c \ubd80\ubd84\uc785\ub2c8\ub2e4\n data.requires_grad = True\n\n # \ub370\uc774\ud130\ub97c \ubaa8\ub378\uc5d0 \ud1b5\uacfc\uc2dc\ud0b5\ub2c8\ub2e4\n output = model(data)\n init_pred = output.max(1, keepdim=True)[1] # \ub85c\uadf8 \ud655\ub960\uc758 \ucd5c\ub300\uac12\uc744 \uac00\uc9c0\ub294 \uc778\ub371\uc2a4\ub97c \uc5bb\uc2b5\ub2c8\ub2e4\n\n # \ub9cc\uc57d \ucd08\uae30 \uc608\uce21\uc774 \ud2c0\ub9ac\uba74, \uacf5\uaca9\ud558\uc9c0 \uc54a\ub3c4\ub85d \ud558\uace0 \uacc4\uc18d \uc9c4\ud589\ud569\ub2c8\ub2e4\n if init_pred.item() != target.item():\n continue\n\n # \uc190\uc2e4\uc744 \uacc4\uc0b0\ud569\ub2c8\ub2e4\n loss = F.nll_loss(output, target)\n\n # \ubaa8\ub378\uc758 \ubcc0\ud654\ub3c4\ub4e4\uc744 \uc804\ubd80 0\uc73c\ub85c \uc124\uc815\ud569\ub2c8\ub2e4\n model.zero_grad()\n\n # \ud6c4\ubc29 \uc804\ub2ec\uc744 \ud1b5\ud574 \ubaa8\ub378\uc758 \ubcc0\ud654\ub3c4\ub97c \uacc4\uc0b0\ud569\ub2c8\ub2e4\n loss.backward()\n\n # \ubcc0\ud654\ub3c4 \uac12\uc744 \ubaa8\uc74d\ub2c8\ub2e4\n data_grad = data.grad.data\n\n # FGSM \uacf5\uaca9\uc744 \ud638\ucd9c\ud569\ub2c8\ub2e4\n perturbed_data = fgsm_attack(data, epsilon, data_grad)\n\n # \uc791\uc740 \ubcc0\ud654\uac00 \uc801\uc6a9\ub41c \uc774\ubbf8\uc9c0\uc5d0 \ub300\ud574 \uc7ac\ubd84\ub958\ud569\ub2c8\ub2e4\n output = model(perturbed_data)\n\n # \uc62c\ubc14\ub978\uc9c0 \ud655\uc778\ud569\ub2c8\ub2e4\n final_pred = output.max(1, keepdim=True)[1] # \ub85c\uadf8 \ud655\ub960\uc758 \ucd5c\ub300\uac12\uc744 \uac00\uc9c0\ub294 \uc778\ub371\uc2a4\ub97c \uc5bb\uc2b5\ub2c8\ub2e4\n if final_pred.item() == target.item():\n correct += 1\n # 0 \uc5e1\uc2e4\ub860 \uc608\uc81c\uc5d0 \ub300\ud574\uc11c \uc800\uc7a5\ud569\ub2c8\ub2e4\n if (epsilon == 0) and (len(adv_examples) < 5):\n adv_ex = perturbed_data.squeeze().detach().cpu().numpy()\n adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )\n else:\n # \ucd94\ud6c4 \uc2dc\uac01\ud654\ub97c 
\uc704\ud558 \ub2e4\ub978 \uc608\uc81c\ub4e4\uc744 \uc800\uc7a5\ud569\ub2c8\ub2e4\n if len(adv_examples) < 5:\n adv_ex = perturbed_data.squeeze().detach().cpu().numpy()\n adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )\n\n # \ud574\ub2f9 \uc5e1\uc2e4\ub860\uc5d0\uc11c\uc758 \ucd5c\uc885 \uc815\ud655\ub3c4\ub97c \uacc4\uc0b0\ud569\ub2c8\ub2e4\n final_acc = correct/float(len(test_loader))\n print(\"Epsilon: {}\\tTest Accuracy = {} / {} = {}\".format(epsilon, correct, len(test_loader), final_acc))\n\n # \uc815\ud655\ub3c4\uc640 \uc801\ub300\uc801 \uc608\uc81c\ub97c \ub9ac\ud134\ud569\ub2c8\ub2e4\n return final_acc, adv_examples\n```\n\n\uacf5\uaca9 \uc2e4\ud589\n~~~~~~~~~~\n\n\uad6c\ud604\uc758 \ub9c8\uc9c0\ub9c9 \ubd80\ubd84\uc740 \uacf5\uaca9\uc744 \uc2e4\ud589\ud558\ub294 \uac83\uc785\ub2c8\ub2e4. \uc5ec\uae30\uc11c \uc804\uccb4 \ud14c\uc2a4\ud2b8 \uc2a4\ud15d\uc744 \uac01 *\uc5e1\uc2e4\ub860* \uac12\uc5d0 \uc2e4\ud589\ud569\ub2c8\ub2e4.\n\uac01 \uc5e1\uc2e4\ub860\ub9c8\ub2e4 \ucd5c\uc885 \uc815\ud655\ub3c4\uc640 \uc131\uacf5\uc801\uc778 \uc77c\ubd80 \uc801\ub300 \uc0ac\ub840\ub97c \uc800\uc7a5\ud558\uc5ec \ub2e4\uc74c \uc139\uc158\uc5d0 \ud45c\uc2dc\ud569\ub2c8\ub2e4.\n\uc5e1\uc2e4\ub860 \uac12\uc774 \uc99d\uac00\ud568\uc5d0 \ub530\ub77c \ucd9c\ub825\ub41c \uc815\ud655\ub3c4\uac00 \uc5b4\ub5bb\uac8c \uac10\uc18c\ud558\ub294\uc9c0 \ubcf4\uc2ed\uc2dc\uc624.\n\ub610\ud55c, $\\epsilon=0$ \uc778 \uacbd\uc6b0\uc5d0\ub294 \uacf5\uaca9\uc774 \uc5c6\ub294 \uc6d0\ubcf8 \ud14c\uc2a4\ud2b8 \uc815\ud655\ub3c4\uc784\uc744 \ubcf4\uc785\ub2c8\ub2e4.\n\n\n\n\n\n```python\naccuracies = []\nexamples = []\n\n# \uac01 \uc5e1\uc2e4\ub860\uc5d0 \ub300\ud574 \ud14c\uc2a4\ud2b8 \ud568\uc218\ub97c \uc2e4\ud589\ud569\ub2c8\ub2e4\nfor eps in epsilons:\n acc, ex = test(model, device, test_loader, eps)\n accuracies.append(acc)\n examples.append(ex)\n```\n\n\uacb0\uacfc\n-------\n\n\uc815\ud655\ub3c4 vs \uc5e1\uc2e4\ub860\n~~~~~~~~~~~~~~~~~~~\n\n\uccab \ubc88\uc9f8 \uacb0\uacfc\ub294 \uc815\ud655\ub3c4 vs \uc5e1\uc2e4\ub860 \uc744 \ub3c4\uc2dd\ud654 \ud55c \uac83 \uc785\ub2c8\ub2e4.\n\uc55e\uc5d0\uc11c \uc5b8\uae09\ud588\ub4ef\uc774, \uc5e1\uc2e4\ub860\uc774 \uc99d\uac00\ud568\uc5d0 \ub530\ub77c \uc6b0\ub9ac\ub294 \ud14c\uc2a4\ud2b8 \uc815\ud655\ub3c4\uac00 \uac10\uc18c\ud560 \uac83\uc73c\ub85c \uc608\uc0c1\ud569\ub2c8\ub2e4.\n\uc774\ub294 \ud559\uc2b5\uc744 \ub354 \uc9c4\ud589\ud574 \uac08\uc218\ub85d \uc5e1\uc2e4\ub860\uc774 \ud074\uc218\ub85d \uc190\uc2e4\uc744 \uadf9\ub300\ud654 \ud560 \ubc29\ud5a5\uc73c\ub85c \uc9c4\ud589\ub418\uae30 \ub54c\ubb38\uc785\ub2c8\ub2e4.\n\uc5e1\uc2e4\ub860 \uac12\uc774 \uc120\ud615\uc801\uc73c\ub85c \ubd84\ud3ec\ud558\ub354\ub77c\ub3c4 \uace1\uc120\uc758 \ucd94\uc138\ub294 \uc120\ud615\uc758 \ud615\ud0dc\uac00 \uc544\ub2d9\ub2c8\ub2e4.\n\uc608\ub97c \ub4e4\uba74, math:`\\epsilon=0.05` \uc5d0\uc11c\uc758 \uc815\ud655\ub3c4\uac00 $\\epsilon=0$ \ubcf4\ub2e4 \uc57d 4% \ub0ae\uc9c0\ub9cc\n$\\epsilon=0.2$ \uc5d0\uc11c\uc758 \uc815\ud655\ub3c4\ub294 $\\epsilon=0.15$ \ubcf4\ub2e4 \uc57d 25% \uc815\ub3c4 \ub0ae\uc2b5\ub2c8\ub2e4.\n\ub610\ud55c, $\\epsilon=0.25$ \uc640 $\\epsilon=0.3$ \uc0ac\uc774\uc758 \ubaa8\ub378 \uc815\ud655\ub3c4\ub294 \ub79c\ub364\uc73c\ub85c\n10\uac1c\uc911 1\uac1c\ub97c \uc120\ud0dd\ud588\uc744 \ub54c\uc758 \uc815\ud655\ub3c4\uc640 \uc720\uc0ac\ud55c \uc218\uc900\uc785\ub2c8\ub2e4.\n\n\n\n\n\n```python\nplt.figure(figsize=(5,5))\nplt.plot(epsilons, accuracies, \"*-\")\nplt.yticks(np.arange(0, 1.1, step=0.1))\nplt.xticks(np.arange(0, .35, 
step=0.05))\nplt.title(\"Accuracy vs Epsilon\")\nplt.xlabel(\"Epsilon\")\nplt.ylabel(\"Accuracy\")\nplt.show()\n```\n\n\uc0d8\ud50c \uc801\ub300\uc801 \uc608\uc81c\ub4e4\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uacf5\uc9dc \uc810\uc2ec\uc740 \uc5c6\ub2e4\ub294 \uac83\uc744 \uae30\uc5b5\ud558\uc2dc\ub098\uc694? \uc774 \uacbd\uc6b0\uc5d0\ub294 \uc5e1\uc2e4\ub860\uc774 \uc99d\uac00\ud560\uc218\ub85d \ud14c\uc2a4\ud2b8 \uc815\ud655\ub3c4\ub294 \ub5a8\uc5b4\uc9d1\ub2c8\ub2e4.\n**\uadf8\ub7ec\ub098** \uc791\uc740 \ubcc0\ud654\ub294 \ub354 \uc27d\uac8c \uc778\uc2dd\ud560 \uc218 \uc788\uac8c \ub429\ub2c8\ub2e4.\n\uc2e4\uc81c\ub85c \uc815\ud655\ub3c4 \uc800\ud558\uc640 \uacf5\uaca9\uc790\uac00 \uace0\ub824\ud574\uc57c \ud558\ub294 \uc774\ud574\ub3c4 \uc0ac\uc774\uc5d0\ub294 \uc0c1\ucda9 \uad00\uacc4(tradeoff)\uac00 \uc788\uc2b5\ub2c8\ub2e4.\n\uc5ec\uae30\uc11c \uc6b0\ub9ac\ub294 \uac01 \uc5e1\uc2e4\ub860 \uac12\uc5d0\uc11c \uc131\uacf5\uc801\uc778 \ub300\uc801 \uc0ac\ub840\ub97c \ubcf4\uc774\ub294 \uba87 \uac00\uc9c0 \uc608\ub97c \ubcf4\uaca0\uc2b5\ub2c8\ub2e4.\n\uc544\ub798 \uc774\ubbf8\uc9c0\uc758 \uccab\ubc88\uc9f8\ub85c \uc5f4\uc740 $\\epsilon=0$ \uc778 \uc608\uc81c\ub4e4\ub85c \uc791\uc740 \ubcc0\ud654\uac00 \uc5c6\ub294 \uc6d0\ubcf8\uc758 \"\uae68\ub057\ud55c\" \uc774\ubbf8\uc9c0\ub4e4\uc744 \ub098\ud0c0\ub0c5\ub2c8\ub2e4.\n\uac01 \uc774\ubbf8\uc9c0\uc758 \uc704\uc758 \uae00\uc790\ub294 \"\uc6d0\ub798 \ubd84\ub958 \uacb0\uacfc -> \uc801\ub300\uc801 \ubd84\ub958 \uacb0\uacfc\"\ub97c \ub098\ud0c0\ub0c5\ub2c8\ub2e4.\n$\\epsilon=0.15$ \uc5d0\uc11c \uc791\uc740 \ubcc0\ud654\uac00 \ub208\uc5d0 \ub744\uae30 \uc2dc\uc791\ud558\uace0 $\\epsilon=0.3$ \uc5d0\uc11c\ub294 \ud655\uc2e4\ud574 \ubcf4\uc785\ub2c8\ub2e4.\n\uadf8\ub7ec\ub098 \ubaa8\ub4e0 \uacbd\uc6b0\uc5d0 \ub300\ud574\uc11c \ub178\uc774\uc988\uac00 \ucd94\uac00\ub418\uc5c8\ub354\ub77c\ub3c4 \uc0ac\ub78c\uc740 \uc62c\ubc14\ub974\uac8c \ubd84\ub958\ub97c \uc218\ud589\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\n\n\n\n```python\n# \uac01 \uc5e1\uc2e4\ub860\uc5d0\uc11c \uc801\ub300\uc801 \uc0d8\ud50c\uc758 \uba87 \uac00\uc9c0 \uc608\ub97c \ub3c4\uc2dd\ud654\ud569\ub2c8\ub2e4\ncnt = 0\nplt.figure(figsize=(8,10))\nfor i in range(len(epsilons)):\n for j in range(len(examples[i])):\n cnt += 1\n plt.subplot(len(epsilons),len(examples[0]),cnt)\n plt.xticks([], [])\n plt.yticks([], [])\n if j == 0:\n plt.ylabel(\"Eps: {}\".format(epsilons[i]), fontsize=14)\n orig,adv,ex = examples[i][j]\n plt.title(\"{} -> {}\".format(orig, adv))\n plt.imshow(ex, cmap=\"gray\")\nplt.tight_layout()\nplt.show()\n```\n\n\ub2e4\uc74c \ub2e8\uacc4\ub294?\n-----------------\n\n\uc774\ubc88 \ud29c\ud1a0\ub9ac\uc5bc\uc5d0\uc11c \uc801\ub300\uc801 \uba38\uc2e0 \ub7ec\ub2dd\uc5d0 \ub300\ud55c \ud1b5\ucc30\uc744 \uc5bb\uc744 \uc218 \uc788\uc5c8\uae30\ub97c \ubc14\ub78d\ub2c8\ub2e4.\n\ud29c\ud1a0\ub9ac\uc5bc\uc758 \ub0b4\uc6a9\uc73c\ub85c\ubd80\ud130 \uc55e\uc73c\ub85c \ub354 \ub9ce\uc740 \uac83\ub4e4\uc744 \uc54c\uc544\ub098\uac08 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\ud29c\ud1a0\ub9ac\uc5bc\uc758 \uc801\ub300\uc801 \uacf5\uaca9 \uc608\uc81c\ub294 \ubcf8 \ubd84\uc57c\uc758 \ucd08\uae09 \ub2e8\uacc4\uc774\uba70\n\uc801\ub300\uc801 \uc0c1\ud669\uc73c\ub85c\ubd80\ud130 ML \ubaa8\ub378\uc744 \uacf5\uaca9\ud558\uace0 \ubc29\uc5b4\ud558\ub294 \ubc29\ubc95\uc5d0 \ub300\ud55c \ub9ce\uc740 \ud6c4\uc18d \uc544\uc774\ub514\uc5b4\uac00 \uc788\uc2b5\ub2c8\ub2e4.\n\uc0ac\uc2e4 NIPS 2017 \uc5d0\uc11c \uc801\ub300\uc801 \uacf5\uaca9\uacfc \ubc29\uc5b4\uc5d0 \ub300\ud55c \uacbd\uc7c1(competition)\uc774 
\uc788\uc5c8\uace0 \uc5ec\uae30\uc11c \uc0ac\uc6a9\ub41c\n\ub2e4\uc591\ud55c \ubc29\ubc95\ub4e4\uc740 \ub2e4\uc74c \ub17c\ubb38\uc5d0 \uc815\ub9ac \ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4: `\uc801\ub300\uc801 \uacf5\uaca9\uacfc \ubc29\uc5b4 \uacbd\uc7c1 `__.\n\ubc29\uc5b4\uc5d0 \ub300\ud55c \uc5f0\uad6c\ub294 \uc790\uc5f0\uc2a4\ub7fd\uac8c \uad50\ub780 \ubc0f \ud574\ud0b9 \ubaa9\uc801\uc73c\ub85c \uc81c\uc791\ub41c \uc785\ub825\uc5d0 \ub300\ud574 \uba38\uc2e0 \ub7ec\ub2dd \ubaa8\ub378\uc744\n\ubcf4\ub2e4 *\uacac\uace0\ud558\uac8c(robust)* \ub9cc\ub4dc\ub294 \uc544\uc774\ub514\uc5b4\ub85c \uc774\uc5b4\uc9d1\ub2c8\ub2e4.\n\n\ub610 \ub2e4\ub978 \ubc29\ud5a5\uc740 \ub2e4\ub978 \ub3c4\uba54\uc778\uc5d0\uc11c\uc758 \uc801\uc758 \uacf5\uaca9\uacfc \ubc29\uc5b4\uc785\ub2c8\ub2e4. \uc801\ub300\uc801 \uc5f0\uad6c\ub294 \uc774\ubbf8\uc9c0 \ub3c4\uba54\uc778\uc5d0 \uc81c\ud55c\ub418\uc5b4 \uc788\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\n`\uc5ec\uae30 `__ \uc5d0\uc11c \uc74c\uc131-\ud14d\uc2a4\ud2b8 \ubcc0\ud658 \ubaa8\ub378\uc5d0\uc11c\uc758 \uacf5\uaca9\ub3c4 \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\uadf8\ub7ec\ub098 \uc801\ub300\uc801 \uba38\uc2e0 \ub7ec\ub2dd \ubd84\uc57c\uc5d0 \ub300\ud574\uc11c \ub9ce\uc740 \uac83\uc744 \uc54c\uae30 \uc704\ud55c \ucd5c\uace0\uc758 \ubc29\ubc95\uc740 \ub9ce\uc774 \uc2dc\ub3c4\ud574\ubcf4\ub294 \uac83\uc785\ub2c8\ub2e4.\nNIPS 2017 \uacbd\uc7c1\uc5d0\uc11c \uc18c\uac1c\ub41c \ub2e4\uc591\ud55c \uacf5\uaca9 \ubc29\ubc95\uc744 \uc9c1\uc811 \uad6c\ud604\ud574 \ubcf4\uace0, FGSM \uacfc \uc5b4\ub5a4 \uc810\uc774 \ub2e4\ub978\uc9c0 \uc5f0\uad6c\ud574 \ubcf4\uc138\uc694.\n\uadf8\ub9ac\uace0 \ub098\uc11c \uc9c1\uc811 \ub9cc\ub4e0 \uacf5\uaca9\uc73c\ub85c\ubd80\ud130 \ubaa8\ub378\uc744 \ubc29\uc5b4\ud574 \ubcf4\uc138\uc694.\n\n\n\n", "meta": {"hexsha": "94c85ee33fbbde3088f6f45393265bdf60f9c622", "size": 33866, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/_downloads/2db6a7dd5fbf3ce6acb5f93d7d5373c5/fgsm_tutorial.ipynb", "max_stars_repo_name": "woojinsong/PyTorch-tutorials-kr", "max_stars_repo_head_hexsha": "36fefd556f45c2b1f5db912793172c0369430fd4", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 221, "max_stars_repo_stars_event_min_datetime": "2018-04-06T01:42:58.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-28T10:12:45.000Z", "max_issues_repo_path": "docs/_downloads/2db6a7dd5fbf3ce6acb5f93d7d5373c5/fgsm_tutorial.ipynb", "max_issues_repo_name": "woojinsong/PyTorch-tutorials-kr", "max_issues_repo_head_hexsha": "36fefd556f45c2b1f5db912793172c0369430fd4", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 280, "max_issues_repo_issues_event_min_datetime": "2018-05-25T08:53:21.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-02T05:37:25.000Z", "max_forks_repo_path": "docs/_downloads/2db6a7dd5fbf3ce6acb5f93d7d5373c5/fgsm_tutorial.ipynb", "max_forks_repo_name": "woojinsong/PyTorch-tutorials-kr", "max_forks_repo_head_hexsha": "36fefd556f45c2b1f5db912793172c0369430fd4", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 181, "max_forks_repo_forks_event_min_datetime": "2018-05-25T02:00:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-19T11:56:39.000Z", "avg_line_length": 174.5670103093, "max_line_length": 6695, "alphanum_fraction": 0.7085572551, "converted": true, "num_tokens": 6490, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5273165233795671, "lm_q2_score": 0.5389832206876841, "lm_q1q2_score": 0.2842147580929516}} {"text": "```python\n# This cell is added by sphinx-gallery\n# It can be customized to whatever you like\n%matplotlib inline\n```\n\n\n\nPlugins and Hybrid computation\n==============================\n\n.. meta::\n :property=\"og:description\": This tutorial introduces the notion of hybrid\n computation by combining several PennyLane device backends to train an algorithm\n containing both photonic and qubit devices.\n :property=\"og:image\": https://pennylane.ai/qml/_images/photon_redirection.png\n\n.. related::\n\n tutorial_qubit_rotation Basic tutorial: qubit rotation\n tutorial_gaussian_transformation Gaussian transformation\n\n*Author: PennyLane dev team. Last updated: 1 Feb 2021.*\n\nThis tutorial introduces the notion of hybrid computation by combining several PennyLane\nplugins. We first introduce PennyLane's `Strawberry Fields plugin `_\nand use it to explore a non-Gaussian photonic circuit. We then combine this photonic circuit with a\nqubit circuit \u2014 along with some classical processing \u2014 to create and optimize a fully hybrid computation.\nBe sure to read through the introductory `qubit rotation ` and\n`Gaussian transformation ` tutorials before attempting this tutorial.\n\n

Note

To follow along with this tutorial on your own computer, you will require the\n `PennyLane-SF plugin `_, in order to access the\n `Strawberry Fields `_ Fock backend using\n PennyLane. It can be installed via pip:\n\n .. code-block:: bash\n\n pip install pennylane-sf

\n\n\nA non-Gaussian circuit\n----------------------\n\nWe first consider a photonic circuit which is similar in spirit to the\n`qubit rotation ` circuit:\n\n.. figure:: ../demonstrations/plugins_hybrid/photon_redirection.png\n :align: center\n :width: 30%\n :target: javascript:void(0);\n\nBreaking this down, step-by-step:\n\n1. **We start the computation with two qumode subsystems**. In PennyLane, we use the\n shorthand 'wires' to refer to quantum subsystems, whether they are qumodes, qubits, or\n any other kind of quantum register.\n\n2. **Prepare the state** $\\left|1,0\\right\\rangle$. That is, the first wire (wire 0) is prepared\n in a single-photon state, while the second\n wire (wire 1) is prepared in the vacuum state. The former state is non-Gaussian,\n necessitating the use of the ``'strawberryfields.fock'`` backend device.\n\n3. **Both wires are then incident on a beamsplitter**, with free parameters $\\theta$ and $\\phi$.\n Here, we have the convention that the beamsplitter transmission amplitude is $t=\\cos\\theta$,\n and the reflection amplitude is\n $r=e^{i\\phi}\\sin\\theta$. See :doc:`introduction/operations` for a full list of operation conventions.\n\n4. **Finally, we measure the mean photon number** $\\left\\langle \\hat{n}\\right\\rangle$ of the second wire, where\n\n .. math:: \\hat{n} = \\ad\\a\n\n is the number operator, acting on the Fock basis number states, such that $\\hat{n}\\left|n\\right\\rangle = n\\left|n\\right\\rangle$.\n\nThe aim of this tutorial is to optimize the beamsplitter parameters $(\\theta, \\phi)$ such\nthat the expected photon number of the second wire is **maximized**. Since the beamsplitter\nis a passive optical element that preserves the total photon number, this to the output\nstate $\\left|0,1\\right\\rangle$ \u2014 i.e., when the incident photon from the first wire has been\n'redirected' to the second wire.\n\n\nExact calculation\n~~~~~~~~~~~~~~~~~\n\nTo compare with later numerical results, we can first consider what happens analytically.\nThe initial state of the circuit is $\\left|\\psi_0\\right\\rangle=\\left|1,0\\right\\rangle$, and the output state\nof the system is of the form $\\left|\\psi\\right\\rangle = a\\left|1, 0\\right\\rangle + b\\left|0,1\\right\\rangle$, where\n$|a|^2+|b|^2=1$. We may thus write the output state as a vector in this\ncomputational basis, $\\left|\\psi\\right\\rangle = \\begin{bmatrix}a & b\\end{bmatrix}^T$.\n\nThe beamsplitter acts on this two-dimensional subspace as follows:\n\n\\begin{align}\\left|\\psi\\right\\rangle = B(\\theta, \\phi)\\left|1, 0\\right\\rangle = \\begin{bmatrix}\n \\cos\\theta & -e^{-i\\phi}\\sin\\theta\\\\\n e^{i\\phi}\\sin\\theta & \\cos\\theta\n \\end{bmatrix}\\begin{bmatrix} 1\\\\ 0\\end{bmatrix} = \\begin{bmatrix}\n \\cos\\theta\\\\\n e^{i\\phi} \\sin\\theta\n \\end{bmatrix}\\end{align}\n\nFurthermore, the mean photon number of the second wire is\n\n\\begin{align}\\left\\langle{\\hat{n}_1}\\right\\rangle = \\langle{\\psi}\\mid{\\hat{n}_1}\\mid{\\psi}\\rangle = |e^{i\\phi} \\sin\\theta|^2\n \\langle{0,1}\\mid{\\hat{n}_1}\\mid{0,1}\\rangle = \\sin^2 \\theta.\\end{align}\n\nTherefore, we can see that:\n\n1. $0\\leq \\left\\langle \\hat{n}_1\\right\\rangle\\leq 1$: the output of the quantum circuit is\n bound between 0 and 1;\n\n2. $\\frac{\\partial}{\\partial \\phi} \\left\\langle \\hat{n}_1\\right\\rangle=0$: the output of the\n quantum circuit is independent of the beamsplitter phase $\\phi$;\n\n3. 
The output of the quantum circuit above is maximised when $\\theta=(2m+1)\\pi/2$\n for $m\\in\\mathbb{Z}_0$.\n\nLoading the plugin device\n-------------------------\n\nWhile PennyLane provides a basic qubit simulator (``'default.qubit'``) and a basic CV\nGaussian simulator (``'default.gaussian'``), the true power of PennyLane comes from its\n`plugin ecosystem `_, allowing quantum computations\nto be run on a variety of quantum simulator and hardware devices.\n\nFor this circuit, we will be using the ``'strawberryfields.fock'`` device to construct\na QNode. This allows the underlying quantum computation to be performed using the\n`Strawberry Fields `_ Fock backend.\n\nAs usual, we begin by importing PennyLane and the wrapped version of NumPy provided by PennyLane:\n\n\n\n```python\nimport pennylane as qml\nfrom pennylane import numpy as np\n```\n\nNext, we create a device to run the quantum node. This is easy in PennyLane; as soon as\nthe PennyLane-SF plugin is installed, the ``'strawberryfields.fock'`` device can be loaded\n\u2014 no additional commands or library imports required.\n\n\n\n\n```python\ndev_fock = qml.device(\"strawberryfields.fock\", wires=2, cutoff_dim=2)\n```\n\nCompared to the default devices provided with PennyLane, the ``'strawberryfields.fock'``\ndevice requires the additional keyword argument:\n\n* ``cutoff_dim``: the Fock space truncation used to perform the quantum simulation\n\n

Note

Devices provided by external plugins may require additional arguments and keyword arguments\n \u2014 consult the plugin documentation for more details.

\n\n\n\nConstructing the QNode\n----------------------\n\nNow that we have initialized the device, we can construct our quantum node. Like\nthe other tutorials, we use the :mod:`~.pennylane.qnode` decorator\nto convert our quantum function (encoded by the circuit above) into a quantum node\nrunning on Strawberry Fields.\n\n\n\n\n```python\n@qml.qnode(dev_fock, diff_method=\"parameter-shift\")\ndef photon_redirection(params):\n qml.FockState(1, wires=0)\n qml.Beamsplitter(params[0], params[1], wires=[0, 1])\n return qml.expval(qml.NumberOperator(1))\n```\n\nThe ``'strawberryfields.fock'`` device supports all CV objects provided by PennyLane;\nsee `CV operations `.\n\n\n\nOptimization\n------------\n\nLet's now use one of the built-in PennyLane optimizers in order to\ncarry out photon redirection. Since we wish to maximize the mean photon number of\nthe second wire, we can define our cost function to minimize the *negative* of the circuit output.\n\n\n\n\n```python\ndef cost(params):\n return -photon_redirection(params)\n```\n\nTo begin our optimization, let's choose the following small initial values of\n$\\theta$ and $\\phi$:\n\n\n\n\n```python\ninit_params = np.array([0.01, 0.01])\nprint(cost(init_params))\n```\n\nHere, we choose the values of $\\theta$ and $\\phi$ to be very close to zero;\nthis results in $B(\\theta,\\phi)\\approx I$, and the output of the quantum\ncircuit will be very close to $\\left|1, 0\\right\\rangle$ \u2014 i.e., the circuit leaves the photon in the first mode.\n\nWhy don't we choose $\\theta=0$ and $\\phi=0$?\n\nAt this point in the parameter space, $\\left\\langle \\hat{n}_1\\right\\rangle = 0$, and\n$\\frac{d}{d\\theta}\\left\\langle{\\hat{n}_1}\\right\\rangle|_{\\theta=0}=2\\sin\\theta\\cos\\theta|_{\\theta=0}=0$.\nSince the gradient is zero at those initial parameter values, the optimization\nalgorithm would never descend from the maximum.\n\nThis can also be verified directly using PennyLane:\n\n\n\n\n```python\ndphoton_redirection = qml.grad(photon_redirection, argnum=0)\nprint(dphoton_redirection([0.0, 0.0]))\n```\n\nNow, let's use the :class:`~.pennylane.GradientDescentOptimizer`, and update the circuit\nparameters over 100 optimization steps.\n\n\n\n\n```python\n# initialise the optimizer\nopt = qml.GradientDescentOptimizer(stepsize=0.4)\n\n# set the number of steps\nsteps = 100\n# set the initial parameter values\nparams = init_params\n\nfor i in range(steps):\n # update the circuit parameters\n params = opt.step(cost, params)\n\n if (i + 1) % 5 == 0:\n print(\"Cost after step {:5d}: {: .7f}\".format(i + 1, cost(params)))\n\nprint(\"Optimized rotation angles: {}\".format(params))\n```\n\nComparing this to the `exact calculation ` above,\nthis is close to the optimum value of $\\theta=\\pi/2$, while the value of\n$\\phi$ has not changed \u2014 consistent with the fact that $\\left\\langle \\hat{n}_1\\right\\rangle$\nis independent of $\\phi$.\n\n\nHybrid computation\n------------------\n\nTo really highlight the capabilities of PennyLane, let's now combine the qubit-rotation QNode\nfrom the `qubit rotation tutorial ` with the CV photon-redirection\nQNode from above, as well as some classical processing, to produce a truly hybrid\ncomputational model.\n\nFirst, we define a computation consisting of three steps: two quantum nodes (the qubit rotation\nand photon redirection circuits, running on the ``'default.qubit'`` and\n``'strawberryfields.fock'`` devices, respectively), along with a classical function, that simply\nreturns the squared difference of its 
two inputs using NumPy:\n\n\n\n\n```python\n# create the devices\ndev_qubit = qml.device(\"default.qubit\", wires=1)\ndev_fock = qml.device(\"strawberryfields.fock\", wires=2, cutoff_dim=10)\n\n\n@qml.qnode(dev_qubit)\ndef qubit_rotation(phi1, phi2):\n \"\"\"Qubit rotation QNode\"\"\"\n qml.RX(phi1, wires=0)\n qml.RY(phi2, wires=0)\n return qml.expval(qml.PauliZ(0))\n\n\n@qml.qnode(dev_fock, diff_method=\"parameter-shift\")\ndef photon_redirection(params):\n \"\"\"The photon redirection QNode\"\"\"\n qml.FockState(1, wires=0)\n qml.Beamsplitter(params[0], params[1], wires=[0, 1])\n return qml.expval(qml.NumberOperator(1))\n\n\ndef squared_difference(x, y):\n \"\"\"Classical node to compute the squared\n difference between two inputs\"\"\"\n return np.abs(x - y) ** 2\n```\n\nNow, we can define an objective function associated with the optimization, linking together\nour three subcomponents. Here, we wish to\nperform the following hybrid quantum-classical optimization:\n\n.. figure:: ../demonstrations/plugins_hybrid/hybrid_graph.png\n :align: center\n :width: 70%\n :target: javascript:void(0);\n\n1. The qubit-rotation circuit will contain fixed rotation angles $\\phi_1$ and $\\phi_2$.\n\n2. The photon-redirection circuit will contain two free parameters, the beamsplitter angles\n $\\theta$ and $\\phi$, which are to be optimized.\n\n3. The outputs of both QNodes will then be fed into the classical node, returning the\n squared difference of the two quantum functions.\n\n4. Finally, the optimizer will calculate the gradient of the entire computation with\n respect to the free parameters $\\theta$ and $\\phi$, and update their values.\n\nIn essence, we are optimizing the photon-redirection circuit to return the **same expectation value**\nas the qubit-rotation circuit, even though they are two completely independent quantum systems.\n\nWe can translate this computational graph to the following function, which combines the three\nnodes into a single hybrid computation. Below, we choose default values\n$\\phi_1=0.5$, $\\phi_2=0.1$:\n\n\n\n\n```python\ndef cost(params, phi1=0.5, phi2=0.1):\n \"\"\"Returns the squared difference between\n the photon-redirection and qubit-rotation QNodes, for\n fixed values of the qubit rotation angles phi1 and phi2\"\"\"\n qubit_result = qubit_rotation(phi1, phi2)\n photon_result = photon_redirection(params)\n return squared_difference(qubit_result, photon_result)\n```\n\nNow, we use the built-in :class:`~.pennylane.GradientDescentOptimizer` to perform the optimization\nfor 100 steps. As before, we choose initial beamsplitter parameters of\n$\\theta=0.01$, $\\phi=0.01$.\n\n\n\n\n```python\n# initialise the optimizer\nopt = qml.GradientDescentOptimizer(stepsize=0.4)\n\n# set the number of steps\nsteps = 100\n# set the initial parameter values\nparams = np.array([0.01, 0.01])\n\nfor i in range(steps):\n # update the circuit parameters\n params = opt.step(cost, params)\n\n if (i + 1) % 5 == 0:\n print(\"Cost after step {:5d}: {: .7f}\".format(i + 1, cost(params)))\n\nprint(\"Optimized rotation angles: {}\".format(params))\n```\n\nSubstituting this into the photon redirection QNode shows that it now produces\nthe same output as the qubit rotation QNode:\n\n\n\n\n```python\nresult = [1.20671364, 0.01]\nprint(photon_redirection(result))\nprint(qubit_rotation(0.5, 0.1))\n```\n\nThis is just a simple example of the kind of hybrid computation that can be carried\nout in PennyLane. 
Quantum nodes (bound to different devices) and classical\nfunctions can be combined in many different and interesting ways.\n\n\n

    Medical density score

\n\n

    Kenza Elass and Rebecca Clain

\n\n\n```python\nimport pandas as pd\nimport seaborn as sns\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn import preprocessing\nsns.set_style('darkgrid')\npd.set_option('display.max_columns', None) \nimport ipywidgets as widgets\nfrom __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nfrom IPython.display import display\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n```\n\n\n```python\ndf = pd.read_csv('analyse_df.csv', sep = ';', low_memory = False)\n\ndf.columns = ['code commune \u00e9tablissement', 'ambulance', 'analyse m\u00e9dicale', 'autre', 'autre sp\u00e9cialiste',\n 'chirurgien','dentiste', 'generaliste', 'hopital', 'infirmiers', 'organe', 'radiologiste',\n 'r\u00e9educateur podologue', 'code g\u00e9ographique', 'libell\u00e9 g\u00e9ographique','nombre m\u00e9nage fiscaux',\n 'nombre personne m\u00e9nage fiscaux', 'm\u00e9diane niveau de vie', 'part m\u00e9nages fiscaux impos\u00e9s',\n 'taux de pauvret\u00e9 ensemble', 'taux de pauvret\u00e9 moins de 30 ans', 'taux de pauvret\u00e9 30 \u00e0 39 ans',\n 'taux de pauvret\u00e9 40 \u00e0 49 ans', 'taux de pauvret\u00e9 50 \u00e0 59 ans', 'Taux de pauvret\u00e9 60 \u00e0 74 ans',\n 'taux de pauvret\u00e9 75 ans ou plus','taux de pauvret\u00e9 propri\u00e9taires','taux de pauvret\u00e9 locataires',\n 'part des revenus activit\u00e9s','dont part des salaires et traitement hors chomage',\n 'dont part des indemnites de chomage', 'dont part des revenus des activites non salaries',\n 'part des pensions retraites et rentes','part des revenus du patrimoine et autres revenus',\n 'part ensemble prestations sociales','dont part prestations familiales', 'dont part des minima sociaux',\n 'dont part des prestations logement','part des impots','1er d\u00e9cile niveau de vie',\n '9e d\u00e9cile niveau de vie','rapport interdecile 9e/1er','merge_']\n```\n\n\n```python\ndef missing_values_table(df) :\n \n mis_val = df.isnull().sum()\n mis_val_percent = 100 * df.isnull().sum() / len(df)\n mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1)\n mis_val_table_ren_columns = mis_val_table.rename(\n columns = {0 : 'Missing Values', 1 : '% of Total Values'})\n mis_val_table_ren_columns = mis_val_table_ren_columns[\n mis_val_table_ren_columns.iloc[:,1] != 0].sort_values('% of Total Values', ascending = False).round(1)\n print (\"Your selected dataframe has \" + str(df.shape[1]) + \" columns.\\n\" \n \"There are \" + str(mis_val_table_ren_columns.shape[0]) + \" columns that have missing values.\")\n \n return mis_val_table_ren_columns\n \nmissing_values_table(df)\n```\n\n Your selected dataframe has 43 columns.\n There are 30 columns that have missing values.\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Missing Values% of Total Values
taux de pauvret\u00e9 75 ans ou plus3554098.9
taux de pauvret\u00e9 moins de 30 ans3511897.8
Taux de pauvret\u00e9 60 \u00e0 74 ans3506897.6
taux de pauvret\u00e9 50 \u00e0 59 ans3462396.4
taux de pauvret\u00e9 30 \u00e0 39 ans3448096.0
taux de pauvret\u00e9 40 \u00e0 49 ans3404794.8
taux de pauvret\u00e9 propri\u00e9taires3376994.0
taux de pauvret\u00e9 locataires3272291.1
taux de pauvret\u00e9 ensemble3152987.8
dont part des revenus des activites non salaries3066585.4
dont part des minima sociaux3066585.4
dont part des prestations logement3066585.4
part des impots3066585.4
1er d\u00e9cile niveau de vie3066585.4
9e d\u00e9cile niveau de vie3066585.4
dont part prestations familiales3066585.4
part ensemble prestations sociales3066585.4
part des revenus du patrimoine et autres revenus3066585.4
part des pensions retraites et rentes3066585.4
rapport interdecile 9e/1er3066585.4
dont part des indemnites de chomage3066585.4
dont part des salaires et traitement hors chomage3066585.4
part des revenus activit\u00e9s3066585.4
part m\u00e9nages fiscaux impos\u00e9s3066185.4
code commune \u00e9tablissement1544443.0
m\u00e9diane niveau de vie367010.2
nombre personne m\u00e9nage fiscaux367010.2
nombre m\u00e9nage fiscaux367010.2
code g\u00e9ographique740.2
libell\u00e9 g\u00e9ographique740.2
\n
\n\n\n\n# Construction d'un score de densit\u00e9 m\u00e9dicale\n\n\n```python\ndf_dept = (df.dropna(subset = ['code commune \u00e9tablissement','nombre personne m\u00e9nage fiscaux']) \n .assign(departement = lambda df : df['code commune \u00e9tablissement'].str[:2])\n .groupby('departement')\n [['nombre personne m\u00e9nage fiscaux','ambulance', 'analyse m\u00e9dicale', 'autre',\n 'autre sp\u00e9cialiste', 'chirurgien', 'dentiste', 'generaliste', 'hopital',\n 'infirmiers', 'organe', 'radiologiste', 'r\u00e9educateur podologue']]\n .sum())\ndf_dept.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
nombre personne m\u00e9nage fiscauxambulanceanalyse m\u00e9dicaleautreautre sp\u00e9cialistechirurgiendentistegeneralistehopitalinfirmiersorganeradiologister\u00e9educateur podologue
departement
01557566.082.039.0718.0203.039.0368.0597.0138.01070.01.038.0959.0
02412914.065.036.0238.0263.025.0261.0567.0119.0986.05.059.0562.0
03300824.559.025.0307.0157.035.0192.0392.059.0774.02.030.0525.0
04148467.523.018.0279.0118.015.0134.0342.048.0631.00.06.0419.0
05127431.033.014.0235.0138.021.0134.0379.066.0483.02.025.0596.0
\n
\n\n\n\n\n```python\nsoins = ['ambulance','analyse m\u00e9dicale','autre','autre sp\u00e9cialiste','chirurgien','dentiste',\n 'generaliste','hopital','infirmiers','organe','radiologiste','r\u00e9educateur podologue']\n\nfor v in soins :\n df_dept[v + ' pour 100 000 habitants'] = 100000*df_dept[v]/df_dept['nombre personne m\u00e9nage fiscaux']\n```\n\nExemple de distribution des g\u00e9n\u00e9ralistes en France (estimation de la densit\u00e9 par m\u00e9thode du noyau gaussien).\n\n\n```python\nplt.figure(figsize=(10,6))\nsns.kdeplot(df_dept['generaliste pour 100 000 habitants'])\nplt.axvline(df_dept['generaliste pour 100 000 habitants'].median(), color = 'red', linestyle = '--')\nplt.show()\n```\n\nNous appellerons densit\u00e9 m\u00e9dicale le nombre de m\u00e9decins pour 100 000 habitants dans un d\u00e9partement donn\u00e9. Il existe une densit\u00e9 m\u00e9dicale pour chaque poste de soins (ambulance, dentiste, chirurgien, radioth\u00e9rapiste, etc). \n\nPour simplifier, nous expliquerons la d\u00e9marche en parlant uniquement de la densit\u00e9 de g\u00e9n\u00e9raliste. La m\u00e9thode reste la m\u00eame pour les autres postes de soins.\n\nSoit $x_{i}$ un vecteur ordonnant l'ensemble des densit\u00e9s de g\u00e9n\u00e9ralistes pour l'ensemble des d\u00e9partements $i$ (le r\u00f4le des poids $w_{i}$ sera pr\u00e9sent\u00e9 plus tard).\n\n$x_{i} = [x_{min},...,x_{median},...,x_{max}] \\:, w_{i} = [w_{min},...,w_{median},...,w_{max}]$ \n\n\n```python\ndf_dept['generaliste pour 100 000 habitants'].describe()\n```\n\n\n\n\n count 97.000000\n mean 160.172290\n std 36.195403\n min 103.548581\n 25% 135.185412\n 50% 152.977729\n 75% 182.096106\n max 297.415856\n Name: generaliste pour 100 000 habitants, dtype: float64\n\n\n\nNous avons donc : $x_{i} = [104,...,153,...,297] \\:; w_{i} = [w_{min},...,w_{median},...,w_{max}]$ \n\nNous fixons un poids \u00e9gal \u00e0 $w_{median} = 1$ pour la densit\u00e9 de g\u00e9n\u00e9raliste m\u00e9diane $x_{median}$.\nPlus la densit\u00e9 de g\u00e9n\u00e9raliste diminuera relativement \u00e0 $x_{median}$, plus le poids $w$ diminuera.\nA l'inverse, plus la densit\u00e9 de g\u00e9n\u00e9raliste augmentera par rapport \u00e0 $x_{median}$, plus le poids augmentera. 
\n\nPour cela, nous utilisons 2 boucles : la premi\u00e8re pour remplir it\u00e9rativement le vecteur les poids de $w_{median}$ jusqu'\u00e0 $w_{min}$, la deuxi\u00e8me pour remplir it\u00e9rativement le vecteur de poids de $w_{median}$ jusqu'\u00e0 $w_{max}$.\n\n\n```python\ndef RelativeWeight(variable) : \n \n x = np.sort(df_dept[variable].values)\n\n i = x.tolist().index(np.median(x))\n w = [0] * len(x)\n w[i] = 1\n\n i = x.tolist().index(np.median(x))\n while i >= 1 : \n w[i-1] = (w[i]*x[i-1])/x[i]\n i = i - 1\n\n j = x.tolist().index(np.median(x))\n while j <= len(w)-1 : \n try :\n w[j+1] = (w[j]*x[j+1])/x[j]\n except : IndexError\n pass\n j = j + 1\n\n d = pd.DataFrame(data = w, index = x, columns = ['Poids'])\n \n return d\n\ndf_dept['poids_generaliste'] = df_dept['generaliste pour 100 000 habitants'].map(\n RelativeWeight(variable = 'generaliste pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_ambulance'] = df_dept['ambulance pour 100 000 habitants'].map(\n RelativeWeight(variable = 'ambulance pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_chirurgien'] = df_dept['chirurgien pour 100 000 habitants'].map(\n RelativeWeight(variable = 'chirurgien pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_dentiste'] = df_dept['dentiste pour 100 000 habitants'].map(\n RelativeWeight(variable = 'dentiste pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_hopital'] = df_dept['hopital pour 100 000 habitants'].map(\n RelativeWeight(variable = 'hopital pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_infirmiers'] = df_dept['infirmiers pour 100 000 habitants'].map(\n RelativeWeight(variable = 'infirmiers pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_r\u00e9eductateur_podologue'] = df_dept['r\u00e9educateur podologue pour 100 000 habitants'].map(\n RelativeWeight(variable = 'r\u00e9educateur podologue pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_radiologiste'] = df_dept['radiologiste pour 100 000 habitants'].map(\n RelativeWeight(variable = 'radiologiste pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_analyse_m\u00e9dicale'] = df_dept['analyse m\u00e9dicale pour 100 000 habitants'].map(\n RelativeWeight(variable = 'analyse m\u00e9dicale pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_autre'] = df_dept['autre pour 100 000 habitants'].map(\n RelativeWeight(variable = 'autre pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_autre_sp\u00e9cialiste'] = df_dept['autre sp\u00e9cialiste pour 100 000 habitants'].map(\n RelativeWeight(variable = 'autre sp\u00e9cialiste pour 100 000 habitants')[\"Poids\"])\n\ndf_dept['poids_organe'] = df_dept['organe pour 100 000 habitants'].map(\n RelativeWeight(variable = 'organe pour 100 000 habitants')[\"Poids\"].dropna())\n```\n\nMaintenant que les poids sont calcul\u00e9s, nous pouvons \"ajuster\" les densit\u00e9s, pour chaque d\u00e9partement $i$, pour chaque poste de soins $j$.\n\n\\begin{equation}\nx_{ij}^{adj} = w_{ij} \\: x_{ij}\n\\end{equation}\n\n\n```python\ndf_dept['ambulance_ajust\u00e9'] = df_dept['poids_ambulance'] * df_dept['ambulance pour 100 000 habitants']\n\ndf_dept['r\u00e9educateur_podologue_ajust\u00e9'] = df_dept['poids_r\u00e9eductateur_podologue'] * df_dept['r\u00e9educateur podologue pour 100 000 habitants']\n\ndf_dept['analyse_m\u00e9dicale_ajust\u00e9'] = df_dept['poids_analyse_m\u00e9dicale'] * df_dept['analyse m\u00e9dicale pour 100 000 habitants']\n\ndf_dept['autre_ajust\u00e9'] = df_dept['poids_autre'] * df_dept['autre pour 100 000 
habitants']\n\ndf_dept['autre_sp\u00e9cialiste_ajust\u00e9'] = df_dept['poids_autre_sp\u00e9cialiste'] * df_dept['autre sp\u00e9cialiste pour 100 000 habitants']\n\ndf_dept['chirurgien_ajust\u00e9'] = df_dept['poids_chirurgien'] * df_dept['chirurgien pour 100 000 habitants']\n\ndf_dept['dentiste_ajust\u00e9'] = df_dept['poids_dentiste'] * df_dept['dentiste pour 100 000 habitants']\n\ndf_dept['generaliste_ajust\u00e9'] = df_dept['poids_generaliste'] * df_dept['generaliste pour 100 000 habitants']\n\ndf_dept['hopital_ajust\u00e9'] = df_dept['poids_hopital'] * df_dept['hopital pour 100 000 habitants']\n\ndf_dept['infirmier_ajust\u00e9'] = df_dept['poids_infirmiers'] * df_dept['infirmiers pour 100 000 habitants']\n\ndf_dept['organe_ajust\u00e9'] = df_dept['poids_organe'] * df_dept['organe pour 100 000 habitants']\n\ndf_dept['radiologiste_ajust\u00e9'] = df_dept['poids_radiologiste'] * df_dept['radiologiste pour 100 000 habitants']\n```\n\nNous avons ensuite d\u00e9cid\u00e9 de sommer en colonne l'ensemble de ces nouvelles densit\u00e9s ajust\u00e9es, de mani\u00e8re \u00e0 obtenir comme une densit\u00e9 m\u00e9dicale globale ajust\u00e9e, not\u00e9 $y_{i}$\n\n\\begin{equation}\n\\displaystyle y_{i} = \\sum_{j=1}^{m} x_{ij}^{adj} \\\\\n\\end{equation}\n\n\n```python\nf = []\nfor v in df_dept.columns : \n if v.endswith('ajust\u00e9') : \n f.append(v)\nd = df_dept[f]\nd['somme'] = d.sum(axis=1)\n```\n\nEnfin, nous appliquons une transformation min-max sur cette nouvelle variable, me permettant d'obtenir un score de densit\u00e9 m\u00e9dicale compris entre 0 et 1.\n\n\\begin{equation}\n\\displaystyle score_{i} = \\frac{y_{i} - y_{min}}{y_{max} - y_{min}}\n\\end{equation}\n\n\n```python\nx = d['somme'].values.reshape(-1,1) \nmin_max_scaler = preprocessing.MinMaxScaler()\nx_scaled = min_max_scaler.fit_transform(x)\nd['score'] = x_scaled\n```\n\nVisualisons la distribution de ce score (tend vers 1 si la densit\u00e9 m\u00e9dicale est relativement forte, et vers 0 si la densit\u00e9 m\u00e9dicale est relativement faible).\n\n\n```python\nplt.figure(figsize=(11,8))\nplt.subplot(2, 1, 1)\nplt.hist(d['score'], bins = 20)\nplt.subplot(2, 1, 2)\nsns.kdeplot(d['score'], shade = True)\nplt.show()\n```\n\n# Tableau de bord\n\n\n```python\ndf_dept = df_dept.rename(columns = {'ambulance pour 100 000 habitants' : 'Ambulance',\n 'analyse m\u00e9dicale pour 100 000 habitants' : 'Analyse m\u00e9dicale',\n 'autre pour 100 000 habitants' : 'Autre',\n 'autre sp\u00e9cialiste pour 100 000 habitants' : 'Autre sp\u00e9cialiste',\n 'chirurgien pour 100 000 habitants' : 'Chirurgien',\n 'dentiste pour 100 000 habitants' : 'Dentiste',\n 'generaliste pour 100 000 habitants' : 'G\u00e9n\u00e9raliste',\n 'hopital pour 100 000 habitants' : 'H\u00f4pital',\n 'infirmiers pour 100 000 habitants' : 'Infirmiers',\n 'organe pour 100 000 habitants' : 'Organe',\n 'radiologiste pour 100 000 habitants' : 'Radiologiste',\n 'r\u00e9educateur podologue pour 100 000 habitants' : 'R\u00e9educateur podologue'})\n\ndf_dept['D\u00e9partement'] = df_dept.index.values\ndf_dept['Score'] = x_scaled\n\na = widgets.ToggleButtons(\n options = ['Ambulance',\n 'Analyse m\u00e9dicale',\n 'Autre',\n 'Autre sp\u00e9cialiste',\n 'Chirurgien',\n 'Dentiste',\n 'G\u00e9n\u00e9raliste',\n 'H\u00f4pital',\n 'Infirmiers',\n 'Organe',\n 'Radiologiste',\n 'R\u00e9educateur podologue'],\n description = 'Sp\u00e9cialit\u00e9 :',\n disabled = False,\n button_style = ''\n )\nb = widgets.Select(\n options = df_dept['D\u00e9partement'].values.tolist(),\n value = '01',\n 
description = 'D\u00e9partement :',\n disabled = False\n )\n\ndisplay(a,b) \n\nbutton = widgets.Button(description= \"Voir les r\u00e9sultats\")\ndisplay(button)\n\noutput = widgets.Output()\n\n@output.capture()\ndef on_button_clicked(z):\n \n print('Dans le d\u00e9partement n\u00b0', b.value,', la densit\u00e9 en', a.value.lower(),\n 'est de', int(np.round(df_dept.loc[df_dept['D\u00e9partement'] == b.value, a.value].item(),0)),\n 'pour 100 000 habitants. \\nLe score de densit\u00e9 m\u00e9dicale de ce d\u00e9partement est \u00e9gal \u00e0',\n np.round(df_dept.loc[df_dept['D\u00e9partement'] == b.value, \"Score\"].item(),2))\n \nbutton.on_click(on_button_clicked)\ndisplay(output)\n```\n\n\n ToggleButtons(description='Sp\u00e9cialit\u00e9 :', options=('Ambulance', 'Analyse m\u00e9dicale', 'Autre', 'Autre sp\u00e9cialist\u2026\n\n\n\n Select(description='D\u00e9partement :', options=('01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11',\u2026\n\n\n\n Button(description='Voir les r\u00e9sultats', style=ButtonStyle())\n\n\n\n Output()\n\n", "meta": {"hexsha": "9b0f7d05db08fbc6f722098a34375b5cc598a39f", "size": 92739, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Projet_Big_Data_Elass_Clain.ipynb", "max_stars_repo_name": "elasskenza/deserts-medicaux", "max_stars_repo_head_hexsha": "57e2292d86613f67c26b67e24755a518616245c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/Projet_Big_Data_Elass_Clain.ipynb", "max_issues_repo_name": "elasskenza/deserts-medicaux", "max_issues_repo_head_hexsha": "57e2292d86613f67c26b67e24755a518616245c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/Projet_Big_Data_Elass_Clain.ipynb", "max_forks_repo_name": "elasskenza/deserts-medicaux", "max_forks_repo_head_hexsha": "57e2292d86613f67c26b67e24755a518616245c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.0992141454, "max_line_length": 28452, "alphanum_fraction": 0.7493179784, "converted": true, "num_tokens": 6413, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5813030906443133, "lm_q2_score": 0.4882833952958347, "lm_q1q2_score": 0.28384064679576765}} {"text": "# Statistics with Nilearn\nThis notebook is about the GLM/statistics-related functionality of the [nilearn](https://nilearn.github.io/) Python package for statistical analysis of fMRI data. While it is still in development, it promises to become a full-fledged Python-based alternative to existing (f)MRI analysis software packages such as FSL, SPM, and AFNI.\n\nIn this notebook, we'll showcase the most important features of the statistics-related functionality of the package in a step-by-step fashion. Notably, this notebook contains several exercises (which we call \"ToDos\"), which are meant to make this tutorial more interactive! Also, this tutorial is merely an introduction to (parts of) the Nilearn package. 
We strongly recommend checking out the excellent [user guide](https://nilearn.github.io/user_guide.html) and [example gallery](https://nilearn.github.io/auto_examples/index.html) on the Nilearn website if you want to delve deeper into the package's (more advanced) features.\n\nWhile not strictly necessary, you'll get the most out of this tutorial if you are familiar with the Nilearn package. To familiarize yourself, you could go through our [Nilearn tutorial](https://github.com/lukassnoek/nilearn-tutorial). (For students doing either of the *Neuroimaging* courses at the University of Amsterdam: the `nilearn.ipynb` notebook should be in your home-folder already.) Also, this notebook contains a very short introduction to the [pandas](https://pandas.pydata.org/) package, but this can be skipped by those who are already familiar with it.\n\n**Contents**\n1. What is Nilearn?\n2. Data formats\n3. Creating design matrices\n4. First-level models\n5. Second-level models\n\n**Estimated time needed to complete**: 1-3 hours (depending on your experience with Python)
\n**Credits**: if you end up using `nilearn` in your work, please cite the corresponding [article](https://www.frontiersin.org/articles/10.3389/fninf.2014.00014/full).
\n\n\n```python\n# Install packages if necessary\ntry:\n import nilearn\nexcept ImportError:\n !pip install nilearn\n\n# We need to limit the amount of threads numpy can use, otherwise\n# it tends to hog all the CPUs available when using Nilearn\nimport os\nos.environ['MKL_NUM_THREADS'] = '1'\nos.environ['OPENBLAS_NUM_THREADS'] = '1'\nimport numpy as np\n```\n\n
\n Warning: This notebook uses a lot of RAM (up to 8GB at some point), so make sure your computer/server can handle this!\n
\n\n## What is Nilearn?\nNilearn is one of the packages in the growing \"nipy\" ecosystem of Python packages for neuroimaging analysis (see also MNE, nilearn, nipype, nibabel, and dipy). One of the (recently added) features of the package is the ability to run statistical (mostly univariate) analyses of fMRI data. (This functionality was previously implemented in the \"Nistats\" package, which has been merged into Nilearn recently.) Importantly, Nilearn does not contain functionality to preprocess your (f)MRI data and assumes that your data has been preprocessed by another software package. Personally, we think it works very well in combination with preprocessed data using the [Fmriprep](https://fmriprep.readthedocs.io/) package.\n\n
\n Note: Nilearn is a relatively new package and its API might change! If code in this notebook gives errors because it uses an old version of Nilearn, let us know. \n
\n\nMost statistical functionality in Nilearn is stored in the `glm` module (i.e., `nilearn.glm`), which itself contains submodules for first-level analyses (in `nilearn.glm.first_level`) and higher-level analyses (in `nilearn.glm.second_level`).\n\n## Data formats\nThe two most important data formats that are used in the Nilearn package are nifti images and comma- or tab-separated values (CSV, TSV) text files.\n\n### Nifti images\nLike most packages in the *nipy* ecosystem, Nilearn assumes that your MRI data is stored in nifti images, and as such, many functions in Nilearn involving nifti images accept either strings pointing towards the path of a nifti file (or a list with multiple paths) or a `Nifti1Image` object from the `nibabel` package. Together, these two types of inputs (filenames pointing to nifti files and `Nifti1Images`) are often referred to a \"niimgs\" (or \"niimg-like\") by Nilearn \u2014 a term you'll see a lot in the documentation.\n\nLet's actually download some data! In this tutorial, we use data from the [NARPS](https://www.narps.info/) project ([Botvinik-Nezer et al., 2019a](https://www.nature.com/articles/s41597-019-0113-7); [Botvinik-Nezer et al., 2019b](https://www.biorxiv.org/content/10.1101/843193v1)). This public dataset was analyzed by 70 different research groups to showcase the variety in analysis approaches and the way this affects the subsequent results. In the study, a \"mixed gambles\" experiment was used to investigate how potential monetary gains and losses are related to brain activity (which was based on the experiment by [Tom et al., 2007](https://science.sciencemag.org/content/315/5811/515)). \n\nEach trial, participants were presented simulatenously with a potential monetary \"gain\" (e.g., +12) and a potential monetary \"loss\" (e.g., -6. For each trial, participants had to choose whether to accept this gamble (either \"strongly accept\", \"weakly accept\", \"weakly reject\", or \"strongly reject\"), knowing that at the end of the experiment, one trial (\"gamble\") would be picked randomly and (if chosen) would be \"run\" with a 50/50 chance of losing the original \"loss\" amount and winning the original \"gain\" amount. As such, the following information was recorded per trial: potential gain amount, potential loss amount, reaction time (of participant's response), and the participant's choice/decision. \n\n
\n ToDo: Go to the NARPS website and read through the \"data and analysis\" section to get an idea of the experiment and scanning procedure.\n
\n\nThe dataset is publicly available from [Openneuro.org](http://openneuro.org/), which also includes preprocessed data (using [Fmriprep](https://fmriprep.readthedocs.io/)) — perfect for our purposes! The cell below will download the data, which may take a while because it's 2.4 GB in size.\n\n\n```python\n# First, we need the `awscli` package to download the data; install if necessary\ntry:\n import awscli\nexcept ImportError:\n print(\"Installing awscli package ...\")\n !pip install awscli\n\n# We'll save the data in your home folder\n# Note: the os.path.join function concatenates paths using the delimiter specific\n# to your operating system (\\ for windows, / for Mac/Linux)\nimport os \nsave_dir = os.path.join(os.path.expanduser(\"~\"), 'NARPS')\nif not os.path.isdir(save_dir):\n print(\"Data will be saved in %s ...\" % save_dir)\n print(\"Go get some coffee. This may take a while.\\n\")\n\n # Using the CLI interface of the `awscli` Python package, we'll download the data\n # Note that we only download the preprocessed data from a single run from a single subject\n # This may take about 10-60 minutes or so (depending on your download speed)\n !aws s3 sync --no-sign-request s3://openneuro.org/ds001734 {save_dir} --exclude \"*\" --include \"*fmriprep/sub-00[1,3]/func/*run-01*space-MNI*.nii.gz\"\n !aws s3 sync --no-sign-request s3://openneuro.org/ds001734 {save_dir} --exclude \"*\" --include \"*fmriprep/sub-00[1,3]/func/*run-01*.tsv\"\n !aws s3 sync --no-sign-request s3://openneuro.org/ds001734 {save_dir} --exclude \"*\" --include \"*sub-00[1,3]/func/*run-01*events.tsv\"\n print(\"\\nDone!\")\nelse:\n print(\"Data is already downloaded!\")\n```\n\nAlright, let's check out the directory in which we saved the downloaded data:\n\n\n```python\ndef list_files(startpath):\n \"\"\" Simple function to show directory tree. \n From: https://stackoverflow.com/questions/9727673/list-directory-tree-structure-in-python. \"\"\"\n for root, dirs, files in os.walk(startpath):\n level = root.replace(startpath, '').count(os.sep)\n indent = ' ' * 4 * (level)\n print('{}{}/'.format(indent, os.path.basename(root)))\n subindent = ' ' * 4 * (level + 1)\n for f in sorted(files):\n print('{}{}'.format(subindent, f))\n \nlist_files(save_dir)\n```\n\nAs you can see, this directory contains both subdirectories with \"unprocessed\" data (in `sub-001/func`) and preprocessed data (in `derivatives/fmriprep/sub-001/func`). Note that we excluded the raw (unprocessed) MRI data to save disk space (and time).\n\nNow, let's check out an fMRI file. We'll load the preprocessed `sub-001_task-MGT_run-01_bold_space-MNI152NLin2009cAsym_preproc.nii.gz` (which has been registered and resampled to standard MNI152 space already) using `nibabel`:\n\n\n```python\nimport nibabel as nib\nfmri_path = os.path.join(save_dir, 'derivatives', 'fmriprep', 'sub-001', 'func', 'sub-001_task-MGT_run-01_bold_space-MNI152NLin2009cAsym_preproc.nii.gz')\nif not os.path.isfile(fmri_path):\n raise ValueError(\"Data does not seem to be downloaded (correctly).\\n\"\n \"Try removing the data and re-downloading it!\")\nfmri = nib.load(fmri_path)\n```\n\n
\n ToDo: Do you remember how to inspect Nifti1Images from nibabel? Try to figure out this scan's TR and how many volumes (timepoints) this file has. Store the TR in a variable named tr_todo and the number of volumes in nvol_todo. \n
\n\n\n```python\n''' Implement your ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the ToDo above. '''\nassert(tr_todo == 1)\nassert(nvol_todo == 453)\nprint(\"Well done!\")\n```\n\n### TSV files (or: a short introduction to Pandas)\n*(If you're familiar with the Pandas package, you may skip this section.)*\n\nThe other type of file you'll probably encounter a lot when working with Nilearn is a TSV (tab-separated values) file. This plain-text file is like a spreadsheet with different \"observations\" in rows and different \"attributes\" in columns. For example, according to the Brain Imaging Data Structure ([BIDS](https://bids.neuroimaging.io/)), information about your experiment (like trial onsets, durations, conditions, etc.) should be stored in a TSV file ending in `_events.tsv`. \n\nAs expected, the data we downloaded also contains such an event-file, `sub-001_task-MGT_run-01_events.tsv`. (Note that these event-files are stored in the \"unprocessed\" data directory, not the \"derivatives\" directory).\n\nThese TSV files can be loaded into Python using the [pandas](https://pandas.pydata.org/) package. `pandas` is usually imported as follows:\n\n\n```python\nimport pandas as pd\n```\n\nThen, TSV files (or any x-delimited files, like CSV files) can be loaded in using the `pd.read_csv` function. Let's do that for our events-file:\n\n\n```python\npath = os.path.join(save_dir, 'sub-001', 'func', 'sub-001_task-MGT_run-01_events.tsv')\nevents_df = pd.read_csv(path, sep='\\t')\nevents_df # putting events_df at the end of a cell (instead of printing it) will show a nicely formatted table\n```\n\n
\n ToDo: What do you think the argument sep='\\t' does? Try removing the argument to see what happens.\n
\n\nAs you can see, the `pd.read_csv` function returns a table-like object with 64 rows (corresponding to the experiment's events/trials) and 6 columns (corresponding to different attributes of the trials, like onset, duration, etc.). The `events_df` is a custom Pandas object called a `DataFrame`:\n\n\n```python\nprint(type(events_df))\n```\n\nPandas `DataFrames` are similar to dataframes in the R programming language. A full introduction to Pandas is beyond the scope of this tutorial (see [10 minutes to pandas](https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html) for a nice introduction), but we'll highlight some useful Pandas-related functionality below.\n\nOne thing that is useful to know is that `DataFrames` have both row and column names. The row names are usually referred to as the `DataFrame`'s *index* (these are the elements left of the \"onset\" column, i.e., 0, 1, 2, ... 63). Row and column names can be of any type (strings, like the \"onset\" column, integers, like the row names in this `DataFrame`, etc.).\n\nTo select particular rows and/or columns, you can use the `.loc` and `.iloc` methods, where the `.loc` method selects based on the *name* of the row/column (like a dictionary) and the `.iloc` method selects based on the *position*(i.e., the \"number\") of the row/column (like a list).\n\nThe syntax of a `loc` and `iloc` based selection is as follows:\n\n```\ndf.loc[row_name, col_name]\ndf.iloc[row_index, col_index]\n```\n\nNote that the square brackets (`[]`) are different than what you expect from a *method*, but the technicalities are beyond the scope of this tutorial.\n\nAnyway, to for example select the `onset` column (and all rows), we can do the following:\n\n\n```python\n# the : tells loc to select *all* the rows\nevents_df.loc[:, 'onset']\n```\n\nAnd to select the first ten rows (and all columns), we can do the following:\n\n\n```python\n# Just like regular Python lists, iloc accepts slice syntax (like :10, or 5:12:2)\nevents_df.iloc[:10, :]\n```\n\n
\n ToThink In the example above, we used iloc to select the first ten rows of the DataFrame, but we could have also used loc for this purpose (try it out yourself by substituting loc for iloc). Why do you think this is the case? Is this always possible?\n
\n\nNote that you can also select multiple rows and/or columns at the same time by passing a list or tuple to the `loc` and `iloc` indexers:\n\n\n```python\nevents_df.loc[:, ['onset', 'duration']]\n```\n\n\n```python\nevents_df.iloc[[0, 5], :]\n```\n\n
\n ToDo: Let's practice a little. Select from events_df all odd rows in a single statement and store it in a variable named odd_rows. For a refresher on slicing in Python, see here).\n
\n\n\n```python\n''' Implement the ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the ToDo above. '''\nimport numpy as np\nassert(odd_rows.shape == (32, 6))\nnp.testing.assert_array_equal(odd_rows.index, range(1, 64, 2))\nprint(\"Well done!\")\n```\n\n
\n ToDo: Select the last 10 rows and the onset and duration columns in a single statement and store it in a variable named last10_onset_duration.\n
\n\n\n```python\n''' Implement the ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the above ToDo. '''\nassert(last10_onset_duration.shape == (10, 2))\nnp.testing.assert_array_equal(last10_onset_duration.columns, ['onset', 'duration'])\nnp.testing.assert_array_equal(last10_onset_duration.index, range(54, 64))\nprint(\"Well done!\")\n```\n\nAnother useful approach for selecting subsets of observations (i.e., rows) in `DataFrames` is *boolean indexing*. For example, to create an index that selects all trials (rows) in which the participant indicated \"strongly_accept\" (i.e., accepting a trial's proposed bet), we can do:\n\n\n```python\nbool_idx = events_df.loc[:, 'participant_response'] == 'strongly_accept'\nprint(bool_idx)\n```\n\nand we can subsequently pass this index to `loc` (note that `iloc` doesn't work with boolean indices):\n\n\n```python\nstrongly_accept_trials = events_df.loc[bool_idx, :]\nstrongly_accept_trials\n```\n\n
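Boolean indices can also be combined. For example, assuming the events file contains a `gain` column (it is used later in this tutorial) and using an arbitrary cutoff of 20, you can select trials that satisfy two conditions at once with the `&` operator; note the parentheses around each condition:\n\n\n```python\n# Trials that were strongly accepted *and* had a gain larger than 20 (arbitrary example cutoff)\nidx_both = (events_df.loc[:, 'gain'] > 20) & (events_df.loc[:, 'participant_response'] == 'strongly_accept')\nhigh_gain_accepts = events_df.loc[idx_both, :]\nprint(high_gain_accepts.shape)\n```\n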
\n ToDo: Using boolean indexing, select all trials with a reaction time smaller than 1.5 seconds and store the result in a variable named rt_smaller_than_1p5.\n
\n\n\n```python\n''' Implement the ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the above ToDo. '''\nnp.testing.assert_almost_equal(rt_smaller_than_1p5.loc[:, 'RT'].mean(), 1.2196)\nprint('Well done!')\n```\n\nYou can also use `loc` and `iloc` to create new columns. For example, let's add a new column, `\"random_numbers\"`, with some random numbers:\n\n\n```python\n# Note that you can access the shape (nr of rows, nr of cols) of a DataFrame using\n# the .shape attribute (just like numpy arrays)!\nrnd = np.random.randn(events_df.shape[0])\nevents_df.loc[:, 'random_numbers'] = rnd\nevents_df\n```\n\nTo delete a row or column, you can use the `drop` method. Note that you should pass a particular `axis` to `drop`, either `axis=0` (should I drop rows?) or `axis=1` (should I drop columns?):\n\n\n```python\nevents_df = events_df.drop('random_numbers', axis=1)\nevents_df\n```\n\nThere are many, many more things you can do with Pandas, but for now, this should suffice for our purposes.\n\n## Creating design matrices\nBefore we are going to fit any model on our fMRI data ($y$), we are going to focus on what we're going to put into the model: the design matrix ($\\mathbf{X}$)! Note that Nilearn offers different ways to create design matrices (and fit univariate models in general), some offer a relatively \"high-level\" interface (such as `first_level_model_from_bids`) while others only offer the building blocks (such as different HRF models and various types of regression models) that you can use to build the models yourself. \n\nThese multiple interfaces for the same functionality is especially apparent when constructing design matrices using Nilearn. In this section, we'll start at the lowest level and work ourselves up to the more convenient (but less flexible) higher level interfaces.\n\nThat said, let's go back to design matrices. What do you need to construct them? Roughly, you need the following:\n* A particular HRF model\n* Information about event onset, duration, and (optionally) amplitude (weight);\n* Information about fMRI scan timing;\n\nUsing these three components, we can construct HRF-informed regressors for our events on the time scale of our fMRI signal. \n\n### Defining an HRF model\nFirst, let's try to explicitly construct an HRF model using Nilearn. The `nilearn.glm.first_level.hemodynamic_models` module contains various HRF models. Let's start with the good old canonical \"Glover\" HRF:\n\n\n```python\nfrom nilearn.glm.first_level.hemodynamic_models import glover_hrf\n```\n\nThis `glover_hrf` function takes a couple of arguments, most importantly: `tr` (temporal resolution of your fMRI scan in seconds) and `oversampling` (how much to temporally upsample the HRF relative to your `tr`; you can usually safely ignore the other two arguments, `time_length` and `onset`).\n\nSuppose that we want to define our HRF on the scale of 0.01 seconds, and knowing that our fMRI scan has a TR of 1, we can do the following:\n\n\n```python\ncanon_hrf = glover_hrf(tr=1, oversampling=100)\n```\n\nAnd let's plot it:\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Define timepoints corresponding to HRF\nt_hrf = np.linspace(0, 32, 32 * 100, endpoint=False)\n\nplt.figure(figsize=(11, 4))\nplt.plot(t_hrf, canon_hrf)\nplt.grid()\nplt.ylabel('Amplitude (A.U.)', fontsize=15)\nplt.xlabel('Time (seconds)', fontsize=15)\nplt.title('Canonical (\"Glover\"\") HRF', fontsize=20)\nplt.show()\n```\n\n
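As a quick, optional sanity check on this curve: the canonical HRF should peak roughly 5 to 6 seconds after the event, which you can verify numerically with the `t_hrf` and `canon_hrf` variables defined above:\n\n\n```python\n# Time (in seconds) at which the canonical HRF reaches its maximum\nprint(t_hrf[np.argmax(canon_hrf)])\n```\n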
\n ToDo: Sometimes, people like to use a basis set of functions as a model for the HRF, such as the temporal and dispersion derivatives of the canonical HRF (in addition to the canonical HRF itself). The nilearn package also includes these temporal and dispersion derivatives: glover_time_derivative and glover_dispersion_derivative. Import these functions, construct these HRF derivatives, and plot all three together in the same figure (like the one above).\n
\n\n\n```python\n''' Implement your ToDo here. (No test cell.) '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n### Creating HRF-convolved regressors\nNow, to create actual regressors, we need our event onsets and durations. At the lowest level interface, we only need the onsets, durations, and amplitudes (weights) of our events of a particular condition. For now, let's focus on the \"strongly_accept\" condition (note that you can define a \"condition\" using whatever feature from the experiment, such as the gains or losses, or even reaction time). There is no reason to focus only on these type of trials other than for the sake of the example:\n\n\n```python\nidx = events_df.loc[:, 'participant_response'] == 'strongly_accept'\nsa_events = events_df.loc[idx, ['onset', 'duration']]\nsa_events['amplitude'] = 1 # set the same amplitude for each event\nsa_events\n```\n\nThe next step is to create an event regressor (basically arrays with zeros, containing ones at the onset of events) and convolving this array with our HRF model. We can do this easily using the `compute_regressor` function from the `nilearn.glm.first_level.hemodynamic_models` module:\n\n\n```python\nfrom nilearn.glm.first_level.hemodynamic_models import compute_regressor\n```\n\nThe most important arguments of this function are the following:\n* `exp_condition`: an array of shape $3$ (onset, duration, amplitude) $\\times N$ (events for a particular condition)\n* `hrf_model`: a **string** indicating the HRF model ('spm', 'spm + derivative', 'spm + derivative + dispersion',\n'glover', 'glover + derivative', 'glover + derivative + dispersion', 'fir')\n* `frame_times`: an array of shape $T$ (timepoints of fMRI series) containing the acquisition time of each fMRI volume\n\nFirst, let's define `exp_condition` by pulling out the array from `sa_events` (using the `.values` attribute of the `DataFrame`) and transposing it (otherwise it would be of the wrong shape, i.e., $N \\times 3$).\n\n\n```python\nexp_condition = sa_events.values.T\nprint(exp_condition.shape)\n```\n\nLet's for simplicity just use a canonical (Glover) HRF:\n\n\n```python\nhrf_model = 'glover'\n```\n\nAnd we'll define the frame times relative to the middle slice (i.e., assuming for simplicity that each slice was recorded at TR / 2):\n\n\n```python\nn_vols = 453\ntr = 1\nframe_times = np.linspace(tr / 2, n_vols * tr + tr / 2, n_vols, endpoint=False)\n```\n\nAlright, now we have everything to compute the regressor using `compute_regressor`:\n\n\n```python\n# The function compute_regressor returns two things:\n# the convolved regressor(s) and a (default) name(s)\nreg, name = compute_regressor(\n exp_condition=exp_condition,\n hrf_model=hrf_model,\n frame_times=frame_times\n)\n\nplt.figure(figsize=(11, 4))\nplt.plot(frame_times, reg)\nplt.grid()\nplt.ylabel('Amplitude (A.U.)', fontsize=15)\nplt.xlabel('Time (seconds)', fontsize=15)\nplt.title('HRF-convolved regressor', fontsize=20)\nplt.show()\n```\n\nOne of the analyses of NARPS project was to investigate the parametric effect of *gains* (i.e., the amount that could be gained for this trial), effectively asking: which voxels show activity that linearly relates to gains? For this analysis, you'd need both a simple \"gamble-vs-baseline\" regressor (like we computed before, but then for all trials instead of just the \"strongly_accept\" ones) and a \"parametric modulation\" regressor. 
This parametric modulation regressor uses the same onsets and durations as the \"trial-vs-baseline\" regressor, but *also* includes varying modulation values that indicate how much the effect of the gamble is modulated. Importantly, these modulation values should be mean-centered (or \"demeaned\"), i.e., their mean should be zero. You can mean-center any variable $x$ by subtracting the mean ($\\bar{x}$) from all values:\n\n\\begin{align}\nx_{\\mathrm{demeaned}} = x - \\bar{x} \n\\end{align}\n\n
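For example, applied to a small array of made-up modulation values (not the actual gain values), demeaning looks like this:\n\n\n```python\nx = np.array([10., 20., 30., 40.])\nx_demeaned = x - x.mean()\nprint(x_demeaned)         # -15., -5., 5., 15.\nprint(x_demeaned.mean())  # 0.0\n```\n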
\n ToDo (optional): Create a parametric regressor that is modulated by the \"gain\" values. You can do this by making sure that the third row in the array that is passed as the exp_condition argument of compute_regressor reflects the mean-centered gain values. Store the resulting regressor in a variable named parametric_gain_reg.\n
\n\n\n```python\n''' Implement your ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the above ToDo. '''\nnp.testing.assert_almost_equal(parametric_gain_reg.mean(), 0.0095463)\nprint(\"Well done!\")\n```\n\n### Using the high-level `make_first_level_design_matrix` function\nWhile the HRF-related functions and `compute_regressor` already take care of a lot of stuff for us, you might still think that constructing a complete design matrix this way is quite cumbersome. Fortunately, Nilearn has a more high-level function for constructing complete design matrices easily called `make_first_level_design_matrix`:\n\n\n```python\nfrom nilearn.glm.first_level.design_matrix import make_first_level_design_matrix\n```\n\n
\n Tip: in Jupyter notebooks, you can view any (Python) function's documentation, or \"docstring\", by running the function (without brackets) followed by a question mark, e.g., make_first_level_design_matrix?. Try this below. You can remove the pop-up by clicking the cross-symbol in the upper right-corner.\n
\n\n\n```python\nmake_first_level_design_matrix?\n```\n\nAs you can see, this function needs (amongst other things) a `DataFrame` (events) containing the following columns: \"onset\", \"duration\", \"trial_type\", and (optionally) \"modulation\". Then, the function will basically apply the `compute_regressor` function across the different trial types as defined in the \"trial_type\" column. \n\nFor now, let's assume that we'd want to create a design matrix with a different (un-modulated) regressor for each of the participant responses (\"weakly_accept\", \"strongly_accept\", \"weakly_reject\", and \"strongly_reject\"). Below, we'll create the appropriate `DataFrame` for this purpose:\n\n\n```python\n# First, extract relevant columns\npr_events = events_df.loc[:, ['onset', 'duration', 'participant_response']]\n\n# Then, rename the participant_response column to trial_type\n# bonus: why do you think we have to specify axis=1 here?\npr_events = pr_events.rename({'participant_response': 'trial_type'}, axis=1)\n\n# Lastly, create a \"modulation\" column with only ones (indicating no modulation)\npr_events.loc[:, 'modulation'] = 1 # this automatically fills all rows with 1\npr_events\n```\n\nNow, the different conditions in our design are defined by the unique values in our `trial_type` column:\n\n\n```python\nprint(pr_events.loc[:, 'trial_type'].unique())\n```\n\nAh darn, we also have 'NoResp' trials!\n\n
\n ToDo Remove all 'NoResp' trials (i.e., rows) and store the new (filtered) DataFrame in a variable named pr_events_filt.\n
\n\n\n```python\n''' Implement your ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the above ToDo. '''\nassert('NoResp' not in pr_events_filt.loc[:, 'trial_type'].unique())\nprint(\"Well done!\")\n```\n\n
\n ToDo Now you're ready to use make_first_level_design_matrix! Use the appropriate arguments (at least frame_times, events, and hrf_model) and make sure to use drift_model=None.\n This will prevent Nilearn from automatically adding a set of regressors to the design matrix which function as a high-pass filter (we'll take a look at this later). Store the output of the function in a new variable called dm, which should be a new pandas DataFrame of shape T (number of timepoints) x P (number of regressors, i.e., number of conditions + intercept). You can ignore the UserWarning.\n
\n\n\n```python\n''' Implement your ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the ToDo above. '''\nassert(dm.shape == (453, 5))\nnp.testing.assert_array_equal(\n dm.columns,\n ['strongly_accept', 'strongly_reject', 'weakly_accept', 'weakly_reject', 'constant']\n)\nprint(\"Well done!\")\n```\n\nAlright, almost done with this section. One last thing we want to show you is the `plot_design_matrix` function, which you can use to, well, plot the design matrix:\n\n\n```python\nfrom nilearn.plotting import plot_design_matrix\ndm = pd.read_csv('dm_todo.tsv', sep='\\t')\nplot_design_matrix(dm);\n```\n\nThis plot shows you a \"color-coded\" version of your design matrix, with time on the y-axis and the different predictors on the x-axis. Note that the brighter the color (i.e., more yellow), the higher that regressor's value at that timepoint.\n\n
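Besides plotting, it can be useful to check how strongly the condition regressors correlate with each other, since highly correlated regressors make their parameter estimates unstable. One optional way to do this with the `DataFrame` we just loaded (assuming it contains the same columns as checked in the test above, and dropping the `constant` column, which has no variance):\n\n\n```python\nprint(dm.drop('constant', axis=1).corr().round(2))\n```\n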
\n ToDo Run make_first_level_design_matrix again, but add the following arguments: drift_model='cosine' and high_pass=0.01 (i.e., in Hertz, corresponding to a cutoff of 100 seconds), which will add a set of cosine regressors to the design matrix, which function as a high-pass filter for our fMRI data. Store the design matrix in a new variable named dm_with_hp. Then, plot the design matrix again to see how it looks like!\n
\n\n\n```python\n''' Implement your ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the above ToDo. '''\nassert(dm_with_hp.shape == (453, 14))\nassert(all(f'drift_{i}' in dm_with_hp.columns for i in range(1, 10)))\nprint(\"Well done!\")\n```\n\n## First-level models\nIn this section, we will talk about fitting first-level models and computing contrasts from their estimated parameters.\n\n### Constructing and fitting first-level models\nAt last, we can start fitting first-level GLM models! Like with constructing design matrices, there are different ways to go about defining and fitting first-level models in Nilearn:\n* Using the low-level `run_glm` function;\n* Using the `FirstLevelModel` class;\n* Using the `first_level_models_from_bids` function\n\nFor most purposes, we recommend using the `FirstLevelModel` interface, which strikes a nice balance between flexibility (unlike the very high-level `first_level_models_from_bids` function) and ease-of-use (unlike the very low-level `run_glm` function).\n\nLet's start by importing the `FirstLevelModel` class (you may ignore the FutureWarning):\n\n\n```python\n# You may ignore the DeprecationWarning\nfrom nilearn.glm.first_level import FirstLevelModel\n```\n\nNote that, unlike the previous functionality from Nilearn that we discussed, `FirstLevelModel` is not a *function*, but a custom *class* (often called a *type* in Python):\n\n\n```python\nprint(type(FirstLevelModel))\n\n# Contrast this with for example 'make_first_level_design_matrix'\nprint(type(make_first_level_design_matrix))\n```\n\nUsing custom classes, you can instantiate (\"create\") objects that behave as specified in the class. Before we go on, let's digress a little and discuss briefly how custom classes and object work.\n\nOne way to think about classes and objects is to see classes as *building blueprints* and the objects constructed from them as the *actual buildings*. For example, neuroimaging software in Python often work with MRI data stored in objects from the `Nifti1Image` class (from the `nibabel` package). An object is usually *constructed* from a class as follows:\n\n```\nsome_object = SomeClass(optional_arg1, optional_arg2, optional_arg3, etc.)\n```\n\nFor example, to construct a `Nifti1Image` object, we need to pass it both data (as a `numpy` array) and an affine (also as a `numpy` array):\n\n\n```python\nimport numpy as np\n\n# Very small hypothetical brain image of 10x10x10\ndata = np.random.normal(0, 1, size=(10, 10, 10))\naffine = np.eye(4)\n\n# Here, we construct an object (nifti_obj) from a class (nib.Nifti1Image)\nnifti_obj = nib.Nifti1Image(data, affine)\nprint(type(nifti_obj))\n```\n\nThe class (\"blueprint\") contains instructions about the \"properties\" of the object (\"building\") and the \"functionality\" of the object. The \"properties\" of an object are, technically, called *attributes* and can be accessed as: `some_object.attribute_name`. For example, `Nifti1Image` objects have an attribute named `shape`:\n\n\n```python\nprint(nifti_obj.shape)\n```\n\nThe \"functionality\" of objects are, technically, called *methods*. You can see methods as functions that are bound to a particular object. 
For example, `Nifti1Image` objects have a method called `get_fdata`, which extracts the actual brain data:\n\n\n```python\ndata = nifti_obj.get_fdata()\n# 'data' is the underlying numpy array!\nprint(type(data))\n```\n\nNote that you should call methods in the same way as functions (i.e., using round brackets, (), which may or may not contain arguments). Like functions, method calls may (or may not) return one or more things.\n\n
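To make the distinction explicit once more: attributes are accessed *without* round brackets, methods are called *with* round brackets, and whatever a method returns can have attributes and methods of its own:\n\n\n```python\nprint(nifti_obj.shape)               # attribute: no brackets\nprint(nifti_obj.get_fdata().mean())  # method call: brackets; the returned array has its own methods\n```\n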
\n Tip: in Jupyter notebooks (and many other code editors), you can inspect an object's attributes and methods by typing the object's variable name followed by a dot and pressing tab, i.e., your_object.+TAB. Try it out below with the nifti_obj variable to inspect the different attributes/methods from the Nifti1Image class.\n
\n\n\n```python\n# Try out the tip here\n\n```\n\nThis topic of (custom) classes and objects is actually very important in Python. Technically, Python is an \"object-oriented\" programming language in which *everything is an object (from a particular class)*! But an in-depth explanation of object-oriented programming is beyond the scope of this tutorial. For now, knowing the basics about (custom) classes and objects suffices for our purposes.\n\nAlright, back to the `FirstLevelModel` class. \n\n
\n ToDo: Check out the arguments need for the initialization of a FirstLevelModel object below.\n
\n\n\n```python\nFirstLevelModel?\n```\n\nAs you can see, initializing a `FirstLevelModel` object takes a lot of arguments (`t_r`, `slice_time_ref`, `hrf_model`, `drift_model`, etc.). Fortunately, many arguments have sensible defaults. In what follows, we'll go through the most important arguments step by step, after which we'll (finally!) construct a `FirstLevelModel` object.\n\nThe `t_r` argument refers to the TR (temporal resolution, in seconds) of your fMRI scan.\n\n\n```python\nt_r = 1\n```\n\nThe `slice_time_ref` argument refers to which slice you want resample your design matrix (this is used internally to define the `frame_times`), which should be between 0 (first slice) and 1 (last slice). For example, if you slice-time corrected your fMRI data to the first slice, you should set `slice_time_ref` to 0 (middle slice → 0.5, last slice → 1, etc.). If you did not apply any slice-time correction (as is the case for the NARPS data), we recommend setting it to 0.5. (Can you think of a reason why this is a sensible choice for non-slice-time-corrected data?)\n\n\n```python\nslice_time_ref = 0.5\n```\n\nWhile you can construct your design matrix using Nilearn functionality directly, you can actually also leave it up to the `FirstLevelModel` to do this for you! That's why you might have recognized some parameters that you've seen in functions from the previous section (such as `hrf_model`, `drift_model`, and `high_pass`). For now, we'll let the `FirstLevelModel` take care of constructing a design for us, so we'll define these parameters here:\n\n\n```python\nhrf_model = 'glover'\ndrift_model = 'cosine'\nhigh_pass = 0.01 # cutoff: 0.01 Hz (i.e., 100 seconds)\n```\n\nThe next important parameter is `mask_img` (note that we're skipping some parameters, as they represent some more advanced functionality/analyses). This (optional!) parameter refers to a nifti image (a path to a nifti file or a `Nifti1Image` object) with binary (0, 1) data indicting which voxels should be included (1) and which ones should be \"masked\" (0). Fortunately, Fmriprep also returns a brain-mask:\n\n\n```python\nmask_img = os.path.join(\n save_dir, 'derivatives', 'fmriprep', 'sub-001', 'func',\n 'sub-001_task-MGT_run-01_bold_space-MNI152NLin2009cAsym_brainmask.nii.gz'\n)\nprint(mask_img)\n```\n\nNote that including a brain-mask may substantially speed up your analysis (as it reduces the number of voxels that will be analyzed).\n\nWe can also spatially smooth our fMRI data by setting the `smoothing_fwhm` parameter (in millimeters):\n\n\n```python\nsmoothing_fwhm = 3.5\n```\n\nThen, the `standardize` and `signal_scaling` parameters refer to the method that will be used to mean-center the data. In this example, we'll leave it to the default values (`standardize=False` and `signal_scaling=0`), which will only mean-center the data in the time domain (i.e., substract the mean across time from each voxel).\n\nThe `noise_model` parameter refers to the particular noise model of the GLM. Do we assume independent and identically distributed (IID) noise (`noise_model='ols'`) or do we assume that our noise is temporally autocorrelated (`noise_model='ar1'`)? For almost all fMRI data, we should assume that the noise is at least somewhat autocorrelated, so we'll set `noise_model` to `'ar1'`. (Note that `'ar1'` is a bit slower than `'ols'`, so you may consider using `'ols'` when for example testing your code.)\n\n\n```python\nnoise_model = 'ar1'\n```\n\nThen, the `n_jobs` parameter determines how many CPU cores the GLM analysis will use. 
For now, one core should suffice:\n\n\n```python\nn_jobs = 1\n```\n\nLastly, you may want to set the `minimize_memory` argument to `False`. While doing this will increase the memory (RAM) necessary for the GLM analysis, it allows use to retrieve more outputs from the GLM after fitting (such as model fit, $R^2$).\n\n\n```python\nminimize_memory = False\n```\n\nNow we're *finally* ready to constuct our `FirstLevelModel` object!\n\n\n```python\nflm = FirstLevelModel(\n t_r=t_r,\n slice_time_ref=slice_time_ref,\n hrf_model=hrf_model,\n drift_model=drift_model,\n high_pass=high_pass,\n mask_img=mask_img,\n smoothing_fwhm=smoothing_fwhm,\n noise_model=noise_model,\n n_jobs=n_jobs,\n minimize_memory=minimize_memory,\n verbose=True # this will print out some useful info later\n)\n\n\"\"\" Note that, here, we've defined all arguments beforehand,\nbut this is not necessary! We could have done something like:\nflm = FirstLevelModel(\n t_r=1,\n slice_time_ref=0.5,\n hrf_model='glover',\n drift_model='cosine',\n high_pass=0.01,\n mask_img=os.path.join(etc., etc.),\n smoothing_fwhm=3.5,\n noise_model='ar1',\n n_jobs=1,\n minimize_memory=False,\n verbose=True\n)\n\"\"\";\n```\n\nWait, what? Did it really just take a fraction of a second to fit the first-level model? Actually, no. We only *constructed* the first-level model, but we haven't fitted it yet! To do so, we can use the, guess what, `fit` method!\n\n
\n ToDo: check out the arguments needed by the fit method. \n
\n\n\n```python\nflm.fit?\n```\n\nThere are actually two ways of using the `fit` method, depending on which parameters you use:\n1. define `events` (+ optionally `confounds`)\n2. define `design_matrices`\n\nApproach 1 will construct a design matrix for you (by running `make_first_level_design_matrix` on your `events` and, optionally, concatenating this with your `confounds`) while approach 2 assumes that you have constructed a design matrix yourself.\n\nNote, by the way, that the parameter names are all plural (run_img**s**, event**s**, confound**s**, design_matrice**s**). This is because you can either fit a model on a *single* fMRI run or fit a model on *multiple* fMRI runs with a single `fit` call. To fit multiple runs, just pass a list instead of a single object for each parameter (e.g., `design_matrices=[dm_run1, dm_run2]` instead of `design_matrices=dm`). \n\nRegardless of the approach for constructing your design matrix, the `fit` method always needs (one or more) fMRI files (for the `run_img` parameter), which can either be a `Nifti1Image` object or string representing the path to a 4D nifti file. Furthermore, both the `events` and the `confounds` should be a Pandas `DataFrame`. For our `events`, let's use the `pr_events_filt` events, in which the participant responses represent our conditions:\n\n\n```python\n# This is also one possible solution to the ToDo earlier\nevents2use = pr_events.loc[pr_events.loc[:, 'trial_type'] != 'NoResp', :]\n```\n\nAlmost always, you'd want to add some confounds to your design matrix, as confounds represent independent variables that are (often) not of interest to the researcher, but might be related to the signal (the depenent variable, $y$) anyway. Including these confounds in your design matrix will then explain some variance that might otherwise not be accounted for (which will \"end up\" in the model's noise term, $\\hat{\\sigma}^2$), increasing the model's power to detect effects related to the variables you're interested in. Note that including the \"right\" confounds in your model is in no way trivial (and opinions differ to what confounds are the \"right\" ones to include)!\n\nThat said, fortunately, Fmriprep also computes an extensive set of timepoint-by-timepoint confounds for each fMRI run, which are conveniently stored in TSV files. (You can really see that Nilearn was designed with Fmriprep's outputs in mind.) We'll load these confounds below:\n\n\n```python\nconf_path = os.path.join(save_dir, 'derivatives', 'fmriprep', 'sub-001', 'func', 'sub-001_task-MGT_run-01_bold_confounds.tsv')\nconfounds = pd.read_csv(conf_path, sep='\\t')\nconfounds\n```\n\n
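Before deciding which confounds to use, it can help to simply list what is available in this particular file (the exact column names depend on the Fmriprep version that produced it):\n\n\n```python\nprint(confounds.shape)             # (number of volumes, number of confound columns)\nprint(confounds.columns.tolist())  # names of all available confounds\n```\n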
\n ToDo: Read the Fmriprep documentation about the different confounds (scroll down to the \"Confounds\" section). Note that the confound names in the documentation are slightly different that in our DataFrame because this dataset was preprocessed using an older version of Fmriprep.\n
\n\nAs stated in Fmriprep's documentation, it's probably not a good idea to include *all* confounds in our model. Selecting a subset of confound parameter is, however, also not a trivial matter; which confounds are appropriate and explain to most variance in your data likely depends on the characteristics of your particular dataset (field strength, scan technique, the amount of motion in your data, etc.) and your intended analysis (univariate group-level GLM, functional connectivity, MVPA).\n\nHere, we'll be relatively conservative and include only the 6 motion parameters (`X`, `Y`, `Z`, `RotX`, `RotY`, `RotZ`) and the non-steady-state outlier (`NonSteadyStateOutlier00`; i.e., a binary-coded regressor removing the influence of volumes at the beginning of a run that contain too much T1-contrast).\n\n\n```python\nconfounds2use = confounds.loc[:, ['X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ', 'NonSteadyStateOutlier00']]\nconfounds2use\n```\n\nIn fact, we can reuse the `plot_design_matrix` function to visualize this \"confound design matrix\":\n\n\n```python\nax = plot_design_matrix(confounds2use)\n# Rescale the colors a little\nax.get_images()[0].set_clim(0, 0.1)\n```\n\nAlright, now we can *finally* fit our model (this may take 2-5 minutes or so, depending on how fast your computer/server is; go get some coffee)!\n\n\n```python\nflm.fit(run_imgs=fmri, events=events2use, confounds=confounds2use)\n```\n\nThe `fit` function doesn't return the results of the GLM analysis, but stores it as attributes. This includes the constructed design matrix (or matrices, when there is more than one run):\n\n\n```python\n# the design_matrices_ attribute is a list with, in our case, only a single\n# element\nflm_dm = flm.design_matrices_[0]\n\n# Let's plot it again\nax = plot_design_matrix(flm_dm)\nax.get_images()[0].set_clim(0, 0.2)\n```\n\nAnother result from the GLM that we can inspect is model fit, expressed as $R^2$ (only when `minimize_memory=False`):\n\n\n```python\n# Again, it returns a list, and we'll take the first element,\n# because we only have one run\nr2_img = flm.r_square[0]\n```\n\nThe `r2_img` variable is a 3D `Nifti1Image` object with voxelwise $R^2$ values. We can visualize this using the `plotting` module from Nilearn (e.g., using `plot_stat_map` with an arbitrary threshold of 0.2):\n\n\n```python\nfrom nilearn import plotting\nplotting.plot_stat_map(r2_img, threshold=0.2);\n```\n\nAnother thing we can inspect is the residual time series from the model fit, which we can retrieve using the `residuals` attribute (you can ignore the `FutureWarning`):\n\n\n```python\nresids = flm.residuals[0]\nprint(type(resids))\nprint(resids.shape)\n```\n\n
\n ToDo (optional): If you want to brush up your Nilearn skills, try computing the standard deviation across time for each voxel in the resids variable. Store the result (a Nifti1Image) in a new variable called r_std. Then, plot the result using plot_stat_map using a threshold of 3. You should clearly see the brain's veins and contours (effects of movement?) in the plot! Hint: perhaps the function nilearn.image.math_img is of use here ... (but this ToDo can be implemented in various ways)\n
\n\n\n```python\n''' Implement the (optional) ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the above ToDo. '''\nnp.testing.assert_almost_equal(r_std.get_fdata().mean(), 0.34867, decimal=5)\nprint(\"Well done!\")\n```\n\n\n```python\n# Let's remove the residuals variable, which uses a lot of memory\ndel flm.residuals, resids\n```\n\n### Computing contrasts from first-level models\nThe last thing we'll discuss in this section is computing contrasts and thresholding the resulting statistical images. Computing contrasts using `FirstLevelModel` objects is ridiculously easy. This is done using the `compute_contrast` method.\n\n
\n ToDo: Check out the arguments needed for the compute_contrast method. \n
\n\n\n```python\nflm.compute_contrast?\n```\n\nThe `contrast_def` parameter represents the contrast you want compute and can be used in different ways. One way is to specify a contrast vector (a list or a numpy array). For example, if we'd like to evaluate the contrast \"strongly_accept > baseline\", we could specify the contrast vector as follows:\n\n\n```python\ncon_vec = np.zeros(flm.design_matrices_[0].shape[1])\ncon_vec[0] = 1\nprint(con_vec)\n```\n\nNote that values of the contrast vector are assumed to relate to the regressors froms the design matrix in the same order (e.g., the first value from the contrast vector corresponds to the first regressor in your design matrix, i.e., the first column).\n\nThen, we can compute the contrast as following (assuming a single contrast, i.e., `stat_type='t'`, and wanting $z$-values as output, i.e., `output_type='z_score'`):\n\n\n```python\ncon_img = flm.compute_contrast(con_vec, stat_type='t', output_type='z_score')\n```\n\nHow does that look like?\n\n\n```python\nplotting.plot_stat_map(con_img)\n```\n\nThe same contrast can be evaluated by using a string describing a \"formula\". In the \"strongly_accept > baseline\" contrast, this is trivial:\n\n\n```python\ncon_img2 = flm.compute_contrast('strongly_accept', stat_type='t', output_type='z_score')\n# check that it is literally the same; if it's not, it will give an error\nnp.testing.assert_array_equal(con_img.get_fdata(), con_img2.get_fdata())\n```\n\nBut we can also define more complicated contrasts, such as \"(strongly_accept + weakly_accept) > strongly_reject\" (note that this is theoretically a nonsense contrast and just used for the sake of the example):\n\n\n```python\n# Note: the same contrast can be defined using the contrast vector:\n# [1, -2, 1, 0, 0, 0, 0, ..., 0]\ncontrast_def = 'strongly_accept + weakly_accept - 2*strongly_reject'\ncon_img = flm.compute_contrast(contrast_def, stat_type='t', output_type='z_score')\n```\n\nNote that $z$-values are not the only output type possible. The other options are:\n* `'stat'`: either the $t$-value or $F$-value, depending on `stat_type`;\n* `'p_value'`: the $p$-value associated with the $t$ of $F$-value;\n* `'effect_size'`: the dot product of the contrast vector and the parameters ($c\\hat{\\beta}$);\n* `'effect_variance'`: the variance of the dot product of the contrast vector and the parameters ($\\mathrm{var}[c\\hat{\\beta}]$);\n* `'all'`: all of the above\n\n
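For example, if we read the documentation correctly, `output_type='all'` should give you a dictionary with one image per output type; you can check this as follows (optional, as it takes a bit longer than computing a single output):\n\n\n```python\nall_outputs = flm.compute_contrast('strongly_accept', stat_type='t', output_type='all')\nprint(all_outputs.keys())\n```\n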
\n ToDo: Using either the \"formula\" format or the contrast vector format, compute the $t$-values associated with the contrast \"reject > accept\" (regardless of strongly/weakly). Store the resulting image in a variable named reject_gt_accept (gt = greater than) and plot the image using plot_stat_map.\n
\n\n\n```python\n''' Implement your ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the above ToDo. '''\nnp.testing.assert_almost_equal(reject_gt_accept.get_fdata().mean(), 0.04879, decimal=5)\nprint(\"Well done!\")\n```\n\nSo far, we plotted the resulting statistic maps without a threshold (or an arbitrary one), but in a proper analysis, you'd want to statistically threshold your image to reduce the risk of false positives. In Nilearn, this can be done using the function `threshold_stats_img`.\n\n\n```python\nfrom nilearn.glm.thresholding import threshold_stats_img\n```\n\n
\n ToDo: Checkout the arguments needed for the threshold_stats_img function. \n
\n\n\n```python\nthreshold_stats_img?\n```\n\nAs you can see, the primary input to the `threshold_stats_img` function (i.e., `stat_img`) is assumed to consist of $z$-values. \n\nUsing `threshold_stats_img`, there are various ways of thresholding your statistical map. Your desired method can be indicated using the `height_control` parameter (either 'fpr', 'fdr', or 'bonferroni', where 'fpr' refers to simple p-value based thresholding without multiple comparison control). The `alpha` parameter represents the significance level that should be used. There is also an option to only include significant voxels that belong to a cluster larger than a certain number of voxels using `cluster_threshold` (which can make plots a little more visually appealing), but realize that this is by no means a method that guarantees proper false positive control.\n\nImportantly, the `threshold_stats_img` function optionally takes a mask (`mask_img`), which improves sensitivity a lot when using, for example, Bonferroni correction which depends on the number of tests. Also, note that the `threshold_stats_img` function returns two things: the thresholded map and the threshold that was actually used.\n\nAnyway, let's focus on the \"strongly_reject > baseline\" contrast and let's threshold it using the `'bonferroni'` method with an `alpha` of 0.05 while including the previously defined mask:\n\n\n```python\ncon_img = flm.compute_contrast('strongly_reject', stat_type='t', output_type='z_score')\ncon_img_thr, used_threshold = threshold_stats_img(con_img, mask_img=mask_img, height_control='bonferroni', alpha=0.05)\nplotting.plot_stat_map(con_img_thr)\n```\n\n
\n ToDo: threshold the same image (con_img), but this time using 'fdr' correction, an alpha of 0.01 and a cluster threshold of 100. Store the result in a variable named con_img_thr_fdr.\n
\n\n\n```python\n''' Implement your ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the above ToDo. '''\nnp.testing.assert_almost_equal(con_img_thr_fdr.get_fdata().mean(), 0.03169, decimal=5)\nprint(\"Well done!\")\n```\n\nBefore we go on to the next section, we want to show you one last nugget: the `make_glm_report` function (see [documentation](https://nistats.github.io/modules/generated/nistats.reporting.make_glm_report.html#nistats.reporting.make_glm_report)). This function takes an *already fitted* `FirstLevelModel` object, a contrast definition, and optionally several parameters related to computing and thresholding the statistic image(s) and returns a summary of the results (as an HTML file, which can be nicely rendered in Jupyter notebooks).\n\n\n```python\nfrom nilearn.reporting import make_glm_report\n# This may take a couple of seconds\n# You can use your trackpad or keyboard keys to navigate the cell below\n# (make sure you are in \"edit\" mode, i.e., the cell border should be blue;\n# click the cell to enter \"edit\" mode)\nmake_glm_report(flm, 'strongly_accept - weakly_accept')\n```\n\n\n```python\n# Let's clear up some memory for the last section\n%reset -f\n```\n\n## Second-level models\nThe only major Nilearn functionality that we haven't discussed is *second-level models*. Unlike first-level models, there is basically only one way to fit second-level models: using the `SecondLevelModel` class.\n\n\n```python\nfrom nilearn.glm.second_level import SecondLevelModel\n```\n\n
\n ToDo: Check out the arguments needed for initialization of a SecondLevelModel object.\n
\n\n\n```python\nSecondLevelModel?\n```\n\nThe second-level model class allows you to run random-effects (but not mixed-effects!) models on multiple participants. Note that it assumes that all data have been registered/resampled to a common space (e.g., MNI, such as our data). It's interface is very similar to the `FirstLevelModel` class (i.e., it also has a `fit` and `compute_contrast` method). When constructing a `SecondLevelModel` object, the most important parameters are `mask_img`, an optional mask, and `smoothing_fwhm`, an optional smoothing kernel.\n\nFor our example, let's do a (horribly underpowered) group-level analysis of the two subjects whose data we downloaded. First of all, we need to fit both first-level models. We'll define a function below that does this for a single subject:\n\n\n```python\nimport os\nimport numpy as np\nimport pandas as pd\nimport nibabel as nib\nfrom glob import glob\nfrom tqdm.notebook import tqdm\nfrom nilearn import masking\nfrom nilearn.glm.first_level import FirstLevelModel\n\ndef fit_firstlevel(sub, bids_dir, task='MGT', run='01', space='MNI152NLin2009cAsym', \n conf_cols=None, **flm_kwargs):\n \"\"\" Example function of how you could implement a complete\n first-level analysis for a single subject. Note that this is\n just one way of implementing this; there may be (much more efficient)\n ways to do this.\n \n Parameters\n ----------\n sub : str\n Subject-identifier (e.g., 'sub-01')\n bids_dir : str\n Path to BIDS directory (root directory)\n task : str\n Name of task to analyse\n run : str\n Name of run to analyze\n space : str\n Name of space of the data\n conf_cols : list (or None)\n List of confound names to include; if None, only 6 motion params\n are included\n **flm_kwargs : kwargs\n Keyword arguments for the FirstLevelModel constructor\n \n Returns\n -------\n flm : FirstLevelModel\n Fitted FirstLevelModel object\n \"\"\"\n \n # If conf_cols is not set, let's use a \"standard\" set of\n # motion parameters (translation and rotation in 3 dimensions)\n if conf_cols is None:\n # Note: in new versions of Fmriprep, these variables are named differently,\n # i.e., trans_x, trans_y, trans_z, rot_x, rot_y, rot_z\n conf_cols = ['X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']\n\n # We assume it's a BIDS formatted dataset with the Fmriprep outputs in\n # bids_dir/derivatives/fmriprep\n bids_func_dir = os.path.join(bids_dir, sub, 'func')\n fprep_func_dir = os.path.join(bids_dir, 'derivatives', 'fmriprep', sub, 'func')\n \n # Let's find the fMRI files, given a particular space (e.g., T1w)\n funcs = sorted(glob(os.path.join(fprep_func_dir, f'*space-{space}*_preproc*.nii.gz')))\n\n # In this loop, we'll find the events/confounds/masks associated with the funcs\n confs, events, masks = [], [], []\n for func in funcs:\n # First, find the associated mask\n # Note, this doesn't work for newer versions of Fmriprep, which uses\n # a slightly different naming convention for brainmasks (desc-brain_mask)\n mask_path = func.replace('preproc', 'brainmask')\n masks.append(mask_path)\n\n # Find the associated confounds file\n conf_path = func.replace(f'space-{space}_preproc.nii.gz', 'confounds.tsv')\n conf_df = pd.read_csv(conf_path, sep='\\t').loc[:, conf_cols]\n confs.append(conf_df)\n \n # Find the associated events file\n event_path = os.path.join(bids_dir, sub, 'func', f'{sub}_task-{task}_run-{run}_events.tsv')\n event_df = pd.read_csv(event_path, sep='\\t')\n \n # Exclude 'NoResp' trials (not strictly necessary)\n event_df = event_df.query(\"participant_response != 
'NoResp'\")\n \n # Set participant_response as the trial_type\n event_df = event_df.rename({'participant_response': 'trial_type'}, axis=1)\n events.append(event_df)\n\n # In case there are multiple masks, create an intersection;\n # if not, this function does nothing\n mask_img = masking.intersect_masks(masks, threshold=0.8)\n\n # Construct the first-level model!\n # We set the t_r to the first func we have, assuming\n # that the TR is the same for each run (if there are multiple runs)\n flm = FirstLevelModel(\n t_r=nib.load(func).header['pixdim'][4],\n slice_time_ref=0.5,\n mask_img=mask_img,\n **flm_kwargs\n )\n \n # Finally, fit the model and return the fitted model\n flm.fit(run_imgs=funcs, events=events, confounds=confs)\n return flm\n```\n\nNow we can create a loop across our two subjects, estimating a first-level model for each:\n\n\n```python\nbids_dir = os.path.join(os.path.expanduser('~'), 'NARPS')\nflms = []\n\n# This may take about 2-10 minutes, go get some coffee!\nfor sub in tqdm(('sub-001', 'sub-003')):\n flm = fit_firstlevel(sub, bids_dir, drift_model='cosine', high_pass=0.1)\n flms.append(flm)\n```\n\nAlright, so now we have two fitted `FirstLevelModel` objects (stored in `flms`). We will use these for our second-level model. Like `FirstLevelModel` objects, the `SecondLevelModel` object has a `fit` method, but it is a bit more complicated, because there are different ways to specify the input for second-level models (i.e., the `second_level_input` parameter):\n1. a list of fitted `FirstLevelModel` objects (easy, but limited to within-subject/run analyses only);\n2. a list of first-level contrast images (i.e., with $c\\hat{\\beta}$ values);\n3. a pandas `DataFrame` with information about the lower-level inputs (useful when using first-level output from other packages, such as FSL, but for Nilearn-only analyses, this is not very useful)\n\nThe first method is quite efficient (in terms of code), because it assumes that you are only interested in an intercept-only group-level model (i.e., the average of a particular first-level contrast across participants), and you can leave out the second-level design matrix. We'll show this below:\n\n\n```python\n# We don't supply any mask for simplicity\nslm_method1 = SecondLevelModel(smoothing_fwhm=5)\nslm_method1.fit(flms) # we don't have give it a design matrix!\n```\n\nThe second option is to give the `SecondLevelModel` a list of first-level contrast images in combination with a second-level design matrix. First, let's compute a first-level contrast (say \"strongly_accept\") for both participants:\n\n\n```python\nfl_cons = []\nfor flm in flms:\n con = flm.compute_contrast('strongly_accept', stat_type='t', output_type='effect_size')\n fl_cons.append(con)\n```\n\nNow we only need to construct a second-level design matrix, which should be a pandas `DataFrame`. The columns in this `DataFrame` represent the different predictors for our second-level model. 
For simplicity, let's just define an intercept-only design matrix (exactly the same as we did using method 1):\n\n\n```python\n# Specifying the index is not stringly necessary, but it looks nice\nslm_dm = pd.DataFrame([1, 1], columns=['intercept'], index=('sub-001', 'sub-003'))\nslm_dm\n```\n\nUsing the first-level contrast images (`fl_cons`) and the second-level design matrix (`slm_dm`), we can fit our second-level model:\n\n\n```python\nslm_method2 = SecondLevelModel(smoothing_fwhm=5)\nslm_method2.fit(fl_cons, design_matrix=slm_dm)\n```\n\nUsing this fitted second-level model for either method, we can also compute second-level contrasts. The way this is done is slightly different for each method. For method 1 (using fitted `FirstLevelModel` objects as inputs), the only option is to use the `first_level_contrast` parameter in the `compute_contrast` method. You can only use contrast definitions that were already present in the first-level model (because there is no second-level design matrix). \n\nFor example, if we'd want to compute the average (across participants) group-level contrast of \"strongly_accept > baseline\", we'd do:\n\n\n```python\nsr_average_method1 = slm_method1.compute_contrast(first_level_contrast='strongly_accept', output_type='z_score')\n```\n\nFor method 2 (using first-level contrast images + a second-level design matrix), we cannot use the `first_level_contrast` parameter; instead, we should use the `second_level_contrast` parameter, which applies to contrasts based on the second-level design matrix. Because we also defined an intercept-only model for method 2, the only contrast we *can* evaluate is the intercept contrast:\n\n\n```python\nsr_average_method2 = slm_method2.compute_contrast(second_level_contrast='intercept', output_type='z_score')\n```\n\nNote that the the results from both methods are virtually the same; the only difference is the way we used the `SecondLevelModel` interface. \n\n
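If you want to convince yourself that the two approaches really give (virtually) the same result, one optional check is to correlate the voxel values of the two $z$-score maps:\n\n\n```python\nd1 = sr_average_method1.get_fdata().ravel()\nd2 = sr_average_method2.get_fdata().ravel()\nprint(np.corrcoef(d1, d2)[0, 1])  # should be (very close to) 1\n```\n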
\n ToDo Alright then, one last ToDo! Suppose that we want to construct a group-level model that investigates the difference between the first-level \"strongly_accept\" contrast images from sub-001 and sub-003 (a nonsensical analysis for just two subjects, but just ignore that for now). Can you construct an appropriate second-level design matrix for this analysis? Store this design matrix (a DataFrame) in a variable called slm_dm_todo.\n
\n\n\n```python\n''' Implement your ToDo here. '''\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n''' Tests the above ToDo. '''\nnp.testing.assert_array_equal(slm_dm_todo.sum(axis=0).values, [1, 1])\nnp.testing.assert_array_equal(slm_dm_todo.sum(axis=1).values, [1, 1])\nprint(\"Well done!\")\n```\n\n## Concluding remarks\nHopefully this tutorial gave you an idea how to use Nilearn to run statistical models! We recommend you check out the ample excellent tutorials and examples on the Nilearn [website](https://nilearn.github.io/) or to check out the codebase on [Github](https://github.com/nilearn/nilearn). As said before, Nilearn is a relatively novel project and its codebase may still change, so keep an eye out for new releases and extended functionality!\n\nThat said, we hope that this tutorial helps you to get started with your analyses using Nilearn.
\nHappy hacking!\n", "meta": {"hexsha": "5d32fb1a7f4e0b132c34aa5ce93fa9564a323d2c", "size": 97839, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "NI-edu/fMRI-introduction/week_7/nilearn_stats.ipynb", "max_stars_repo_name": "lukassnoek/NI-edu", "max_stars_repo_head_hexsha": "ceb0c0006ad1be7eaf6bcae41cc4557c4e72b7aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2021-02-23T16:06:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T10:28:39.000Z", "max_issues_repo_path": "NI-edu/fMRI-pattern-analysis/week_1/nilearn_stats.ipynb", "max_issues_repo_name": "lukassnoek/NI-edu", "max_issues_repo_head_hexsha": "ceb0c0006ad1be7eaf6bcae41cc4557c4e72b7aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-03-29T09:39:03.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-29T09:39:03.000Z", "max_forks_repo_path": "NI-edu/fMRI-pattern-analysis/week_1/nilearn_stats.ipynb", "max_forks_repo_name": "lukassnoek/NI-edu", "max_forks_repo_head_hexsha": "ceb0c0006ad1be7eaf6bcae41cc4557c4e72b7aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-01-15T14:32:53.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-28T22:33:06.000Z", "avg_line_length": 36.3308577794, "max_line_length": 870, "alphanum_fraction": 0.6224409489, "converted": true, "num_tokens": 15734, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.6297745935070808, "lm_q2_score": 0.44939263446475963, "lm_q1q2_score": 0.2830160636951201}} {"text": "```python\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(color_codes=True)\n\nfrom IPython.display import Image\n```\n\nChapter 9 On-policy Prediction with Approximation\n=========\n\napproximate value function: parameterized function $\\hat{v}(s, w) \\approx v_\\pi(s)$\n\n+ applicable to partially observable problems.\n\n### 9.1 Value-function Approximation\n\n$s \\to u$: $s$ is the state updated and $u$ is the update target that $s$'s estimated value is shifted toward.\n\nWe use machine learning methods and pass to them the $s \\to g$ of each update as a training example. 
Then we interpret the approximate function they produce as an estimated value function.\n\nnot all function approximation methods are equally well suited for use in reinforcement learning:\n+ learn efficiently from incrementally acquired data: many traditional methods assume a static training set over which multiple passes are made.\n+ are able to handle nonstationary target functions.\n\n### 9.2 The Prediction Objective (VE)\n\nwhich states we care most about: a state distribution $\\mu(s) \\geq 0$, $\\sum_s \\mu(s) = 1$.\n+ Often $\\mu(s)$ is chosen to be the fraction of time spent in $s$.\n\nobjective function, the Mean Squared Value Error, denoted $\\overline{VE}$:\n\n\\begin{equation}\n    \\overline{VE}(w) \\doteq \\sum_{s \\in \\mathcal{S}} \\mu(s) \\left [ v_\\pi (s) - \\hat{v}(s, w) \\right ]^2\n\\end{equation}\n\nwhere $v_\\pi(s)$ is the true value and $\\hat{v}(s, w)$ is the approximate value.\n\nNote that the best $\\overline{VE}$ is no guarantee of our ultimate purpose: to find a better policy.\n+ global optimum.\n+ local optimum.\n+ don't converge, or diverge.\n\n### 9.3 Stochastic-gradient and Semi-gradient Methods\n\nSGD: well suited to online reinforcement learning.\n\n\\begin{align}\n    w_{t+1} &\\doteq w_t - \\frac1{2} \\alpha \\nabla \\left [ v_\\pi(S_t) - \\hat{v}(S_t, w_t) \\right ]^2 \\\\\n    &= w_t + \\alpha \\left [ \\color{blue}{v_\\pi(S_t)} - \\hat{v}(S_t, w_t) \\right ] \\nabla \\hat{v}(S_t, w_t) \\\\\n    &\\approx w_t + \\alpha \\left [ \\color{blue}{U_t} - \\hat{v}(S_t, w_t) \\right ] \\nabla \\hat{v}(S_t, w_t) \\\\\n\\end{align}\n\nIn $S_t \\to U_t$, the target $U_t$ is not the true value $v_\\pi(S_t)$, but some, possibly random, approximation to it (the value target accumulated by the various methods introduced earlier):\n+ If $U_t$ is an unbiased estimate, $w_t$ is guaranteed to converge to a local optimum.\n+ Otherwise, like a bootstrapping target or DP target => semi-gradient methods. 
(might do not converge as robustly as gradient methods)\n - significantly faster learning.\n - enable learning to be continual and online.\n \nstate aggregation: states are grouped together, with one estimated value for each group.\n\n### 9.4 Linear Methods\n\nFor every state $s$, there is a real-valued feature vector $x(s) \\doteq (x_1(s), x_2(s), \\dots, x_d(s))^T$:\n\n\\begin{equation}\n \\hat{v}(s, w) \\doteq w^T x(s) \\doteq \\sum_{i=1}^d w_i x_i(s)\n\\end{equation}\n\n### 9.5 Feature Construction for Linear Methods\n\nChoosing features appropriate to the task is an important way of adding prior domain knowledge to reinforcement learing systems.\n\n+ Polynomials\n+ Fourier Basis: low dimension, easy to select, global properities\n+ Coarse Coding\n+ Tile Coding: convolution kernel?\n+ Radial Basis Functions\n\n### 9.6 Selecting Step-Size Parameters Manually\n\nA good rule of thumb for setting the step-size parameter of linear SGD methods is then $\\alpha \\doteq (\\gamma \\mathbf{E}[x^T x])^{-1}$\n\n\n\n### 9.7 Nonlinear Function Approximation: Artificial Neural Networks\n\n+ANN, CNN\n\n\n### 9.8 Least-Squares TD\n\n$w_{TD} = A^{-1} b$: data efficient, while expensive computation\n\n\n### 9.9 Memory-based Function Approximation\n\nnearest neighbor method\n\n\n### 9.10 Kernel-based Function Approximation\n\nRBF function\n\n\n### 9.11 Looking Deeper at On-policy Learning: Interest and Emphasis\n\nmore interested in some states than others:\n+ interest $I_t$: the degree to which we are interested in accurately valuing the state at time $t$.\n+ emphaisis $M_t$: \n\n\\begin{align}\n w_{t+n} & \\doteq w_{t+n-1} + \\alpha M_t \\left [ G_{t:t+n} - \\hat{v}(S_t, w_{t+n-1} \\right ] \\nabla \\hat{v}(S_t, w_{t+n-1}) \\\\ \n M_t & = I_t + \\gamma^n M_{t-n}, \\qquad 0 \\leq t < T\n\\end{align}\n\n\n```python\n\n```\n", "meta": {"hexsha": "dca3b0e603533486eff852ac18ad9f5863e833ad", "size": 6467, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Reinforcement_Learing_An_Introduction/On-policy_Prediction_with_Approximation/note.ipynb", "max_stars_repo_name": "ningchi/book_notes", "max_stars_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-12-31T12:10:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T15:49:34.000Z", "max_issues_repo_path": "Reinforcement_Learing_An_Introduction/On-policy_Prediction_with_Approximation/note.ipynb", "max_issues_repo_name": "ningchi/book_notes", "max_issues_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-12-05T13:04:14.000Z", "max_issues_repo_issues_event_max_datetime": "2017-12-07T16:24:50.000Z", "max_forks_repo_path": "Reinforcement_Learing_An_Introduction/On-policy_Prediction_with_Approximation/note.ipynb", "max_forks_repo_name": "ningchi/book_notes", "max_forks_repo_head_hexsha": "c6f8001f7d5f873896c4b3a8b1409b21ef33c328", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-06-27T07:19:28.000Z", "max_forks_repo_forks_event_max_datetime": "2017-11-19T08:57:35.000Z", "avg_line_length": 32.1741293532, "max_line_length": 199, "alphanum_fraction": 0.5636307407, "converted": true, "num_tokens": 1220, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5273165233795671, "lm_q2_score": 0.5350984286266115, "lm_q1q2_score": 0.2821662430492542}} {"text": "### Project 3: Tennis\n\n\n```python\nfrom IPython.display import HTML\nstyle = \"\"\nHTML(style)\n```\n\n\n\n\n\n\n\n\n## 1: Learning algorithm\n\n### Description of algorithm\n\n\n\nDeep learning and reinforcement learning have recently been combined in different ways, including variants of the \"Deep Q-Network\" (DQN) as used in the Navigation project. While DQN works well with high-dimensional _observation spaces_, it can only handle discrete and low dimensional _action spaces_.\n\nIn physical control tasks such as this one, we have continuous and high dimensional action spaces and algothithms like _deep deterministic policy gradient_ (DDPG) are better suited. In brief, DDPG is a \"model-free, off-policy actor-critic algorithm using deep function approximators\" ([Lillicrap et al., 2015](https://arxiv.org/abs/1509.02971)).\n\nFor this project, the DDPG implementation from Project 2: Continuous Control has been reused without modification - apart from minor configuration changes, including hyper parameters values, number of episodes, and time steps per episode. For reference, the DDPG algorithm is summarized below:\n

\n\n\n\n### Chosen Hyperparameters\n\n```\nBUFFER_SIZE = int(1e5) # replay buffer size\nBATCH_SIZE = 128 # minibatch size\nGAMMA = 0.99 # discount factor\nTAU = 1e-3 # for soft update of target parameters\nLR_ACTOR = 2e-4 # learning rate of the actor \nLR_CRITIC = 3e-4 # learning rate of the critic\nWEIGHT_DECAY = 0 # L2 weight decay\nNUM_AGENTS = 2 # number of agents\nUPDATE_RATE = 20 # number of time steps between updates\nNUM_UPDATES = 10 # how many times train the agens on each update\nEPSILON = 1.0 # initial noise magnitude\nEPSILON_DECAY = 0.01 # noise decay per episode\n```\n\n### Neural Networks\n\nTwo similarly structured networks are used for the _actor_ and the _critic_. For this problem ``state_size = 33``\n and ``action_size = 4``.\n\n#### Actor\nA basic three-layered feed-forward network with fully connected layers is used for the actor:\n\n* BatchNorm 1\n* Layer 1: (state_size, 128)\n* ReLU 1\n* BatchNorm 2\n* Layer 2: (128, 128)\n* ReLU 2\n* BatchNorm 3\n* Layer 3: (128, action_size)\n* Tanh\n\n#### Critic\nA slightly wider three-layered feed-forward network with fully connected layers is used for the critic:\n\n* Layer 1: (state_size, 256)\n* ReLU 1\n* BatchNorm\n* Layer 2: (cat(128, action_size), 128)\n* ReLU 2\n* Layer 3: (128, action_size)\n\n\n## 2: Plot of Rewards\n\nThis setup was able to solve the problem in 1507 episodes. A plot of (average) score per episode is illustraded below:\n\n\n\n## 3: Ideas for Future Work\n\nWhile the problem is solved reasonably quickly with DDPG, the work could possibly be improved in several ways including: \n\n* Search systematically for optimal hyperparameters and neural network size/shape.\n* Try other algorithms, including Truncated Natural Policy Gradient (TNPG) and Trust Region Policy Optimization (TRPO) which have been shown to achieve better performance in some tasks while being more stable.\n\n", "meta": {"hexsha": "72ad98e070b74654083fccf42316b76b75925c19", "size": 7073, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "p3/Report.ipynb", "max_stars_repo_name": "theiberg/rl-p1", "max_stars_repo_head_hexsha": "8eb895d3859e7ec2b0f7385c3753b65d2cad7a7e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "p3/Report.ipynb", "max_issues_repo_name": "theiberg/rl-p1", "max_issues_repo_head_hexsha": "8eb895d3859e7ec2b0f7385c3753b65d2cad7a7e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "p3/Report.ipynb", "max_forks_repo_name": "theiberg/rl-p1", "max_forks_repo_head_hexsha": "8eb895d3859e7ec2b0f7385c3753b65d2cad7a7e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.6476683938, "max_line_length": 356, "alphanum_fraction": 0.5591686696, "converted": true, "num_tokens": 1368, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.519521307073646, "lm_q2_score": 0.5428632831725052, "lm_q1q2_score": 0.2820290424360707}} {"text": "\n\n# Section 7: Capstone Project\n\n---\n\n# Table of Contents\n\n* [System Requirements (Part 1)](#System-Requirements)\n * [Model Introduction](#Model-Introduction)\n * [Requirements Analysis](#Requirements-Analysis)\n * [Visual System Mapping: Causal Loop Diagram](#Visual-System-Mapping:-Causal-Loop-Diagram)\n * [Visual System Mapping: Stock & Flow Diagram](#Visual-System-Mapping:-Stock-&-Flow-Diagram)\n * [Mathematical Specification](#Mathematical-Specification)\n* [System Design (Part 2)](#System-Design)\n * [Differential Specification](#Differential-Specification)\n * [cadCAD Standard Notebook Layout](#cadCAD-Standard-Notebook-Layout)\n 0. [Dependencies](#0.-Dependencies)\n 1. [State Variables](#1.-State-Variables)\n 2. [System Parameters](#2.-System-Parameters)\n 3. [Policy Functions](#3.-Policy-Functions)\n 4. [State Update Functions](#4.-State-Update-Functions)\n 5. [Partial State Update Blocks](#5.-Partial-State-Update-Blocks)\n 6. [Configuration](#6.-Configuration)\n 7. [Execution](#7.-Execution)\n 8. [Simulation Output Preparation](#8.-Simulation-Output-Preparation)\n* [System Validation (Part 3)](#System-Validation)\n * [What-if Matrix](#What-if-Matrix)\n * [System Analysis](#System-Analysis)\n\n---\n\n# System Requirements\n\n
\n\n## Model Introduction\n\n
\n\n\nProject Anthropocene is a model that enables the insightful analysis of the impact of carbon dioxide (CO2) on the Earth's temperature.\n\n## Requirements Analysis\n\n[Link to System Analysis](#System-Analysis)\n\n### Questions\n \n**Planned analysis:** How does the Earth's temperature evolve over the next 100 years under various assumptions regarding CO2 emissions?\n\n1. How will the __Earth's average temperature__ and the __rate of annual temperature change__ develop over the next 100 years, if we keep CO2 emissions __unchanged__ at today\u2019s annual emission levels vs. a __doubling__ of today\u2019s emission levels.\n2. How will the __Earth's average temperature__ and the __rate of annual temperature change__ develop over the next 100 years if we are able to reduce annual CO2 emissions to __zero__ after a given number of years?\n\n## Visual System Mapping: Causal Loop Diagram\n\nThe overall __relationships__ in the model are the following:\n* The __Earth's temperature is determined by what's called radiation balance__, i.e. how much radiation comes in via the Sun, minus how much is dissipating into space. If this balance is positive, heat accumulates, and the Earth warms up; if it is negative, the Earth cools down.\n* The __radiation balance__ is driven by the Sun's radiation, which tends to make the Earth hotter, and the Earth's radiation, which makes heat dissipate and the planet colder.\n* The __radiation balance is influenced by the well-known greenhouse effect__, i.e. the stronger the greenhouse effect, the more radiation from Earth gets trapped in the atmosphere unable to dissipate into space and the higher the radiation balance. Quick primer on the greenhouse effect: https://en.wikipedia.org/wiki/Greenhouse_effect\n* __CO2__ contributes strongly to the greenhouse effect.\n\n
\n\n\n## Visual System Mapping: Stock & Flow Diagram\n\n
\n\n\n## Mathematical Specification\n\n\nThe Anthropocene system is an IVP (initial value problem) which is described by the following equations:\n\n\\begin{align}\n\\tag{1}\ndCO_2(t) = \\begin{cases}\n \\mathcal{N}(\\mu, \\sigma) & \\forall t \\in [0, t_w] \\\\\n \\mathcal{N}(0, \\sigma) & \\forall t \\in [t_w, \\infty]\n \\end{cases}\n\\end{align}\n\n\\begin{align}\n\\tag{2}\n\\alpha(t) = 1 - e^{-\\beta * CO_2(t)}\n\\end{align}\n\n\\begin{align}\n\\tag{3}\nY(t) = \\alpha(t) Z(t)\n\\end{align}\n\n\\begin{align}\n\\tag{4}\nZ(t) = K T(t)^4\n\\end{align}\n\n\\begin{align}\n\\tag{5}\ndT(t) = \\gamma(a + Y(t) - Z(t))\n\\end{align}\n\n\nWhere $\\mathcal{N}$ represents the normal distribution with mean $\\mu$ and standard deviation $\\sigma$. For each timestep $t$ we have the $CO_2(t)$, $\\alpha(t)$, $Y(t)$, $T(t)$ and $Z(t)$ being the atmospheric $CO_2$ concentration, the atmosphere reflectance, the reflected radiation, the surface temperature and the outgoing radiation respectively. Also, we have the $\\beta$, $\\gamma$, $a$, $K$, $t_w$ constants as being a $CO_2$ to reflectance conversion factor, a radiation to temperature conversion factor, the Sun's yearly radiance, a constant for the blackbody effect and the year where emissions are stopped.\n\nThe system is tightly coupled and is both non-linear and stochastic, which can make mathematical treatment difficult, and as such, the characterization of it will be made easier through an experimental approach and computational simulations, which is exactly what **cadCAD** enables us to do.\n\n# System Design\n\n
\n\n## Differential Specification\n\n
\n\n## cadCAD Standard Notebook Layout\n\n### 0. Dependencies\n\n\n```python\nimport pandas as pd\nimport numpy as np\nfrom random import normalvariate\nimport plotly.express as px\n\nfrom cadCAD.configuration.utils import config_sim\nfrom cadCAD.configuration import Experiment\nfrom cadCAD.engine import ExecutionContext, Executor\n```\n\n### 1. State Variables\n\nThe states we are interested in, their state variables and their initial values are:\n\n* The __atmosphere's CO2 concentration__ in parts per million (ppm): `co2`, initial value 400\n* The __earth's surface temperature__ in Kelvin (K): `temperature`, initial value 290\n \n\n\n\n```python\ninitial_state = {\n 'co2': 400,\n 'temperature': 290\n}\n```\n\n### 2. System Parameters\n\n**The system parameters we need to define are:**\n\n* The sun radiation: `sun_radiation` with value `1361`\n* A constant representing the relationship between temperature and radiation: `temperature_constant` with value `1e-4`\n* A constant representing CO2 impact on the radiation balance via the greenhouse effect: `co2_reflectance_factor` with value `1e-3`\n* A unit conversion constant that relates how much gigatons of CO2 we need to have an additional part per million unit in the atmosphere's concentration: `co2_gigatons_to_ppm` with value `1.2e-1`\n* The standard deviation for the stochastic process generating the yearly CO2 concentration: `co2_stdev` with value `40` ppm\n* A constant representing how much heat dissipitates into space: `heat_dissipation_constant` with value `2075`\n\n**There are two parameters which we want to sweep, which are:**\n\n* A parameter which represents the annual CO2 emissions in units of billion tons, which is the `co2_annual_emissions`. Let's sweep three values for it: `0`, `40` and `80`. The first value simulates a scenario where we stop all emissions at once, while using `40` means no additional emissions beyond what we already emmit every year, and `80` means that we are going to double our emissions.\n* The `year_of_the_wakening`, which is the number of years that must pass before we set the `co2_annual_emissions` to zero. Let's sweep four values for it: `0`, `10`, `50` and `100`.\n\n\n\n\n```python\nsystem_params = {\n 'sun_radiation': [1361],\n 'temperature_constant': [1e-4],\n 'co2_reflectance_factor': [1e-3],\n 'co2_gigatons_to_ppm': [1.2e-1],\n 'co2_stdev': [40],\n 'heat_dissipation_constant': [2075],\n 'co2_annual_emissions': [40, 80, 40, 80, 40, 80, 40, 80],\n 'year_of_the_wakening': [0, 0, 10, 10, 50, 50, 100, 100]\n}\n```\n\n\n```python\nassert 1e10 == 1*10**10\n```\n\n### 3. 
Policy Functions\n\n\n```python\ndef p_co2_emissions(params, \n subbstep, \n state_history, \n previous_state):\n # Parameters & variables\n mean = params['co2_annual_emissions']\n std = params['co2_stdev']\n conversion_factor = params['co2_gigatons_to_ppm']\n t_w = params['year_of_the_wakening']\n t = previous_state['timestep']\n \n # Logic\n if t > t_w:\n mean = 0\n else:\n mean = mean\n value = normalvariate(mean, std) * conversion_factor\n\n # Output\n return {'add_co2': value}\n```\n\n\n```python\ndef p_sun_radiation(params, \n substep, \n state_history, \n previous_state):\n # Parameters & variables\n g = params['temperature_constant']\n a = params['sun_radiation']\n \n # Logic\n temp_change = g * a\n \n # Output\n return {'add_temperature': temp_change}\n```\n\n\n```python\ndef p_earth_cooling(params, \n substep, \n state_history, \n previous_state):\n # Parameters & variables\n g = params['temperature_constant']\n K = params['heat_dissipation_constant']\n T = previous_state['temperature']\n \n # Logic\n temp_change = -(g * K * (T / 300) ** 4)\n \n # Output\n return {'add_temperature': temp_change}\n```\n\n\n```python\ndef p_greenhouse_effect(params, \n substep, \n state_history, \n previous_state):\n # Parameters & variables\n g = params['temperature_constant']\n K = params['heat_dissipation_constant']\n beta = params['co2_reflectance_factor']\n T = previous_state['temperature']\n CO2 = previous_state['co2']\n \n # Logic\n alpha = (1 - np.exp(-beta * CO2))\n temp_change = g * alpha * K * (T / 300) ** 4\n \n # Output\n return {'add_temperature': temp_change}\n```\n\n### 4. State Update Functions\n\n\n```python\ndef s_co2(params, \n substep, \n state_history, \n previous_state,\n policy_input):\n # Parameters & variables\n current_co2 = previous_state['co2']\n co2_change = policy_input['add_co2']\n \n # Logic\n new_co2 = max(current_co2 + co2_change, 0)\n \n # Output\n return ('co2', new_co2)\n```\n\n\n```python\ndef s_temperature(params, \n substep, \n state_history, \n previous_state,\n policy_input):\n # Parameters & variables\n current_temp = previous_state['temperature']\n temp_change = policy_input['add_temperature']\n \n # Logic\n new_temp = max(current_temp + temp_change, 0)\n \n # Output\n return ('temperature', new_temp)\n```\n\n### 5. Partial State Update Blocks\n\n\n```python\npartial_state_update_blocks = [\n {\n 'label': 'Temperature dynamics', # Useful metadata to describe our partial state update blocks\n 'policies': {\n 'sun_radiation': p_sun_radiation,\n 'earth_cooling': p_earth_cooling,\n 'greenhouse_effect': p_greenhouse_effect\n },\n 'variables': {\n 'temperature': s_temperature\n \n }\n },\n {\n 'label': 'CO2 dynamics', # Useful metadata to describe our partial state update blocks\n 'policies': {\n 'co2_emissions': p_co2_emissions\n },\n 'variables': {\n 'co2': s_co2\n }\n \n }\n]\n\n```\n\n### 6. Configuration\n\n\n```python\nMONTE_CARLO_RUNS = 50\nSIMULATION_TIMESTEPS = 100\n\nsim_config = config_sim(\n {\n 'N': MONTE_CARLO_RUNS,\n 'T': range(SIMULATION_TIMESTEPS),\n 'M': system_params,\n }\n)\n\nfrom cadCAD import configs\ndel configs[:] # Clear any prior configs\n\nexperiment = Experiment()\nexperiment.append_configs(\n sim_configs=sim_config,\n initial_state=initial_state,\n partial_state_update_blocks=partial_state_update_blocks\n)\n```\n\n### 7. Execution\n\n\n```python\nexec_context = ExecutionContext()\nrun = Executor(exec_context=exec_context, configs=configs)\n\n(system_events, tensor_field, sessions) = run.execute()\n```\n\n### 8. 
Simulation Output Preparation\n\n\n```python\n# Get system events and attribute index\ndf = (pd.DataFrame(system_events)\n .assign(years=lambda df: df.timestep)\n .assign(temperature_celsius=lambda df: df.temperature - 273)\n .query('timestep > 0')\n )\n\n# Clean substeps\nfirst_ind = (df.substep == 0) & (df.timestep == 0)\nlast_ind = df.substep == max(df.substep)\ninds_to_drop = (first_ind | last_ind)\ndf = df.loc[inds_to_drop].drop(columns=['substep'])\n\n# Attribute parameters to each row\ndf = df.assign(**configs[0].sim_config['M'])\nfor i, (_, n_df) in enumerate(df.groupby(['simulation', 'subset', 'run'])):\n df.loc[n_df.index] = n_df.assign(**configs[i].sim_config['M'])\n```\n\n# System Validation\n\n
\n\n[Link to Requirements Analysis](#Requirements-Analysis)\n\n## What-if Matrix\n\nWhat-if-question | Type of experiment | Variables / parameters | Values / Ranges to be tested\n- | - | - | -\nHow will the __Earth's average temperature__ develop over the next 100 years, if we keep CO2 emissions __unchanged__ at today\u2019s annual emission levels vs. a __doubling__ of today\u2019s emission levels? | Parameter Sweep + Monte Carlo runs | co2_annual_emissions | 40 and 80 Gigatons\nHow will the __rate of annual temperature change__ develop over the next 100 years if we keep CO2 emissions __unchanged__ at today\u2019s annual emission levels vs. a __doubling__ of today\u2019s emission levels? | Parameter Sweep + Monte Carlo runs | co2_annual_emissions | 40 and 80 Gigatons\nHow will the __rate of annual temperature change__ develop over the next 100 years if we are able to reduce annual CO2 emissions to __zero__ after a given number of years? | Parameter Sweep + Monte Carlo runs | year_of_the_wakening | 0, 10, 50 and 100 years\nHow will the __Earth's average temperature__ develop over the next 100 years if we are able to reduce annual CO2 emissions to __zero__ after a given number of years? | Parameter Sweep + Monte Carlo runs | year_of_the_wakening | 0, 10, 50 and 100 years\n\n## System Analysis\n\n### Analysis 1: How will the Earth's average temperature develop over the next 100 years, if we keep CO2 emissions unchanged at today\u2019s annual emission levels vs. a doubling of today\u2019s emission levels?\n\n\n```python\nfig_df = df.query('year_of_the_wakening == 100')\n\nfig = px.scatter(\n fig_df,\n x=fig_df.years,\n y=fig_df.temperature_celsius,\n color=fig_df.co2_annual_emissions.astype(str),\n opacity=0.1,\n trendline=\"lowess\",\n labels={'color': 'Yearly CO2 emissions (Gt)'}\n)\n\nfig.show()\n```\n\n\n```python\nfig_df = df.query('year_of_the_wakening == 100')\n\nfig = px.box(\n fig_df,\n x=fig_df.years,\n y=fig_df.temperature_celsius,\n color=fig_df.co2_annual_emissions.astype(str),\n points=False,\n labels={'color': 'Yearly CO2 emissions (Gt)'}\n)\n\nfig.show()\n```\n\n### Analysis 2: How will the rate of annual temperature change develop over the next 100 years if we keep CO2 emissions unchanged at today\u2019s annual emission levels vs. 
a doubling of today\u2019s emission levels?\n\n\n```python\nfig_df = (df.query('year_of_the_wakening == 100')\n .assign(annual_temperature_increase=lambda df: df.temperature.diff())\n .query('years > 1'))\n\nfig = px.scatter(\n fig_df,\n x=fig_df.years,\n y=fig_df.annual_temperature_increase,\n opacity=0.1,\n trendline=\"lowess\",\n color=fig_df.co2_annual_emissions.astype(str),\n labels={'color': 'Yearly CO2 emissions (Gt)'}\n)\n\nfig.show()\n```\n\n### Analysis 3: How will the rate of annual temperature change develop over the next 100 years if we are able to reduce annual CO2 emissions to zero after a given number of years?\n\n\n```python\nfig_df = (df.query('co2_annual_emissions == 40')\n .assign(annual_temperature_increase=lambda df: df.temperature.diff())\n .query('years > 1'))\n\nfig = px.scatter(\n fig_df,\n x=fig_df.years,\n y=fig_df.annual_temperature_increase,\n opacity=0.1,\n trendline=\"lowess\",\n color=fig_df.year_of_the_wakening.astype(str),\n labels={'color': 'Year of the wakening (years)'}\n)\n\nfig.show()\n```\n\n### Analysis 4: How will the Earth's average temperature develop over the next 100 years if we are able to reduce annual CO2 emissions to zero after a given number of years?\n\n\n```python\nfig_df = (df.query('co2_annual_emissions == 40')\n .assign(temperature_increase=lambda df: df.temperature - df.temperature.iloc[0]))\n\nfig = px.scatter(\n fig_df,\n x=fig_df.years,\n y=fig_df.temperature_increase,\n opacity=0.1,\n trendline=\"lowess\",\n color=fig_df.year_of_the_wakening.astype(str),\n labels={'color': 'Year of the wakening (years)'}\n)\n\nfig.show()\n```\n\n# Congratulations!\n\n\n", "meta": {"hexsha": "53d8420438c0d98e63c61048c0a5ae558aaa0381", "size": 26386, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "complete-foundations-bootcamp-output-main/content/section-7-capstone-project/notebook.ipynb", "max_stars_repo_name": "redditech/cadCad-training", "max_stars_repo_head_hexsha": "a1ab040e9baf1863a75b2c85cb3ea567049b6c2a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "complete-foundations-bootcamp-output-main/content/section-7-capstone-project/notebook.ipynb", "max_issues_repo_name": "redditech/cadCad-training", "max_issues_repo_head_hexsha": "a1ab040e9baf1863a75b2c85cb3ea567049b6c2a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "complete-foundations-bootcamp-output-main/content/section-7-capstone-project/notebook.ipynb", "max_forks_repo_name": "redditech/cadCad-training", "max_forks_repo_head_hexsha": "a1ab040e9baf1863a75b2c85cb3ea567049b6c2a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.0789163722, "max_line_length": 630, "alphanum_fraction": 0.5599181384, "converted": true, "num_tokens": 4124, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5039061705290805, "lm_q2_score": 0.5583269943353745, "lm_q1q2_score": 0.28134441761855017}} {"text": "# Deciphering crop circle July 28th, 2012 message with Braille's writing system\n\n\n```\n# import necessary libraries to the notebook\nfrom sympy import Matrix\nimport matplotlib.pyplot as plt\nfrom numpy import array, fliplr, flipud, sort, zeros, matrix\nfrom IPython.core.display import HTML\nfrom pybrl import brailles, ordered_unicodes, matrixcodes, decodings\n\n%matplotlib inline\n```\n\n

Original picture

\n

\n Google for crop circle appearence at 28th of July, 2012:

\n \n

\n\n## Picture encoded to binary matrices\n\n\n```\n# 12*12 bits message\nmessage = [\n[1,0,1,1,1,0,1,1,1,0,1,0],\n[0,0,0,1,0,1,1,1,1,0,1,0],\n[0,1,1,0,0,0,1,0,1,0,0,0],\n[1,0,0,1,0,0,1,1,1,0,1,1],\n[0,0,0,1,0,0,0,1,1,1,1,0],\n[0,0,0,0,0,1,1,0,0,0,0,1],\n[1,0,1,0,0,0,1,1,0,0,1,1],\n[0,0,1,1,0,1,0,1,1,0,1,0],\n[0,1,0,0,0,0,1,0,1,0,0,1],\n[1,1,0,1,0,1,0,1,0,0,1,0],\n[0,0,1,1,1,1,0,1,0,0,1,1],\n[0,0,0,0,1,0,0,0,1,0,1,0]\n]\n\n# inverse\nmessage_inverse = [\n[0,1,0,0,0,1,0,0,0,1,0,1],\n[1,1,1,0,1,0,0,0,0,1,0,1],\n[1,0,0,1,1,1,0,1,0,1,1,1],\n[0,1,1,0,1,1,0,0,0,1,0,0],\n[1,1,1,0,1,1,1,0,0,0,0,1],\n[1,1,1,1,1,0,0,1,1,1,1,0],\n[0,1,0,1,1,1,0,0,1,1,0,0],\n[1,1,0,0,1,0,1,0,0,1,0,1],\n[1,0,1,1,1,1,0,1,0,1,1,0],\n[0,0,1,0,1,0,1,0,1,1,0,1],\n[1,1,0,0,0,0,1,0,1,1,0,0],\n[1,1,1,1,0,1,1,1,0,1,0,1]\n]\n\n# black and white plot\ndef show_matrix(Z):\n N = len(Z)\n G = zeros((N,N,3))\n G[Z<0.5] = [0,0,0]\n G[Z>0.5] = [1,1,1]\n plt.imshow(G, interpolation=\"nearest\")\n plt.show()\n```\n\n## Black and white matrix + inversed version\n\n\n```\n#show_matrix(matrix(message))\n#show_matrix(matrix(message_inverse))\nHTML(\n\"\"\"\n

\n \n \n

\n\"\"\")\n```\n\n\n\n\n\n

\n \n \n

\n\n\n\n\n

Braille symbols (1-64)

\n
    \n
  • https://en.wikipedia.org/wiki/Braille\n
  • https://en.wikipedia.org/wiki/Braille_Patterns\n
\n\n\n```\nHTML('

'+''.join(ordered_unicodes)+'

')\n```\n\n\n\n\n

\u2800\u2801\u2802\u2803\u2804\u2805\u2806\u2807\u2808\u2809\u280a\u280b\u280c\u280d\u280e\u280f\u2810\u2811\u2812\u2813\u2814\u2815\u2816\u2817\u2818\u2819\u281a\u281b\u281c\u281d\u281e\u281f\u2820\u2821\u2822\u2823\u2824\u2825\u2826\u2827\u2828\u2829\u282a\u282b\u282c\u282d\u282e\u282f\u2830\u2831\u2832\u2833\u2834\u2835\u2836\u2837\u2838\u2839\u283a\u283b\u283c\u283d\u283e\u283f

\n\n\n\n

Braille symbol meanings

\n
    \n
  • https://en.wikipedia.org/wiki/Braille_ASCII\n
  • http://www.brailleauthority.org/literary/ebae2002.pdf\n
  • http://www.brailleauthority.org/ueb/symbols_list.pdf\n
  • http://www.htctu.fhda.edu/trainings/manuals/alt/grade_two_day_2.pdf\n
\n\nUsing numpy and pybrl libraries to flip and decode message matrix:\n\n\n```\ndef decode(arr, row_skip = 3, column_skip = 2):\n y = []\n for i in xrange(0, len(arr), row_skip):\n for j in xrange(0, len(arr[0]), column_skip):\n m = Matrix(arr)[i:i+row_skip, j:j+column_skip]\n if row_skip < column_skip:\n l = [m[:][3], m[:][0], m[:][4], m[:][1], m[:][5], m[:][2]]\n else:\n l = m[:]\n x = str(l).replace(\"\\n\", '').replace(\" \", '')\n for mc, d, b in zip(matrixcodes, decodings, brailles):\n if [item for sublist in mc for item in sublist] == l:\n x = b+' '+d\n break\n y.append(x)\n return y\n```\n\n\n```\nprint ' | '.join(decode(message)), \"\\r\\n\"\nprint ' | '.join(decode(fliplr(message))), \"\\r\\n\"\nprint ' | '.join(decode(flipud(message))), \"\\r\\n\"\nprint ' | '.join(decode(fliplr(flipud(message)))), \"\\r\\n\"\n```\n\n \u2821 ch/CHILD/5-character | \u281d n/NOT/5-name/4-6-sion/5-6-tion/6-ation | \u2811 e/EVERY/5-ever/4-6-ance/5-6-ence/5 | \u281f q/QUITE/5-question | \u2807 l/LIKE/5-lord/5-6-ful/RELEASE | \u2803 b/BUT/2 | \u2801 a/1 | \u2818 4-5- | \u2820 6- | \u281d n/NOT/5-name/4-6-sion/5-6-tion/6-ation | \u2813 h/HAVE/5-here/4-5-6-had/8 | \u282b ed | \u2821 ch/CHILD/5-character | \u2813 h/HAVE/5-here/4-5-6-had/8 | \u2810 5- | \u281d n/NOT/5-name/4-6-sion/5-6-tion/6-ation | \u2806 be/bb/; | \u282b ed | \u2809 c/CAN/4-5-6-cannot/3 | \u281a j/JUST/0 | \u281e t/THAT/5-time/4-6-ount/5-6-ment | \u2818 4-5- | \u2804 3- | \u2817 r/RATHER/5-right \r\n \n \u2818 4-5- | \u2838 4-5-6- | \u283b er | \u280a i/9 | \u282b ed | \u280c st/STILL | \u281d n/NOT/5-name/4-6-sion/5-6-tion/6-ation | \u281a j/JUST/0 | \u282b ed | \u2804 3- | \u2803 b/BUT/2 | \u2808 \u00b4/@ | \u281d n/NOT/5-name/4-6-sion/5-6-tion/6-ation | \u2830 5-6- | \u282b ed | \u2802 ea/, | \u281a j/JUST/0 | \u280c st/STILL | \u283a w/WILL/5-work/4-5-word/4-5-6-world | \u2820 6- | \u2803 b/BUT/2 | \u2833 ou/OUT/5-ought | \u2813 h/HAVE/5-here/4-5-6-had/8 | \u2809 c/CAN/4-5-6-cannot/3 \r\n \n \u2824 com/- | \u2832 dis/dd/. | \u2833 ou/OUT/5-ought | \u2830 5-6- | \u2801 a/1 | \u2817 r/RATHER/5-right | \u280c st/STILL | \u2816 to/ff/! | \u2810 5- | \u2835 z/AS | \u2803 b/BUT/2 | \u282e THE/5-there/4-5-these/4-5-6-their | \u2804 3- | \u2830 5-6- | \u2808 \u00b4/@ | \u2835 z/AS | \u2816 to/ff/! | \u282e THE/5-there/4-5-these/4-5-6-their | \u280c st/STILL | \u2835 z/AS | \u2814 in | \u2837 OF | \u2807 l/LIKE/5-lord/5-6-ful/RELEASE | \u2806 be/bb/; \r\n \n \u283a w/WILL/5-work/4-5-word/4-5-6-world | \u2808 \u00b4/@ | \u2806 be/bb/; | \u281e t/THAT/5-time/4-6-ount/5-6-ment | \u2816 to/ff/! | \u2824 com/- | \u2835 z/AS | \u2818 4-5- | \u282e THE/5-there/4-5-these/4-5-6-their | \u2802 ea/, | \u2832 dis/dd/. | \u2821 ch/CHILD/5-character | \u2835 z/AS | \u2832 dis/dd/. 
| \u282e THE/5-there/4-5-these/4-5-6-their | \u2801 a/1 | \u2806 be/bb/; | \u2820 6- | \u2830 5-6- | \u2838 4-5-6- | \u283e WITH | \u2822 en/ENOUGH | \u282e THE/5-there/4-5-these/4-5-6-their | \u2821 ch/CHILD/5-character \r\n \n\n\n\n```\nprint ' | '.join(decode(message, row_skip = 2, column_skip = 3)), \"\\r\\n\"\nprint ' | '.join(decode(fliplr(message), row_skip = 2, column_skip = 3)), \"\\r\\n\"\nprint ' | '.join(decode(flipud(message), row_skip = 2, column_skip = 3)), \"\\r\\n\"\nprint ' | '.join(decode(fliplr(flipud(message)), row_skip = 2, column_skip = 3)), \"\\r\\n\"\n```\n\n \u2828 4-6- | \u281d n/NOT/5-name/4-6-sion/5-6-tion/6-ation | \u283f FOR/full | \u2812 con/cc/: | \u2831 wh/WHICH/5-where/4-5-whose | \u2801 a/1 | \u282f AND | \u2806 be/bb/; | \u2800 SPACE/empty | \u280c st/STILL | \u2831 wh/WHICH/5-where/4-5-whose | \u281c ar | \u282c ing | \u2805 k/KNOWLEDGE/5-know | \u281e t/THAT/5-time/4-6-ount/5-6-ment | \u2832 dis/dd/. | \u2813 h/HAVE/5-here/4-5-6-had/8 | \u2805 k/KNOWLEDGE/5-know | \u282a ow | \u2822 en/ENOUGH | \u2820 6- | \u283a w/WILL/5-work/4-5-word/4-5-6-world | \u2814 in | \u2832 dis/dd/. \r\n \n \u2812 con/cc/: | \u283f FOR/full | \u2835 z/AS | \u2828 4-6- | \u2803 b/BUT/2 | \u282f AND | \u2804 3- | \u281c ar | \u2831 wh/WHICH/5-where/4-5-whose | \u281c ar | \u2821 ch/CHILD/5-character | \u2800 SPACE/empty | \u281a j/JUST/0 | \u2833 ou/OUT/5-ought | \u2805 k/KNOWLEDGE/5-know | \u2829 sh/SHALL | \u280a i/9 | \u282a ow | \u2805 k/KNOWLEDGE/5-know | \u2816 to/ff/! | \u281a j/JUST/0 | \u2811 e/EVERY/5-ever/4-6-ance/5-6-ence/5 | \u283a w/WILL/5-work/4-5-word/4-5-6-world | \u2808 \u00b4/@ \r\n \n \u2804 3- | \u2817 r/RATHER/5-right | \u2822 en/ENOUGH | \u2816 to/ff/! | \u281a j/JUST/0 | \u2828 4-6- | \u2815 o/5-one | \u2814 in | \u2825 u/US/5-under/4-5-upon | \u2828 4-6- | \u2833 ou/OUT/5-ought | \u2816 to/ff/! | \u2800 SPACE/empty | \u2821 ch/CHILD/5-character | \u280e s/SO/5-some/4-5-6-spirit/4-6-less/5-6-ness | \u2823 gh/RELEASE CAPS/< | \u280e s/SO/5-some/4-5-6-spirit/4-6-less/5-6-ness | \u2808 \u00b4/@ | \u283d y/YOU/5-young/5-6-ity/6-ally/ | \u2830 5-6- | \u2805 k/KNOWLEDGE/5-know | \u282b ed | \u283f FOR/full | \u2812 con/cc/: \r\n \n \u2813 h/HAVE/5-here/4-5-6-had/8 | \u280a i/9 | \u2817 r/RATHER/5-right | \u2801 a/1 | \u2811 e/EVERY/5-ever/4-6-ance/5-6-ence/5 | \u2815 o/5-one | \u2828 4-6- | \u2832 dis/dd/. 
| \u2813 h/HAVE/5-here/4-5-6-had/8 | \u281e t/THAT/5-time/4-6-ount/5-6-ment | \u2828 4-6- | \u280d m/MORE/5-mother/4-5-6-many | \u280e s/SO/5-some/4-5-6-spirit/4-6-less/5-6-ness | \u2823 gh/RELEASE CAPS/< | \u280c st/STILL | \u2800 SPACE/empty | \u2818 4-5- | \u283d y/YOU/5-young/5-6-ity/6-ally/ | \u2820 6- | \u2823 gh/RELEASE CAPS/< | \u2812 con/cc/: | \u283f FOR/full | \u282e THE/5-there/4-5-these/4-5-6-their | \u2805 k/KNOWLEDGE/5-know \r\n \n\n\n\n```\nprint ' | '.join(decode(message_inverse)), \"\\r\\n\"\nprint ' | '.join(decode(fliplr(message_inverse))), \"\\r\\n\"\nprint ' | '.join(decode(flipud(message_inverse))), \"\\r\\n\"\nprint ' | '.join(decode(fliplr(flipud(message_inverse)))), \"\\r\\n\"\n```\n\n \u281e t/THAT/5-time/4-6-ount/5-6-ment | \u2822 en/ENOUGH | \u282e THE/5-there/4-5-these/4-5-6-their | \u2820 6- | \u2838 4-5-6- | \u283c ble/# | \u283e WITH | \u2827 v/VERY | \u281f q/QUITE/5-question | \u2822 en/ENOUGH | \u282c ing | \u2814 in | \u281e t/THAT/5-time/4-6-ount/5-6-ment | \u282c ing | \u282f AND | \u2822 en/ENOUGH | \u2839 th/THIS/5-through/4-5-those | \u2814 in | \u2836 were/gg/() | \u2825 u/US/5-under/4-5-upon | \u2821 ch/CHILD/5-character | \u2827 v/VERY | \u283b er | \u2828 4-6- \r\n \n \u2827 v/VERY | \u2807 l/LIKE/5-lord/5-6-ful/RELEASE | \u2804 3- | \u2835 z/AS | \u2814 in | \u2833 ou/OUT/5-ought | \u2822 en/ENOUGH | \u2825 u/US/5-under/4-5-upon | \u2814 in | \u283b er | \u283c ble/# | \u2837 OF | \u2822 en/ENOUGH | \u280f p/PEOPLE/5-part | \u2814 in | \u283d y/YOU/5-young/5-6-ity/6-ally/ | \u2825 u/US/5-under/4-5-upon | \u2833 ou/OUT/5-ought | \u2805 k/KNOWLEDGE/5-know | \u281f q/QUITE/5-question | \u283c ble/# | \u280c st/STILL | \u282c ing | \u2836 were/gg/() \r\n \n \u281b g/GO/5-6-ong/7 | \u280d m/MORE/5-mother/4-5-6-many | \u280c st/STILL | \u280f p/PEOPLE/5-part | \u283e WITH | \u2828 4-6- | \u2833 ou/OUT/5-ought | \u2829 sh/SHALL | \u282f AND | \u280a i/9 | \u283c ble/# | \u2811 e/EVERY/5-ever/4-6-ance/5-6-ence/5 | \u283b er | \u280f p/PEOPLE/5-part | \u2837 OF | \u280a i/9 | \u2829 sh/SHALL | \u2811 e/EVERY/5-ever/4-6-ance/5-6-ence/5 | \u2833 ou/OUT/5-ought | \u280a i/9 | \u282b ed | \u2808 \u00b4/@ | \u2838 4-5-6- | \u2839 th/THIS/5-through/4-5-those \r\n \n \u2805 k/KNOWLEDGE/5-know | \u2837 OF | \u2839 th/THIS/5-through/4-5-those | \u2821 ch/CHILD/5-character | \u2829 sh/SHALL | \u281b g/GO/5-6-ong/7 | \u280a i/9 | \u2827 v/VERY | \u2811 e/EVERY/5-ever/4-6-ance/5-6-ence/5 | \u283d y/YOU/5-young/5-6-ity/6-ally/ | \u280d m/MORE/5-mother/4-5-6-many | \u281e t/THAT/5-time/4-6-ount/5-6-ment | \u280a i/9 | \u280d m/MORE/5-mother/4-5-6-many | \u2811 e/EVERY/5-ever/4-6-ance/5-6-ence/5 | \u283e WITH | \u2839 th/THIS/5-through/4-5-those | \u281f q/QUITE/5-question | \u280f p/PEOPLE/5-part | \u2807 l/LIKE/5-lord/5-6-ful/RELEASE | \u2801 a/1 | \u281d n/NOT/5-name/4-6-sion/5-6-tion/6-ation | \u2811 e/EVERY/5-ever/4-6-ance/5-6-ence/5 | \u281e t/THAT/5-time/4-6-ount/5-6-ment \r\n \n\n\n\n```\nprint ' | '.join(toBraille(message_inverse, row_skip = 2, column_skip = 3)), \"\\r\\n\"\nprint ' | '.join(toBraille(fliplr(message_inverse), row_skip = 2, column_skip = 3)), \"\\r\\n\"\nprint ' | '.join(toBraille(flipud(message_inverse), row_skip = 2, column_skip = 3)), \"\\r\\n\"\nprint ' | '.join(toBraille(fliplr(flipud(message_inverse)), row_skip = 2, column_skip = 3)), \"\\r\\n\"\n```\n\n \u2817 r/RATHER/5-right | \u2822 en/ENOUGH | \u2800 SPACE/empty | \u282d x/IT | \u280e s/SO/5-some/4-5-6-spirit/4-6-less/5-6-ness | \u283e WITH | \u2810 5- | \u2839 
th/THIS/5-through/4-5-those | \u283f FOR/full | \u2833 ou/OUT/5-ought | \u280e s/SO/5-some/4-5-6-spirit/4-6-less/5-6-ness | \u2823 gh/RELEASE CAPS/< | \u2813 h/HAVE/5-here/4-5-6-had/8 | \u283a w/WILL/5-work/4-5-word/4-5-6-world | \u2821 ch/CHILD/5-character | \u280d m/MORE/5-mother/4-5-6-many | \u282c ing | \u283a w/WILL/5-work/4-5-word/4-5-6-world | \u2815 o/5-one | \u281d n/NOT/5-name/4-6-sion/5-6-tion/6-ation | \u281f q/QUITE/5-question | \u2805 k/KNOWLEDGE/5-know | \u282b ed | \u280d m/MORE/5-mother/4-5-6-many \r\n \n \u282d x/IT | \u2800 SPACE/empty | \u280a i/9 | \u2817 r/RATHER/5-right | \u283c ble/# | \u2810 5- | \u283b er | \u2823 gh/RELEASE CAPS/< | \u280e s/SO/5-some/4-5-6-spirit/4-6-less/5-6-ness | \u2823 gh/RELEASE CAPS/< | \u281e t/THAT/5-time/4-6-ount/5-6-ment | \u283f FOR/full | \u2825 u/US/5-under/4-5-upon | \u280c st/STILL | \u283a w/WILL/5-work/4-5-word/4-5-6-world | \u2816 to/ff/! | \u2835 z/AS | \u2815 o/5-one | \u283a w/WILL/5-work/4-5-word/4-5-6-world | \u2829 sh/SHALL | \u2825 u/US/5-under/4-5-upon | \u282e THE/5-there/4-5-these/4-5-6-their | \u2805 k/KNOWLEDGE/5-know | \u2837 OF \r\n \n \u283b er | \u2828 4-6- | \u281d n/NOT/5-name/4-6-sion/5-6-tion/6-ation | \u2829 sh/SHALL | \u2825 u/US/5-under/4-5-upon | \u2817 r/RATHER/5-right | \u282a ow | \u282b ed | \u281a j/JUST/0 | \u2817 r/RATHER/5-right | \u280c st/STILL | \u2829 sh/SHALL | \u283f FOR/full | \u281e t/THAT/5-time/4-6-ount/5-6-ment | \u2831 wh/WHICH/5-where/4-5-whose | \u281c ar | \u2831 wh/WHICH/5-where/4-5-whose | \u2837 OF | \u2802 ea/, | \u280f p/PEOPLE/5-part | \u283a w/WILL/5-work/4-5-word/4-5-6-world | \u2814 in | \u2800 SPACE/empty | \u282d x/IT \r\n \n \u282c ing | \u2835 z/AS | \u2828 4-6- | \u283e WITH | \u282e THE/5-there/4-5-these/4-5-6-their | \u282a ow | \u2817 r/RATHER/5-right | \u280d m/MORE/5-mother/4-5-6-many | \u282c ing | \u2821 ch/CHILD/5-character | \u2817 r/RATHER/5-right | \u2832 dis/dd/. | \u2831 wh/WHICH/5-where/4-5-whose | \u281c ar | \u2833 ou/OUT/5-ought | \u283f FOR/full | \u2827 v/VERY | \u2802 ea/, | \u281f q/QUITE/5-question | \u281c ar | \u282d x/IT | \u2800 SPACE/empty | \u2811 e/EVERY/5-ever/4-6-ance/5-6-ence/5 | \u283a w/WILL/5-work/4-5-word/4-5-6-world \r\n \n\n\nWe can see from above decoding samples, that one of them gives reasonable continuous and unbreaking form of the message, while still leaving it up to a reader, that it meant with few words, namely CHILD and Q PLANET:\n\n\n```\nHTML(\"\"\"\n\n

\n\u2805 KNOWLEDGE \u2837 OF \u2839 THIS \u2821 CHILD \u2829 SHALL \n

\n

\n\u281b g \u280a i \u2827 v \u2811 e \u283d YOU \u280d MORE \n

\n

\n\u281e t \u280a i \u280d m \u2811 e \u283e WITH \u2839 THIS \n

\n

\n\u281f q \u280f p \u2807 l \u2801 a \u281d n \u2811 e \u281e t\n

\n\n\"\"\")\n```\n\n\n\n\n\n\n

\n\u2805 KNOWLEDGE \u2837 OF \u2839 THIS \u2821 CHILD \u2829 SHALL \n

\n

\n\u281b g \u280a i \u2827 v \u2811 e \u283d YOU \u280d MORE \n

\n

\n\u281e t \u280a i \u280d m \u2811 e \u283e WITH \u2839 THIS \n

\n

\n\u281f q \u280f p \u2807 l \u2801 a \u281d n \u2811 e \u281e t\n

\n\n\n\n\n\n## The [MIT](http://choosealicense.com/licenses/mit/) License\n\nCopyright (c) 2015 Marko Manninen\n", "meta": {"hexsha": "6aea2a7699d53dec68faabe9d28cf272af805098", "size": 21568, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Crop circle July 28th, 2012 Braille message.ipynb", "max_stars_repo_name": "markomanninen/cropcirclesquaredecodings", "max_stars_repo_head_hexsha": "53fb352101d30525409e8367efb2275c2583df7b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-05T20:58:03.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-05T20:58:03.000Z", "max_issues_repo_path": "Crop circle July 28th, 2012 Braille message.ipynb", "max_issues_repo_name": "markomanninen/cropcirclesquaredecodings", "max_issues_repo_head_hexsha": "53fb352101d30525409e8367efb2275c2583df7b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Crop circle July 28th, 2012 Braille message.ipynb", "max_forks_repo_name": "markomanninen/cropcirclesquaredecodings", "max_forks_repo_head_hexsha": "53fb352101d30525409e8367efb2275c2583df7b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.4674157303, "max_line_length": 733, "alphanum_fraction": 0.5304617953, "converted": true, "num_tokens": 6706, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5350984286266115, "lm_q2_score": 0.523420348936324, "lm_q1q2_score": 0.28008140622701966}} {"text": "# What is Deep Learning\n\n\nTo understand what deep learning is, we first need to understand the relationship deep learning has with machine learning, neural networks, and artificial intelligence.\n\n\nThe best way to think of this relationship is to visualize them as concentric circles:\n\n\n\n\nAt the outer most ring you have artificial intelligence (using computers to reason). One layer inside of that is machine learning. With artificial neural networks and deep learning at the center.\n\n\nBroadly speaking, deep learning is a more approachable name for an artificial neural network. The \u201cdeep\u201d in deep learning refers to the depth of the network. An artificial neural network can be very shallow.\n\n\nNeural networks are inspired by the structure of the cerebral cortex. At the basic level is the perceptron, the mathematical representation of a biological neuron. Like in the cerebral cortex, there can be several layers of interconnected perceptrons.\n\n\nThe first layer is the input layer. Each node in this layer takes an input, and then passes its output as the input to each node in the next layer. There are generally no connections between nodes in the same layer and the last layer produces the outputs.\n\n\nWe call the middle part the hidden layer. These neurons have no connection to the outside (e.g. input or output) and are only activated by nodes in the previous layer.\n\n\n\n\nThink of deep learning as the technique for learning in neural networks that utilizes multiple layers of abstraction to solve pattern recognition problems. 
In the 1980s, most neural networks were a single layer due to the cost of computation and availability of data.\n\n\nMachine learning is considered a branch or approach of Artificial intelligence, whereas deep learning is a specialized type of machine learning.\n\n\nMachine learning involves computer intelligence that doesn\u2019t know the answers up front. Instead, the program will run against training data, verify the success of its attempts, and modify its approach accordingly. Machine learning typical requires a sophisticated education, spanning software engineering and computer science to statistical methods and linear algebra.\n\n\nThere are two broad classes of machine learning methods:\n\n * Supervised learning\n * Unsupervised learning\n\n\nIn supervised learning, a machine learning algorithm uses a labeled dataset to infer the desired outcome. This takes a lot of data and time, since the data needs to be labeled by hand. Supervised learning is great for classification and regression problems.\n\nFor example, let\u2019s say that we were running a company and want to determine the effect of bonuses on employee retention. If we had historical data \u2013 i.e. employee bonus amount and tenure \u2013 we could use supervised machine learning.\n\nWith unsupervised learning, there aren\u2019t any predefined or corresponding answers. The goal is to figure out the hidden patterns in the data. It\u2019s usually used for clustering and associative tasks, like grouping customers by behavior. Amazon\u2019s \u201ccustomers who also bought\u2026\u201d recommendations are a type of associative task.\n\nWhile supervised learning can be useful, we often have to resort to unsupervised learning. Deep learning has proven to be an effective unsupervised learning technique.\n\n## Why is Deep Learning Important?\n\n\n\nComputers have long had techniques for recognizing features inside of images. The results weren\u2019t always great. Computer vision has been a main beneficiary of deep learning. Computer vision using deep learning now rivals humans on many image recognition tasks.\n\n\nFacebook has had great success with identifying faces in photographs by using deep learning. It\u2019s not just a marginal improvement, but a game changer: \u201cAsked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera.\u201d\n\n\nSpeech recognition is a another area that\u2019s felt deep learning\u2019s impact. Spoken languages are so vast and ambiguous. Baidu \u2013 one of the leading search engines of China \u2013 has developed a voice recognition system that is faster and more accurate than humans at producing text on a mobile phone. In both English and Mandarin.\n\n\nWhat is particularly fascinating, is that generalizing the two languages didn\u2019t require much additional design effort: \u201cHistorically, people viewed Chinese and English as two vastly different languages, and so there was a need to design very different features,\u201d Andrew Ng says, chief scientist at Baidu. \u201cThe learning algorithms are now so general that you can just learn.\u201d\n\nGoogle is now using deep learning to manage the energy at the company\u2019s data centers. They\u2019ve cut their energy needs for cooling by 40%. 
That translates to about a 15% improvement in power usage efficiency for the company and hundreds of millions of dollars in savings.\n\n## Deep Learning Microservices\n\nHere\u2019s a quick overview of some deep learning use cases and microservices.\n\nIllustration Tagger. An implementation of Illustration2Vec, this microservice can tag an image with the safe, questionable, or explicit rating, the copyright, and general category tag to understand what\u2019s in the image. DeepFilter is a style transfer service for applying artistic filters to images.\n\nThe age classifier uses face detection to determine the age of a person in a photo. The Places 365 Classifier uses a pre-trained CNN and based on Places: An Image Database for Deep Scene Understanding B. Zhou, et al., 2016 to identify particular locations in images, such as a courtyard, drugstore, hotel room, glacier, mountain, etc. Lastly, there is InceptionNet, a direct implementation of Google\u2019s InceptionNet using TensorFlow. It takes an image (such as a car), and returns the top 5 classes the model predicts are relevant to the image.\n\n## Open Source Deep Learning Frameworks\n\nDeep learnings is made accessible by a number of open source projects. Some of the most popular technologies include, but are not limited to, Deeplearning4j (DL4j), Theano, Torch, TensorFlow, and Caffe. The deciding factors on which one to use are the tech stack they target, and if they are low-level, academic, or application focused. Here\u2019s an overview of each:\n\nDL4J:\n\n * JVM-based\n * Distrubted\n * Integrates with Hadoop and Spark\n \n \nTheano:\n\n * Very popular in Academia\n * Fairly low level\n * Interfaced with via Python and Numpy\n\n\nTorch:\n\n * Lua based\n * In house versions used by Facebook and Twitter\n * Contains pretrained models\n\n\nTensorFlow:\n\n * Google written successor to Theano\n * Interfaced with via Python and Numpy\n * Highly parallel\n * Can be somewhat slow for certain problem sets\n\n\n\nCaffe:\n\n * Not general purpose. Focuses on machine-vision problems\n * Implemented in C++ and is very fast\n * Not easily extensible\n * Has a Python interface\n\n## McCulloch and Pitts Neuron\n\nIn 1943, McCulloch and Pitts introduced a mathematical model of a neuron. It consisted of three components:\n\n1. A set of **weights** $w_i$ corresponding to synapses (inputs)\n2. An **adder** for summing input signals; analogous to cell membrane that collects charge\n3. An **activation function** for determining when the neuron fires, based on accumulated input\n\nThe neuron model is shown schematically below. On the left are input nodes $\\{x_i\\}$, usually expressed as a vector. The strength with which the inputs are able to deliver the signal along the synapse is determined by their corresponding weights $\\{w_i\\}$. The adder then sums the inputs from all the synapses:\n\n$$h = \\sum_i w_i x_i$$\n\nThe parameter $\\theta$ determines whether or not the neuron fires given a weighted input of $h$. If it fires, it returns a value $y=1$, otherwise $y=0$. For example, a simple **activation function** is using $\\theta$ as a simple fixed threshold:\n\n$$y = g(h) = \\left\\{ \\begin{array}{l}\n1, \\text{if } h \\gt \\theta \\\\\n0, \\text{if } h \\le \\theta\n\\end{array} \\right.$$\n\nthis activation function may take any of several forms, such as a logistic function.\n\n\n\nA single neuron is not interesting, nor useful, from a learning perspective. It cannot learn; it simply receives inputs and either fires or not. 
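\n\nEven so, it is easy to see exactly what such a neuron computes. Here is a minimal NumPy sketch of a single McCulloch-Pitts neuron evaluating one input vector; the particular weights, inputs, and threshold are made-up values used only for illustration:\n\n\n```python\nimport numpy as np\n\n# made-up synaptic weights, inputs and threshold, for illustration only\nw = np.array([0.5, -0.2, 0.8])   # weights w_i\nx = np.array([1.0, 1.0, 0.0])    # inputs x_i\ntheta = 0.2                      # firing threshold\n\nh = np.sum(w * x)                # adder: h = sum_i w_i x_i\ny = 1 if h > theta else 0        # activation: fire only if h exceeds theta\n# h is about 0.3 here, so the neuron fires and y == 1\n```\n\n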
Only when neurons are joined as a **network** can they perform useful work.\n\nLearning takes place by changing the weights of the connections in a neural network, and by changing the parameters of the activation functions of neurons.\n\n## Perceptron\n\nA collection of McCullough and Pitts neurons, along with a set of input nodes connected to the inputs via weighted edges, is a perceptron, the simplest neural network.\n\nEach neuron is independent of the others in the perceptron, in the sense that its behavior and performance depends only on its own weights and threshold values, and not of those for the other neurons. Though they share inputs, they operate independently.\n\nThe number of inputs and outputs are determined by the data. Weights are stored as a `N x K` matrix, with N observations and K neurons, with $w_{ij}$ specifying the weight on the *i*th observation on the *j*th neuron.\n\n\n\nIn order to use the perceptron for statistical learning, we compare the outputs $y_j$ from each neuron to the obervation target $t_j$, and adjust the input weights when they do not correspond (*e.g.* if a neuron fires when it should not have).\n\n$$t_j - y_j$$\n\nWe use this difference to update the weight $w_{ij}$, based on the input and a desired **learning rate**. This results in an update rule:\n\n$$w_{ij} \\leftarrow w_{ij} + \\eta (t_j - y_j) x_i$$\n\nAfter an incremental improvement, the perceptron is shown the training data again, resulting in another update. This is repeated until the performance no longer improves. Having a learning rate less than one results in a more stable learning rate, though this stability is traded off against having to expose the network to the data multiple times. Typical learning rates are in the 0.1-0.4 range.\n\nAn additional input node is typically added to the perceptron model, which is a constant value (usually -1, 0, or 1) that acts analogously to an intercept in a regression model. This establishes a baseline input for the case when all inputs are zero.\n\n\n\n## Learning with Perceptrons\n\n1. Initialize weights $w_{ij}$ to small, random numbers.\n2. For each t in T iterations\n * compute activation for each neuron *j* connected to each input vector *i*\n $$y_j = g\\left( h=\\sum_i w_{ij} x_i \\right) = \\left\\{ \\begin{array}{l}\n1, \\text{if } h \\gt 0 \\\\\n0, \\text{if } h \\le 0\n\\end{array} \\right.$$\n * update weights\n $$w_{ij} \\leftarrow w_{ij} + \\eta (t_j - y_j) x_i$$\n\n\nThis algorithm is $\\mathcal{O}(Tmn)$\n\n### Example: Logical functions\n\nLet's see how the perceptron learns by training it on a couple of of logical functions, AND and OR. For two variables `x1` and `x2`, the AND function returns 1 if both are true, or zero otherwise; the OR function returns 1 if either variable is true, or both. These functions can be expressed as simple lookup tables.\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()\nfrom scipy import optimize\nfrom ipywidgets import *\nfrom IPython.display import SVG\nfrom sklearn import datasets\n\n```\n\n\n```python\nAND = pd.DataFrame({'x1': (0,0,1,1), 'x2': (0,1,0,1), 'y': (0,0,0,1)})\nAND\n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
x1x2y
0000
1010
2100
3111
\n
\n\n\n\nFirst, we need to initialize weights to small, random values (can be positive and negative).\n\n\n```python\nw = np.random.randn(3)*1e-4\n```\n\nThen, a simple activation function for calculating $g(h)$:\n\n\n```python\ng = lambda inputs, weights: np.where(np.dot(inputs, weights)>0, 1, 0)\n```\n\nFinally, a training function that iterates the learning algorithm, returning the adapted weights.\n\n\n```python\ndef train(inputs, targets, weights, eta, n_iterations):\n\n # Add the inputs that match the bias node\n inputs = np.c_[inputs, -np.ones((len(inputs), 1))]\n\n for n in range(n_iterations):\n\n activations = g(inputs, weights);\n weights -= eta*np.dot(np.transpose(inputs), activations - targets)\n \n return(weights)\n```\n\nLet's test it first on the AND function.\n\n\n```python\ninputs = AND[['x1','x2']]\ntarget = AND['y']\n\nw = train(inputs, target, w, 0.25, 10)\n```\n\nChecking the performance:\n\n\n```python\ng(np.c_[inputs, -np.ones((len(inputs), 1))], w)\n```\n\n\n\n\n array([0, 0, 0, 1])\n\n\n\nThus, it has learned the function perfectly. Now for OR:\n\n\n```python\nOR = pd.DataFrame({'x1': (0,0,1,1), 'x2': (0,1,0,1), 'y': (0,1,1,1)})\nOR\n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
x1x2y
0000
1011
2101
3111
\n
\n\n\n\n\n```python\nw = np.random.randn(3)*1e-4\n```\n\n\n```python\ninputs = OR[['x1','x2']]\ntarget = OR['y']\n\nw = train(inputs, target, w, 0.25, 20)\n```\n\n\n```python\ng(np.c_[inputs, -np.ones((len(inputs), 1))], w)\n```\n\n\n\n\n array([0, 1, 1, 1])\n\n\n\nAlso 100% correct.\n\n### Exercise: XOR\n\nNow try running the model on the XOR function, where a one is returned for either `x1` or `x2` being true, but *not* both. What happens here?\n\n\n```python\n# Write your answer here\n```\n\nLet's explore the problem graphically:\n\n\n```python\nAND.plot(kind='scatter', x='x1', y='x2', c='y', s=50, colormap='winter')\nplt.plot(np.linspace(0,1.4), 1.5 - 1*np.linspace(0,1.4), 'k--');\n```\n\n\n```python\nOR.plot(kind='scatter', x='x1', y='x2', c='y', s=50, colormap='winter')\nplt.plot(np.linspace(-.4,1), .5 - 1*np.linspace(-.4,1), 'k--');\n```\n\n\n```python\nXOR = pd.DataFrame({'x1': (0,0,1,1), 'x2': (0,1,0,1), 'y': (0,1,1,0)})\n\nXOR.plot(kind='scatter', x='x1', y='x2', c='y', s=50, colormap='winter');\n```\n\nThe perceptron tries to find a separating hyperplane for the two response classes. Namely, a set of weights that satisfies:\n\n$$\\mathbf{x_1}\\mathbf{w}^T=0$$\n\nand:\n\n$$\\mathbf{x_2}\\mathbf{w}^T=0$$\n\nHence,\n\n$$\\begin{aligned}\n\\mathbf{x}_1\\mathbf{w}^T &= \\mathbf{x}_2\\mathbf{w}^T \\\\\n\\Rightarrow (\\mathbf{x}_1 - \\mathbf{x}_2) \\mathbf{w}^T &= 0\n\\end{aligned}$$\n\nThis means that either the norms of $\\mathbf{x}_1 - \\mathbf{x}_2$ or $\\mathbf{w}$ are zero, or the cosine of the angle between them is equal to zero, due to the identity:\n\n$$\\mathbf{a}\\mathbf{b} = \\|a\\| \\|b\\| \\cos \\theta$$\n\nSince there is no reason for the norms to be zero in general, we need the two vectors to be at right angles to one another. So, we need a weight vector that is perpendicular to the decision boundary.\n\nClearly, for the XOR function, the output classes are not linearly separable. So, the algorithm does not converge on an answer, but simply cycles through two incorrect solutions.\n\n## Multi-layer Perceptron\n\nThe solution to fitting more complex (*i.e.* non-linear) models with neural networks is to use a more complex network that consists of more than just a single perceptron. The take-home message from the perceptron is that all of the learning happens by adapting the synapse weights until prediction is satisfactory. Hence, a reasonable guess at how to make a perceptron more complex is to simply **add more weights**.\n\nThere are two ways to add complexity:\n\n1. Add backward connections, so that output neurons feed back to input nodes, resulting in a **recurrent network**\n2. Add neurons between the input nodes and the outputs, creating an additional (\"hidden\") layer to the network, resulting in a **multi-layer perceptron**\n\nThe latter approach is more common in applications of neural networks.\n\n\n\nHow to train a multilayer network is not intuitive. Propagating the inputs forward over two layers is straightforward, since the outputs from the hidden layer can be used as inputs for the output layer. However, the process for updating the weights based on the prediction error is less clear, since it is difficult to know whether to change the weights on the input layer or on the hidden layer in order to improve the prediction.\n\nUpdating a multi-layer perceptron (MLP) is a matter of: \n\n1. moving forward through the network, calculating outputs given inputs and current weight estimates\n2. 
moving backward updating weights according to the resulting error from forward propagation. \n\nIn this sense, it is similar to a single-layer perceptron, except it has to be done twice, once for each layer.\n\n\n# Backpropagation\n\nBackpropagation is a method for efficiently computing the gradient of the cost function of a neural network with respect to its parameters. These partial derivatives can then be used to update the network's parameters using, e.g., gradient descent. This may be the most common method for training neural networks. Deriving backpropagation involves numerous clever applications of the chain rule for functions of vectors. \n\n\n\n\n## Review: The chain rule\n\nThe chain rule is a way to compute the derivative of a function whose variables are themselves functions of other variables. If $C$ is a scalar-valued function of a scalar $z$ and $z$ is itself a scalar-valued function of another scalar variable $w$, then the chain rule states that\n$$\n\\frac{\\partial C}{\\partial w} = \\frac{\\partial C}{\\partial z}\\frac{\\partial z}{\\partial w}\n$$\nFor scalar-valued functions of more than one variable, the chain rule essentially becomes additive. In other words, if $C$ is a scalar-valued function of $N$ variables $z_1, \\ldots, z_N$, each of which is a function of some variable $w$, the chain rule states that\n$$\n\\frac{\\partial C}{\\partial w} = \\sum_{i = 1}^N \\frac{\\partial C}{\\partial z_i}\\frac{\\partial z_i}{\\partial w}\n$$\n\n## Notation\n\nIn the following derivation, we'll use the following notation:\n\n$L$ - Number of layers in the network.\n\n$N^n$ - Dimensionality of layer $n \\in \\{0, \\ldots, L\\}$. $N^0$ is the dimensionality of the input; $N^L$ is the dimensionality of the output.\n\n$W^m \\in \\mathbb{R}^{N^m \\times N^{m - 1}}$ - Weight matrix for layer $m \\in \\{1, \\ldots, L\\}$. $W^m_{ij}$ is the weight between the $i^{th}$ unit in layer $m$ and the $j^{th}$ unit in layer $m - 1$.\n\n$b^m \\in \\mathbb{R}^{N^m}$ - Bias vector for layer $m$.\n\n$\\sigma^m$ - Nonlinear activation function of the units in layer $m$, applied elementwise.\n\n$z^m \\in \\mathbb{R}^{N^m}$ - Linear mix of the inputs to layer $m$, computed by $z^m = W^m a^{m - 1} + b^m$.\n\n$a^m \\in \\mathbb{R}^{N^m}$ - Activation of units in layer $m$, computed by $a^m = \\sigma^m(h^m) = \\sigma^m(W^m a^{m - 1} + b^m)$. $a^L$ is the output of the network. We define the special case $a^0$ as the input of the network.\n\n$y \\in \\mathbb{R}^{N^L}$ - Target output of the network.\n\n$C$ - Cost/error function of the network, which is a function of $a^L$ (the network output) and $y$ (treated as a constant).\n\n## Backpropagation in general\n\nIn order to train the network using a gradient descent algorithm, we need to know the gradient of each of the parameters with respect to the cost/error function $C$; that is, we need to know $\\frac{\\partial C}{\\partial W^m}$ and $\\frac{\\partial C}{\\partial b^m}$. 
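\n\nBefore deriving them, it may help to see this notation in running code. The following is a minimal sketch of the forward pass that produces the $z^m$ and $a^m$ terms used throughout this section; the layer sizes, the random weights, and the choice of a sigmoid nonlinearity are illustrative assumptions only:\n\n\n```python\nimport numpy as np\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\n# illustrative sizes: N^0 = 3 inputs, N^1 = 4 hidden units, N^2 = 2 outputs\nrng = np.random.RandomState(0)\nW = [None, rng.randn(4, 3), rng.randn(2, 4)]   # W^1, W^2 (index 0 unused)\nb = [None, np.zeros(4), np.zeros(2)]           # b^1, b^2\n\na = [rng.randn(3)]                             # a^0, the network input\nz = [None]\nfor m in (1, 2):\n    z.append(W[m].dot(a[m - 1]) + b[m])        # z^m = W^m a^{m-1} + b^m\n    a.append(sigmoid(z[m]))                    # a^m = sigma^m(z^m)\n\n# a[-1] is the network output a^L\n```\n\n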
It will be sufficient to derive an expression for these gradients in terms of the following terms, which we can compute based on the neural network's architecture:\n\n- $\\frac{\\partial C}{\\partial a^L}$: The derivative of the cost function with respect to its argument, the output of the network\n- $\\frac{\\partial a^m}{\\partial z^m}$: The derivative of the nonlinearity used in layer $m$ with respect to its argument\n\nTo compute the gradient of our cost/error function $C$ to $W^m_{ij}$ (a single entry in the weight matrix of the layer $m$), we can first note that $C$ is a function of $a^L$, which is itself a function of the linear mix variables $z^m_k$, which are themselves functions of the weight matrices $W^m$ and biases $b^m$. With this in mind, we can use the chain rule as follows:\n\n$$\\frac{\\partial C}{\\partial W^m_{ij}} = \\sum_{k = 1}^{N^m} \\frac{\\partial C}{\\partial z^m_k} \\frac{\\partial z^m_k}{\\partial W^m_{ij}}$$\n\nNote that by definition \n$$\nz^m_k = \\sum_{l = 1}^{N^m} W^m_{kl} a_l^{m - 1} + b^m_k\n$$\nIt follows that $\\frac{\\partial z^m_k}{\\partial W^m_{ij}}$ will evaluate to zero when $i \\ne k$ because $z^m_k$ does not interact with any elements in $W^m$ except for those in the $k$th row, and we are only considering the entry $W^m_{ij}$. When $i = k$, we have\n\n\\begin{align*}\n\\frac{\\partial z^m_i}{\\partial W^m_{ij}} &= \\frac{\\partial}{\\partial W^m_{ij}}\\left(\\sum_{l = 1}^{N^m} W^m_{il} a_l^{m - 1} + b^m_i\\right)\\\\\n&= a^{m - 1}_j\\\\\n\\rightarrow \\frac{\\partial z^m_k}{\\partial W^m_{ij}} &= \\begin{cases}\n0 & k \\ne i\\\\\na^{m - 1}_j & k = i\n\\end{cases}\n\\end{align*}\n\nThe fact that $\\frac{\\partial C}{\\partial a^m_k}$ is $0$ unless $k = i$ causes the summation above to collapse, giving\n\n$$\\frac{\\partial C}{\\partial W^m_{ij}} = \\frac{\\partial C}{\\partial z^m_i} a^{m - 1}_j$$\n\nor in vector form\n\n$$\\frac{\\partial C}{\\partial W^m} = \\frac{\\partial C}{\\partial z^m} a^{m - 1 \\top}$$\n\nSimilarly for the bias variables $b^m$, we have\n\n$$\\frac{\\partial C}{\\partial b^m_i} = \\sum_{k = 1}^{N^m} \\frac{\\partial C}{\\partial z^m_k} \\frac{\\partial z^m_k}{\\partial b^m_i}$$\n\nAs above, it follows that $\\frac{\\partial z^m_k}{\\partial b^m_i}$ will evaluate to zero when $i \\ne k$ because $z^m_k$ does not interact with any element in $b^m$ except $b^m_k$. When $i = k$, we have\n\n\\begin{align*}\n\\frac{\\partial z^m_i}{\\partial b^m_i} &= \\frac{\\partial}{\\partial b^m_i}\\left(\\sum_{l = 1}^{N^m} W^m_{il} a_l^{m - 1} + b^m_i\\right)\\\\\n&= 1\\\\\n\\rightarrow \\frac{\\partial z^m_i}{\\partial b^m_i} &= \\begin{cases}\n0 & k \\ne i\\\\\n1 & k = i\n\\end{cases}\n\\end{align*}\n\nThe summation also collapses to give\n\n$$\\frac{\\partial C}{\\partial b^m_i} = \\frac{\\partial C}{\\partial z^m_i}$$\n\nor in vector form\n\n$$\\frac{\\partial C}{\\partial b^m} = \\frac{\\partial C}{\\partial z^m}$$\n\nNow, we must compute $\\frac{\\partial C}{\\partial z^m_k}$. For the final layer ($m = L$), this term is straightforward to compute using the chain rule:\n\n$$\n\\frac{\\partial C}{\\partial z^L_k} = \\frac{\\partial C}{\\partial a^L_k} \\frac{\\partial a^L_k}{\\partial z^L_k}\n$$\n\nor, in vector form\n\n$$\n\\frac{\\partial C}{\\partial z^L} = \\frac{\\partial C}{\\partial a^L} \\frac{\\partial a^L}{\\partial z^L}\n$$\n\nThe first term $\\frac{\\partial C}{\\partial a^L}$ is just the derivative of the cost function with respect to its argument, whose form depends on the cost function chosen. 
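For instance, with the squared-error cost $C = \\frac{1}{2}(y - a^L)^\\top(y - a^L)$, this first term works out to $\\frac{\\partial C}{\\partial a^L} = a^L - y$.\n\n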
Similarly, $\\frac{\\partial a^m}{\\partial z^m}$ (for any layer $m$ including $L$) is the derivative of the layer's nonlinearity with respect to its argument and will depend on the choice of nonlinearity. For other layers, we again invoke the chain rule:\n\n\n\\begin{align*}\n\\frac{\\partial C}{\\partial z^m_k} &= \\frac{\\partial C}{\\partial a^m_k} \\frac{\\partial a^m_k}{\\partial z^m_k}\\\\\n&= \\left(\\sum_{l = 1}^{N^{m + 1}}\\frac{\\partial C}{\\partial z^{m + 1}_l}\\frac{\\partial z^{m + 1}_l}{\\partial a^m_k}\\right)\\frac{\\partial a^m_k}{\\partial z^m_k}\\\\\n&= \\left(\\sum_{l = 1}^{N^{m + 1}}\\frac{\\partial C}{\\partial z^{m + 1}_l}\\frac{\\partial}{\\partial a^m_k} \\left(\\sum_{h = 1}^{N^m} W^{m + 1}_{lh} a_h^m + b_l^{m + 1}\\right)\\right) \\frac{\\partial a^m_k}{\\partial z^m_k}\\\\\n&= \\left(\\sum_{l = 1}^{N^{m + 1}}\\frac{\\partial C}{\\partial z^{m + 1}_l} W^{m + 1}_{lk}\\right) \\frac{\\partial a^m_k}{\\partial z^m_k}\\\\\n&= \\left(\\sum_{l = 1}^{N^{m + 1}}W^{m + 1\\top}_{kl} \\frac{\\partial C}{\\partial z^{m + 1}_l}\\right) \\frac{\\partial a^m_k}{\\partial z^m_k}\\\\\n\\end{align*}\n\nwhere the last step simply re-indexes the weight matrix through its transpose so that, with $\\frac{\\partial C}{\\partial z^{m + 1}}$ treated as a column vector by convention, we can write the following vector form:\n\n$$\\frac{\\partial C}{\\partial z^m} = \\left(W^{m + 1\\top} \\frac{\\partial C}{\\partial z^{m + 1}}\\right) \\circ \\frac{\\partial a^m}{\\partial z^m}$$\n\nNote that we now have the ingredients to efficiently compute the gradient of the cost function with respect to the network's parameters: First, we compute $\\frac{\\partial C}{\\partial z^L_k}$ based on the choice of cost function and nonlinearity. Then, we can recursively compute $\\frac{\\partial C}{\\partial z^m}$ layer-by-layer based on the term $\\frac{\\partial C}{\\partial z^{m + 1}}$ computed for the layer above and the nonlinearity of the layer (this is called the \"backward pass\").\n\n## Backpropagation in practice\n\nAs discussed above, the exact form of the updates depends on both the chosen cost function and each layer's chosen nonlinearity. The following table lists some common choices of nonlinearity and the partial derivative required to compute the gradient for each layer:\n\n| Nonlinearity | $a^m = \\sigma^m(z^m)$ | $\\frac{\\partial a^m}{\\partial z^m}$ | Notes |\n|--------------|---|---|---|\n| Sigmoid | $\\frac{1}{1 + e^{-z^m}}$ | $\\sigma^m(z^m)(1 - \\sigma^m(z^m)) = a^m(1 - a^m)$ | \"Squashes\" any input to the range $[0, 1]$ |\n| Tanh | $\\frac{e^{z^m} - e^{-z^m}}{e^{z^m} + e^{-z^m}}$ | $1 - (\\sigma^m(z^m))^2 = 1 - (a^m)^2$ | Equivalent, up to scaling and shifting, to the sigmoid function |\n| ReLU | $\\max(0, z^m)$ | $0, z^m < 0;\\; 1, z^m \\ge 0$ | Commonly used in neural networks with many layers|\n\nSimilarly, the following table collects some common cost functions and the partial derivative needed to compute the gradient for the final layer:\n\n| Cost Function | $C$ | $\\frac{\\partial C}{\\partial a^L}$ | Notes |\n|---------------|--------------------------------------|-----------------------------------|---|\n| Squared Error | $\\frac{1}{2}(y - a^L)^\\top(y - a^L)$ | $a^L - y$ | Commonly used when the output is not constrained to a specific range |\n| Cross-Entropy | $(y - 1)\\log(1 - a^L) - y\\log(a^L)$ | $\\frac{a^L - y}{a^L(1 - a^L)}$ | Commonly used for binary classification tasks; can yield faster convergence |\n
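\nThe entries in these tables are easy to sanity-check numerically. The following cell is not part of the original derivation; it is a minimal check, using only `numpy`, that the stated derivatives of the sigmoid, tanh, and cross-entropy expressions agree with central finite differences at a few sample points.\n\n\n```python\nimport numpy as np\n\ndef sigmoid(z):\n    return 1 / (1 + np.exp(-z))\n\ndef finite_diff(f, x, eps=1e-6):\n    # Central finite-difference approximation of df/dx\n    return (f(x + eps) - f(x - eps)) / (2 * eps)\n\nz = np.array([-1.5, -0.2, 0.3, 2.0])\n\n# Sigmoid: claimed derivative is a(1 - a) with a = sigmoid(z)\na = sigmoid(z)\nprint(np.allclose(finite_diff(sigmoid, z), a * (1 - a)))\n\n# Tanh: claimed derivative is 1 - tanh(z)^2\nprint(np.allclose(finite_diff(np.tanh, z), 1 - np.tanh(z) ** 2))\n\n# Cross-entropy: C(a) = (y - 1) log(1 - a) - y log(a);\n# claimed derivative is (a - y) / (a (1 - a))\ny = 1.0\nC = lambda a: (y - 1) * np.log(1 - a) - y * np.log(a)\na = np.array([0.2, 0.5, 0.9])\nprint(np.allclose(finite_diff(C, a), (a - y) / (a * (1 - a))))\n```\n\nAll three comparisons should print `True`.\n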
\nIn practice, backpropagation proceeds in the following manner for each training sample:\n\n1. Forward pass: Given the network input $a^0$, compute $a^m$ recursively by\n $$a^1 = \\sigma^1(W^1 a^0 + b^1), \\ldots, a^L = \\sigma^L(W^L a^{L - 1} + b^L)$$\n1. Backward pass: Compute \n$$\\frac{\\partial C}{\\partial z^L} = \\frac{\\partial C}{\\partial a^L} \\circ \\frac{\\partial a^L}{\\partial z^L}$$\nfor the final layer based on the tables above, then recursively compute\n$$\\frac{\\partial C}{\\partial z^m} = \\left(W^{m + 1\\top} \\frac{\\partial C}{\\partial z^{m + 1}}\\right) \\circ \\frac{\\partial a^m}{\\partial z^m}$$\nfor all other layers. Plug these values into \n$$\\frac{\\partial C}{\\partial W^m} = \\frac{\\partial C}{\\partial z^m} a^{m - 1 \\top}$$\nand\n$$\\frac{\\partial C}{\\partial b^m} = \\frac{\\partial C}{\\partial z^m}$$\nto obtain the updates.\n\n### Example: Sigmoid network with cross-entropy loss using gradient descent\n\nA common network architecture is one with fully connected layers where each layer's nonlinearity is the sigmoid function $a^m = \\frac{1}{1 + e^{-z^m}}$ and the cost function is the cross-entropy loss $(y - 1)\\log(1 - a^L) - y\\log(a^L)$. To compute the updates for gradient descent, we first compute (based on the tables above)\n\\begin{align*}\n\\frac{\\partial C}{\\partial z^L} &= \\frac{\\partial C}{\\partial a^L} \\circ \\frac{\\partial a^L}{\\partial z^L}\\\\\n&= \\left(\\frac{a^L - y}{a^L(1 - a^L)}\\right)a^L(1 - a^L)\\\\\n&= a^L - y\n\\end{align*}\nFrom here, we can compute\n\\begin{align*}\n\\frac{\\partial C}{\\partial z^{L - 1}} &= \\left(W^{L\\top} \\frac{\\partial C}{\\partial z^L} \\right) \\circ \\frac{\\partial a^{L - 1}}{\\partial z^{L - 1}}\\\\\n&= W^{L\\top} (a^L - y) \\circ a^{L - 1}(1 - a^{L - 1})\\\\\n\\frac{\\partial C}{\\partial z^{L - 2}} &= \\left(W^{L - 1\\top} \\frac{\\partial C}{\\partial z^{L - 1}} \\right) \\circ \\frac{\\partial a^{L - 2}}{\\partial z^{L - 2}}\\\\\n&= W^{L - 1\\top} \\left(W^{L\\top} (a^L - y) \\circ a^{L - 1}(1 - a^{L - 1})\\right) \\circ a^{L - 2}(1 - a^{L - 2})\n\\end{align*}\nand so on, until we have computed $\\frac{\\partial C}{\\partial z^m}$ for $m \\in \\{1, \\ldots, L\\}$. This allows us to compute $\\frac{\\partial C}{\\partial W^m_{ij}}$ and $\\frac{\\partial C}{\\partial b^m_i}$, e.g.\n\\begin{align*}\n\\frac{\\partial C}{\\partial W^L} &= \\frac{\\partial C}{\\partial z^L} a^{L - 1 \\top}\\\\\n&= (a^L - y)a^{L - 1\\top}\\\\\n\\frac{\\partial C}{\\partial W^{L - 1}} &= \\frac{\\partial C}{\\partial z^{L - 1}} a^{L - 2 \\top}\\\\\n&= \\left(W^{L\\top} (a^L - y) \\circ a^{L - 1}(1 - a^{L - 1})\\right) a^{L - 2\\top}\n\\end{align*}\nand so on. Standard gradient descent then updates each parameter as follows:\n$$W^m = W^m - \\lambda \\frac{\\partial C}{\\partial W^m}$$\n$$b^m = b^m - \\lambda \\frac{\\partial C}{\\partial b^m}$$\nwhere $\\lambda$ is the learning rate. This process is repeated until some stopping criterion is met.\n
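\nBefore turning to a full implementation, it is worth checking the expressions above numerically. The following cell is not part of the original text; it builds a small random two-layer sigmoid network with the cross-entropy cost and compares the analytic gradients $\\frac{\\partial C}{\\partial W^L} = (a^L - y)a^{L - 1\\top}$ and $\\frac{\\partial C}{\\partial W^{L - 1}}$ from the derivation above against finite differences of the cost.\n\n\n```python\nimport numpy as np\n\nrng = np.random.RandomState(0)\nsigmoid = lambda z: 1 / (1 + np.exp(-z))\n\ndef forward(a0):\n    # Forward pass through the two-layer network defined by the globals W1, b1, W2, b2\n    a1 = sigmoid(W1 @ a0 + b1)\n    a2 = sigmoid(W2 @ a1 + b2)\n    return a1, a2\n\ndef cost(a2, y):\n    # Cross-entropy cost, summed over the output units\n    return float(np.sum((y - 1) * np.log(1 - a2) - y * np.log(a2)))\n\n# Small random network: 3 inputs -> 4 hidden units -> 2 outputs\nW1, b1 = rng.randn(4, 3), rng.randn(4, 1)\nW2, b2 = rng.randn(2, 4), rng.randn(2, 1)\na0 = rng.randn(3, 1)\ny = np.array([[1.0], [0.0]])\n\na1, a2 = forward(a0)\n\n# Analytic gradients from the derivation above\ndC_dz2 = a2 - y\ndC_dW2 = dC_dz2 @ a1.T\ndC_dz1 = (W2.T @ dC_dz2) * a1 * (1 - a1)\ndC_dW1 = dC_dz1 @ a0.T\n\ndef numerical_grad(W, eps=1e-6):\n    # Central finite differences of the cost with respect to each entry of W (perturbed in place)\n    G = np.zeros_like(W)\n    for i in range(W.shape[0]):\n        for j in range(W.shape[1]):\n            W[i, j] += eps\n            cp = cost(forward(a0)[1], y)\n            W[i, j] -= 2 * eps\n            cm = cost(forward(a0)[1], y)\n            W[i, j] += eps\n            G[i, j] = (cp - cm) / (2 * eps)\n    return G\n\nprint(np.allclose(numerical_grad(W2), dC_dW2))\nprint(np.allclose(numerical_grad(W1), dC_dW1))\n```\n\nBoth comparisons should print `True`; this \"gradient checking\" pattern is a useful way to debug any backpropagation implementation.\n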
\n## Toy Python example\n\nDue to the recursive nature of the backpropagation algorithm, it lends itself well to software implementations. The following code implements a multi-layer perceptron with user-supplied layer sizes, using sigmoid activations throughout, and trains it with backpropagation.\n\n\n```python\n# Ensure python 3 forward compatibility\nfrom __future__ import print_function\nimport numpy as np\n\ndef sigmoid(x):\n    return 1/(1 + np.exp(-x))\n\nclass SigmoidLayer:\n    def __init__(self, n_input, n_output):\n        self.W = np.random.randn(n_output, n_input)\n        self.b = np.random.randn(n_output, 1)\n    def output(self, X):\n        if X.ndim == 1:\n            X = X.reshape(-1, 1)\n        return sigmoid(self.W.dot(X) + self.b)\n\nclass SigmoidNetwork:\n\n    def __init__(self, layer_sizes):\n        '''\n        :parameters:\n            - layer_sizes : list of int\n                List of layer sizes of length L+1 (including the input dimensionality)\n        '''\n        self.layers = []\n        for n_input, n_output in zip(layer_sizes[:-1], layer_sizes[1:]):\n            self.layers.append(SigmoidLayer(n_input, n_output))\n\n    def train(self, X, y, learning_rate=0.2):\n        X = np.array(X)\n        y = np.array(y)\n        if X.ndim == 1:\n            X = X.reshape(-1, 1)\n        if y.ndim == 1:\n            y = y.reshape(1, -1)\n\n        # Forward pass - compute a^n for n in {0, ... L}\n        layer_outputs = [X]\n        for layer in self.layers:\n            layer_outputs.append(layer.output(layer_outputs[-1]))\n\n        # Backward pass - compute \\partial C/\\partial z^m for m in {L, ..., 1}\n        cost_partials = [layer_outputs[-1] - y]\n        for layer, layer_output in zip(reversed(self.layers), reversed(layer_outputs[:-1])):\n            cost_partials.append(layer.W.T.dot(cost_partials[-1])*layer_output*(1 - layer_output))\n        cost_partials.reverse()\n\n        # Compute weight gradient step\n        W_updates = []\n        for cost_partial, layer_output in zip(cost_partials[1:], layer_outputs[:-1]):\n            W_updates.append(cost_partial.dot(layer_output.T)/X.shape[1])\n        # and biases\n        b_updates = [cost_partial.mean(axis=1).reshape(-1, 1) for cost_partial in cost_partials[1:]]\n\n        for W_update, b_update, layer in zip(W_updates, b_updates, self.layers):\n            layer.W -= W_update*learning_rate\n            layer.b -= b_update*learning_rate\n\n    def output(self, X):\n        a = np.array(X)\n        if a.ndim == 1:\n            a = a.reshape(-1, 1)\n        for layer in self.layers:\n            a = layer.output(a)\n        return a\n```\n\n\n```python\nnn = SigmoidNetwork([2, 2, 1])\nX = np.array([[0, 1, 0, 1],\n              [0, 0, 1, 1]])\ny = np.array([0, 1, 1, 0])\nfor n in range(int(1e3)):\n    nn.train(X, y, learning_rate=1.)\nprint(\"Input\\tOutput\\tQuantized\")\nfor i in [[0, 0], [1, 0], [0, 1], [1, 1]]:\n    print(\"{}\\t{:.4f}\\t{}\".format(i, nn.output(i)[0, 0], 1*(nn.output(i)[0] > .5)))\n```\n\n    Input\tOutput\tQuantized\n    [0, 0]\t0.5125\t[1]\n    [1, 0]\t0.4927\t[0]\n    [0, 1]\t0.9788\t[1]\n    [1, 1]\t0.0161\t[0]\n\n\nA common choice of activation is the logistic (sigmoid) function, plotted below for different values of the steepness parameter $\\beta$:\n\n\n```python\n# matplotlib and ipywidgets are required for these interactive plots\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interact\n\nlogistic = lambda h, beta: 1./(1 + np.exp(-beta * h))\n\n@interact(beta=(-1, 25))\ndef logistic_plot(beta=5):\n    hvals = np.linspace(-2, 2)\n    plt.plot(hvals, logistic(hvals, beta))\n```\n\nThis has the advantage of having a simple derivative:\n\n$$\\frac{dg}{dh} = \\beta g(h)(1 - g(h))$$\n\nAlternatively, the hyperbolic tangent function is also sigmoidal:\n\n$$g(h) = \\tanh(h) = \\frac{\\exp(h) - \\exp(-h)}{\\exp(h) + \\exp(-h)}$$\n\n\n```python\nhyperbolic_tangent = lambda h: (np.exp(h) - np.exp(-h)) / (np.exp(h) + np.exp(-h))\n\n@interact(theta=(-1, 25))\ndef tanh_plot(theta=5):\n    hvals = np.linspace(-2, 2)\n    h = hvals*theta\n    plt.plot(hvals, hyperbolic_tangent(h))\n```\n\nGradient Descent\n---\nThe simplest algorithm for iterative minimization of differentiable functions is known as just **gradient descent**.\nRecall that the gradient of a function is defined as the vector of partial derivatives:\n\n$$\\nabla f(x) = \\left[\\frac{\\partial f}{\\partial x_1}, \\frac{\\partial f}{\\partial x_2}, \\ldots, \\frac{\\partial f}{\\partial x_n}\\right]$$\n\nand that the gradient of a function always points towards the direction of maximal increase at that point.\n\nEquivalently, the negative gradient points in the direction of maximal decrease - thus, if we start at any point and keep moving in the direction of the negative gradient, we will eventually reach a local minimum.\n\nThis simple insight leads to the Gradient Descent algorithm. Outlined algorithmically, it looks like this:\n\n1. Pick a point $x_0$ as your initial guess.\n2. Compute the gradient at your current guess:\n$v_i = \\nabla f(x_i)$\n3. Move by $\\alpha$ (your step size) in the direction *opposite* to that gradient:\n$x_{i+1} = x_i - \\alpha v_i$\n4. Repeat steps 2-3 until convergence (for example, until $\\|\\nabla f(x_i)\\|$ falls below some small tolerance $\\varepsilon$).\n\nNote that the step size, $\\alpha$, is simply a parameter of the algorithm and has to be fixed in advance. A concrete example is sketched below.\n\n\n
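\nA minimal illustration of this procedure (not from the original notebook) on the one-dimensional function $f(x) = (x - 3)^2$, whose gradient is $f'(x) = 2(x - 3)$, with a fixed step size:\n\n\n```python\nf = lambda x: (x - 3) ** 2        # function to minimize; its minimum is at x = 3\ngrad_f = lambda x: 2 * (x - 3)    # gradient of f\n\nx = 0.0        # initial guess\nalpha = 0.1    # fixed step size\ntol = 1e-8     # stop once the gradient is close to zero\n\nfor i in range(1000):\n    v = grad_f(x)\n    if abs(v) < tol:\n        break\n    x = x - alpha * v  # step in the direction opposite to the gradient\n\nprint(\"iterations:\", i, \"x:\", x, \"f(x):\", f(x))\n```\n\nWith these settings the iterate converges to the minimizer $x = 3$ in under a hundred steps; a larger $\\alpha$ converges faster up to a point, while too large a step size makes the iteration diverge.\n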
\nNotice that the hyperbolic tangent function asymptotes at -1 and 1, rather than 0 and 1, which is sometimes beneficial, and its derivative is simple:\n\n$$\\frac{d \\tanh(x)}{dx} = 1 - \\tanh^2(x)$$\n\nPerforming gradient descent will allow us to change the weights in the direction that optimally reduces the error. The next trick will be to employ the **chain rule** to decompose how the error changes as a function of the weights into the change in error as a function of changes in the inputs to the output units, multiplied by the change in those input values as a function of changes in the weights. \n\n$$\\frac{\\partial E}{\\partial w} = \\frac{\\partial E}{\\partial h}\\frac{\\partial h}{\\partial w}$$\n\nThis will allow us to write the activations of the output units as a function of the activations of the hidden layer nodes and the output weights, which will allow us to propagate error backwards through the network.\n\nThe second term in the chain rule simplifies to:\n\n$$\\begin{align}\n\\frac{\\partial h_k}{\\partial w_{jk}} &= \\frac{\\partial \\sum_l w_{lk} a_l}{\\partial w_{jk}} \\\\\n&= \\sum_l \\frac{\\partial w_{lk} a_l}{\\partial w_{jk}} \\\\\n& = a_j\n\\end{align}$$\n\nwhere $a_j$ is the activation of the jth hidden layer neuron.\n\nFor the first term in the chain rule above, we decompose it as well:\n\n$$\\frac{\\partial E}{\\partial h_k} = \\frac{\\partial E}{\\partial y_k}\\frac{\\partial y_k}{\\partial h_k} = \\frac{\\partial E}{\\partial g(h_k)}\\frac{\\partial g(h_k)}{\\partial h_k}$$\n\nThe second term of this chain rule is just the derivative of the activation function, which we have chosen to have a convenient form, while the first term simplifies to:\n\n$$\\frac{\\partial E}{\\partial g(h_k)} = \\frac{\\partial}{\\partial g(h_k)}\\left[\\frac{1}{2} \\sum_k (t_k - y_k)^2 \\right] = -(t_k - y_k) = y_k - t_k$$\n\nCombining these, and assuming (for illustration) a logistic activation function, we have the gradient:\n\n$$\\frac{\\partial E}{\\partial w} = (y_k - t_k) y_k (1-y_k) a_j$$\n\nThis gets plugged into the weight update formula that we saw in the single-layer perceptron:\n\n$$w_{jk} \\leftarrow w_{jk} - \\eta (y_k - t_k) y_k (1-y_k) a_j$$\n\nNote that here we are *subtracting* the second term, rather than adding, since we are doing gradient descent.\n\nWe can now outline the MLP learning algorithm:\n\n1. Initialize all $w_{jk}$ to small random values\n2. 
For each input vector, conduct forward propagation:\n * compute activation of each neuron $j$ in hidden layer (here, sigmoid):\n $$h_j = \\sum_i x_i v_{ij}$$\n $$a_j = g(h_j) = \\frac{1}{1 + \\exp(-\\beta h_j)}$$\n * when the output layer is reached, calculate outputs similarly:\n $$h_k = \\sum_j a_j w_{jk}$$\n $$y_k = g(h_k) = \\frac{1}{1 + \\exp(-\\beta h_k)}$$\n3. Calculate loss for resulting predictions:\n * compute error signal at output:\n $$\\delta_k = (y_k - t_k) y_k (1-y_k)$$\n4. Conduct backpropagation to get partial derivatives of cost with respect to weights, and use these to update weights:\n * compute error of the hidden layers:\n $$\\delta_{hj} = \\left[\\sum_k w_{jk} \\delta_k \\right] a_j(1-a_j)$$\n * update output layer weights:\n $$w_{jk} \\leftarrow w_{jk} - \\eta \\delta_k a_j$$\n * update hidden layer weights:\n $$v_{ij} \\leftarrow v_{ij} - \\eta \\delta_{hj} x_i$$\n \nReturn to (2) and iterate until learning completes. Best practice is to shuffle input vectors to avoid training in the same order.\n\nIt's important to be aware that because gradient descent is a hill-climbing (or descending) algorithm, it is liable to be caught in local minima with respect to starting values. Therefore, it is worthwhile training several networks using a range of starting values for the weights, so that you have a better chance of discovering a globally-competitive solution.\n\nOne useful performance enhancement for the MLP learning algorithm is the addition of **momentum** to the weight updates. This is just a coefficient on the previous weight update that increases the correlation between the current weight and the weight after the next update. This is particularly useful for complex models, where falling into local minima is an issue; adding momentum will give some weight to the previous direction, making the resulting weights essentially a weighted average of the two directions. Adding momentum, along with a smaller learning rate, usually results in a more stable algorithm with quicker convergence. With a sufficiently small learning rate, plain gradient descent is guaranteed not to increase the error on any step; when we use momentum, we lose this guarantee, but this is generally seen as a small price to pay for the improvement momentum usually gives.\n\nA weight update with momentum looks like this (a short sketch in code follows below):\n\n$$w_{jk} \\leftarrow w_{jk} - \\eta \\delta_k a_j + \\alpha \\Delta w_{jk}^{t-1}$$\n\nwhere $\\alpha$ is the momentum parameter and $\\Delta w_{jk}^{t-1}$ the update from the previous iteration.\n
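\nTo make this concrete, here is a minimal sketch (not from the original text) of one momentum-accelerated update for a single output unit, using the notation above: `a` holds the hidden activations $a_j$, `delta_k` is the output error $\\delta_k$, and `dw_prev` stores the previous update $\\Delta w_{jk}^{t-1}$.\n\n\n```python\nimport numpy as np\n\neta, alpha = 0.25, 0.9          # learning rate and momentum coefficient\n\na = np.array([0.2, 0.7, 0.9])   # hidden-layer activations a_j\nw = np.array([0.1, -0.3, 0.5])  # weights w_jk feeding output unit k\nt_k = 1.0                       # target output\n\n# Forward pass for this unit (logistic activation)\ny_k = 1 / (1 + np.exp(-np.dot(a, w)))\n\n# Output error signal, as defined above\ndelta_k = (y_k - t_k) * y_k * (1 - y_k)\n\n# Momentum update: new gradient step plus a fraction of the previous update\ndw_prev = np.zeros_like(w)               # Delta w^(t-1); zero on the first step\ndw = -eta * delta_k * a + alpha * dw_prev\nw = w + dw\ndw_prev = dw                             # remembered for the next iteration\n\nprint(y_k, w)\n```\n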
The multi-layer perceptron is implemented below in the `MLP` class. The implementation uses the scikit-learn interface, so it is used in the same way as other supervised learning algorithms in that package.\n\n\n# Cross-Resonance Gate\n\n*Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n\n## Outline\n\nThis tutorial introduces how to use Quanlse to generate an optimized pulse sequence for the Cross-Resonance (CR) gate. Unlike the implementations of the iSWAP and CZ gates in the previous tutorials, the CR gate is implemented using all-microwave driving. The outline of this tutorial is as follows:\n\n- Background\n- Preparation\n- Constructing the Hamiltonian\n- Generating and optimizing pulses via the Quanlse cloud service\n- Summary\n\n## Background\n\n**Basic principle**\n\nUnlike the quantum gates we introduced before, the CR gate uses only microwaves to realize the interaction between two qubits, which allows us to avoid the noise caused by magnetic flux. The physical implementation of the CR gate involves two coupled qubits with fixed frequencies, with the drive pulse applied at the frequency of the target qubit, as shown in the figure below:\n\n\n\n\nWe first focus on the effective Hamiltonian of the system (see Ref. 
\\[1\\] for details). In the doubly rotating frame, we write the effective Hamiltonian of the cross-resonance effect in terms of the drive amplitude $A$, the detuning $\\Delta$, the drive phase $\\phi_0$ and the coupling strength $g_{01}$ (for brevity, we set $\\hbar=1$):\n\n$$\n\\hat{H}_{\\rm eff} = \\frac{A}{4\\Delta}g_{01}(\\hat{\\sigma}_0^z\\hat{\\sigma}_1^x\\cos{\\phi_0}+\\hat{\\sigma}_0^z\\hat{\\sigma}_1^y\\sin{\\phi_0}).\n$$\n\nWhen $\\phi_0=0$, the cross-resonance effect produces an effective $\\hat{\\sigma}^z_0\\otimes\\hat{\\sigma}_1^x$ coupling. We can therefore derive the time-evolution operator from the effective Hamiltonian above:\n\n$$\nU_{\\rm CR}(\\theta)=e^{-i\\frac{\\theta}{2}\\hat{\\sigma}^z_0\\otimes\\hat{\\sigma}^x_1},\n$$\n\nwhere $\\theta=A g_{01}t/(2\\Delta)$ ($t$ is the gate time). As we can see, the cross-resonance effect makes the rotation of qubit $q_1$ (the target qubit) conditional on the state of qubit $q_0$ (the control qubit).\n\n\nFrom the derivation above, the matrix form of the CR gate is (see \\[2\\] for more details):\n$$\nU_{\\rm CR}(\\theta) = \\begin{bmatrix} \n\\cos{\\frac{\\theta}{2}} & -i\\sin{\\frac{\\theta}{2}} & 0 & 0 \\\\\n-i\\sin{\\frac{\\theta}{2}} & \\cos{\\frac{\\theta}{2}} & 0 & 0 \\\\ \n0 & 0 & \\cos{\\frac{\\theta}{2}} & i\\sin{\\frac{\\theta}{2}} \\\\\n0 & 0 & i\\sin{\\frac{\\theta}{2}} & \\cos{\\frac{\\theta}{2}} \n\\end{bmatrix}.\n$$\n\n \nIn particular, when $\\theta=-\\frac{\\pi}{2}$, the matrix representation of the CR gate is:\n\n$$\nU_{\\rm CR}(-\\pi/2) = \\frac{\\sqrt{2}}{2} \n\\begin{bmatrix}\n1 & i & 0 & 0 \\\\\ni & 1 & 0 & 0 \\\\\n0 & 0 & 1 & -i \\\\\n0 & 0 & -i & 1\n\\end{bmatrix}.\n$$\n
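\nThis block-diagonal form is easy to verify directly. The cell below is not part of the original tutorial; it is a quick check, using only `numpy` and `scipy`, that exponentiating the generator $-i\\frac{\\theta}{2}\\,\\hat{\\sigma}^z_0\\otimes\\hat{\\sigma}^x_1$ at $\\theta=-\\pi/2$ reproduces the matrix above.\n\n\n```python\nimport numpy as np\nfrom scipy.linalg import expm\n\n# Pauli matrices\nsigma_z = np.array([[1, 0], [0, -1]], dtype=complex)\nsigma_x = np.array([[0, 1], [1, 0]], dtype=complex)\n\ntheta = -np.pi / 2\nU = expm(-1j * theta / 2 * np.kron(sigma_z, sigma_x))\n\n# The matrix quoted above for U_CR(-pi/2)\nU_expected = (np.sqrt(2) / 2) * np.array([[1, 1j, 0, 0],\n                                          [1j, 1, 0, 0],\n                                          [0, 0, 1, -1j],\n                                          [0, 0, -1j, 1]])\n\nprint(np.allclose(U, U_expected))  # expected: True\n```\n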
**Applications**\n\nHaving analyzed some of the basic principles behind implementing the CR gate, we now turn to its applications in quantum computing: one of them is to realize a CNOT gate by adding two extra single-qubit gates.\n\n \n\nIn this tutorial, we model a system consisting of two three-level qubits and apply the drive pulse to the control qubit (qubit $q_0$) at the frequency of the target qubit (qubit $q_1$). Using the rotating-wave approximation (RWA), the Hamiltonian can be written as (see \\[1\\] for more details):\n\n$$\n\\hat{H}_{\\rm sys} = (\\omega_{\\rm q0}-\\omega_{\\rm d})\\hat{a}_{0}^{\\dagger }\\hat{a}_0 + (\\omega_{\\rm q1}-\\omega_{\\rm d})\\hat{a}_1^\\dagger \\hat{a}_1 + \\frac{\\alpha_0}{2} \\hat{a}^{\\dagger2}_0\\hat{a}^2_0 + \\frac{\\alpha_1}{2} \\hat{a}^{\\dagger2}_1\\hat{a}^2_1+\\frac{g}{2}(\\hat{a}_0\\hat{a}_1^\\dagger + \\hat{a}_0^\\dagger\\hat{a}_1) + \\Omega_0^x(t)\\frac{\\hat{a}^\\dagger_0+\\hat{a}_0}{2}.\n$$\n\nThe symbols used are defined in the following table:\n\n\n|Symbol|Definition|\n|:--------:|:----------:|\n|$\\omega_{\\rm qi}$| frequency of qubit $q_i$|\n|$\\omega_{\\rm d}$| drive frequency|\n|$\\hat{a}_i^{\\dagger}$| creation operator|\n|$\\hat{a}_i$| annihilation operator|\n|$\\alpha_i$| anharmonicity of qubit $q_i$|\n|$g$| coupling strength|\n|$\\Omega_0^x(t)$| pulse function on the X channel|\n\n## Preparation\n\nAfter successfully installing Quanlse, you can follow this tutorial and run the Quanlse program below. To run this tutorial, you need to import the following packages from Quanlse and other commonly used Python libraries:\n\n\n```python\n# Import Hamiltonian-related module\nfrom Quanlse.QHamiltonian import QHamiltonian as QHam\nfrom Quanlse.QOperator import driveX, number, duff\n\n# Import optimizer for the cross-resonance gate\nfrom Quanlse.remoteOptimizer import remoteOptimizeCr\n\n# Import tools to analyze the result\nfrom Quanlse.Utils.Functions import project\n\n# Import numpy and math\nfrom numpy import round\nfrom math import pi\n```\n\n## Constructing the Hamiltonian\n\n\nNow we need to construct the Hamiltonian using Quanlse. In Quanlse, all the information about a Hamiltonian is stored in a dictionary. We first define some basic parameters needed to construct this dictionary: the sampling period, the number of qubits in the system, and the number of system energy levels. To initialize the Hamiltonian dictionary, we call the function `QHamiltonian()` from the module `QHamiltonian`.\n\n\n```python\n# Sampling period\ndt = 1.0\n\n# Number of qubits\nqubits = 2\n\n# System energy level\nlevel = 3\n\n# Initialize the Hamiltonian\nham = QHam(subSysNum=qubits, sysLevel=level, dt=dt)\n```\n\nNow we can start constructing the Hamiltonian. Before we begin, we need to define several constants that are used as arguments below:\n\n\n```python\n# Parameters setting \nqubitArgs = {\n    \"coupling\": 0.0038 * (2 * pi),      # Coupling of Q0 and Q1\n    \"qubit_freq0\": 5.114 * (2 * pi),    # Frequency of Q0\n    \"qubit_freq1\": 4.914 * (2 * pi),    # Frequency of Q1\n    \"drive_freq0\": 4.914 * (2 * pi),    # Drive frequency on Q0\n    \"drive_freq1\": 4.914 * (2 * pi),    # Drive frequency on Q1\n    \"qubit_anharm0\": -0.33 * (2 * pi),  # Anharmonicity of Q0\n    \"qubit_anharm1\": -0.33 * (2 * pi)   # Anharmonicity of Q1\n}\n```\n\nThen we need to add the following terms to the previously initialized Hamiltonian dictionary:\n\n$$\n\\begin{align}\n\\hat{H}_{\\rm drift} &= (\\omega_{\\rm q0}-\\omega_{\\rm d}) \\hat{a}_0^\\dagger \\hat{a}_0 + (\\omega_{\\rm q1}-\\omega_{\\rm d}) \\hat{a}_1^\\dagger \\hat{a}_1 + \\frac{\\alpha_0}{2} \\hat{a}_0^{\\dagger}\\hat{a}_0^{\\dagger}\\hat{a}_0 \\hat{a}_0 + \\frac{\\alpha_1}{2} 
\\hat{a}_1^{\\dagger}\\hat{a}_1^{\\dagger}\\hat{a}_1 \\hat{a}_1 , \\\\\n\\hat{H}_{\\rm coup} &= \\frac{g_{01}}{2}(\\hat{a}_0 \\hat{a}_1^\\dagger+\\hat{a}^\\dagger_0 \\hat{a}_1). \\\\\n\\end{align}\n$$\n\nIn Quanlse's `Operator` module, we provide tools that allow users to construct commonly used operators quickly. The detuning term $(\\omega_{\\rm q}-\\omega_{\\rm d})\\hat{a}^\\dagger\\hat{a}$ and the anharmonicity term $\\frac{\\alpha}{2}\\hat{a}^\\dagger\\hat{a}^\\dagger\\hat{a}\\hat{a}$ can be generated with `number(n)` and `duff(n)` from the `Operator` module, respectively: the two functions `number(n)` and `duff(n)` return the $n\\times n$ number operator and Duffing operator. The coupling term takes the form $\\frac{g}{2}(\\hat{a}_i^\\dagger\\hat{a}_j+\\hat{a}_i\\hat{a}_j^\\dagger)$ and can be added directly to the Hamiltonian using the function `addCoupling()`.\n\n\n```python\n# Add the detuning and the anharmonicity terms\nfor qu in range(2):\n    # Add the detuning term(s).\n    ham.addDrift(number, qu, (qubitArgs[f\"qubit_freq{qu}\"] - qubitArgs[f\"drive_freq{qu}\"]))\n    # Add the anharmonicity term(s).\n    ham.addDrift(duff, qu, qubitArgs[f\"qubit_anharm{qu}\"] / 2)\n\n# Add the coupling term\nham.addCoupling([0, 1], qubitArgs[\"coupling\"] / 2)\n```\n\nNote that Quanlse's optimization function automatically adds the control term:\n\n$$\n\\hat{H}_{\\rm ctrl}(t) = \\Omega_0^x(t)\\frac{\\hat{a}^\\dagger_0+\\hat{a}_0}{2},\n$$\n\nso we do not need to add this term manually.\n\nOnce the Hamiltonian of the system has been constructed, we can simulate the quantum system.\n\n## Generating and optimizing pulses via the Quanlse cloud service\n\nProcessing the optimization on local devices usually takes a long time, whereas the cloud service we provide can significantly accelerate this process. To use the Quanlse cloud service, users need to acquire a token from http://quantum-hub.baidu.com.\n\n\n```python\n# Import tools to get access to cloud service\nfrom Quanlse import Define\n\n# To use remoteOptimizerCr on cloud, paste your token (a string) here\nDefine.hubToken = ''\n```\n\nTo find an optimized pulse for the CR gate, we use the function `remoteOptimizeCr()`. This function takes as input the Hamiltonian we defined previously, the bound on the pulse amplitude, the gate duration, the maximum number of iterations, and the target infidelity. By calling `remoteOptimizeCr()`, users can submit the optimization task to the Quanlse cloud service. If you want to reduce the infidelity further, we recommend trying a longer gate time `tg` (the duration of a CR gate is roughly 200 to 400 nanoseconds). Users can also enlarge the search space by setting a larger `aBound` and `maxIter`.\n\nIn this tutorial, the infidelity used to evaluate the performance of the generated gate is ${\\rm infid} = 1 - \\frac{1}{d}\\left|{\\rm Tr}[U^\\dagger_{\\rm goal}P(U)]\\right|$, where $U_{\\rm goal}$ is the target unitary $U_{\\rm CR}(-\\pi /2)$, $d$ is the dimension of $U_{\\rm goal}$, $U$ is the actual unitary evolution of the three-level system defined above, and $P(U)$ describes the evolution projected onto the computational subspace.\n
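\nThe infidelity above is simple to evaluate on its own once $P(U)$ is available. The helper below is not part of Quanlse; it is a plain `numpy` sketch of the formula, checked here against the ideal $U_{\\rm CR}(-\\pi/2)$, for which the infidelity should be zero.\n\n\n```python\nimport numpy as np\n\ndef gate_infidelity(u_goal, p_u):\n    # infid = 1 - |Tr(U_goal^dagger P(U))| / d for two d x d matrices\n    d = u_goal.shape[0]\n    return 1 - abs(np.trace(u_goal.conj().T @ p_u)) / d\n\n# Sanity check: comparing the ideal CR gate with itself gives (numerically) zero\ns = np.sqrt(2) / 2\nu_cr = s * np.array([[1, 1j, 0, 0],\n                     [1j, 1, 0, 0],\n                     [0, 0, 1, -1j],\n                     [0, 0, -1j, 1]])\nprint(gate_infidelity(u_cr, u_cr))\n```\n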
\nWe now submit the optimization task:\n\n\n```python\n# Set amplitude bound\naBound = (-1.0, 3.0)\n\n# Run the optimization\ngateJob, infidelity = remoteOptimizeCr(ham, aBound=aBound, tg=200, maxIter=5, targetInfidelity=0.005)\n```\n\nWe can visualize the generated pulses using `plot()`. For more details on `plot()`, please refer to [single-qubit-gate](https://quanlse.baidu.com/#/doc/tutorial-single-qubit).\n\n\n```python\n# Print waves and the infidelity\ngateJob.plot()\nprint(f'infidelity: {infidelity}')\n```\n\nUsing the functions `simulate()` and `project()`, users can obtain the matrix $P(U)$, i.e. the system's evolution projected onto the computational subspace.\n\n\n```python\n# Print the system's evolution\nresult = ham.simulate(job=gateJob)\nprocess2d = project(result[0][\"unitary\"], qubits, level, 2)\nprint(\"The projected evolution P(U):\\n\", round(process2d, 2))\n```\n\nIn addition, if you want to obtain the values of the generated pulse at each time step `dt`, you can use the function `getPulseSequences()`. This function takes the Hamiltonian dictionary and the name of the pulse channel as input arguments.\n\n\n```python\ngateJob.generatePulseSequence(driveX(3), 0)\ngateJob.plot()\n```\n\n## Summary\n\nBy constructing the system Hamiltonian and generating optimized pulses on the Quanlse cloud service, we have successfully designed a pulse that implements a high-fidelity cross-resonance gate. Users can follow this link [tutorial-cr-gate.ipynb](https://github.com/baidu/Quanlse/blob/main/Tutorial/CN/tutorial-cr-cn.ipynb) to the corresponding GitHub page of this Jupyter Notebook document and run the program. We encourage users to try parameter values different from those in this tutorial to obtain the best results.\n\n## References\n \n\\[1\\] [Rigetti, Chad, and Michel Devoret. \"Fully microwave-tunable universal gates in superconducting qubits with linear couplings and fixed transition frequencies.\" *Physical Review B* 81.13 (2010): 134507.](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.81.134507)\n\n\\[2\\] [Nielsen, Michael A., and Isaac L. Chuang. Quantum Computation and Quantum Information: 10th Anniversary Edition. 
Cambridge University Press, 2010.](https://doi.org/10.1017/CBO9780511976667)\n", "meta": {"hexsha": "e68a71adbcde8e8b585e69e430b99915cc52227f", "size": 12198, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Tutorial/CN/tutorial-cr-cn.ipynb", "max_stars_repo_name": "baidu/Quanlse", "max_stars_repo_head_hexsha": "f84a9afc66a7a404fc1ee2ea0de110f966491279", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 33, "max_stars_repo_stars_event_min_datetime": "2021-01-22T11:15:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T06:04:42.000Z", "max_issues_repo_path": "Tutorial/CN/tutorial-cr-cn.ipynb", "max_issues_repo_name": "baidu/Quanlse", "max_issues_repo_head_hexsha": "f84a9afc66a7a404fc1ee2ea0de110f966491279", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tutorial/CN/tutorial-cr-cn.ipynb", "max_forks_repo_name": "baidu/Quanlse", "max_forks_repo_head_hexsha": "f84a9afc66a7a404fc1ee2ea0de110f966491279", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2021-01-25T02:56:06.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-04T13:32:49.000Z", "avg_line_length": 30.495, "max_line_length": 427, "alphanum_fraction": 0.5390227906, "converted": true, "num_tokens": 3970, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.39233683016710835, "lm_q1q2_score": 0.27943491750730437}} {"text": "```python\nimport csv \nfrom pulp import *\n```\n\n\\section{Fantasy Football}\n\nIn fantasy football every participant can assemble a team, that consists of \n \n\\begin{itemize}\n\\item 1 $\\times$ Quarterback\n\\item 1 $\\times$ Tight end\n\\item 2 $\\times$ Running backs\n\\item 3 $\\times$ Wide receivers\n\\item 1 $\\times$ Defense & Special teams\n\\item 1 $\\times$ Flex\n\\end{itemize}\n\nEvery Position is awarded points through a pre-defined rating system;\ne.g. Rushing yards, Touchdowns etc.\n\nEvery draft has a salary/cost.\nWhen picking a team, the salary/cost must not exceed the salary cap.\n\n\n\n\\section{The Data}\n\nIn this project, we consider fantasy football facilitated by https://www.draftkings.co.uk/.\nThe salary cap here is $50.000$, and we can download the data (like salary and position) of a player from there\n\n\n\n\n```python\nwith open('DKSalaries.csv', 'r') as f:\n reader = list(csv.reader(f))\nreader\n```\n\n\n\n\n [['QB', 'RB', 'RB', 'WR', 'WR', 'WR', 'TE', 'FLEX', 'DST', '', 'Instructions'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '1. Locate the player you want to select in the list below '],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '2. Copy the ID of your player (you can use the Name + ID column or the ID column) '],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '3. Paste the ID into the roster position desired '],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n \"4. You must include an ID for each player; you cannot use just the player's name \"],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '5. 
You can create up to 500 lineups per file '],\n [' '],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'Position',\n 'Name + ID',\n 'Name',\n 'ID',\n 'Roster Position',\n 'Salary',\n 'Game Info',\n 'TeamAbbrev',\n 'AvgPointsPerGame'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Christian McCaffrey (16006156)',\n 'Christian McCaffrey',\n '16006156',\n 'RB/FLEX',\n '9200',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'CAR',\n '30.13'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Tyreek Hill (16006352)',\n 'Tyreek Hill',\n '16006352',\n 'WR/FLEX',\n '9000',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'KC',\n '23.39'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Patrick Mahomes (16006097)',\n 'Patrick Mahomes',\n '16006097',\n 'QB',\n '8500',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'KC',\n '27.80'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Calvin Ridley (16006354)',\n 'Calvin Ridley',\n '16006354',\n 'WR/FLEX',\n '8500',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'ATL',\n '21.12'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'TE',\n 'Travis Kelce (16006636)',\n 'Travis Kelce',\n '16006636',\n 'TE/FLEX',\n '8500',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'KC',\n '22.07'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Lamar Jackson (16006098)',\n 'Lamar Jackson',\n '16006098',\n 'QB',\n '8000',\n 'NYG@BAL 12/27/2020 01:00PM ET',\n 'BAL',\n '23.40'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Nick Chubb (16006158)',\n 'Nick Chubb',\n '16006158',\n 'RB/FLEX',\n '7800',\n 'CLE@NYJ 12/27/2020 01:00PM ET',\n 'CLE',\n '18.93'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'DK Metcalf (16006356)',\n 'DK Metcalf',\n '16006356',\n 'WR/FLEX',\n '7800',\n 'LAR@SEA 12/27/2020 04:25PM ET',\n 'SEA',\n '19.31'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'David Montgomery (16006160)',\n 'David Montgomery',\n '16006160',\n 'RB/FLEX',\n '7700',\n 'CHI@JAX 12/27/2020 01:00PM ET',\n 'CHI',\n '17.42'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Allen Robinson II (16006358)',\n 'Allen Robinson II',\n '16006358',\n 'WR/FLEX',\n '7700',\n 'CHI@JAX 12/27/2020 01:00PM ET',\n 'CHI',\n '17.56'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Austin Ekeler (16006162)',\n 'Austin Ekeler',\n '16006162',\n 'RB/FLEX',\n '7600',\n 'DEN@LAC 12/27/2020 04:05PM ET',\n 'LAC',\n '16.55'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Deshaun Watson (16006099)',\n 'Deshaun Watson',\n '16006099',\n 'QB',\n '7600',\n 'CIN@HOU 12/27/2020 01:00PM ET',\n 'HOU',\n '24.91'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Keenan Allen (16006360)',\n 'Keenan Allen',\n '16006360',\n 'WR/FLEX',\n '7500',\n 'DEN@LAC 12/27/2020 04:05PM ET',\n 'LAC',\n '18.51'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Justin Herbert (16006100)',\n 'Justin Herbert',\n '16006100',\n 'QB',\n '7400',\n 'DEN@LAC 12/27/2020 04:05PM ET',\n 'LAC',\n '24.24'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Jonathan Taylor (16006164)',\n 'Jonathan Taylor',\n '16006164',\n 'RB/FLEX',\n '7300',\n 'IND@PIT 12/27/2020 01:00PM ET',\n 'IND',\n '15.54'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Russell Wilson (16006101)',\n 'Russell Wilson',\n '16006101',\n 'QB',\n 
'7300',\n 'LAR@SEA 12/27/2020 04:25PM ET',\n 'SEA',\n '25.27'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Robert Woods (16006362)',\n 'Robert Woods',\n '16006362',\n 'WR/FLEX',\n '7000',\n 'LAR@SEA 12/27/2020 04:25PM ET',\n 'LAR',\n '16.83'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Jalen Hurts (16006102)',\n 'Jalen Hurts',\n '16006102',\n 'QB',\n '7000',\n 'PHI@DAL 12/27/2020 04:25PM ET',\n 'PHI',\n '6.88'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Miles Sanders (16006166)',\n 'Miles Sanders',\n '16006166',\n 'RB/FLEX',\n '7000',\n 'PHI@DAL 12/27/2020 04:25PM ET',\n 'PHI',\n '14.54'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Jarvis Landry (16006364)',\n 'Jarvis Landry',\n '16006364',\n 'WR/FLEX',\n '6900',\n 'CLE@NYJ 12/27/2020 01:00PM ET',\n 'CLE',\n '12.47'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Julio Jones (16006366)',\n 'Julio Jones',\n '16006366',\n 'WR/FLEX',\n '6800',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'ATL',\n '17.23'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'James Robinson (16006168)',\n 'James Robinson',\n '16006168',\n 'RB/FLEX',\n '6800',\n 'CHI@JAX 12/27/2020 01:00PM ET',\n 'JAX',\n '18.81'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Terry McLaurin (16006368)',\n 'Terry McLaurin',\n '16006368',\n 'WR/FLEX',\n '6700',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'WAS',\n '15.49'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Ezekiel Elliott (16006170)',\n 'Ezekiel Elliott',\n '16006170',\n 'RB/FLEX',\n '6700',\n 'PHI@DAL 12/27/2020 04:25PM ET',\n 'DAL',\n '15.28'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Antonio Gibson (16006172)',\n 'Antonio Gibson',\n '16006172',\n 'RB/FLEX',\n '6600',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'WAS',\n '15.93'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Cooper Kupp (16006370)',\n 'Cooper Kupp',\n '16006370',\n 'WR/FLEX',\n '6600',\n 'LAR@SEA 12/27/2020 04:25PM ET',\n 'LAR',\n '14.58'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Mike Davis (16006174)',\n 'Mike Davis',\n '16006174',\n 'RB/FLEX',\n '6500',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'CAR',\n '14.19'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Tyler Lockett (16006372)',\n 'Tyler Lockett',\n '16006372',\n 'WR/FLEX',\n '6500',\n 'LAR@SEA 12/27/2020 04:25PM ET',\n 'SEA',\n '16.5'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Tony Pollard (16006176)',\n 'Tony Pollard',\n '16006176',\n 'RB/FLEX',\n '6500',\n 'PHI@DAL 12/27/2020 04:25PM ET',\n 'DAL',\n '7.96'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'J.D. McKissic (16006180)',\n 'J.D. 
McKissic',\n '16006180',\n 'RB/FLEX',\n '6400',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'WAS',\n '11.51'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Ben Roethlisberger (16006103)',\n 'Ben Roethlisberger',\n '16006103',\n 'QB',\n '6400',\n 'IND@PIT 12/27/2020 01:00PM ET',\n 'PIT',\n '18.91'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Chris Carson (16006178)',\n 'Chris Carson',\n '16006178',\n 'RB/FLEX',\n '6400',\n 'LAR@SEA 12/27/2020 04:25PM ET',\n 'SEA',\n '16.66'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Diontae Johnson (16006374)',\n 'Diontae Johnson',\n '16006374',\n 'WR/FLEX',\n '6300',\n 'IND@PIT 12/27/2020 01:00PM ET',\n 'PIT',\n '14.97'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Cam Akers (16006182)',\n 'Cam Akers',\n '16006182',\n 'RB/FLEX',\n '6300',\n 'LAR@SEA 12/27/2020 04:25PM ET',\n 'LAR',\n '8.56'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'J.K. Dobbins (16006184)',\n 'J.K. Dobbins',\n '16006184',\n 'RB/FLEX',\n '6200',\n 'NYG@BAL 12/27/2020 01:00PM ET',\n 'BAL',\n '9.98'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Brandin Cooks (16006376)',\n 'Brandin Cooks',\n '16006376',\n 'WR/FLEX',\n '6200',\n 'CIN@HOU 12/27/2020 01:00PM ET',\n 'HOU',\n '12.95'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Baker Mayfield (16006104)',\n 'Baker Mayfield',\n '16006104',\n 'QB',\n '6100',\n 'CLE@NYJ 12/27/2020 01:00PM ET',\n 'CLE',\n '17.17'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'David Johnson (16006186)',\n 'David Johnson',\n '16006186',\n 'RB/FLEX',\n '6100',\n 'CIN@HOU 12/27/2020 01:00PM ET',\n 'HOU',\n '13.36'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'JuJu Smith-Schuster (16006378)',\n 'JuJu Smith-Schuster',\n '16006378',\n 'WR/FLEX',\n '6000',\n 'IND@PIT 12/27/2020 01:00PM ET',\n 'PIT',\n '13.71'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Clyde Edwards-Helaire (16006188)',\n 'Clyde Edwards-Helaire',\n '16006188',\n 'RB/FLEX',\n '6000',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'KC',\n '14'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Chase Claypool (16006380)',\n 'Chase Claypool',\n '16006380',\n 'WR/FLEX',\n '5900',\n 'IND@PIT 12/27/2020 01:00PM ET',\n 'PIT',\n '13.46'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Kareem Hunt (16006190)',\n 'Kareem Hunt',\n '16006190',\n 'RB/FLEX',\n '5900',\n 'CLE@NYJ 12/27/2020 01:00PM ET',\n 'CLE',\n '14.44'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Jared Goff (16006105)',\n 'Jared Goff',\n '16006105',\n 'QB',\n '5900',\n 'LAR@SEA 12/27/2020 04:25PM ET',\n 'LAR',\n '18.88'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'DJ Moore (16006382)',\n 'DJ Moore',\n '16006382',\n 'WR/FLEX',\n '5800',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'CAR',\n '15.13'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'James Conner (16006192)',\n 'James Conner',\n '16006192',\n 'RB/FLEX',\n '5800',\n 'IND@PIT 12/27/2020 01:00PM ET',\n 'PIT',\n '13.16'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Matt Ryan (16006106)',\n 'Matt Ryan',\n '16006106',\n 'QB',\n '5800',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'ATL',\n '18.92'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n \"Le'Veon Bell 
(16006194)\",\n \"Le'Veon Bell\",\n '16006194',\n 'RB/FLEX',\n '5800',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'KC',\n '6.97'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Wayne Gallman Jr. (16006196)',\n 'Wayne Gallman Jr.',\n '16006196',\n 'RB/FLEX',\n '5700',\n 'NYG@BAL 12/27/2020 01:00PM ET',\n 'NYG',\n '10.48'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Marquise Brown (16006386)',\n 'Marquise Brown',\n '16006386',\n 'WR/FLEX',\n '5700',\n 'NYG@BAL 12/27/2020 01:00PM ET',\n 'BAL',\n '10.89'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'TE',\n 'Mark Andrews (16006638)',\n 'Mark Andrews',\n '16006638',\n 'TE/FLEX',\n '5700',\n 'NYG@BAL 12/27/2020 01:00PM ET',\n 'BAL',\n '12.48'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Mitchell Trubisky (16006107)',\n 'Mitchell Trubisky',\n '16006107',\n 'QB',\n '5700',\n 'CHI@JAX 12/27/2020 01:00PM ET',\n 'CHI',\n '15.99'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Amari Cooper (16006384)',\n 'Amari Cooper',\n '16006384',\n 'WR/FLEX',\n '5700',\n 'PHI@DAL 12/27/2020 04:25PM ET',\n 'DAL',\n '15.69'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Philip Rivers (16006108)',\n 'Philip Rivers',\n '16006108',\n 'QB',\n '5600',\n 'IND@PIT 12/27/2020 01:00PM ET',\n 'IND',\n '16.91'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Melvin Gordon III (16006198)',\n 'Melvin Gordon III',\n '16006198',\n 'RB/FLEX',\n '5600',\n 'DEN@LAC 12/27/2020 04:05PM ET',\n 'DEN',\n '13.81'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Robby Anderson (16006388)',\n 'Robby Anderson',\n '16006388',\n 'WR/FLEX',\n '5500',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'CAR',\n '15.01'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'T.Y. Hilton (16006390)',\n 'T.Y. Hilton',\n '16006390',\n 'WR/FLEX',\n '5500',\n 'IND@PIT 12/27/2020 01:00PM ET',\n 'IND',\n '11.12'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Joe Mixon (16006200)',\n 'Joe Mixon',\n '16006200',\n 'RB/FLEX',\n '5500',\n 'CIN@HOU 12/27/2020 01:00PM ET',\n 'CIN',\n '17.27'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Andy Dalton (16006109)',\n 'Andy Dalton',\n '16006109',\n 'QB',\n '5500',\n 'PHI@DAL 12/27/2020 04:25PM ET',\n 'DAL',\n '11.64'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Keke Coutee (16006392)',\n 'Keke Coutee',\n '16006392',\n 'WR/FLEX',\n '5400',\n 'CIN@HOU 12/27/2020 01:00PM ET',\n 'HOU',\n '11.27'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Carson Wentz (16006110)',\n 'Carson Wentz',\n '16006110',\n 'QB',\n '5400',\n 'PHI@DAL 12/27/2020 04:25PM ET',\n 'PHI',\n '18.37'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Teddy Bridgewater (16006112)',\n 'Teddy Bridgewater',\n '16006112',\n 'QB',\n '5300',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'CAR',\n '19.07'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Benny Snell Jr. 
(16006204)',\n 'Benny Snell Jr.',\n '16006204',\n 'RB/FLEX',\n '5300',\n 'IND@PIT 12/27/2020 01:00PM ET',\n 'PIT',\n '5.99'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Tyrod Taylor (16006111)',\n 'Tyrod Taylor',\n '16006111',\n 'QB',\n '5300',\n 'DEN@LAC 12/27/2020 04:05PM ET',\n 'LAC',\n '4.51'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Carlos Hyde (16006202)',\n 'Carlos Hyde',\n '16006202',\n 'RB/FLEX',\n '5300',\n 'LAR@SEA 12/27/2020 04:25PM ET',\n 'SEA',\n '9.03'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'CeeDee Lamb (16006394)',\n 'CeeDee Lamb',\n '16006394',\n 'WR/FLEX',\n '5300',\n 'PHI@DAL 12/27/2020 04:25PM ET',\n 'DAL',\n '13.69'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Alex Smith (16006114)',\n 'Alex Smith',\n '16006114',\n 'QB',\n '5200',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'WAS',\n '10.44'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Dwayne Haskins Jr. (16006115)',\n 'Dwayne Haskins Jr.',\n '16006115',\n 'QB',\n '5200',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'WAS',\n '13'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Gardner Minshew II (16006113)',\n 'Gardner Minshew II',\n '16006113',\n 'QB',\n '5200',\n 'CHI@JAX 12/27/2020 01:00PM ET',\n 'JAX',\n '19.85'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Russell Gage (16006396)',\n 'Russell Gage',\n '16006396',\n 'WR/FLEX',\n '5100',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'ATL',\n '10.90'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Nick Foles (16006116)',\n 'Nick Foles',\n '16006116',\n 'QB',\n '5100',\n 'CHI@JAX 12/27/2020 01:00PM ET',\n 'CHI',\n '14.42'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Nyheim Hines (16006206)',\n 'Nyheim Hines',\n '16006206',\n 'RB/FLEX',\n '5000',\n 'IND@PIT 12/27/2020 01:00PM ET',\n 'IND',\n '12.15'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Sammy Watkins (16006398)',\n 'Sammy Watkins',\n '16006398',\n 'WR/FLEX',\n '5000',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'KC',\n '9.67'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Drew Lock (16006118)',\n 'Drew Lock',\n '16006118',\n 'QB',\n '5000',\n 'DEN@LAC 12/27/2020 04:05PM ET',\n 'DEN',\n '14.73'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Sam Darnold (16006117)',\n 'Sam Darnold',\n '16006117',\n 'QB',\n '5000',\n 'CLE@NYJ 12/27/2020 01:00PM ET',\n 'NYJ',\n '11.71'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Daniel Jones (16006119)',\n 'Daniel Jones',\n '16006119',\n 'QB',\n '5000',\n 'NYG@BAL 12/27/2020 01:00PM ET',\n 'NYG',\n '13.73'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'QB',\n 'Colt McCoy (16006120)',\n 'Colt McCoy',\n '16006120',\n 'QB',\n '5000',\n 'NYG@BAL 12/27/2020 01:00PM ET',\n 'NYG',\n '5.05'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'RB',\n 'Duke Johnson (16006208)',\n 'Duke Johnson',\n '16006208',\n 'RB/FLEX',\n '5000',\n 'CIN@HOU 12/27/2020 01:00PM ET',\n 'HOU',\n '7.86'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'WR',\n 'Curtis Samuel (16006402)',\n 'Curtis Samuel',\n '16006402',\n 'WR/FLEX',\n '4900',\n 'CAR@WAS 12/27/2020 04:05PM ET',\n 'CAR',\n '13.48'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'TE',\n 'Logan Thomas (16006640)',\n 'Logan Thomas',\n '16006640',\n 
*(Remaining output rows omitted: the player pool continues in the same format, listing position, name with ID, player ID, roster slot, salary, game, team abbreviation, and projected points for each entry, from the $4,900 salaries down to the $1,900 Falcons DST.)*
'JAX',\n '3.86'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'DST',\n 'Jets (16006830)',\n 'Jets ',\n '16006830',\n 'DST',\n '2000',\n 'CLE@NYJ 12/27/2020 01:00PM ET',\n 'NYJ',\n '4'],\n ['',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n '',\n 'DST',\n 'Falcons (16006831)',\n 'Falcons ',\n '16006831',\n 'DST',\n '1900',\n 'ATL@KC 12/27/2020 01:00PM ET',\n 'ATL',\n '4.86']]\n\n\n\nThe projections are downloaded from the following website https://fantasyfootballanalytics.net/\n\n\n```python\nimport pandas as pd\npd.read_csv('ffa_customrankings2020.csv').head()\n```\n\n\n\n\n
\n\n       playerId           player team position     points      lower      upper  ...\n    0   2564832      Jalen Hurts  PHI       QB  25.076629  24.980000  25.126470  ...\n    1   2558125  Patrick Mahomes   KC       QB  25.011871  23.510600  25.909615  ...\n    2   2506363    Aaron Rodgers   GB       QB  21.628036  19.774000  23.559190  ...\n    3   2560757    Lamar Jackson  BAL       QB  20.932055  18.058083  22.349667  ...\n    4   2558063   Deshaun Watson  HOU       QB  20.622707  18.720000  21.738160  ...\n\n    5 rows \u00d7 22 columns\n
\n\n\n\nThe data processing is handled through the custom class Data.\n\n\n```python\nfrom Data_processing import Data\ndata = Data()\ndata.Get_cost('DKSalaries.csv')\ndata.Get_proj('ffa_customrankings2020.csv',0.5)\ndata.match_data()\n```\n\n\\section{Problem and Motivation}\n\nOur goal now is to find the team with the best possible projected score while staying below the salary cap. \n\n\\begin{equation}\n\\mathrm{max} \\, \\, \\, c^\\intercal x\\\\\n\\mathrm{s.t.}\\,\\,\\, Ax \\stackrel{\\leq}{=} b.\n\\end{equation}\n\nLet $N$ be the number of variables (determined by the number of players and teams in the NFL).\n\n$c \\in \\mathbb{R}^N$ is the vector containing the projected points of each player,\n\n$x \\in \\{0,1\\}^N$ represents the individual players,\n\n$A$ enforces the necessary constraints (e.g. only one Quarterback).\n\nWhy do we need this? Could we not just brute-force the solution?\n\nLet's see how many combinations $C$ we would need to try:\n\n\\begin{equation}\nC = \\binom{\\mathrm{\\#WR}}{3} \\binom{\\mathrm{\\#RB}}{2} \\times (\\mathrm{\\#FLX}-5)\\times \\mathrm{\\#QB}\\times \\mathrm{\\#TE}\\times\\mathrm{\\#DST}\n\\end{equation}\n\n\n```python\nimport scipy.special\n(scipy.special.binom(len(data.Cost_RB.keys()),2)\n *\n scipy.special.binom(len(data.Cost_WR.keys()),3)\n *\n (len(data.Cost_FLX.keys())-5)*len(data.Cost_QB.keys())\n *\n len(data.Cost_DST.keys()))\n```\n\n\n\n\n    21599372793600.0\n\n\n\nThis is the number of possible teams (some of them will not be admissible, i.e. they exceed the salary cap).\n\n\\section{Define the IP}\nAll variables and constraints are defined using PuLP, an open-source library for mixed-integer optimisation that comes with a free solver.\n\nWe start by defining the problem variables; they will be saved in a dictionary. In this way, we can index Projections, Costs and our Variables with the same key (namely the name of the Player).\n\n\n\n```python\n# PuLP objects used throughout the IP definition\nfrom pulp import LpVariable, LpBinary, LpProblem, LpMaximize, lpSum\n\nQB = LpVariable.dicts('QB',{qb for qb in data.Cost_QB.keys()}, cat = LpBinary)\nTE = LpVariable.dicts('TE',{te for te in data.Cost_TE.keys()}, cat = LpBinary)\nRB = LpVariable.dicts('RB',{rb for rb in data.Cost_RB.keys()}, cat = LpBinary)\nWR = LpVariable.dicts('WR',{wr for wr in data.Cost_WR.keys()}, cat = LpBinary)\nDST = LpVariable.dicts('DST', {dst for dst in data.Cost_DST.keys()}, \n                       cat = LpBinary)\nFLX = LpVariable.dicts('FLX', {flx for flx in data.Cost_FLX.keys()}, \n                       cat = LpBinary)\n```\n\nInitialize the problem:\n\n\n```python\nprob = LpProblem('Fantasy Football',LpMaximize)\n```\n\n    /Users/jj3217/opt/anaconda3/lib/python3.7/site-packages/pulp/pulp.py:1198: UserWarning: Spaces are not permitted in the name. Converted to '_'\n      warnings.warn(\"Spaces are not permitted in the name. Converted to '_'\")\n\n\n\n```python\nprint(DST)\ntype(DST['Broncos'])\n```\n\n    {'Chiefs': DST_Chiefs, 'Cowboys': DST_Cowboys, 'Seahawks': DST_Seahawks, 'Bears': DST_Bears, 'Falcons': DST_Falcons, 'Eagles': DST_Eagles, 'Giants': DST_Giants, 'Texans': DST_Texans, 'Chargers': DST_Chargers, 'Panthers': DST_Panthers, 'Steelers': DST_Steelers, 'Broncos': DST_Broncos, 'Jaguars': DST_Jaguars, 'Browns': DST_Browns, 'Rams': DST_Rams, 'Ravens': DST_Ravens, 'Jets': DST_Jets, 'Bengals': DST_Bengals, 'Colts': DST_Colts}\n\n\n\n\n\n    pulp.pulp.LpVariable\n\n\n\n\\subsection{The cost function}\nThe cost function takes the following form:\n\\begin{equation}\n\\sum^N_{i=1} p_i x_i - r \\sum^N_{i=1} r_i x_i,\n\\end{equation}\nwhere $p_i$ denotes the projected score of player/special team $i$ and $x_i$ denotes the corresponding binary variable. 
We have also introduced the hyperparameter $r$, which penalises risky choices. The risk $r_i$ of a player/special team is essentially calculated by looking at the standard deviation.\n\n\n```python\nr = 0.1\nprob += (lpSum(QB[qb]*data.Proj_QB[qb] for qb in data.Cost_QB.keys()) \n         +\n         lpSum(TE[te]*data.Proj_TE[te] for te in data.Cost_TE.keys())\n         + \n         lpSum(RB[rb]*data.Proj_RB[rb] for rb in data.Cost_RB.keys()) \n         + \n         lpSum(WR[wr]*data.Proj_WR[wr] for wr in data.Cost_WR.keys()) \n         + \n         lpSum([DST[dst]*data.Proj_DST[dst] for dst in data.Cost_DST.keys()])\n         + \n         lpSum([FLX[flx]*data.Proj_FLX[flx] for flx in data.Cost_FLX.keys()]) \n         -\n         r*(lpSum(QB[qb]*data.Risk_QB[qb] for qb in data.Cost_QB.keys())\n            +\n            lpSum(TE[te]*data.Risk_TE[te] for te in data.Cost_TE.keys())\n            +\n            lpSum(RB[rb]*data.Risk_RB[rb] for rb in data.Cost_RB.keys())\n            + \n            lpSum(WR[wr]*data.Risk_WR[wr] for wr in data.Cost_WR.keys())\n            + \n            lpSum([DST[dst]*data.Risk_DST[dst] for dst in data.Cost_DST.keys()])\n            + \n            lpSum([FLX[flx]*data.Risk_FLX[flx] for flx \n                   in data.Cost_FLX.keys()])))\n\n```\n\n\\subsection{The Constraints}\nFirst we have to make sure that the exact number of required players is chosen, e.g.\n\\begin{equation}\n\\sum_{x_i \\, \\mathrm{is} \\, \\mathrm{WR}} x_i = 3.\n\\end{equation}\n\n\n```python\nprob +=(lpSum(QB[qb] for qb in data.Cost_QB.keys()) == 1)\nprob +=(lpSum(TE[te] for te in data.Cost_TE.keys()) == 1)\nprob +=(lpSum(RB[rb] for rb in data.Cost_RB.keys()) == 2)\nprob +=(lpSum(WR[wr] for wr in data.Cost_WR.keys()) == 3 )\nprob +=(lpSum(DST[dst] for dst in data.Cost_DST.keys()) == 1 )\nprob +=(lpSum(FLX[flx] for flx in data.Cost_FLX.keys()) == 1 )\n```\n\nThe next constraint makes sure that we do not select the same player as a Wide Receiver/Running Back and as Flex. We see again why storing the variables in a dictionary is useful.\n\n\n```python\nfor wr in data.Cost_WR.keys():\n    prob += (FLX[wr]+WR[wr] <= 1)\nfor rb in data.Cost_RB.keys():\n    prob += (RB[rb]+FLX[rb] <= 1)\n```\n\nWe now make sure that the lineup stays below the salary cap:\n\\begin{equation}\n\\sum^N_{i=1} \\mathrm{salary}_i x_i \\leq 50{,}000.\n\\end{equation}\n\n\n```python\nsalary_cap = 50000\nprob += (lpSum(QB[qb]*data.Cost_QB[qb] for qb in data.Cost_QB.keys())\n         + \n         lpSum(TE[te]*data.Cost_TE[te] for te in data.Cost_TE.keys()) \n         + \n         lpSum(RB[rb]*data.Cost_RB[rb] for rb in data.Cost_RB.keys()) \n         + \n         lpSum(WR[wr]*data.Cost_WR[wr] for wr in data.Cost_WR.keys()) \n         + \n         lpSum(DST[dst]*data.Cost_DST[dst] for dst in data.Cost_DST.keys())\n         + \n         lpSum(FLX[flx]*data.Cost_FLX[flx] for flx in data.Cost_FLX.keys())\n         <= salary_cap)\n\n```\n\nThis constraint is optional and again depends on the hyperparameter max_per_team.\nIt enforces that our lineup contains at most max_per_team players from any one team.\n\n\n```python\nmax_per_team = 2\nfor t in data.Teams:\n    prob += (lpSum(QB[qb] for qb in data.Cost_QB.keys() \n                   if data.Player_Team[qb] == t)\n             +\n             lpSum(TE[te] for te in data.Cost_TE.keys() \n                   if data.Player_Team[te] == t)\n             + \n             lpSum(RB[rb] for rb in data.Cost_RB.keys()\n                   if data.Player_Team[rb] == t) \n             + \n             lpSum(WR[wr] for wr in data.Cost_WR.keys() \n                   if data.Player_Team[wr] == t) \n             + \n             lpSum(DST[dst] for dst in data.Cost_DST.keys() \n                   if data.Player_Team[dst] == t)\n             +\n             lpSum(FLX[flx] for flx in data.Cost_FLX.keys() \n                   if data.Player_Team[flx] == t)\n             <= max_per_team)\n```\n\nWe now call the solver provided by PuLP, which solves the optimisation problem in no time.\n\n\n```python\nprob.solve()\n```\n\n\n\n\n    1\n\n\n\nThis is the optimal 
team for the given week:\n\n\n```python\nfor v in prob.variables():\n if v.varValue > 0:\n print( v.name, \"=\", v.varValue)\n \n```\n\n DST_Texans = 1.0\n FLX_David_Johnson = 1.0\n QB_Jalen_Hurts = 1.0\n RB_Austin_Ekeler = 1.0\n RB_Nick_Chubb = 1.0\n TE_Dallas_Goedert = 1.0\n WR_Amari_Cooper = 1.0\n WR_Jerry_Jeudy = 1.0\n WR_Marquise_Brown = 1.0\n\n\n\n```python\nprob += (lpSum(v for v in prob.variables() if v.varValue > 0) \n <= 0) \nprob.solve() \n```\n\n\n\n\n 1\n\n\n\nIt is possible to add new constraints and solve the optimisation problem again. \nHere we chose to add the constraint that no player of the previous line can be chosen again. In principle this can be limited to any number e.g. we can add a constraint that allows to reuse $n$ players.\n\n\n```python\nfor v in prob.variables():\n if v.varValue > 0:\n print( v.name, \"=\", v.varValue)\n```\n\n DST_Ravens = 1.0\n FLX_Nick_Chubb = 1.0\n QB_Patrick_Mahomes = 1.0\n RB_David_Johnson = 1.0\n RB_David_Montgomery = 1.0\n TE_Hayden_Hurst = 1.0\n WR_Brandin_Cooks = 1.0\n WR_Cam_Sims = 1.0\n WR_Marvin_Hall = 1.0\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "1e846b81e2c919fe2e8deae8ea115ea1f39a1145", "size": 202366, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "presentation.ipynb", "max_stars_repo_name": "phyjonas/Fantasy-Football-optimiser", "max_stars_repo_head_hexsha": "2a868772dc2798429963eb991dfd9456b7d84058", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "presentation.ipynb", "max_issues_repo_name": "phyjonas/Fantasy-Football-optimiser", "max_issues_repo_head_hexsha": "2a868772dc2798429963eb991dfd9456b7d84058", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "presentation.ipynb", "max_forks_repo_name": "phyjonas/Fantasy-Football-optimiser", "max_forks_repo_head_hexsha": "2a868772dc2798429963eb991dfd9456b7d84058", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.580284316, "max_line_length": 442, "alphanum_fraction": 0.2513861024, "converted": true, "num_tokens": 40928, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5350984286266115, "lm_q2_score": 0.5195213219520929, "lm_q1q2_score": 0.27799504301458483}} {"text": "# Table of Contents\n

\n\n# Unicycle Model\n\nA very simple model. It is simple, but because the velocity vector always points in the vehicle's heading direction, it is somewhat unrealistic.\n\n\\begin{align}\nx_{k+1} &= x_{k} + v_k \\cos(\\phi_k)dt\\\\\ny_{k+1} &= y_{k} + v_k \\sin(\\phi_k)dt\\\\\n\\phi_{k+1} &= \\phi_{k} + \\frac{v_{k}}{L}\\tan(\\delta_k)dt\\\\\nv_{k+1} &= v_{k} + a_k dt \\\\\n\\end{align}\n\n\n# Kinematic bicycle Model\nCompared to the Unicycle Model, the slip angle of the vehicle body is taken into account, but the slip angles of the tires are not.\nIt is said to be valid at speeds below about 15 km/h (e.g. for parking maneuvers).\n\n\\begin{align}\nx_{k+1} &= x_{k} + v_k \\cos(\\phi_k+\\beta_k)dt \\\\\ny_{k+1} &= y_{k} + v_k \\sin(\\phi_k+\\beta_k)dt\\\\\n\\phi_{k+1} &= \\phi_{k} + \\frac{v_{k}}{L_r}\\sin(\\beta_k)dt\\\\\nv_{k+1} &= v_{k} + a_k dt\\\\\n\\beta_{k} &= \\tan^{-1}\\left(\\frac{L_r}{L_f+L_r}\\tan(\\delta_{f,k})\\right)\n\\end{align}\n\n\n\n# Dynamic bicycle Model\nThis is the model to use, in general, when the vehicle travels at high speed.\nBecause $v_x$ appears in the denominator, the model can no longer be evaluated when the velocity is zero.\n\n\\begin{align}\nx_{k+1} &= x_{k} + v_{x,k} \\cos(\\phi_k)dt - v_{y,k} \\sin(\\phi_k)dt \\\\\ny_{k+1} &= y_{k} + v_{x,k} \\sin(\\phi_k)dt + v_{y,k} \\cos(\\phi_k)dt\\\\\n\\phi_{k+1} &= \\phi_{k} + \\omega_k dt\\\\\nF_{fy,k} &= -C_f\\left( \\frac{v_{y,k}+L_f\\omega_k}{v_{x,k}}-\\delta_k\\right) \\\\\nF_{ry,k} &= -C_r\\left( \\frac{v_{y,k}-L_r\\omega_k}{v_{x,k}}\\right) \\\\\nv_{x,k+1} &= v_{x,k} + \\left(a_k-\\frac{F_{fy,k}\\sin(\\delta_{k})}{m}+v_{y,k}\\omega_k\\right)dt\\\\\nv_{y,k+1} &= v_{y,k} + \\left(\\frac{F_{ry,k}}{m}+\\frac{F_{fy,k}\\cos(\\delta_{k})}{m}-v_{x,k}\\omega_k\\right)dt\\\\\n\\omega_{k+1} &= \\omega_{k} + \\frac{dt}{I_z}\\left(F_{fy,k}L_f\\cos(\\delta_{k})-F_{ry,k}L_r\\right)\n\\end{align}\n\n# Compare Model\n\n\n```python\n#%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport unicycle_model\nimport kinematic_bicycle_model\nimport dynamic_bicyccle_model\nimport math\n\nT = 100\na = [1.0] * T\ndelta = [math.radians(1.0)] * T\n\nustate = unicycle_model.State()\nkstate = kinematic_bicycle_model.State()\ndstate = dynamic_bicyccle_model.State()\n\nux,uy,uyaw,uv, = [],[],[],[]\ndx,dy,dyaw,dv, = [],[],[],[]\nkx,ky,kyaw,kv, kbeta= [],[],[],[],[]\ntime = []\nt = 0.0\n\n\nfor (ai, di) in zip(a, delta):\n    t = t + unicycle_model.dt\n    time.append(t)\n    \n    ustate = unicycle_model.update(ustate, ai, di)\n    ux.append(ustate.x+kinematic_bicycle_model.Lr*math.cos(ustate.yaw))\n    uy.append(ustate.y+kinematic_bicycle_model.Lr*math.sin(ustate.yaw))\n    uyaw.append(ustate.yaw)\n    uv.append(ustate.v)\n    \n    kstate = kinematic_bicycle_model.update(kstate, ai, di)\n    kx.append(kstate.x)\n    ky.append(kstate.y)\n    kyaw.append(kstate.yaw)\n    kv.append(kstate.v)\n    kbeta.append(kstate.beta)\n    \n    dstate = dynamic_bicyccle_model.update(dstate, ai, di)\n    dx.append(dstate.x)\n    dy.append(dstate.y)\n    dyaw.append(dstate.yaw)\n    \n\n```\n\n    0.0\n    6.0189879062819264e-05\n    0.00018056963718845778\n    0.0003611392743769156\n    0.0006018987906281927\n    0.0009028481859422891\n    0.0012639874603192047\n    0.0016853166137589396\n    0.0021668356462614937\n    
0.002708544557826867\n 0.0033104433484550597\n 0.003972532018146071\n 0.004694810566899903\n 0.005477278994716553\n 0.006319937301596024\n 0.0072227854875383125\n 0.00818582355254342\n 0.009209051496611349\n 0.010292469319742096\n 0.011436077021935663\n 0.012639874603192047\n 0.013903862063511251\n 0.015228039402893275\n 0.01661240662133812\n 0.018056963718845784\n 0.019561710695416266\n 0.021126647551049565\n 0.022751774285745686\n 0.024437090899504625\n 0.026182597392326385\n 0.027988293764210963\n 0.02985418001515836\n 0.03178025614516858\n 0.03376652215424161\n 0.03581297804237747\n 0.03791962380957615\n 0.04008645945583764\n 0.042313484981161956\n 0.044600700385549086\n 0.04694810566899904\n 0.04935570083151181\n 0.0518234858730874\n 0.05435146079372581\n 0.05693962559342704\n 0.05958798027219109\n 0.062296524830017956\n 0.06506525926690765\n 0.06789418358286016\n 0.07078329777787548\n 0.07373260185195363\n 0.07674209580509458\n 0.07981177963729837\n 0.08294165334856497\n 0.08613171693889439\n 0.08938197040828663\n 0.09269241375674168\n 0.09606304698425956\n 0.09949387009084026\n 0.10298488307648378\n 0.10653608594119011\n 0.11014747868495926\n 0.11381906130779124\n 0.11755083380968603\n 0.12134279619064364\n 0.12519494845066406\n 0.12910729058974732\n 0.1330798226078934\n 0.13711254450510227\n 0.141205456281374\n 0.14535855793670852\n 0.14957184947110586\n 0.153845330884566\n 0.158179002177089\n 0.1625728633486748\n 0.1670269143993234\n 0.17154115532903486\n 0.17611558613780912\n 0.1807502068256462\n 0.1854450173925461\n 0.19020001783850882\n 0.19501520816353435\n 0.1998905883676227\n 0.20482615845077387\n 0.20982191841298786\n 0.21487786825426466\n 0.2199940079746043\n 0.22517033757400676\n 0.23040685705247202\n 0.2357035664100001\n 0.241060465646591\n 0.24647755476224473\n 0.25195483375696126\n 0.25749230263074063\n 0.2630899613835828\n 0.2687478100154878\n 0.2744658485264556\n 0.28024407691648623\n 0.2860824951855797\n 0.291981103333736\n 0.2979399013609551\n\n\n\n```python\n%timeit dynamic_bicyccle_model.update(dstate,ai,di)\n```\n\n 100000 loops, best of 3: 4.47 \u00b5s per loop\n\n\n\n```python\ndynamic_bicyccle_model?\n```\n\n\n```python\nplt.plot(ux,uy,label=\"unicycle_model\")\nplt.plot(kx,ky,label=\"kinematic_bicycle_model\")\nplt.plot(dx,dy,label=\"dynamic_bicyccle_model\")\nplt.axis(\"equal\")\nplt.xlabel(\"X[m]\")\nplt.ylabel(\"Y[m]\")\nplt.legend()\nplt.grid(True)\nplt.show()\n```\n\n\n```python\nplt.plot(ux,uy,label=\"unicycle_model\")\nplt.plot(kx,ky,label=\"kinematic_bicycle_model\")\nplt.axis(\"equal\")\nplt.xlabel(\"X[m]\")\nplt.ylabel(\"Y[m]\")\nplt.legend()\nplt.grid(True)\nplt.show()\n```\n\n\n```python\nplt.plot(time, uyaw,label=\"unicycle_model\")\nplt.plot(time, kyaw,label=\"kinematic_bicycle_model\")\nplt.xlabel(\"Time[s]\")\nplt.ylabel(\"Yaw[rad]\")\nplt.legend()\nplt.grid(True)\nplt.show()\n```\n\n# linealize Dynamic Bicycle Model memo\n\n\n```python\n%matplotlib inline\n```\n\n\n```python\nimport sympy\nfrom sympy import init_printing\ninit_printing()\n\nx,y,vx, vy, phi, a, d, cf, cr, m, Lf, Lr, w,Iz = sympy.symbols('x y vx vy phi a d cf cr m Lf Lr, w, Iz')\nf = vx*sympy.cos(phi) - vy*sympy.sin(phi)\nf\n```\n\n\n```python\nsympy.diff(f,x)\n```\n\n\n```python\nsympy.diff(f,y)\n```\n\n\n```python\nsympy.diff(f,phi)\n```\n\n\n```python\nsympy.diff(f,vy)\n```\n\n\n```python\nsympy.diff(f,vx)\n```\n\n\n```python\nf2 = vx*sympy.sin(phi) - vy*sympy.cos(phi)\nprint(f2)\n```\n\n vx*sin(phi) - 
vy*cos(phi)\n\n\n\n```python\nsympy.diff(f2,phi)\n```\n\n\n```python\nfvx = a + sympy.sin(d) * cf / m * ((vy+Lf*w)/vx - d) + vy*w\nfvx\n```\n\n\n```python\nsympy.diff(fvx,vx)\n```\n\n\n```python\nsympy.diff(fvx,vy)\n```\n\n\n```python\nsympy.diff(fvx,w)\n```\n\n\n```python\nsympy.diff(fvx,x)\n```\n\n\n```python\nsympy.diff(fvx,d)\n```\n\n\n```python\nfvy = - cr / m * (vy-Lr*w)/vx + sympy.cos(d)/m*((vy+Lf*w)/vx - d) -vx*w\nfvy\n```\n\n\n```python\nsympy.diff(fvy, vx)\n```\n\n\n```python\nsympy.diff(fvy, vy)\n```\n\n\n```python\nsympy.diff(fvy, w)\n```\n\n\n```python\nsympy.diff(fvy,d)\n```\n\n\n```python\nfw = -cf/Iz*((vy+Lf*w)/vx-d)*Lf*sympy.cos(d)+cr/Iz*(vy-Lr*w)/vx*Lr\nfw\n```\n\n\n```python\nsympy.diff(fw,vx)\n```\n\n\n```python\nsympy.diff(fw,vy)\n```\n\n\n```python\nsympy.diff(fw,w)\n```\n\n\n```python\nsympy.diff(fw,d)\n```\n", "meta": {"hexsha": "6b069458d4fd505f59428034d567e928910cfd49", "size": 91763, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "steer_vehicle_model/vehicle_model_note.ipynb", "max_stars_repo_name": "stevemats/PyAdvancedControl", "max_stars_repo_head_hexsha": "f767c3753c7a8e21dd3ac58c4b036480fa4623c4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 373, "max_stars_repo_stars_event_min_datetime": "2017-06-02T17:40:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T08:27:05.000Z", "max_issues_repo_path": "steer_vehicle_model/vehicle_model_note.ipynb", "max_issues_repo_name": "stevemats/PyAdvancedControl", "max_issues_repo_head_hexsha": "f767c3753c7a8e21dd3ac58c4b036480fa4623c4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-07-26T11:01:18.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-26T18:39:15.000Z", "max_forks_repo_path": "steer_vehicle_model/vehicle_model_note.ipynb", "max_forks_repo_name": "stevemats/PyAdvancedControl", "max_forks_repo_head_hexsha": "f767c3753c7a8e21dd3ac58c4b036480fa4623c4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 166, "max_forks_repo_forks_event_min_datetime": "2017-05-06T17:50:38.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T01:06:14.000Z", "avg_line_length": 87.8955938697, "max_line_length": 4822, "alphanum_fraction": 0.8143805237, "converted": true, "num_tokens": 3193, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.546738151984614, "lm_q2_score": 0.5078118642792044, "lm_q1q2_score": 0.27764012023187384}} {"text": "# Symbolic Partial Derivative Routine\n\n## Authors: Zach Etienne & Tyler Knowles\n\n## This module contains a routine for computing an analytic partial derivative of a mathematical expression that is written as seveal subexpressions.\n\n**Notebook Status:** In progress \n\n**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). 
Additionally, this notebook has been validated by checking that results are consistent with finite-difference derivative values in [LALSuite](https://git.ligo.org/lscsoft/lalsuite).\n\n### NRPy+ Source Code for this module: [SEOBNR_Derivative_Routine.py](../../edit/in_progress/SEOBNR/SEOBNR_Derivative_Routine.py)\n\n## Introduction\n$$\\label{intro}$$\n\nThis notebook documents the symbolic partial derivative routine used to generate analytic derivatives of the [SEOBNRv3](https://git.ligo.org/lscsoft/lalsuite) Hamiltonian (documented [here](../Tutorial-SEOBNR_Documentation.ipynb)) and described in [this article](https://arxiv.org/abs/1803.06346).\n\n\n\n# Table of Contents\n$$\\label{toc}$$\n\nThis notebook is organized as follows\n\n1. [Step 1](#initializenrpy): Initialize core Python/NRPy+ modules\n1. [Step 2:](#step2) Read in expressions\n1. [Step 3:](#step3) Read in constants\n1. [Step 4:](#step4) List free symbols\n1. [Step 5:](#step5) Convert expressions to function notation\n1. [Step 6:](#step6) Differentiate with respect to xx\n1. [Step 7:](#step7) Simplify derivative expressions\n1. [Step 9:](#step9) Differentiate with respect to a specific free variable\n1. [Step 10:](#step10) Compute derivatives with respect to each free variable\n1. [Step 11:](#step11) Output result\n1. [Step 12:](#code_validation): Code Validation against `SEOBNR_Derivative_Routine` NRPy+ module\n1. [Step 13:](#latex_pdf_output) Output this notebook to $\\LaTeX$-formatted PDF file\n\n\n\n# Step 1: Initialize core Python/NRPy+ modules \\[Back to [top](#toc)\\]\n$$\\label{initializenrpy}$$\n\nLet's start by importing all the needed modules from Python/NRPy+:\n\n\n```python\n# Step 1.a: import all needed modules from Python/NRPy+:\nimport sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends\nimport os, sys # Standard Python modules for multiplatform OS-level functions\n\n# Step 1.?: check system path so can use outputC; #TylerK: remove and put outputC back with other imports\nnrpy_dir_path = os.path.join(\"..\")\nif nrpy_dir_path not in sys.path:\n sys.path.append(nrpy_dir_path)\n\nfrom outputC import * # TylerK: check what is imported and remove *; also find appropriate description\n```\n\n\n\n# Step 2: Read in expressions \\[Back to [top](#toc)\\]\n$$\\label{step2}$$\n\nWe read in the expressions of which we will compute partial derivatives in a single large string before splitting the string by line (carriage return) and by \"=\". Doing so allows us to manipulate the right- and left-hand sides of the expressions appropriately. We store the left- and right-hand sides in the array $\\texttt{lr}$, which consists of $\\texttt{lhrh}$ arrays with left-hand sides $\\texttt{lhs}$ and right-hand sides $\\texttt{rhs}$. 
Note that $\\texttt{Lambda}$ is a protected keyword in Python, so the variable $\\Lambda$ in the Hamiltonian is renamed $\\texttt{Lamb}$.\n\n\n```python\n# Step 2.a: Read in expressions as a (single) string\nwith open('SEOBNR/Hamstring_lite.txt', 'r') as file:\n all_expressions = file.read()\n\n# Step 2.b: Split the expression string by carriage returns\nstring_lines = all_expressions.splitlines()\n\n#TylerK: for debuggin\nprint(string_lines)\n\n# Step 2.c: Create and populate the \"lr\" array, which separates each line into left- and right-hand sides\n# Each entry is a string of the form lhrh(lhs='',rhs='')\nlr = []\n\nfor i in range(len(string_lines)):\n # Ignore lines with 2 or fewer characters and those starting with #\n if len(string_lines[i]) > 2 and string_lines[i][0] != \"#\":\n # Split each line by its equals sign\n split_line = string_lines[i].split(\"=\")\n # Append the line to \"lr\", removing spaces, \"sp.\" prefixes, and replacing Lambda->Lamb\n # (Lambda is a protected keyword):\n lr.append(lhrh(lhs=split_line[0].replace(\" \",\"\").replace(\"Lambda\",\"Lamb\"),\n rhs=split_line[1].replace(\" \",\"\").replace(\"sp.\",\"\").replace(\"Lambda\",\"Lamb\")))\n\n# Step 2.d: Separate and simplify right- and left-hand sides into separate arrays\nlhss = []\nrhss = []\nfor i in range(len(lr)):\n lhss.append(sp.sympify(lr[i].lhs))\n rhss.append(sp.sympify(lr[i].rhs))\n\n# Step 2.e: Read in variables with which to take derivatives\nwith open('SEOBNR/Hamstring_variables.txt', 'r') as file:\n variables = file.read()\n# Step 2.f: Split the variable string by carriage returns\ndynamic_variables = variables.splitlines()\n#TylerK: for debuggin\nprint(lhss)\nprint(rhss)\nprint(dynamic_variables)\n```\n\n ['sigmaKerr0 = s1x + s2x', 'r2 = x*x + y*y + z*z', 'r = sp.sqrt(r2)', 'u = 1/r', 'tmppx = px - r*eta', 'quagsire = eta*eta']\n [sigmaKerr0, r2, r, u, tmppx, quagsire]\n [s1x + s2x, x**2 + y**2 + z**2, sqrt(r2), 1/r, -eta*r + px, eta**2]\n ['x', 'y', 'z', 'px', 'py', 'pz', 's1x', 's1y', 's1z', 's2x', 's2y', 's2z']\n\n\n\n\n# Step 3: Read in constants \\[Back to [top](#toc)\\]\n$$\\label{step3}$$\n\nWe declare the constant values; derivatives with respect to these variables will be set to zero.\n\n\n```python\n# Step 3.a: Read in constants as a (single) string\nwith open('SEOBNR/constants.txt', 'r') as file:\n constants = file.read()\n\n# Step 3.b: Split the input string by carriage returns\nconstants_as_strings = constants.splitlines()\n\n# Step 3.c: Create \"input_constants\" array and populate with SymPy constants\ninput_constants = []\nfor constant in constants_as_strings:\n constant = sp.symbols(constant,real=True)\n input_constants.append(constant)\n\n#TylerK: for debuggin\nprint(input_constants)\n```\n\n [m1, m2, eta, tortoise, dSO, dSS]\n\n\n\n\n# Step 4: List free symbols \\[Back to [top](#toc)\\]\n$$\\label{step4}$$\n\nBy ''free symbols'' we mean the variables in the right-hand sides. We first create a list of all such terms (using SymPy's built-in free_symbol attribute), including duplicates, and then strip the duplicates. 
We then remove input constants from the symbol list.\n\n\n```python\n# Step 4.a: Prepare array of \"free symbols\" in the right-hand side expressions\nfull_symbol_list_with_dups = []\nfor i in range(len(lr)):\n for variable in rhss[i].free_symbols:\n full_symbol_list_with_dups.append(variable)\n\n# TylerK: print for debuggin\nprint(full_symbol_list_with_dups)\n\n# Step 4.b: Remove duplicate free symbols\nfull_symbol_list = superfast_uniq(full_symbol_list_with_dups)\n\n# Step 4.c: Remove input constants from symbol list\nfor inputconst in input_constants:\n for symbol in full_symbol_list:\n if str(symbol) == str(inputconst):\n full_symbol_list.remove(symbol)\n\n# TylerK: print for debuggin\nprint(full_symbol_list)\n```\n\n [s1x, s2x, z, x, y, r2, r, r, eta, px, eta]\n [s1x, s2x, z, x, y, r2, r, px]\n\n\n\n\n# Step 5: Convert expressions to function notation \\[Back to [top](#toc)\\]\n$$\\label{step5}$$\n\nIn order to compute the partial derivative of each right-hand side, we mark each variable (left-hand side) and each free symbol (in right-hand sides) as a function with argument $\\texttt{xx}$.\n\n\n```python\n# Step 5.a: Convert each left-hand side to function notation\n# while separating and simplifying left- and right-hand sides\nxx = sp.Symbol('xx')\nfunc = []\nfor i in range(len(lr)):\n func.append(sp.sympify(sp.Function(lr[i].lhs)(xx)))\n\n# Step 5.b: Mark each free variable as a function with argument xx\nfull_function_list = []\nfor symb in full_symbol_list:\n func = sp.sympify(sp.Function(str(symb))(xx))\n full_function_list.append(func)\n for i in range(len(rhss)):\n for var in rhss[i].free_symbols:\n if str(var) == str(symb):\n rhss[i] = rhss[i].subs(var,func)\n```\n\n\n\n# Step 6: Differentiate with respect to xx \\[Back to [top](#toc)\\]\n$$\\label{step6}$$\n\nNow we differentiate the right-hand expressions with respect to $\\textrm{xx}$. We use the SymPy $\\texttt{diff}$ command, differentiating with respect to $\\texttt{xx}$. After so doing, we remove $\\texttt{(xx)}$ and \"Derivative\" (which is output by $\\texttt{diff}$, and use \"prm\" suffix to denote the derivative with respect to $\\texttt{xx}$.\n\n\n```python\n# Step 6.a: Use SymPy's diff function to differentiate right-hand sides with respect to xx\n# and append \"prm\" notation to left-hand sides\nlhss_deriv = []\nrhss_deriv = []\nfor i in range(len(rhss)):\n lhss_deriv.append(sp.sympify(str(lhss[i])+\"prm\"))\n newrhs = sp.sympify(str(sp.diff(rhss[i],xx)).replace(\"(xx)\",\"\").replace(\", xx\",\"prm\").replace(\"Derivative\",\"\"))\n rhss_deriv.append(newrhs)\n#TylerK: for debuggin\nprint(lhss_deriv)\nprint(rhss_deriv)\n```\n\n [sigmaKerr0prm, r2prm, rprm, uprm, tmppxprm, quagsireprm]\n [s1xprm + s2xprm, 2*x*xprm + 2*y*yprm + 2*z*zprm, r2prm/(2*sqrt(r2)), -rprm/r**2, -eta*rprm + pxprm, 0]\n\n\n\n\n# Step 7: Simplify derivative expressions \\[Back to [top](#toc)\\]\n$$\\label{step7}$$\n\nWe declare a function to simply the derivative expressions. 
In particular, we want to remove terms equal to zero.\n\n\n```python\n# Derivative simplification function\ndef simplify_deriv(lhss_deriv,rhss_deriv):\n # Copy expressions into another array\n lhss_deriv_simp = []\n rhss_deriv_simp = []\n for i in range(len(rhss_deriv)):\n lhss_deriv_simp.append(lhss_deriv[i])\n rhss_deriv_simp.append(rhss_deriv[i])\n # If a right-hand side is 0, substitute value 0 for the corresponding left-hand side in later terms\n for i in range(len(rhss_deriv_simp)):\n if rhss_deriv_simp[i] == 0:\n for j in range(i+1,len(rhss_deriv_simp)):\n for var in rhss_deriv_simp[j].free_symbols:\n if str(var) == str(lhss_deriv_simp[i]):\n rhss_deriv_simp[j] = rhss_deriv_simp[j].subs(var,0)\n zero_elements_to_remove = []\n # Create array of indices for expressions that are zero\n for i in range(len(rhss_deriv_simp)):\n if rhss_deriv_simp[i] == sp.sympify(0):\n zero_elements_to_remove.append(i)\n\n # When removing terms that are zero, we need to take into account their new index (after each removal)\n count = 0\n for i in range(len(zero_elements_to_remove)):\n del lhss_deriv_simp[zero_elements_to_remove[i]+count]\n del rhss_deriv_simp[zero_elements_to_remove[i]+count]\n count -= 1\n return lhss_deriv_simp,rhss_deriv_simp\n\n# Step 7: Call the simplication function and then copy results\nlhss_deriv_simp,rhss_deriv_simp = simplify_deriv(lhss_deriv,rhss_deriv)\nlhss_deriv = lhss_deriv_simp\nrhss_deriv = rhss_deriv_simp\n#TylerK: for debuggin\nprint(lhss_deriv)\nprint(rhss_deriv)\n```\n\n [sigmaKerr0prm, r2prm, rprm, uprm, tmppxprm]\n [s1xprm + s2xprm, 2*x*xprm + 2*y*yprm + 2*z*zprm, r2prm/(2*sqrt(r2)), -rprm/r**2, -eta*rprm + pxprm]\n\n\n\n\n# Step 8: Differentiate with respect to a specific free variable \\[Back to [top](#toc)\\]\n$$\\label{step8}$$\n\nIn [Step 6](#step6) we took a generic derivative of each term, assuming it is a function of the varible $\\textrm{xx}$. 
We now define a function that will select a specific free variable for differentiation.\n\n\n```python\ndef deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=0,zprm=0,pxprm=0,pyprm=0,pzprm=0,\n s1xprm=0,s1yprm=0,s1zprm=0,s2xprm=0,s2yprm=0,s2zprm=0):\n\n # Copy expressions into another array\n lhss_deriv_new = []\n rhss_deriv_new = []\n for i in range(len(rhss_deriv)):\n lhss_deriv_new.append(lhss_deriv[i])\n rhss_deriv_new.append(rhss_deriv[i])\n # For each free symbol, replace it with the desired derivative\n for i in range(len(rhss_deriv_new)):\n for var in rhss_deriv_new[i].free_symbols:\n if str(var)==\"xprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,xprm)\n elif str(var)==\"yprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,yprm)\n elif str(var)==\"zprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,zprm)\n elif str(var)==\"pxprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,pxprm)\n elif str(var)==\"pyprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,pyprm)\n elif str(var)==\"pzprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,pzprm)\n elif str(var)==\"s1xprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,s1xprm)\n elif str(var)==\"s1yprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,s1yprm)\n elif str(var)==\"s1zprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,s1zprm)\n elif str(var)==\"s2xprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,s2xprm)\n elif str(var)==\"s2yprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,s2yprm)\n elif str(var)==\"s2zprm\":\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,s2zprm)\n # Simplify derivative expressions again\n lhss_deriv_simp,rhss_deriv_simp = simplify_deriv(lhss_deriv_new,rhss_deriv_new)\n return lhss_deriv_simp,rhss_deriv_simp\n\n#def deriv_onevar_test(lhss_deriv,rhss_deriv,variable_list,variable):\ndef deriv_onevar_test(lhss_deriv,rhss_deriv,variable_list,variable):\n variableprm_list = []\n for variable in variable_list:\n variableprm_list.append(str(variable)+\"prm\")\n\n # Copy expressions into another array\n lhss_deriv_new = []\n rhss_deriv_new = []\n for i in range(len(rhss_deriv)):\n lhss_deriv_new.append(lhss_deriv[i])\n rhss_deriv_new.append(rhss_deriv[i])\n # For each free symbol, replace it with the desired derivative\n for i in range(len(rhss_deriv_new)):\n #for var in rhss_deriv_new[i].free_symbols:\n for var in variableprm_list:\n# if variableprm_list.index(str(var))==index:\n if str(var)==str(variable+\"prm\"):\n #TylerK: print for debuggin\n print(\"I'm in the == loop\")\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,1)\n else:\n #TylerK: print for debuggin\n print(\"I'm in the else loop with var \", var, \" and variable \", variable)\n rhss_deriv_new[i] = rhss_deriv_new[i].subs(var,0)\n # Simplify derivative expressions again\n lhss_deriv_simp,rhss_deriv_simp = simplify_deriv(lhss_deriv_new,rhss_deriv_new)\n #TylerK: print for debuggin\n print(lhss_deriv_simp)\n print(rhss_deriv_simp)\n return lhss_deriv_simp,rhss_deriv_simp\n```\n\n\n\n# Step 9: Compute derivatives with respect to each free variable \\[Back to [top](#toc)\\]\n$$\\label{step9}$$\n\nThis needs to be made into a loop!\n\n\n```python\nprint(\"New routine\")\nlhss_derivative = dict()\nrhss_derivative = dict()\n#for index in range(len(dynamic_variables)):\nfor var in dynamic_variables:\n #TylerK: print for debuggin\n print(\"computing deriv with respect to \",var)\n lhss_derivative[var],rhss_derivative[var] = deriv_onevar_test(lhss_deriv,rhss_deriv,dynamic_variables,var)\n 
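# Note: lhss_derivative and rhss_derivative are dictionaries intended to map each\n    # dynamic-variable name (e.g. 'x', 'px') to the corresponding derivative expressions.\n    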
#lhss_derivative[dynamic_variables[index]],rhss_derivative[dynamic_variables[index]] = deriv_onevar_test(lhss_deriv,rhss_deriv,dynamic_variables,index)\n #lhss_deriv_partial,rhss_deriv_partial = deriv_onevar_test(lhss_deriv,rhss_deriv,dynamic_variables,index)\n #lhss_derivative.append(lhss_deriv_partial)\n #rhss_derivative.append(rhss_deriv_partial)\n#TylerK: for debuggin\nprint(\"left-hand side is\", lhss_derivative)\nprint(\"right-hand side is\", rhss_derivative)\nprint(\"left-hand side is\", lhss_derivative['x'])\nprint(\"right-hand side is\", rhss_derivative['x'])\n\nprint(\"Old routine\")\nlhss_deriv_x,rhss_deriv_x = deriv_onevar(lhss_deriv,rhss_deriv, xprm=1,yprm=0,zprm=0,pxprm=0,pyprm=0,pzprm=0,\n s1xprm=0,s1yprm=0,s1zprm=0,s2xprm=0,s2yprm=0,s2zprm=0)\nlhss_deriv_y,rhss_deriv_y = deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=1,zprm=0,pxprm=0,pyprm=0,pzprm=0,\n s1xprm=0,s1yprm=0,s1zprm=0,s2xprm=0,s2yprm=0,s2zprm=0)\nlhss_deriv_z,rhss_deriv_z = deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=0,zprm=1,pxprm=0,pyprm=0,pzprm=0,\n s1xprm=0,s1yprm=0,s1zprm=0,s2xprm=0,s2yprm=0,s2zprm=0)\nlhss_deriv_px,rhss_deriv_px = deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=0,zprm=0,pxprm=1,pyprm=0,pzprm=0,\n s1xprm=0,s1yprm=0,s1zprm=0,s2xprm=0,s2yprm=0,s2zprm=0)\nlhss_deriv_py,rhss_deriv_py = deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=0,zprm=0,pxprm=0,pyprm=1,pzprm=0,\n s1xprm=0,s1yprm=0,s1zprm=0,s2xprm=0,s2yprm=0,s2zprm=0)\nlhss_deriv_pz,rhss_deriv_pz = deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=0,zprm=0,pxprm=0,pyprm=1,pzprm=1,\n s1xprm=0,s1yprm=0,s1zprm=0,s2xprm=0,s2yprm=0,s2zprm=0)\nlhss_deriv_s1x,rhss_deriv_s1x = deriv_onevar(lhss_deriv,rhss_deriv, xprm=0,yprm=0,zprm=0,pxprm=0,pyprm=0,pzprm=0,\n s1xprm=1,s1yprm=0,s1zprm=0,s2xprm=0,s2yprm=0,s2zprm=0)\nlhss_deriv_s1y,rhss_deriv_s1y = deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=0,zprm=0,pxprm=0,pyprm=0,pzprm=0,\n s1xprm=0,s1yprm=1,s1zprm=0,s2xprm=0,s2yprm=0,s2zprm=0)\nlhss_deriv_s1z,rhss_deriv_s1z = deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=0,zprm=0,pxprm=0,pyprm=0,pzprm=0,\n s1xprm=0,s1yprm=0,s1zprm=1,s2xprm=0,s2yprm=0,s2zprm=0)\nlhss_deriv_s2x,rhss_deriv_s2x = deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=0,zprm=0,pxprm=0,pyprm=0,pzprm=0,\n s1xprm=0,s1yprm=0,s1zprm=0,s2xprm=1,s2yprm=0,s2zprm=0)\nlhss_deriv_s2y,rhss_deriv_s2y = deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=0,zprm=0,pxprm=0,pyprm=0,pzprm=0,\n s1xprm=0,s1yprm=0,s1zprm=0,s2xprm=0,s2yprm=1,s2zprm=0)\nlhss_deriv_s2z,rhss_deriv_s2z = deriv_onevar(lhss_deriv,rhss_deriv,xprm=0,yprm=0,zprm=0,pxprm=0,pyprm=0,pzprm=0,\n s1xprm=0,s1yprm=0,s1zprm=0,s2xprm=0,s2yprm=0,s2zprm=1)\n#TylerK: for debuggin\nprint(\"left-hand side is\", lhss_deriv_px)\nprint(\"right-hand side is\", rhss_deriv_px)\n```\n\n New routine\n computing deriv with respect to x\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in 
the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n []\n []\n computing deriv with respect to y\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and 
variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n []\n []\n computing deriv with respect to z\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm 
in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n []\n []\n computing deriv with respect to px\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm and variable s2z\n I'm in the == loop\n I'm in the else loop with var xprm and variable s2z\n I'm in the else loop with var yprm and variable s2z\n I'm in the else loop with var zprm and variable s2z\n I'm in the else loop with var pxprm and variable s2z\n I'm in the else loop with var pyprm and variable s2z\n I'm in the else loop with var pzprm and variable s2z\n I'm in the else loop with var s1xprm and variable s2z\n I'm in the else loop with var s1yprm and variable s2z\n I'm in the else loop with var s1zprm and variable s2z\n I'm in the else loop with var s2xprm and variable s2z\n I'm in the else loop with var s2yprm 
and variable s2z\n I'm in the == loop\n [... repeated debug output ("I'm in the else loop with var ...prm and variable s2z" / "I'm in the == loop"), printed once per dynamic variable while each derivative is assembled, omitted here for brevity ...]\n computing deriv with respect to py\n computing deriv with respect to pz\n computing deriv with respect to s1x\n computing deriv with respect to s1y\n computing deriv with respect to s1z\n computing deriv with respect to s2x\n computing deriv with respect to s2y\n computing deriv with respect to s2z\n []\n []\n left-hand side is {'y': [], 's1z': [], 's2x': [], 'x': [], 's1y': [], 's2z': [], 'pz': [], 'z': [], 'px': [], 's2y': [], 's1x': [], 'py': []}\n right-hand side is {'y': [], 's1z': [], 's2x': [], 'x': [], 's1y': [], 's2z': [], 'pz': [], 'z': [], 'px': [], 's2y': [], 's1x': [], 'py': []}\n left-hand side is []\n right-hand side is []\n Old routine\n left-hand side is [tmppxprm]\n right-hand side is [1]\n\n\n\n\n# Step 10: Output result \\[Back to [top](#toc)\\]\n$$\\label{step10}$$\n\nWe write the resulting derivatives in C code.\n\n\n```python\nfor var in dynamic_variables:\n with open(\"dHreal_d\"+str(var)+\".txt\", \"w\") as output:\n outstring = \"/* SEOBNR Hamiltonian expression: */\\n\"\n outstringsp = \"\"\n outsplhs = []\n outsprhs = []\n for i in range(len(lr)):\n outstring += outputC(sp.sympify(lr[i].rhs),lr[i].lhs,\"returnstring\",\"outCverbose=False,includebraces=False,CSE_enable=False\")\n outstringsp += lr[i].lhs+\" = \"+lr[i].rhs+\"\\n\"\n outsplhs.append(sp.sympify(lr[i].lhs))\n outsprhs.append(sp.sympify(lr[i].rhs))\n outstring += \"\\n\\n\\n/* SEOBNR \\partial_\"+str(var)+\" H expression: */\\n\"\n# for i in range(len(lhss_deriv_x)):\n# outstring += outputC(rhss_deriv_x[i],str(lhss_deriv_x[i]),\"returnstring\",\"outCverbose=False,includebraces=False,CSE_enable=False\")\n# outstringsp += str(lhss_deriv_x[i])+\" = \"+str(rhss_deriv_x[i])+\"\\n\"\n# outsplhs.append(lhss_deriv_x[i])\n# outsprhs.append(rhss_deriv_x[i])\n for i in range(len(lhss_derivative)):\n outstring += 
outputC(rhss_derivative[var][i],str(lhss_deriv_x[i]),\"returnstring\",\"outCverbose=False,includebraces=False,CSE_enable=False\")\n outstringsp += str(lhss_deriv_x[i])+\" = \"+str(rhss_deriv_x[i])+\"\\n\"\n outsplhs.append(lhss_deriv_x[i])\n outsprhs.append(rhss_deriv_x[i])\n output.write(\"%s\" % outstring)\n```\n\n\n\n# Step 11: Code Validation against `SEOBNR_Derivative_Routine` NRPy+ module \\[Back to [top](#toc)\\]\n$$\\label{code_validation}$$\n\n\n\n# Step 11: Output this notebook to $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n$$\\label{latex_pdf_output}$$\n\nThe following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n[Tutorial-SEOBNR_Derivative_Routine.pdf](Tutorial-SEOBNR_Derivative_Routine.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)\n\n\n```python\nimport os,sys # Standard Python modules for multiplatform OS-level functions\nnrpy_dir_path = os.path.join(\"..\")\nif nrpy_dir_path not in sys.path:\n sys.path.append(nrpy_dir_path)\nimport cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\ncmd.output_Jupyter_notebook_to_LaTeXed_PDF(\"NRPyPN_shortcuts\",location_of_template_file=os.path.join(\"..\"))\n```\n", "meta": {"hexsha": "4c2e9ab1be49b0d4388cc05c86082ec4c747ad0d", "size": 78710, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "in_progress/Tutorial-SEOBNR_Derivative_Routine.ipynb", "max_stars_repo_name": "ksible/nrpytutorial", "max_stars_repo_head_hexsha": "4ca6e9da22def2a9c9bcbcad75847fd1db159f4b", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "in_progress/Tutorial-SEOBNR_Derivative_Routine.ipynb", "max_issues_repo_name": "ksible/nrpytutorial", "max_issues_repo_head_hexsha": "4ca6e9da22def2a9c9bcbcad75847fd1db159f4b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "in_progress/Tutorial-SEOBNR_Derivative_Routine.ipynb", "max_forks_repo_name": "ksible/nrpytutorial", "max_forks_repo_head_hexsha": "4ca6e9da22def2a9c9bcbcad75847fd1db159f4b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.2453480358, "max_line_length": 2135, "alphanum_fraction": 0.6005081946, "converted": true, "num_tokens": 20963, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.705785040214066, "lm_q2_score": 0.39233683016710835, "lm_q1q2_score": 0.2769054654569517}} {"text": "# \u53d8\u5206\u91cf\u5b50\u672c\u5f81\u6c42\u89e3\u5668\n\n Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. 
\n\n## Overview\n\nIt is widely believed that one of the most promising near-term applications of quantum computing is solving quantum chemistry problems [1-2]. The **variational quantum eigensolver** (VQE), one of the core applications in this direction, gives researchers the possibility of studying quantum chemistry on today's noisy intermediate-scale quantum (NISQ) devices [1-4]. Its central task is to find the ground-state energy, and the corresponding quantum state, of the Hamiltonian $\hat{H}$ of a closed physical system at the quantum scale. The main approach is to prepare a parameterized trial wavefunction $|\Psi(\boldsymbol\theta)\rangle$ on the quantum device and then, using a classical optimization algorithm from machine learning (for example, gradient descent), repeatedly adjust and optimize the parameters $\boldsymbol\theta$ so as to minimize the expectation value $\langle \Psi(\boldsymbol\theta)|\hat{H}|\Psi(\boldsymbol\theta)\rangle$. This scheme is based on the **Rayleigh-Ritz variational principle**. \n\n$$\nE_0 = \min_{\boldsymbol\theta} \langle \Psi(\boldsymbol\theta)|\hat{H}|\Psi(\boldsymbol\theta)\rangle.\n\tag{1}\n$$\n\nHere $E_0$ denotes the ground-state energy of the system. From the point of view of numerical analysis, the problem can be understood as finding the smallest eigenvalue $\lambda_{\min}$ of a **discretized** Hamiltonian $H$ (a Hermitian matrix) together with the corresponding eigenvector $|\Psi_0\rangle$. How this discretization is carried out by building a model belongs to the specialist domain of quantum chemistry; explaining it precisely would take far more space than this tutorial can afford. We sketch the relevant background in the next section, and interested readers may consult the textbook series *Quantum Chemistry: Basic Principles and Ab Initio Methods* [5]. Generally speaking, in order to treat a quantum chemistry problem on a quantum device, the Hamiltonian $H$ is represented as a weighted sum of Pauli operators $\{X,Y,Z\}$,\n\n$$\nH = \sum_k c_k ~ \bigg( \bigotimes_{j=0}^{M-1} \sigma_j^{(k)} \bigg),\n\tag{2}\n$$\n\nwhere $c_k$ are weight coefficients, $\sigma_j^{(k)} \in \{I,X,Y,Z\}$, and $M$ is the number of qubits required. A Hamiltonian written in this form is called a **Pauli string**. A concrete two-qubit example is\n\n$$\nH= 0.12~Y_0 \otimes I_1-0.04~X_0\otimes Z_1.\n\tag{3}\n$$\n\nIn the next section we fill in some background on the electronic structure problem, which is essentially the question of where the Hamiltonian $H$ above comes from. Readers who are familiar with this background, or who mainly care about how to implement VQE in Paddle Quantum, may jump straight to the third section, the concrete example of the ground state of the hydrogen molecule ($H_2$). \n\n## Background: the electronic structure problem\n\nIn this subsection we focus on a fundamental problem of quantum chemistry -- the **electronic structure problem**. More precisely, we are interested in the low-lying energy eigenstates of a given molecule. This information helps us predict, among other things, rates of chemical reactions and stable molecular structures [6]. Suppose a molecule consists of $N_n$ nuclei and $N_e$ electrons. In first quantization, the Hamiltonian operator $\hat{H}_{mol}$ describing the total energy of this molecular system can be written as\n\n$$\n\begin{align}\n\hat{H}_{\text{mol}} & = -\sum_{i}\frac{\nabla_{R_i}^2}{2M_i} - \sum_{i} \frac{\nabla_{r_i}^2}{2} -\sum_{i,j}\frac{Z_i}{\lvert R_i - r_j\rvert} + \sum_{i,j>i}\frac{Z_iZ_j}{\lvert R_i - R_j\rvert} + \sum_{i, j>i}\frac{1}{\lvert r_i - r_j\rvert}, \n\tag{4}\n\end{align}\n$$\n\nwhere $R_i$, $M_i$ and $Z_i$ denote the position, mass and atomic number (the number of protons in the nucleus) of the $i$-th nucleus, and $r_i$ denotes the position of the $i$-th electron. The first two terms on the right-hand side are the total kinetic energies of the nuclei and of the electrons. The third term is the attractive Coulomb interaction between the positively charged nuclei and the negatively charged electrons. The last two terms are the nucleus-nucleus and electron-electron repulsions. Here the molecular Hamiltonian $\hat{H}_\text{mol}$ is written in the atomic unit of energy, the **Hartree**, denoted Ha. One Hartree equals $[\hbar^2/(m_ee^2a_0^2)] = 27.2$ electron volts, or 630 kcal/mol, where $m_e$, $e$ and $a_0$ denote the electron mass, the elementary charge and the Bohr radius, respectively.\n\n**Note 1:** In this picture we do not consider spin-orbit coupling or hyperfine structure. If needed for a calculation, they can be included as perturbations.\n\n### The Born-Oppenheimer approximation\n\nSince nuclei are far heavier than electrons, the electrons move much faster than the nuclei under the same interaction. It is therefore a reasonable approximation to regard the nuclear positions as fixed, $R_i = $ const. This idea of decoupling the electronic and nuclear motion on their respective time scales is called the Born-Oppenheimer approximation. As a direct consequence, the nuclear kinetic-energy term in Eq. (4) drops out, and the nucleus-nucleus repulsion term can be treated as a constant energy shift (it does not depend on the electron positions $r_i$) and can therefore be neglected as well. After these steps, the Hamiltonian is approximated by\n\n$$\n\begin{align}\n\hat{H}_{\text{electron}} & = - \sum_{i} \frac{\nabla_{r_i}^2}{2} -\sum_{i,j}\frac{Z_i}{\lvert R_i - r_j\rvert} + \sum_{i, j>i}\frac{1}{\lvert r_i - r_j\rvert}. \n\tag{5}\n\end{align}\n$$\n\nWith this approximation, the energy levels of the many-electron structure of the molecule can in principle be obtained by solving the time-independent Schr\u00f6dinger equation\n\n$$\n\hat{H}_{\text{electron}} |\Psi_n \rangle = E_n |\Psi_n \rangle,\n\tag{6}\n$$\n\nwhere $n$ labels the energy level. Note that the number of electron-electron repulsion terms in the electronic Hamiltonian grows with the number of electrons $N_e$ as $N_e(N_e-1)/2$. This means that for an oxygen molecule ($O_2$) with 16 electrons we already have to evaluate as many as 120 repulsion terms. In general, such problems cannot be solved exactly by analytic means. As Dirac pointed out in [Quantum mechanics of many-electron systems](https://royalsocietypublishing.org/doi/10.1098/rspa.1929.0094) [7],\n\n> *The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.* \n> \n> -- Paul Dirac (1929)\n\nSince the analytic approach is too complicated to be practical, we can turn to numerical methods instead. The simplest numerical (discretization) approach is to discretize the infinite-dimensional Hilbert space appearing above onto a cubic lattice of equally spaced grid points. In such a discretized space, the main rules of computation are those of linear algebra over the complex numbers. Suppose each spatial axis is discretized into $k$ equally spaced points; then the many-body wavefunction of $N$ electrons (dropping the subscript $e$ for convenience) can be written as [2]\n\n$$\n|\Psi \rangle = \sum_{\mathbf{x_1}, \ldots, \mathbf{x_N}} \psi(\mathbf{x_1}, \ldots, \mathbf{x_N}) \mathcal{A}(|\mathbf{x_1}, \ldots, \mathbf{x_N}\rangle).\n\tag{7}\n$$\n\nHere the coordinate $|\mathbf{x_j}\rangle = |r_j\rangle |\sigma_j\rangle$ records the spatial position and the spin of the $j$-th electron, with $|r_j\rangle = |x_j,y_j,z_j\rangle$, $j\in \{1,2,\cdots,N\}$, $x_j,y_j,z_j \in \{0,1,\cdots,k-1\}$, and $\sigma_j \in \{\downarrow,\uparrow\}$ denoting spin down and spin up. Storing the wavefunction in this discretization requires $k^{3N}\times 2^{N}$ numbers in total. Here $\mathcal{A}$ denotes antisymmetrization (required by the Pauli exclusion principle) and $\psi(\mathbf{x_1}, \mathbf{x_2}, \ldots, \mathbf{x_N})=\langle\mathbf{x_1}, \mathbf{x_2}, \ldots, \mathbf{x_N}|\Psi\rangle$. It follows that the memory a classical computer needs to store such a wavefunction grows exponentially with the number of electrons, which is why classical numerical methods based on this discretization cannot simulate systems with more than a few tens of electrons. Could we instead use a quantum device to store and prepare such a wavefunction and then solve for the ground-state energy $E_0$? In the next section we explain the VQE algorithm using the simplest molecular system -- the hydrogen molecule ($H_2$) -- as an example.
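\n\nAs a rough, purely illustrative aside (not part of the original workflow of this tutorial), the following few lines make the $k^{3N}\times 2^{N}$ growth of this grid representation explicit:\n\n```python\n# Count the complex amplitudes k**(3*N) * 2**N needed by the grid\n# discretization above for a few electron numbers N, using a modest\n# grid of k points per spatial axis (illustrative choice).\nk = 10\nfor N in [1, 2, 5, 10]:\n    amplitudes = k**(3 * N) * 2**N\n    print(f"N = {N:2d} electrons -> {amplitudes:.3e} complex amplitudes")\n```\n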
\n**Note 2:** A survey of quantum chemistry and of existing numerical methods is likewise beyond the scope of this tutorial. We recommend that interested readers consult the classic textbooks *'Molecular Electronic-Structure Theory'* by Helgaker et al. [6] and *'Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory'* by Szabo & Ostlund [8]. To help bridge the gap between quantum computing and quantum chemistry, see also the review articles [Quantum chemistry in the age of quantum computing](https://pubs.acs.org/doi/10.1021/acs.chemrev.8b00803) [1] and [Quantum computational chemistry](https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.92.015003) [2].\n\n**Note 3:** For energy calculations in quantum chemistry, the target is **chemical accuracy**, $1.6\times10^{-3}$ Ha or 1 kcal/mol.\n\n\n\n## Ground-state energy of the hydrogen molecule $H_2$\n\n### Building the electronic Hamiltonian\n\nFirst, let us import the necessary libraries and packages with the following lines of code.\n\n\n```python\nimport os\nimport platform\nimport matplotlib.pyplot as plt\nfrom IPython.display import clear_output\n\nimport numpy\nfrom numpy import concatenate\nfrom numpy import pi as PI\nfrom numpy import savez, zeros\n\nimport paddle\nfrom paddle_quantum.circuit import UAnsatz\nfrom paddle_quantum.utils import pauli_str_to_matrix\nfrom paddle_quantum.VQE.chemistrysub import H2_generator\n```\n\nFor the molecule we want to analyze, we need several pieces of information -- its **geometry**, **basis set** (for example STO-3G, built from Gaussian functions), **multiplicity**, and the net **charge** of the molecule -- in order to model the system and obtain the Hamiltonian that describes it. Concretely, our built-in quantum chemistry toolkit uses a fermion-to-qubit mapping to output the qubit-Hamiltonian representation (a Pauli string) of the target molecule.\n\n\n```python\nHamiltonian, N = H2_generator()\n```\n\nFor more advanced users, we provide here a short tutorial on generating the hydrogen-molecule (H2) Hamiltonian directly. First install the following two packages (**Mac/Linux users only; Windows is not supported for the moment**):\n\n\n```python\n!pip install openfermion\nclear_output()\n```\n\n\n```python\n!pip install openfermionpyscf\nclear_output()\n```\n\n\n```python\n# operating system information\nsysStr = platform.system()\n\n# check the operating system\nif sysStr in ('Linux', 'Darwin'):\n\n    import openfermion\n    import openfermionpyscf\n\n    # please check that the geometry file for h2 has been downloaded correctly\n    geometry = 'h2.xyz'\n    # geometry = [('H', (0.0, 0.0, 0.0)), ('H', (0.0, 0.0, 0.74))]\n    basis = 'sto-3g'\n    charge = 0\n    multiplicity = 1\n\n    # generate the Hamiltonian\n    molecular_hamiltonian = openfermionpyscf.generate_molecular_hamiltonian(geometry, basis, multiplicity, charge)\n    qubit_op = openfermion.transforms.jordan_wigner(molecular_hamiltonian)\n\n    # print the result\n    print(\"The generated h2 Hamiltonian is \\n\", qubit_op)\n```\n\n
 The generated h2 Hamiltonian is \n -0.042078976477822494 [] +\n -0.04475014401535163 [X0 X1 Y2 Y3] +\n 0.04475014401535163 [X0 Y1 Y2 X3] +\n 0.04475014401535163 [Y0 X1 X2 Y3] +\n -0.04475014401535163 [Y0 Y1 X2 X3] +\n 0.17771287465139946 [Z0] +\n 0.17059738328801055 [Z0 Z1] +\n 0.12293305056183797 [Z0 Z2] +\n 0.1676831945771896 [Z0 Z3] +\n 0.17771287465139946 [Z1] +\n 0.1676831945771896 [Z1 Z2] +\n 0.12293305056183797 [Z1 Z3] +\n -0.24274280513140462 [Z2] +\n 0.1762764080431959 [Z2 Z3] +\n -0.24274280513140462 [Z3]\n\n\n**Note 4:** In the geometry used to generate this Hamiltonian, the interatomic distance between the two hydrogen atoms is $d = 74$ pm.\n\nBesides the hydrogen molecule (H2), we also provide the geometry file `hf.xyz` for the hydrogen fluoride (HF) molecule. If you want to test the geometries of more molecules, please visit this [database](http://smart.sns.it/molecules/index.html). In addition, we need to convert these Hamiltonians, which are essentially sums of Pauli operators, into the data format supported by Paddle Quantum; we provide the following interface for this.\n\n\n```python\ndef Hamiltonian_str_convert(qubit_op):\n    '''\n    Convert the Hamiltonian information generated above into the Pauli-string\n    format supported by Paddle Quantum, e.g.\n    H = [[1.0, \"z0,x1\"], [-1.0, \"y0,z1\"], ...]\n    '''\n    info_dic = qubit_op.terms\n    \n    def process_tuple(tup):\n        if len(tup) == 0:\n            return 'i0'\n        else:\n            res = ''\n            for ele in tup:\n                res += ele[1].lower()\n                res += str(ele[0])\n                res += ','\n            return res[:-1]\n    H_info = []\n    \n    for key, value in qubit_op.terms.items():\n        H_info.append([value.real, process_tuple(key)])\n    \n    return H_info\n\nif sysStr in ('Linux', 'Darwin'):\n    Hamiltonian = Hamiltonian_str_convert(qubit_op)\n```\n
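\nAs a quick, self-contained illustration of this list format (an added aside; it only uses `pauli_str_to_matrix` and `numpy`, both imported above), the two-qubit toy Hamiltonian of Eq. (3) can be written in the same format and diagonalized exactly:\n\n```python\n# Toy check: Eq. (3) in the Pauli-string list format used by Paddle Quantum,\n# and its exact ground-state energy from diagonalizing the 4 x 4 matrix.\nH_toy = [[0.12, 'y0'], [-0.04, 'x0,z1']]\nH_toy_matrix = pauli_str_to_matrix(H_toy, 2)\nprint(numpy.linalg.eigvalsh(H_toy_matrix)[0])  # approximately -0.1265\n```\n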
\n### Building the quantum neural network (QNN) and the trial wavefunction\n\nTo implement VQE, we first need to design a quantum neural network (QNN, which can also be understood as a parameterized quantum circuit) to prepare the trial wavefunction $|\Psi(\boldsymbol\theta)\rangle$. Here we provide a preset 4-qubit quantum circuit template of depth $D$; one layer is enclosed by the dashed box in the figure:\n\n\n\n- We preset some parameters of this parameterized circuit, for example the width $N = 4$ qubits.\n\n- We initialize its variational parameters; ${\bf{\theta }}$ denotes the vector formed by the parameters of our quantum neural network.\n\nNext, following the circuit design in the figure above, we build the quantum neural network efficiently with Paddle Quantum's `UAnsatz` class and the built-in `real_entangled_layer(theta, D)` circuit template. \n\n\n```python\ndef U_theta(theta, Hamiltonian, N, D):\n    \"\"\"\n    Quantum Neural Network\n    \"\"\"\n    \n    # initialize the QNN according to the number of qubits / network width\n    cir = UAnsatz(N)\n    \n    # built-in {R_y + CNOT} circuit template\n    cir.real_entangled_layer(theta[:D], D)\n    \n    # add a final column of R_y rotation gates\n    for i in range(N):\n        cir.ry(theta=theta[D][i][0], which_qubit=i)\n    \n    # let the QNN act on the default initial state |0000>\n    cir.run_state_vector()\n    \n    # compute the expectation value of the given Hamiltonian\n    expectation_val = cir.expecval(Hamiltonian)\n\n    return expectation_val, cir\n```\n\n### Configuring the training model: loss function\n\nNow that we have the data and the architecture of the quantum neural network, we further define the training parameters, the model and the loss function. Applying the quantum neural network $U(\theta)$ to the initial state $|0..0\rangle$ yields the output state $\left| {\psi \left( {\bf{\theta }} \right)} \right\rangle $. The loss function of the VQE model is then given by the expectation value of the Hamiltonian $H$ in the state $\left| {\psi \left( {\bf{\theta }} \right)} \right\rangle$ (the energy expectation value),\n\n$$\n\min_{\boldsymbol\theta} \mathcal{L}(\boldsymbol \theta) = \min_{\boldsymbol\theta} \langle \Psi(\boldsymbol\theta)|H |\Psi(\boldsymbol\theta)\rangle\n= \min_{\boldsymbol\theta} \sum_k c_k~\langle \Psi(\boldsymbol\theta)| \bigotimes_j \sigma_j^{(k)}|\Psi(\boldsymbol\theta)\rangle.\n\tag{8}\n$$\n\n\n```python\nclass StateNet(paddle.nn.Layer):\n    \"\"\"\n    Construct the model net\n    \"\"\"\n\n    def __init__(self, shape, dtype=\"float64\"):\n        super(StateNet, self).__init__()\n        \n        # initialize the list of theta parameters, filled with initial values\n        # drawn from the uniform distribution on [0, 2*pi]\n        self.theta = self.create_parameter(shape=shape, \n                                           default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2*PI),\n                                           dtype=dtype, is_bias=False)\n    \n    # define the loss function and the forward-pass mechanism\n    def forward(self, N, D):\n        \n        # compute the loss function / expectation value\n        loss, cir = U_theta(self.theta, Hamiltonian, N, D)\n\n        return loss, cir\n```\n\n### Configuring the training model: model parameters\n\nBefore training the quantum neural network, we also need to set some training hyperparameters, mainly the learning rate (LR), the number of iterations (ITR), and the depth (D) of the repeated computation block in the quantum neural network. Here we set the learning rate to 0.4 and the number of iterations to 80. Readers are encouraged to adjust them to get a feel for how the hyperparameters affect the training.\n\n\n```python\nITR = 80  # total number of training iterations\nLR = 0.4  # learning rate\nD = 2     # depth of the repeated computation block in the QNN\n```\n\n### Training\n\nOnce all the parameters of the training model have been set, we convert the data into Paddle tensors and train the quantum neural network. We use the Adam optimizer here; other optimizers provided by Paddle can also be used. The results of the training process are stored in the summary_data file.\n\n\n```python\n# determine the parameter dimensions of the network\nnet = StateNet(shape=[D + 1, N, 1])\n\n# generally speaking, we use the Adam optimizer to obtain relatively good convergence;\n# of course you can switch to SGD or RMS prop.\nopt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())\n\n# record the optimization results\nsummary_iter, summary_loss = [], []\n\n# optimization loop\nfor itr in range(1, ITR + 1):\n\n    # forward pass to compute the loss function\n    loss, cir = net(N, D)\n\n    # back-propagate and minimize the loss function under the dynamic-graph mechanism\n    loss.backward()\n    opt.minimize(loss)\n    opt.clear_grad()\n\n    # update the optimization record\n    summary_loss.append(loss.numpy())\n    summary_iter.append(itr)\n\n    # print intermediate results\n    if itr % 20 == 0:\n        print(\"iter:\", itr, \"loss:\", \"%.4f\" % loss.numpy())\n        print(\"iter:\", itr, \"Ground state energy:\", \"%.4f Ha\" \n              % loss.numpy())\n    if itr == ITR:\n        print(\"\\nThe trained circuit:\") \n        print(cir)\n\n# save the training results to the output folder\nos.makedirs(\"output\", exist_ok=True)\nsavez(\"./output/summary_data\", iter = summary_iter, \n      energy=summary_loss)\n```\n\n iter: 20 loss: -0.9930\n iter: 20 Ground state energy: -0.9930 Ha\n iter: 40 loss: -1.1221\n iter: 40 Ground state energy: -1.1221 Ha\n iter: 60 loss: -1.1333\n iter: 60 Ground state energy: -1.1333 Ha\n iter: 80 loss: -1.1359\n iter: 80 Ground state energy: -1.1359 Ha\n \n The trained circuit:\n --Ry(4.717)----*--------------X----Ry(4.718)----*--------------X----Ry(-0.02)--\n                |              |                 |              |               \n --Ry(4.733)----X----*---------|----Ry(4.486)----X----*---------|----Ry(4.828)--\n                     |         |                      |         |               \n --Ry(-3.25)---------X----*----|----Ry(4.729)---------X----*----|----Ry(-0.01)--\n                          |    |                           |    |               \n --Ry(-1.55)--------------X----*----Ry(4.704)--------------X----*----Ry(3.094)--\n \n\n\n### Benchmarking the result\nWe have now finished training the quantum neural network. The estimate of the ground-state energy obtained with VQE is approximately $E_0 \approx -1.136$ Ha, which agrees with the value $E_0 = -1.13618$ Ha obtained from full configuration interaction (FCI) to within chemical accuracy $\varepsilon = 1.6 \times 10^{-3}$ Ha.\n\n\n```python\nresult = numpy.load('./output/summary_data.npz')\n\neig_val, eig_state = numpy.linalg.eig(\n                     pauli_str_to_matrix(Hamiltonian, N))\nmin_eig_H = numpy.min(eig_val.real)\nmin_loss = numpy.ones([len(result['iter'])]) * min_eig_H\n\nplt.figure(1)\nfunc1, = plt.plot(result['iter'], result['energy'], \n                  alpha=0.7, marker='', linestyle=\"-\", color='r')\nfunc_min, = plt.plot(result['iter'], min_loss, \n                     alpha=0.7, marker='', linestyle=\":\", color='b')\nplt.xlabel('Number of iteration')\nplt.ylabel('Energy (Ha)')\n\nplt.legend(handles=[\n    func1,\n    func_min\n],\n    labels=[\n        r'$\\left\\langle {\\psi \\left( {\\theta } \\right)} '\n        r'\\right|H\\left| {\\psi \\left( {\\theta } \\right)} \\right\\rangle $',\n        'Ground-state energy',\n    ], loc='best')\n\n#plt.savefig(\"vqe.png\", bbox_inches='tight', dpi=300)\nplt.show()\n```\n\n## Determining the interatomic distance with VQE\n\nRemember the note above saying that, by default, the interatomic distance between the two hydrogen atoms is $74$ pm? Another use of VQE is to run the optimization repeatedly at different interatomic distances and observe at which distance the minimum of the resulting energies occurs; that distance is the estimate of the true bond length.\n\n\n\nAs can be seen from this procedure, the minimum indeed occurs near $d = 74$ pm (1 pm = $1\times 10^{-12}$ m), in agreement with the [experimentally measured value](https://cccbdb.nist.gov/exp2x.asp?casno=1333740&charge=0), $d_{exp} (H_2) = 74.14$ pm.
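\n\nFor completeness, here is a minimal sketch of how such a bond-length scan could be assembled from the pieces defined earlier in this notebook (Mac/Linux only, since it reuses the openfermionpyscf calls above). The helper `run_vqe` is hypothetical shorthand for the `StateNet` training loop shown in the Training section; it is assumed to optimize the circuit parameters for a given Hamiltonian and return the final energy estimate.\n\n```python\n# Sketch of a bond-length scan. run_vqe(H, N) is a hypothetical wrapper\n# around the training loop above that returns the final estimated energy.\ndistances = [0.5, 0.6, 0.7, 0.74, 0.8, 0.9, 1.0]  # in angstrom\nenergies = []\nfor d in distances:\n    geometry = [('H', (0.0, 0.0, 0.0)), ('H', (0.0, 0.0, d))]\n    molecular_hamiltonian = openfermionpyscf.generate_molecular_hamiltonian(geometry, 'sto-3g', 1, 0)\n    qubit_op = openfermion.transforms.jordan_wigner(molecular_hamiltonian)\n    H_d = Hamiltonian_str_convert(qubit_op)\n    energies.append(run_vqe(H_d, N=4))\n\nbest_d = distances[int(numpy.argmin(energies))]\nprint(\"Estimated equilibrium bond length:\", best_d, \"angstrom\")\n```\n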
\n\n_______\n\n## References\n\n[1] Cao, Yudong, et al. Quantum Chemistry in the Age of Quantum Computing. [Chemical reviews 119.19 (2019): 10856-10915.](https://pubs.acs.org/doi/10.1021/acs.chemrev.8b00803)\n\n[2] McArdle, Sam, et al. Quantum computational chemistry. [Reviews of Modern Physics 92.1 (2020): 015003.](https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.92.015003)\n\n[3] Peruzzo, A. et al. A variational eigenvalue solver on a photonic quantum processor. [Nat. Commun. 5, 4213 (2014).](https://www.nature.com/articles/ncomms5213)\n\n[4] Moll, Nikolaj, et al. Quantum optimization using variational algorithms on near-term quantum devices. [Quantum Science and Technology 3.3 (2018): 030503.](https://iopscience.iop.org/article/10.1088/2058-9565/aab822)\n\n[5] Xu Guangxian, Li Lemin, and Wang Demin. Quantum Chemistry: Basic Principles and Ab Initio Methods, Vol. I [M], 2nd ed. Beijing: Science Press, 2012.\n\n[6] Helgaker, Trygve, Poul Jorgensen, and Jeppe Olsen. Molecular electronic-structure theory. John Wiley & Sons, 2014.\n\n[7] Dirac, Paul Adrien Maurice. Quantum mechanics of many-electron systems. [Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character 123.792 (1929): 714-733.](https://royalsocietypublishing.org/doi/10.1098/rspa.1929.0094)\n\n[8] Szabo, Attila, and Neil S. Ostlund. Modern quantum chemistry: introduction to advanced electronic structure theory. 
Courier Corporation, 2012.\n", "meta": {"hexsha": "9932ae1422945ce8944286ddced2bb2de00ac44d", "size": 40371, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorial/quantum_simulation/VQE_CN.ipynb", "max_stars_repo_name": "rickyHong/Quantum-1", "max_stars_repo_head_hexsha": "73973c92d540b9e83b50c66860537aef0590a1a5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tutorial/quantum_simulation/VQE_CN.ipynb", "max_issues_repo_name": "rickyHong/Quantum-1", "max_issues_repo_head_hexsha": "73973c92d540b9e83b50c66860537aef0590a1a5", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tutorial/quantum_simulation/VQE_CN.ipynb", "max_forks_repo_name": "rickyHong/Quantum-1", "max_forks_repo_head_hexsha": "73973c92d540b9e83b50c66860537aef0590a1a5", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.8238897397, "max_line_length": 17072, "alphanum_fraction": 0.7233410121, "converted": true, "num_tokens": 7631, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.519521321952093, "lm_q2_score": 0.5312093733737563, "lm_q1q2_score": 0.27597459588847684}} {"text": "# _*Quantum chemistry with Qiskit `Terra`, `Aqua` and CQC's `t|ket\u3009` compiler*_ \n\nIn this tutorial, we discuss how to use IBM's Qiskit `Terra` and `Aqua` packages, and the `t|ket\u3009` compiler by Cambridge Quantum Computing (CQC), to calculate the excited state energies of simple molecules using the quantum subspace expansion (QSE) technique. By the end of this tutorial, you will\n\n* Understand why optimizing circuits for quantum chemistry is necessary\n* Learn about the quantum subspace expansion technique for computing excited state energies\n* See how Qiskit `Terra` enables 3rd-party passes in its transpiler architecture\n* Use `pytket` to perform native circuit optimization and circuit routing\n\nNOTE: Throughout this tutorial, we assume the reader has some familiarity with quantum chemistry methods, though an extensive knowledge is not necessary.\n\n**Code setup**\n\nThis tutorial makes use of Qiskit `Terra` and `Aqua`, as well as `pytket` (the Python interface to `t|ket\u3009`). To install `Terra` and `Aqua`, follow instructions available [here](https://qiskit.org/). To install `pytket`, follow the instructions at [this Github repository](https://github.com/CQCL/pytket).\n\n## Quantum chemistry in the NISQ era, and the need for circuit optimization techniques\n\nOne of the expected applications of noisy, intemediate-scale quantum (NISQ) devices is simulating quantum chemistry. A simple quantum chemistry question is \"Given how the electrons and nuclei of a particular molecule are arranged, what is the _energy_ of that configuration?\". By examining how the energy of the configuration changes as the nuclei and electrons are moved relative to one another, we can map out an _energy surface_. The configuration that minimizes the energy is a stable, equilibrium point for the molecule. 
Knowing this configuration (and its associated energy), we can deduce a variety of molecular properties, such as reaction rates.\n\nMost techniques for computing molecular energies on near term quantum devices rely on the [variational quantum eigensolver (VQE) algorithm](https://doi.org/10.1038/ncomms5213). This algorithm uses a parameterized _ansatz_ $|{\\psi}(\\boldsymbol{\\theta})\\rangle$ to describe the ground state energy of the Hamiltonian $H$ for a given configuration. By examining how the expected energy $\\langle \\psi(\\boldsymbol{\\theta})|H| \\psi(\\boldsymbol{\\theta})\\rangle/\\langle \\psi(\\boldsymbol{\\theta})|\\psi(\\boldsymbol{\\theta})\\rangle$ changes as $\\boldsymbol{\\theta}$ is varied, we can optimize the parameters to find an estimate for the ground state and its corresponding energy.\n\nRunning the VQE algorithm on actual hardware has two complications:\n\n* Efficiently preparing the trial state $|\\psi(\\boldsymbol{\\theta})\\rangle$.\n\n* Efficiently measuring the terms in the Hamiltonian $H$ that describe the configuration.\n\nIn this tutorial, we'll focus on the problem of efficiently preparing the trial state. We do so for two reasons:\n\n* An accurate estimate of the ground state is all that's necessary to do the quantum subspace expansion technique to estimate excited state energies\n\n* To demonstrate the capabilies of Qiskit `Terra` and CQC's `t|ket\u3009` compiler to reduce the circuit _resources_ (number of gates, depth, etc.) for the state preparation. In principle, these capabilities can be deployed for other problems.\n\nReducing the resources is crucial in the NISQ era, because the noise present in NISQ devices dominates more for larger circuits (with more operations) and reduces the accuracy of the results. Therefore techniques which reduce circuit requirements can significantly improve the results on real hardware. Note that there are techniques, such as [qubit tapering](https://arxiv.org/abs/1701.08213), or ['gate-efficient' circuits](https://arxiv.org/abs/1809.05057), which are useful for constructing a lower-resource circuit that could be run on _ideal_ hardware. In practice, even these \"resource-efficient\" circuits can be further optimized, especially when the imperfections and constraints of a real piece of hardware are taken into account.\n\nAt a high level, our approach utilizes Qiskit `Aqua` to generate a parameterized ansatz based on the [_Unitary Coupled Cluster, Single-Double_ (UCCSD) ansatz](https://en.wikipedia.org/wiki/Coupled_cluster). We then use Qiskit `Terra` and the `t|ket\u3009` compiler (as implemented in `pytket`) to optimize the circuit for that ansatz. We will show that the optimized ansatz requires substantially fewer circuit resources than the naive UCCSD ansatz. An application of our approach, we show how to use the quantum subspace expansion (QSE) technique to compute excited state energies for gaseous hydrogen (H$_{2}$) and lithium hydride (LiH).\n\n## Computing excited state energies using the Quantum Subspace Expansion (QSE) technique\n\nThe VQE algorithm is often used to compute the _ground state_ energy of a given molecular configuration. However, knowing the energies of _excited states_ of the system is also useful. The energies of electronic excited states prove, in general, challenging to compute. Excited states are usually more _entangled_ than ground states and so require more computational resources to compute. 
While there exist techniques for efficiently representing certain classes of entangled states, in general, the complexity of the simulation will be non-trivial. This is a problem, as it makes classically computing the energies of many molecules to an appropriate accuracy impossible.

This notebook demonstrates calculation of excited states of molecules using the [quantum subspace expansion (QSE) technique](https://doi.org/10.1103/PhysRevA.95.042308). The QSE technique uses an accurate estimate of the ground state energy of a given molecular configuration to estimate the energies of excited states. Consider a fixed molecule (represented by a given Hamiltonian $H$), and suppose $\left|\Psi_{0}\right\rangle$ is the output of the VQE algorithm for estimating the ground state energy of $H$.

The QSE technique constructs a subspace of state vectors $\left|\Psi_j^k\right\rangle$ formed by one-electron excitations of the ground state wavefunction:

\begin{equation}
\left|\Psi_{j}^{k}\right\rangle = c_k^{\dagger}c_{j}\left|\Psi_0\right\rangle,
\end{equation}

where $c_k^{\dagger}, c_{j}$ are the fermionic creation and annihilation operators over spin orbitals $k$ and $j$, respectively. That is, these vectors are formed by reducing the occupation of spin orbital $j$ by one, and increasing the occupation of spin orbital $k$ by one. The vectors are not, in general, orthogonal to $\Psi_{0}$; hence we will need to calculate an overlap matrix.

Within this subspace, we solve a generalized eigenvalue problem. Consider the operator $H'$ with matrix elements given by

$$(H')_{jk}^{lm} = \langle\Psi_j^l \left| H \right| \Psi_k^m\rangle,$$

and define an overlap matrix $S$ whose matrix elements are given by

$$S_{jk}^{lm} = \langle \Psi_j^l \left|\Psi_k^m\right\rangle.$$

The generalized eigenvalue equation to be solved is

\begin{equation}
H'C=SCE,
\end{equation}

where $C$ is the matrix of eigenvectors, and $E$ is the vector of eigenvalues. Crucially, _the energy eigenvalues $E$ provide an estimate of the excited state energies of $H$_ as well as a refined value of the ground state energy.

Notice that the solution to the generalized eigenvalue equation can be done on a classical computer, provided $H'$ and $S$ have been calculated. The matrix elements of both of these matrices can be constructed using a quantum computer, in the following way. First, re-write the matrix elements in terms of $\left | \Psi_{0}\right \rangle$:

\begin{align}
(H')_{jk}^{lm} &= \langle\Psi_j^l \left| H \right| \Psi_k^m\rangle = \langle \Psi_{0} | c_{j}^\dagger c_{l} H c_{m}^{\dagger}c_{k}|\Psi_{0}\rangle\\
S_{jk}^{lm} &= \langle \Psi_j^l \left|\Psi_k^m\right\rangle = \langle \Psi_{0} | c_{j}^\dagger c_{l} c_{m}^{\dagger}c_{k}|\Psi_{0}\rangle.
\end{align}

The matrix elements can be calculated using a quantum computer or simulator. How? By transforming the operators $c_{j}^\dagger c_{l} c_{m}^{\dagger}c_{k}$ and $c_{j}^\dagger c_{l} H c_{m}^{\dagger}c_{k}$ into a set of Pauli quantum gates according to an appropriate scheme such as Jordan-Wigner or Bravyi-Kitaev, applying this gate set to the ground state wavefunction (constructed with the coefficients obtained from the VQE calculation), and estimating the resulting expectation values from measurements.
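To make this last step concrete, here is a purely illustrative `numpy` sketch of evaluating a Pauli-string expectation value against an explicitly known statevector. The two-qubit operator and state used below are arbitrary toy choices (they are not produced by this notebook's code), and on real hardware the same quantity would be estimated from measurement statistics rather than from the exact statevector.

```python
import numpy as np

# Toy example only: exact expectation value <psi|O|psi> of a two-qubit Pauli string.
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

O = np.kron(Z, X)                      # illustrative operator: the Pauli string Z (x) X

psi = np.array([1.0, 0.5, 0.0, 0.25])  # illustrative two-qubit state (arbitrary amplitudes)
psi = psi / np.linalg.norm(psi)        # normalize

expectation = np.real(np.vdot(psi, O @ psi))
print(expectation)
```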
By measuring these expectation values, we obtain estimates of $S_{jk}^{lm}$ and $(H')_{jk}^{lm}$, respectively.\n\nTherefore, to use the QSE technique, we need to use the quantum computer to calculate 2 quantities:\n\n* An estimate of the ground state $\\left |\\Psi_{0}\\right\\rangle$ (by running the VQE algorithm)\n* The matrix elements of $H'$ and $S$.\n\nFor the first quantity, we want to have an efficient representation of the trial state $|\\psi(\\boldsymbol{\\theta})\\rangle$. And for the second, we need to be able to take the estimate of the ground state and efficiently compute matrix elements. In both of these cases, being able to optimize the circuit for preparing would be useful. Thankfully, Qiskit `Terra`, in conjunction with `t|ket\u3009`, provides us tools for doing so. We discuss the problem of _circuit compilation/optimization_ in the next section.\n\n## Circuit compilation with Qiskit `Terra` and `t|ket\u3009`\n\nIn the remainder of this tutorial, we show how to use Qiskit `Terra` and `t|ket\u3009` to optimize the circuits for preparing VQE trial states, and simulate the QSE technique.\n\n## Application I: Reducing circuit resources for trial state preparation\n\nIn this tutorial, we'll focus on two simple molecules: hydrogen (H$_{2}$) and lithium hydride (LiH). NOTE: the code is much slower for LiH, which is why here, we'll demonstrate H$_{2}$.\n\n\nAs a first application of the pipelines provided by `Terra`, `Aqua`, and `t|ket\u3009`, we first show how to configure a VQE experiment using `Aqua`, and then use `Terra` and `t|ket\u3009` to compile the trial state preparation circuit.\n\nLet's start by importing the necessary packages.\n\n\n```python\n# numpy, for random number generation\nimport numpy as np\n\n# Qiskit, for transpiler-related functions, the IBMQ provider, and the Aer simulator\nfrom qiskit import IBMQ, Aer, QuantumRegister\nfrom qiskit.transpiler import transpile, transpile_dag, PassManager\nfrom qiskit.converters import circuit_to_dag, dag_to_circuit\n\n# pytket, for optimization\nimport pytket\nfrom pytket.qiskit import TketPass\n\n# Qiskit Aqua, for chemistry\nfrom qiskit.chemistry.drivers import PySCFDriver, UnitsType\nfrom qiskit.chemistry import FermionicOperator\nfrom qiskit.chemistry.aqua_extensions.components.initial_states import HartreeFock\nfrom qiskit.chemistry.aqua_extensions.components.variational_forms import UCCSD\n```\n\n### Step 0: Enable IBMQ account\n\n\n```python\nprovider = IBMQ.load_account()\n```\n\n### Step 1: setting up the molecule\n\nWe choose the basis set spanning the molecular wavefunction, the molecular geometry, the chemical identity of each atom, the charge and spin quantum number.\n\nTo calculate results for LiH (slower), just comment out the H$_{2}$ string and replace it with the LiH one.\n\nNOTE: Here, we focus only on one particular value for the `bond_length`. If you wanted to replicate the final plot for the excited state energies of LiH as a function of bond length, you'd need to sweep `bond_length` over several values.\n\n\n```python\n# Choose a particular bond length\n# NOTE: Units are in Angstroms\n\nbond_length = 0.7\n\n# Set up molecule\n\n# base_molecule_str = 'Li .0 .0 .0; H .0 .0 {}'\nbase_molecule_str = 'H .0 .0 .0; H .0 .0 {}'\n\n# Specify other molecular properties\ncharge = 0\nspin = 0\nbasis = 'sto3g'\n```\n\nHaving set up the molecule, we now execute our classical chemistry driver to obtain the integrals that define the terms in the molecule's Hamiltonian. 
In this case, we choose `PYSCF` as our driver, so make sure you have that installed.\n\n\n\n```python\n# Using driver to get fermionic Hamiltonian\n# PySCF example\n\ndriver = PySCFDriver(atom=base_molecule_str.format(bond_length),\n unit=UnitsType.ANGSTROM,\n charge=charge,\n spin=spin,\n basis=basis)\n\nmolecule = driver.run()\n\nprint(\"Molecular repulsion energy: \", molecule.nuclear_repulsion_energy)\n```\n\n Molecular repulsion energy: 0.7559674441714287\n\n\nThe molecular repulsion energy calculated by the driver corresponds to the coulombic repulsion between the nuclei in the molecule, and we can add it to our electron structure calculation at the end to get the total energy.\n\n### Step 2: Set up a variational form for VQE\n\nTo run the VQE algorithm, we need to specify a mapping from the molecular Hamiltonian to qubits, an initial state, and the ansatz we use for the trial state. Here, we use the Jordan-Wigner transform to map the molecular Hamiltonian onto qubits (Pauli operators). We choose the initial state to be a Hartree-Fock state, and we take the variational ansatz to be the unitary coupled cluster with single and double excitations (UCCSD).\n\n\n```python\nn_qubits = molecule.one_body_integrals.shape[0]\nn_electrons = molecule.num_alpha + molecule.num_beta - molecule.molecular_charge\n\n# get fermionic operator and mapping to qubit operator\nferOp = FermionicOperator(h1=molecule.one_body_integrals, h2=molecule.two_body_integrals)\n\nqubitOp = ferOp.mapping(map_type='JORDAN_WIGNER', threshold=0.00000001)\nqubitOp.chop(10**-10)\n\n# Instantiate the initial state as a Hartree-Fock state\ninitial_hf = HartreeFock(num_qubits=n_qubits, num_orbitals=n_qubits, \n qubit_mapping='jordan_wigner', two_qubit_reduction=False, num_particles= n_electrons)\n\n# Create the variational form\nvar_form = UCCSD(num_qubits=n_qubits, num_orbitals=n_qubits, \n num_particles=n_electrons, depth=1, initial_state=initial_hf, qubit_mapping='jordan_wigner')\n\n# How many qubits do we need?\nprint('Number of qubits: {0}'.format(n_qubits))\n```\n\n Number of qubits: 4\n\n\n### Step 3: Transpile circuit and examine circuit properties\n\nAt this point, we have instantiated a variational form for the molecule according to the UCCSD formulation. Aqua's `UCCSD` method has returned back to us an abstraction of the VQE variational form. Let's query some of its properties.\n\n\n```python\n# Query the variational form for the number of parameters, and the parameter bounds.\nvar_form.num_parameters, var_form.parameter_bounds[0]\n```\n\n\n\n\n (3, (-3.141592653589793, 3.141592653589793))\n\n\n\nAs a VQE circuit, `var_form` has some number of parameters, and each parameter can take values in $[-\\pi, \\pi]$.\nThese parameters control the angles of rotation in the quantum circuit that represents the variational anatz.\n\nFor a particular set of parameters, there is an assocated quantum circuit. 
Let's input some fiducial parameter values and query properties of the resulting circuit to introduce some nomenclature for describing circuits.\n\n\n```python\n# Instantiate a concrete instance of the VQE ansatz by setting all the parameters to the\n# arbitrarily-chosen value of 0.\nvar_circ = var_form.construct_circuit(np.zeros(var_form.num_parameters))\n\n# Use Terra to convert the circuit to its directed, acyclic graph (DAG) representation.\nvar_circ_dag = circuit_to_dag(var_circ)\n\n# The .properties() method of the DAG to get circuit properties.\nvar_circ_dag.properties()\n```\n\n\n\n\n {'size': 150,\n 'depth': 83,\n 'width': 4,\n 'bits': 0,\n 'factors': 1,\n 'operations': {'u3': 42, 'u2': 40, 'cx': 56, 'u1': 12}}\n\n\n\nThese circuit properties are:\n\n\n* `size`: The total number of gates in the circuit\n* `depth`: The total number of _layers_ in the circuit\n* `width`: The number of qubits in the circuit\n* `bits`: The number of classical bits in the circuit. (NOTE: Because the circuit prepares a VQE trial state, and does not have any measurements, `bits` will be 0.)\n* `factors`: The number of tensor factors the circuit could be decomposed into (by looking at the number of weakly connected components of the DAG)\n\nThe `.properties()` method of the DAG representation also breaks down the total number of gates (`size`) by the individual gates themselves. These are the $u1,u2,u3$ and CNOT gates described [here](https://qiskit.org/documentation/terra/summary_of_quantum_operations.html). You can verify that the total number of gates is in fact equal to `size`.\n\n### Step 4: Transpile the circuit using Terra and `t|ket\u3009`\n\nHaving instantiated a VQE variational form and examined some of its properties, we'd like to optimize those properties so that the circuit could be run on a near-term device. Terra provides a framework (the [_transpiler_](https://qiskit.org/documentation/terra/overview.html#transpiler)) for manipulating circuits according to certain _passes_, and where the execution of the passes is orchestrated by a _PassManager_, which we can use to do this optimization. Importantly, _the transpiler manipulates the circuit to change circuit properties, without actually changing the input-output relationship the circuit defines_. That is, the transpiler takes a circuit and re-writes it, but doesn't change what the circuit actually does.\n\nCQC has written several passes for manipulating quantum circuits which are available via `pytket`. For an extensive discussion of the framework `t|ket\u3009` uses to manipulate circuits, see **[TODO: include link]**. Here, we'll demonstrate using Terra and `pytket` to optimize a randomly-chosen realization of the VQE variational form. Currently, passes in the transpiler require a backend in order to run. 
For simplicity, we start by using a simulator backend.\n\n\n```python\n# Grab an Aer backend\naer_backend = Aer.get_backend('qasm_simulator')\n```\n\n\n```python\n# Choose a random set of parameters\nseed = 0\nnp.random.seed(seed)\nparams = np.random.uniform(low=-3.1, high=3.1, size=var_form.num_parameters)\n\n# Construct a random instance of the variational circuit\nvar_circuit = var_form.construct_circuit(params)\n\n# Turn the circuit into a DAG\nvar_dag = circuit_to_dag(var_circuit)\n```\n\nThis randomly-chosen realization of the variational form has the same circuit properties as the circuit we instantiated in Step 3.\n\n\n```python\nvar_dag.properties()\n```\n\n\n\n\n {'size': 150,\n 'depth': 83,\n 'width': 4,\n 'bits': 0,\n 'factors': 1,\n 'operations': {'u3': 42, 'u2': 40, 'cx': 56, 'u1': 12}}\n\n\n\nNow, we set up a transpiler using Terra and the `TketPass` from `pytket`.\n\n\n```python\n# Create a Terra PassManager object\ntk_pass_manager = PassManager()\n\n# Set up the TketPass\ntk_pass = TketPass(aer_backend)\n\n# Add the TketPass to the PassManager\ntk_pass_manager.append(tk_pass)\n```\n\nWith the transpiler set up and the realization of the variational form put into a DAG, we can now use the `transpile_dag` function to run the PassManger.\n\n\n```python\nvar_dag_transpiled = transpile_dag(var_dag, pass_manager=tk_pass_manager)\n```\n\nLet's check the properties of this circuit to see how the transpiled circuit differs from the original one.\n\n\n```python\nvar_dag_transpiled.properties()\n```\n\n\n\n\n {'size': 95,\n 'depth': 57,\n 'width': 4,\n 'bits': 0,\n 'factors': 1,\n 'operations': {'u3': 30, 'cx': 52, 'u1': 12, 'u2': 1}}\n\n\n\nThe `t|ket\u3009` compiler has optimized the circuit, and as we might hope for, _the transpiled circuit has a lower `size` and `depth` than the original circuit._\n\n\nThe table below gives properties of the transpiled circuit for H$_{2}$ and LiH (with `seed=0`, corresponding to the worst-case and also most common performance of the transpiler).\n\n\n| Molecule: H$_2$ | Total Gates | Depth | CNOT Count |\n|--------------------------------------|-------------|---------------|--------------------|\n| Input circuit | 150 | 83 | 56 |\n| `tket` circuit optimization (Aer backend) | 95 | 57 | 52 |\n\n\n| Molecule: LiH | Total Gates | Depth | CNOT Count |\n|--------------------------------------|-------------|---------------|--------------------|\n| Input circuit | 13700 | 9342 | 8064 |\n| `tket` circuit optimization (Aer backend) | 7411 | 4416 | 5096 |\n\nFor both H$_2$ and LiH, circuit optimization using `t|ket\u3009` reduces the number of gates necessary to prepare the trial state. The depth also decreases, which makes running the circuit more feasible on near-term devices.\n\n### Step 5: Route the circuit onto real hardware\n\nEven though we've optimized the circuit, there's no guarantee that it can run, as written, on a real backend. This is because the directionaly of the CNOT gates on the backend may not be respected by the circuit. For this reason, we need to re-write the circuit in such as way that the CNOT gates respect the _coupling map_ of the backend. `t|ket\u3009` knows how to do this, and handles this problem (called circuit \"routing\") when a real backend is put into the `TketPass` object. 
Routing refers to the process of making quantum circuits hardware compliant by the addition of SWAP gates such that all multi-qubit interactions occur on adjacent physical qubits.\n\nBecause we know _a priori_ that the H$_{2}$ and LiH molecules requires at most 12 qubits, we make sure to use an IBMQ backend with no less than 12 qubits.\n\n\n```python\n# Grab only backends that have at least 12 qubits\nprovider.backends(filters=lambda x: x.configuration().n_qubits >= 12)\n```\n\n\n\n\n [,\n ]\n\n\n\nWe'll use the `ibmq_16_melbourne` backend.\n\n\n```python\nreal_backend = provider.get_backend('ibmq_16_melbourne')\n```\n\nTo route the variational circuit onto real hardware, we simply need to change the backend.\n\n\n```python\n# Create a Terra PassManager object\ntk_pass_manager = PassManager()\n\n# Set up the TketPass\ntk_pass = TketPass(real_backend)\n\n# Add the TketPass to the PassManager\ntk_pass_manager.append(tk_pass)\n```\n\nAgain, we can transpile the DAG representation of the variational circuit to a backend-compliant circuit using the `tranpsile_dag` function. Here, because the backend has a non-trivial coupling map, the `TketPass` will perform both circuit optimization and optimal routing calculations. We need to add a register containing the ancilla qubits on the architecture that `TketPass` can use in routing.\n\n\n```python\nblank_qubits = QuantumRegister(len(real_backend.properties().qubits) - var_dag.width())\nvar_dag.add_qreg(blank_qubits)\nvar_dag_transpiled = transpile_dag(var_dag, pass_manager=tk_pass_manager)\n```\n\n\n```python\nvar_dag_transpiled.properties()\n```\n\n\n\n\n {'size': 108,\n 'depth': 69,\n 'width': 4,\n 'bits': 0,\n 'factors': 1,\n 'operations': {'u3': 30, 'cx': 52, 'u1': 12, 'u2': 14}}\n\n\n\nWe can compare these results to Qiskit's own default transpilation by passing in the backend's `coupling_map` to the `transpile_dag` function.\n\n\n```python\ntranspile_dag(var_dag, coupling_map=real_backend.configuration().coupling_map).properties()\n```\n\n\n\n\n {'size': 119,\n 'depth': 89,\n 'width': 14,\n 'bits': 0,\n 'factors': 11,\n 'operations': {'u3': 15, 'cx': 56, 'u1': 14, 'u2': 34}}\n\n\n\nThe table below shows circuit properties for H$_{2}$ (with `seed=0`).\n\n| Molecule: H$_2$ | Total Gates | Overall Depth | Overall CNOT Count |\n|--------------------------------------|-------------|---------------|--------------------|\n| Input circuit | 150 | 83 | 56 |\n| `tket` circuit optimization (Aer backend) | 95 | 57 | 52 |\n| `Qiskit ` default routing (real backend) | 119 | 89 | 56 |\n| `tket` circuit optimzation + routing (real backend) | 108 | 69 | 52 |\n\nThe table below shows circuit properties for LiH (with `seed=0`).\n\n| Molecule: LiH | Total Gates | Overall Depth | Overall CNOT Count |\n|--------------------------------------|-------------|---------------|--------------------|\n| Input circuit | 13700 | 9342 | 8064 |\n| `tket` circuit optimization (Aer backend) | 7411 | 4416 | 5096 |\n| `Qiskit ` default routing (real backend) | 42178 | 23367 | 17977 |\n| `tket` circuit optimzation + routing (real backend) | 19256 | 10022 | 8711 |\n\n## Application II: using QSE to compute excited state energies\n\nThe previous application showed us how to use `t|ket\u3009` and Terra to optimally route circuits onto real hardware. In the introduction, we observed that in order to use the quantum subspace expansion to compute excited state energies, we need to first come up with an estimate of the ground state. 
For this, we use VQE.\n\nIn this application, we'll instantiate a VQE circuit, run it, and use the estimated ground state as input to `pytket`'s quantum subspace expansion function(s). NOTE: We'll use a simulator backend, and will not run the VQE algorithm on real hardware.\n\n\n```python\n# Code imports\n\n# From Aqua, we need \nfrom qiskit.aqua import QuantumInstance\n\nfrom qiskit.aqua.algorithms.adaptive import VQE\nfrom qiskit.aqua.components.optimizers import L_BFGS_B\n\n# From pytket, we need QSE functions\n\nfrom pytket.chemistry import QseMatrices, QSE\n```\n\n\n```python\nbackend = Aer.get_backend('statevector_simulator')\n```\n\n\n```python\npass_manager = PassManager()\ntk_pass = TketPass(backend)\npass_manager.append(tk_pass)\n\nquantum_instance = QuantumInstance(backend, pass_manager=pass_manager)\n```\n\n### Step 1: Use VQE to estimate ground state\n\nFirst, we'll use VQE to estimate the ground state and its energy.\n\n\n```python\n# Temporary code for Aer on Macbook\nimport os\nos.environ['KMP_DUPLICATE_LIB_OK']='True'\n```\n\n\n```python\n# Set initial values of parameters\nnumber_amplitudes = len(var_form._single_excitations)+ len(var_form._double_excitations)\n\namplitudes_0 = []\nfor i in range(number_amplitudes):\n amplitudes_0.append(0.00001)\n\noptimizer = L_BFGS_B()\noptimizer.set_options(maxfun=1000, factr=10, iprint=10)\n\n# setup VQE with operator, variation form, and optimzer\nvqe_algorithm = VQE(operator=qubitOp, operator_mode='matrix', \n var_form=var_form, optimizer=optimizer, initial_point=amplitudes_0)\n\nresults = vqe_algorithm.run(quantum_instance)\n\neigval = results['eigvals'][0]\ngs_energy = eigval.real + molecule.nuclear_repulsion_energy\n\nprint(\"GS Minimum value: {}\".format(gs_energy))\nprint(\"GS Parameters: {}\".format(results['opt_params']))\n\n# store ground state amplitudes for subsequent steps\nopti_amplitudes = results['opt_params']\n```\n\n GS Minimum value: -1.1361894540653963\n GS Parameters: [ 5.06008657e-07 5.12457730e-07 -1.04867316e-01]\n\n\n### Step 2: Use QSE to find excited states from ground state \n\nWe now have the main ingredients to perform a QSE calculation: the molecular Hamiltonian and the optimized parameters to reconstruct the ground state wavefunction. We build our excitation hamiltonian and overlap operators, and measure the elements that compose $H$ and $S$. After we have obtained these arrays, we perform a diagonalization to obtain the excited state energies and vectors.\n\n\n\n```python\nqubitOp = ferOp.mapping(map_type='JORDAN_WIGNER', threshold=0.00000001)\nn_qubits = qubitOp.num_qubits\nqubitOp.chop(10**-10)\n\n# Use matrix term helper class\nmatrix_terms = QseMatrices(qubitOp, n_qubits)\n\n# Instantiate an instance of the QSE algorithm\nqse_algorithm = QSE(matrix_terms, 'matrix', var_form, opt_init_point=opti_amplitudes)\n\n# Run the algorithm\nenergies = qse_algorithm.run(quantum_instance)['eigvals']\n\n# The excited state energies are the energies from above,\n# plus the nuclear repulsion energy.\nprint(\"Excited State Energies: \", energies+molecule.nuclear_repulsion_energy)\n\n\n```\n\n Excited State Energies: [-1.13618945 -0.47845306 -0.47845306 -0.47845306 -0.1204519 0.5833141\n 0.75596744 0.75596744 0.75596744 0.75596744 0.75596744 0.75596744\n 0.75596744 0.75596744 0.75596744 0.75596744]\n\n\nThe calculation provides a refined value of the ground state and a series of excited state energies, whose number depends on the size of the basis set chosen for the molecule, as well as its nature and symmetry. 
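As a sanity check on this output, note that the classical post-processing at the heart of QSE is just the small generalized eigenvalue problem $H'C = SCE$ described earlier, which can be reproduced directly with `scipy`. The matrices in the sketch below are small illustrative stand-ins (not values measured by this notebook); in a real check you would substitute the $H'$ and $S$ assembled from the measured matrix elements.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative stand-ins for the measured QSE matrices (replace with real data).
Hp = np.array([[-1.10,  0.05],
               [ 0.05, -0.45]])   # "H-prime": Hamiltonian in the expansion subspace
S  = np.array([[ 1.00,  0.10],
               [ 0.10,  1.00]])   # overlap matrix of the (non-orthogonal) subspace vectors

# scipy.linalg.eigh with two arguments solves the generalized problem Hp @ C = S @ C @ diag(E).
E, C = eigh(Hp, S)
print("Subspace eigenvalues (lowest first):", E)
```

The eigenvalues returned here correspond to the electronic energies; as in the cell above, the nuclear repulsion energy still needs to be added to obtain the total energies.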
Some of the energies obtained are repeated several times, signaling that some of the states obtained are degenerate. This result can be improved by either increasing the basis set or considering higher-order excitations in the subspace expansion.\n\nThe following graph shows us the excited states of LiH at a range of bond distances calculated via our method, compared to values computed using the classical [EOM-CCSD method](https://aip.scitation.org/doi/10.1063/1.464746). To generate this data yourself, you can scan the bond length parameter we set at the start of the calculation.\n\n\n\nUsing the QSE technique, and our simulated VQE ground state, we find the ground state curve has a minimum at a separation of about 1.5 \u00c5, which is in reasonable agreement with experimental data. The calculation also finds a number of excited states. Looking at the first three of these, we find that at the equilibrium distance, these states are 0.11, 0.12 and 0.17 Ha higher in energy than the ground state, which is again in reasonable agreement with experimental data. Note the small kink in one of the excited states energy curve at a distance of approximately 1.2 \u00c5. This indicates that our restriction to single electron excitations is not enough to provide an accurate description at this distance. Overall, the comparison with classically computed EOM-CCSD curves shows that this method reproduces excited state energies with good accuracy at most distances.\n\n\n```python\n\n```\n", "meta": {"hexsha": "8082b2b17d6fbe43e20cb5542e2a3b4043a1c970", "size": 40628, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chemistry/QSE_pytket.ipynb", "max_stars_repo_name": "omarcostahamido/qiskit-community-tutorials", "max_stars_repo_head_hexsha": "869bb03bda9180db51d5b52e019de5e543c8d34b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-09-12T01:50:58.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-09T20:52:19.000Z", "max_issues_repo_path": "chemistry/QSE_pytket.ipynb", "max_issues_repo_name": "omarcostahamido/qiskit-community-tutorials", "max_issues_repo_head_hexsha": "869bb03bda9180db51d5b52e019de5e543c8d34b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chemistry/QSE_pytket.ipynb", "max_forks_repo_name": "omarcostahamido/qiskit-community-tutorials", "max_forks_repo_head_hexsha": "869bb03bda9180db51d5b52e019de5e543c8d34b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-09-06T08:54:17.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-06T08:54:17.000Z", "avg_line_length": 43.0837751856, "max_line_length": 872, "alphanum_fraction": 0.6292950674, "converted": true, "num_tokens": 7470, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
NO", "lm_q1_score": 0.585101139733739, "lm_q2_score": 0.46879062662624377, "lm_q1q2_score": 0.27428992993550894}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nToggle cell visibility here.\n\n\n\n```python\n%matplotlib notebook\nimport scipy.signal as signal\nimport matplotlib.pyplot as plt\nfrom ipywidgets import widgets\nfrom ipywidgets import interact\nimport numpy as np\nimport sympy as sym\n```\n\n## PID krmilnik - \u010dasovni odziv\n\nProporcionalno-integrirni-diferencirni (PID) krmilni algoritem je najpogosteje uporabljen krmilni algoritem. Njegovo prenosno funkcijo zapi\u0161emo kot:\n\n\\begin{equation}\n P(s)=K_p \\cdot \\left( 1 + \\frac{1}{T_i s} + T_d s \\right).\n\\end{equation}\n\nPrenosna funkcija je sestavljena iz vsote proporcionalne, integrirne in diferencirne komponente. Ni nujno, da so v izbranem krmilniku prisotne vse tri komponente; \u010de ni diferencirne ali integrirne komponente, govorimo tako o PI oz. PD krmilniku. V tem interaktivnem primeru je prikazan odziv P, Pi, PD in PID krmilnika na enotsko sko\u010dno, enotsko impulzno in sinusno funkcijo ter enotsko rampo.\n\n---\n\n### Kako upravljati s tem interaktivnim primerom?\n1. Izberi vstopni signal s preklapljanjem med *enotsko sko\u010dno funkcijo*, *enotsko impulzno funkcijo*, *enotsko rampo* in *sinusno funkcijo*.\n2. Izberi tip krmilnega algoritma s klikom na *P*, *PI*, *PD* ali *PID* gumb.\n3. Z uporabo drsnikov spreminjaj vrednosti koeficientov proporcionalnega ($K_p$), integrirnega ($T_i$) in diferencirnega ($T_d$) oja\u010dnja. \n4. Z uporabo drsnika $t_{max}$ lahko spreminja\u0161 interval vrednosti prikazanih na x osi.\n\n\n\n\n```python\na = 0.1\n\n# make figure\nfig = plt.figure(figsize=(9.8, 5),num='PID krmilnik')\n# add axes\nax = fig.add_subplot(111)\nax.grid(which='both', axis='both', color='lightgray')\nax.set_title('\u010casovni odziv')\n# plot step function and responses (initalisation)\ninput_plot, = ax.plot([],[],'C0', linewidth=1,label='vstopni signal')\nresponse_plot, = ax.plot([],[], 'C1', linewidth=2,label='izstopni signal')\nax.axhline(linewidth=.5, color='k')\nax.axvline(linewidth=.5, color='k')\nax.legend()\n\nax.set_xlabel('$t$ [s]')\nax.set_ylabel('vhod, izhod')\nplt.show()\n\nP, I, D, s = sym.symbols('P, I, D, s')\n\ninput_type = 'enotska sko\u010dna funkcija' #input function\nTime_span = 10 # max time on x-axis plot\n\n#initialize global variables\nKP = 1.\nTI = 1.\nTD = 1.\nnum = []\nden = []\n\ndef update_plot():\n global num, den, input_type, Time_span\n num_temp = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in num]\n den_temp = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in den]\n \n system = signal.TransferFunction(num_temp, den_temp)\n \n #time, response = signal.step(system) #only for setting time borders (for nicer plot. 
could also calculate dominant frequency)\n #time = np.linspace(0,time[-1],1000)\n time = np.linspace(0, Time_span, 600)\n \n if input_type == 'enotska sko\u010dna funkcija':\n u = np.ones_like(time)\n u = np.concatenate((np.array([0]),u))\n time, response = signal.step(system, T=time)\n time = np.concatenate((np.array([0]), time))\n response = np.concatenate((np.array([0]), response))\n elif input_type == 'enotska impulzna funkcija':\n u = np.zeros_like(time)\n u = np.concatenate((np.array([10]), u))\n time, response = signal.impulse(system, T=time)\n time = np.concatenate((np.array([0]), time))\n response = np.concatenate((np.array([0]), response))\n elif input_type == 'sinusna funkcija':\n u = np.sin(time*2*np.pi)\n time, response, _ = signal.lsim(system, U=u, T=time)\n elif input_type == 'enotska rampa':\n u = time\n time, response, _ = signal.lsim(system, U=u, T=time)\n else:\n raise Exception(\"Napaka v programu. Prosim znova za\u017eeni primer.\")\n \n response_plot.set_data(time, response)\n input_plot.set_data(time, u)\n ax.set_ylim([min([np.min(u), min(response),-.1]),min(100,max([max(response)*1.05, 1, 1.05*np.max(u[1:])]))])\n ax.set_xlim([-0.1,max(time)])\n plt.show()\n \n\ndef transfer_func(controller_type):\n global num, den\n proportional = P\n integral = P/(I*s)\n differential = P*D*s/(a*D*s+1)\n if controller_type =='P':\n controller_func = proportional\n Kp_widget.disabled=False\n Ti_widget.disabled=True\n Td_widget.disabled=True\n elif controller_type =='PI':\n controller_func = proportional+integral\n Kp_widget.disabled=False\n Ti_widget.disabled=False\n Td_widget.disabled=True\n elif controller_type == 'PD':\n controller_func = proportional+differential\n Kp_widget.disabled=False\n Ti_widget.disabled=True\n Td_widget.disabled=False\n else:\n controller_func = proportional+integral+differential\n Kp_widget.disabled=False\n Ti_widget.disabled=False\n Td_widget.disabled=False\n system_func = controller_func\n \n num = [sym.fraction(system_func.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func.factor())[0], gen=s)))]\n den = [sym.fraction(system_func.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func.factor())[1], gen=s)))]\n update_plot()\n \ndef func(Kp, Ti, Td, time_span):\n global KP, TI, TD, Time_span\n KP = Kp\n TI = Ti\n TD = Td\n Time_span = time_span\n update_plot()\n \nstyle = {'description_width': 'initial'}\n\ndef buttons_controller_clicked(event):\n controller = buttons_controller.options[buttons_controller.index]\n transfer_func(controller)\nbuttons_controller = widgets.ToggleButtons(\n options=['P', 'PI', 'PD', 'PID'],\n description='Izberi tip krmilnega algoritma:',\n disabled=False,\n style=style)\nbuttons_controller.observe(buttons_controller_clicked)\nstyle = {'description_width': 'initial','button_width':'180px'}\ndef buttons_input_clicked(event):\n global input_type\n input_type = buttons_input.options[buttons_input.index]\n update_plot()\nbuttons_input = widgets.ToggleButtons(\n options=['enotska sko\u010dna funkcija','enotska impulzna funkcija', 'enotska rampa', 'sinusna funkcija'],\n description='Izberi vstopni signal:',\n disabled=False,\n style=style)\nbuttons_input.observe(buttons_input_clicked)\n\n\nKp_widget = widgets.IntSlider(value=20,min=1,max=100,step=1,description=r'\\(K_p \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.1d')\nTi_widget = 
widgets.FloatSlider(value=.1,min=0.001,max=3.,step=0.001,description=r'\\(T_{i} \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.3f')\nTd_widget = widgets.FloatSlider(value=.1,min=0.001,max=3.,step=0.001,description=r'\\(T_{d} \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.3f')\n\ntime_span_widget = widgets.FloatSlider(value=10.,min=.5,max=50.,step=0.1,description=r'\\(t_{max} \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.1f')\n\ntransfer_func('P')\n\ndisplay(buttons_input)\ndisplay(buttons_controller)\n\ninteract(func, Kp=Kp_widget, Ti=Ti_widget, Td=Td_widget, time_span=time_span_widget);\n```\n\n\n \n\n\n\n\n\n\n\n ToggleButtons(description='Izberi vstopni signal:', options=('enotska sko\u010dna funkcija', 'enotska impulzna funk\u2026\n\n\n\n ToggleButtons(description='Izberi tip krmilnega algoritma:', options=('P', 'PI', 'PD', 'PID'), style=ToggleBut\u2026\n\n\n\n interactive(children=(IntSlider(value=20, description='\\\\(K_p \\\\)', min=1, readout_format='.1d'), FloatSlider(\u2026\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "9eeebb02f3723eded9ee954aeb5c044f8dd000df", "size": 176646, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_si/examples/02/TD-15-PID_krmilnik_casovni_odziv.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT_si/examples/02/TD-15-PID_krmilnik_casovni_odziv.ipynb", "max_issues_repo_name": "ICCTerasmus/ICCT", "max_issues_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT_si/examples/02/TD-15-PID_krmilnik_casovni_odziv.ipynb", "max_forks_repo_name": "ICCTerasmus/ICCT", "max_forks_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 154.2759825328, "max_line_length": 128173, "alphanum_fraction": 0.8389264405, "converted": true, "num_tokens": 2551, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.5312093733737563, "lm_q2_score": 0.5156199157230156, "lm_q1q2_score": 0.2739021323302521}} {"text": "# Self Consistent Field Theory - Lab 1\n\n\n```python\nfrom IPython.core.display import HTML\ncss_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'\nHTML(url=css_file)\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n```python\n%matplotlib inline\nimport numpy\nfrom matplotlib import pyplot\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16\nrcParams['figure.figsize'] = (12,6)\nfrom scipy.integrate import quad\n```\n\nIn Hartree-Fock theory the time-independent Schr\u00f6dinger equation\n\n\\begin{equation}\n H \\psi = E \\psi\n\\end{equation}\n\nis solved (approximately), where $H$ is the Hamiltonian operator, $\\psi$ the wavefunction, and $E$ the energy. The theory assumes that the wave function depends on the locations of the electrons ${\\bf r}_i$, where eg ${\\bf r}_1$ is the spatial location of the first electron, and that the wave function can be written as the *Slater determinant*\n\n\\begin{equation}\n \\psi = \\frac{1}{\\sqrt{N}} \\begin{vmatrix} \\chi_1({\\bf r}_1) & \\chi_2({\\bf r}_1) & \\dots &\\chi_N({\\bf r}_1) \\\\ \\chi_1({\\bf r}_2) & \\chi_2({\\bf r}_2) & \\dots &\\chi_N({\\bf r}_2) \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ \\chi_1({\\bf r}_N) & \\chi_2({\\bf r}_N) & \\dots &\\chi_N({\\bf r}_N) \\end{vmatrix}\n\\end{equation}\n\nAfter some considerable theoretical work, the Hartree-Fock equations are\n\n\\begin{equation}\n F({\\bf x}_1) \\chi_i({\\bf x}_1) = \\epsilon_i \\chi_i({\\bf x}_1),\n\\end{equation}\n\nwhere $F$ is the *Fock* operator\n\n\\begin{equation}\n F({\\bf x}_1) = H({\\bf x}_1) + \\sum_j \\left( J_j({\\bf x}_1) - K_j({\\bf x}_1) \\right),\n\\end{equation}\n\nwith $J$ being the Coulomb operator\n\n\\begin{equation}\n J_j({\\bf x}_1) = \\int \\text{d} {\\bf x}_2 \\, \\frac{| \\chi_j ({\\bf x}_2) |^2}{r_{12}}\n\\end{equation}\n\nand $K$ is the exchange operator\n\n\\begin{equation}\n K_j({\\bf x}_1) \\chi_i({\\bf x}_1) = \\left[ \\int \\text{d} {\\bf x}_2 \\, \\chi^*_j ({\\bf x}_2) \\frac{1}{r_{12}} \\chi_i ({\\bf x}_2) \\right] \\chi_j({\\bf x}_1).\n\\end{equation}\n\nIn the above $r_{12}$ is the distance between the first and second electrons, $r_{12} = \\| {\\bf x}_1 - {\\bf x}_2 \\|$.\n\nAs the Hamiltonian operator $H$ contains a second partial derivative ($H = -\\frac{1}{2} \\nabla^2 + \\dots$) this is a set of integro-differential equations, which is painful to solve numerically (see [this review](http://dx.doi.org/10.1016/j.cpc.2012.09.033) for an example). Instead, as with Finite Elements, it's better to write the orbitals $\\chi$ in terms of a function basis, as\n\n\\begin{equation}\n \\chi_i = \\sum_{\\mu=1}^K C_{i\\mu} \\tilde{\\chi}_{\\mu}.\n\\end{equation}\n\nHere the function basis is *global*: there is one expansion that holds over all of space.\n\nThis leads to the Hartree-Fock-Roothaan equations\n\n\\begin{equation}\n {\\bf F} {\\bf C} = {\\bf S} {\\bf C} \\epsilon\n\\end{equation}\n\nwhere all of the terms are matrices representing the operators. 
Written in more detail we have\n\n\\begin{equation}\n \\sum_{\\nu} F_{\\mu\\nu} C_{\\nu i} = \\epsilon_i \\sum_{\\nu} S_{\\mu \\nu} C_{\\nu i}\n\\end{equation}\n\nwhere the matrices are\n\n\\begin{align}\n S_{\\mu \\nu} &= \\int \\text{d} {\\bf x}_1 \\, \\tilde{\\chi}^*_{\\mu}({\\bf x}_1) \\tilde{\\chi}_{\\nu}({\\bf x}_1), \\\\\n F_{\\mu \\nu} &= \\int \\text{d} {\\bf x}_1 \\, \\tilde{\\chi}^*_{\\mu}({\\bf x}_1) F({\\bf x}_1) \\tilde{\\chi}_{\\nu}({\\bf x}_1).\n\\end{align}\n\nFor later purposes we define the *density matrix* ${\\bf D}$ as\n\n\\begin{equation}\n D_{\\mu \\nu} = \\sum_{j=1}^{N_{\\text{electrons}}/2} 2 C_{\\mu j} C_{\\nu j},\n\\end{equation}\n\nfrom which we write the Fock matrix as\n\n\\begin{equation}\n F_{\\mu \\nu} = H_{\\mu \\nu} + \\sum_{\\alpha} \\sum_{\\beta} \\left( G_{\\mu \\nu \\alpha \\beta} - \\frac{1}{2} G_{\\mu \\beta \\alpha \\nu} \\right) D_{\\alpha \\beta},\n\\end{equation}\n\nwhere $H$ is the one-electron operator in the function basis\n\n\\begin{equation}\n H_{\\mu \\nu} = \\int \\text{d}{\\bf x}_1 \\, \\chi_{\\mu}({\\bf x}_1) \\left( - \\frac{1}{2} \\nabla^2 \\right) \\chi_{\\nu}({\\bf x}_1) + \\sum_a \\int \\text{d}{\\bf x}_1 \\, \\chi_{\\mu}({\\bf x}_1) \\frac{Z_a}{|{\\bf R}_a - {\\bf r}_1|} \\chi_{\\nu}({\\bf x}_1)\n\\end{equation}\n\nand $G$ is the two-electron operator in the function basis\n\n\\begin{equation}\n G_{\\mu \\nu \\alpha \\beta} = \\int \\text{d}{\\bf x}_1 \\, \\text{d}{\\bf x}_2 \\, \\chi_{\\mu}({\\bf x}_1) \\chi_{\\nu}({\\bf x}_2) \\frac{1}{r_{12}} \\chi_{\\alpha}({\\bf x}_1) \\chi_{\\beta}({\\bf x}_2).\n\\end{equation}\n\nFinally, the total energy $E$ is given by\n\n\\begin{equation}\n E = \\frac{1}{2} \\sum_{\\mu=1}^N \\sum_{\\nu=1}^N D_{\\mu\\nu} \\left( H_{\\mu\\nu} + F_{\\mu\\nu} \\right) + V_{\\text{nn}},\n\\end{equation}\n\nwhere $V_{\\text{nn}}$ is the nucleon-nucleon interaction energy\n\n\\begin{equation}\n V_{\\text{nn}} = \\sum_{a} \\sum_{b} \\frac{Z_a Z_b}{\\| {\\bf R}_a - {\\bf R}_b \\|}.\n\\end{equation}\n\n## Self-consistent field solution procedure\n\nThis is an iterative prcedure, so must start from some initial guess.\n\n1. Calculate all one- and two-electron integrals, $H$ and $G$.\n2. Generate starting guess for the $C$ (molecular orbital [MO]) coefficients.\n3. Form the density matrix $D$.\n4. Form the Fock matrix $F$ from the core (one-electron) integrals $H$ plus the density matrix $D$ times the two-electron integrals $G$.\n5. Diagonalize the Fock matrix $F$. The eigenvectors contain the new MO coefficients.\n6. Form the new density matrix $D$. If sufficiently close to the old matrix we are done; otherwise, return to step 4.\n\nThe first step is difficult, so we will assume here that the elements of the $H$ matrix and $G$ tensor are given.\n\nThe crucial point is that all steps must be performed in the right basis, and whilst the basis changes between steps, the transformation matrix stays fixed. 
Given the overlap matrix $S$ between the basis functions, the transformation matrix $X$ is given by $U \\Lambda^{-1/2} U^*$, where $U$ and $\\Lambda$ are the eigenvectors and eigenvalues of $S$ respectively.\n\n### Code\n\nWrite a function that, given $S$, computes the transformation matrix $X = U \\Lambda^{-1/2} U^*$ (using `numpy.linalg.eig`).\n\n\n```python\n\n```\n\nWrite a function that, given $C$ and the number of electrons, computes the density matrix $D$, where\n\n\\begin{equation}\n D_{\\mu \\nu} = \\sum_{j=1}^{N_{\\text{electrons}}/2} 2 C_{\\mu j} C_{\\nu j}.\n\\end{equation}\n\n\n```python\n\n```\n\nWrite a function that, given $H, G$ and $D$, computes the Fock matrix $F$, where\n\n\n\\begin{equation}\n F_{\\mu \\nu} = H_{\\mu \\nu} + \\sum_{\\alpha} \\sum_{\\beta} \\left( G_{\\mu \\nu \\alpha \\beta} - \\frac{1}{2} G_{\\mu \\beta \\alpha \\nu} \\right) D_{\\alpha \\beta}.\n\\end{equation}\n\n\n```python\n\n```\n\nWrite a function that, given $F$, uses the `numpy.linalg.eigh` function to extract the eigenvalues and eigenvectors. It should return the orbital energies (the eigenvalues in order) and the new orbital coefficients ($X V$, where $V$ is the matrix of eigenvectors). It should compute $F' = X^* F X$ in the transformed basis, compute its eigenvalues $\\epsilon$ and eigenvectors $V$, and hence get the new coefficients $X V$.\n\n\n```python\n\n```\n\nWrite a function that, given $X, H, G$, and a guess for $C$ with its associated density matrix $D$, returns the new density matrix, new basis coefficients, and orbital energies.\n\n\n```python\n\n```\n\nWrite a function that, given $D, H, F$ and $V_{\\text{nn}}$, returns the total energy of the configuration.\n\n\n```python\n\n```\n\nWrite a function that, given $S, H, G, V_{\\text{nn}}$ and a guess for $C$, iterates the Hartree-Fock method until it converges to a certain tolerance. It should print the total energy of the configuration.\n\n\n```python\n\n```\n\n### Example\n\nA two-electron system would be $\\text{He} - \\text{H}_+$ - one Helium and one Hydrogen, with an electron missing. The required input data is:\n\n\n```python\nNelectrons = 2\nS = numpy.array([[1.0, 0.434311], [0.434311, 1.0]])\nH = numpy.array([[-1.559058, -1.111004], [-1.111004, -2.49499]])\nG = numpy.array([[[[ 0.77460594, 0.27894304],[ 0.27894304, 0.52338927]],\n [[ 0.27894304, 0.14063907],[ 0.14063907, 0.34321967]]],\n [[[ 0.27894304, 0.14063907],[ 0.14063907, 0.34321967]],\n [[ 0.52338927, 0.34321967],[ 0.34321967, 1.05571294]]]])\nVnn = 1.3668670357\n```\n\nCheck that your algorithm works: the total energy should be approximately $-2.626$ (Hartrees), and the initial guess can be pure zeros. 
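Before wiring everything together, a quick self-check of the very first step can be reassuring. The snippet below is only a sketch of one possible sanity check, not one of the lab's required functions: it builds the transformation matrix $X = U \Lambda^{-1/2} U^*$ directly from the $S$ defined above and confirms that it orthonormalizes the basis, i.e. that $X^* S X = I$.

```python
# Optional sanity check: X = U diag(lambda^{-1/2}) U* built from S should satisfy X* S X = I.
lam, U = numpy.linalg.eig(S)
X = U @ numpy.diag(lam**-0.5) @ U.conj().T
print(numpy.allclose(X.conj().T @ S @ X, numpy.eye(len(S))))   # expect True
```

With checks like this in place, run your full iteration on the data above.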
It should take around 15 iterations.\n\n\n```python\n\n```\n", "meta": {"hexsha": "b4a9f89273ecaef827c75c5219a694a154204799", "size": 18648, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FEEG6016 Simulation and Modelling/11-Self-Consistent-Fields-Lab-1.ipynb", "max_stars_repo_name": "ngcm/training-public", "max_stars_repo_head_hexsha": "e5a0d8830df4292315c8879c4b571eef722fdefb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2015-06-23T05:50:49.000Z", "max_stars_repo_stars_event_max_datetime": "2016-06-22T10:29:53.000Z", "max_issues_repo_path": "FEEG6016 Simulation and Modelling/11-Self-Consistent-Fields-Lab-1.ipynb", "max_issues_repo_name": "Jhongesell/training-public", "max_issues_repo_head_hexsha": "e5a0d8830df4292315c8879c4b571eef722fdefb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-11-28T08:29:55.000Z", "max_issues_repo_issues_event_max_datetime": "2017-11-28T08:29:55.000Z", "max_forks_repo_path": "FEEG6016 Simulation and Modelling/11-Self-Consistent-Fields-Lab-1.ipynb", "max_forks_repo_name": "Jhongesell/training-public", "max_forks_repo_head_hexsha": "e5a0d8830df4292315c8879c4b571eef722fdefb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 24, "max_forks_repo_forks_event_min_datetime": "2015-04-18T21:44:48.000Z", "max_forks_repo_forks_event_max_datetime": "2019-01-09T17:35:58.000Z", "avg_line_length": 34.5333333333, "max_line_length": 429, "alphanum_fraction": 0.5095452595, "converted": true, "num_tokens": 3786, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5117166047041654, "lm_q2_score": 0.5350984286266115, "lm_q1q2_score": 0.2738187510793438}} {"text": "# Multi-TX/TL\n\nMatthieu Kratz (BE240 Submission)\n\n## Why this model?\n\nLike most genetic circuit models, you typically start with a model that captures the two fundamental processes of transcription and translation. These processes can modelled with varying degrees of complexity, ranging from a basic tx/tl model (__Equation 1__) to complex models that simulate tx/tl at a base pair level. For my project, it was of particular importance that I accurately model the sequestration of transcriptional machinery. Further, I was working without RPU data, making typical models that assume some maximum steady state saturation e.g. positive proportional hill intractable for my system. \n\n\\begin{align}\n\\\\ \\\\\n&G \\xrightarrow{ktx} T + G \\\\ \\\\ \n&\\textbf{Equation 1.} \\ \\ \\text{Simple Transcription}\n\\\\\n\\end{align}\n\n\n\nHence, I needed a model that gave reasonable steady state transcription and translation dynamics from non-RPU parameters, all while accurately tracking the sequestration of the relevant machinery. \n\n## The Multi-TX Model\n\nThe model CRN is below and the mechanism effectively relies on accounting for all possible RNAp occupancy states, including the various transitions between these occupancy states.\n\n\\begin{align}\n\\\\ \\\\\n&\\textbf{1. Binding} \\\\ \n&T7_p + T7_p:G_n \\underset{kT72}{\\overset{kT71}\\rightleftharpoons} T7_p:G_{n\\alpha} \\ \\ , \\ \\text{n $\\neq$ $n_{max}$} \\\\ \\\\\n&\\textbf{2. Closed --> Open} \\\\ \n&T7_p:G_{n\\alpha} \\xrightarrow{k_{iso}} T7_p:G_{n+1} \\ \\ , \\ \\text{n $\\neq$ $n_{max}$} \\\\ \\\\\n&\\textbf{3. 
TX} \\\\ \n&T7_p:G_{n\\alpha} \\xrightarrow{ktx} nT7_m + nT7_p + T7_p:G_{0\\alpha} \\ \\ , \\ \\text{n $\\neq$ 0} \\\\ \n&T7_p:G_n \\xrightarrow{ktx} nT7_m + nT7_p + T7_p:G_{0} \\ \\ , \\ \\text{n $\\neq$ 0} \\\\\n\\end{align}\n\nkT71 --> Promoter binding rate constant (bimolecular)\n\nkT72 --> Promoter unbinding rate constant (unimolecular)\n\nk_iso --> Closed to open complex transition rate constant (unimolecular)\n\nktx --> Single polymerase mRNA synthesis rate constant (unimolecular)\n\nA couple of comments:\n- We explicitly model the process of transitioning from the closed ($T7_p:G_{n\\alpha}$) to open complex ($T7_p:G_{n+1}$). This a pretty slow reaction in vivo, and in the various iterations of this model, including this seemed to be key in accurately reflecting transcription dynamics.\n\n- $n_{max}$ refers to the maximum possible occupancy of the gene. This is the physical limit which is ultimately determined by the footprint of the polymerase and the length of the gene. In relation to the point above, without isomerization, this physical limit is always met. With the explicit isomerization, this physical saturation is not met (with my parameter set at least).\n\n- Polymerase can only be added one at a time to existing genes i.e. cannot have multiple binding events or closed complexes simultaneously.\n\n- We have two TX reactions, one from the closed state and the other from the open state, both with $N$ actively transcribing polymerases. We decided to allow release from the closed state as there is no reason why one polymerase in the closed form should inhibit of other activitely transcribing polymerases. Further, at a modelling level, not allowing release from the closed state results in excessive sequestration of polymerase due to the long time scale of isomerization.\n\n- In both TX reactions, the entirety of the $N$ polymerases (bar the one in the closed state) are simultaneously release, along with $N$ transcripts and the unoccupied gene ($T7_p:G_{0}$)\n\n## Biocrnpyler multi_tx Mechanism subclass\n\nSubclass should be availabe via `from biocrnpyler import mechanism`, code is below for clarity\n\nCouple of comments:\n- The subclass has been designed to be used in concert with the Promoter and DNA assemblies subclasses\n\n- Have to define a maximum occupancy (int) and cognate polymerase (species or str) when instantiating \n\n- The various complex species are generated within the subclass from the names of the polymerase and dna objects in the DNAassembly object. 
They are defined as species with DNA material types.\n \n\n## Example: T7 Polymerase Transcription of GFP mRNA\n\n\n```python\nfrom biocrnpyler import *\n\nclass multi_tx(Mechanism):\n '''\n Multi-RNAp Transcription w/ Isomerization:\n Detailed transcription mechanism accounting for each individual \n RNAp occupancy states of gene.\n \n n ={0, max_occ}\n DNA:RNAp_n + RNAp <--> DNA:RNAp_n_c --> DNA:RNAp_n+1\n DNA:RNAp_n --> DNA:RNAp_0 + n RNAp + n mRNA\n DNA:RNAp_n_c --> DNA:RNAp_0_c + n RNAp + n mRNA\n \n n --> number of open configuration RNAp on DNA\n max_occ --> Physical maximum number of RNAp on DNA (based on RNAp and DNA dimensions)\n DNA:RNAp_n --> DNA with n open configuration RNAp on it\n DNA:RNAp_n_c --> DNA with n open configuration RNAp and 1 closed configuration RNAp on it\n \n For more details, see examples/MultiTX_Demo.ipynb\n '''\n \n # initialize mechanism subclass\n def __init__(self, pol, name='multi_tx', mechanism_type='transcription', **keywords):\n\n if isinstance(pol,str):\n self.pol = Species(name=pol, material_type='protein')\n \n elif isinstance(pol,Species):\n self.pol = pol\n \n else:\n raise ValueError(\"'pol' must be a string or Species\")\n \n \n Mechanism.__init__(self, name=name, mechanism_type=mechanism_type, **keywords)\n \n # species update\n def update_species(self, dna, transcript, component, part_id, **keywords):\n max_occ = int(component.get_parameter(\"max_occ\", part_id = part_id, mechanism = self))\n cp_open = []\n cp_closed = []\n for n in range(1,max_occ + 1):\n name_open = self.pol.name + 'x' + dna.name + '_' + str(n)\n cp_open.append(ComplexSpecies([dna]+[self.pol for i in range(n)],name=name_open))\n if n > 1:\n name_closed = self.pol.name + 'x' + dna.name + '_closed' + '_' + str(n-1)\n cp_closed.append(ComplexSpecies([dna]+[self.pol for i in range(n-1)],name=name_closed))\n else:\n name_closed = self.pol.name + 'x' + dna.name + '_closed' + '_' + str(0)\n cp_closed.append(ComplexSpecies([dna]+[self.pol for i in range(1)],name=name_closed))\n \n cp_misc = [self.pol,dna,transcript]\n \n \n return cp_open + cp_closed + cp_misc\n \n def update_reactions(self, dna, transcript, component, part_id, **keywords):\n \n '''\n DNA:RNAp_n + RNAp <--> DNA:RNAp_n_c --> DNA:RNAp_n+1\n kf1 = k1, kr1 = k2, kf2 = k_iso\n DNA:RNAp_n --> DNA:RNAp_0 + n RNAp + n mRNA\n kf = ktx_solo\n DNA:RNAp_n_c --> DNA:RNAp_0_c + n RNAp + n mRNA\n kf = ktx_solo\n \n max_occ = maximum occupancy of gene (physical limit)\n '''\n \n # parameter loading\n k1 = component.get_parameter(\"k1\", part_id = part_id, mechanism = self)\n k2 = component.get_parameter(\"k2\", part_id = part_id, mechanism = self)\n k_iso = component.get_parameter(\"k_iso\", part_id = part_id, mechanism = self)\n ktx_solo = component.get_parameter(\"ktx_solo\", part_id = part_id, mechanism = self)\n max_occ = int(component.get_parameter(\"max_occ\", part_id = part_id, mechanism = self))\n \n # complex species instantiation\n cp_open = []\n cp_closed = []\n for n in range(1,max_occ + 1):\n name_open = self.pol.name + 'x' + dna.name + '_' + str(n)\n cp_open.append(ComplexSpecies([dna]+[self.pol for i in range(n)],name=name_open))\n if n > 1:\n name_closed = self.pol.name + 'x' + dna.name + '_closed' + '_' + str(n-1)\n cp_closed.append(ComplexSpecies([dna]+[self.pol for i in range(n-1)],name=name_closed))\n else:\n name_closed = self.pol.name + 'x' + dna.name + '_closed' + '_' + str(0)\n cp_closed.append(ComplexSpecies([dna]+[self.pol for i in range(1)],name=name_closed))\n \n \n # Reactions\n # polymerase + 
complex(n) --> complex(n_closed)\n rxn_open_pf = [Reaction(inputs=[self.pol, cp_open[n]], outputs=[cp_closed[n+1]], k=k1) for n in range(0,max_occ-1)]\n rxn_open_pr = [Reaction(inputs=[cp_closed[n+1]], outputs=[self.pol, cp_open[n],], k=k2) for n in range(0,max_occ-1)]\n \n # isomerization\n rxn_iso = [Reaction(inputs=[cp_closed[n]], outputs=[cp_open[n]], k=k_iso) for n in range(0,max_occ)]\n \n # release/transcription from open and closed states\n rxn_release_open = []\n rxn_release_closed = []\n for n in range(0,max_occ):\n rxn_temp1 = Reaction(inputs= [cp_open[n]], outputs=[self.pol for i in range(n+1)] + \n [transcript for i in range(n+1)] + [dna], k=ktx_solo)\n rxn_release_open.append(rxn_temp1)\n \n for n in range(1,max_occ):\n rxn_temp2 = Reaction(inputs= [cp_closed[n]], outputs=[self.pol for i in range(n)] + \n [transcript for i in range(n)] + [cp_closed[0]], k=ktx_solo)\n rxn_release_closed.append(rxn_temp2)\n \n # missing reactions (0 --> 0_closed and v.v. 0_closed --> 0)\n rxn_m1 = Reaction(inputs=[dna,self.pol], outputs=[cp_closed[0]], k=k1)\n rxn_m2 = Reaction(inputs=[cp_closed[0]], outputs=[dna,self.pol], k=k2)\n \n rxn_all = rxn_open_pf + rxn_open_pr + rxn_iso + rxn_release_open + rxn_release_closed + [rxn_m1, rxn_m2]\n \n return rxn_all\n```\n\n C:\\Users\\mkratz\\Anaconda3\\lib\\site-packages\\biocrnpyler-0.2-py3.7.egg\\biocrnpyler\\__init__.py:36: UserWarning: No module named 'fa2'\n C:\\Users\\mkratz\\Anaconda3\\lib\\site-packages\\biocrnpyler-0.2-py3.7.egg\\biocrnpyler\\__init__.py:37: UserWarning: plotting is disabled because you are missing some libraries\n\n\nFirst we define a dilution mixture class, a mixture based of ExpressionExtract which adds dilution and degredation interactions to RNA species, respectively. I haven't added this for proteins as it makes it easier to glean the degree of sequestration present with this mechanism. 
In practical use, you would of course allow your polymerase to be diluted.\n\n\n```python\nclass DilutionMixture(Mixture):\n def __init__(self, name=\"\", **keywords):\n \n simple_transcription = SimpleTranscription() #Transcription will not involve machinery\n simple_translation = SimpleTranslation()\n \n default_mechanisms = {\n \"transcription\": simple_transcription, #This will be overwritten by the NegativeHillPromotor\n \"translation\": simple_translation\n }\n \n #By Default Species are diluted S-->0 Unless:\n # They are of type 'dna'\n # They have the attribute 'machinery'\n dilution_mechanism = Dilution(filter_dict = {\"dna\":False,'protein':False,'complex':False}, default_on = True)\n dilution_mrn = Dilution(name = \"rna_degredation\", filter_dict = {\"rna\":True}, default_on = False)\n\n #Add this mechanism to a dictionary which is passed into the Mixture txtl.TxTlExtract\n global_mechanisms = {\"dilution\":dilution_mechanism, \"rna_degredation\":dilution_mrn}\n \n #Always call the superclass __init__ with **keywords\n Mixture.__init__(self, name=name, default_mechanisms=default_mechanisms, global_mechanisms = global_mechanisms, **keywords)\n```\n\nNow we define a DNA assembly that use our mechanism in the following steps:\n- Create a species for the relevant polymerase\n- Create multi_tx object, give a maximum occupancy and polymerase (must be species or str)\n- Associate this mechanism with a promoter\n- Place this promoter into a DNA assembly\n\nAnd voila, the cassette regulated by T7p is ready to use!\n\n\n```python\n# Define Polymerase, and max occupancy and instatiate Mechanism Object\nT7P = Species('T7p','protein')\nMX = multi_tx(pol=T7P,name='MX')\n\n# create promoter object, associated MX and params with it\npT7 = Promoter(name='pT7',mechanisms={'transcription':MX})\n\n# place promoter object into DNA assembly\nPFL = DNAassembly('PFL',dna=Species('T7',material_type='dna'),promoter=pT7,transcript='GFP')\n\n# create simple promoter and DNA assembly objects that synthesize polymerase, uncomment if you want\n# a constant source of T7p\n# pJ = Promoter('J23107',mechanisms={'transcription':OneStepGeneExpression()})\n# SC = DNAassembly('SCF',dna='T7_source',promoter=pJ,protein=T7P)\n# Test_EX = DilutionMixture(components=[PFL,SC],parameter_file = \"parameters.txt\")\n# CRN = Test_EX.compile_crn()\n\n# make extract with T7p source and GFP and compile CRN \nTest_EX = DilutionMixture(components=[PFL],parameter_file = \"params_demo.txt\")\nCRN1 = Test_EX.compile_crn()\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nts = np.arange(0,2000,1)\nx0_dict = {repr(T7P):10000,repr(PFL.dna):1}\n\n# Do not use simulate_with_bioscrape_via_sbml, not working at time of writing (5/31/2020) because of a bug related to sbml model writing \ntry:\n R1= CRN1.simulate_with_bioscrape(ts, initial_condition_dict = x0_dict, stochastic = False,)\n\n fig, ax = plt.subplots(1,2,figsize=(18,8))\n ax[0].set_title('Polymerase Levels',pad=20,fontdict={'fontsize':18})\n ax[0].plot(R1['protein_T7p'],linewidth=3)\n ax[0].set_xlabel('Time (sec)',labelpad=15,fontdict={'fontsize':14})\n ax[0].set_ylabel('Polymerase Count',labelpad=15,fontdict={'fontsize':14})\n\n ax[1].set_title('Transcript Levels',pad=20,fontdict={'fontsize':18})\n ax[1].plot(R1['rna_GFP'],linewidth=3,c='k')\n ax[1].set_xlabel('Time (sec)',labelpad=15,fontdict={'fontsize':14})\n ax[1].set_ylabel('Transcript Count',labelpad=15,fontdict={'fontsize':14})\nexcept ModuleNotFoundError:\n pass\n```\n\n## Comparison of Transcript 
SS with RPU Data\n\nHere we will do a head to head comparison with a simple transcription model built using RPU data. I consider this to effectively be the ground truth for the SS Transcript Count, although it should be noted this comparison does not necessary validate any of the pre-SS dynamics as the simple transcription model assumes immediate saturation. RPU data is from Qi et al.(2012) and RPU standard is from supplement of Nielsen et al. (2016).\n\n\n```python\n# place promoter object into DNA assembly\nPFL = DNAassembly('PFL',dna='T7',promoter='pT7',transcript='GFP')\n\n# make extract with T7p source and GFP and compile CRN \nTest_EX = DilutionMixture(components=[PFL],parameter_file = \"params_demo.txt\")\nCRN2 = Test_EX.compile_crn()\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nts = np.arange(0,2000,1)\nx0_dict = {repr(PFL.dna):1}\n\n# Do not use simulate_with_bioscrape_via_sbml, not working at time of writing (5/31/2020) because of a bug related to sbml model writing\ntry:\n R2 = CRN2.simulate_with_bioscrape(ts, initial_condition_dict = x0_dict, stochastic = False,)\n\n fig = plt.figure(figsize=(10,6))\n plt.title('Transcript Levels',pad=20,fontdict={'fontsize':18})\n plt.plot(R1['rna_GFP'],linewidth=3,c='k',label='MTX Model')\n plt.plot(R2['rna_GFP'],linewidth=3,c='b', label='RPU Model')\n plt.xlabel('Time (sec)',labelpad=15,fontdict={'fontsize':14})\n plt.ylabel('Transcript Count',labelpad=15,fontdict={'fontsize':14})\n plt.legend(fontsize=14)\n\n print('\\n \\n')\n print(f\"MX Model predicts {np.round(R1['rna_GFP'].iloc[-1])} transcripts and RPU Model predicts {np.round(R2['rna_GFP'].iloc[-1])} transcripts at SS \\n \\n\")\nexcept ModuleNotFoundError:\n pass\n```\n\nThe two models seem to agree pretty well, with some minor difference. This is pretty cool given the fact that MX model uses indirect parameters (promoter affinity, isomerization and tx rates) to predict this SS value. Of course, it's also important to keep in mind that alternative models typically do to not account for the constant sequestration of the polymerase machinery to sustain this SS transcript level. As such, they may not reflect key dynamics resulting from machinery allocation/sharing e.g. retroactivity in tx/tl.\n\n## Putative Multi-TL Model\n\nThe general model of binding, isomerization and production can be readily extended to the process of translation. However, there are several caveats for translation. \n\n- First of all, I don't have any direct data like RPU experiments to compare my model with. All I can say is that with biologically reasonable parameter sets, you get something like a few hundred proteins per mRNA (RBZ in excess) and e. coli has a protein to mRNA ratio in that range (100:1 - 1000:1) (Taniguchi et al.(2010) or bionumbers book). \n\n- Second of all, since we're no longer working with DNA complexes, the mRNA-RBZ complexes should be subject to degredation/dilution. However, it seems that at the level of the biology, initiation can have a stabilizing effect and for elongation it is uncertain (Roy et al. (2013)). In my simulations, I've effectively done as if only dilution is applied to these complexes, but there is an argument to include some form of active degredation (at a reduced rate). Depending on how this implemented, one may need to make a new degredation mechanism where the complexes effectively releases the ribosomes i.e. only mRNA degraded. 
This is also complicated by the fact that using subclasses like ComplexSpecies() results in inheritance of degredation/material properties. \n\n## Quick Example \n\n\n```python\nfrom biocrnpyler import *\nimport warnings\n\nclass multi_tl(Mechanism):\n '''\n Multi-RBZ Translation w/ Isomerization:\n Detailed translation mechanism accounting for each individual \n RBZ occupancy states of mRNA. Still needs some work, so use with caution,\n read all warnings and consult the example notebook.\n \n n ={0, max_occ}\n mRNA:RBZ_n + RBZ <--> mRNA:RBZ_n_c --> mRNA:RBZ_n+1\n mRNA:RBZ_n --> mRNA:RBZ_0 + n RBZ + n Protein\n mRNA:RBZ_n_c --> mRNA:RBZ_0_c + n RBZ + n Protein\n \n n --> number of open configuration RBZ on mRNA\n max_occ --> Physical maximum number of RBZ on mRNA (based on RBZ and mRNA dimensions)\n mRNA:RBZ_n --> mRNA with n open configuration RBZ on it\n mRNA:RBZ_n_c --> mRNA with n open configuration RBZ and 1 closed configuration RBZ on it\n \n For more details, see examples/MultiTX_Demo.ipynb\n ''' \n \n # initialize mechanism subclass\n def __init__(self, ribosome, name='multi_tl', mechanism_type='translation', **keywords):\n\n if isinstance(ribosome,str):\n self.ribosome = Species(name=ribosome, material_type='protein')\n \n elif isinstance(ribosome,Species):\n self.ribosome = ribosome\n \n else:\n raise ValueError(\"'ribosome' must be a string or Species\")\n \n warnings.warn('This mechanism still needs some extra validation, use at your own peril and read the warnings!')\n warnings.warn(\"To properly use this mechanism, set dilution for mRNA-RBZ complexes!\")\n warnings.warn(\"I've set RBZ and mRNA-RBZ complexes as protein Species to apply dilution to them, edit if you want something else!\")\n\n Mechanism.__init__(self, name=name, mechanism_type=mechanism_type, **keywords)\n \n # species update\n def update_species(self, transcript, protein, component, part_id, **keywords):\n max_occ = int(component.get_parameter(\"max_occ\", part_id = part_id, mechanism = self))\n cp_open = []\n cp_closed = []\n for n in range(1,max_occ + 1):\n name_open = self.ribosome.name + 'x' + transcript.name + '_' + str(n)\n cp_open.append(ComplexSpecies([transcript]+[self.ribosome for i in range(n)],name=name_open))\n \n if n > 1:\n name_closed = self.ribosome.name + 'x' + transcript.name + '_closed' + '_' + str(n-1)\n cp_closed.append(ComplexSpecies([transcript]+[self.ribosome for i in range(n-1)],name=name_closed))\n else:\n name_closed = self.ribosome.name + 'x' + transcript.name + '_closed' + '_' + str(0)\n cp_closed.append(ComplexSpecies([transcript]+[self.ribosome for i in range(1)],name=name_closed))\n \n\n cp_misc = [self.ribosome,transcript,protein]\n\n return cp_open + cp_closed + cp_misc\n \n def update_reactions(self, transcript, protein, component, part_id, **keywords):\n '''\n mRNA:RBZ_n + RBZ <--> mRNA:RBZ_n_c --> mRNA:RBZ_n+1\n kf1 = kbr, kr1 = kur, kf2 = k_iso_r\n mRNA:RBZ_n --> mRNA:RBZ_0 + n RBZ + n Protein\n kf = ktl_solo\n mRNA:RBZ_n_c --> mRNA:RBZ_0_c + n RBZ + n Protein\n kf = ktl_solo\n '''\n \n # parameter loading\n kbr = component.get_parameter(\"kbr\", part_id = part_id, mechanism = self)\n kur = component.get_parameter(\"kur\", part_id = part_id, mechanism = self)\n k_iso_r = component.get_parameter(\"k_iso_r\", part_id = part_id, mechanism = self)\n ktl_solo = component.get_parameter(\"ktl_solo\", part_id = part_id, mechanism = self)\n max_occ = int(component.get_parameter(\"max_occ\", part_id = part_id, mechanism = self))\n\n \n # complex species instantiation\n cp_open = 
[]\n cp_closed = []\n for n in range(1,max_occ + 1):\n name_open = self.ribosome.name + 'x' + transcript.name + '_' + str(n)\n cp_open.append(ComplexSpecies([transcript]+[self.ribosome for i in range(n)],name=name_open))\n \n if n > 1:\n name_closed = self.ribosome.name + 'x' + transcript.name + '_closed' + '_' + str(n-1)\n cp_closed.append(ComplexSpecies([transcript]+[self.ribosome for i in range(n-1)],name=name_closed))\n else:\n name_closed = self.ribosome.name + 'x' + transcript.name + '_closed' + '_' + str(0)\n cp_closed.append(ComplexSpecies([transcript]+[self.ribosome for i in range(1)],name=name_closed))\n \n # Reactions\n # ribosome + complex(n) --> complex(n_closed)\n rxn_open_pf = [Reaction(inputs=[self.ribosome, cp_open[n]], outputs=[cp_closed[n+1]], k=kbr) for n in range(0,max_occ-1)]\n rxn_open_pr = [Reaction(inputs=[cp_closed[n+1]], outputs=[self.ribosome, cp_open[n],], k=kur) for n in range(0,max_occ-1)]\n \n # isomerization\n rxn_iso = [Reaction(inputs=[cp_closed[n]], outputs=[cp_open[n]], k=k_iso_r) for n in range(0,max_occ)]\n \n # release/translation from open and closed states\n rxn_release_open = []\n rxn_release_closed = []\n for n in range(0,max_occ):\n rxn_temp1 = Reaction(inputs= [cp_open[n]], outputs=[self.ribosome for i in range(n+1)] + \n [protein for i in range(n+1)] + [transcript], k=ktl_solo)\n rxn_release_open.append(rxn_temp1)\n \n for n in range(1,max_occ):\n rxn_temp2 = Reaction(inputs= [cp_closed[n]], outputs=[self.ribosome for i in range(n)] + \n [protein for i in range(n)] + [cp_closed[0]], k=ktl_solo)\n rxn_release_closed.append(rxn_temp2)\n \n # missing reactions (0 --> 0_closed and v.v. 0_closed --> 0)\n rxn_m1 = Reaction(inputs=[transcript,self.ribosome], outputs=[cp_closed[0]], k=kbr)\n rxn_m2 = Reaction(inputs=[cp_closed[0]], outputs=[transcript,self.ribosome], k=kur)\n \n rxn_all = rxn_open_pf + rxn_open_pr + rxn_iso + rxn_release_open + rxn_release_closed + [rxn_m1, rxn_m2]\n \n return rxn_all\n```\n\n\n```python\nclass DilutionMixture(Mixture):\n def __init__(self, name=\"\", **keywords):\n \n simple_transcription = SimpleTranscription() #Transcription will not involve machinery\n simple_translation = SimpleTranslation()\n \n default_mechanisms = {\n \"transcription\": simple_transcription, #This will be overwritten by the NegativeHillPromotor\n \"translation\": simple_translation\n }\n \n #By Default Species are diluted S-->0 Unless:\n # They are of type 'dna'\n # They have the attribute 'machinery'\n dilution_mechanism = Dilution(filter_dict = {\"dna\":False,'protein':True,'complex':True}, default_on = True)\n dilution_mrn = Dilution(name = \"rna_degredation\", filter_dict = {\"rna\":True}, default_on = False)\n\n #Add this mechanism to a dictionary which is passed into the Mixture txtl.TxTlExtract\n global_mechanisms = {\"dilution\":dilution_mechanism, \"rna_degredation\":dilution_mrn}\n \n #Always call the superclass __init__ with **keywords\n Mixture.__init__(self, name=name, default_mechanisms=default_mechanisms, global_mechanisms = global_mechanisms, **keywords)\n```\n\nPFL is transcribed through simple transcription and translated through my multi_tl mechanism. 
We also have a constitutive source of our RBZ being produced to provide a constant saturating concentration of RBZ.\n\n\n```python\n# Instantiate RBZ species\nRBZ = Species('RBZ',material_type='protein')\n\n# make RBZ source\nJP = Promoter('J23101')\nRBZ_S = DNAassembly('RBZ_source',promoter=JP,transcript=RBZ)\n\n# Instantiate mechanism and mixture\nML = multi_tl(RBZ)\nPFL = DNAassembly('PFL',dna='T7',rbs='RBSG',promoter='pT7',transcript='GFP',protein='GFP')\nEM = DilutionMixture('EM',components=[PFL,RBZ_S],parameter_file = \"params_demo.txt\",mechanisms={'translation':ML})\nCRN3 = EM.compile_crn()\n```\n\n C:\\Users\\mkratz\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:36: UserWarning: This mechanism still needs some extra validation, use at your own peril and read the warnings!\n C:\\Users\\mkratz\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:37: UserWarning: To properly use this mechanism, set dilution for mRNA-RBZ complexes!\n C:\\Users\\mkratz\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:38: UserWarning: I've set RBZ and mRNA-RBZ complexes as protein Species to apply dilution to them, edit if you want something else!\n\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nts = np.arange(0,10000,1)\nx0_dict = {repr(RBZ_S.dna):100,repr(PFL.dna):1}\n\ntry:\n # Do not use simulate_with_bioscrape_via_sbml, not working at time of writing (5/31/2020) because of a bug related to sbml model writing \n R3 = CRN3.simulate_with_bioscrape(ts, initial_condition_dict = x0_dict, stochastic = False,)\n\n fig, ax = plt.subplots(1,2,figsize=(18,8))\n ax[0].set_title('GFP Protein Levels',pad=20,fontdict={'fontsize':18})\n ax[0].plot(R3['protein_GFP'],linewidth=3)\n ax[0].set_xlabel('Time (sec)',labelpad=15,fontdict={'fontsize':14})\n ax[0].set_ylabel('GFP Protein Count',labelpad=15,fontdict={'fontsize':14})\n\n ax[1].set_title('GFP Transcript Levels',pad=20,fontdict={'fontsize':18})\n ax[1].plot(R3['rna_GFP'],linewidth=3,c='k')\n ax[1].set_xlabel('Time (sec)',labelpad=15,fontdict={'fontsize':14})\n ax[1].set_ylabel('GFP Transcript Count',labelpad=15,fontdict={'fontsize':14})\n\n\n # tiny script to aggregate T7m containing species\n import pandas as pd\n j = pd.DataFrame()\n for col in R3.columns:\n if 'xGFP' in col:\n j[col] = R3[col]\n j['sum'] = j.sum(axis=1)\n\n # tiny script to calculate total ribosomes bound to mRNA\n rbz_sum = 0\n for col in j.columns[0:-1]:\n if 'closed' in col:\n c = int(col.split('_')[-1])\n c+=1\n else:\n c = int(col.split('_')[-1])\n rbz_sum += c * j[col].iloc[-1]\n\n print('\\n')\n print(f\"The ratio of protein to mRNA is {np.round(R3['protein_GFP'].iloc[-1]/j['sum'].iloc[-1])} protein per mRNA\")\n print('\\n')\n print(f\"The average mRNA occupancy is {np.round(rbz_sum/j['sum'].iloc[-1])} ribosomes per mRNA\")\n print('\\n \\n')\n\nexcept ModuleNotFoundError:\n pass\n```\n\nGFP transcript count is very low as all of it is currently occupied in various RBZ-transcript complexes\n\n# Future Work\n\n- Want to do more rigorous validation of multi-tx mechanism. As part of that, I first plan on comparing RPU data of existing consitutive promoters and non-native polymerase expression systems e.g. T5 with what the multi-tx model would predict. 
\n- Is there data out there that would help validate the predicted RNAp occupancy of genes from the multi-tx model (same story for multi-tl model)?\n- Eventually want to develop a multi-tx mechanism that can be used for TF-mediated transcription.\n- Start looking towards validating multi-tl model, currently plan on seeing what data and parameters I could scrape from existing resource e.g. BCD RBS binding rates from biocrnpyler. \n- Do some model comparison using bioscrape inference, where I generate parameters with RPU data model and fit parameters (vary known and unknown parameters) from the MTX model. Definitely very interesting for deriving isomerization rates as that seems hard to come by in literature and seems to be a key part of the model in determining system behaviour (Pre-SS and SS dynamics).\n\n# Mentioned Papers and Parameter Resources\n\nT7 parameters:\n- Promoter Binding and Unbinding: Jia, Y., Kumar, A., & Patel, S. S. (1996). Equilibrium and Stopped-flow Kinetic Studies of Interaction between T7 RNA Polymerase and Its Promoters Measured by Protein and 2-Aminopurine Fluorescence Changes. Journal of Biological Chemistry , 271(48), 30451\u201330458. https://doi.org/10.1074/jbc.271.48.30451 \n- Isomerization Rate: Skinner, G. M., Baumann, C. G., Quinn, D. M., Molloy, J. E., & Hoggett, J. G. (2004). Promoter Binding, Initiation, and Elongation By Bacteriophage T7 RNA Polymerase: A SINGLE-MOLECULE VIEW OF THE TRANSCRIPTION CYCLE . Journal of Biological Chemistry , 279(5), 3239\u20133244. https://doi.org/10.1074/jbc.M310471200 \n- Translation Rate (T7p is VERY fast): Kochetkov, S. N., Rusakova, E. E., & Tunitskaya, V. L. (1998). Recent studies of T7 RNA polymerase mechanism. FEBS Letters, 440(3), 264\u2013267. https://doi.org/https://doi.org/10.1016/S0014-5793(98)01484-7\n \nTranslation Parameters:\n- RBS Binding and Unbinding: Chandra, F., & Del Vecchio, D. (2016). The Effects of Ribosome Autocatalysis and Negative Feedback in Resource Competition. bioRxiv. https://doi.org/10.1101/042127\n- Isomerization Rate: Draper, D. E. (1993). Mechanisms of Translational Initiation and Repression in Prokaryotes BT - The Translational Apparatus: Structure, Function, Regulation, Evolution. In K. H. Nierhaus, F. Franceschi, A. R. Subramanian, V. A. Erdmann, & B. Wittmann-Liebold (Eds.) (pp. 197\u2013207). Boston, MA: Springer US. https://doi.org/10.1007/978-1-4615-2407-6_19\n\nMisc:\n- mRNA Degredation: Roy, B., & Jacobson, A. (2013). The intimate relationships of mRNA decay and translation. Trends in Genetics : TIG, 29(12), 691\u2013699. https://doi.org/10.1016/j.tig.2013.09.002\n- RPU Data: Nielsen, A. A. K., Der, B. S., Shin, J., Vaidyanathan, P., Paralanov, V., Strychalski, E. A., \u2026 Voigt, C. A. (2016). Genetic circuit design automation. Science, 352(6281), aac7341. https://doi.org/10.1126/science.aac7341 and Qi, L., Haurwitz, R. E., Shao, W., Doudna, J. A., & Arkin, A. P. (2012). RNA processing enables predictable programming of gene expression. Nature Biotechnology, 30(10), 1002\u20131006. https://doi.org/10.1038/nbt.2355\n- Protein:mRNA ratios: Taniguchi, Y., Choi, P. J., Li, G.-W., Chen, H., Babu, M., Hearn, J., \u2026 Xie, X. S. (2010). Quantifying E. coli Proteome and Transcriptome with Single-Molecule Sensitivity in Single Cells. Science, 329(5991), 533 LP \u2013 538. 
https://doi.org/10.1126/science.1188308\n \n", "meta": {"hexsha": "c390428b2019854fc79a34753c60198426fefb78", "size": 148057, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/MultiTX_Demo.ipynb", "max_stars_repo_name": "eldad-a/BioCRNPyler", "max_stars_repo_head_hexsha": "c165e885b6f5efe59d03e09015f297ad82e2c5ab", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/MultiTX_Demo.ipynb", "max_issues_repo_name": "eldad-a/BioCRNPyler", "max_issues_repo_head_hexsha": "c165e885b6f5efe59d03e09015f297ad82e2c5ab", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/MultiTX_Demo.ipynb", "max_forks_repo_name": "eldad-a/BioCRNPyler", "max_forks_repo_head_hexsha": "c165e885b6f5efe59d03e09015f297ad82e2c5ab", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 174.8016528926, "max_line_length": 38740, "alphanum_fraction": 0.8626407397, "converted": true, "num_tokens": 8397, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5506073655352404, "lm_q2_score": 0.4960938294709195, "lm_q1q2_score": 0.2731529165032718}} {"text": "# Attributes lookup and sympy-based equations\n\n## ANNarchy side\n\n### Core\n\n\n```python\nimport numpy as np\n\nclass Value(object):\n \"Placeholder for single parameters\"\n def __init__(self, value, dtype):\n self._init_value = value\n self._instantiated = False\n self._dtype = dtype\n\n def _copy(self):\n return Value(self._init_value, self._dtype)\n\n def _instantiate(self, shape):\n self._instantiated = True\n self.shape = 1\n self._value = self._init_value\n \n def get_value(self):\n if not self._instantiated:\n return self._init_value\n return self._value\n \n def set_value(self, val):\n if not self._instantiated:\n self._init_value = val\n else:\n self._value = val\n\nclass Array(object):\n \"Placeholder for arrays\"\n def __init__(self, init, dtype):\n\n self._init_value = init\n self._dtype = dtype\n\n self._instantiated = False\n\n def _copy(self):\n return Array(\n self._init_value, \n self._dtype\n )\n\n def _instantiate(self, shape):\n self._instantiated = True\n self._value = np.full(shape, self._init_value, dtype=self._dtype)\n self.shape = self._value.shape\n \n def get_value(self):\n if not self._instantiated:\n return self._init_value\n return self._value\n \n def set_value(self, val):\n if not self._instantiated:\n self._init_value = val\n else:\n if isinstance(val, np.ndarray):\n if val.shape == self.shape:\n self._value = val\n elif val.shape == (1,):\n self._value = np.full(self.shape, val[0], dtype=self._dtype)\n else:\n print(\"Array assignment error\", val, val.shape)\n elif isinstance(val, (float, int, bool)):\n self._value = np.full(self.shape, val)\n else:\n print(\"Array assignment error\", val)\n\nclass Neuron(object):\n\n def Value(self, value, dtype=np.float32):\n \"Creates and returns a single value.\"\n if not hasattr(self, \"_data\"):\n self._data = []\n val = Value(value, dtype)\n self._data.append(val)\n return val\n\n def Array(self, init=0.0, dtype=np.float32):\n \"Creates and returns an array.\"\n if not hasattr(self, \"_data\"):\n self._data = []\n val = 
Array(init, dtype)\n self._data.append(val)\n return val\n\nclass Synapse(object):\n\n def Value(self, value, dtype=np.float32):\n \"Creates and returns a single value.\"\n if not hasattr(self, \"_data\"):\n self._data = []\n val = Value(value, dtype)\n self._data.append(val)\n return val\n\n def Array(self, init=0.0, dtype=np.float32):\n \"Creates and returns an array.\"\n if not hasattr(self, \"_data\"):\n self._data = []\n val = Array(init, dtype)\n self._data.append(val)\n return val\n\nclass Population(object):\n\n def __init__(self, shape, neuron):\n\n # Obligatory arguments\n self.shape = shape\n if isinstance(shape, tuple):\n size = 1\n for n in shape:\n size *= n\n self.size = size\n else: # single value\n self.size = int(shape)\n\n # Neuron type\n self._neuron_type = neuron\n self._spiking = False\n\n # Internal stuff\n self._attributes = {}\n self._values_list = []\n self._arrays_list = []\n\n def _analyse(self):\n\n # List attributes\n current_attributes = list(self._neuron_type.__dict__.keys())\n\n for attr in current_attributes:\n if isinstance(getattr(self._neuron_type, attr), (Value, )):\n self._values_list.append(attr)\n self._attributes[attr] = getattr(self._neuron_type, attr)._copy()\n self._attributes[attr]._instantiate(self.shape)\n if isinstance(getattr(self._neuron_type, attr), (Array, )):\n self._arrays_list.append(attr)\n self._attributes[attr] = getattr(self._neuron_type, attr)._copy()\n self._attributes[attr]._instantiate(self.shape)\n\n # Get lists of values and arrays\n self.attributes = list(self._attributes.keys())\n\n # Set the attributes to the neuron\n self._neuron_type.attributes = self.attributes\n self._neuron_type._values_list = self._values_list\n self._neuron_type._arrays_list = self._arrays_list\n\n # Analyse update()\n self._update_equations = self._neuron_type.update().equations\n\n # Analyse spike()\n if 'spike' in [f for f in dir(self._neuron_type) if callable(getattr(self._neuron_type, f))]:\n self._spiking = True\n self._spiking_equation = self._neuron_type.spike()\n self._reset_equations = self._neuron_type.reset().equations\n\n\n def __getattribute__(self, name):\n if name in ['attributes']:\n return object.__getattribute__(self, name)\n else:\n if hasattr(self, 'attributes') and name in self.attributes:\n return self._attributes[name].get_value()\n return object.__getattribute__(self, name)\n\n def __setattr__(self, name, value):\n\n if hasattr(self, 'attributes') and name in self.attributes:\n self._attributes[name].set_value(value)\n else:\n object.__setattr__(self, name, value)\n\n\n\n```\n\n\n```python\nclass Network(object):\n\n def __init__(self):\n\n self.populations = []\n\n def add(self, shape, neuron):\n \n # Create the population\n pop = Population(shape, neuron)\n\n # Have the population analyse its attributes\n pop._analyse()\n\n # Store the population\n self.populations.append(pop)\n\n return pop\n\n\n```\n\n### Parser\n\n\n```python\nimport sympy as sp\n\nclass Equations(object):\n\n def __init__(self, neuron=None, symbols=None):\n\n self.neuron = neuron\n self.symbols = {'t': sp.Symbol(\"t\")}\n if self.neuron is None:\n self._custom_symbols = symbols\n self._equations_list = []\n self._started = False\n \n def __enter__(self):\n\n if self.neuron is not None:\n\n for attr in self.neuron.attributes:\n if attr in self.neuron._values_list:\n # Symbol\n symbol = sp.Symbol(\"%(pop_prefix)s\"+attr+\"%(pop_suffix_value)s\")\n self.symbols[attr] = symbol\n setattr(self, attr, symbol)\n\n elif attr in self.neuron._arrays_list:\n # 
Symbol\n symbol = sp.Symbol(\"%(pop_prefix)s\"+attr+\"%(pop_suffix_array)s\")\n self.symbols[attr] = symbol\n setattr(self, attr, symbol)\n\n # Derivative if needed\n symbol = sp.Symbol(\"__grad__\" + attr)\n self.symbols[\"d\"+attr+\"_dt\"] = symbol\n setattr(self, \"d\"+attr+\"_dt\", symbol)\n \n else:\n print(\"Error\")\n\n else: # Custom set of variables\n for attr in self._custom_symbols:\n # Symbol\n symbol = sp.Symbol(attr)\n self.symbols[attr] = symbol\n setattr(self, attr, symbol)\n\n # Derivative if needed\n symbol = sp.Symbol(\"d\" + attr + \"/dt\")\n self.symbols[\"d\"+attr+\"_dt\"] = symbol\n setattr(self, \"d\"+attr+\"_dt\", symbol)\n\n\n\n self._started = True\n\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n \"Would be done later by the generator\"\n\n code = \"\"\n for var, eq in self._equations_list:\n code += sp.ccode(self.symbols[var]) + \" = \" + sp.ccode(eq) + \";\\n\"\n if var.endswith(\"_dt\"):\n orig_var = var[1:-3]\n code += \"%(pop_prefix)s\"+orig_var+\"%(pop_suffix_array)s += dt*(\" + sp.ccode(self.symbols[var]) + \");\\n\"\n\n self.equations = code\n\n def __str__(self):\n string = \"\"\n for var, eq in self._equations_list:\n string += sp.ccode(self.symbols[var]) + \" = \" + sp.ccode(eq) + \"\\n\"\n return string\n\n def __getattribute__(self, name):\n\n return object.__getattribute__(self, name)\n\n def __setattr__(self, name, value):\n\n # After __enter__(), track modifications to the variables\n if hasattr(self, '_started') and self._started:\n # Do not assign equations to symbols, just store them\n if name in self.symbols.keys():\n self._equations_list.append((name, value))\n else:\n object.__setattr__(self, name, value)\n else:\n object.__setattr__(self, name, value)\n\n def ite(self, cond, then, els):\n\n return sp.Piecewise((then, cond), (els, True))\n\n```\n\n## User side\n\n\n```python\nclass RateCoded(Neuron):\n \"\"\"\n Simple rate-coded neuron.\n\n $$\\tau \\, \\dfrac{d v(t)}{dt} = ge(t) + ite(ge(t) > 1, ge(t), 0) + v^2(t) - v(t))/\\tau$$\n\n $$r(t) = tanh(v(t))$$\n \"\"\"\n\n def __init__(self, tau):\n\n self.tau = self.Value(tau)\n\n self.ge = self.Array(init=0.0)\n self.v = self.Array(init=0.0)\n self.r = self.Array(init=0.0)\n\n def update(self):\n\n # eq will contain all variables of the model as sympy symbols, plus some operations (ite = if/then/else)\n with Equations(self) as n:\n\n # One can declare intermediary variables that won't be allocated in memory!\n shunting = n.ite(n.ge > 1, n.ge, 0)\n \n # ODEs use the dX_dt trick\n n.dv_dt = (n.ge + shunting + sp.exp(n.v**2) - n.v) / n.tau\n \n # Sympy function can be used directly\n n.r = sp.tanh(n.v)\n\n return n\n\n\n```\n\n\n```python\nnet = Network()\npop = net.add(10, RateCoded(tau=20.))\n\nprint(\"Attributes:\", pop.attributes)\n\nprint(\"Tau:\", pop.tau)\nprint(\"v = 0:\", pop.v)\n\npop.v = 1.\n\nprint(\"v = 1:\", pop.v)\n\npop.v *= 5.\n\nprint(\"v = 5:\", pop.v)\n\npop.v[:3] = 1.\n\nprint(\"v[:3] = 1:\", pop.v)\n\n\nprint()\nprint(\"Generated C code:\")\nprint(pop._update_equations %{'pop_prefix': \"\", \"pop_suffix_value\": \"\", \"pop_suffix_array\": \"[i]\"})\n```\n\n Attributes: ['tau', 'ge', 'v', 'r']\n Tau: 20.0\n v = 0: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n v = 1: [1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n v = 5: [5. 5. 5. 5. 5. 5. 5. 5. 5. 5.]\n v[:3] = 1: [1. 1. 1. 5. 5. 5. 5. 5. 5. 5.]\n \n Generated C code:\n __grad__v = (ge[i] - v[i] + ((ge[i] > 1) ? 
(\n ge[i]\n )\n : (\n 0\n )) + exp(pow(v[i], 2)))/tau;\n v[i] += dt*(__grad__v);\n r[i] = tanh(v[i]);\n \n\n\n\n```python\nclass LIF(Neuron):\n\n def __init__(self, params):\n\n self.tau = self.Value(params['tau'])\n self.V_th = self.Value(params['V_th'])\n\n self.ge = self.Array(init=0.0)\n self.v = self.Array(init=0.0)\n\n def update(self):\n\n with Equations(self) as n:\n\n n.dv_dt = (n.ge - n.v) / n.tau\n\n return n\n\n def spike(self):\n\n with Equations(self) as n:\n\n return n.v >= n.V_th\n\n def reset(self):\n\n with Equations(self) as n:\n\n n.v = 0\n\n return n\n\n```\n\n\n```python\nnet = Network()\npop = net.add(100, LIF({'tau': 20., 'V_th': 1.0}))\n\nprint(\"Neural update:\")\nprint(pop._update_equations %{'pop_prefix': \"\", \"pop_suffix_value\": \"\", \"pop_suffix_array\": \"[i]\"})\n\nprint()\nprint(\"Spiking condition:\")\nprint(sp.ccode(pop._spiking_equation) %{'pop_prefix': \"\", \"pop_suffix_value\": \"\", \"pop_suffix_array\": \"[i]\"})\n\nprint()\nprint(\"Reset:\")\nprint(pop._reset_equations %{'pop_prefix': \"\", \"pop_suffix_value\": \"\", \"pop_suffix_array\": \"[i]\"})\n```\n\n Neural update:\n __grad__v = (ge[i] - v[i])/tau;\n v[i] += dt*(__grad__v);\n \n \n Spiking condition:\n v[i] >= V_th\n \n Reset:\n v[i] = 0;\n \n\n\nOne can use the Equations() in a standalone mode to check correctness:\n\n\n```python\nwith Equations(symbols=['v', 'r', 'tau']) as n:\n n.dv_dt = (1 - n.v)/n.tau\n n.r = n.ite(n.v > 0, n.v, 0)\n\nprint(n)\n```\n\n dv/dt = (1 - v)/tau\n r = ((v > 0) ? (\n v\n )\n : (\n 0\n ))\n \n\n\nSanity check whether it works in multiprocessing:\n\n\n```python\nimport multiprocessing\n\ndef worker(val):\n\n with Equations(symbols=['v', 'r', 'tau']) as n:\n n.dv_dt = (val - n.v)/n.tau\n\n return n\n\njobs = []\nvals = np.linspace(0.0, 1.0, 5)\n\nwith multiprocessing.Pool() as pool:\n res = pool.map(worker, vals)\n \n for w in res:\n print(w)\n print(\"----\")\n```\n\n dv/dt = -v/tau\n \n ----\n dv/dt = (0.25 - v)/tau\n \n ----\n dv/dt = (0.5 - v)/tau\n \n ----\n dv/dt = (0.75 - v)/tau\n \n ----\n dv/dt = (1.0 - v)/tau\n \n ----\n\n\n\n```python\nclass RateCodedSynapse(Synapse):\n\n def __init__(self):\n\n self.w = self.Weight()\n\n def transmit(self):\n\n with Equations(self) as s:\n\n s.post.target += s.w * s.pre.r\n\n return s\n\nclass SpikingSynapse(Synapse):\n\n def __init__(self):\n\n self.w = self.Weight()\n\n\n def transmit(self):\n\n self.transmit_on_spike = True # default when pre is a spiking population\n # self.transmit_on_spike = False # continuous integration for NMDA synapse\n\n\n with Equations(self) as s:\n\n s.post.target += s.w\n\n return s\n```\n\n\n```python\nclass TraceSynapse(RateCodedSynapse):\n\n def __init__(self, tau, eta):\n\n self.tau = self.Value(tau)\n\n self.eta = self.Value(eta)\n\n self.w = self.Weight()\n\n self.trace = self.Array(init=0.0)\n\n def update(self):\n\n with Equations(self, method=\"euler\") as s:\n\n s.dtrace_dt = (s.pre.r - s.trace) / s.tau\n\n return s\n\n def apply(self):\n\n with Equations(self) as s:\n\n s.w += s.eta * s.trace * s.post.r * s.post.R\n\n return s\n\n\n\nproj = net.connect(pop, pop, 'ge', TraceSynapse)\n#...\nnet.simulate(100.)\nif success:\n pop.R = 1.0\nelse:\n pop.R = 0.0\nproj.apply()\n\n\n\n```\n\n\n```python\nclass STDP(SpikingSynapse):\n\n def __init__(self, tau):\n\n self.tau = self.Value(tau)\n\n self.w = self.Weight()\n\n self.x = self.Array(init=0.0)\n self.y = self.Array(init=0.0)\n\n def update(self):\n\n with Equations(self, method=Euler) as s:\n\n s.dx_dt = (- s.x) / s.tau\n 
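            # x and y are the pre- and post-synaptic spike traces: both decay exponentially
            # with time constant tau and are incremented in on_pre()/on_post() below,
            # yielding the usual pair-based STDP weight update.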
s.dy_dt = (- s.y) / s.tau\n\n return s\n\n def transmit(self):\n\n with Equations(self) as s:\n\n s.post.target += s.w\n\n return s\n \n def on_pre(self):\n\n with Equations(self) as s:\n\n s.x += 1.0\n s.w -= s.y\n\n return s\n\n def on_post(self):\n\n with Equations(self) as s:\n\n s.y += 1.0\n s.w += s.x\n\n return s\n\n```\n", "meta": {"hexsha": "638e76a39f151ae0cfbbd83e1eb75a33fac1b87b", "size": 23693, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/mockup/Equations.ipynb", "max_stars_repo_name": "vitay/ANNarchy_future", "max_stars_repo_head_hexsha": "2c2a43c67f4201cf72175793aaa51189d208436b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-11T18:11:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-12T09:15:17.000Z", "max_issues_repo_path": "examples/mockup/Equations.ipynb", "max_issues_repo_name": "vitay/ANNarchy_future", "max_issues_repo_head_hexsha": "2c2a43c67f4201cf72175793aaa51189d208436b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/mockup/Equations.ipynb", "max_forks_repo_name": "vitay/ANNarchy_future", "max_forks_repo_head_hexsha": "2c2a43c67f4201cf72175793aaa51189d208436b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.191576087, "max_line_length": 1120, "alphanum_fraction": 0.45038619, "converted": true, "num_tokens": 3909, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.6076631698328916, "lm_q2_score": 0.44939263446475963, "lm_q1q2_score": 0.2730793527584098}} {"text": "\n\n\n```\n!nvidia-smi\n```\n\n Sat Mar 13 00:02:05 2021 \n +-----------------------------------------------------------------------------+\n | NVIDIA-SMI 460.56 Driver Version: 460.32.03 CUDA Version: 11.2 |\n |-------------------------------+----------------------+----------------------+\n | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n | | | MIG M. 
|\n |===============================+======================+======================|\n | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\n | N/A 34C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |\n | | | N/A |\n +-------------------------------+----------------------+----------------------+\n \n +-----------------------------------------------------------------------------+\n | Processes: |\n | GPU GI CI PID Type Process name GPU Memory |\n | ID ID Usage |\n |=============================================================================|\n | No running processes found |\n +-----------------------------------------------------------------------------+\n\n\n\n```\n!pip install pymatgen==2020.12.31\n!pip install --pre graphdot\n!pip install gdown\n\n```\n\n Collecting pymatgen==2020.12.31\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/e1/18/274b40cff34257a728071199d21105ced3116b42dd60793113eee7b1b5ca/pymatgen-2020.12.31.tar.gz (2.8MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.8MB 14.3MB/s \n \u001b[?25hRequirement already satisfied: numpy>=1.14.3 in /usr/local/lib/python3.7/dist-packages (from pymatgen==2020.12.31) (1.19.5)\n Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from pymatgen==2020.12.31) (2.23.0)\n Collecting ruamel.yaml>=0.15.6\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/ed/c3/4c823dac2949a6baf36a4987d04c50d30184147393ba6f4bfb4c67d15a13/ruamel.yaml-0.16.13-py2.py3-none-any.whl (111kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 112kB 51.9MB/s \n \u001b[?25hCollecting monty>=3.0.2\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/41/47/e6f045cd69f24df0b4ddd55fab329c079b7c9ce978a32431dce904f6a1d6/monty-2021.3.3-py3-none-any.whl (63kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 71kB 10.3MB/s \n \u001b[?25hCollecting scipy>=1.5.0\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/b6/3a/9e0649ab2d5ade703baa70ef980aa08739226e5d6a642f084bb201a92fc2/scipy-1.6.1-cp37-cp37m-manylinux1_x86_64.whl (27.4MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 27.4MB 111kB/s \n \u001b[?25hRequirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from pymatgen==2020.12.31) (0.8.9)\n Collecting spglib>=1.9.9.44\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/4b/98/fa4760b9c71e2eace5a60b85dbf36dece49c379f8291cf1203056f287766/spglib-1.16.1-cp37-cp37m-manylinux2010_x86_64.whl (296kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 296kB 49.5MB/s \n \u001b[?25hRequirement already satisfied: networkx>=2.2 in /usr/local/lib/python3.7/dist-packages (from pymatgen==2020.12.31) (2.5)\n Requirement already satisfied: matplotlib>=1.5 in /usr/local/lib/python3.7/dist-packages (from 
pymatgen==2020.12.31) (3.2.2)\n Requirement already satisfied: palettable>=3.1.1 in /usr/local/lib/python3.7/dist-packages (from pymatgen==2020.12.31) (3.3.0)\n Requirement already satisfied: sympy in /usr/local/lib/python3.7/dist-packages (from pymatgen==2020.12.31) (1.7.1)\n Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from pymatgen==2020.12.31) (1.1.5)\n Collecting plotly>=4.5.0\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/1f/f6/bd3c17c8003b6641df1228e80e1acac97ed8402635e46c2571f8e1ef63af/plotly-4.14.3-py2.py3-none-any.whl (13.2MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13.2MB 252kB/s \n \u001b[?25hCollecting uncertainties>=3.1.4\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/45/41/fc7e7b73b603e7c2c9e040b7aa8caf4a88d74b6faa567601ed82b6f0d8e1/uncertainties-3.1.5-py2.py3-none-any.whl (246kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 256kB 53.5MB/s \n \u001b[?25hRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen==2020.12.31) (3.0.4)\n Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen==2020.12.31) (1.24.3)\n Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen==2020.12.31) (2.10)\n Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen==2020.12.31) (2020.12.5)\n Collecting ruamel.yaml.clib>=0.1.2; platform_python_implementation == \"CPython\" and python_version < \"3.10\"\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/5e/6e/f652c56bbb2c3d3fca252ffc7c0358597f57a1bbdf484dac683054950c63/ruamel.yaml.clib-0.2.2-cp37-cp37m-manylinux1_x86_64.whl (547kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 552kB 50.4MB/s \n \u001b[?25hRequirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.2->pymatgen==2020.12.31) (4.4.2)\n Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen==2020.12.31) (0.10.0)\n Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen==2020.12.31) (2.8.1)\n Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen==2020.12.31) (1.3.1)\n Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5->pymatgen==2020.12.31) (2.4.7)\n Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.7/dist-packages (from sympy->pymatgen==2020.12.31) (1.2.1)\n Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->pymatgen==2020.12.31) (2018.9)\n Requirement already satisfied: retrying>=1.3.3 in /usr/local/lib/python3.7/dist-packages (from plotly>=4.5.0->pymatgen==2020.12.31) 
(1.3.3)\n Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from plotly>=4.5.0->pymatgen==2020.12.31) (1.15.0)\n Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from uncertainties>=3.1.4->pymatgen==2020.12.31) (0.16.0)\n Building wheels for collected packages: pymatgen\n Building wheel for pymatgen (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pymatgen: filename=pymatgen-2020.12.31-cp37-cp37m-linux_x86_64.whl size=3590921 sha256=fa2718dae94dee414021e253578fc4e052b71fa7fe2b7288e0c7adce3c6bbbac\n Stored in directory: /root/.cache/pip/wheels/bd/fd/4c/bbea735ca0989c51e67a45d1384b1ce3481bc2aa1337b4a6e9\n Successfully built pymatgen\n \u001b[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.\u001b[0m\n Installing collected packages: ruamel.yaml.clib, ruamel.yaml, monty, scipy, spglib, plotly, uncertainties, pymatgen\n Found existing installation: scipy 1.4.1\n Uninstalling scipy-1.4.1:\n Successfully uninstalled scipy-1.4.1\n Found existing installation: plotly 4.4.1\n Uninstalling plotly-4.4.1:\n Successfully uninstalled plotly-4.4.1\n Successfully installed monty-2021.3.3 plotly-4.14.3 pymatgen-2020.12.31 ruamel.yaml-0.16.13 ruamel.yaml.clib-0.2.2 scipy-1.6.1 spglib-1.16.1 uncertainties-3.1.5\n Collecting graphdot\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/c5/3f/2b8521853057f5b8ffd7e8a56b9f6e4a7805f5fac4491561e26475f369b9/graphdot-0.8a16.tar.gz (103kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 112kB 19.4MB/s \n \u001b[?25hRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from graphdot) (1.19.5)\n Requirement already satisfied: scipy>=1.3.0 in /usr/local/lib/python3.7/dist-packages (from graphdot) (1.6.1)\n Requirement already satisfied: sympy>=1.3 in /usr/local/lib/python3.7/dist-packages (from graphdot) (1.7.1)\n Requirement already satisfied: pandas>=0.24 in /usr/local/lib/python3.7/dist-packages (from graphdot) (1.1.5)\n Requirement already satisfied: networkx>=2.4 in /usr/local/lib/python3.7/dist-packages (from graphdot) (2.5)\n Collecting pycuda>=2019\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/46/61/47d3235a4c13eec5a5f03594ddb268f4858734e02980afbcd806e6242fa5/pycuda-2020.1.tar.gz (1.6MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.6MB 37.0MB/s \n \u001b[?25hCollecting treelib>=1.6.1\n Downloading https://files.pythonhosted.org/packages/04/b0/2269c328abffbb63979f7143351a24a066776b87526d79956aea5018b80a/treelib-1.6.1.tar.gz\n Collecting kahypar>=1.1.4\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/d2/c6/4b6ff807f2136079e9bc7289a793544541de4a04b402c678524647d6b0cb/kahypar-1.1.6-cp37-cp37m-manylinux2014_x86_64.whl (1.1MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.1MB 39.9MB/s \n \u001b[?25hRequirement already satisfied: numba>=0.51.0 in /usr/local/lib/python3.7/dist-packages (from graphdot) (0.51.2)\n Collecting ase>=3.17\n \u001b[?25l Downloading 
https://files.pythonhosted.org/packages/a5/36/de17e79f29e06d9a92746d0dd9ec4636487ab03f6af10e78586aae533f7a/ase-3.21.1-py3-none-any.whl (2.2MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.2MB 47.7MB/s \n \u001b[?25hCollecting pymatgen==2019.11.11\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/57/f8/1d5f9bacde237107f917a5ca31bdf73e55a774ffb8e7b9e5f9cd7d7cf114/pymatgen-2019.11.11.tar.gz (2.5MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.5MB 43.4MB/s \n \u001b[?25hCollecting mendeleev\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/f0/75/5863bb298aa1390cb9ecb0548a62b8213ef085273f9d3c73e513b9c36214/mendeleev-0.6.1.tar.gz (193kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 194kB 54.6MB/s \n \u001b[?25hRequirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.7/dist-packages (from sympy>=1.3->graphdot) (1.2.1)\n Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.24->graphdot) (2018.9)\n Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.24->graphdot) (2.8.1)\n Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.4->graphdot) (4.4.2)\n Collecting pytools>=2011.2\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/fb/fc/9628f0d2ec698360f4475bea0a88cb767b935f5347e6687bae0ffa342aab/pytools-2021.2.tar.gz (65kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 71kB 9.7MB/s \n \u001b[?25hRequirement already satisfied: appdirs>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from pycuda>=2019->graphdot) (1.4.4)\n Collecting mako\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/f3/54/dbc07fbb20865d3b78fdb7cf7fa713e2cba4f87f71100074ef2dc9f9d1f7/Mako-1.1.4-py2.py3-none-any.whl (75kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 81kB 11.0MB/s \n \u001b[?25hRequirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from treelib>=1.6.1->graphdot) (0.16.0)\n Requirement already satisfied: llvmlite<0.35,>=0.34.0.dev0 in /usr/local/lib/python3.7/dist-packages (from numba>=0.51.0->graphdot) (0.34.0)\n Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from numba>=0.51.0->graphdot) (54.0.0)\n Requirement already satisfied: matplotlib>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from ase>=3.17->graphdot) (3.2.2)\n Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from pymatgen==2019.11.11->graphdot) (2.23.0)\n Requirement already satisfied: ruamel.yaml>=0.15.6 in /usr/local/lib/python3.7/dist-packages (from pymatgen==2019.11.11->graphdot) (0.16.13)\n Requirement already satisfied: 
monty>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from pymatgen==2019.11.11->graphdot) (2021.3.3)\n Collecting pydispatcher>=2.0.5\n Downloading https://files.pythonhosted.org/packages/cd/37/39aca520918ce1935bea9c356bcbb7ed7e52ad4e31bff9b943dfc8e7115b/PyDispatcher-2.0.5.tar.gz\n Requirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from pymatgen==2019.11.11->graphdot) (0.8.9)\n Requirement already satisfied: spglib>=1.9.9.44 in /usr/local/lib/python3.7/dist-packages (from pymatgen==2019.11.11->graphdot) (1.16.1)\n Requirement already satisfied: palettable>=3.1.1 in /usr/local/lib/python3.7/dist-packages (from pymatgen==2019.11.11->graphdot) (3.3.0)\n Requirement already satisfied: sqlalchemy>=1.3.0 in /usr/local/lib/python3.7/dist-packages (from mendeleev->graphdot) (1.3.23)\n Collecting colorama\n Downloading https://files.pythonhosted.org/packages/44/98/5b86278fbbf250d239ae0ecb724f8572af1c91f4a11edf4d36a206189440/colorama-0.4.4-py2.py3-none-any.whl\n Collecting pyfiglet\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/33/07/fcfdd7a2872f5b348953de35acce1544dab0c1e8368dca54279b1cde5c15/pyfiglet-0.8.post1-py2.py3-none-any.whl (865kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 870kB 45.5MB/s \n \u001b[?25hRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas>=0.24->graphdot) (1.15.0)\n Requirement already satisfied: MarkupSafe>=0.9.2 in /usr/local/lib/python3.7/dist-packages (from mako->pycuda>=2019->graphdot) (1.1.1)\n Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->ase>=3.17->graphdot) (1.3.1)\n Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->ase>=3.17->graphdot) (0.10.0)\n Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->ase>=3.17->graphdot) (2.4.7)\n Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen==2019.11.11->graphdot) (3.0.4)\n Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen==2019.11.11->graphdot) (2020.12.5)\n Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen==2019.11.11->graphdot) (1.24.3)\n Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->pymatgen==2019.11.11->graphdot) (2.10)\n Requirement already satisfied: ruamel.yaml.clib>=0.1.2; platform_python_implementation == \"CPython\" and python_version < \"3.10\" in /usr/local/lib/python3.7/dist-packages (from ruamel.yaml>=0.15.6->pymatgen==2019.11.11->graphdot) (0.2.2)\n Building wheels for collected packages: graphdot, pycuda, treelib, pymatgen, mendeleev, pytools, pydispatcher\n Building wheel for graphdot (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for graphdot: filename=graphdot-0.8a16-cp37-none-any.whl size=148200 sha256=c779dc3799fdf9a5a09f4c20619ef195d03192873e67c84c0f47fa6a08ff7277\n Stored in directory: /root/.cache/pip/wheels/e6/0d/f0/8edb618420134169eceece7cb4c1f071f9ea6e84385383310b\n Building wheel for pycuda (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for pycuda: filename=pycuda-2020.1-cp37-cp37m-linux_x86_64.whl size=621092 sha256=7df4961826788892cfcc89433882633b1b6c3f76e3fbcb3541c2d3303d209059\n Stored in directory: /root/.cache/pip/wheels/8f/78/d1/5bb826f81d9d490297a348d818ff3ee6dd6f2075b06dde6ea0\n Building wheel for treelib (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for treelib: filename=treelib-1.6.1-cp37-none-any.whl size=18370 sha256=6d8beb53754f9d3c4e88c1316d9d35c15dbce1d6996398127f10fa867db39c90\n Stored in directory: /root/.cache/pip/wheels/68/1d/92/c50ec52951ccebafb40f3b8f0beb28fbaf745431c14a17c497\n Building wheel for pymatgen (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pymatgen: filename=pymatgen-2019.11.11-cp37-cp37m-linux_x86_64.whl size=3324718 sha256=387ba9130806a420a65a0eeab5219c0dd55fa8406fa8ca85eea3c2abc95a0e3d\n Stored in directory: /root/.cache/pip/wheels/98/23/3c/3b929997e5f8976361933411e8a3b9bdf1b2702e7830e7a7ad\n Building wheel for mendeleev (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for mendeleev: filename=mendeleev-0.6.1-py2.py3-none-any.whl size=174964 sha256=bd558acea44caa80addf69f0165c7e3c6408cc85d92c1bf04c17e6248c3d87b3\n Stored in directory: /root/.cache/pip/wheels/fb/28/5d/95e69a718b35dd00169889b0139a692f6c265d399cab3aa097\n Building wheel for pytools (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pytools: filename=pytools-2021.2-py2.py3-none-any.whl size=62446 sha256=a8d8944ef3da8121e31191e9ae48eee46ebb8aa6218d944e27e8e32017777f7a\n Stored in directory: /root/.cache/pip/wheels/d3/13/e8/fcb236c8cb91fb549a37f1d86783af99f0e0bcbeb9568ee5e2\n Building wheel for pydispatcher (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pydispatcher: filename=PyDispatcher-2.0.5-cp37-none-any.whl size=11517 sha256=2ae5b088af67f209f3cb85511b60ad04bcf38515c81d3884293bf7cc722159b0\n Stored in directory: /root/.cache/pip/wheels/88/99/96/cfef6665f9cb1522ee6757ae5955feedf2fe25f1737f91fa7f\n Successfully built graphdot pycuda treelib pymatgen mendeleev pytools pydispatcher\n Installing collected packages: pytools, mako, pycuda, treelib, kahypar, ase, pydispatcher, pymatgen, colorama, pyfiglet, mendeleev, graphdot\n Found existing installation: pymatgen 2020.12.31\n Uninstalling pymatgen-2020.12.31:\n Successfully uninstalled pymatgen-2020.12.31\n Successfully installed ase-3.21.1 colorama-0.4.4 graphdot-0.8a16 kahypar-1.1.6 mako-1.1.4 mendeleev-0.6.1 pycuda-2020.1 pydispatcher-2.0.5 pyfiglet-0.8.post1 pymatgen-2019.11.11 pytools-2021.2 treelib-1.6.1\n Requirement already satisfied: gdown in /usr/local/lib/python3.7/dist-packages (3.6.4)\n Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from gdown) (2.23.0)\n Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from gdown) (1.15.0)\n Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from gdown) (4.41.1)\n Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (2.10)\n Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (3.0.4)\n Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (2020.12.5)\n Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (1.24.3)\n\n\n\n```\n%matplotlib inline\nimport io\nimport 
sys\nsys.path.append('/usr/local/lib/python3.6/site-packages/')\nimport os\nimport urllib\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport graphdot\nfrom graphdot import Graph\nfrom graphdot.graph.adjacency import AtomicAdjacency\nfrom graphdot.graph.reorder import rcm\nfrom graphdot.kernel.marginalized import MarginalizedGraphKernel\nfrom graphdot.kernel.marginalized.starting_probability import Uniform\nfrom graphdot.model.gaussian_process import (\n GaussianProcessRegressor,\n LowRankApproximateGPR\n)\nfrom graphdot.kernel.fix import Normalization\nimport graphdot.microkernel as uX\nimport ase.io\n```\n\n /usr/local/lib/python3.7/dist-packages/graphdot/graph/__init__.py:24: UserWarning: Cannot import RDKit, `graph.from_rdkit()` will be unavailable.\n \n 'Cannot import RDKit, `graph.from_rdkit()` will be unavailable.\\n'\n\n\n\n```\nfrom google.colab import drive\ndrive.mount('/content/gdrive')\n```\n\n Mounted at /content/gdrive\n\n\n\n```\ncd gdrive/MyDrive/Google\\ Colab/Covid-Data\n```\n\n /content/gdrive/MyDrive/Google Colab/Covid-Data\n\n\n\n```\nfiles = ['uncharged_NSP15_6W01_A_3_H.Orderable_zinc_db_enaHLL.2col.csv.1.xz']\ndataset = pd.read_pickle(files[0])\n#frames = [pd.read_pickle(f) for f in files]\n#dataset = pd.concat(frames)\n```\n\n\n```\ndataset.head(5)\n```\n\n\n\n\n
|   | energy    | smiles                                            | graphs                                              |
|---|-----------|----------------------------------------------------|------------------------------------------------------|
| 0 | -2.265697 | Cc1nc(no1)c2cncnc2[C@@H]3CCCN(C3)C(=O)Cc4ccccc4    | Graph(nodes={'!i': [ 0, 1, 2, 3, 4, 5, 6, 7, 8...   |
| 1 | -2.351519 | Cc1ccc(cc1)c2nnc(n2N)SCC(=O)Nc3ccccc3OC            | Graph(nodes={'!i': [ 0, 1, 2, 3, 4, 5, 6, 7, 8...   |
| 2 | -3.308996 | CCOC(=O)Cn1c2ccccc2nc1[C@H]3CC(=O)N(C3)c4ccc(c...  | Graph(nodes={'!i': [ 0, 1, 2, 3, 4, 5, 6, 7, 8...   |
| 3 | -3.760179 | CC[C@H](C)NC(=O)[C@@H](CC)N(Cc1ccccc1)C(=O)Cc2...  | Graph(nodes={'!i': [ 0, 1, 2, 3, 4, 5, 6, 7, 8...   |
| 4 | -2.895577 | Cc1cccc2c1ccn2CCC(=O)N3CCc4cc(c(cc4C3)OC)OC        | Graph(nodes={'!i': [ 0, 1, 2, 3, 4, 5, 6, 7, 8...   |
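The `graphs` column above holds `graphdot.Graph` objects paired with each molecule's SMILES string and target energy. The pickled file was prepared elsewhere, but a dataframe of this shape could plausibly be assembled along the lines of the sketch below. This is only a hedged illustration: it assumes RDKit is installed (the import warning earlier shows it was unavailable in this session), and the two example rows simply reuse values from the preview above.

```python
# Illustrative sketch only -- not the pipeline that produced the pickled file above.
# Assumes RDKit is available so that graphdot's Graph.from_rdkit() can be used.
import pandas as pd
from rdkit import Chem
from graphdot import Graph

# (smiles, energy) pairs copied from the preview above, purely as examples
records = [
    ("Cc1nc(no1)c2cncnc2[C@@H]3CCCN(C3)C(=O)Cc4ccccc4", -2.265697),
    ("Cc1ccc(cc1)c2nnc(n2N)SCC(=O)Nc3ccccc3OC", -2.351519),
]

df = pd.DataFrame({
    "energy": [energy for _, energy in records],
    "smiles": [smiles for smiles, _ in records],
    # Parse each SMILES with RDKit, then convert it to a graphdot Graph
    "graphs": [Graph.from_rdkit(Chem.MolFromSmiles(smiles)) for smiles, _ in records],
})
df.head()
```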
\n\n\n\n\n```\ntarget = 'energy'\nN_train = 2500\nN_test = 5000\n```\n\n\n```\nnp.random.seed(0)\ntrain_sel = np.random.choice(len(dataset), N_train, replace=False)\ntest_sel = np.random.choice(np.setxor1d(np.arange(len(dataset)), train_sel), N_test, replace=False)\ntrain = dataset.iloc[train_sel]\ntest = dataset.iloc[test_sel]\n```\n\n\n```\ngpr = GaussianProcessRegressor(\n kernel=Normalization(\n MarginalizedGraphKernel(\n node_kernel=uX.Additive(\n aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),\n atomic_number=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),\n charge=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0),\n chiral=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),\n hcount=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0),\n hybridization=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),\n ring_list=uX.Constant(0.5, (0.01, 100.0)) * uX.Convolution(uX.KroneckerDelta(0.5,(0.1, 0.9)))\n ).normalized,\n edge_kernel=uX.Additive(\n aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),\n conjugated=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),\n order=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),\n ring_stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),\n stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9))\n ).normalized,\n p=Uniform(1.0, p_bounds='fixed'),\n q=0.05\n )\n ),\n alpha=1e-4,\n optimizer=True,\n normalize_y=True,\n regularization='+',\n)\n```\n\n\n```\ngpr.fit(train.graphs, train[target], repeat=3, verbose=True)\n```\n\n | logP| dlogP| y^T.K.y| log|K| | Cond(K)| GPU time| CPU time|\n |------------|------------|------------|------------|------------|----------|----------|\n | 9.2483e+06| 5.6205e+07| 9.2681e+06| -19798| 2.2708e+07| 1.3e+02| 2|\n | 7.0477e+06| 5.4659e+07| 7.0664e+06| -18684| 2.1871e+07| 1.3e+02| 1.8|\n | 4.8851e+06| 4.7042e+07| 4.9024e+06| -17313| 2.0436e+07| 1.2e+02| 1.9|\n | 4.1663e+06| 4.0942e+07| 4.1831e+06| -16749| 1.9691e+07| 1.2e+02| 3.1|\n | 3.6447e+06| 3.5014e+07| 3.6609e+06| -16275| 1.8951e+07| 1.2e+02| 3.2|\n | 3.3263e+06| 3.0755e+07| 3.3423e+06| -15945| 1.8326e+07| 1.1e+02| 3.4|\n | 3.1808e+06| 2.8965e+07| 3.1966e+06| -15788| 1.8127e+07| 1.1e+02| 3.5|\n | 3.1636e+06| 2.8096e+07| 3.1793e+06| -15754| 1.7693e+07| 1.1e+02| 3.5|\n | 3.1148e+06| 2.8068e+07| 3.1305e+06| -15707| 1.7766e+07| 1.1e+02| 3.5|\n | 3.1039e+06| 2.7961e+07| 3.1196e+06| -15698| 1.7887e+07| 1.1e+02| 3.6|\n | 3.0983e+06| 2.7874e+07| 3.114e+06| -15692| 1.7891e+07| 1.1e+02| 3.7|\n | 3.0871e+06| 2.787e+07| 3.1028e+06| -15680| 1.7867e+07| 1.1e+02| 3.8|\n | 3.0815e+06| 2.7741e+07| 3.0971e+06| -15675| 1.7885e+07| 1.1e+02| 3.9|\n | 3.0762e+06| 2.7713e+07| 3.0918e+06| -15673| 1.7843e+07| 1.1e+02| 4|\n | 3.0733e+06| 2.7745e+07| 3.089e+06| -15672| 1.7842e+07| 1.1e+02| 2|\n | 3.0743e+06| 2.7867e+07| 3.09e+06| -15672| 1.784e+07| 1.1e+02| 2|\n | 3.0747e+06| 2.7782e+07| 3.0904e+06| -15672| 1.7844e+07| 1.1e+02| 1.9|\n | 3.0748e+06| 2.7762e+07| 3.0904e+06| -15672| 1.7849e+07| 1.1e+02| 1.9|\n | 3.073e+06| 2.7737e+07| 3.0887e+06| -15672| 1.7842e+07| 1.1e+02| 1.9|\n | 3.0737e+06| 2.775e+07| 3.0893e+06| -15672| 1.783e+07| 1.1e+02| 1.9|\n | 3.0723e+06| 2.7726e+07| 3.088e+06| -15672| 1.7824e+07| 1.1e+02| 1.9|\n | 3.075e+06| 2.7889e+07| 3.0906e+06| -15673| 1.7805e+07| 1.1e+02| 1.9|\n | 3.075e+06| 2.7778e+07| 3.0907e+06| -15672| 1.7836e+07| 1.1e+02| 2|\n | 3.0736e+06| 2.7744e+07| 3.0893e+06| 
-15672| 1.7848e+07| 1.1e+02| 1.9|\n | 3.0738e+06| 2.7755e+07| 3.0895e+06| -15672| 1.7836e+07| 1.1e+02| 1.9|\n | 3.0723e+06| 2.7726e+07| 3.0879e+06| -15672| 1.7843e+07| 1.1e+02| 2|\n | 3.0729e+06| 2.7733e+07| 3.0886e+06| -15672| 1.7837e+07| 1.1e+02| 1.9|\n | 3.0737e+06| 2.7751e+07| 3.0894e+06| -15672| 1.7844e+07| 1.1e+02| 2|\n | 3.0728e+06| 2.7731e+07| 3.0885e+06| -15672| 1.7845e+07| 1.1e+02| 2|\n | logP| dlogP| y^T.K.y| log|K| | Cond(K)| GPU time| CPU time|\n |------------|------------|------------|------------|------------|----------|----------|\n | 9.7674e+06| 7.105e+07| 9.7874e+06| -19989| 2.2325e+07| 1.6e+02| 2|\n | 7.4121e+06| 6.7322e+07| 7.431e+06| -18879| 2.1812e+07| 1.5e+02| 1.9|\n | 5.4618e+06| 5.7277e+07| 5.4796e+06| -17744| 2.0827e+07| 1.4e+02| 1.9|\n | 4.6314e+06| 4.9361e+07| 4.6486e+06| -17148| 2.0213e+07| 1.3e+02| 2|\n | 4.1207e+06| 4.3868e+07| 4.1375e+06| -16726| 1.9615e+07| 1.3e+02| 2|\n | 3.6953e+06| 3.8578e+07| 3.7116e+06| -16337| 1.9095e+07| 1.2e+02| 2.1|\n | 3.4328e+06| 3.4685e+07| 3.4489e+06| -16076| 1.8614e+07| 1.2e+02| 2|\n | 3.2487e+06| 2.9831e+07| 3.2645e+06| -15864| 1.8236e+07| 1.1e+02| 2|\n | 3.1564e+06| 2.9162e+07| 3.1721e+06| -15769| 1.8106e+07| 1.1e+02| 2|\n | 3.1359e+06| 2.8967e+07| 3.1516e+06| -15744| 1.7972e+07| 1.1e+02| 2|\n | 3.1081e+06| 2.8518e+07| 3.1239e+06| -15713| 1.7922e+07| 1.1e+02| 2|\n | 3.0969e+06| 2.8242e+07| 3.1126e+06| -15698| 1.7902e+07| 1.1e+02| 2|\n | 3.0828e+06| 2.7696e+07| 3.0985e+06| -15682| 1.7926e+07| 1.1e+02| 2|\n | 3.0814e+06| 2.7725e+07| 3.0971e+06| -15679| 1.7879e+07| 1.1e+02| 2|\n | 3.0786e+06| 2.7724e+07| 3.0943e+06| -15678| 1.7854e+07| 1.1e+02| 2|\n | 3.0788e+06| 2.7803e+07| 3.0945e+06| -15676| 1.7835e+07| 1.1e+02| 2|\n | 3.0793e+06| 2.7747e+07| 3.095e+06| -15678| 1.7856e+07| 1.1e+02| 2|\n | 3.0798e+06| 2.7753e+07| 3.0954e+06| -15678| 1.7841e+07| 1.1e+02| 2.1|\n | 3.0812e+06| 2.7766e+07| 3.0968e+06| -15678| 1.7836e+07| 1.1e+02| 2|\n | 3.0791e+06| 2.7736e+07| 3.0948e+06| -15678| 1.7868e+07| 1.1e+02| 2|\n | 3.0789e+06| 2.773e+07| 3.0946e+06| -15678| 1.7852e+07| 1.1e+02| 2|\n | logP| dlogP| y^T.K.y| log|K| | Cond(K)| GPU time| CPU time|\n |------------|------------|------------|------------|------------|----------|----------|\n | 1.3416e+07| 2.7823e+07| 1.3437e+07| -21350| 2.3927e+07| 1.1e+02| 2|\n | 1.1311e+07| 3.7692e+07| 1.1332e+07| -20625| 2.321e+07| 1.2e+02| 1.9|\n | 5.3701e+06| 4.4484e+07| 5.3878e+06| -17634| 2.0953e+07| 1.2e+02| 1.9|\n | 3.7039e+06| 3.3779e+07| 3.7202e+06| -16298| 1.8805e+07| 1.2e+02| 1.9|\n | 3.4237e+06| 3.0303e+07| 3.4397e+06| -16027| 1.8516e+07| 1.1e+02| 1.9|\n | 3.2623e+06| 2.902e+07| 3.2782e+06| -15866| 1.8245e+07| 1.1e+02| 2|\n | 3.1641e+06| 2.8458e+07| 3.1799e+06| -15761| 1.793e+07| 1.1e+02| 1.9|\n | 3.1255e+06| 2.815e+07| 3.1412e+06| -15720| 1.7994e+07| 1.1e+02| 1.9|\n | 3.0972e+06| 2.7847e+07| 3.1129e+06| -15686| 1.7824e+07| 1.1e+02| 2|\n | 3.0941e+06| 2.779e+07| 3.1098e+06| -15681| 1.7811e+07| 1.1e+02| 1.9|\n | 3.0908e+06| 2.788e+07| 3.1065e+06| -15678| 1.7836e+07| 1.1e+02| 2|\n | 3.0893e+06| 2.7791e+07| 3.1049e+06| -15677| 1.7835e+07| 1.1e+02| 2|\n | 3.0886e+06| 2.7739e+07| 3.1042e+06| -15676| 1.7826e+07| 1.1e+02| 2|\n | 3.0864e+06| 2.7668e+07| 3.1021e+06| -15675| 1.7838e+07| 1.1e+02| 1.9|\n | 3.0834e+06| 2.7706e+07| 3.0991e+06| -15673| 1.786e+07| 1.1e+02| 2|\n | 3.0821e+06| 2.7687e+07| 3.0977e+06| -15672| 1.7816e+07| 1.1e+02| 1.9|\n | 3.0821e+06| 2.77e+07| 3.0977e+06| -15672| 1.7825e+07| 1.1e+02| 2|\n | 3.0817e+06| 2.7683e+07| 3.0974e+06| -15672| 1.7826e+07| 1.1e+02| 1.9|\n | 3.0813e+06| 
2.7732e+07| 3.097e+06| -15672| 1.7815e+07| 1.1e+02| 1.9|\n | 3.0804e+06| 2.7898e+07| 3.0961e+06| -15677| 1.7811e+07| 1.1e+02| 2|\n | 3.0775e+06| 2.7864e+07| 3.0932e+06| -15675| 1.7796e+07| 1.1e+02| 2|\n | 3.0758e+06| 2.7888e+07| 3.0915e+06| -15674| 1.785e+07| 1.1e+02| 1.9|\n | 3.075e+06| 2.79e+07| 3.0907e+06| -15674| 1.7856e+07| 1.1e+02| 1.9|\n | 3.0745e+06| 2.7864e+07| 3.0901e+06| -15674| 1.7843e+07| 1.1e+02| 2|\n | 3.0741e+06| 2.7849e+07| 3.0898e+06| -15673| 1.784e+07| 1.1e+02| 2|\n | 3.075e+06| 2.7851e+07| 3.0907e+06| -15672| 1.7851e+07| 1.1e+02| 2|\n | 3.0743e+06| 2.7844e+07| 3.09e+06| -15673| 1.7834e+07| 1.1e+02| 1.9|\n | 3.0746e+06| 2.7848e+07| 3.0903e+06| -15673| 1.7826e+07| 1.1e+02| 2|\n | 3.074e+06| 2.7841e+07| 3.0897e+06| -15673| 1.786e+07| 1.1e+02| 2|\n | 3.0748e+06| 2.7863e+07| 3.0905e+06| -15673| 1.7837e+07| 1.1e+02| 1.9|\n | 3.0748e+06| 2.7858e+07| 3.0904e+06| -15673| 1.7846e+07| 1.1e+02| 1.9|\n Optimization result:\n fun: 3072837.549856819\n hess_inv: <25x25 LbfgsInvHessProduct with dtype=float64>\n jac: array([ 2.77173629e+03, 8.63091068e+02, 2.24459039e+02, -5.40556268e+03,\n 5.34039129e+04, 1.09446494e+03, 0.00000000e+00, -2.01899144e+03,\n 1.10823575e+03, 8.21839298e+03, 7.60393519e-15, 6.05781870e+02,\n 3.50613662e+02, -3.40939052e+03, 6.56884593e+04, 1.17076522e+03,\n 7.37607748e+03, -1.54956743e+03, 1.06405848e+04, -1.05655723e+03,\n 9.70572233e+02, 1.09689017e+03, 0.00000000e+00, 3.33513390e+02,\n 1.31117462e+03])\n message: 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'\n nfev: 29\n nit: 16\n njev: 29\n status: 0\n success: True\n x: array([-9.21034037, -3.87543474, -1.69111274, 1.66992633, -2.30258509,\n -4.40365014, 0. , -2.5704035 , -2.1973491 , 2.29742905,\n -2.30904354, -3.62985763, -1.79849954, 1.88246002, -2.30258509,\n 0.51066812, -2.30258509, 0.84382344, -2.30258509, -2.71750085,\n -1.57822743, -3.60981968, -0.22314355, -1.29211429, -2.22365827])\n\n\n\n\n\n \n\n\n\n\n```\ngpr.kernel.hyperparameters\n```\n\n\n\n\n starting_probability : Uniform\n \tp : 1.0\n stopping_probability : 0.00010000000000000009\n node_kernel : Composite\n \taromatic : Multiply\n \t\tlhs : Constant\n \t\t\tc : 0.020745317040533336\n \t\trhs : KroneckerDelta\n \t\t\th : 0.18431431562315057\n \tatomic_number : Multiply\n \t\tlhs : Constant\n \t\t\tc : 5.311776469014586\n \t\trhs : KroneckerDelta\n \t\t\th : 0.10000000000000002\n \tcharge : Multiply\n \t\tlhs : Constant\n \t\t\tc : 0.012232607607837629\n \t\trhs : SquareExponential\n \t\t\tlength_scale : 1.0\n \tchiral : Multiply\n \t\tlhs : Constant\n \t\t\tc : 0.07650466922172743\n \t\trhs : KroneckerDelta\n \t\t\th : 0.11109727647636178\n \thcount : Multiply\n \t\tlhs : Constant\n \t\t\tc : 9.948572239474858\n \t\trhs : SquareExponential\n \t\t\tlength_scale : 0.09935623660147404\n \thybridization : Multiply\n \t\tlhs : Constant\n \t\t\tc : 0.026519959900936984\n \t\trhs : KroneckerDelta\n \t\t\th : 0.16554709935637946\n \tring_list : Multiply\n \t\tlhs : Constant\n \t\t\tc : 6.569646480637104\n \t\trhs : Convolution\n \t\t\tbase : KroneckerDelta\n \t\t\t\th : 0.10000000000000002\n edge_kernel : Composite\n \taromatic : Multiply\n \t\tlhs : Constant\n \t\t\tc : 1.6664041788716395\n \t\trhs : KroneckerDelta\n \t\t\th : 0.10000000000000002\n \tconjugated : Multiply\n \t\tlhs : Constant\n \t\t\tc : 2.3252404085414984\n \t\trhs : KroneckerDelta\n \t\t\th : 0.10000000000000002\n \torder : Multiply\n \t\tlhs : Constant\n \t\t\tc : 0.0660395912613341\n \t\trhs : KroneckerDelta\n \t\t\th : 0.20634052632087976\n \tring_stereo : Multiply\n 
\t\tlhs : Constant\n \t\t\tc : 0.02705672519567324\n \t\trhs : KroneckerDelta\n \t\t\th : 0.8\n \tstereo : Multiply\n \t\tlhs : Constant\n \t\t\tc : 0.27468939675048937\n \t\trhs : KroneckerDelta\n \t\t\th : 0.10821251378092044\n\n\n\n\n```\nmu = gpr.predict(train.graphs)\n```\n\n\n```\nplt.scatter(train[target], mu)\nplt.show()\n```\n\n\n```\nprint('Training set')\nprint('MAE:', np.mean(np.abs(train[target] - mu)))\nprint('RMSE:', np.std(train[target] - mu))\n```\n\n Training set\n MAE: 0.20686970935482746\n RMSE: 0.3186909692030075\n\n\n\n```\nmu_test = gpr.predict(test.graphs)\n```\n\n\n```\nplt.scatter(test[target], mu_test)\nplt.show()\n```\n\n\n```\nprint('Test set')\nprint('MAE:', np.mean(np.abs(test[target] - mu_test)))\nprint('RMSE:', np.std(test[target] - mu_test))\n```\n\n MAE: 1.244993771620827\n RMSE: 1.6025986390757625\n\n\n\n```\ngpr2 = GaussianProcessRegressor(\n kernel=Normalization(\n MarginalizedGraphKernel(\n node_kernel=uX.Additive(\n aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),\n atomic_number=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),\n charge=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0),\n chiral=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),\n hcount=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0),\n hybridization=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),\n ring_list=uX.Constant(0.5, (0.01, 100.0)) * uX.Convolution(uX.KroneckerDelta(0.5,(0.1, 0.9)))\n ).normalized,\n edge_kernel=uX.Additive(\n aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),\n conjugated=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),\n order=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),\n ring_stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),\n stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9))\n ).normalized,\n p=Uniform(1.0, p_bounds='fixed'),\n q=0.05\n )\n ),\n alpha=1e-2,\n optimizer=True,\n normalize_y=True,\n regularization='+',\n)\n```\n\n\n```\ngpr2.fit(train.graphs, train[target], repeat=3, verbose=True)\n```\n\n | logP| dlogP| y^T.K.y| log|K| | Cond(K)| GPU time| CPU time|\n |------------|------------|------------|------------|------------|----------|----------|\n | 1.667e+05| 1.9282e+05| 1.7773e+05| -11034| 2.2742e+05| 1.3e+02| 1.9|\n | 1.5585e+05| 2.2172e+05| 1.6671e+05| -10857| 2.2218e+05| 1.3e+02| 2|\n | 1.3711e+05| 2.4873e+05| 1.4755e+05| -10447| 2.0803e+05| 1.2e+02| 2|\n | 1.3286e+05| 2.3628e+05| 1.432e+05| -10338| 2.0336e+05| 1.2e+02| 1.9|\n | 1.2977e+05| 2.2742e+05| 1.4002e+05| -10249| 1.9714e+05| 1.1e+02| 2|\n | 1.3007e+05| 2.1427e+05| 1.4032e+05| -10258| 2.0121e+05| 1.1e+02| 1.9|\n | 1.2923e+05| 2.2102e+05| 1.3946e+05| -10238| 1.9906e+05| 1.1e+02| 2|\n | 1.2888e+05| 2.2097e+05| 1.3911e+05| -10229| 1.9866e+05| 1.1e+02| 1.9|\n | 1.2843e+05| 2.2339e+05| 1.3864e+05| -10211| 1.9581e+05| 1.1e+02| 2|\n | 1.2837e+05| 2.2244e+05| 1.3858e+05| -10212| 1.969e+05| 1.1e+02| 2|\n | 1.2836e+05| 2.2224e+05| 1.3857e+05| -10212| 1.9697e+05| 1.1e+02| 2|\n | 1.2832e+05| 2.2155e+05| 1.3853e+05| -10211| 1.9716e+05| 1.1e+02| 1.9|\n | 1.2826e+05| 2.2184e+05| 1.3847e+05| -10209| 1.9704e+05| 1.1e+02| 1.9|\n | 1.2821e+05| 2.218e+05| 1.3841e+05| -10207| 1.9672e+05| 1.1e+02| 1.9|\n | 1.2819e+05| 2.2165e+05| 1.384e+05| -10207| 1.9664e+05| 1.1e+02| 1.9|\n | 1.2818e+05| 2.2129e+05| 1.3839e+05| -10207| 1.9671e+05| 1.1e+02| 1.9|\n | 1.2817e+05| 2.2108e+05| 1.3838e+05| 
-10207| 1.9674e+05| 1.1e+02| 2|\n | 1.2815e+05| 2.2069e+05| 1.3836e+05| -10207| 1.9678e+05| 1.1e+02| 1.9|\n | 1.2812e+05| 2.1806e+05| 1.3833e+05| -10210| 1.9699e+05| 1.1e+02| 1.9|\n | 1.281e+05| 2.1842e+05| 1.383e+05| -10208| 1.9697e+05| 1.1e+02| 2|\n | 1.2807e+05| 2.1757e+05| 1.3828e+05| -10208| 1.9698e+05| 1.1e+02| 1.9|\n | 1.2806e+05| 2.1787e+05| 1.3827e+05| -10208| 1.9678e+05| 1.1e+02| 1.9|\n | 1.2805e+05| 2.1731e+05| 1.3825e+05| -10208| 1.969e+05| 1.1e+02| 1.9|\n | 1.2805e+05| 2.1704e+05| 1.3825e+05| -10208| 1.9693e+05| 1.1e+02| 1.9|\n | 1.2805e+05| 2.1692e+05| 1.3825e+05| -10208| 1.9695e+05| 1.1e+02| 1.9|\n | 1.2805e+05| 2.1703e+05| 1.3825e+05| -10208| 1.9693e+05| 1.1e+02| 1.9|\n | 1.2804e+05| 2.1694e+05| 1.3825e+05| -10208| 1.9694e+05| 1.1e+02| 1.8|\n | logP| dlogP| y^T.K.y| log|K| | Cond(K)| GPU time| CPU time|\n |------------|------------|------------|------------|------------|----------|----------|\n | 1.598e+05| 2.3746e+05| 1.7074e+05| -10939| 2.2637e+05| 1.4e+02| 1.9|\n | 1.51e+05| 2.4982e+05| 1.6178e+05| -10778| 2.2182e+05| 1.3e+02| 2|\n | 1.3818e+05| 2.5682e+05| 1.4865e+05| -10478| 2.1043e+05| 1.2e+02| 1.9|\n | 1.3264e+05| 2.3404e+05| 1.4297e+05| -10329| 2.0251e+05| 1.1e+02| 1.9|\n | 1.3026e+05| 2.2465e+05| 1.4053e+05| -10266| 2.0075e+05| 1.2e+02| 1.9|\n | 1.2905e+05| 2.2268e+05| 1.3928e+05| -10231| 1.98e+05| 1.1e+02| 1.9|\n | 1.2875e+05| 2.2218e+05| 1.3898e+05| -10222| 1.972e+05| 1.1e+02| 1.9|\n | 1.2846e+05| 2.2218e+05| 1.3867e+05| -10215| 1.9746e+05| 1.1e+02| 1.9|\n | 1.2842e+05| 2.2237e+05| 1.3863e+05| -10213| 1.97e+05| 1.1e+02| 1.9|\n | 1.2839e+05| 2.223e+05| 1.386e+05| -10212| 1.9683e+05| 1.1e+02| 1.9|\n | 1.2834e+05| 2.221e+05| 1.3855e+05| -10211| 1.9661e+05| 1.1e+02| 1.9|\n | 1.2827e+05| 2.2204e+05| 1.3848e+05| -10209| 1.9656e+05| 1.1e+02| 1.9|\n | 1.2847e+05| 2.1854e+05| 1.3869e+05| -10219| 1.9944e+05| 1.1e+02| 2|\n | 1.2825e+05| 2.2117e+05| 1.3846e+05| -10209| 1.9725e+05| 1.1e+02| 1.9|\n | 1.2818e+05| 2.2112e+05| 1.3839e+05| -10207| 1.9694e+05| 1.1e+02| 1.9|\n | 1.2815e+05| 2.2103e+05| 1.3836e+05| -10207| 1.9673e+05| 1.1e+02| 1.9|\n | 1.2809e+05| 2.199e+05| 1.383e+05| -10206| 1.9649e+05| 1.1e+02| 1.9|\n | 1.2859e+05| 2.0873e+05| 1.3881e+05| -10225| 1.9638e+05| 1.1e+02| 1.9|\n | 1.2806e+05| 2.1858e+05| 1.3827e+05| -10207| 1.9647e+05| 1.1e+02| 1.9|\n | 1.2805e+05| 2.1677e+05| 1.3826e+05| -10207| 1.9652e+05| 1.1e+02| 1.8|\n | 1.2805e+05| 2.1694e+05| 1.3826e+05| -10207| 1.967e+05| 1.1e+02| 1.9|\n | 1.2805e+05| 2.167e+05| 1.3825e+05| -10207| 1.9688e+05| 1.1e+02| 1.9|\n | 1.2805e+05| 2.1664e+05| 1.3825e+05| -10207| 1.9691e+05| 1.1e+02| 1.9|\n | logP| dlogP| y^T.K.y| log|K| | Cond(K)| GPU time| CPU time|\n |------------|------------|------------|------------|------------|----------|----------|\n | 1.6725e+05| 1.9567e+05| 1.7829e+05| -11040| 2.2401e+05| 1.4e+02| 1.9|\n | 1.5632e+05| 2.2001e+05| 1.6717e+05| -10858| 2.2063e+05| 1.3e+02| 1.9|\n | 1.4534e+05| 2.4266e+05| 1.5597e+05| -10631| 2.1449e+05| 1.2e+02| 1.9|\n | 1.3481e+05| 2.3484e+05| 1.4519e+05| -10385| 2.0372e+05| 1.2e+02| 1.9|\n | 1.3073e+05| 2.2726e+05| 1.4102e+05| -10288| 2.0201e+05| 1.2e+02| 1.9|\n | 1.2984e+05| 2.1929e+05| 1.4009e+05| -10254| 1.992e+05| 1.1e+02| 1.9|\n | 1.2914e+05| 2.2195e+05| 1.3938e+05| -10239| 1.9863e+05| 1.1e+02| 1.9|\n | 1.2885e+05| 2.2174e+05| 1.3909e+05| -10232| 1.9832e+05| 1.1e+02| 1.9|\n | 1.2852e+05| 2.1978e+05| 1.3875e+05| -10224| 1.9795e+05| 1.1e+02| 1.9|\n | 1.2827e+05| 2.1632e+05| 1.3848e+05| -10216| 1.9653e+05| 1.1e+02| 2|\n | 1.2824e+05| 2.1302e+05| 1.3846e+05| -10220| 
1.9876e+05| 1.1e+02| 1.9|\n | 1.2818e+05| 2.1425e+05| 1.384e+05| -10216| 1.9779e+05| 1.1e+02| 1.9|\n | 1.2817e+05| 2.1466e+05| 1.3838e+05| -10215| 1.9752e+05| 1.1e+02| 1.9|\n | 1.2815e+05| 2.1536e+05| 1.3836e+05| -10213| 1.9717e+05| 1.1e+02| 1.9|\n | 1.2811e+05| 2.1626e+05| 1.3832e+05| -10211| 1.9686e+05| 1.1e+02| 1.9|\n | 1.2806e+05| 2.1764e+05| 1.3826e+05| -10208| 1.9668e+05| 1.1e+02| 1.9|\n | 1.2805e+05| 2.1736e+05| 1.3826e+05| -10208| 1.9717e+05| 1.1e+02| 1.9|\n | 1.2805e+05| 2.1749e+05| 1.3826e+05| -10208| 1.9693e+05| 1.1e+02| 1.8|\n | 1.2805e+05| 2.1742e+05| 1.3826e+05| -10208| 1.9697e+05| 1.1e+02| 1.9|\n | 1.2805e+05| 2.1729e+05| 1.3826e+05| -10208| 1.9703e+05| 1.1e+02| 1.8|\n Optimization result:\n fun: 128044.16324564976\n hess_inv: <25x25 LbfgsInvHessProduct with dtype=float64>\n jac: array([ 21.67021161, 5.72796744, 0.97392078, 3.84598927,\n 636.86139094, 10.22583356, 0. , 6.14285024,\n 0.81411101, -39.71352434, 0. , 9.59334241,\n 2.14451472, 4.28175272, 767.23922512, -0.97552918,\n 93.48096814, -2.22591045, 118.52285938, 5.57539791,\n 38.26594591, 8.48864576, 0. , -10.85883055,\n 7.64668233])\n message: 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'\n nfev: 27\n nit: 21\n njev: 27\n status: 0\n success: True\n x: array([-9.21034037, -4.46870495, -1.96764908, 1.72425809, -2.30258509,\n -4.60517019, 0. , -4.60517019, -1.79430812, 2.30258509,\n -2.86003467, -3.56649247, -2.30258509, 1.91041179, -2.30258509,\n 0.29155196, -2.30258509, 0.52796843, -2.30258509, -0.58445925,\n -2.30258509, -4.30363654, -0.22314355, -2.54592993, -2.18130617])\n\n\n\n\n\n \n\n\n\n\n```\nmu = gpr2.predict(train.graphs)\nplt.scatter(train[target], mu)\nplt.show()\n```\n\n\n```\nprint('Training set')\nprint('MAE:', np.mean(np.abs(train[target] - mu)))\nprint('RMSE:', np.std(train[target] - mu))\n```\n\n Training set\n MAE: 0.7238574928392492\n RMSE: 0.9635062924566815\n\n\n\n```\nmu_test = gpr2.predict(test.graphs)\nplt.scatter(test[target], mu_test)\nplt.show()\n```\n\n\n```\nprint('Test set')\nprint('MAE:', np.mean(np.abs(test[target] - mu_test)))\nprint('RMSE:', np.std(test[target] - mu_test))\n```\n\n Test set\n MAE: 0.9561466500156314\n RMSE: 1.2284123729830978\n\n\n\n```\n\n```\n", "meta": {"hexsha": "041f8b8c6e6b9ec0f96f3c379accc59bed769138", "size": 121768, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Docking_0605.ipynb", "max_stars_repo_name": "ChrizZhuang/marginalized_graph_kernel_protein", "max_stars_repo_head_hexsha": "5c0d0c36f66be174521b86fb11d53933dfa6bc1e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Docking_0605.ipynb", "max_issues_repo_name": "ChrizZhuang/marginalized_graph_kernel_protein", "max_issues_repo_head_hexsha": "5c0d0c36f66be174521b86fb11d53933dfa6bc1e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Docking_0605.ipynb", "max_forks_repo_name": "ChrizZhuang/marginalized_graph_kernel_protein", "max_forks_repo_head_hexsha": "5c0d0c36f66be174521b86fb11d53933dfa6bc1e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 101.8127090301, "max_line_length": 16694, "alphanum_fraction": 0.7005206622, "converted": true, "num_tokens": 21035, "lm_name": 
"Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5117166047041654, "lm_q2_score": 0.5312093733737562, "lm_q1q2_score": 0.27182865692984576}} {"text": "\n\n\n# Start-to-Finish Example: `GiRaFFE_NRPy` 1D tests\n\n### Author: Patrick Nelson\n\n### Adapted from [Start-to-Finish Example: Head-On Black Hole Collision](../Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb)\n\n## This module implements a basic GRFFE code to evolve one-dimensional GRFFE waves.\n\n### NRPy+ Source Code for this module: \n* [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Exact_Wald.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Exact_Wald.py) [\\[**tutorial**\\]](Tutorial-GiRaFFEfood_NRPy_Exact_Wald.ipynb) Generates Exact Wald initial data\n* [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Aligned_Rotator.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Aligned_Rotator.py) [\\[**tutorial**\\]](Tutorial-GiRaFFEfood_NRPy_Aligned_Rotator.ipynb) Generates Aligned Rotator initial data\n* [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py) [\\[**tutorial**\\]](Tutorial-GiRaFFEfood_NRPy_1D_tests.ipynb) Generates Alfvén Wave initial data.\n* [GiRaFFE_NRPy/Afield_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Afield_flux.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-Afield_flux.ipynb) Generates the expressions to find the flux term of the induction equation.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_A2B.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_A2B.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-Afield_flux.ipynb) Generates the driver to compute the magnetic field from the vector potential/\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-BCs.ipynb) Generates the code to apply boundary conditions to the vector potential, scalar potential, and three-velocity.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_C2P_P2C.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_C2P_P2C.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-C2P_P2C.ipynb) Generates the conservative-to-primitive and primitive-to-conservative solvers.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb) Generates code to interpolate metric gridfunctions to cell faces.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_PPM.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_PPM.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-PPM.ipynb) Genearates code to reconstruct primitive variables on cell faces.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-Source_Terms.ipynb) Generates the expressions to find the flux term of the Poynting flux evolution equation.\n* [GiRaFFE_NRPy/Stilde_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Stilde_flux.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-Stilde_flux.ipynb) Generates the expressions to find the flux term of the Poynting flux evolution equation.\n* [../GRFFE/equations.py](../../edit/GRFFE/equations.py) [\\[**tutorial**\\]](../Tutorial-GRFFE_Equations-Cartesian.ipynb) Generates code necessary to compute the source terms.\n* [../GRHD/equations.py](../../edit/GRHD/equations.py) [\\[**tutorial**\\]](../Tutorial-GRHD_Equations-Cartesian.ipynb) Generates code necessary to compute the source terms.\n\nHere we use NRPy+ to generate the C source code 
necessary to set up initial data for an Alfvén wave (see [the original GiRaFFE paper](https://arxiv.org/pdf/1704.00599.pdf)). Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).\n\n\n\n# Table of Contents\n$$\\label{toc}$$\n\nThis notebook is organized as follows\n\n1. [Step 1](#initializenrpy): Set core NRPy+ parameters for numerical grids\n1. [Step 2](#grffe): Output C code for GRFFE evolution\n 1. [Step 2.a](#mol): Output macros for Method of Lines timestepping\n1. [Step 3](#gf_id): Import `GiRaFFEfood_NRPy` initial data modules\n1. [Step 4](#cparams): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`\n1. [Step 5](#mainc): `GiRaFFE_NRPy_standalone.c`: The Main C Code\n\n\n\n# Step 1: Set up core functions and parameters for solving GRFFE equations \\[Back to [top](#toc)\\]\n$$\\label{setup}$$\n\n\n\n```python\nimport shutil, os, sys # Standard Python modules for multiplatform OS-level functions\n# First, we'll add the parent directory to the list of directories Python will check for modules.\nnrpy_dir_path = os.path.join(\"..\")\nif nrpy_dir_path not in sys.path:\n sys.path.append(nrpy_dir_path)\n\n# Step P1: Import needed NRPy+ core modules:\nfrom outputC import outCfunction, lhrh # NRPy+: Core C code output module\nimport sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends\nimport finite_difference as fin # NRPy+: Finite difference C code generation module\nimport NRPy_param_funcs as par # NRPy+: Parameter interface\nimport grid as gri # NRPy+: Functions having to do with numerical grids\nimport indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support\nimport cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\n\n# Step P2: Create C code output directory:\nCcodesdir = os.path.join(\"GiRaFFE_standalone_Ccodes/\")\n# First remove C code output directory if it exists\n# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty\n# !rm -r ScalarWaveCurvilinear_Playground_Ccodes\nshutil.rmtree(Ccodesdir, ignore_errors=True)\n# Then create a fresh directory\ncmd.mkdir(Ccodesdir)\n\n# Step P3: Create executable output directory:\noutdir = os.path.join(Ccodesdir,\"output/\")\ncmd.mkdir(Ccodesdir)\ncmd.mkdir(outdir)\n\n# Step P5: Set timestepping algorithm (we adopt the Method of Lines)\nREAL = \"double\" # Best to use double here.\ndefault_CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.\n\n# Step P6: Set the finite differencing order to 2.\npar.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\",2)\n\nthismodule = \"Start_to_Finish-GiRaFFE_NRPy-1D_tests\"\nTINYDOUBLE = par.Cparameters(\"REAL\", thismodule, \"TINYDOUBLE\", 1e-100)\n\nimport GiRaFFE_NRPy.GiRaFFE_NRPy_Main_Driver as md\n# par.set_paramsvals_value(\"GiRaFFE_NRPy.GiRaFFE_NRPy_C2P_P2C::enforce_speed_limit_StildeD = False\")\npar.set_paramsvals_value(\"GiRaFFE_NRPy.GiRaFFE_NRPy_C2P_P2C::enforce_current_sheet_prescription = False\")\n```\n\n\n\n# Step 2: Output C code for GRFFE evolution \\[Back to [top](#toc)\\]\n$$\\label{grffe}$$\n\nWe will first write the C codes needed for GRFFE evolution. 
We have already written a module to generate all these codes and call the functions in the appropriate order, so we will import that here. We will take the slightly unusual step of doing this before we generate the initial data functions because the main driver module will register all the gridfunctions we need. It will also generate functions that, in addition to their normal spot in the MoL timestepping, will need to be called during the initial data step to make sure all the variables are appropriately filled in. \n\nAll of this is handled with a single call to `GiRaFFE_NRPy_Main_Driver_generate_all()`, which will register gridfunctions, write all the C code kernels, and write the C code functions to call those.\n\n\n```python\nmd.GiRaFFE_NRPy_Main_Driver_generate_all(Ccodesdir)\n```\n\n Output C function calculate_AD_gauge_term_psi6Phi_flux_term_for_RHSs() to file GiRaFFE_standalone_Ccodes/RHSs/calculate_AD_gauge_term_psi6Phi_flux_term_for_RHSs.h\n Output C function calculate_StildeD0_source_term() to file GiRaFFE_standalone_Ccodes/RHSs/calculate_StildeD0_source_term.h\n Output C function calculate_StildeD1_source_term() to file GiRaFFE_standalone_Ccodes/RHSs/calculate_StildeD1_source_term.h\n Output C function calculate_StildeD2_source_term() to file GiRaFFE_standalone_Ccodes/RHSs/calculate_StildeD2_source_term.h\n Output C function calculate_Stilde_rhsD() to file GiRaFFE_standalone_Ccodes/RHSs/calculate_Stilde_rhsD.h\n Output C function GiRaFFE_NRPy_cons_to_prims() to file GiRaFFE_standalone_Ccodes/C2P/GiRaFFE_NRPy_cons_to_prims.h\n Output C function GiRaFFE_NRPy_prims_to_cons() to file GiRaFFE_standalone_Ccodes/C2P/GiRaFFE_NRPy_prims_to_cons.h\n\n\n\n\n## Step 2.a: Output macros for Method of Lines timestepping \\[Back to [top](#toc)\\]\n$$\\label{mol}$$\n\nNow, we generate the code to implement the method of lines using the fourth-order Runge-Kutta algorithm.\n\n\n```python\nRK_method = \"RK4\"\n\n# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.\n# As described above the Table of Contents, this is a 3-step process:\n# 3.A: Evaluate RHSs (RHS_string)\n# 3.B: Apply boundary conditions (post_RHS_string, pt 1)\nimport MoLtimestepping.C_Code_Generation as MoL\nfrom MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict\nRK_order = Butcher_dict[RK_method][1]\ncmd.mkdir(os.path.join(Ccodesdir,\"MoLtimestepping/\"))\nMoL.MoL_C_Code_Generation(RK_method,\n RHS_string = \"\"\"\nGiRaFFE_NRPy_RHSs(¶ms,auxevol_gfs,RK_INPUT_GFS,RK_OUTPUT_GFS);\"\"\",\n post_RHS_string = \"\"\"\nGiRaFFE_NRPy_post_step(¶ms,xx,auxevol_gfs,RK_OUTPUT_GFS,n+1);\\n\"\"\",\n outdir = os.path.join(Ccodesdir,\"MoLtimestepping/\"))\n\n```\n\n\n\n# Step 3: Import `GiRaFFEfood_NRPy` initial data modules \\[Back to [top](#toc)\\]\n$$\\label{gf_id}$$\n\nWith the preliminaries out of the way, we will write the C functions to set up initial data. There are two categories of initial data that must be set: the spacetime metric variables, and the GRFFE plasma variables. We will set up the spacetime first.\n\n\n```python\n# There are several initial data routines we need to test. 
We'll control which one we use with a string option\ninitial_data = \"AlfvenWave\" # Valid options: \"AlignedRotator\", \"AlfvenWave\", \"FastWave\",\n # \"DegenAlfvenWave\", \"ThreeWaves\", \"FFE_Breakdown\"\nspacetime = \"flat\" # Valid options: \"flat\"\n\ngammaDD = ixp.zerorank2(DIM=3)\nfor i in range(3):\n for j in range(3):\n if i==j:\n gammaDD[i][j] = sp.sympify(1) # else: leave as zero\nbetaU = ixp.zerorank1() # All should be 0\nalpha = sp.sympify(1)\n\n# Description and options for this initial data\ndesc = \"Generate a flat spacetime metric.\"\nloopopts_id =\"AllPoints\" # we don't need to read coordinates for flat spacetime.\n\n\nname = \"set_initial_spacetime_metric_data\"\nvalues_to_print = [\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"gammaDD00\"),rhs=gammaDD[0][0]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"gammaDD01\"),rhs=gammaDD[0][1]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"gammaDD02\"),rhs=gammaDD[0][2]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"gammaDD11\"),rhs=gammaDD[1][1]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"gammaDD12\"),rhs=gammaDD[1][2]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"gammaDD22\"),rhs=gammaDD[2][2]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"betaU0\"),rhs=betaU[0]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"betaU1\"),rhs=betaU[1]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"betaU2\"),rhs=betaU[2]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"alpha\"),rhs=alpha),\n ]\n\noutCfunction(\n outfile = os.path.join(Ccodesdir,name+\".h\"), desc=desc, name=name,\n params =\"const paramstruct *params,REAL *xx[3],REAL *auxevol_gfs\",\n body = fin.FD_outputC(\"returnstring\",values_to_print,params=\"outCverbose=False\"),\n loopopts = loopopts_id)\n\n```\n\n Output C function set_initial_spacetime_metric_data() to file GiRaFFE_standalone_Ccodes/set_initial_spacetime_metric_data.h\n\n\nNow, we will write out the initial data function for the GRFFE variables.\n\n\n```python\nimport GiRaFFEfood_NRPy.GiRaFFEfood_NRPy as gf\n\ngf.GiRaFFEfood_NRPy_generate_initial_data(ID_type = initial_data, stagger_enable = True)\n\nif initial_data==\"AlfvenWave\":\n desc = \"Generate Alfven wave 1D initial test data for GiRaFFEfood_NRPy.\"\nelif initial_data==\"FastWave\":\n desc = \"Generate fast wave 1D initial test data for GiRaFFEfood_NRPy.\"\nelif initial_data==\"DegenAlfvenWave\":\n desc = \"Generate degenerate Alfven wave 1D initial test data for GiRaFFEfood_NRPy.\"\nelif initial_data==\"ThreeWaves\":\n desc = \"Generate three waves 1D initial test data for GiRaFFEfood_NRPy.\"\nelif initial_data==\"FFE_Breakdown\":\n desc = \"Generate FFE breakdown 1D initial test data for GiRaFFEfood_NRPy.\"\nelif initial_data==\"AlignedRotator\":\n desc = \"Generate aligned rotator initial test data for GiRaFFEfood_NRPy.\"\nelse:\n print(\"Unsupported Initial Data string \"+initial_data+\"! 
Supported ID: AllTests, AlfvenWave, FastWave, DegenAlfvenWave, ThreeWaves, FFE_Breakdown, AlignedRotator, or ExactWald\")\n\nname = \"initial_data\"\n\nvalues_to_print = [\n lhrh(lhs=gri.gfaccess(\"out_gfs\",\"AD0\"),rhs=gf.AD[0]),\n lhrh(lhs=gri.gfaccess(\"out_gfs\",\"AD1\"),rhs=gf.AD[1]),\n lhrh(lhs=gri.gfaccess(\"out_gfs\",\"AD2\"),rhs=gf.AD[2]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"ValenciavU0\"),rhs=gf.ValenciavU[0]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"ValenciavU1\"),rhs=gf.ValenciavU[1]),\n lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"ValenciavU2\"),rhs=gf.ValenciavU[2]),\n# lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"BU0\"),rhs=gf.BU[0]),\n# lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"BU1\"),rhs=gf.BU[1]),\n# lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"BU2\"),rhs=gf.BU[2]),\n lhrh(lhs=gri.gfaccess(\"out_gfs\",\"psi6Phi\"),rhs=sp.sympify(0))\n ]\n\noutCfunction(\n outfile = os.path.join(Ccodesdir,name+\".h\"), desc=desc, name=name,\n params =\"const paramstruct *params,REAL *xx[3],REAL *auxevol_gfs,REAL *out_gfs\",\n body = fin.FD_outputC(\"returnstring\",values_to_print,params=\"outCverbose=False\"),\n loopopts =\"AllPoints,Read_xxs\")\n\n```\n\n Output C function initial_data() to file GiRaFFE_standalone_Ccodes/initial_data.h\n\n\n\n\n# Step 4: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \\[Back to [top](#toc)\\]\n$$\\label{cparams}$$\n\nBased on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.\n\nThen we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above\n\n\n```python\n# Step 3.e: Output C codes needed for declaring and setting Cparameters; also set free_parameters.h\n# Step 3.e.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h\npar.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))\n\n# Step 3.e.ii: Set free_parameters.h\nwith open(os.path.join(Ccodesdir,\"free_parameters.h\"),\"w\") as file:\n file.write(\"\"\"// Override parameter defaults with values based on command line arguments and NGHOSTS.\nparams.Nxx0 = atoi(argv[1]);\nparams.Nxx1 = atoi(argv[2]);\nparams.Nxx2 = atoi(argv[3]);\nparams.Nxx_plus_2NGHOSTS0 = params.Nxx0 + 2*NGHOSTS;\nparams.Nxx_plus_2NGHOSTS1 = params.Nxx1 + 2*NGHOSTS;\nparams.Nxx_plus_2NGHOSTS2 = params.Nxx2 + 2*NGHOSTS;\n// Step 0d: Set up space and time coordinates\n// Step 0d.i: Declare \\Delta x^i=dxx{0,1,2} and invdxx{0,1,2}, as well as xxmin[3] and xxmax[3]:\nconst REAL xxmin[3] = {-1.5,-0.1,-0.1};\nconst REAL xxmax[3] = { 1.5, 0.1, 0.1};\n//const REAL xxmin[3] = {-1.5,-1.5,-1.5};\n//const REAL xxmax[3] = { 1.5, 1.5, 1.5};\n\nparams.dxx0 = (xxmax[0] - xxmin[0]) / ((REAL)params.Nxx0+1);\nparams.dxx1 = (xxmax[1] - xxmin[1]) / ((REAL)params.Nxx1+1);\nparams.dxx2 = (xxmax[2] - xxmin[2]) / ((REAL)params.Nxx2+1);\nprintf(\"dxx0,dxx1,dxx2 = %.5e,%.5e,%.5e\\\\n\",params.dxx0,params.dxx1,params.dxx2);\nparams.invdx0 = 1.0 / params.dxx0;\nparams.invdx1 = 1.0 / params.dxx1;\nparams.invdx2 = 1.0 / params.dxx2;\n\nconst int poison_grids = 0;\n// Standard GRFFE parameters:\nparams.GAMMA_SPEED_LIMIT = 2000.0;\nparams.diss_strength = 0.1;\n\"\"\")\nif initial_data==\"ExactWald\":\n with open(os.path.join(out_dir,\"free_parameters.h\"),\"a\") as file:\n file.write(\"\"\"params.r0 = 0.4;\nparams.a = 0.0;\n\"\"\")\n\n```\n\n\n\n# Step 4: Set 
up boundary condition functions for chosen singular, curvilinear coordinate system \\[Back to [top](#toc)\\]\n$$\\label{bc_functs}$$\n\nNext apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)\n\n...But, for the moment, we're actually just using this because it writes the file `gridfunction_defines.h`.\n\n\n```python\nimport CurviBoundaryConditions.CurviBoundaryConditions as cbcs\ncbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,\"boundary_conditions/\"),Cparamspath=os.path.join(\"../\"),enable_copy_of_static_Ccodes=False)\n```\n\n Wrote to file \"GiRaFFE_standalone_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h\"\n Evolved parity: ( AD0:1, AD1:2, AD2:3, StildeD0:1, StildeD1:2, StildeD2:3,\n psi6Phi:0 )\n \n AuxEvol parity: ( AevolParen:0, BU0:1, BU1:2, BU2:3, B_lU0:1, B_lU1:2,\n B_lU2:3, B_rU0:1, B_rU1:2, B_rU2:3, PhievolParenU0:1, PhievolParenU1:2,\n PhievolParenU2:3, Stilde_flux_HLLED0:1, Stilde_flux_HLLED1:2,\n Stilde_flux_HLLED2:3, ValenciavU0:1, ValenciavU1:2, ValenciavU2:3,\n Valenciav_lU0:1, Valenciav_lU1:2, Valenciav_lU2:3, Valenciav_rU0:1,\n Valenciav_rU1:2, Valenciav_rU2:3, alpha:0, alpha_face:0, betaU0:1,\n betaU1:2, betaU2:3, beta_faceU0:1, beta_faceU1:2, beta_faceU2:3,\n gammaDD00:4, gammaDD01:5, gammaDD02:6, gammaDD11:7, gammaDD12:8,\n gammaDD22:9, gamma_faceDD00:4, gamma_faceDD01:5, gamma_faceDD02:6,\n gamma_faceDD11:7, gamma_faceDD12:8, gamma_faceDD22:9 )\n Wrote to file \"GiRaFFE_standalone_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h\"\n\n\n\n\n# Step 5: `GiRaFFE_NRPy_standalone.c`: The Main C Code \\[Back to [top](#toc)\\]\n$$\\label{mainc}$$\n\n\n```python\n# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),\n# and set the CFL_FACTOR (which can be overwritten at the command line)\n\nwith open(os.path.join(Ccodesdir,\"GiRaFFE_NRPy_REAL__NGHOSTS__CFL_FACTOR.h\"), \"w\") as file:\n file.write(\"\"\"\n// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER\n#define NGHOSTS \"\"\"+str(3)+\"\"\"\n#define NGHOSTS_A2B \"\"\"+str(2)+\"\"\"\n// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point\n// numbers are stored to at least ~16 significant digits\n#define REAL \"\"\"+REAL+\"\"\"\n// Part P0.c: Set the CFL Factor. Can be overwritten at command line.\nREAL CFL_FACTOR = \"\"\"+str(default_CFL_FACTOR)+\";\")\n```\n\n\n```python\n%%writefile $Ccodesdir/GiRaFFE_NRPy_standalone.c\n// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.\n#include \"GiRaFFE_NRPy_REAL__NGHOSTS__CFL_FACTOR.h\"\n\n#include \"declare_Cparameters_struct.h\"\n\nconst int NSKIP_1D_OUTPUT = 1;\n\n// Step P1: Import needed header files\n#include \"stdio.h\"\n#include \"stdlib.h\"\n#include \"math.h\"\n#include \"time.h\"\n#include \"stdint.h\" // Needed for Windows GCC 6.x compatibility\n#ifndef M_PI\n#define M_PI 3.141592653589793238462643383279502884L\n#endif\n#ifndef M_SQRT1_2\n#define M_SQRT1_2 0.707106781186547524400844362104849039L\n#endif\n\n// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of\n// data in a 1D array. In this case, consecutive values of \"i\"\n// (all other indices held to a fixed value) are consecutive in memory, where\n// consecutive values of \"j\" (fixing all other indices) are separated by\n// Nxx_plus_2NGHOSTS0 elements in memory. 
Similarly, consecutive values of\n// \"k\" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.\n#define IDX4S(g,i,j,k) \\\n( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )\n#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )\n#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )\n#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \\\n for(int i2=i2min;i2 x-1 and output initial data again, giving the exact solution.\n LOOP_REGION(0,Nxx_plus_2NGHOSTS0,0,1,0,1) {\n xx[0][i0] += -mu_AW*time;\n //xx[0][i0] += -time;\n }\n set_initial_spacetime_metric_data(¶ms,xx,auxevol_gfs_exact);\n initial_data(¶ms,xx,auxevol_gfs_exact,evol_gfs_exact);\n // Fill in the remaining quantities\n //driver_A_to_B(¶ms,evol_gfs_exact,auxevol_gfs_exact);\n GiRaFFE_NRPy_prims_to_cons(¶ms,auxevol_gfs_exact,evol_gfs_exact);\n // And now, we'll set the grid back to rights.\n LOOP_REGION(0,Nxx_plus_2NGHOSTS0,0,1,0,1) {\n xx[0][i0] -= -mu_AW*time;\n //xx[0][i0] -= -time;\n }\n sprintf(filename,\"out%d-%08d_exact.txt\",Nxx0,n);\n FILE *out2D_exact = fopen(filename, \"w\");\n for(int i0=0;i0 t+dt) in time using\n // chosen RK-like MoL timestepping algorithm\n#include \"MoLtimestepping/RK_MoL.h\"\n } // End main loop to progress forward in time.\n\n // Step 4: Free all allocated memory\n#include \"MoLtimestepping/RK_Free_Memory.h\"\n free(auxevol_gfs);\n free(auxevol_gfs_exact);\n free(evol_gfs_exact);\n for(int i=0;i<3;i++) free(xx[i]);\n return 0;\n}\n```\n\n Writing GiRaFFE_standalone_Ccodes//GiRaFFE_NRPy_standalone.c\n\n\n\n```python\ncmd.C_compile(os.path.join(Ccodesdir,\"GiRaFFE_NRPy_standalone.c\"),\n os.path.join(Ccodesdir,\"output\",\"GiRaFFE_NRPy_standalone\"),compile_mode=\"safe\")\n# !gcc -g -O2 -fopenmp GiRaFFE_standalone_Ccodes/GiRaFFE_NRPy_standalone.c -o GiRaFFE_NRPy_standalone -lm\n\n# Change to output directory\nos.chdir(outdir)\n# Clean up existing output files\ncmd.delete_existing_files(\"out*.txt\")\ncmd.delete_existing_files(\"out*.png\")\n# cmd.Execute(os.path.join(Ccodesdir,\"output\",\"GiRaFFE_NRPy_standalone\"), \"640 16 16\", os.path.join(outdir,\"out640.txt\"))\ncmd.Execute(\"GiRaFFE_NRPy_standalone\", \"119 7 7\",\"out119.txt\")\n# cmd.Execute(\"GiRaFFE_NRPy_standalone\", \"119 119 119\",\"out119.txt\")\n# cmd.Execute(\"GiRaFFE_NRPy_standalone\", \"239 15 15\",\"out239.txt\")\n# !OMP_NUM_THREADS=1 valgrind --track-origins=yes -v ./GiRaFFE_NRPy_standalone 1280 32 32\n# Return to root directory\nos.chdir(os.path.join(\"../../\"))\n```\n\n Compiling executable...\n (EXEC): Executing `gcc -std=gnu99 -O2 -g -fopenmp GiRaFFE_standalone_Ccodes/GiRaFFE_NRPy_standalone.c -o GiRaFFE_standalone_Ccodes/output/GiRaFFE_NRPy_standalone -lm`...\n (BENCH): Finished executing in 2.412449836730957 seconds.\n Finished compilation.\n (EXEC): Executing `taskset -c 0,1,2,3 ./GiRaFFE_NRPy_standalone 119 7 7`...\n (BENCH): Finished executing in 1.211672067642212 seconds.\n\n\nNow, we will load the data generated by the simulation and plot it in order to test for convergence. 
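As a brief aside on what "test for convergence" means here: since an exact solution is written out alongside the numerical one, the observed order of accuracy can be estimated by comparing the error at two resolutions whose grid spacings differ by a factor of two; if the error scales as $(\Delta x)^n$, then $n \approx \log_2(\mathrm{err}_\mathrm{coarse}/\mathrm{err}_\mathrm{fine})$. The sketch below is only illustrative and assumes the (currently commented-out) 239-point run has also been executed, so that the `out239-00000080` files exist at the same physical time as `out119-00000040`.

```python
# Hedged sketch of an observed-order estimate. Assumes both the 119- and 239-point
# runs have produced output at the same physical time (indices 40 and 80 here,
# since halving dx also halves dt through the CFL condition).
import os
import numpy as np

def observed_order(column=3, outdir=os.path.join("GiRaFFE_standalone_Ccodes", "output")):
    coarse       = np.loadtxt(os.path.join(outdir, "out119-00000040.txt"))
    coarse_exact = np.loadtxt(os.path.join(outdir, "out119-00000040_exact.txt"))
    fine         = np.loadtxt(os.path.join(outdir, "out239-00000080.txt"))
    fine_exact   = np.loadtxt(os.path.join(outdir, "out239-00000080_exact.txt"))
    # Mean absolute error over the interior points (ghost zones excluded)
    err_coarse = np.mean(np.abs(coarse[3:-3, column] - coarse_exact[3:-3, column]))
    err_fine   = np.mean(np.abs(fine[3:-3, column]   - fine_exact[3:-3, column]))
    # err ~ dx^n, and dx halves between the two runs, so n ~ log2 of the error ratio
    return np.log2(err_coarse / err_fine)
```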
\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nData_numer = np.loadtxt(os.path.join(\"GiRaFFE_standalone_Ccodes\",\"output\",\"out119-00000040.txt\"))\n# Data_num_2 = np.loadtxt(os.path.join(\"GiRaFFE_standalone_Ccodes\",\"output\",\"out239-00000080.txt\"))\n# Data_old = np.loadtxt(\"/home/penelson/OldCactus/Cactus/exe/ABE-GiRaFFEfood_1D_AlfvenWave/giraffe-grmhd_primitives_bi.x.asc\")\n# Data_o_2 = np.loadtxt(\"/home/penelson/OldCactus/Cactus/exe/ABE-GiRaFFEfood_1D_AlfvenWave_2/giraffe-grmhd_primitives_bi.x.asc\")\n# Data_numer = Data_old[5000:5125,11:15] # The column range is chosen for compatibility with the plotting script.\n# Data_num_2 = Data_o_2[19600:19845,11:15] # The column range is chosen for compatibility with the plotting script.\nData_exact = np.loadtxt(os.path.join(\"GiRaFFE_standalone_Ccodes\",\"output\",\"out119-00000040_exact.txt\"))\n# Data_exa_2 = np.loadtxt(os.path.join(\"GiRaFFE_standalone_Ccodes\",\"output\",\"out239-00000080_exact.txt\"))\n\npredicted_order = 2.0\ncolumn = 3\nplt.figure()\n# # plt.plot(Data_exact[2:-2,0],np.log2(np.absolute((Data_numer[2:-2,column]-Data_exact[2:-2,column])/\\\n# # (Data_num_2[2:-2:2,column]-Data_exa_2[2:-2:2,column]))),'.')\nplt.plot(Data_exact[:,0],Data_exact[:,column])\nplt.plot(Data_exact[:,0],Data_numer[:,column],'.')\n# plt.xlim(-0.0,1.0)\n# # plt.ylim(-1.0,5.0)\n# # plt.ylim(-0.0005,0.0005)\n# plt.xlabel(\"x\")\n# plt.ylabel(\"BU2\")\nplt.show()\n\n# # 0 1 2 3 4 5 6 7 8 9 10 11 12 13\n# labels = [\"x\",\"BU0\",\"BU1\",\"BU2\",\"AD0\",\"AD1\",\"AD2\",\"StildeD0\",\"StildeD1\",\"StildeD2\",\"ValenciavU0\",\"ValenciavU1\",\"ValenciavU2\", \"psi6Phi\"]\n# old_files = [\"\",\n# \"giraffe-grmhd_primitives_bi.x.asc\",\"giraffe-grmhd_primitives_bi.x.asc\",\"giraffe-grmhd_primitives_bi.x.asc\",\n# # \"giraffe-em_ax.x.asc\",\"giraffe-em_ay.x.asc\",\"giraffe-em_az.x.asc\",\n# \"cell_centered_Ai.txt\",\"cell_centered_Ai.txt\",\"cell_centered_Ai.txt\",\n# \"giraffe-grmhd_conservatives.x.asc\",\"giraffe-grmhd_conservatives.x.asc\",\"giraffe-grmhd_conservatives.x.asc\",\n# \"giraffe-grmhd_primitives_allbutbi.x.asc\",\"giraffe-grmhd_primitives_allbutbi.x.asc\",\"giraffe-grmhd_primitives_allbutbi.x.asc\",\n# \"giraffe-em_psi6phi.x.asc\"]\n# column = 5\n# column_old = [0,12,13,14,0,1,2,12,13,14,12,13,14,12]\n# old_path = \"/home/penelson/OldCactus/Cactus/exe/ABE-GiRaFFEfood_1D_AlfvenWave\"\n# new_path = os.path.join(\"GiRaFFE_standalone_Ccodes\",\"output\")\n# data_old = np.loadtxt(os.path.join(old_path,old_files[column]))\n# # data_old = data_old[250:375,:]# Select only the second timestep\n# # data_old = data_old[125:250,:]# Select only the first timestep\n# # data_old = data_old[0:125,:]# Select only the zeroth timestep\n# data_new = np.loadtxt(os.path.join(new_path,\"out119-00000001.txt\"))\n\n# deltaA_old = data_old[125:250,:] - data_old[0:125,:]\n# data_new_t0 = np.loadtxt(os.path.join(new_path,\"out119-00000000.txt\"))\n# deltaA_new = data_new[:,:] - data_new_t0[:,:]\n\n# plt.figure()\n# # plt.plot(data_new[3:-3,0],data_new[3:-3,column]-data_old[3:-3,column_old[column]])\n# # plt.plot(data_new[:,0],data_new[:,column]-((3*np.sin(5*np.pi*data_new[:,0]/np.sqrt(1 - (-0.5)**2))/20 + 23/20)*(data_new[:,0]/2 + np.sqrt(1 - (-0.5)**2)/20 + np.absolute(data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10)/2)*(-1e-100/2 + data_new[:,0]/2 - np.sqrt(1 - (-0.5)**2)/20 - np.absolute(-1e-100 + data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)/2)/((-1e-100 + data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)*(1e-100 + data_new[:,0] + np.sqrt(1 - 
(-0.5)**2)/10)) + 13*(data_new[:,0]/2 - np.sqrt(1 - (-0.5)**2)/20 + np.absolute(data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)/2)/(10*(1e-100 + data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)) + (-1e-100/2 + data_new[:,0]/2 + np.sqrt(1 - (-0.5)**2)/20 - np.absolute(-1e-100 + data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10)/2)/(-1e-100 + data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10))/np.sqrt(1 - (-0.5)**2))\n# # plt.plot(data_new[1:,0]-(data_new[0,0]-data_new[1,0])/2.0,(data_new[0:-1,column]+data_new[1:,column])/2,'.',label=\"GiRaFFE_NRPy+injected BU\")\n# # plt.plot(data_new[1:,0]-(data_new[0,0]-data_new[1,0])/2.0,data_old[1:,column_old[column]],label=\"old GiRaFFE\")\n# # -(data_old[0,9]-data_old[1,9])/2.0\n# # plt.plot(data_new[3:-3,0],deltaA_new[3:-3,column],'.')\n# plt.plot(data_new[3:-3,0],deltaA_old[3:-3,column_old[column]]-deltaA_new[3:-3,column])\n# # plt.xlim(-0.1,0.1)\n# # plt.ylim(-0.2,0.2)\n# plt.legend()\n# plt.xlabel(labels[0])\n# plt.ylabel(labels[column])\n# plt.show()\n# # print(np.argmin(deltaA_old[3:-3,column_old[column]]-deltaA_new[3:-3,column]))\n```\n\nThis code will create an animation of the wave over time.\n\n\n```python\n# import matplotlib.pyplot as plt\nfrom matplotlib.pyplot import savefig\nfrom IPython.display import HTML\nimport matplotlib.image as mgimg\n\nimport glob\nimport sys\nfrom matplotlib import animation\n\ncmd.delete_existing_files(\"out119-00*.png\")\nglobby = glob.glob(os.path.join('GiRaFFE_standalone_Ccodes','output','out119-00*.txt'))\nfile_list = []\nfor x in sorted(globby):\n file_list.append(x)\n\nnumber_of_files = int(len(file_list)/2)\n\nfor timestep in range(number_of_files):\n fig = plt.figure()\n numer_filename = file_list[2*timestep]\n exact_filename = file_list[2*timestep+1]\n Numer = np.loadtxt(numer_filename)\n Exact = np.loadtxt(exact_filename)\n\n plt.title(\"Alfven Wave\")\n plt.xlabel(\"x\")\n plt.ylabel(\"BU2\")\n plt.xlim(-0.5,0.5)\n plt.ylim(1.0,1.7)\n\n plt.plot(Numer[3:-3,0],Numer[3:-3,3],'.',label=\"Numerical\")\n plt.plot(Exact[3:-3,0],Exact[3:-3,3],label=\"Exact\")\n plt.legend()\n savefig(numer_filename+\".png\",dpi=150)\n plt.close(fig)\n sys.stdout.write(\"%c[2K\" % 27)\n sys.stdout.write(\"Processing file \"+numer_filename+\"\\r\")\n sys.stdout.flush()\n```\n\n \u001b[2KProcessing file GiRaFFE_standalone_Ccodes/output/out119-00000040.txt\r\n\n\n```python\n## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##\n# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame\n# https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation\n# !rm -f GiRaFFE_NRPy-1D_tests.mp4\ncmd.delete_existing_files(\"GiRaFFE_NRPy-1D_tests.mp4\")\n\nfig = plt.figure(frameon=False)\nax = fig.add_axes([0, 0, 1, 1])\nax.axis('off')\n\nmyimages = []\n\nfor i in range(number_of_files):\n img = mgimg.imread(file_list[2*i]+\".png\")\n imgplot = plt.imshow(img)\n myimages.append([imgplot])\n\nani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)\nplt.close()\nani.save('GiRaFFE_NRPy-1D_tests.mp4', fps=5,dpi=150)\n```\n\n\n```python\n%%HTML\n\n```\n\n\n\n\n\n\n\n```python\nimport cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\ncmd.output_Jupyter_notebook_to_LaTeXed_PDF(\"Tutorial-GiRaFFE_NRPy_Main_Driver\",location_of_template_file=os.path.join(\"..\"))\n```\n\n Created Tutorial-GiRaFFE_NRPy_Main_Driver.tex, and compiled LaTeX file to\n PDF file Tutorial-GiRaFFE_NRPy_Main_Driver.pdf\n\n", "meta": 
{"hexsha": "3bcb73db09a086055e84d73fab29d94673426736", "size": 55727, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "in_progress/Tutorial-Start_to_Finish-GiRaFFE_NRPy-1D_tests-unstaggered.ipynb", "max_stars_repo_name": "Harmohit-Singh/nrpytutorial", "max_stars_repo_head_hexsha": "81e6fe09c6882a2d95e1d0ea57f465fc7eda41e1", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 66, "max_stars_repo_stars_event_min_datetime": "2018-06-26T22:18:09.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-09T21:12:33.000Z", "max_issues_repo_path": "in_progress/Tutorial-Start_to_Finish-GiRaFFE_NRPy-1D_tests-unstaggered.ipynb", "max_issues_repo_name": "Harmohit-Singh/nrpytutorial", "max_issues_repo_head_hexsha": "81e6fe09c6882a2d95e1d0ea57f465fc7eda41e1", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 14, "max_issues_repo_issues_event_min_datetime": "2020-02-13T16:09:29.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-12T14:59:59.000Z", "max_forks_repo_path": "in_progress/Tutorial-Start_to_Finish-GiRaFFE_NRPy-1D_tests-unstaggered.ipynb", "max_forks_repo_name": "Harmohit-Singh/nrpytutorial", "max_forks_repo_head_hexsha": "81e6fe09c6882a2d95e1d0ea57f465fc7eda41e1", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 30, "max_forks_repo_forks_event_min_datetime": "2019-01-09T09:57:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T18:45:08.000Z", "avg_line_length": 57.8080912863, "max_line_length": 6488, "alphanum_fraction": 0.6444272974, "converted": true, "num_tokens": 12076, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5698526514141571, "lm_q2_score": 0.47657965106367595, "lm_q1q2_score": 0.27158017776866955}} {"text": "# \u7b80\u5355\u7406\u89e3 RHF \u542b\u9891\u6781\u5316\u7387\u53ca\u5176\u4e0e TD-HF \u95f4\u7684\u5173\u7cfb\n\n> \u521b\u5efa\u65e5\u671f\uff1a2020-01-02\n>\n> \u6700\u540e\u4fee\u6539\uff1a2020-06-10\n\n\u542b\u9891\u6781\u5316\u7387\u5728\u975e\u7ebf\u6027\u5149\u5b66\u4e2d\u6709\u6240\u5e94\u7528\u3002\u8fd9\u91cc\u6307\u7684\u9891\u7387\u662f\u5165\u5c04\u6fc0\u53d1\u5149\u9891\u7387\uff0c\u800c\u975e\u5206\u5b50\u632f\u52a8\u9891\u7387\u3002\u542b\u9891\u6781\u5316\u7387\u82f1\u6587\u662f Frequency-Dependent Polarizability\uff0c\u4e5f\u6709\u65f6\u4f7f\u7528\u52a8\u6001 Dynamic \u66ff\u4ee3\u542b\u9891 Frequency-Dependent\uff1b\u76f8\u5bf9\u5730\uff0c\u6ca1\u6709\u5165\u5c04\u6fc0\u53d1\u5149\u7ed9\u51fa\u7684\u6781\u5316\u7387\u79f0\u4e3a\u9759\u6001 Static \u6781\u5316\u7387\u3002\n\n\u4f46\u8fd9\u4e5f\u53ea\u662f\u9053\u542c\u9014\u8bf4\u3002\u5bf9\u4e8e\u6211\u6765\u8bb2\u66f4\u76f4\u63a5\u7684\u610f\u4e49\u4f1a\u662f\uff0c\u542b\u9891\u6781\u5316\u7387\u5bf9\u5750\u6807\u7684\u4e00\u9636\u5bfc\u6570\u53ef\u4ee5\u7528\u4e8e\u8ba1\u7b97\u542b\u9891 Raman \u5149\u8c31\u3002\n\n\u5199\u8fd9\u7bc7\u6587\u6863\u4e00\u5f00\u59cb\u7684\u539f\u56e0\u662f\uff0c\u66fe\u7ecf\u5728\u5c1d\u8bd5\u8ba1\u7b97\u7b80\u5355\u7684 SERS \u5149\u8c31\u65f6\uff0c\u53d1\u73b0 Valley, Schatz et al. 
[^Valley-Schatz.JPCL.2013] \u7684\u542b\u9891 Raman \u5149\u8c31\u8ba1\u7b97\u7684\u6587\u7ae0\u63d0\u5230\u4f7f\u7528 TD-DFT (time-dependent density functional theory)\uff1b\u5176\u5b83\u6587\u732e\u4e5f\u51e0\u4e4e\u65e0\u4e00\u4f8b\u5916\u3002\u8fd9\u591a\u5c11\u5bf9\u6211\u6765\u8bf4\u6709\u70b9\u610f\u5916\u3002Raman \u5149\u8c31\u7684\u8ba1\u7b97\u901a\u8fc7\u6c42\u53d6\u6781\u5316\u7387\u5bf9\u7b80\u6b63\u5750\u6807\u7684\u5bfc\u6570 (\u4e0d\u7ba1\u662f\u89e3\u6790\u7684\u8fd8\u662f\u6570\u503c\u7684) \u5f97\u5230\uff0c\u800c\u6781\u5316\u7387\u5219\u53ef\u4ee5\u901a\u8fc7 CP-HF (coupled-perturbated Hartree-Fock) \u65b9\u7a0b\u7ed9\u51fa\u3002\u6b64\u524d\u6211\u786e\u5b9e\u5730\u6210\u529f\u5f97\u5230\u4e86 Gaussian \u6240\u7ed9\u51fa\u7684 RKS (GGA level) \u6781\u5316\u7387\uff0c\u5e76\u4e14\u5e76\u6ca1\u6709\u4f7f\u7528 TD-DFT \u8ba1\u7b97\uff0c\u800c\u662f CP-KS (coupled-perturbated Kohn-Sham) \u65b9\u7a0b\u8ba1\u7b97\u5f97\u5230\u7684\u6781\u5316\u7387\uff1b\u6211\u66fe\u7ecf\u4e00\u5ea6\u4ee5\u4e3a\u8fd9\u662f ADF \u8f6f\u4ef6\u4e0e Gaussian \u8f6f\u4ef6\u4e24\u8005\u7684\u533a\u522b\u3002\u540e\u6765\u5728\u5404\u8def\u540c\u5b66\u7684\u63d0\u9192\u4e0b\uff0c\u624d\u6e10\u6e10\u660e\u767d\u6781\u5316\u7387\u4e0e\u542b\u65f6\u5206\u6790 (TD) \u4e4b\u95f4\u7684\u5173\u7cfb\u3002\n\n\u8fd9\u7bc7\u6587\u6863\u5c06\u4f1a\u5ffd\u7565\u5927\u90e8\u5206\u4e0e\u516c\u5f0f\u63a8\u5bfc\u6709\u5173\u7684\u95ee\u9898\uff1b\u8fd9\u662f\u7531\u4e8e TD-DFT \u6216 TD-HF \u7684\u516c\u5f0f\u63a8\u5bfc\u5e76\u4e0d\u7b80\u5355\uff1b\u5728\u77ed\u65f6\u95f4\u4e4b\u5185\u6211\u91cd\u590d\u4e0d\u51fa\u8ba9\u6211\u81ea\u5df1\u4fe1\u670d\u7684\u63a8\u5bfc\u3002\u8fd9\u7bc7\u6587\u6863\u4e0d\u8ba8\u8bba\u4e0e\u590d\u6570\u6709\u5173\u7684\u8bdd\u9898\uff0c\u53d8\u91cf\u4e0e\u516c\u5f0f\u5168\u90e8\u91c7\u7528\u5b9e\u6570\u4e0e\u5b9e\u51fd\u6570\u3002\n\n\u6211\u4eec\u4f1a\u4f7f\u7528\u975e\u5bf9\u79f0\u7684\u53cc\u6c27\u6c34\u5206\u5b50\u4f5c\u4e3a\u6f14\u793a\u5206\u5b50\uff0c\u57fa\u7ec4\u4f7f\u7528 6-31G\u3002\u8ba1\u7b97\u7a0b\u5e8f\u4f7f\u7528 PySCF \u63d0\u4f9b\u7535\u5b50\u79ef\u5206\uff0c\u5e76\u4e0e Gaussian \u7684\u542b\u9891\u6781\u5316\u7387\u3001PySCF \u7684\u6fc0\u53d1\u9891\u7387\u8ba1\u7b97\u7ed3\u679c\u4f5c\u5bf9\u5e94\u3002\n\n\n```python\n%matplotlib notebook\n\nimport numpy as np\nimport scipy\nfrom pyscf import gto, scf, tdscf\nfrom functools import partial\nimport matplotlib.pyplot as plt\nfrom matplotlib import patches\nfrom formchk_interface import FormchkInterface\n\nnp.einsum = partial(np.einsum, optimize=[\"greedy\", 1024 ** 3 * 2 / 8])\nnp.set_printoptions(5, linewidth=150, suppress=True)\n```\n\n\u5168\u6587\u4f7f\u7528\u4ee5\u4e0b\u8bb0\u53f7\uff1a\n\n- $p, q, r, s, m$ \u8868\u793a\u5168\u90e8\u8f68\u9053\n\n- $i, j$ \u8868\u793a\u5360\u636e\u5206\u5b50\u8f68\u9053\n\n- $a, b$ \u8868\u793a\u975e\u5360\u5206\u5b50\u8f68\u9053\n\n- $\\mu, \\nu, \\kappa, \\lambda$ \u8868\u793a\u539f\u5b50\u8f68\u9053\n\n- $t, s$ \u5728\u4e0d\u5f15\u8d77\u6b67\u4e49\u7684\u60c5\u51b5\u4e0b\u8868\u793a\u7a7a\u95f4\u53d6\u5411 $x, y, z$\n\n- $P, Q, R, S$ \u5728\u8fd9\u7bc7\u6587\u6863\u8868\u793a\u7c7b\u4f3c\u4e8e $ai$ \u7684\u7ec4\u5408\u4e0b\u6807\n\n- $n$ \u8868\u793a TD-HF \u6fc0\u53d1\u6001\n\n\u5168\u6587\u4f7f\u7528\u7b80\u5316\u4e0e\u4e0d\u4e25\u683c\u7684 Einstein Summation Convention\u3002\n\n\u4e0b\u9762\u8865\u5145\u4e00\u4e2a\u539f\u5b50\u5355\u4f4d\u80fd\u91cf $E_\\mathrm{h}$ \u5230\u6ce2\u6570 $\\mathrm{cm}^{-1}$ \u7684\u6362\u7b97 `Eh_cm`\uff1a\n\n$$\n1 \\, E_\\mathrm{h} = 
219474.6 \\, \\mathrm{cm}^{-1}\n$$\n\n\n```python\nfrom scipy.constants import physical_constants\nEh_cm = physical_constants[\"hartree-inverse meter relationship\"][0] / 100\nEh_cm\n```\n\n\n\n\n 219474.6313702\n\n\n\n## \u5206\u5b50\u4f53\u7cfb\u4e0e\u6807\u51c6\u7ed3\u679c\n\n:::{admonition} \u9605\u8bfb\u63d0\u793a\n\n\u6211\u4eec\u4f1a\u82b1\u5f88\u957f\u7684\u65f6\u95f4\u8fdb\u884c\u5206\u5b50\u4f53\u7cfb\u4e0e\u6807\u51c6\u7ed3\u679c\u7684\u5b9a\u4e49\u3002\u5982\u679c\u5bf9\u4ee3\u7801\u4e0e\u6587\u672c\u9605\u8bfb\u80fd\u529b\u6709\u4fe1\u5fc3\uff0c\u8fd9\u6bb5\u53ef\u4ee5\u8df3\u8fc7\u3002\n\n:::\n\n### PySCF \u4f53\u7cfb\u5b9a\u4e49\n\n\u5728\u8fdb\u5165\u4e0b\u9762\u7684\u8ba8\u8bba\u524d\uff0c\u6211\u4eec\u5148\u5b9a\u4e49\u5982\u4e0b\u53d8\u91cf\uff1a\n\n- `mol` PySCF \u5206\u5b50\u5b9e\u4f8b\n\n\n```python\nmol = gto.Mole()\nmol.atom = \"\"\"\nO 0.0 0.0 0.0\nO 0.0 0.0 1.5\nH 1.0 0.0 0.0\nH 0.0 0.7 1.0\n\"\"\"\nmol.basis = \"6-31G\"\nmol.verbose = 0\nmol.build()\n```\n\n\n\n\n \n\n\n\n- `nao` \u8f68\u9053\u6570\u91cf $n_\\mathrm{AO}$, `nocc` \u5360\u636e\u8f68\u9053\u6570 $n_\\mathrm{occ}$, `nvir` \u975e\u5360\u8f68\u9053\u6570 $n_\\mathrm{vir}$\n\n- `so` \u5360\u636e\u8f68\u9053\u5206\u5272\uff0c`sv` \u975e\u5360\u8f68\u9053\u5206\u5272\uff0c`sa` \u5168\u8f68\u9053\u5206\u5272\n\n- `eri0_ao` \u539f\u5b50\u8f68\u9053\u57fa\u7ec4\u53cc\u7535\u5b50\u79ef\u5206 ERI (electron repulsion integral)\n\n $$\n (\\mu \\nu | \\kappa \\lambda) = \\int \\phi_\\mu (\\boldsymbol{r}) \\phi_\\nu (\\boldsymbol{r}) \\frac{1}{|\\boldsymbol{r} - \\boldsymbol{r}'|} \\phi_\\kappa (\\boldsymbol{r}') \\phi_\\lambda (\\boldsymbol{r}') \\, \\mathrm{d} \\boldsymbol{r} \\, \\mathrm{d} \\boldsymbol{r}'\n $$\n\n- `d_ao` \u5076\u6781\u79ef\u5206\uff0c\u5176\u4e2d\u4e0b\u8ff0\u7684 $t$ \u6216\u540e\u6587\u4f1a\u51fa\u73b0\u7684 $s$ \u8868\u793a\u5076\u6781\u79ef\u5206\u7684\u65b9\u5411 $x, y$ \u6216 $z$\n\n $$\n d_{\\mu \\nu}^t = - \\langle \\mu | t | \\nu \\rangle = \\int \\phi_\\mu (\\boldsymbol{r}) t \\phi_\\nu (\\boldsymbol{r}) \\, \\mathrm{d} \\boldsymbol{r}\n $$\n\n\n```python\nnao = nmo = mol.nao\nnocc = mol.nelec[0]\nnvir = nmo - nocc\nso, sv, sa = slice(0, nocc), slice(nocc, nmo), slice(0, nmo)\n```\n\n\n```python\neri0_ao = mol.intor(\"int2e\")\nd_ao = - mol.intor(\"int1e_r\")\n```\n\n- `scf_eng` PySCF \u7684 RHF \u8ba1\u7b97\u5b9e\u4f8b\n\n\n```python\nscf_eng = scf.RHF(mol).run()\n```\n\n- `C` $C_{\\mu p}$ \u5206\u5b50\u8f68\u9053\u7cfb\u6570\n\n- `e` $e_p$ RHF \u8f68\u9053\u80fd\u91cf\n\n- `D` $D_{\\mu \\nu} = 2 C_{\\mu i} C_{\\nu i}$ RHF \u7535\u5b50\u6001\u5bc6\u5ea6\n\n\n```python\nC, e = scf_eng.mo_coeff, scf_eng.mo_energy\nD = 2 * C[:, so] @ C[:, so].T\n```\n\n- `eri0_mo` \u5206\u5b50\u8f68\u9053\u53cc\u7535\u5b50 ERI $(pq|rs) = C_{\\mu p} C_{\\nu q} (\\mu \\nu | \\kappa \\lambda) C_{\\kappa r} C_{\\lambda s}$\n\n- `d_mo` \u5206\u5b50\u8f68\u9053\u5076\u6781\u79ef\u5206 $d^t_{pq} = C_{\\mu p} d^t_{\\mu \\nu} C_{\\nu q}$\n\n- `d_ia` \u5360\u636e-\u975e\u5360\u7684\u5206\u5b50\u8f68\u9053\u5076\u6781\u79ef\u5206 $d^t_{ia}$\n\n- `d_P` \u4ee5\u53cc\u4e0b\u6807 $P = ia$ \u4e3a\u8bb0\u53f7\u7684\u5360\u636e-\u975e\u5360\u5206\u5b50\u8f68\u9053\u5076\u6781\u79ef\u5206 $d^t_P$\n\n\n```python\neri0_mo = np.einsum(\"up, vq, uvkl, kr, ls -> pqrs\", C, C, eri0_ao, C, C)\nd_mo = np.einsum(\"up, tuv, vq -> tpq\", C, d_ao, C)\nd_ia = d_mo[:, so, sv]\nd_P = d_ia.reshape(3, nocc*nvir)\n```\n\n- `scf_td` PySCF \u7684 TD-RHF \u8ba1\u7b97\u5b9e\u4f8b\n\n\n```python\nscf_td = tdscf.TDHF(scf_eng)\nscf_td.nstates 
= nvir * nocc\nscf_td.run()\n```\n\n\n\n\n \n\n\n\n\u5c31\u6211\u76ee\u524d\u6240\u77e5\uff0cPySCF \u53ef\u4ee5\u8ba1\u7b97\u6781\u5316\u7387\uff1b\u4f46\u4e0e\u542b\u9891\u6781\u5316\u7387\u6709\u5173\u7684\u8ba1\u7b97\uff0c\u6211\u5728\u8fd9\u91cc\u91c7\u7528\u4e0b\u6587\u6240\u8ff0\u7684\u3001\u4e0e Gaussian \u53ef\u4ee5\u5927\u81f4\u5339\u914d\u7ed3\u679c\u7684\u7a0b\u5e8f\u3002\n\n### Gaussian \u8ba1\u7b97\u542b\u9891\u6781\u5316\u7387\n\n\u6211\u4eec\u9700\u8981\u4e00\u4e2a\u53ef\u4ee5\u6838\u9a8c\u7ed3\u679c\u7684\u7ed3\u679c\u4e0e\u5de5\u5177\u3002Gaussian \u4e8b\u5b9e\u4e0a\u63d0\u4f9b\u4e86\u542b\u9891\u7684\u6781\u5316\u7387\u7684\u9009\u9879\u3002\u8fd9\u4e00\u5c0f\u6bb5\u6211\u4eec\u7b80\u5355\u4e86\u89e3\u5982\u4f55\u4f7f\u7528 Gaussian \u6765\u8ba1\u7b97\u542b\u9891\u6781\u5316\u7387\u3002\n\n\u6211\u4eec\u9996\u5148\u7ed9\u7ed9\u51fa\u4e00\u4e2a\u793a\u8303\u7684\u4f8b\u5b50\u3002\u8fd9\u4e2a\u4f8b\u5b50\u53ea\u662f\u6f14\u793a\uff0c\u5e76\u4e0d\u80fd\u4ee3\u8868\u771f\u5b9e\u7684\u7269\u7406\u3002\n\nGaussian \u7684\u8f93\u5165\u5361 {download}`assets/H2O2_freq_polar_example.gjf` \u5982\u4e0b\uff1a\n\n\n```python\nwith open(\"assets/H2O2_freq_polar_example.gjf\", \"r\") as f:\n print(f.read()[:-1])\n```\n\n %chk=H2O2_freq_polar_example.chk\n # RHF/6-31G NoSymm Freq(Raman) CPHF=RdFreq\n \n H2O2 Frequency-Dependent Polarizability (Raman)\n \n 0 1\n O 0.0 0.0 0.0\n O 0.0 0.0 1.5\n H 1.0 0.0 0.0\n H 0.0 0.7 1.0\n \n 1nm\n\n\n\u4e8b\u5b9e\u4e0a\u8fd9\u6bb5\u7a0b\u5e8f\u4e0d\u53ea\u8ba1\u7b97\u4e86\u542b\u9891\u6781\u5316\u7387\uff0c\u8fd8\u8ba1\u7b97\u4e86\u542b\u9891 Raman\uff1b\u4f46\u8fd9\u4efd\u6587\u6863\u53ea\u8ba8\u8bba\u6781\u5316\u7387\u95ee\u9898\u3002\u8fd9\u91cc\u7684\u201c\u542b\u9891\u201d\u6307\u7684\u662f\u4e24\u4e2a\u9891\u7387\uff0c\u5176\u4e00\u4e3a\u9759\u6001 (static) \u6781\u5316\u7387\uff0c\u5176\u4e8c\u662f\u5165\u5c04\u5149\u7ebf\u4e3a $\\omega = 1 \\, \\mathrm{nm}$ \u9891\u7387\u4e0b\u7684\u6781\u5316\u7387\u3002\u540e\u8005\u662f\u4e00\u4e2a\u76f8\u5f53\u6781\u7aef\u7684\u4f8b\u5b50\uff0c\u56e0\u4e3a\u901a\u5e38\u53ef\u80fd\u6ca1\u6709\u4eba\u4f1a\u60f3\u5230\u7528 X-Ray \u7167\u5c04\u4e00\u4e2a\u666e\u901a\u7684\u6db2\u4f53\u5206\u5b50\u3002\n\nGaussian \u7a0b\u5e8f\u9ed8\u8ba4\u4f1a\u8f93\u51fa .out \u6216 .log \u6587\u4ef6\u4f5c\u4e3a\u6587\u672c\u4fe1\u606f\u8f93\u51fa\uff0c\u5176\u8f93\u51fa\u6587\u4ef6\u5728 {download}`assets/H2O2_freq_polar_example.out` \u4e2d\u3002\u6211\u4eec\u8fd8\u8981\u6c42 Gaussian \u8f93\u51fa .chk \u6587\u4ef6\uff0c\u8be5\u6587\u4ef6\u53ea\u5305\u542b\u5355\u7eaf\u7684\u8ba1\u7b97\u6570\u636e\u4fe1\u606f\uff0c\u5176\u901a\u8fc7 Gaussian Utility `formchk` \u5bfc\u51fa\u4e3a ASCII \u683c\u5f0f\u7684\u6587\u4ef6\u5728 {download}`assets/H2O2_freq_polar_example.fch`\u3002\n\n\u5bf9\u4e8e .out \u6587\u4ef6\uff0c\u6211\u4eec\u53ef\u4ee5\u7528\u4e0b\u8ff0\u547d\u4ee4\u4ec5\u67e5\u770b\u542b\u9891\u6781\u5316\u7387\uff1a\n\n\n```python\nwith open(\"assets/H2O2_freq_polar_example.out\", \"r\") as f:\n while f.readable():\n line = f.readline()\n if \"Alpha(-w,w) frequency\" in line:\n print(line[:-1])\n for _ in range(4):\n print(f.readline()[:-1])\n if \"Beta(-w,w,0) frequency\" in line:\n break\n```\n\n Property number 1 -- Alpha(-w,w) frequency 1 0.000000:\n 1 2 3 \n 1 0.658142D+01 -0.841017D-01 -0.145378D+01\n 2 -0.841017D-01 0.426836D+01 0.399688D+00\n 3 -0.145378D+01 0.399688D+00 0.178903D+02\n Property number 1 -- Alpha(-w,w) frequency 2 45.563353:\n 1 2 3 \n 1 -0.339006D-02 0.323942D-04 -0.262886D-04\n 2 0.323942D-04 
-0.353767D-02 0.379883D-03\n 3 -0.262886D-04 0.379883D-03 -0.437398D-02\n\n\n\u5bf9\u4e8e\u6bcf\u4e2a\u9891\u7387\uff0c\u7a0b\u5e8f\u4f1a\u7ed9\u51fa\u4e00\u4e2a $3 \\times 3$ \u5927\u5c0f\u7684\u77e9\u9635\uff1b\u8fd9\u5c31\u662f\u6781\u5316\u7387\u5f20\u91cf $\\alpha_{ts} (-\\omega, \\omega)$\uff0c\u5176\u4e2d $t, s$ \u53ef\u4ee5\u662f $x, y, z$\u3002\u4ee5\u540e\u7684\u6587\u6863\u4e2d\uff0c\u6211\u4eec\u4f1a\u7b80\u8bb0 $\\alpha_{ts} (-\\omega, \\omega)$ \u4e3a $\\alpha_{ts} (\\omega)$\u3002\n\n\u6781\u5316\u7387\u5355\u4f4d\u662f\u539f\u5b50\u5355\u4f4d\uff1b\u5173\u4e8e\u6781\u5316\u7387\u539f\u5b50\u5355\u4f4d\u4e0e SI \u5355\u4f4d\u5236\u7684\u8f6c\u6362\uff0c\u53c2\u8003\u4e0b\u8ff0 [NIST \u7f51\u9875](https://www.physics.nist.gov/cgi-bin/cuu/Value?auepol)\u3002\n\n\u4e0a\u8ff0\u51fa\u73b0\u7684\u4e24\u4e2a\u9891\u7387\u503c `0.000000`, `45.563353` \u5e76\u4e0d\u662f\u4ee5 $\\mathrm{nm}$ \u4e3a\u5355\u4f4d\uff0c\u800c\u662f\u4ee5 $E_\\mathrm{h}$ Hartree \u4e3a\u5355\u4f4d\u3002\u5173\u4e8e\u4e0a\u8ff0\u5355\u4f4d\u6362\u7b97\u7684\u8fc7\u7a0b\uff0c\u6211\u4eec\u91c7\u7528\u4e0b\u8ff0\u4ee3\u7801\u6765\u8bf4\u660e\uff1a\n\n\n```python\n1 / Eh_cm * 1e7\n```\n\n\n\n\n 45.56335252766616\n\n\n\n\u4e0a\u9762\u51fa\u73b0\u7684 `Eh_cm` \u5df2\u7ecf\u5728\u6587\u6863\u5f00\u5934\u6709\u6240\u89e3\u91ca\u3002\u8fd9\u6837\u6211\u4eec\u5c31\u5b8c\u6210\u4e86 .out \u6587\u4ef6\u7684\u542b\u9891\u6781\u5316\u7387\u7684\u8bfb\u53d6\u3002\n\n\u4f46\u5728\u540e\u9762\u7684\u6587\u6863\u4e2d\uff0c\u6211\u4eec\u5c06\u4f1a\u5229\u7528 .chk \u6587\u4ef6 (\u6216\u8005\u51e0\u4e4e\u7b49\u4ef7\u7684 .fch \u6587\u4ef6) \u7684\u4fe1\u606f\uff0c\u6765\u7ed9\u51fa Gaussian \u8ba1\u7b97\u7684\u6807\u51c6\u7ed3\u679c\u3002\u5728\u8fd9\u4efd\u6587\u6863\u4e2d\uff0c\u6211\u4eec\u4f1a\u5229\u7528\u5230\u7684\u952e\u503c\u6709\u4e24\u6bb5\uff1a`Frequencies for FD properties` \u50a8\u5b58\u4e86\u4ee5 $E_\\mathrm{h}$ Hartree \u4e3a\u5355\u4f4d\u7684\u9891\u7387 $\\omega$\uff0c`Alpha(-w,w)` \u50a8\u5b58\u4e86\u4ee5\u539f\u5b50\u5355\u4f4d\u50a8\u5b58\u7684\u6781\u5316\u7387\u5f20\u91cf $\\alpha_{ts} (\\omega)$\u3002\n\n\n```python\nwith open(\"assets/H2O2_freq_polar_example.fch\", \"r\") as f:\n print_flag = False\n while f.readable():\n line = f.readline()\n if \"Frequencies for FD properties\" in line:\n print_flag = True\n if \"Beta(-w,w,0)\" in line:\n break\n if print_flag is True:\n print(line[:-1])\n```\n\n Frequencies for FD properties R N= 2\n 0.00000000E+00 4.55633525E+01\n Alpha(-w,w) R N= 18\n 6.58141820E+00 -8.41017140E-02 -1.45378248E+00 -8.41017140E-02 4.26835620E+00\n 3.99687823E-01 -1.45378248E+00 3.99687823E-01 1.78903287E+01 -3.39006427E-03\n 3.23941556E-05 -2.62886421E-05 3.23941556E-05 -3.53766531E-03 3.79883414E-04\n -2.62886421E-05 3.79883414E-04 -4.37398325E-03\n\n\n\u6211\u4eec\u53ef\u4ee5\u770b\u5230\uff0c\u81f3\u5c11\u4ece\u7a0b\u5e8f\u8f93\u51fa\u7684\u89d2\u5ea6\u6765\u8bb2\uff0c\u5305\u542b\u9891\u7387\u7684\u6781\u5316\u7387\u5f88\u53ef\u80fd\u4e0e\u4e0d\u542b\u6781\u5316\u7387\u7684\u9891\u7387\u503c\u76f8\u53bb\u751a\u8fdc\u3002\u5f53\u7136\uff0c\u81f3\u4e8e\u5728 $1 \\, \\mathrm{nm}$ \u5982\u6b64\u9ad8\u7684\u5149\u80fd\u91cf\u4e0b\uff0c\u8fd9\u4e2a\u542b\u9891\u6781\u5316\u7387\u662f\u5426\u5728\u7269\u7406\u4e0a\u6b63\u786e\uff0c\u4e0d\u662f\u8fd9\u7bc7\u6587\u6863\u8ba8\u8bba\u7684\u95ee\u9898\u3002\n\n\u901a\u8fc7\u6211\u4eec\u5bfc\u5165\u7684 `FormchkInterface` (\u53d6\u81ea [pyxdh 
\u9879\u76ee](https://github.com/ajz34/Py_xDH/blob/master/pyxdh/Utilities/formchk_interface.py))\uff0c\u6211\u4eec\u4e5f\u53ef\u4ee5\u5bf9\u4e0a\u8ff0\u6570\u503c\u5bfc\u5165\u5230 numpy \u5411\u91cf\u4e2d\u3002\u8b6c\u5982\u6211\u4eec\u9700\u8981\u63d0\u53d6\u6240\u6709\u9891\u7387\uff0c\u90a3\u4e48\u4e0b\u9762\u7684\u4ee3\u7801\u5c31\u53ef\u4ee5\u7ed9\u51fa\uff1a\n\n\n```python\nfchk_helper = FormchkInterface(\"assets/H2O2_freq_polar_example.fch\")\nfchk_helper.key_to_value(\"Frequencies for FD properties\")\n```\n\n\n\n\n array([ 0. , 45.56335])\n\n\n\n\u4e0b\u9762\u662f Gaussian \u6240\u7ed9\u51fa\u7684\u9759\u6001\u6781\u5316\u7387 `ref_alpha_static` $\\alpha_{ts} (-\\omega, \\omega)$\u3002\u4e0b\u4e00\u6bb5\u6211\u4eec\u4f1a\u5148\u56de\u987e\u5982\u4f55\u901a\u8fc7 CP-HF \u65b9\u7a0b\uff0c\u5f97\u5230\u9759\u6001\u6781\u5316\u7387\u3002\u4e0b\u8ff0\u7684\u77e9\u9635\u5c06\u53ef\u4ee5\u662f\u6211\u4eec\u8ba1\u7b97\u7ed3\u679c\u7684\u53c2\u8003\u503c\u3002\n\n\n```python\nref_alpha_static = fchk_helper.key_to_value(\"Alpha(-w,w)\")[:9].reshape(3, 3)\nref_alpha_static\n```\n\n\n\n\n array([[ 6.58142, -0.0841 , -1.45378],\n [-0.0841 , 4.26836, 0.39969],\n [-1.45378, 0.39969, 17.89033]])\n\n\n\n\u518d\u4e4b\u540e\u7684\u6bb5\u843d\uff0c\u6211\u4eec\u4f1a\u9700\u8981\u8ba1\u7b97\u542b\u9891\u6781\u5316\u7387\u3002Gaussian \u4e00\u6b21\u6027\u81f3\u591a\u8ba1\u7b97 100 \u4e2a\u9891\u7387\u4e0b\u7684\u5149\u5b66\u6027\u8d28\uff0c\u5426\u5219\u7a0b\u5e8f\u4f1a\u62a5\u9519\uff1b\u56e0\u6b64\u6211\u4eec\u7684\u8f93\u5165\u6587\u4ef6\u5c06\u4f1a\u662f\u591a\u4e2a\u6587\u4ef6\uff0c\u8fd9\u91cc\u4e0d\u5217\u4e3e\u5176\u8d85\u94fe\u63a5\u3002\u6211\u4eec\u7528 `freq_all_list` \u8868\u793a\u542b\u9891\u6781\u5316\u7387\u5bf9\u5e94\u7684\u9891\u7387\uff0c\u800c `alpha_all_list` \u8868\u793a\u8fd9\u4e9b\u9891\u7387\u4e0b\u7684\u6781\u5316\u7387\u3002\n\n\n```python\nfreq_full_list = []\nalpha_full_list = []\nfor idx in (1, 2, 3):\n fchk_helper = FormchkInterface(\"assets/H2O2_freq_polar_{:1d}.fch\".format(idx))\n freq_full_list.append(fchk_helper.key_to_value(\"Frequencies for FD properties\")[1:])\n alpha_full_list.append(fchk_helper.key_to_value(\"Alpha(-w,w)\").reshape(-1, 3, 3)[1:])\nfreq_full_list = np.concatenate(freq_full_list)\nalpha_full_list = np.concatenate(alpha_full_list)\n```\n\n\u5c06\u6240\u83b7\u5f97\u7684\u542b\u9891\u6781\u5316\u7387 (\u4ec5\u7ed8\u5236\u5176\u4e2d\u4e00\u4e2a\u5206\u91cf $\\alpha_{zz} (\\omega)$) \u7ed8\u56fe\u53ef\u4ee5\u5f97\u5230\uff1a\n\n\n```python\nfig, ax = plt.subplots()\nax.plot(freq_full_list, alpha_full_list[:, 2, 2])\nrect = patches.Rectangle((0.184, -24), 0.01, 78, linewidth=1, edgecolor='C1', facecolor='C1', alpha=.25)\nax.add_patch(rect)\nax.set_ylim(-25, 75)\nax.set_xlabel(r\"$\\omega$ / $E_\\mathrm{h}$\")\nax.set_ylabel(r\"$\\alpha_{zz} (\\omega)$ / a.u.\")\nax.set_title(\"Frequency-Dependent Polarizability of $\\mathrm{H_2O_2}$ (RHF/6-31G)\")\nfig.show()\n```\n\n\n 
\n\n\n\n\n\n\n\u542b\u9891\u6781\u5316\u7387\u7684\u56fe\u50cf\u5728\u5904\u4e8e\u5206\u5b50\u7684\u6fc0\u53d1\u6001\u533a\u57df\u4f1a\u5448\u5267\u70c8\u632f\u8361\u3002\u5728\u540e\u7eed\u6587\u6863\u4e2d\uff0c\u6211\u4eec\u4f1a\u5148\u66f4\u591a\u5730\u770b\u4e0a\u56fe\u4e2d\u6a59\u8272\u533a\u57df\u8868\u793a\u7684\u524d\u4e24\u4e2a\u6fc0\u53d1\u6001\u3002\u7531\u4e8e\u4e0a\u56fe\u5bf9\u6a59\u8272\u90e8\u5206\u7684\u63cf\u8ff0\u4e0d\u5f88\u7cbe\u7ec6\uff0c\u6211\u4eec\u4e0b\u9762\u505a\u4e00\u4efd\u66f4\u7cbe\u7ec6\u7684\u6781\u5316\u7387\u56fe\u7ed8\u5236\u3002\n\n\n```python\nfchk_helper = FormchkInterface(\"assets/H2O2_freq_polar_small_range.fch\")\nfreq_small_list = fchk_helper.key_to_value(\"Frequencies for FD properties\")[1:]\nalpha_small_list = fchk_helper.key_to_value(\"Alpha(-w,w)\").reshape(-1, 3, 3)[1:]\n```\n\n\n```python\nfig, ax = plt.subplots()\nax.plot(freq_small_list, alpha_small_list[:, 2, 2])\nax.set_xlabel(r\"$\\omega$ / $E_\\mathrm{h}$\")\nax.set_ylabel(r\"$\\alpha_{zz} (\\omega)$ / a.u.\")\nax.set_title(\"Frequency-Dependent Polarizability of $\\mathrm{H_2O_2}$ (RHF/6-31G)\\nFor First Two Excited States\")\nfig.show()\n```\n\n\n \n\n\n\n\n\n\n\u4ece\u8fd9\u4e24\u5f20\u56fe\u7684\u7eb5\u5750\u6807\u7684\u7f29\u653e\u5173\u7cfb\u6765\u770b\uff0c\u4e8b\u5b9e\u4e0a\uff0c\u5bf9\u4e8e\u6bcf\u4e00\u4e2a\u632f\u8361\u5cf0\uff0c\u5176\u632f\u8361\u662f\u8d8b\u4e8e\u65e0\u7a77\u5927\u7684\uff1b\u5e76\u4e14\u5176\u5bf9\u5e94\u7684\u9891\u7387\u6070\u597d\u662f\u5206\u5b50 TD-HF \u8ba1\u7b97\u5f97\u5230\u7684\u6fc0\u53d1\u80fd\u3002\u8fd9\u5c06\u4f1a\u5728\u540e\u9762\u7684\u6587\u6863\u4e2d\u53d9\u8ff0\u5e76\u9a8c\u8bc1\u3002\n\n## TD-HF \u65b9\u7a0b\u8fc7\u7a0b\u56de\u987e\n\n### TD-HF \u65b9\u7a0b\u4e0e\u6fc0\u53d1\u80fd\n\n\u4e00\u822c\u4f1a\u8ba4\u4e3a\uff0cTD \u65b9\u6cd5\u662f\u7528\u4e8e\u6c42\u89e3\u4e0e\u7535\u5b50\u6fc0\u53d1\u8fc7\u7a0b\u6709\u5173\u7684\u65b9\u6cd5\u3002\u6700\u5e38\u7528\u7684\u5e94\u7528\u5373\u662f\u6c42\u89e3\u6fc0\u53d1\u80fd\u4e0e\u8dc3\u8fc1\u5076\u6781\u77e9\u3002\u6211\u4eec\u5728\u8fd9\u4e00\u6bb5\u5148\u56de\u987e\u8fd9\u4e24\u8005\u7684\u8ba1\u7b97\u8fc7\u7a0b\u3002\n\n\u5728\u8fdb\u884c\u540e\u7eed\u7684\u63cf\u8ff0\u524d\uff0c\u6211\u4eec\u4f1a\u5b9a\u4e49\u4e0b\u8ff0\u4e0e TD-HF \u65b9\u7a0b\u6709\u5173\u7684\u5f20\u91cf\u6216\u77e9\u9635 `A` $\\mathbb{A}_{ia, jb}$ \u4e0e `B` $\\mathbb{B}_{ia, jb}$\uff1a\n\n$$\n\\begin{align}\n\\mathbb{A}_{ia, jb} &= (\\varepsilon_a - \\varepsilon_i) \\delta_{ij} \\delta_{ab} + 2 (ia|jb) - (ij|ab) \\\\\n\\mathbb{B}_{ia, jb} &= 2 (ia|jb) - (ib|ja)\n\\end{align}\n$$\n\n\u5176\u4e2d\u4e24\u4e2a\u8f85\u52a9\u53d8\u91cf\u4e3a\uff1a\n\n- `delta_ij` $\\delta_{ij}$ \u4e3a\u5360\u636e\u8f68\u9053\u6570\u7ef4\u5ea6\u7684\u5355\u4f4d\u77e9\u9635\n\n- `delta_ab` $\\delta_{ab}$ \u4e3a\u975e\u5360\u8f68\u9053\u6570\u7ef4\u5ea6\u7684\u5355\u4f4d\u77e9\u9635\n\n\n```python\ndelta_ij, delta_ab = np.eye(nocc), np.eye(nvir)\n```\n\n\n```python\nA_iajb = (\n np.einsum(\"ia, ij, ab -> iajb\", - e[so, None] + e[sv], delta_ij, delta_ab)\n + 2 * eri0_mo[so, sv, so, sv]\n - eri0_mo[so, so, sv, sv].swapaxes(1, 2))\nB_iajb = (\n + 2 * eri0_mo[so, sv, so, sv]\n - eri0_mo[so, sv, so, sv].swapaxes(1, 3))\n```\n\n\u4e3a\u4e86\u540e\u6587\u7684\u4ee3\u7801\u65b9\u4fbf\uff0c\u6211\u4eec\u628a\u53cc\u4e0b\u6807\u7684\u77e9\u9635\u8bb0\u4e3a `A` $A_{PQ}$ \u4e0e `B` $B_{PQ}$\uff1a\n\n\n```python\nA = A_iajb.reshape(nocc*nvir, nocc*nvir)\nB = B_iajb.reshape(nocc*nvir, nocc*nvir)\n```\n\n\u6839\u636e TD-DFT \u4e2d\u7684 Casida 
\u65b9\u7a0b (TD-DFT \u53ef\u4ee5\u770b\u4f5c\u662f TD-HF \u60c5\u5f62\u7684\u6269\u5c55)\uff0c\u6211\u4eec\u53ef\u4ee5\u5199\u51fa TD-HF \u7684\u9891\u7387\u53ca\u5176\u5bf9\u5e94\u7684\u672c\u5f81\u5411\u91cf\u4e3a $\\omega_n, X_{ia}^n, Y_{ia}^n$\uff0c\u6216\u8005\u53cc\u4e0b\u6807\u8bb0\u53f7\u7684 $X_{P}^n, Y_{P}^n$\u3002\u5176\u4e2d\uff0c$X_{ia}^n$ \u6709\u65f6\u79f0\u4e3a\u7b2c $n$ \u4e2a\u6fc0\u53d1\u6001\u7684\u6fc0\u53d1\u77e9\u9635\uff0c$Y_{ia}^n$ \u5219\u79f0\u9000\u6fc0\u53d1\u77e9\u9635\u3002\u8fd9\u51e0\u8005\u4e4b\u95f4\u6ee1\u8db3\u4e0b\u8ff0 TD-HF \u77e9\u9635\u65b9\u7a0b\u3002\n\n$$\n\\begin{pmatrix} \\mathbb{A} & \\mathbb{B} \\\\ - \\mathbb{B} & - \\mathbb{A} \\end{pmatrix}\n\\begin{pmatrix} \\mathbf{X}^n \\\\ \\mathbf{Y}^n \\end{pmatrix}\n= \\omega_n \\begin{pmatrix} \\mathbf{X}^n \\\\ \\mathbf{Y}^n \\end{pmatrix}\n$$\n\n\u6211\u4eec\u5728\u7a0b\u5e8f\u4e0a\uff0c\u8bb0\u7b49\u5f0f\u5de6\u8fb9\u7684\u5927\u77e9\u9635\u4e3a `AB`\u3002\n\n\n```python\nAB = np.block([\n [ A, B],\n [- B, - A]\n])\nAB.shape\n```\n\n\n\n\n (234, 234)\n\n\n\n\u6211\u4eec\u9996\u5148\u89e3\u51fa\u4e0a\u8ff0\u77e9\u9635\u7684\u672c\u5f81\u503c `eigs` \u4e0e\u672c\u5f81\u5411\u91cf `xys`\uff1a\n\n\n```python\neigs, xys = np.linalg.eig(AB)\n```\n\n\u4f46\u6211\u4eec\u4f1a\u53d1\u73b0\uff0c\u6211\u4eec\u672c\u6765\u9884\u671f\u7684\u5728 6-31G \u57fa\u7ec4\u4e0b\u53ef\u89e3\u7684\u6fc0\u53d1\u6001\u6570\u91cf\u53ea\u6709 $n_\\mathrm{occ} n_\\mathrm{vir} = 117$\uff0c\u4f46\u672c\u5f81\u503c\u6570\u91cf\u5374\u662f 234 \u4e2a\u3002\u6211\u4eec\u9700\u8981\u820d\u53bb\u6240\u6709\u8d1f\u503c\u7684\u672c\u5f81\u503c\u3002\u4e8b\u5b9e\u4e0a\uff0c\u8d1f\u503c\u7684\u672c\u5f81\u503c\u4e0e\u6b63\u503c\u7684\u672c\u5f81\u503c\u4e4b\u95f4\u6709\u4e00\u4e00\u5bf9\u5e94\u7684\u5173\u7cfb\u3002\n\n\n```python\n(eigs < 0).sum()\n```\n\n\n\n\n 117\n\n\n\n\u6211\u4eec\u820d\u53bb\u8d1f\u672c\u5f81\u503c\u53ca\u5176\u5bf9\u5e94\u7684\u672c\u5f81\u5411\u91cf\uff0c\u5e76\u5bf9\u672c\u5f81\u503c\u4f5c\u6392\u5e8f\uff0c\u5f97\u5230\u6b63\u7684\u672c\u5f81\u503c `eigs_sorted` \u53ca\u5176\u76f8\u5bf9\u5e94\u7684\u672c\u5f81\u5411\u91cf `xys_sorted`\uff1a\n\n\n```python\neigs_sorted = eigs[eigs.argsort()[int(eigs.size / 2):]]\nxys_sorted = xys[:, eigs.argsort()[int(eigs.size / 2):]]\n```\n\n\u6211\u4eec\u5e94\u5f53\u53ef\u4ee5\u9a8c\u8bc1\uff0c\u4e0a\u8ff0\u672c\u5f81\u503c\u4e0e\u672c\u5f81\u5411\u91cf\u786e\u5b9e\u6ee1\u8db3 TD-HF \u77e9\u9635\u65b9\u7a0b\uff1a\n\n\n```python\nnp.allclose(AB @ xys_sorted, eigs_sorted * xys_sorted)\n```\n\n\n\n\n True\n\n\n\n\u6700\u540e\uff0c\u6211\u4eec\u7528 `td_eig` $\\omega_n$\u3001`td_x_unnormed` \u672a\u5f52\u4e00\u5316\u7684 $X^n_P$\u3001`td_y_unnormed` \u672a\u5f52\u4e00\u5316\u7684 $Y^n_P$ \u6765\u91cd\u65b0\u6574\u7406\u4e0a\u8ff0\u7684\u7ed3\u679c `eigs_sorted` \u4e0e `xys_sorted`\u3002\u9700\u8981\u6ce8\u610f\uff0c\u53d8\u91cf `td_x_unnormed` \u7684\u4e24\u4e2a\u7ef4\u5ea6\u4e2d\uff0c\u7b2c\u4e00\u4e2a\u7ef4\u5ea6\u4e3a\u6fc0\u53d1\u6001 $n$\uff0c\u7b2c\u4e8c\u4e2a\u7ef4\u5ea6\u4e3a\u53cc\u4e0b\u6807 $P = ia$\uff1b\u5c3d\u7ba1\u4e24\u4e2a\u7ef4\u5ea6\u7684\u5927\u5c0f\u90fd\u662f $n_\\mathrm{occ} n_\\mathrm{vir} = 117$\uff0c\u4f46\u610f\u4e49\u5b8c\u5168\u4e0d\u540c\u3002\n\n\n```python\ntd_eig = eigs_sorted\ntd_x_unnormed = xys_sorted.T[:, :nvir*nocc]\ntd_y_unnormed = xys_sorted.T[:, 
nvir*nocc:]\n```\n\n\u6211\u4eec\u7b80\u5355\u770b\u4e00\u4e0b\u6700\u4f4e\u6fc0\u53d1\u80fd\u7684\u51e0\u4e2a\u6fc0\u53d1\u6001\u7684\u80fd\u7ea7\u5927\u5c0f\uff0c\u5355\u4f4d\u662f\u539f\u5b50\u5355\u4f4d\u6216 Hartree $E_\\mathrm{h}$\uff1a\n\n\n```python\neigs_sorted[:10]\n```\n\n\n\n\n array([0.18674, 0.19114, 0.35357, 0.39384, 0.41744, 0.42516, 0.45701, 0.4702 , 0.50732, 0.55833])\n\n\n\n\u6211\u4eec\u80fd\u770b\u5230\u6700\u4f4e\u7684\u6fc0\u53d1\u6001\u4e2d\uff0c\u6709 0.187 \u4e0e 0.191\uff1b\u8fd9\u6070\u597d\u4e0e\u4e0a\u9762 Gaussian \u7ed8\u5236\u51fa\u6765\u7684\u542b\u9891\u6781\u5316\u7387\u56fe\u4e2d\u7684\u4e24\u4e2a\u632f\u8361\u5cf0\u4f4d\u7f6e\u6070\u597d\u543b\u5408\u3002\u8fd9\u5e76\u975e\u662f\u5076\u7136\uff0c\u5e76\u4e14\u6211\u4eec\u4f1a\u5728\u540e\u6587\u8fdb\u884c\u66f4\u8be6\u7ec6\u7684\u63cf\u8ff0\u3002\n\n### TD-HF \u8dc3\u8fc1\u5076\u6781\u77e9\n\n\u4ece\u57fa\u6001\u6ce2\u51fd\u6570 $| 0 \\rangle$ \u5230\u6fc0\u53d1\u6001\u6ce2\u51fd\u6570 $| n \\rangle$ \u7684\u8dc3\u8fc1\u5076\u6781\u77e9\u53ef\u4ee5\u5199\u4f5c $\\langle 0 | \\hat d{}^t | n \\rangle$ \u6216\u7b49\u4ef7\u7684 $- \\langle 0 | t | n \\rangle$\uff1b\u7559\u610f\u5230 $t \\in \\{ x, y, z \\}$\u3002\u4f46\u5b9e\u65bd\u4e0a\uff0c\u6211\u4eec\u5c1a\u4e0d\u80fd\u5199\u51fa\u6fc0\u53d1\u6001\u6ce2\u51fd\u6570 $| n \\rangle$ \u7684\u5177\u4f53\u5f62\u5f0f\u3002\u8fd9\u4e2a\u6fc0\u53d1\u6001\u6ce2\u51fd\u6570\u9700\u8981\u901a\u8fc7\u6fc0\u53d1\u77e9\u9635 $X_{ia}^n$ \u4e0e\u9000\u6fc0\u53d1\u77e9\u9635 $Y_{ia}^n$ \u6765\u63cf\u8ff0\u3002\n\n\u521a\u624d\u7684\u8ba1\u7b97\u4e2d\uff0c\u6211\u4eec\u5f97\u5230\u7684\u672c\u5f81\u5411\u91cf\u662f\u672a\u7ecf\u5f52\u4e00\u5316\u7684\uff1b\u5b83\u4e58\u4ee5\u4efb\u4f55\u975e\u96f6\u5e38\u6570\uff0c\u4ecd\u7136\u4f1a\u662f TD-HF \u77e9\u9635\u65b9\u7a0b\u7684\u672c\u5f81\u5411\u91cf\u3002\u4f46\u6211\u4eec\u53ef\u4ee5\u4f7f\u7528\u6fc0\u53d1\u4e0e\u9000\u6fc0\u53d1\uff0c\u8d4b\u4e88\u8fd9\u4e2a\u672c\u5f81\u5411\u91cf\u4ee5\u7269\u7406\u542b\u4e49\u3002\u5176\u5f52\u4e00\u5316\u6761\u4ef6\u662f\uff0c\u6001 $| n \\rangle$ \u7684\u7535\u5b50\u6570\u5b88\u6052\uff0c\u5373\u4e0e $| 0 \\rangle$ \u7684\u7535\u5b50\u6570\u76f8\u540c\u3002\u5728 RHF \u95ee\u9898\u4e0b\uff0c\u8fd9\u8981\u6c42\n\n$$\n(X_{ia}^n)^2 - (Y_{ia}^n)^2 = 2\n$$\n\n\u6211\u4eec\u4ee4\u5f52\u4e00\u5316\u8fc7\u7a0b\u4e2d\u7684\u4e2d\u95f4\u91cf\u4e3a `td_renorm` $N_n = \\frac{1}{2} \\left( (X_{ia}^n)^2 - (Y_{ia}^n)^2 \\right)$\uff1a\n\n\n```python\ntd_renorm = ((td_x_unnormed**2).sum(axis=1) - (td_y_unnormed**2).sum(axis=1)) / 2\n```\n\n\u90a3\u4e48\u91cd\u65b0\u5f52\u4e00\u5316\u540e\u7684 `X` $X_P^n$ \u4e0e `Y` $Y_P^n$ \u4e3a\n\n\n```python\nX = td_x_unnormed / np.sqrt(td_renorm)[:, None]\nY = td_y_unnormed / np.sqrt(td_renorm)[:, None]\n```\n\n\u4e3a\u4e86\u5904\u7406\u4e00\u4e9b\u95ee\u9898\u7684\u4fbf\u5229\uff0c\u6211\u4eec\u58f0\u660e\u53d8\u91cf `X_ia` $X_{ia}^n$ \u4e0e `Y_ia` $Y_{ia}^n$\uff1b\u5b83\u4eec\u7684\u7ef4\u5ea6\u5747\u662f $(n, i, a)$\uff1a\n\n\n```python\nX_ia = X.reshape(nocc*nvir, nocc, nvir)\nY_ia = Y.reshape(nocc*nvir, nocc, nvir)\nX_ia.shape\n```\n\n\n\n\n (117, 9, 13)\n\n\n\n\u4ee5\u6b64\u4e3a\u57fa\u7840\uff0c\u6211\u4eec\u53ef\u4ee5\u5199\u51fa TD-HF \u7684\u8dc3\u8fc1\u5076\u6781\u77e9 `td_transdip`\n\n$$\n\\langle 0 | \\hat d{}^t | n \\rangle = d_{ia}^t (X_{ia}^n + Y_{ia}^n)\n$$\n\n\u6211\u4eec\u4f1a\u6253\u5370\u51fa\u6700\u4f4e\u80fd\u7ea7\u7684 5 \u4e2a\u6fc0\u53d1\u6001\u7684\u8dc3\u8fc1\u5076\u6781\u77e9\uff1a\n\n\n```python\ntd_transdip = np.einsum(\"tia, 
nia -> nt\", d_ia, X_ia + Y_ia)\ntd_transdip[:5]\n```\n\n\n\n\n array([[ 0.00096, -0.01194, 0.10696],\n [-0.01912, -0.02168, 0.07287],\n [ 0.01189, 0.06348, -0.09044],\n [-0.28032, 0.11958, 1.16236],\n [-0.11797, 0.0042 , -0.50674]])\n\n\n\n\u8fd9\u4e0e PySCF \u6240\u7ed9\u51fa\u7684\u8dc3\u8fc1\u5076\u6781\u77e9\u51e0\u4e4e\u662f\u76f8\u540c\u7684\uff0c\u4f46\u7b26\u53f7\u4e0a\u4f1a\u6709\u5dee\u5f02\u3002\u6211\u4eec\u8ba4\u4e3a\u8fd9\u5df2\u7ecf\u5b8c\u6574\u5e76\u6210\u529f\u5730\u91cd\u590d\u4e86\u8dc3\u8fc1\u5076\u6781\u77e9\u4e86\u3002\n\n\n```python\nscf_td.transition_dipole()[:5]\n```\n\n\n\n\n array([[ 0.00096, -0.01194, 0.10696],\n [ 0.01912, 0.02168, -0.07287],\n [ 0.01189, 0.06348, -0.09044],\n [ 0.28032, -0.11958, -1.16236],\n [-0.11797, 0.0042 , -0.50674]])\n\n\n\n\u9700\u8981\u6ce8\u610f\uff0c\u8fd9\u53ef\u80fd\u4e0e Gaussian \u8ba1\u7b97\u5f97\u5230\u7684\u8dc3\u8fc1\u5076\u6781\u77e9\u7684\u503c\u63a5\u8fd1\u4f46\u5e76\u4e0d\u5b8c\u5168\u76f8\u7b49\u3002\u8fd9\u53ef\u80fd\u4e0e Gaussian \u9ed8\u8ba4\u7684 TD-HF \u7cbe\u5ea6\u504f\u4f4e\u6709\u5173\u3002\n\n## \u9759\u6001\u6781\u5316\u7387\n\n### \u5076\u6781\u5fae\u6270\u4e0b\u7684 CP-HF \u65b9\u7a0b\n\n\u8fd9\u7bc7\u6587\u6863\u7684\u4e00\u4e2a\u76ee\u7684\u662f\u5c06 CP-HF \u65b9\u7a0b\u4e0e TD-HF \u65b9\u7a0b\u4e4b\u95f4\u7684\u5173\u7cfb\u4f5c\u4e00\u4e2a\u8054\u7cfb\u3002\u56e0\u6b64\uff0c\u6211\u4eec\u9700\u8981\u9996\u5148\u4e86\u89e3 CP-HF \u65b9\u7a0b\u5728\u9759\u6001\u6781\u5316\u7387\u4e2d\u7684\u5de5\u4f5c\u8fc7\u7a0b\u3002\n\n:::{attention}\n\n\u5c3d\u7ba1\u6211\u4eec\u786e\u5b9e\u53ef\u4ee5\u7528\u540e\u7eed\u6587\u6863\u4e2d\u7684\u4ee3\u7801\u6216\u516c\u5f0f\u8ba1\u7b97\u5f97\u5230\u4e00\u4e9b\u7ed3\u679c\uff0c\u4f46\u8fd9\u5e76\u4e0d\u610f\u5473\u7740\u6210\u578b\u7684\u91cf\u5316\u8f6f\u4ef6\u4e5f\u4f7f\u7528\u8fd9\u4e9b\u7b97\u6cd5\u3002\u8b6c\u5982\u5bf9\u4e8e RHF \u4e0b\u9759\u6001\u6781\u5316\u7387\u8ba1\u7b97\uff0c\u901a\u5e38\u66f4\u9ad8\u6548\u7684\u505a\u6cd5\u662f\u4f7f\u7528\u7c7b\u4f3c\u4e8e CP-HF \u65b9\u7a0b\u7684 Z-Vector \u65b9\u7a0b\u3002\n\n:::\n\n\u7531\u6b64\uff0c\u6211\u4eec\u4f1a\u5199\u5076\u6781\u5fae\u6270\u4e0b\u7684 CP-HF \u65b9\u7a0b\u4e3a\n\n$$\nA'_{ia, jb} U^t_{jb} = d^t_{ia}\n$$\n\n\u4e0b\u9762\u7b80\u5355\u4f46\u4e0d\u4e25\u8c28\u5730\u56de\u987e CP-HF \u65b9\u7a0b\u7684\u63a8\u5bfc\u601d\u8def\u3002\u6211\u4eec\u5728\u5206\u5b50\u4f53\u7cfb\u4e0a\uff0c\u5916\u52a0\u4e00\u4e2a\u5fae\u6270\u5076\u6781\u573a\uff0c\u5176\u5927\u5c0f\u662f\u5206\u5b50\u8f68\u9053\u57fa\u7ec4\u4e0b\u7684 $d_{pq}^t$ \u5076\u6781\u77e9\u9635\uff0c\u5fae\u6270\u54c8\u5bc6\u987f\u91cf\u4e3a $t$ (\u5373\u5355\u4f4d\u65b9\u5411\u4e3a $t$ \u7684\u7535\u573a\u5fae\u6270)\u3002\u6839\u636e RHF \u7684\u53d8\u5206\u6761\u4ef6\uff0c\u4efb\u4f55\u5916\u52a0\u5fae\u6270\u7684\u54c8\u5bc6\u987f\u91cf $t$ \u90fd\u5e94\u8be5\u6ee1\u8db3\n\n$$\n\\frac{\\partial F_{pq}}{\\partial t} = 0\n$$\n\n\u901a\u8fc7\u8be5\u5f0f\uff0c\u51e0\u4e4e\u53ef\u4ee5\u76f4\u63a5\u5f97\u5230 CP-HF \u65b9\u7a0b\u3002\u65b9\u7a0b\u7684\u5de6\u8fb9\u5b9a\u4e49\u4e0a\u662f\u5076\u6781\u79ef\u5206\uff0c\u53f3\u8fb9\u7684 `A_p` $A'_{ia, jb}$ \u4e3a\n\n$$\nA'_{ia, jb} = (\\varepsilon_a - \\varepsilon_i) \\delta_{ij} \\delta_{ab} + 4 (ia|jb) - (ij|ab) - (ib|ja)\n$$\n\n\u800c `U_ia` $U_{jb}^t$ \u79f0\u4e3a U 
\u77e9\u9635\uff0c\u8868\u793a\u7684\u662f\u4e0e\u7535\u5b50\u6001\u5bc6\u5ea6\u5728\u5916\u52a0\u5076\u6781\u5fae\u6270\u5f71\u54cd\u4e0b\u7684\u53d8\u5316\u6709\u5173\u7684\u91cf\uff1b\u4e00\u79cd\u5bfc\u51fa\u5f0f\u5982\u4e0b\uff1a\n\n$$\n\\frac{\\partial D_{pq}}{\\partial t} = D_{pm} U^t_{mq} + D_{mq} U^t_{mp}\n$$\n\n\u56e0\u6b64\uff0cCP-HF \u7684\u4e00\u79cd\u76f4\u89c2\u7684\u89e3\u91ca\u601d\u8def\u662f\uff0c\u5b83\u6c42\u53d6\u7684\u662f\u5206\u5b50\u53d7\u5230\u5916\u52a0\u5076\u6781\u7684\u5fae\u6270 $d^t_{ia}$ \u4e0b\uff0c\u7535\u5b50\u6001\u5bc6\u5ea6\u5f62\u53d8\u7684\u5927\u5c0f\uff0c\u800c\u8fd9\u4e2a\u5927\u5c0f\u662f\u7531 $U_{jb}^t$ U \u77e9\u9635\u523b\u753b\u7684\u3002\u5f88\u5bb9\u6613\u60f3\u5230\u7684\u6027\u8d28\u662f\uff0c\u82e5\u5916\u52a0\u5076\u6781\u5fae\u6270\u8d8b\u4e8e\u96f6\uff0c\u90a3\u4e48\u5916\u52a0\u7684\u5f62\u53d8\u5fae\u6270\u4e5f\u8d8b\u4e8e\u96f6\u77e9\u9635\u3002\n\n\u4e0b\u9762\u6211\u4eec\u6765\u6c42\u53d6 CP-HF \u65b9\u7a0b\uff0c\u7ed9\u51fa `A_p` $A'_{PQ} = A'_{ia, jb}$ \u4e0e `U_ia` $U_{jb}^t$\u3002\u9700\u8981\u6ce8\u610f\u5728\u8fd9\u4efd\u6587\u6863\u4e2d\uff0c\u89d2\u6807\u987a\u5e8f\u662f $ia, jb$ \u800c\u975e $ai, bj$\uff1b\u8fd9\u53ef\u80fd\u4e0e\u5176\u5b83\u8bfe\u672c\u6216\u6587\u6863\u7684\u987a\u5e8f\u4e0d\u592a\u76f8\u540c\uff0c\u5728\u4e00\u4e9b\u77e9\u9635\u7684\u6b63\u8d1f\u53f7\u4e0a\u4e5f\u53ef\u80fd\u5b58\u5728\u5dee\u5f02\u3002\n\n\n```python\nA_p = (\n + np.einsum(\"ia, ij, ab -> iajb\", - e[so, None] + e[sv], delta_ij, delta_ab)\n + 4 * eri0_mo[so, sv, so, sv]\n - eri0_mo[so, so, sv, sv].swapaxes(1, 2)\n - eri0_mo[so, sv, so, sv].swapaxes(1, 3)\n).reshape(nvir*nocc, nvir*nocc)\n```\n\n\n```python\nU_ia = np.einsum(\"PQ, sQ -> sP\", np.linalg.inv(A_p), d_P)\nU_ia.shape = (3, nocc, nvir)\n```\n\n\u968f\u540e\uff0c\u6839\u636e\u6c42\u5bfc\u6cd5\u5219\u4e0e\u77e9\u9635\u7684\u5bf9\u79f0\u6027\u3001\u53cd\u5bf9\u79f0\u6027\u7684\u5e94\u7528\uff0c\u5e94\u5f53\u53ef\u4ee5\u5f97\u5230\u9759\u6001\u6781\u5316\u7387\u8868\u8fbe\u5f0f\u4e3a\n\n$$\n\\alpha_{ts} (0) = \\frac{\\partial^2 E_\\mathrm{RHF}}{\\partial t \\partial s} = \\frac{\\partial D_{ij} d^t_{ij} \\delta_{ij}}{\\partial s} = 4 d^t_{ia} U^s_{ia}\n$$\n\n\n```python\n4 * np.einsum(\"tia, sia -> ts\", d_ia, U_ia)\n```\n\n\n\n\n array([[ 6.58142, -0.0841 , -1.45378],\n [-0.0841 , 4.26835, 0.39969],\n [-1.45378, 0.39969, 17.89033]])\n\n\n\n\u4e0a\u8ff0\u7684\u7ed3\u679c\u4e0e Gaussian \u8ba1\u7b97\u6240\u5f97\u5230\u7684\u9759\u6001\u6781\u5316\u7387 `ref_alpha_static` \u5b8c\u5168\u4e00\u81f4\u3002\n\n\n```python\nnp.allclose(\n 4 * np.einsum(\"tia, sia -> ts\", d_ia, U_ia),\n ref_alpha_static)\n```\n\n\n\n\n True\n\n\n\n### \u77e9\u9635\u6c42\u9006\u76f4\u63a5\u83b7\u5f97\u9759\u6001\u6781\u5316\u7387\n\n\u6839\u636e\u6211\u4eec\u6240\u5199\u7684 CP-HF \u65b9\u7a0b\n\n$$\nA'_{ia, jb} U^t_{jb} = d^t_{ia}\n$$\n\n\u6211\u4eec\u5e94\u5f53\u5f88\u5bb9\u6613\u5730\u60f3\u5230\uff0c\u5982\u679c\u6211\u4eec\u6709\u8db3\u591f\u7684\u8ba1\u7b97\u80fd\u529b\uff0c\u53ef\u4ee5\u5bf9\u56db\u811a\u6807\u77e9\u9635 $A'_{ia, jb}$ \u6c42\u9006\uff0c\u90a3\u4e48\u6211\u4eec\u4e0d\u4e00\u5b9a\u9700\u8981\u660e\u786e\u5199\u51fa U \u77e9\u9635\uff0c\u4e5f\u4e00\u6837\u53ef\u4ee5\u6c42\u5f97\u9759\u6001\u6781\u5316\u7387\uff1a\n\n$$\n\\alpha_{ts} (0) = 4 d^t_{ia} (A'{}^{-1})_{ia, jb} d^s_{ia}\n$$\n\n\u5f53\u7136\uff0c\u4e0a\u9762\u7684\u8ba1\u7b97\u8fc7\u7a0b\u5b9e\u9645\u4e0a\u662f\u7528\u53cc\u4e0b\u6807 ($P = ia$, $Q = jb$) \u8868\u8fbe\u5f0f\u5b9e\u73b0\u7684\uff1a\n\n$$\n\\alpha_{ts} (0) 
= 4 d^t_{P} (A'{}^{-1})_{PQ} d^s_{Q}\n$$\n\n\n```python\nnp.allclose(4 * np.einsum(\"tP, PQ, sQ -> ts\", d_P, np.linalg.inv(A_p), d_P), ref_alpha_static)\n```\n\n\n\n\n True\n\n\n\n### \u8dc3\u8fc1\u5076\u6781\u77e9\u83b7\u5f97\u9759\u6001\u6781\u5316\u7387\n\nCP-HF \u65b9\u7a0b\u6c42\u89e3\u9759\u6001\u6781\u5316\u7387\u7684\u601d\u8def\u662f\u975e\u5e38\u76f4\u89c2\u7684\uff1b\u4f46\u9759\u6001\u6781\u5316\u7387\u8fd8\u53ef\u4ee5\u901a\u8fc7 TD-HF \u7684\u65b9\u5f0f\u6c42\u5f97\u3002\u8868\u9762\u4e0a\uff0c\u8fd9\u4e24\u79cd\u63a8\u5bfc\u601d\u8def\u548c\u524d\u63d0\u51e0\u4e4e\u5b8c\u5168\u4e0d\u540c\uff1b\u4f46\u6211\u4eec\u5374\u53ef\u4ee5\u5f97\u5230\u6570\u503c\u4e0a **\u5b8c\u5168** (\u800c\u975e\u8fd1\u4f3c) \u76f8\u7b49\u7684\u9759\u6001\u6781\u5316\u7387\u3002\u8fd9\u91cc\u6211\u4eec\u4f1a\u4f5c\u8bf4\u660e\u3002\n\n\u6211\u4eec\u9996\u5148\u4e0d\u52a0\u8bf4\u660e\u5730\u76f4\u63a5\u7ed9\u51fa TD-HF \u65b9\u5f0f\u7ed9\u51fa\u7684\u9759\u6001\u6781\u5316\u7387\u516c\u5f0f\uff1a\n\n$$\n\\alpha_{ts} (0) = 2 \\frac{\\langle 0 | \\hat d{}^t | n \\rangle \\langle n | \\hat d{}^s | 0 \\rangle}{\\omega_n}\n$$\n\n\u5b83\u5f88\u5bb9\u6613\u5316\u4e3a\u7a0b\u5e8f\u8868\u8fbe\u5f0f\uff1a\n\n\n```python\n2 * np.einsum(\"nt, n, ns -> ts\", td_transdip, 1 / td_eig, td_transdip)\n```\n\n\n\n\n array([[ 6.58142, -0.0841 , -1.45378],\n [-0.0841 , 4.26835, 0.39969],\n [-1.45378, 0.39969, 17.89033]])\n\n\n\n\u6211\u4eec\u4e0e\u4e0a\u9762\u7528 CP-HF \u65b9\u5f0f\u8ba1\u7b97\u5f97\u5230\u7684\u9759\u6001\u6781\u5316\u7387\u4f5c\u5bf9\u6bd4\uff0c\u4e0d\u96be\u53d1\u73b0\u4e24\u8005\u7684\u503c\u662f\u5b8c\u5168\u76f8\u7b49\u7684\u3002\u53ef\u4ee5\u7528\u4e0b\u8ff0\u7a0b\u5e8f\u4e0e Gaussian \u7684\u8ba1\u7b97\u7ed3\u679c\u4f5c\u5bf9\u6bd4\uff1a\n\n\n```python\nnp.allclose(2 * np.einsum(\"nt, n, ns -> ts\", td_transdip, 1 / td_eig, td_transdip), ref_alpha_static)\n```\n\n\n\n\n True\n\n\n\n\u8fd9\u8bf4\u660e\uff0c\u5bf9\u4e8e\u9759\u6001\u6781\u5316\u7387\u95ee\u9898\uff0cTD-HF \u4e0e CP-HF \u65b9\u6cd5\u4e4b\u95f4\u6709\u7740\u786e\u5b9e\u7684\u8054\u7cfb\u3002\u6211\u4eec\u4e0b\u9762\u5c31\u4f7f\u7528 TD-HF \u65b9\u7a0b\u6765\u5bfc\u51fa CP-HF \u65b9\u7a0b\u7684\u7ed3\u679c\uff0c\u6216\u8005\u8bf4\u4ece\u7b80\u5355\u7684\u7ebf\u6027\u4ee3\u6570\u89d2\u5ea6\u8bc1\u660e\uff1a\n\n$$\n\\alpha_{ts} (0) = 2 \\frac{\\langle 0 | \\hat d{}^t | n \\rangle \\langle n | \\hat d{}^s | 0 \\rangle}{\\omega_n} = 4 d^t_{P} (A'{}^{-1})_{PQ} d^s_{Q}\n$$\n\n### \u9759\u6001\u6781\u5316\u7387\u4e0b TD-HF \u65b9\u7a0b\u4e0e CP-HF \u65b9\u7a0b\u7684\u7b49\u4ef7\u63a8\u5bfc\n\n\u8fd9\u91cc\u6211\u4eec\u4f1a\u7ed9\u51fa\u9759\u6001\u60c5\u51b5\u4e0b\uff0cTD-HF \u4e0e CP-HF \u65b9\u7a0b\u7684\u63a8\u6f14\u8fc7\u7a0b\u3002\u5bf9\u4e8e\u52a8\u6001 (\u542b\u9891) \u8fc7\u7a0b\u7684\u63a8\u6f14\uff0c\u6211\u4eec\u4f1a\u653e\u5728\u6587\u6863\u7684\u540e\u9762\u63cf\u8ff0\u3002\n\n\u9996\u5148\uff0c\u6211\u4eec\u4f1a\u8bf4\u660e TD-HF \u65b9\u7a0b\u7684 `A` $\\mathbb{A}_{PQ}$ \u4e0e `B` $\\mathbb{B}_{PQ}$ \u4e4b\u548c\uff0c\u6070\u597d\u662f CP-HF \u65b9\u7a0b\u7684 `A_p` $A'_{PQ}$\uff1a\n\n$$\nA'_{PQ} = \\mathbb{A}_{PQ} + \\mathbb{B}_{PQ}\n$$\n\n\n```python\nnp.allclose(A + B, A_p)\n```\n\n\n\n\n True\n\n\n\n\u5229\u7528 $\\langle 0 | \\hat d{}^t | n \\rangle = d_{ia}^t (X_{ia}^n + Y_{ia}^n)$\uff0c\u6211\u4eec\u91cd\u65b0\u7528 $X_P^n, Y_P^n$ \u7684\u5f62\u5f0f\u5199\u4e00\u4e0b TD-HF \u65b9\u7a0b\u6240\u7ed9\u51fa\u7684\u6781\u5316\u7387\u516c\u5f0f\uff1a\n\n$$\n\\alpha_{ts} (0) = 2 \\frac{d_P^t d_Q^s (X_P^n + Y_P^n) (X_Q^n + 
Y_Q^n)}{\\omega_n}\n$$\n\n\n```python\nnp.allclose(2 * np.einsum(\"tP, sQ, nP, nQ, n -> ts\", d_P, d_P, X + Y, X + Y, 1 / td_eig), ref_alpha_static)\n```\n\n\n\n\n True\n\n\n\n\u56de\u987e\u5230 TD-HF \u65b9\u7a0b\n\n$$\n\\begin{pmatrix} \\mathbb{A} & \\mathbb{B} \\\\ - \\mathbb{B} & - \\mathbb{A} \\end{pmatrix}\n\\begin{pmatrix} \\mathbf{X}^n \\\\ \\mathbf{Y}^n \\end{pmatrix}\n= \\omega_n \\begin{pmatrix} \\mathbf{X}^n \\\\ \\mathbf{Y}^n \\end{pmatrix}\n$$\n\n\u6211\u4eec\u53ef\u4ee5\u63a8\u77e5\uff0c$(\\mathbb{A} + \\mathbb{B}) (\\mathbf{X}^n + \\mathbf{Y}^n) = \\omega_n (\\mathbf{X}^n - \\mathbf{Y}^n)$\uff0c\u6216\u8005\u5199\u4e3a\u89d2\u6807\u6c42\u548c\u7684\u5f62\u5f0f\uff0c\n\n$$\n(\\mathbb{A} + \\mathbb{B})_{RQ} (\\mathbf{X}^n + \\mathbf{Y}^n)_Q = \\omega_n (\\mathbf{X}^n - \\mathbf{Y}^n)_R\n$$\n\n\u90a3\u4e48\u6211\u4eec\u53ef\u4ee5\u5c06\u4e0a\u5f0f\uff0c\u4ee5\u53ca\u77e9\u9635\u9006\u5173\u7cfb $(\\mathbb{A} + \\mathbb{B})^{-1} (\\mathbb{A} + \\mathbb{B}) = \\mathbb{1}$ \u5373\n\n$$\n(\\mathbb{A} + \\mathbb{B})^{-1}_{SR} (\\mathbb{A} + \\mathbb{B})_{RQ} = \\delta_{SQ}\n$$\n\n\u4ee3\u5165\u5230\u4e0a\u9762\u63d0\u5230\u7684 TD-HF \u7684\u6781\u5316\u7387\u516c\u5f0f\u4e2d\uff0c\u5f97\u5230\n\n$$\n\\begin{align}\n\\alpha_{ts} (0)\n&= 2 \\frac{d_P^t d_Q^s (X_P^n + Y_P^n) (\\mathbb{A} + \\mathbb{B})^{-1}_{QR} (\\mathbb{A} + \\mathbb{B})_{RQ} (X_Q^n + Y_Q^n)}{\\omega_n} \\\\\n&= 2 d_P^t d_Q^s (X_P^n + Y_P^n) (\\mathbb{A} + \\mathbb{B})^{-1}_{QR} (X_R^n - Y_R^n)\n\\end{align}\n$$\n\n\u968f\u540e\u6211\u4eec\u9700\u8981\u5229\u7528 $\\mathbf{X}^n, \\mathbf{Y}^n$ \u7684\u6b63\u4ea4\u5316\u6761\u4ef6\u3002\u6b63\u4ea4\u5316\u6761\u4ef6\u6211\u4eec\u66fe\u7ecf\u5728\u7ed9\u51fa $\\mathbf{X}^n, \\mathbf{Y}^n$ \u65f6\u786e\u5b9e\u5229\u7528\u5230\u8fc7\uff0c\u4f46\u5176\u66f4\u6709\u7528\u7684\u63a8\u8bba\u662f $(\\mathbf{X} + \\mathbf{Y})^\\dagger (\\mathbf{X} - \\mathbf{Y}) = 2 \\cdot \\mathbb{1}$\n\n$$\n(\\mathbf{X}^n + \\mathbf{Y}^n)_P (\\mathbf{X}^n - \\mathbf{Y}^n)_R = 2 \\delta_{PR}\n$$\n\n\u6211\u4eec\u53ef\u4ee5\u7528\u4e0b\u9762\u7684\u7a0b\u5e8f\u6765\u8bf4\u660e\u8fd9\u4e00\u95ee\u9898\uff1a\n\n\n```python\nnp.einsum(\"nP, nQ -> PQ\", X + Y, X - Y)\n```\n\n\n\n\n array([[ 2., -0., 0., ..., 0., 0., -0.],\n [ 0., 2., 0., ..., 0., 0., 0.],\n [ 0., 0., 2., ..., 0., -0., 0.],\n ...,\n [ 0., 0., -0., ..., 2., -0., -0.],\n [ 0., 0., 0., ..., -0., 2., -0.],\n [-0., 0., -0., ..., -0., -0., 2.]])\n\n\n\n\u90a3\u4e48\u6211\u4eec\u5c31\u53ef\u4ee5\u5c06\u4e0a\u9762\u7684\u6781\u5316\u7387\u516c\u5f0f\u5316\u4e3a\n\n$$\n\\begin{align}\n\\alpha_{ts} (0)\n&= 2 d_P^t d_Q^s (\\mathbb{A} + \\mathbb{B})^{-1}_{QR} \\cdot 2 \\delta_{PR} \\\\\n&= 4 d_Q^s (\\mathbb{A} + \\mathbb{B})^{-1}_{QP} d_P^t \\\\\n&= 4 d_Q^s (A'{}^{-1})_{QP} d^t_P\n\\end{align}\n$$\n\n\u6211\u4eec\u77e5\u9053\uff0c\u4e0a\u5f0f\u7684\u7b49\u5f0f\u5de6\u8fb9\u662f\u5bf9 $Q, P$ \u53cc\u4e0b\u6807\u8fdb\u884c\u6c42\u548c\u3002\u4f5c\u4e3a\u88ab\u6c42\u548c\u7684\u4e24\u4e2a\u4e0b\u6807\u662f\u53ef\u4ee5\u88ab\u4ea4\u6362\u7684\uff0c\u56e0\u6b64\u6211\u4eec\u53ef\u4ee5\u5c06\u4e0a\u5f0f\u5199\u4e3a\n\n$$\n\\alpha_{ts} (0) = 4 d_P^s (A'{}^{-1})_{PQ} d^t_Q\n$$\n\n\n```python\nnp.allclose(4 * np.einsum(\"sP, PQ, tQ -> ts\", d_P, np.linalg.inv(A_p), d_P), ref_alpha_static)\n```\n\n\n\n\n True\n\n\n\n\u4e0a\u8ff0\u63a8\u5bfc\u5e76\u6ca1\u6709\u7ed3\u675f\u3002\u6211\u4eec\u56de\u987e\u5230\u521a\u624d CP-HF 
\u6240\u7ed9\u51fa\u7684\u6781\u5316\u7387\u5e76\u4e0d\u662f\u4e0a\u8ff0\u7684\u8868\u8fbe\u5f0f\uff0c\u800c\u662f\u4ea4\u6362\u4e86 $t, s$ \u4e24\u8005\u7684\u6781\u5316\u7387\uff1a\n\n$$\n\\alpha_{ts} (0) = 4 d_P^t (A'{}^{-1})_{PQ} d^s_Q\n$$\n\n\n```python\nnp.allclose(4 * np.einsum(\"tP, PQ, sQ -> ts\", d_P, np.linalg.inv(A_p), d_P), ref_alpha_static)\n```\n\n\n\n\n True\n\n\n\n\u4ece\u6781\u5316\u7387\u4f5c\u4e3a\u80fd\u91cf\u7684\u4e8c\u9636\u68af\u5ea6\u7684\u89d2\u5ea6\u6765\u8bf4\uff0c\u8fd9\u662f\u56e0\u4e3a\u88ab\u6c42\u5bfc\u91cf\u53ef\u4ea4\u6362\uff0c\u56e0\u6b64\u6781\u5316\u7387\u5177\u6709 Hermite \u6027\u8d28\uff1a\n\n$$\n\\alpha_{ts} (\\omega) = \\alpha_{st} (\\omega)\n$$\n\n\u5230\u8fd9\u4e00\u6b65\u4e3a\u6b62\uff0c\u4ece TD-HF \u65b9\u7a0b\u7ed9\u51fa\u7684\u6781\u5316\u7387\uff0c\u6210\u529f\u5730\u63a8\u5bfc\u51fa\u4e86 CP-HF \u65b9\u7a0b\u6240\u7ed9\u51fa\u7684\u6781\u5316\u7387\u3002\n\n## \u542b\u9891\u6781\u5316\u7387\n\n### \u8dc3\u8fc1\u5076\u6781\u77e9\u83b7\u5f97\u542b\u9891\u6781\u5316\u7387\n\n\u6211\u4eec\u5148\u4e0d\u5bf9\u542b\u9891\u6781\u5316\u7387\u4f5c\u516c\u5f0f\u4e0a\u7684\u5206\u6790\uff0c\u5148\u53ea\u770b\u6570\u503c\u7684\u7ed3\u679c\uff0c\u5e76\u4e0e Gaussian \u8f93\u51fa\u7684\u7ed3\u679c\u8fdb\u884c\u6bd4\u8f83\u3002\n\n\u6211\u4eec\u4ecd\u7136\u4e0d\u52a0\u8bf4\u660e\u5730\u76f4\u63a5\u7ed9\u51fa TD-HF \u65b9\u5f0f\u7ed9\u51fa\u7684\u542b\u9891\u6781\u5316\u7387\u516c\u5f0f\uff1a\n\n$$\n\\alpha_{ts} (\\omega_n) = \\frac{\\langle 0 | \\hat d{}^t | n \\rangle \\langle n | \\hat d{}^s | 0 \\rangle}{\\omega_n - \\omega} + \\frac{\\langle 0 | \\hat d{}^t | n \\rangle \\langle n | \\hat d{}^s | 0 \\rangle}{\\omega_n + \\omega}\n$$\n\n\u9700\u8981\u7559\u610f\u7684\u662f\uff0c$\\omega_n$ \u4e3a\u5206\u5b50\u901a\u8fc7 TD-HF \u65b9\u7a0b\u89e3\u51fa\u6765\u7684\u6fc0\u53d1\u80fd\uff0c\u800c $\\omega$ \u662f\u5916\u52a0\u7684\u3001\u4efb\u610f\u7684\u6fc0\u53d1\u5149\u675f\u9891\u7387\uff1b\u4e24\u8005\u9664\u4e86\u5355\u4f4d\u4e00\u81f4\u5916\u51e0\u4e4e\u5b8c\u5168\u65e0\u5173\u3002\n\n\u4e0a\u8ff0\u516c\u5f0f\u4e2d\uff0c\u524d\u4e00\u9879\u79f0\u4e3a\u5171\u632f\u9879 (resonance term)\uff0c\u540e\u4e00\u9879\u79f0\u4e3a\u975e\u5171\u632f\u9879\u3002\u8fd9\u4e24\u9879\u5728\u4e0d\u540c\u9891\u7387\u4e0b\u7684\u884c\u4e3a\u53ef\u4ee5\u5f88\u5bb9\u6613\u5730\u7528\u56fe\u7247\u8868\u793a\u51fa\u6765\uff1b\u5f53 $\\omega$ \u63a5\u8fd1\u6fc0\u53d1\u9891\u7387 $\\omega_n$ \u65f6\u4ea7\u751f\u65ad\u70b9\u884c\u4e3a\u7684\u9879\u662f\u5171\u632f\u9879\u3002\n\n\u5b83\u4e5f\u5f88\u5bb9\u6613\u5316\u4e3a\u7a0b\u5e8f\u8868\u8fbe\u5f0f\u3002\u6211\u4eec\u7528\u4e0b\u8ff0\u51fd\u6570 `freq_to_alpha`\uff0c\u8f93\u5165 `omega` $\\omega$ \u6765\u8fd4\u56de\u542b\u9891\u9891\u7387 $\\alpha_{ts} (\\omega_n)$\uff1b\u5e76\u5c06\u5176\u4e2d\u7684\u5171\u632f\u9879\u4e0e\u975e\u5171\u632f\u9879\u62c6\u5206\u4e3a\u51fd\u6570 `freq_to_res` \u4e0e `freq_to_nonres`\uff1a\n\n\n```python\nfreq_to_res = lambda omega: np.einsum(\"nt, n, ns -> ts\", td_transdip, 1 / (td_eig - omega), td_transdip)\nfreq_to_nonres = lambda omega: np.einsum(\"nt, n, ns -> ts\", td_transdip, 1 / (td_eig + omega), td_transdip)\nfreq_to_alpha = lambda omega: freq_to_res(omega) + freq_to_nonres(omega)\n```\n\n\u82e5\u5916\u52a0\u6fc0\u53d1\u5149\u675f\u9891\u7387\u4e3a 0\uff0c\u90a3\u4e48\u5c06\u9000\u5316\u5230\u9759\u6001\u6781\u5316\u7387\u7684\u60c5\u5f62\u4e2d\uff1a\n\n\n```python\nfreq_to_alpha(0)\n```\n\n\n\n\n array([[ 6.58142, -0.0841 , -1.45378],\n [-0.0841 , 4.26835, 0.39969],\n [-1.45378, 
0.39969, 17.89033]])\n\n\n\n\n```python\nnp.allclose(freq_to_alpha(0), ref_alpha_static)\n```\n\n\n\n\n True\n\n\n\n\u5728\u9759\u6001\u60c5\u51b5\u4e0b\uff0c\u5171\u632f\u9879\u4e0e\u975e\u5171\u632f\u9879\u5bf9\u603b\u6781\u5316\u7387\u7684\u8d21\u732e\u662f\u76f8\u7b49\u7684\uff1a\n\n\n```python\nfreq_to_res(0)\n```\n\n\n\n\n array([[ 3.29071, -0.04205, -0.72689],\n [-0.04205, 2.13418, 0.19984],\n [-0.72689, 0.19984, 8.94517]])\n\n\n\n\u6211\u4eec\u73b0\u5728\u60f3\u8981\u5c1d\u8bd5\u4e0e Gaussian \u7684\u6570\u636e\u8fdb\u884c\u6838\u5bf9\u5e76\u7ed8\u5236\u56fe\u7247\u3002\u56de\u987e\u5230\u6211\u4eec\u66fe\u7ecf\u5b9a\u4e49\u8fc7 `freq_full_list` \u4e3a\u8f83\u5e7f\u7684\u9891\u7387\u8303\u56f4\uff0c\u5176\u5bf9\u5e94\u7684 Gaussian \u8ba1\u7b97\u7684\u6781\u5316\u7387\u5728 `alpha_full_list`\u3002\u6211\u4eec\u5c06\u7528 `freq_to_alpha` \u51fd\u6570\u7ed8\u5236\u7684 $\\alpha_{zz} (\\omega)$ \u6781\u5316\u7387\u5206\u91cf\u653e\u5728 numpy \u5f20\u91cf\u5217\u8868 `alpha_zz_full_calc` \u4e2d\uff1b\u5e76\u5c06\u5176\u5171\u632f\u9879\u4e0e\u975e\u5171\u632f\u9879\u5206\u522b\u653e\u5728 `alpha_zz_full_res` \u4e0e `alpha_zz_full_nonres`\uff1a\n\n\n```python\nalpha_zz_full_calc = np.vectorize(lambda omega: freq_to_alpha (omega)[2, 2])(freq_full_list)\nalpha_zz_full_res = np.vectorize(lambda omega: freq_to_res (omega)[2, 2])(freq_full_list)\nalpha_zz_full_nonres = np.vectorize(lambda omega: freq_to_nonres(omega)[2, 2])(freq_full_list)\n```\n\n\u4e0b\u9762\u6211\u4eec\u7ed8\u5236\u5728\u8fd9\u4e2a\u9891\u7387\u533a\u95f4\u5185\uff0cGaussian \u8ba1\u7b97\u5f97\u5230\u7684\u7ed3\u679c\u4e0e\u6211\u4eec\u7528\u4e0a\u9762\u8dc3\u8fc1\u5076\u6781\u77e9\u7684\u516c\u5f0f\u83b7\u5f97\u7684\u7ed3\u679c\u4f5c\u6bd4\u8f83\u3002\u5c3d\u7ba1\u5728\u65ad\u70b9\u9644\u8fd1\u4e24\u8005\u8868\u73b0\u7565\u6709\u4e0d\u540c\uff0c\u4f46\u7a33\u5b9a\u533a\u95f4\u7684\u542b\u9891\u6781\u5316\u7387\u4e0e Gaussian \u7684\u7ed3\u679c\u57fa\u672c\u4e0a\u662f\u4e00\u81f4\u7684\u3002\n\n\n```python\nfig, ax = plt.subplots()\nax.plot(freq_full_list, alpha_full_list[:, 2, 2], label=\"Gaussian\")\nax.plot(freq_full_list, alpha_zz_full_res, linestyle=\"-.\", c=\"C2\", label=\"Resonance\")\nax.plot(freq_full_list, alpha_zz_full_nonres, linestyle=\"-.\", c=\"C3\", label=\"Non-Resonance\")\nax.plot(freq_full_list, alpha_zz_full_calc, linestyle=\":\", label=\"Calculated\")\nrect = patches.Rectangle((0.184, -24), 0.01, 78, linewidth=1, edgecolor='C4', facecolor='C4', alpha=.25)\nax.add_patch(rect)\nax.set_ylim(-25, 75)\nax.set_xlabel(r\"$\\omega$ / $E_\\mathrm{h}$\")\nax.set_ylabel(r\"$\\alpha_{zz} (\\omega)$ / a.u.\")\nax.set_title(\"Frequency-Dependent Polarizability of $\\mathrm{H_2O_2}$ (RHF/6-31G)\")\nax.legend()\nfig.show()\n```\n\n\n \n\n\n\n\n\n\n\u5bf9\u4e8e\u524d\u4e24\u4e2a\u6fc0\u53d1\u6001\u80fd\u91cf\u7684\u7a84\u533a\u95f4\u4e2d\uff0c\u6211\u4eec\u7684\u7ed3\u679c\u5728\u975e\u65ad\u70b9\u9644\u8fd1\u5176\u5b9e\u4e0e Gaussian \u7684\u7ed3\u679c\u4e5f\u57fa\u672c\u4e00\u81f4\uff1a\n\n\n```python\nalpha_zz_small_calc = np.vectorize(lambda omega: freq_to_alpha (omega)[2, 2])(freq_small_list)\nalpha_zz_small_res = np.vectorize(lambda omega: freq_to_res (omega)[2, 2])(freq_small_list)\nalpha_zz_small_nonres = np.vectorize(lambda omega: freq_to_nonres(omega)[2, 2])(freq_small_list)\n```\n\n\n```python\nfig, ax = plt.subplots()\nax.plot(freq_small_list, alpha_small_list[:, 2, 2], label=\"Gaussian\")\nax.plot(freq_small_list, alpha_zz_small_res, linestyle=\"-.\", c=\"C2\", 
label=\"Resonance\")\nax.plot(freq_small_list, alpha_zz_small_nonres, linestyle=\"-.\", c=\"C3\", label=\"Non-Resonance\")\nax.plot(freq_small_list, alpha_zz_small_calc, linestyle=\":\", label=\"Calculated\")\nax.set_xlabel(r\"$\\omega$ / $E_\\mathrm{h}$\")\nax.set_ylabel(r\"$\\alpha_{zz} (\\omega)$ / a.u.\")\nax.set_title(\"Frequency-Dependent Polarizability of $\\mathrm{H_2O_2}$ (RHF/6-31G)\\nFor First Two Excited States\")\nax.legend()\nfig.show()\n```\n\n\n \n\n\n\n\n\n\n\u6211\u4eec\u8ba4\u4e3a\u6211\u4eec\u786e\u5b9e\u6b63\u786e\u8ba1\u7b97\u4e86\u542b\u9891\u6781\u5316\u7387\u3002\u5728\u65ad\u70b9\u9644\u8fd1\u7684\u884c\u4e3a\u5e94\u5f53\u88ab\u8ba4\u4e3a\u662f\u6570\u503c\u4e0a\u7684\u5fae\u5c0f\u5dee\u522b\uff1b\u5e76\u4e14\u6211\u4eec\u8ba4\u4e3a\uff0c\u5728\u65ad\u70b9 (\u5171\u632f) \u5904\u9644\u8fd1\u4ea7\u751f\u7684\u6781\u5316\u7387\u7684\u4e3b\u8981\u8d21\u732e\u90e8\u5206\u5e94\u4e3a\u6781\u5316\u7387\u7684\u5171\u632f\u9879\u6240\u4ea7\u751f\u3002\n\n### TD-HF \u65b9\u7a0b\u542b\u9891\u6781\u5316\u7387\u53ca\u5176\u4e0e\u8dc3\u8fc1\u5076\u6781\u77e9\u7684\u5173\u8054\n\n\u4e0a\u4e00\u5927\u6bb5\u4e2d\uff0c\u6211\u4eec\u4ec5\u4ec5\u662f\u7528\u4e86 TD-HF \u7ed9\u51fa\u7684\u8dc3\u8fc1\u5076\u6781\u77e9\u7ed3\u679c\uff0c\u53cd\u63a8\u51fa\u4e86 CP-HF \u7684\u516c\u5f0f\u3002\u4f46\u6211\u4eec\u5e76\u6ca1\u6709\u4ecb\u7ecd\u8fc7\u6700\u539f\u59cb\u7684 TD-HF \u6781\u5316\u7387\u8868\u8fbe\u5f0f\u3002\u4e0b\u9762\u6211\u4eec\u4f1a\u4ece\u6700\u666e\u904d\u7684\u516c\u5f0f\uff0c\u63a8\u5bfc\u542b\u9891\u6781\u5316\u7387\u7684\u8868\u8fbe\u5f0f\u3002\u4e0b\u9762\u7684\u63a8\u5bfc\u8fc7\u7a0b\u4e2d\uff0c\u7a0b\u5e8f\u7684\u90e8\u5206\u4f1a\u5c11\u4e00\u4e9b\u3002\n\n\u6211\u4eec\u4ece TD-DFT \u7684 Casida \u65b9\u7a0b\u5f00\u59cb\uff1bCasida \u65b9\u7a0b\u53ef\u4ee5\u7b80\u5355\u5730\u9000\u5316\u5230 TD-HF \u7684\u60c5\u5f62\u3002\u6211\u4eec\u4e0a\u9762\u5728\u63a8\u6f14\u6fc0\u53d1\u9891\u7387 $\\omega_n$ \u65f6\uff0c\u4e5f\u63d0\u5230\u4e86 Casida \u65b9\u7a0b\uff1b\n\n$$\n\\begin{align}\n\\begin{pmatrix} \\mathbb{A} & \\mathbb{B} \\\\ - \\mathbb{B} & - \\mathbb{A} \\end{pmatrix}\n\\begin{pmatrix} \\mathbf{X}^n \\\\ \\mathbf{Y}^n \\end{pmatrix}\n= \\omega_n \\begin{pmatrix} \\mathbf{X}^n \\\\ \\mathbf{Y}^n \\end{pmatrix}\n\\tag{1}\n\\end{align}\n$$\n\n\u4f46\u4e0b\u9762\u7684 Casida \u65b9\u7a0b\u5177\u6709\u66f4\u4e3a\u5e7f\u6cdb\u7684\u9002\u7528\u60c5\u5f62\u3002\u6211\u4eec\u5f15\u5165\u5916\u52a0\u7684\u5076\u6781\u5fae\u6270 `d_P` $\\mathbf{d}^t$ \u4e0e\u5916\u52a0\u6fc0\u53d1\u5149\u675f\u9891\u7387 `omega` $\\omega$ \u7684\u5fae\u6270\uff0c\u5219\u6709\n\n$$\n\\begin{align}\n\\begin{pmatrix} \\mathbb{A} & \\mathbb{B} \\\\ \\mathbb{B} & \\mathbb{A} \\end{pmatrix}\n\\begin{pmatrix} \\mathbf{X}'{}^t \\\\ \\mathbf{Y}'{}^t \\end{pmatrix}\n= \\omega \\begin{pmatrix} \\mathbf{X}'{}^t \\\\ - \\mathbf{Y}'{}^t \\end{pmatrix} +\n\\begin{pmatrix} 2 \\mathbf{d}^t \\\\ 2 \\mathbf{d}^t \\end{pmatrix}\n\\tag{2}\n\\end{align}\n$$\n\n\u8fd9\u91cc\u6709\u4e0d\u5c11\u7b26\u53f7\u4e0a\u7684\u533a\u522b\u3002\u9996\u5148\uff0c(1) \u5f0f\u7684\u6fc0\u53d1\u6001\u9891\u7387 $\\omega_n$ \u4e0e (2) \u5f0f\u7684\u5916\u52a0\u5149\u675f\u7684\u9891\u7387 $\\omega$ \u5e76\u4e0d\u76f8\u540c\uff1b\u4f46 (2) \u5728\u4e00\u79cd\u60c5\u5f62\u4e0b\u786e\u5b9e\u5730\u53ef\u4ee5\u9000\u5316\u5230 (1) \u5f0f\u3002\u82e5\u73b0\u5728\u6ca1\u6709\u5916\u52a0\u5076\u6781\u5fae\u6270 
$\\mathbf{d}^t$\uff0c\u90a3\u4e48\u5916\u52a0\u5149\u675f\u5fc5\u987b\u8981\u6070\u597d\u5904\u4e8e\u5206\u5b50\u7535\u5b50\u7684\u6fc0\u53d1\u9891\u7387\u4e0a\uff0c\u5206\u5b50\u7684\u7535\u5b50\u4e91\u5fae\u6270\u53d8\u5316\u624d\u80fd\u88ab\u5141\u8bb8 (\u5373\u4f7f\u65f6\u95f4\u975e\u5e38\u77ed\uff0c\u8fd9\u5bf9\u5e94\u7684\u662f\u7d2b\u5916\u5149\u8c31\u7535\u5b50\u6001\u7684\u5e73\u5747\u5bff\u547d)\u3002\u800c\u7535\u5b50\u7684\u6fc0\u53d1\u9891\u7387\u4e0d\u4e00\u5b9a\u53ea\u6709\u4e00\u4e2a\uff0c\u56e0\u6b64\u4f1a\u4ea7\u751f\u4e0a\u4e0b\u6807 $n$ \u8868\u793a\u4e0d\u540c\u7684\u6fc0\u53d1\u9891\u7387\uff1b\u7b2c $n$ \u4e2a\u6fc0\u53d1\u6001\u7535\u5b50\u4e91\u5fae\u6270\u7684\u5f62\u53d8\u5927\u5c0f\u548c\u53d6\u5411\u7531 $\\mathbf{X}^n$ \u4e0e $\\mathbf{Y}^n$ \u5171\u540c\u51b3\u5b9a\uff1b\u5b83\u4eec\u5c06\u4f1a\u4ea7\u751f\u7b2c $n$ \u4e2a\u6fc0\u53d1\u6001\u7684\u8dc3\u8fc1\u5bc6\u5ea6 $\\rho^n (\\boldsymbol{r}, \\omega)$ (\u4e0b\u5f0f\u5bf9 $i, a$ \u6c42\u548c)\uff1a\n\n$$\n\\rho^n (\\boldsymbol{r}, \\omega_n) = (X_{ia}^n + Y_{ia}^n) \\phi_i (\\boldsymbol{r}) \\phi_a (\\boldsymbol{r})\n$$\n\n\u5176\u6b21\uff0c(2) \u5f0f\u7684 $\\mathbf{X}'{}^t$ \u82e5\u4e0d\u770b\u5076\u6781\u6fc0\u53d1\u65b9\u5411 $t$\uff0c\u5b83\u8fd8\u6bd4 (1) \u5f0f\u7684 $\\mathbf{X}^n$ \u5c11\u4e86 $n$ \u5e76\u591a\u4e86\u4e00\u6487\uff1b\u591a\u7684\u4e00\u6487\u662f\u4e3a\u4e86\u533a\u5206\u4e24\u8005\u3002\u4e4b\u6240\u4ee5\u8fd9\u91cc\u6ca1\u6709 $n$\uff0c\u6211\u4eec\u53ef\u4ee5\u8fd9\u6837\u8003\u8651\uff1a\u5728\u67d0\u4e00\u4e2a\u7279\u5b9a\u7684\u5916\u52a0\u5076\u6781\u5fae\u6270 $\\mathbf{d}^t$ \u4e0e\u9891\u7387 $\\omega$ \u5fae\u6270\u4e0b\uff0c\u5206\u5b50\u7684\u7535\u5b50\u4e91\u786e\u5b9e\u4f1a\u53d1\u751f\u6539\u53d8\uff1b\u4f46\u8fd9\u79cd\u6539\u53d8\u7684\u65b9\u5f0f\u4e00\u822c\u662f\u552f\u4e00\u7684 (\u7b80\u5e76\u60c5\u51b5\u6211\u4eec\u4e0d\u4f5c\u8ba8\u8bba)\u3002\u8fd9\u79cd\u5f62\u53d8\u6240\u4ea7\u751f\u7684\u5bc6\u5ea6\u5f62\u5f0f\u4e5f\u662f\u7c7b\u4f3c\u7684 (\u4e0b\u5f0f\u5bf9 $i, a$ \u6c42\u548c)\uff1a\n\n$$\n\\rho (\\boldsymbol{r}, \\mathbf{d}^t, \\omega) = (X_{ia}'{}^t + Y_{ia}'{}^t) \\phi_i (\\boldsymbol{r}) \\phi_a (\\boldsymbol{r})\n$$\n\n\u542b\u9891\u6781\u5316\u7387\u53ef\u4ee5\u901a\u8fc7\u4e0a\u8ff0\u7684\u5f62\u53d8\u5bc6\u5ea6\u5728\u5076\u6781\u7b97\u7b26\u7684\u4f5c\u7528\u4e0b\u7ed9\u51fa\uff1a\n\n$$\n\\alpha_{ts} (\\omega)\n= \\int -s \\cdot \\rho (\\boldsymbol{r}, \\mathbf{d}^t, \\omega) \\, \\mathrm{d} \\boldsymbol{r}\n= (X_{ia}'{}^t + Y_{ia}'{}^t) \\int -s \\cdot \\phi_i (\\boldsymbol{r}) \\phi_a (\\boldsymbol{r}) \\, \\mathrm{d} \\boldsymbol{r}\n= (X_{ia}'{}^t + Y_{ia}'{}^t) d_{ia}^s = (X_P'{}^t + Y_P'{}^t) d_P^s\n$$\n\n\u5176\u4e2d\u5173\u4e8e $X_P'{}^t, Y_P'{}^t$ \u9700\u8981\u901a\u8fc7 Casida \u65b9\u7a0b\u6c42\u53d6\u3002(2) \u5f0f\u7ecf\u8fc7\u7b80\u5355\u7684\u4ee3\u6570\u5904\u7406\u540e\u5f97\u5230 (3) \u5f0f\uff1a\n\n$$\n\\begin{align}\n\\begin{pmatrix} \\mathbb{A} - \\omega \\mathbb{1} & \\mathbb{B} \\\\ - \\mathbb{B} & - \\mathbb{A} - \\omega \\mathbb{1} \\end{pmatrix}\n\\begin{pmatrix} \\mathbf{X}'{}^t \\\\ \\mathbf{Y}'{}^t \\end{pmatrix}\n= \\begin{pmatrix} 2 \\mathbf{d}^t \\\\ - 2 \\mathbf{d}^t \\end{pmatrix}\n\\tag{3}\n\\end{align}\n$$\n\n\u90a3\u4e48\u6211\u4eec\u6709\n\n$$\n\\begin{align}\n\\alpha_{ts} (\\omega) = (X_P'{}^t + Y_P'{}^t) d_P^s\n= \\begin{pmatrix} \\mathbf{d}^s & \\mathbf{d}^s \\end{pmatrix} \\begin{pmatrix} \\mathbf{X}'{}^t \\\\ \\mathbf{Y}'{}^t \\end{pmatrix}\n= \\begin{pmatrix} \\mathbf{d}^s & \\mathbf{d}^s 
\\end{pmatrix}\n\\begin{pmatrix} \\mathbb{A} - \\omega \\mathbb{1} & \\mathbb{B} \\\\ - \\mathbb{B} & - \\mathbb{A} - \\omega \\mathbb{1} \\end{pmatrix}^{-1}\n\\begin{pmatrix} 2 \\mathbf{d}^t \\\\ - 2 \\mathbf{d}^t \\end{pmatrix}\n\\tag{4}\n\\end{align}\n$$\n\n\u5bf9\u4e8e\u4e0d\u5e26\u9891\u7387\u7684\u60c5\u5f62\uff0c\u6211\u4eec\u53ef\u4ee5\u7528\u4e0b\u9762\u7684\u4ee3\u7801\u9a8c\u8bc1\uff1a\n\n\n```python\nnp.einsum(\"tP, PQ, sQ -> ts\",\n np.concatenate([d_P, d_P], axis=1),\n np.linalg.inv(AB),\n np.concatenate([2 * d_P, - 2 * d_P], axis=1))\n```\n\n\n\n\n array([[ 6.58142, -0.0841 , -1.45378],\n [-0.0841 , 4.26835, 0.39969],\n [-1.45378, 0.39969, 17.89033]])\n\n\n\n\u800c\u5bf9\u4e8e\u5e26\u9891\u7387\u7684\u60c5\u5f62\uff0c\u6211\u4eec\u53ef\u4ee5\u4e3e\u4e00\u4e2a $\\omega = 0.186 \\, E_\\mathrm{h}$ \u7684\u4f8b\u5b50\uff1a\n\n\n```python\nomega = 0.186\nnp.einsum(\"tP, PQ, sQ -> ts\",\n np.concatenate([d_P, d_P], axis=1),\n np.linalg.inv(AB - np.eye(nvir*nocc*2) * omega),\n np.concatenate([2 * d_P, - 2 * d_P], axis=1))\n```\n\n\n\n\n array([[ 7.28458, -0.05683, -2.08145],\n [-0.05683, 4.79845, -1.39731],\n [-2.08145, -1.39731, 37.73368]])\n\n\n\n\u4f46\u8fd9\u6837\u7684\u8868\u8fbe\u5f0f (4) \u5e76\u6ca1\u6709\u51fa\u73b0\u8dc3\u8fc1\u5076\u6781\u77e9\u3002\u4e0b\u9762\u6211\u4eec\u9700\u8981\u7b80\u5316\u8868\u8fbe\u5f0f\u3002\n\n\u9996\u5148\uff0c\u6211\u4eec\u56de\u987e\u5f0f (3)\uff0c\u5f97\u5230\u65b9\u7a0b\u7ec4\n\n$$\n\\begin{align}\n\\mathbb{A} \\mathbf{X}'{}^t + \\mathbb{B} \\mathbf{Y}'{}^t - \\omega \\mathbf{X}'{}^t &= 2 \\mathbf{d}^t \\\\\n\\mathbb{B} \\mathbf{X}'{}^t + \\mathbb{A} \\mathbf{Y}'{}^t + \\omega \\mathbf{Y}'{}^t &= 2 \\mathbf{d}^t\n\\end{align}\n$$\n\n\u4e24\u5f0f\u52a0\u51cf\u540e\uff0c\u53ef\u4ee5\u5f97\u5230\n\n$$\n\\begin{align}\n(\\mathbb{A} + \\mathbb{B}) (\\mathbf{X}'{}^t + \\mathbf{Y}'{}^t) - \\omega (\\mathbf{X}'{}^t - \\mathbf{Y}'{}^t) &= 4 \\mathbf{d}^t \\tag{5} \\\\\n(\\mathbb{A} - \\mathbb{B}) (\\mathbf{X}'{}^t - \\mathbf{Y}'{}^t) &= \\omega (\\mathbf{X}'{}^t + \\mathbf{Y}'{}^t) \\tag{6}\n\\end{align}\n$$\n\n\u5229\u7528 (6) \u5f0f\u66ff\u6362 (5) \u5f0f\u4e2d\u51fa\u73b0\u7684 $(\\mathbf{X}'{}^t - \\mathbf{Y}'{}^t)$\uff0c\u6709\n\n$$\n(\\mathbb{A} - \\mathbb{B}) (\\mathbb{A} + \\mathbb{B}) (\\mathbf{X}'{}^t + \\mathbf{Y}'{}^t) - \\omega^2 (\\mathbf{X}'{}^t + \\mathbf{Y}'{}^t) = 4 (\\mathbb{A} - \\mathbb{B}) \\mathbf{d}^t\n$$\n\n\u6216\u8005\uff0c\u7b49\u4ef7\u5730\uff0c\n\n$$\n(\\mathbf{X}'{}^t + \\mathbf{Y}'{}^t) = 4 \\left( (\\mathbb{A} - \\mathbb{B}) (\\mathbb{A} + \\mathbb{B}) - \\omega^2 \\mathbb{1} \\right)^{-1} (\\mathbb{A} - \\mathbb{B}) \\mathbf{d}^t\n$$\n\n\u4e24\u8fb9\u518d\u4e58\u4e0a $\\mathbf{d}^s$\uff0c\u5c31\u5f97\u5230\u4e86\u542b\u9891\u6781\u5316\u7387\uff1a\n\n$$\n\\begin{align}\n\\alpha_{ts} (\\omega) = 4 \\mathbf{d}^s{}^\\dagger \\left( (\\mathbb{A} - \\mathbb{B}) (\\mathbb{A} + \\mathbb{B}) - \\omega^2 \\mathbb{1} \\right)^{-1} (\\mathbb{A} - \\mathbb{B}) \\mathbf{d}^t \\tag{7}\n\\end{align}\n$$\n\n\u4ee5 $\\omega = 0.186 \\, E_\\mathrm{h}$ \u6765\u8868\u8fbe\u4e0a\u5f0f\uff0c\u5219\u6709\n\n\n```python\n4 * np.einsum(\"tP, PR, RQ, sQ -> ts\", d_P, np.linalg.inv((A - B) @ (A + B) - omega**2 * np.eye(nvir*nocc)), A - B, d_P)\n```\n\n\n\n\n array([[ 7.28458, -0.05683, -2.08145],\n [-0.05683, 4.79845, -1.39731],\n [-2.08145, -1.39731, 37.73368]])\n\n\n\n\u4e0a\u8ff0\u7ed3\u679c\u4e0e\u901a\u8fc7\u5f0f (4) 
    gives results consistent with those above.\n\nNext we introduce a trick: we will factorize the inverse of the matrix $(\\mathbb{A} - \\mathbb{B}) (\\mathbb{A} + \\mathbb{B}) - \\omega^2 \\mathbb{1}$. The approach is to analyze the eigenvalues and eigenvectors of this matrix. From our discussion of the Casida equation in the form of eq. (1), it is easy to see that\n\n$$\n\\begin{align}\n(\\mathbb{A} - \\mathbb{B}) (\\mathbb{A} + \\mathbb{B}) (\\mathbf{X}^n + \\mathbf{Y}^n) = \\omega_n^2 (\\mathbf{X}^n + \\mathbf{Y}^n) \\tag{8}\n\\end{align}\n$$\n\n\n```python\nnp.allclose((A - B) @ (A + B) @ (X + Y).T, td_eig**2 * (X + Y).T)\n```\n\n\n\n\n    True\n\n\n\nWe pointed out earlier that the orthonormality condition is $(\\mathbf{X} + \\mathbf{Y})^\\dagger (\\mathbf{X} - \\mathbf{Y}) = 2 \\cdot \\mathbb{1}$. Therefore, if we view $\\mathbf{X}, \\mathbf{Y}$ as square matrices whose row dimension $n$ labels the excited states and whose column dimension $P$ labels the excitation and de-excitation components:\n\n$$\n\\mathbf{X} = \\begin{pmatrix} \\mathbf{X}^1{}^\\dagger \\\\ \\mathbf{X}^2{}^\\dagger \\\\ \\vdots \\\\ \\mathbf{X}^{n_\\mathrm{occ} n_\\mathrm{vir}}{}^\\dagger \\end{pmatrix}\n$$\n\nthen $(\\mathbf{X} + \\mathbf{Y})$ and $(\\mathbf{X} - \\mathbf{Y})$ are related by inversion:\n\n$$\n(\\mathbf{X} + \\mathbf{Y})^{-1} = \\frac{1}{2} (\\mathbf{X} - \\mathbf{Y})^\\dagger\n$$\n\n\n```python\nnp.allclose(np.linalg.inv(X + Y), 0.5 * (X - Y).T)\n```\n\n\n\n\n    True\n\n\n\nHence, to some extent, $(\\mathbf{X} + \\mathbf{Y})$ and $(\\mathbf{X} - \\mathbf{Y})$ can be regarded as two mutually dual sets of vectors. Note that the $\\mathbf{X}$ introduced here only serves to characterize the properties of $\\mathbb{A}, \\mathbb{B}$ and has no direct connection with $\\mathbf{X}'{}^t$.\n\nThese vector sets satisfy the following eigenvector decomposition:\n\n$$\n(\\mathbb{A} - \\mathbb{B}) (\\mathbb{A} + \\mathbb{B}) = \\frac{1}{2} (\\mathbf{X} + \\mathbf{Y}) \\mathbf{\\Omega}^2 (\\mathbf{X} - \\mathbf{Y})^\\dagger\n$$\n\nwhere\n\n$$\n\\mathbf{\\Omega} =\n\\begin{pmatrix}\n\\omega_1 & 0 & \\cdots & 0 \\\\\n0 & \\omega_2 & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & \\cdots & \\omega_{n_\\mathrm{occ} n_\\mathrm{vir}}\n\\end{pmatrix}\n$$\n\n\n```python\nnp.allclose(\n 0.5 * np.einsum(\"nP, n, nQ -> PQ\", X + Y, td_eig**2, X - Y),\n (A - B) @ (A + B))\n```\n\n\n\n\n    True\n\n\n\nWe now turn to the matrix $(\\mathbb{A} - \\mathbb{B}) (\\mathbb{A} + \\mathbb{B}) - \\omega^2 \\mathbb{1}$. From eq. (8) we easily obtain\n\n$$\n\\left( (\\mathbb{A} - \\mathbb{B}) (\\mathbb{A} + \\mathbb{B}) - \\omega^2 \\mathbb{1} \\right) (\\mathbf{X}^n + \\mathbf{Y}^n) = (\\omega_n^2 - \\omega^2) (\\mathbf{X}^n + \\mathbf{Y}^n)\n$$\n\nthat is, the eigenvectors of this matrix are still $(\\mathbf{X}^n + \\mathbf{Y}^n)$, but the eigenvalues become $\\omega_n^2 - \\omega^2$. Therefore,\n\n$$\n\\begin{align}\n\\left( (\\mathbb{A} - \\mathbb{B}) (\\mathbb{A} + \\mathbb{B}) - \\omega^2 \\mathbb{1} \\right)^{-1} = \\left( \\frac{1}{2} (\\mathbf{X} + \\mathbf{Y}) (\\mathbf{\\Omega}^2 - \\omega^2 \\mathbb{1}) (\\mathbf{X} - \\mathbf{Y})^\\dagger \\right)^{-1} = \\frac{1}{2} (\\mathbf{X} + \\mathbf{Y}) (\\mathbf{\\Omega}^2 - \\omega^2 \\mathbb{1})^{-1} (\\mathbf{X} - \\mathbf{Y})^\\dagger \\tag{9}\n\\end{align}\n$$\n\nHere $(\\mathbf{\\Omega}^2 - \\omega^2 \\mathbb{1})^{-1}$ is a diagonal matrix that is trivially easy to compute. At $\\omega = 0.186 \\, E_\\mathrm{h}$, the code expressing this step is\n\n\n```python\nnp.allclose(\n 0.5 * np.einsum(\"nP, n, nQ -> PQ\", X + Y, 1 / (td_eig**2 - omega**2), X - Y),\n np.linalg.inv((A - B) @ (A + B) - omega**2 * np.eye(nvir*nocc)))\n```\n\n\n\n\n    True\n\n\n\nSubstituting eq. (9) into eq. (7) gives\n\n$$\n\\alpha_{ts} (\\omega) = 2 \\mathbf{d}^s{}^\\dagger (\\mathbf{X} + \\mathbf{Y}) (\\mathbf{\\Omega}^2 - \\omega^2 \\mathbb{1})^{-1} (\\mathbf{X} - \\mathbf{Y})^\\dagger (\\mathbb{A} - \\mathbb{B}) \\mathbf{d}^t\n$$\n\nUsing the Hermiticity of the polarizability and expanding the matrix expression into an explicit sum, we obtain\n\n$$\n\\alpha_{ts} (\\omega) = 2 d_P^t (\\mathbf{X}^n + \\mathbf{Y}^n)_P \\cdot \\frac{1}{\\omega_n^2 - \\omega^2} \\cdot (\\mathbf{X}^n - \\mathbf{Y}^n)_R (\\mathbb{A} - \\mathbb{B})_{RQ} d_Q^s\n$$\n\n\n```python\n2 * np.einsum(\"tP, nP, n, nR, RQ, sQ\", d_P, X + Y, 1 / (td_eig**2 - omega**2), X - Y, A - B, d_P)\n```\n\n\n\n\n    array([[ 7.28458, -0.05683, -2.08145],\n [-0.05683, 4.79845, -1.39731],\n [-2.08145, -1.39731, 37.73368]])\n\n\n\nTo simplify this expression, we first use the fact that $(\\mathbb{A} - \\mathbb{B})_{RQ}$ happens to be a Hermitian matrix:\n\n\n```python\nnp.allclose(A - B, (A - B).T)\n```\n\n\n\n\n    True\n\n\n\nThen note that $(\\mathbb{A} - \\mathbb{B})_{QR} (\\mathbf{X}^n - \\mathbf{Y}^n)_R = \\omega_n (\\mathbf{X}^n + \\mathbf{Y}^n)_Q$; the expression then becomes\n\n$$\n\\alpha_{ts} (\\omega) = 2 d_P^t (\\mathbf{X}^n + \\mathbf{Y}^n)_P \\cdot \\frac{\\omega_n}{\\omega_n^2 - \\omega^2} \\cdot (\\mathbf{X}^n + \\mathbf{Y}^n)_Q d_Q^s\n$$\n\nRecalling that the transition dipole moment is defined as $\\langle 0 | \\hat d{}^t | n \\rangle = d_P^t (\\mathbf{X}^n + \\mathbf{Y}^n)_P$, and using the small trick\n\n$$\n\\frac{\\omega_n}{\\omega_n^2 - \\omega^2} = \\frac{1}{2} \\left( \\frac{1}{\\omega_n - \\omega} + \\frac{1}{\\omega_n + \\omega} \\right)\n$$\n\nwe conclude that\n\n$$\n\\alpha_{ts} (\\omega) = \\frac{\\langle 0 | \\hat d{}^t | n \\rangle \\langle n | \\hat d{}^s | 0 \\rangle}{\\omega_n - \\omega} + \\frac{\\langle 0 | \\hat d{}^t | n \\rangle \\langle n | \\hat d{}^s | 0 \\rangle}{\\omega_n + \\omega}\n$$\n\n\n```python\n(\n + np.einsum(\"tP, nP, n, nQ, sQ -> ts\", d_P, X + Y, 1 / (td_eig - omega), X + Y, d_P)\n + np.einsum(\"tP, nP, n, nQ, sQ -> ts\", d_P, X + Y, 1 / (td_eig + omega), X + Y, d_P)\n)\n```\n\n\n\n\n    array([[ 7.28458, -0.05683, -2.08145],\n [-0.05683, 4.79845, -1.39731],\n [-2.08145, -1.39731, 37.73368]])\n\n\n\nThis completes the derivation, starting from the general Casida equation, of the transition-dipole-moment expression for the polarizability.\n\nIf we accept the assumptions underlying the Casida equation, the derivation above is rigorous.\n\n### Relation between the TD-HF frequency-dependent polarizability and the CP-HF equations\n\nLet us first recall the CP-HF equation used when computing the static polarizability:\n\n$$\nA'_{ia, jb} U^t_{jb} = d^t_{ia}\n$$\n\nWritten with compound indices, this is\n\n$$\nA'_{PQ} U^t_Q = d^t_P\n$$\n\nand in matrix form,\n\n$$\n\\mathbf{A}' \\mathbf{U}^t = \\mathbf{d}^t\n$$\n\nNoting that $A'_{PQ} = (\\mathbb{A} + \\mathbb{B})_{PQ}$, this corresponds to eq. (5), one of the relations derived from the Casida equation. If $\\omega = 0$, then\n\n$$\n(\\mathbb{A} + \\mathbb{B}) (\\mathbf{X}'{}^t + \\mathbf{Y}'{}^t) = 4 \\mathbf{d}^t\n$$\n\nTherefore, in the static case $\\omega = 0$, $\\mathbf{U}^t = \\frac{1}{4} (\\mathbf{X}' + \\mathbf{Y}')$.\n\nIf $\\omega \\neq 0$, however, eq. (5) should be written as\n\n$$\n\\left( (\\mathbb{A} + \\mathbb{B}) - \\omega^2 (\\mathbb{A} - \\mathbb{B})^{-1} \\right) (\\mathbf{X}'{}^t + \\mathbf{Y}'{}^t) = 4 \\mathbf{d}^t\n$$\n\nIf we extend the CP-HF equation to a frequency-dependent form\n\n$$\n\\mathbf{A}' (\\omega) \\mathbf{U}^t (\\omega) = \\mathbf{d}^t\n$$\n\nand write the polarizability as\n\n$$\n\\alpha_{ts} (\\omega) = 4 \\mathbf{U}^t (\\omega)^\\dagger \\mathbf{d}^s\n$$\n\nthen\n\n$$\n\\begin{align}\n\\mathbf{A}' (\\omega) &= (\\mathbb{A} + \\mathbb{B}) - \\omega^2 (\\mathbb{A} - \\mathbb{B})^{-1} \\\\\n\\mathbf{U}^t (\\omega) &= \\frac{1}{4} (\\mathbf{X}'{}^t + \\mathbf{Y}'{}^t)\n\\end{align}\n$$\n\nKeep in mind that although we have not written the $(\\omega)$ argument explicitly before, $\\mathbf{X}'{}^t, \\mathbf{Y}'{}^t$ do vary with the frequency.\n\n\n```python\nomega = 0.186\nA_p_omega = (A + B) - omega**2 * np.linalg.inv(A - B)\nU_omega = np.einsum(\"PQ, tQ -> tP\", np.linalg.inv(A_p_omega), d_P)\n4 * np.einsum(\"tP, sP -> ts\", U_omega, d_P)\n```\n\n\n\n\n    array([[ 7.28458, -0.05683, -2.08145],\n [-0.05683, 4.79845, -1.39731],\n [-2.08145, -1.39731, 37.73368]])\n\n\n\nThis connects the CP-HF and TD-HF formulas in the frequency-dependent case.\n\n## Summary\n\nIn this document we gave a brief (and not very rigorous or complete) review of the calculation of static and frequency-dependent polarizabilities, derived the frequency-dependent polarizability from the Casida equation, and connected the TD-HF analysis with the CP-HF approach. Some of the main conclusions and possible extensions are:\n\n- The TD-HF and CP-HF equations are very closely related in the static-polarizability case, where the two formulations are completely equivalent. In the frequency-dependent case, the TD-HF equation (the Casida equation) can be used to derive an equation whose form is similar to the CP-HF equation; this is presumably the prototype of the Frequency-Dependent CP-HF equations.\n\n- From the CP point of view, based on\n\n $$\n (\\mathbb{A} + \\mathbb{B}) (\\mathbf{X}'{}^t + \\mathbf{Y}'{}^t) = 4 \\mathbf{d}^t + \\omega^2 (\\mathbb{A} - \\mathbb{B})^{-1} (\\mathbf{X}'{}^t + \\mathbf{Y}'{}^t)\n $$\n \n or\n \n $$\n \\mathbf{A}' (\\omega) \\mathbf{U}^t (\\omega) = \\mathbf{d}^t\n $$\n\n we may think of the left-hand side as the relaxation of the electron density, and of the right-hand side as the external perturbing field the electron density experiences. The excitation of the electron density can therefore be viewed as a perturbation by an external field.\n \n If we specialize to a molecular excitation with no dipole perturbation at all, then from the CP point of view the relaxation of the deformed electron density (left-hand side) should exactly compensate the deforming field produced by the excitation itself (right-hand side). When the two become self-consistent, the excitation of the electronic state is established.\n\n- The TD-HF equation discussed above is the Linear Response part of TD analysis. TD analysis in fact has higher-order procedures as well, but we have not touched on them here.\n\n An obvious objection is that the electron density already sits in an external dipole field, so its properties should already have changed; yet the TD-HF equation (Casida equation) obtained from Linear Response tells us that the excitation energies $\\omega_n$ of the system do not change, no matter how large the dipole field is. This clearly goes against physical intuition. It may also mean that, for methods built on MP2 or CC, if the added approximate correlation-energy-density perturbation field fails to capture the physical reality, the frequency-dependent polarizability may likewise be stuck with excitation energies $\\omega_n$ that are still the TD-HF ones.\n\n- Although we do not plot it in this document, it is indeed the case that if the incident frequency falls in a region where the molecular excited states are dense, computing the frequency-dependent polarizability at a single incident frequency is very likely meaningless; in such a region the polarizability curve oscillates extremely violently, is almost completely irregular, and its absolute value also grows absurdly large.\n\n Raman spectra can be regarded as derivatives of the frequency-dependent polarizability with respect to the normal coordinates of the molecule. From the experimental point of view, this may be a very good property for the chemical enhancement effect in surface-enhanced Raman spectroscopy (SERS). If an organic molecule is loaded on a silver cluster, and the incident excitation light lies in the excitation band of the silver cluster without destroying the organic molecule, or induces charge transfer between the silver cluster and the organic molecule, the Raman signal can indeed be enhanced by factors of thousands to tens of thousands. From the computational point of view, however, this may be rather hopeless, because the error of such a calculation is almost impossible to control; with different density functional approximations, even if the computed excitation bands differ by only 0.1 eV and the results are qualitatively correct, the computed Raman signal can still be off by factors of hundreds to thousands.\n \n Therefore, we may need to apply some kind of frequency broadening to the incident light to smooth out the singularities of the polarizability and obtain relatively quantitative numbers; or treat the polarizability as an atomically additive property and perform a qualitative, approximate analysis similar to an AIM (Atoms in Molecules) partitioning of the polarizability.\n\n[^Valley-Schatz.JPCL.2013]: Valley, N.; Greeneltch, N.; Duyne, R. P. V.; Schatz, G. C. A Look at the Origin and Magnitude of the Chemical Contribution to the Enhancement Mechanism of Surface-Enhanced Raman Spectroscopy (SERS): Theory and Experiment. *J. Phys. Chem. Lett.* **2013**, *4* (16), 2599\u20132604. 
    
doi: [10.1021/jz4012383](https://doi.org/10.1021/jz4012383).\n", "meta": {"hexsha": "df2604e008f4178552bb3add58f492f6bfe4099d", "size": 564602, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "source/QC_Notes/Freq_Polar/Freq_Polar.ipynb", "max_stars_repo_name": "ajz34/ajz34.readthedocs.io", "max_stars_repo_head_hexsha": "73be05a73241c18b98fd0d4dbdc48c643278c3da", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-07-30T12:31:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-14T03:56:56.000Z", "max_issues_repo_path": "source/QC_Notes/Freq_Polar/Freq_Polar.ipynb", "max_issues_repo_name": "ajz34/ajz34.readthedocs.io", "max_issues_repo_head_hexsha": "73be05a73241c18b98fd0d4dbdc48c643278c3da", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/QC_Notes/Freq_Polar/Freq_Polar.ipynb", "max_forks_repo_name": "ajz34/ajz34.readthedocs.io", "max_forks_repo_head_hexsha": "73be05a73241c18b98fd0d4dbdc48c643278c3da", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-07-30T12:32:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-30T12:32:09.000Z", "avg_line_length": 97.5975799481, "max_line_length": 107823, "alphanum_fraction": 0.7882614656, "converted": true, "num_tokens": 21822, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.6370307944803831, "lm_q2_score": 0.42632159254749036, "lm_q1q2_score": 0.27157998280467}} {"text": "
    
    
    \n\n\n\n```\n# IMPORTS\nfrom audiomodels import *\n```\n\n\n```\ndataload = Sass('/home/penalvad/stattus4/dnn_test_data/', num_samples=7, number_of_batches=1, split_tax=.9, freq_size=600, time_size=80, same_data_validation=True)\ndataload_2 = Sass('/home/penalvad/stattus4/test/', num_samples=5, number_of_batches=1, split_tax=.9, freq_size=480, time_size=600, same_data_validation=True)\n```\n\n\n```\ndtrain = dataload.training()\ndtrain_2 = dataload_2.training()\n```\n\n\n```\ndata_train = np.ndarray( shape=(12,600,80,1) )\ndata_test = np.ndarray( shape=(8,600,50) )\nfor i in range(6):\n\n data_train[2*i,:,:,0] = dtrain[i][0][2]\n data_train[2*i+1,:,:,0] = dtrain[i][1][2]\n```\n\n# Architecture Basics\n## Gated CNN\nA Gated CNN is a doubled CNN in which one of the convolved signals plays the role of opening/closing the network: because it is passed through a sigmoid, it gives the convolution an **Attention Mechanism**.\nIt also avoids vanishing gradients, since the product rule applies to the derivative and the gradient flows to the linear convolved part as well.\n\n### GCNN 2D\nThis unit applies a sigmoid to every frame pixel and performs an element-wise multiplication. Use it for images or for time-series TF-feature frames. The standard convolution mode is Valid. The output size of the convolution is given by:\n\nPadding Valid:\n\n$$\n\\text{output size} = \\left\\lceil \\frac{\\text{dim size} - (\\text{kernel dim size} - 1) \\cdot \\text{dilation rate}}{\\text{stride}} \\right\\rceil\n$$\n\nwhere the dilation rate is how many pixels apart the filter taps are spaced from each other; when the dilation rate is larger than 1, keep stride = 1 (if you also stride, you will be losing information from the data).\n\n\n```\n#Cascading deepness networks mem size\n\ndef channeling_rule(channelin, filterlist):\n channellist = [channelin]\n for filter in filterlist:\n channellist.append(channellist[-1] + filter + 3)\n\n return channellist\n```\n\n\n```\n# Label Vector Feed\n# dtrain[0][0][1]\nltrain = []\nltest = []\n\nfor i in range(6):\n ltrain.append([0,1])\n ltrain.append([1,0])\n\nfor i in range(4):\n ltest.append([0,1])\n ltest.append([1,0])\n```\n\n\n```\n# Define Filter-Bank of the CNN \nfilterlist = []\nfilterlista = [3,3,3,3]\nfilterlist.append(filterlista)\na = calculatefilter(600, filterlista)\n\n# Max pooling\n\nfilterlistb = [5,5,5,5]\nfilterlist.append(filterlistb)\nb = calculatefilter(a//2.0, filterlistb)\n\n# Max pooling\n\nfilterlistc = [7,7,7,7]\nfilterlist.append(filterlistc)\nc = calculatefilter(b//2., filterlistc)\n\n# Max pooling \n\nfilterlistd = [5,5,5,5]\nfilterlist.append(filterlistd)\nd = calculatefilter(c//2.0, filterlistd)\n\n\n# Max pooling \n\nfilterliste = [5,4,4]\nfilterlist.append(filterliste)\ne = calculatefilter(d//2.0, filterliste)\nprint(e)\n```\n\n\n```\n#Model Building\n\n# Input Frames\n\n#Frequency\nfreq = 600*np.ones(10)\n\n#Time \ntime = 8*np.ones(10) \n\n\nbuildgraph = Builder(dtype=tf.float32, datasize=(freq,time), num_input = 10, channels=1)\nbuildgraph.get_directives('gcnn2d')\nbuildgraph.get_directives('softmax')\n#buildgraph.set_archname('frame')\nbuildgraph.get_directives('reducemean')\nbuildgraph.get_directives('losscrossentropy')\npooling = [2,2,2,2]\n\nfor i in range(10):\n\n buildgraph.config_block_name(deepness=len(filterlist[0]),numblock=i)\n for j in np.arange(len(filterlist)):\n\n cout = 20 + j*10\n if j != 0:\n buildgraph.config_block_name(deepness=len(filterlist[j]),numblock=i)\n\n for z in np.arange(len(filterlist[j])):\n \n fw = (np.floor((np.arange(0,80,4) + 4)*0.03)+1)\n\n if z == 0 and j == 0:\n 
    
    buildgraph.build_graph_module('cnn2d', channels_out = 64, filter_size=(filterlist[j][0], 8), isinput=True, lastnamescope=True, verbose=False)\n\n # z := the layer, inside a CNN block (between max poolings), where the gate sits.\n # j := the index of the CNN block that contains the gate layer z.\n if z == (len(filterlist[j])-1) and (j == 4):\n buildgraph.build_graph_module('gcnn2d', channels_out = cout, filter_size=(filterlist[j][z], 1), isinput=False, lastnamescope=True, verbose=False)\n else:\n buildgraph.build_graph_module('cnn2d', channels_out = cout, filter_size=(filterlist[j][z], 1), isinput=False, lastnamescope=True, verbose=False)\n \n if j < len(filterlist) - 1:\n buildgraph.config_block_name(deepness=-1,numblock=i,pool=j) \n buildgraph.build_graph_module('maxpooling2d', poolsize = (pooling[j],1), isinput=False, lastnamescope=True, verbose=False)\n \n buildgraph.config_block_name(deepness=0,numblock=i)\n buildgraph.build_graph_module('softmax', num_labels=2, isinput=False, lastnamescope=True)\n\nbuildgraph.config_block_name(deepness=0,numblock=0)\nbuildgraph.build_graph_module('reducemean', isinput=False, lastnamescope=True)\nbuildgraph.config_block_name(deepness=0,numblock=0)\nbuildgraph.build_graph_module('losscrossentropy', num_labels = 2, isinput=False, lastnamescope=True, show_cgraph=False)\n```\n\n\n```\nprint_display_cgraph(buildgraph.graph)\n```\n\n\n```\nfeed_dict = {}\nfor i in range(10): \n feed_dict[buildgraph.signal_in[i]] = data_train[:12, :, 8*i:8*(i+1), :]\n feed_dict[buildgraph.label_tensor] = ltrain[:12]\n feed_dict[buildgraph.learning_rate] = 0.00001\noutput = buildgraph.run_cgraph(feed_dict, op_to_run =buildgraph.namescopes['reducemean'][-1].name, number_of_runs = 5, mode = 'minimize', ckpt_dir_name='../ckpt/model', new_session=True, output_log = True, stop=0.4, adaptative_lr=True, verbose=True)\n\n```\n\n\n```\n# Example of Recovering Model from .meta and .data. 
    
    It uses .index and checkpoint to map variables and last checkpoint to the graph\ngraph = tf.Graph()\n\nwith graph.as_default():\n\n    saver = tf.train.import_meta_graph(\"../ckpt/model.meta\", import_scope='')\n    feed_dict = {}\n    with tf.Session(graph=graph) as sess:\n\n        saver.restore(sess, tf.train.latest_checkpoint('../ckpt/'))\n        feed_dict[sess.graph.get_tensor_by_name('signal_in:0')] = data_train[:14,:,:8,:]\n        for i in np.arange(1,10):\n            feed_dict[sess.graph.get_tensor_by_name('signal_in_'+str(i)+':0')] = data_train[:14,:,8*i:8*(i+1),:]\n        feed_dict[sess.graph.get_tensor_by_name('losscrossentropy/labels:0')] = ltrain[:14]\n        feed_dict[sess.graph.get_tensor_by_name('learning_rate:0')] = 0.0001\n        #for op in graph.get_operations():\n        #    print(2*'\\n')\n        #    print(op.name)\n        minimize_op = sess.graph.get_operation_by_name('losscrossentropy/Adam')\n        loss = sess.graph.get_tensor_by_name('losscrossentropy/categorical_crossentropy/weighted_loss/value:0')\n        acc = sess.graph.get_tensor_by_name('reducemean/Mean:0')\n        for i in range(400):\n            lossval, _ = sess.run([loss, minimize_op], feed_dict=feed_dict)\n            accval = sess.run([acc], feed_dict=feed_dict)\n            if lossval < 0.3:\n                saver.save(sess, '../ckpt/model2')\n                print('\\n')\n                print('saving')\n                print('\\n')\n\n            print('loss ', lossval)\n            print('acc ', accval)\n            print(2*'\\n')\n```\n", "meta": {"hexsha": "d15651e70af718ffdb06acea8e6e239c6cf33940", "size": 11371, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Audio_GatedCNNArch.ipynb", "max_stars_repo_name": "Uiuran/stattus4-audio-models", "max_stars_repo_head_hexsha": "1ef4a9eca5d7540585c77d9fac85d5d4b3a7b07e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/Audio_GatedCNNArch.ipynb", "max_issues_repo_name": "Uiuran/stattus4-audio-models", "max_issues_repo_head_hexsha": "1ef4a9eca5d7540585c77d9fac85d5d4b3a7b07e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/Audio_GatedCNNArch.ipynb", "max_forks_repo_name": "Uiuran/stattus4-audio-models", "max_forks_repo_head_hexsha": "1ef4a9eca5d7540585c77d9fac85d5d4b3a7b07e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.7737003058, "max_line_length": 373, "alphanum_fraction": 0.5547445255, "converted": true, "num_tokens": 2193, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5774953651858118, "lm_q2_score": 0.4687906266262437, "lm_q1q2_score": 0.27072441411920817}} {"text": "These tutorials are licensed by [Bernard Koch](http://www.github.com/kochbj) under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).\n\n# Tutorial 1: Introduction to Deep Learning for Causal Inference on Observables\n \nThe following tutorials are a gentle introduction to building deep learning models for causal inference using the selection on observables identification strategy. 
    
In particular, these model are designed to estimate the average treatment effect (ATE) and the conditional average treatment effect (CATE). The ATE is defined as:\n \n$$ATE =\\mathbb{E}[Y(1)-Y(0)]$$\n \nwhere $Y(1)$ and $Y(0)$ are the potential outcomes had the unit received or not received the treatment, respectively. The CATE is defined as,\n \n$$CATE =\\mathbb{E}[Y(1)-Y(0)|X=x]$$\n \nwhere $X$ is the set of selected, observable covariates, and $x \\in X$.\n \nBecause selection on observables is a simple identification strategy, these estimators are simple neural networks. This tutorial is thus also a gentle introduction to writing models in TensorFlow, and getting started coding deep learning models.\n \n**These tutorials are for you if:**\n \n1. You want a quick and dirty introduction to DL + Selection on Observables literature with minimal math, or...\n \n2. You want a gentle introduction to writing and training custom models in Tensorflow 2 and...\n \n3. You have a basic familiarity with causal inference and...\n \n4. You have a basic familiarity with Python and object oriented programming.\n\n## Why use deep learning for causal inference?\n\n1. Appropriately built neural network models are among the **lowest bias** estimators in our statistical arsenal.\n\n2. For similar reasons, the complex response surfaces learned by neural networks make them well-suited for estimating **heterogeneous treatment effects**.\n\n4. Most excitingly, deep learning has the ability to allow us to control for confounding found in **complex data types like images, text, and networks**.\n\n3. Although most of these models don't make theoretical guarantees, representation learning **might be more robust to empirical violations of overlap** than simpler adjustment strategies.\n\n5. Although most of these models don't make theoretical guarantees, representation learning **might be more robust to bias induced by including instruments** in your conditioning covariates.\n\nOne more point: even if we cannot formally satisfy causal inference assumptions, these architectures are still very useful for **creating interpretable ML models** where we can isolate the contributions of specific covariates to predicting the outcome.\n\n## Notation\n**Causal identification**\n\n- Observed covariates/features: $X$\n\n- Potential outcomes: $Y(0)$ and $Y(1)$\n\n- Treatment: $T$\n\n- Average Treatment Effect: $ATE =\\mathbb{E}[Y(1)-Y(0)]$\n\n- Conditional Average Treatment Effect: $CATE =\\mathbb{E}[Y(1)-Y(0)|X=x]$\n\n**Deep learning estimation**\n\n- Predicted outcomes: $\\hat{Y}(0)$ and $\\hat{Y}(1)$\n\n- Outcome modeling functions: $\\hat{Y}(T)=h(X,T)$\n\n- Representation functions: $\\Phi(X)$ (producing representations $\\phi$)\n\n- Loss functions: $\\mathcal{L}(true,predicted)$, with the mean squared error abbreviated $MSE$ and binary cross-entropy as $BCE$\n\n- Estimated CATE: $\\hat{CATE}=(1-2t)(\\hat{{y}}(t)-\\hat{y}(1-t))$\n\n- Estimated ATE: $\\hat{ATE}=\\frac{1}{n}\\sum_{i=1}^n\\hat{CATE_i}$\n\n\n## Standard assumptions for causal identification under selection on observables\nStandard assumptions for model-based causal inference apply here (from [Johansson et al., 2020](https://arxiv.org/pdf/2001.07426.pdf)): \n1. **Ignorability/Exchangability**.The potential outcomes $Y(0)$, $Y(1)$ and the treatment $T$ are conditionally independent given $X$,\n$$Y(0),Y(1)\\perp \\!\\!\\! 
\\perp T|X $$\nIgnorability specifies that there are no *unmeasured confounders* that affect both treatment and outcome outside of those in the observed covariates/features $X$.\n\n\n2. **Consistency/Stable Unit Treatment Value Assumption (SUTVA)**. Consistency specifies that when a unit recieves treatment, we observe the potential outcome. Moreover, the response of any unit does not vary with the treatment assignment to other units (i.e., no network effects), and the form/level of treatment is homogeneous and consistent across units,\n$$T=t \\rightarrow Y=Y(T)$$\n\n\n3. **Overlap** In any context $x \\in X$, any treatment $t\\in \\{0,1\\}$ has a non-zero probability of being observed in the data, \n\n$$\\forall x \\in X, t\\in\\{0,1\\}:p(T=t|X=x)>0$$\n\nNote that the overlap assumption does not require that the empirical data are necessarily balanced, but that the two treatment distributions have common support. \n\n## Representation learning as a balancing strategy\n\nA core concept in deep learning is the idea that artificial neural networks have the capacity to project a set of complex features $X$ into a useful vector space. When data are transformed into this space, we call the resulting tensor a **representation** ([Goodfellow, et al. 2016](https://www.deeplearningbook.org/contents/representation.html)) (you might also see the term \"embedding\"). For social scientists most comfortable with linear models, we can think about the parameters in each feed-forward layer of a deep neural network as capturing every possible interaction between the values produced by the previous layer. Tasking the network to minimize error on a relevant downstream task encourages it to adjust these interaction parameters to learn useful representations. We can also think about these representation layers as automatically extracting useful latent covariates/features.\n\nThe key intuition in this literature is that we want to train neural networks to learn a representation function $\\Phi(X)$ where the data are deconfounded/balanced in the representation space. In other words, the distributions of the representations $\\Phi(X|T=0)$ and $\\Phi(X|T=1)$ are similar.\n\n
From Shen and Johansson talk 2018
\n\nNote that $\\Phi$ must, in theory, be an invertible function for the ignorability and overlap assumptions to hold. By invertible we mean that there is an inverse function such that $\\Phi^{-1}(\\Phi(X))=X$.\n \n\n# TARNet (Baseline model)\n\nTo encourage balanced representations, [Johansson et al., 2017](https://arxiv.org/pdf/1605.03661.pdf) propose a simple two-headed neural network called Treatment Agnostic Regression Network (TARNet). Each head models a separate outcome. One head learns the function $\\hat{Y}(1)=h(\\Phi(X),1)$, and the other head learns the function $\\hat{Y}(0)=h(\\Phi(X),0)$. Both heads backpropagate their gradients to shared representation layers that learn $\\Phi(X)$. Again, the hope is that these representation layers will learn to balance the data because they are used to predict both outcomes.\n\n
TARNet architecture originally introduced in Johansson et al, 2017. Dashed lines indicate plug-in estimation of CATE (empirical risk) as described above. Green indicates raw data. Figure from Johansson et al, 2020.
\n\n\nOther than this architectural change, this is a standard regression DNN where we minimize the factual MSE for each head:\n\n$$\\mathcal{L}(Y,h(\\Phi(X),T))=MSE(Y,h(\\Phi(X),T))=\\frac{1}{n}\\sum_{i=1}^n [h(\\Phi(x_i),t_i)-y_i(t_i)]^2$$\n\n The complete objective for the network is to minimize the parameters of $h$ and $\\Phi$ for all $n$ units in the training sample such that,\n\n\\begin{equation}\n\\min_{h,\\Phi}\\frac{1}{n}\\sum_{i=1}^n \\mathcal{L}(y_i(t_i),h(\\Phi(x_i),t_i)) + \\lambda \\mathcal{R}(h)\\end{equation}\n\nwhere $\\mathcal{R}(h)$ is a model complexity term (e.g., for $L_2$ regularization) and $\\lambda$ is a hyperparameter chosen by the user.\n\n##Estimating Causal Effects\nAfter training, we will get two predictions: $\\hat{y}(t)$ and $\\hat{y}(1-t)$. We can estimate the $CATE$ in the testing sample (or full sample since this is inference) as,\n$$\\hat{CATE}=(1-2t)(\\hat{y}(t)-\\hat{y}(1-t))$$\nand the average treatment effect as,\n$$\\hat{ATE}=\\frac{1}{n}\\sum_{i=1}^n\\hat{CATE_i}$$\n\n\n## Coding up TARNet\n\nOkay, let's build the model! The rest of this tutorial basically annotates Claudia Shi's beautiful [implementation](https://github.com/claudiashi57/dragonnet) of TARNet from her [DragonNet paper](https://arxiv.org/pdf/1906.02120.pdf) (featured in a subsequent tutorial). Tensorflow provides a lot of flexibility for building custom models. As I go along I will try and point out alternative ways we could have done things. If you've never used Keras or Tensorflow 2, I'd recommend going over some standard tutorials first to get your bearings, but otherwise this is about as simple as it gets.\n\nFirst let's get our packages...\n\n\n```\nimport tensorflow as tf\nimport numpy as np #numpy is the numerical computing package in python\nimport datetime #we'll use dates to label our logs\nprint(tf.__version__)\n```\n\n\nThe next block specifies a function to build the model using Tensorflow 2's functional API. The functional API is one of three API's in Tensorflow (see this [post](https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0-dfccecb01021) for pros and cons). An example implementation of TARNet in the imperative object-oriented API is in the spoiler below (closer to how it would be implemented in PyTorch).\n\n
Imperative API Implementation\n\n The same model above might look something like this in the imperative API:\n```python\nclass TarNet(tf.keras.Model):\n def __init__(self,\n input_dim,\n name='tarnet',\n regularization=.01,\n **kwargs):\n super(TarNet, self).__init__(name=name, **kwargs)\n self.encoder1=Dense(units=200, activation='elu', kernel_initializer='RandomNormal')\n self.encoder2=Dense(units=200, activation='elu', kernel_initializer='RandomNormal')\n self.encoder3=Dense(units=200, activation='elu', kernel_initializer='RandomNormal')\n\n self.regressor1_y0 = Dense(units=100, activation='elu', kernel_regularizer=tf.keras.regularizers.l2(regularization))\n self.regressor2_y0 = Dense(units=100, activation='elu', kernel_regularizer=tf.keras.regularizers.l2(regularization))\n self.regressorO_y0 = Dense(units=1, activation=None, kernel_regularizer=tf.keras.regularizers.l2(regularization))\n\n self.regressor1_y1 = Dense(units=100, activation='elu', kernel_regularizer=tf.keras.regularizers.l2(regularization))\n self.regressor2_y1 = Dense(units=100, activation='elu', kernel_regularizer=tf.keras.regularizers.l2(regularization))\n self.regressorO_y1 = Dense(units=1, activation=None, kernel_regularizer=tf.keras.regularizers.l2(regularization))\n\n\n def call(self,inputs):\n x=self.encoder1(inputs)\n x=self.encoder2(x)\n phi=self.encoder3(x)\n\n out_y0=self.regressor1_y0(phi)\n out_y0=self.regressor2_y0(out_y0)\n y0=self.regressorO_y0(out_y0)\n\n out_y1=self.regressor1_y1(phi)\n out_y1=self.regressor2_y1(out_y1)\n y1=self.regressorO_y1(out_y1)\n\n concat=tf.concat([y0,y1,propensity],axis=1)\n return concat\n```\n
\n\nThe model itself is pretty simple. We use two representation layers with 200 neurons each, and two output layers for each head with 100 neurons each. Because treatment information is implicit in in the two heads, we only need one input: $X$. There are again a couple ways to have multiple outputs in the Functional API, but here we concatenate the two outputs into a list of vectors. We apply regularization to the output heads. \n\n\n```\nfrom tensorflow.keras.layers import Input\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.layers import Concatenate\nfrom tensorflow.keras import regularizers\nfrom tensorflow.keras import Model\n\ndef make_tarnet(input_dim, reg_l2):\n '''\n The first argument is the column dimension of our data.\n It needs to be specified because the functional API creates a static computational graph\n The second argument is the strength of regularization we'll apply to the output layers\n '''\n x = Input(shape=(input_dim,), name='input')\n\n # REPRESENTATION\n #in TF2/Keras it is idiomatic to instantiate a layer and pass its inputs on the same line unless the layer will be reused\n #Note that we apply no regularization to the representation layers \n phi = Dense(units=200, activation='elu', kernel_initializer='RandomNormal',name='phi_1')(x)\n phi = Dense(units=200, activation='elu', kernel_initializer='RandomNormal',name='phi_2')(phi)\n phi = Dense(units=200, activation='elu', kernel_initializer='RandomNormal',name='phi_3')(phi)\n\n # HYPOTHESIS\n y0_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2),name='y0_hidden_1')(phi)\n y1_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2),name='y1_hidden_1')(phi)\n\n # second layer\n y0_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2),name='y0_hidden_2')(y0_hidden)\n y1_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2),name='y1_hidden_2')(y1_hidden)\n\n # third\n y0_predictions = Dense(units=1, activation=None, kernel_regularizer=regularizers.l2(reg_l2), name='y0_predictions')(y0_hidden)\n y1_predictions = Dense(units=1, activation=None, kernel_regularizer=regularizers.l2(reg_l2), name='y1_predictions')(y1_hidden)\n\n #a convenience \"layer\" that concatenates arrays as columns in a matrix\n concat_pred = Concatenate(1)([y0_predictions, y1_predictions])\n #the declarations above have specified the computational graph of our network, now we instantiate it\n model = Model(inputs=x, outputs=concat_pred)\n\n return model\n\n```\n\nThe `summary` method can be used to confirm that the architecture is specified correctly.\n\nOne of the advantages of the functional API is that you can also visualize static computational graphs (very similar to the cartoon representation above).\n\n\n```\ntarnet_model=make_tarnet(25,.01)\n\nprint(tarnet_model.summary())\ntf.keras.utils.plot_model(tarnet_model, show_shapes=True, show_layer_names=True, to_file='tarnet.png')\n\nfrom IPython.display import Image # this just Jupyter notebook stuff\nImage(retina=True, filename='tarnet.png')\n```\n\n## Specifying the loss function\nThere are again at least four different ways to specify loss functions in Tensorflow2: if you have a standard loss there are built-in options, you can specify them as custom functions, custom objects, or build them into custom layers of your network. 
Here we've written a function.\n\n Note that we compute $\\mathcal{L}(Y(0),h(\\Phi(X),0))$ and $\\mathcal{L}(Y(1),h(\\Phi(X),1))$ separately and just add them to get the whole loss. Tensorflow will apply the gradients appropriately to the different outcome and representation layers.\n\n\n```\n# every loss function in TF2 takes 2 arguments, a vector of true values and a vector predictions\ndef regression_loss(concat_true, concat_pred):\n #computes a standard MSE loss for TARNet\n y_true = concat_true[:, 0] #get individual vectors\n t_true = concat_true[:, 1]\n\n y0_pred = concat_pred[:, 0]\n y1_pred = concat_pred[:, 1]\n\n #Each head outputs a prediction for both potential outcomes\n #We use t_true as a switch to only calculate the factual loss\n loss0 = tf.reduce_sum((1. - t_true) * tf.square(y_true - y0_pred))\n loss1 = tf.reduce_sum(t_true * tf.square(y_true - y1_pred))\n #note Shi uses tf.reduce_sum for her losses instead of tf.reduce_mean.\n #They should be equivalent but it's possible that having larger gradients accelerates convergence.\n #You can always try changing it!\n return loss0 + loss1\n```\n\n## Data \n\nThe IHDP dataset used in this example is a naturalistic simulation introduced in [Hill, 2011](https://www.tandfonline.com/doi/abs/10.1198/jcgs.2010.08162?casa_token=b8-rfzagECIAAAAA:QeP7C4lKN6nZ7MkDjJHFrEberXopD9M5qPBMeBqbk84mI_8qGxj01ctgt4jdZtORpu9aZvpVRe07PA) to evaluate estimation of heterogeneous treatment effects ($CATE$). The 25 covariates/features for the 747 units (139 treated) in the dataset were taken from an experiment, but Hill simulated the outcomes to create known counterfactuals. The data are available from Fredrik Johansson's website. IHDP is the de facto benchmark in this literature.\n\n
Additional details from Hill, 2011\n
[Hill] used experimental data from the Infant Health and Development Program (IHDP), a randomized experiment that began in 1985, targeted low-birth-weight, premature infants, and provided the treatment group with both intensive high-quality child care and home visits from a trained provider.... [The response surface] is nonlinear and not parallel across treatment conditions, with $Y(0)\u223c\\mathcal{N}(exp((X+W)\\beta_B),1)$ and $Y(1)\u223c\\mathcal{N}(X\\beta_B\u2212\\omega^s_B,1)$, where $W$ is an offset matrix of the same dimension as $X$ with every value equal to 0.5, $\\beta_B$ is a vector of regression coefficients (0, 0.1, 0.2, 0.3, 0.4) randomly sampled with probabilities (0.6, 0.1, 0.1, 0.1,0.1). For the sth simulation, $\\omega^s_B$ was chosen in the overlap setting, where we estimate the effect of the treatment on the treated, such that theconditional average treatment effect for the treated equals 4.
\n
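    \n\nTo make the simulation described in the spoiler above concrete, here is a small, purely illustrative sketch of response surface B. Everything in it is made up for illustration (stand-in covariates, treatment assignment, and random seed); the real IHDP covariates come from the experiment, and the simulated outcomes are already stored in the files we download below.\n\n\n```\n#Purely illustrative toy version of Hill's response surface B (NOT the real IHDP simulation code)\nimport numpy as np\nrng = np.random.default_rng(0)\nn, p = 747, 25\nX = rng.normal(size=(n, p)) #stand-in covariates\nt = rng.binomial(1, 0.2, size=n) #stand-in treatment assignment\nW = 0.5 * np.ones_like(X) #offset matrix with every value equal to 0.5\nbeta = rng.choice([0, .1, .2, .3, .4], size=p, p=[.6, .1, .1, .1, .1]) #coefficients sampled as in Hill, 2011\nmu_0 = np.exp((X + W) @ beta) #noiseless Y(0) surface\nmu_1 = X @ beta #noiseless Y(1) surface, before the omega offset\nomega_s = (mu_1[t == 1] - mu_0[t == 1]).mean() - 4 #choose omega so the CATT equals 4\nmu_1 = mu_1 - omega_s\ny0 = rng.normal(mu_0, 1) #noisy potential outcomes\ny1 = rng.normal(mu_1, 1)\nprint('CATT in this toy draw:', (mu_1[t == 1] - mu_0[t == 1]).mean()) #equals 4 by construction\n```\n\nThis is only meant to show how the nonlinear, non-parallel response surfaces arise; the real files below already contain `mu0`, `mu1`, and the factual outcomes.
    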
\n\n`y` is the simulated outcome that may represent $Y(0)$ or $Y(1)$ depending on `t`. Note that we rescale it here to improve convergence. `mu_0` and `mu_1` are \"noiseless\" potential outcomes where Hill simply used the mean of the normal distribution described in the spoiler.\n\nThere are 100 stochastic simulations in this data. For this example we will just use the eighth one.\n\n\n```\nfrom sklearn.preprocessing import StandardScaler\n!wget -nc http://www.fredjo.com/files/ihdp_npci_1-100.train.npz\n!wget -nc http://www.fredjo.com/files/ihdp_npci_1-100.test.npz \n \ndef load_IHDP_data(training_data,testing_data,i=7):\n with open(training_data,'rb') as trf, open(testing_data,'rb') as tef:\n train_data=np.load(trf); test_data=np.load(tef)\n y=np.concatenate( (train_data['yf'][:,i], test_data['yf'][:,i])).astype('float32') #most GPUs only compute 32-bit floats\n t=np.concatenate( (train_data['t'][:,i], test_data['t'][:,i])).astype('float32')\n x=np.concatenate( (train_data['x'][:,:,i], test_data['x'][:,:,i]),axis=0).astype('float32')\n mu_0=np.concatenate((train_data['mu0'][:,i], test_data['mu0'][:,i])).astype('float32')\n mu_1=np.concatenate((train_data['mu1'][:,i], test_data['mu1'][:,i])).astype('float32')\n \n data={'x':x,'t':t,'y':y,'t':t,'mu_0':mu_0,'mu_1':mu_1}\n data['t']=data['t'].reshape(-1,1) #we're just padding one dimensional vectors with an additional dimension \n data['y']=data['y'].reshape(-1,1)\n \n #rescaling y between 0 and 1 often makes training of DL regressors easier\n data['y_scaler'] = StandardScaler().fit(data['y'])\n data['ys'] = data['y_scaler'].transform(data['y'])\n \n return data\n \ndata=load_IHDP_data(training_data='./ihdp_npci_1-100.train.npz',testing_data='./ihdp_npci_1-100.test.npz')\n```\n\n# Training and Fitting the Model\n\n\n
A brief spoiler about training neural networks if you've never done so before.\n\nWhen you use other types of machine learning models, optimization of the model parameters is typically done for you under the hood and you simply wait for training to finish. In contrast, neural networks have so many parameters that optimization becomes an art.\n\nRather than training on the whole training dataset at once, neural networks are trained on mini-batches of dozens to a few hundred examples. This is a compromise between applying error gradients from a single example (computationally expensive) and using the whole training dataset (expensive in terms of memory; may not work as well for losses that are not perfectly convex). The error gradient is applied to the network parameters after each mini-batch. A complete iteration through all mini-batches in the training set is called an **epoch.** \n\nAfter each epoch we run prediction on the entire validation set. While there are a number of regularization techniques used in DL to prevent overfitting (norms, dropout, batch normalization), the most important is **early stopping.** To prevent overfitting, we wish to stop training after several consecutive epochs where the validation loss has failed to improve. The number of epochs to wait after early stopping is often called a *patience* hyperparameter. \n\nThe proportion of the gradient the optimizer backpropagates to the parameters is called the **learning rate.** A learning rate that is too small takes a long time to train. A learning rate that is too large will overshoot optima. Learning rate schedulers are used to adaptively slow the learning rate as you get closer to an optimum.\n\n---\n\n
\n\n\nShi uses the builtin Keras `.fit` infrastructure for training the model which makes things super easy. There are a lot of hyperparameter choices here, but I won't dwell on them because hyperparameter selection will be covered in the next tutorial.\n\n In this example we use stochastic gradient descent to optimize the model with an initial learning rate of 1E-4 and momentum of .9. You can also try other optimizers (e.g., ADAM). **While you should experiment with different learning rates, I recommend having a conservative (smaller) learning rate because we really want our estimator to be unbiased.**\n \n To avoid overfitting, we stop training deep learning models when the validation loss stops improving. In Tensorflow the `EarlyStopping` callback automatically stops training after a number of epochs with no improvement on the validation loss (`patience` parameter). The `ReduceLROnPlateau` adaptively lowers the learning rate of the optimizer as we approach validation loss plateaus so that the optimizer does not overshoot the current optimum.\n\nWe use a mini-batch size of 64. Other papers have recommmended batch sizes up to 200 with this dataset. **The batch size is an important consideration for these causal inference architectures because you really want to make sure each mini-batch has both treatment and control examples for the representation layers.** This is obviously less of a problem for datasets with high proportions of treated units.\n\n\n```\nfrom tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TerminateOnNaN\nfrom tensorflow.keras.optimizers import SGD\n\n\nval_split=0.2\nbatch_size=64\nverbose=1\ni = 0\ntf.random.set_seed(i)\nnp.random.seed(i)\nyt = np.concatenate([data['ys'], data['t']], 1) #we'll use both y and t to compute the loss\n\n\nsgd_callbacks = [\n TerminateOnNaN(),\n EarlyStopping(monitor='val_loss', patience=40, min_delta=0.), \n #40 is Shi's recommendation for this dataset, but you should tune for your data \n ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, verbose=verbose, mode='auto',\n min_delta=0., cooldown=0, min_lr=0),\n ]\n#optimzier hyperparameters\nsgd_lr = 1e-5\nmomentum = 0.9\ntarnet_model.compile(optimizer=SGD(lr=sgd_lr, momentum=momentum, nesterov=True),\n loss=regression_loss,\n metrics=regression_loss)\n\ntarnet_model.fit(x=data['x'],y=yt,\n callbacks=sgd_callbacks,\n validation_split=val_split,\n epochs=300,\n batch_size=batch_size,\n verbose=verbose)\nprint(\"DONE!\")\n```\n\n# Estimating the ATE/CATE\n\nNow we can do inference on either the whole dataset or a heldout testing sample. For simplicity, we just use the whole dataset here. \n\nWe'll also plot our CATE estimates against the the true cates from the simulation. 
\n\n\n```\nimport pandas as pd\n\nconcat_pred=tarnet_model.predict(data['x'])\n#dont forget to rescale the outcome before estimation!\ny0_pred = data['y_scaler'].inverse_transform(concat_pred[:, 0])\ny1_pred = data['y_scaler'].inverse_transform(concat_pred[:, 1])\ncate_pred=y1_pred-y0_pred\ncate_true=data['mu_1']-data['mu_0'] #Hill's noiseless true values\nate_pred=tf.reduce_mean(cate_pred)\nprint(\"Estimated ATE (True is 4):\", ate_pred.numpy(),'\\n\\n')\n\nprint(\"Individualized CATE Estimates: BLUE\")\nprint(pd.Series(cate_pred).plot.kde(color='blue'))\nprint(\"Individualized CATE True: Green\")\nprint(pd.Series(cate_true).plot.kde(color='green'))\n\nprint(\"\\nError CATE Estimates: RED\")\nprint(pd.Series(cate_pred-cate_true).plot.kde(color='red'))\n\n```\n\nWhile our estimates have a broader spread than the real values, we haven't done any hyperparameter optimization yet so we can definitely do better. That's the focus of the next tutorial.\n\nOf course we can also break down these heterogeneous treatment effects to see if we can find any interesting patterns using, for example, Google's [Facet Dive](https://pair-code.github.io/facets/). This is just demonstrative since our covariates are meaningless in the simulation, but it's still cool. The Facet Dive is now built into TensorBoard. \n\n\n```\n#@title Explore Heterogeneity Using the Facet Dive\n\ndata['cate_pred']=cate_pred\nfacet_df=pd.DataFrame(data['x'])\nfacet_df['t']=data['t']\nfacet_df['y']=data['y']\nfacet_df['cate_pred']=data['cate_pred']\n\n\n# Display the Dive visualization for the training data.\nfrom IPython.core.display import display, HTML\n\njsonstr = facet_df.to_json(orient='records')\nHTML_TEMPLATE = \"\"\"\n \n \n \n \"\"\"\nhtml = HTML_TEMPLATE.format(jsonstr=jsonstr)\ndisplay(HTML(html))\n```\n\n# Thats it!\n\n- In this tutorial we introduce representation learning for causal inference.\n\n- We learned how to write custom models using the functional API and custom losses in TF2.\n\n- We built a TARNet model and tested it on the IHDP data.\n\n# Up next...\n- In the [second tutorial](https://colab.research.google.com/drive/1y9i8koqPqs8JSyVHkdZmjGEW6ntqPV73?usp=sharing) we will dig a bit deeper into model design. We'll learn how to assess convergence using Tensorboard, and tailor TarNet models to your dataset through hyperparameter optimization using [kerastuner](https://keras-team.github.io/keras-tuner/).\n\n- In the [third tutorial](https://colab.research.google.com/drive/19JJNyGAvSJCY8xP8vkVUXFf3-uEdDuss?usp=sharing), we'll introduce some more sophisticated elaborations of TARNet built on semiparametric theory shown in [Shi et al., 2019](https://arxiv.org/pdf/1906.02120.pdf).\n\n- In the [fourth tutorial](https://colab.research.google.com/drive/1d8kvEXk_j268rrYq8QC_hbkfhLmp742Y?usp=sharing), we demonstrate TARNet extensions using integral probability metrics from [Shalit et al., 2017](https://arxiv.org/abs/1606.03976), [Johansson et al. 
2018](https://arxiv.org/abs/1903.03448), and [Johansson et al., 2020](https://arxiv.org/abs/2001.07426).\n", "meta": {"hexsha": "3cf5f179cfdee48bcc9f94265b42e713201e64a0", "size": 36537, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2021-01-01-Tutorial_1_Introduction_to_Deep_Learning_for_Causal_Inference_on_Observables.ipynb", "max_stars_repo_name": "kochbj/notebooks", "max_stars_repo_head_hexsha": "d2a75ec86cc0424d5091d44ac83b0ca3346bf5d8", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2021-01-01-Tutorial_1_Introduction_to_Deep_Learning_for_Causal_Inference_on_Observables.ipynb", "max_issues_repo_name": "kochbj/notebooks", "max_issues_repo_head_hexsha": "d2a75ec86cc0424d5091d44ac83b0ca3346bf5d8", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_notebooks/2021-01-01-Tutorial_1_Introduction_to_Deep_Learning_for_Causal_Inference_on_Observables.ipynb", "max_forks_repo_name": "kochbj/notebooks", "max_forks_repo_head_hexsha": "d2a75ec86cc0424d5091d44ac83b0ca3346bf5d8", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.7411575563, "max_line_length": 949, "alphanum_fraction": 0.6177026028, "converted": true, "num_tokens": 6781, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5583269943353745, "lm_q2_score": 0.4843800842769844, "lm_q1q2_score": 0.2704424765702841}} {"text": "# reaction mass delta signatures\n\n- Examples using ModelSeed universal model\n- Minghao Gong, 04-14-2022\n\n\n\n```python\n!pip install --upgrade metDataModel\n!pip install --upgrade numpy\n!pip install --upgrade mass2chem\n\n!pip install cobra\n```\n\n Requirement already up-to-date: metDataModel in /opt/conda/lib/python3.7/site-packages (0.4.14)\n Collecting numpy\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/6d/ad/ff3b21ebfe79a4d25b4a4f8e5cf9fd44a204adb6b33c09010f566f51027a/numpy-1.21.6-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 15.7MB 9.0MB/s eta 0:00:01\n \u001b[?25hInstalling collected packages: numpy\n Found existing installation: numpy 1.17.2\n Uninstalling numpy-1.17.2:\n Successfully uninstalled numpy-1.17.2\n Successfully installed numpy-1.21.6\n Requirement already up-to-date: mass2chem in /opt/conda/lib/python3.7/site-packages (0.3.2)\n Requirement already satisfied, skipping upgrade: scipy in /opt/conda/lib/python3.7/site-packages (from mass2chem) (1.3.1)\n Requirement already satisfied, skipping upgrade: numpy in /opt/conda/lib/python3.7/site-packages (from mass2chem) (1.21.6)\n Collecting cobra\n Using cached https://files.pythonhosted.org/packages/29/1c/63549e9e73ad4faa2d80700da1c8e4ca13836adc95201efa476a7387560a/cobra-0.24.0-py2.py3-none-any.whl\n Collecting httpx~=0.14 (from cobra)\n Using cached https://files.pythonhosted.org/packages/2f/d3/6a990516a43a522a72da356c4a91c03e09c0cddce8106e7e1215c120011f/httpx-0.22.0-py3-none-any.whl\n Collecting depinfo~=1.7 (from 
cobra)\n Using cached https://files.pythonhosted.org/packages/af/8b/cee6dca4c4708705444c9cad9e783b9212cc51cab8a5e05ccfe930f53058/depinfo-1.7.0-py2.py3-none-any.whl\n Requirement already satisfied: numpy~=1.13 in /opt/conda/lib/python3.7/site-packages (from cobra) (1.21.6)\n Collecting appdirs~=1.4 (from cobra)\n Using cached https://files.pythonhosted.org/packages/3b/00/2344469e2084fb287c2e0b57b72910309874c3245463acd6cf5e3db69324/appdirs-1.4.4-py2.py3-none-any.whl\n Collecting optlang~=1.5 (from cobra)\n Using cached https://files.pythonhosted.org/packages/12/3e/9d0b72cf5a8ff660e5787a0797906e04942081f3ad4a95f860488affff2b/optlang-1.5.2-py2.py3-none-any.whl\n Collecting rich>=8.0 (from cobra)\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/a6/7c/5ccbe95d3fc9d737249eb05d90edd70a1eb07671024558bddf3076df4a0e/rich-12.2.0-py3-none-any.whl (229kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 235kB 8.3MB/s eta 0:00:01\n \u001b[?25hCollecting pydantic~=1.6 (from cobra)\n Using cached https://files.pythonhosted.org/packages/d4/4e/00724eebf52854e65dabe2c190b4842afbda0e09817f415683a3130a123c/pydantic-1.9.0-py3-none-any.whl\n Collecting diskcache~=5.0 (from cobra)\n Using cached https://files.pythonhosted.org/packages/a1/c4/80d38cf6852ba87f8b506a91f18b3a485c668f452700689add960d7e2ecc/diskcache-5.4.0-py3-none-any.whl\n Collecting pandas~=1.0 (from cobra)\n Collecting ruamel.yaml~=0.16 (from cobra)\n Using cached https://files.pythonhosted.org/packages/9e/cb/938214ac358fbef7058343b3765c79a1b7ed0c366f7f992ce7ff38335652/ruamel.yaml-0.17.21-py3-none-any.whl\n Collecting swiglpk (from cobra)\n Using cached https://files.pythonhosted.org/packages/80/48/ff3ce61567f667629268b272d4c57f980ff0a8d4bdf991a593be12384186/swiglpk-5.0.3-cp37-cp37m-manylinux2010_x86_64.whl\n Collecting python-libsbml==5.19.2 (from cobra)\n Using cached https://files.pythonhosted.org/packages/ab/a5/c3ffea492fb3b36e527cfaf6aa8fbb843907cd84378f2e2b575d44b250a9/python_libsbml-5.19.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl\n Collecting future (from cobra)\n Collecting importlib-resources (from cobra)\n Using cached https://files.pythonhosted.org/packages/28/6e/bacd1e6760f49748c7483929545aa65a3a8c62f779b18c51743a07c3e6ab/importlib_resources-5.6.0-py3-none-any.whl\n Collecting httpcore<0.15.0,>=0.14.5 (from httpx~=0.14->cobra)\n Using cached https://files.pythonhosted.org/packages/e7/38/7b76d3d71c462dc936e333b358a3106e7af913e6c8c9dd5a45684fec08cc/httpcore-0.14.7-py3-none-any.whl\n Collecting rfc3986[idna2008]<2,>=1.3 (from httpx~=0.14->cobra)\n Using cached https://files.pythonhosted.org/packages/c4/e5/63ca2c4edf4e00657584608bee1001302bbf8c5f569340b78304f2f446cb/rfc3986-1.5.0-py2.py3-none-any.whl\n Requirement already satisfied: certifi in /opt/conda/lib/python3.7/site-packages (from httpx~=0.14->cobra) (2019.6.16)\n Collecting charset-normalizer (from httpx~=0.14->cobra)\n Using cached https://files.pythonhosted.org/packages/06/b3/24afc8868eba069a7f03650ac750a778862dc34941a4bebeb58706715726/charset_normalizer-2.0.12-py3-none-any.whl\n Collecting sniffio (from httpx~=0.14->cobra)\n Using cached https://files.pythonhosted.org/packages/52/b0/7b2e028b63d092804b6794595871f936aafa5e9322dcaaad50ebf67445b3/sniffio-1.2.0-py3-none-any.whl\n Collecting importlib-metadata; python_version < \"3.8\" (from depinfo~=1.7->cobra)\n Using cached 
https://files.pythonhosted.org/packages/92/f2/c48787ca7d1e20daa185e1b6b2d4e16acd2fb5e0320bc50ffc89b91fa4d7/importlib_metadata-4.11.3-py3-none-any.whl\n Requirement already satisfied: sympy>=1.0 in /opt/conda/lib/python3.7/site-packages (from optlang~=1.5->cobra) (1.4)\n Requirement already satisfied: six>=1.9 in /opt/conda/lib/python3.7/site-packages (from optlang~=1.5->cobra) (1.12.0)\n Collecting typing-extensions<5.0,>=4.0.0; python_version < \"3.9\" (from rich>=8.0->cobra)\n Using cached https://files.pythonhosted.org/packages/45/6b/44f7f8f1e110027cf88956b59f2fad776cca7e1704396d043f89effd3a0e/typing_extensions-4.1.1-py3-none-any.whl\n Collecting commonmark<0.10.0,>=0.9.0 (from rich>=8.0->cobra)\n Using cached https://files.pythonhosted.org/packages/b1/92/dfd892312d822f36c55366118b95d914e5f16de11044a27cf10a7d71bbbf/commonmark-0.9.1-py2.py3-none-any.whl\n Collecting pygments<3.0.0,>=2.6.0 (from rich>=8.0->cobra)\n Using cached https://files.pythonhosted.org/packages/1d/17/ed4d2df187995561b28f1073df24137cb750e12f9879d291cc8ab67c65d2/Pygments-2.11.2-py3-none-any.whl\n Requirement already satisfied: python-dateutil>=2.7.3 in /opt/conda/lib/python3.7/site-packages (from pandas~=1.0->cobra) (2.8.0)\n Requirement already satisfied: pytz>=2017.3 in /opt/conda/lib/python3.7/site-packages (from pandas~=1.0->cobra) (2019.2)\n Collecting ruamel.yaml.clib>=0.2.6; platform_python_implementation == \"CPython\" and python_version < \"3.11\" (from ruamel.yaml~=0.16->cobra)\n Using cached https://files.pythonhosted.org/packages/98/8a/ba37489b423916162b086b01c7c18001cf297350694180468e1698085c58/ruamel.yaml.clib-0.2.6-cp37-cp37m-manylinux1_x86_64.whl\n Collecting zipp>=3.1.0; python_version < \"3.10\" (from importlib-resources->cobra)\n Downloading https://files.pythonhosted.org/packages/80/0e/16a7ee38617aab6a624e95948d314097cc2669edae9b02ded53309941cfc/zipp-3.8.0-py3-none-any.whl\n Collecting anyio==3.* (from httpcore<0.15.0,>=0.14.5->httpx~=0.14->cobra)\n Using cached https://files.pythonhosted.org/packages/b1/ae/9a8af72d6f0c551943903eefcf93c3a29898fb7b594603c0d70679c199b1/anyio-3.5.0-py3-none-any.whl\n Collecting h11<0.13,>=0.11 (from httpcore<0.15.0,>=0.14.5->httpx~=0.14->cobra)\n Using cached https://files.pythonhosted.org/packages/60/0f/7a0eeea938eaf61074f29fed9717f2010e8d0e0905d36b38d3275a1e4622/h11-0.12.0-py3-none-any.whl\n Requirement already satisfied: idna; extra == \"idna2008\" in /opt/conda/lib/python3.7/site-packages (from rfc3986[idna2008]<2,>=1.3->httpx~=0.14->cobra) (2.8)\n Requirement already satisfied: mpmath>=0.19 in /opt/conda/lib/python3.7/site-packages (from sympy>=1.0->optlang~=1.5->cobra) (1.1.0)\n Installing collected packages: sniffio, typing-extensions, anyio, h11, httpcore, rfc3986, charset-normalizer, httpx, zipp, importlib-metadata, depinfo, appdirs, swiglpk, optlang, commonmark, pygments, rich, pydantic, diskcache, pandas, ruamel.yaml.clib, ruamel.yaml, python-libsbml, future, importlib-resources, cobra\n Found existing installation: Pygments 2.4.2\n Uninstalling Pygments-2.4.2:\n Successfully uninstalled Pygments-2.4.2\n Found existing installation: pandas 0.25.1\n Uninstalling pandas-0.25.1:\n Successfully uninstalled pandas-0.25.1\n Successfully installed anyio-3.5.0 appdirs-1.4.4 charset-normalizer-2.0.12 cobra-0.24.0 commonmark-0.9.1 depinfo-1.7.0 diskcache-5.4.0 future-0.18.2 h11-0.12.0 httpcore-0.14.7 httpx-0.22.0 importlib-metadata-4.11.3 importlib-resources-5.6.0 optlang-1.5.2 pandas-1.3.5 pydantic-1.9.0 pygments-2.11.2 python-libsbml-5.19.2 rfc3986-1.5.0 

```python
import sys
sys.path.append("/Users/gongm/Documents/projects/mass2chem/")
```


```python
import cobra
from metDataModel.core import Compound, Reaction, Pathway
from mass2chem.formula import *
```

## Switching to MG's JSON model

The JSON model was produced after parsing and formula-mass calculation. An example of the model conversion is sketched below, after the compound examples.


```python
import json
from operator import itemgetter
```


```python
# JSON models are in JMS repo
M = json.load(open('../output/ModelSeed/Universal_ModelSeed.json'))
```


```python
M.keys()
```




    dict_keys(['id', 'list_of_reactions', 'list_of_compounds', 'list_of_pathways', 'meta_data'])




```python
[print(x) for x in [
    M['meta_data'],
    len(M['list_of_reactions']),
    M['list_of_reactions'][222],
    len(M['list_of_compounds']),
    M['list_of_compounds'][1000], ]
]
```

    {'species': 'universal', 'version': '', 'sources': ['https://github.com/ModelSEED/ModelSEEDDatabase/blob/master/Biochemistry/, retrieved 2022-04-13'], 'status': '', 'last_update': '20220413', 'note': 'ModelSeed Universal Model. Add all reactions and compounds, '}
    38269
    {'id': 'rxn00228', 'reactants': ['cpd00001', 'cpd03385'], 'products': ['cpd00009', 'cpd00029'], 'genes': [], 'enzymes': ['3.11.1.2']}
    33992
    {'id': 'cpd01021', 'name': 'Glu-Glu; Glutamyl-glutamic acid; L-alpha-Glutamyl-L-glutamic acid; L-glutamyl-L-glutamate; glu-glu', 'identifiers': [['KEGG', 'C01425'], ['MetaCyc', 'CPD-13219'], ['inchikey', 'KOSRFJWDECSPRO-WDSKDSINSA-L']], 'neutral_formula': 'C10H16N2O7', 'charge': -2, 'charged_formula': 'C10H14N2O7', 'neutral_mono_mass': 276.095751, 'SMILES': '[NH3+][C@@H](CCC(=O)[O-])C(=O)N[C@@H](CCC(=O)[O-])C(=O)[O-]', 'inchi': ''}





    [None, None, None, None, None]




```python
M['list_of_compounds'][300: 302]
```




    [{'id': 'cpd00304',
      'name': 'Retinal; Retinene; Vitamin A aldehyde; all-trans-Retinal; all-trans-Retinene; all-trans-Vitamin A aldehyde; all-trans-retinal; all-trans-retinene; axerophthal; retinal; retinaldehyde; retinene; vitamin A aldehyde',
      'identifiers': [['AraCyc', 'RETINAL'],
       ['BiGG', 'retinal'],
       ['BrachyCyc', 'RETINAL'],
       ['KEGG', 'C00376'],
       ['MetaCyc', 'RETINAL'],
       ['inchikey', 'NCYCYZXNIZJOKI-OVSJKPMPSA-N']],
      'neutral_formula': 'C20H28O',
      'charge': 0,
      'charged_formula': 'C20H28O',
      'neutral_mono_mass': 284.214016,
      'SMILES': 'CC1=C(/C=C/C(C)=C/C=C/C(C)=C/C=O)C(C)(C)CCC1',
      'inchi': ''},
     {'id': 'cpd00305',
      'name': 'Aneurin; Antiberiberi factor; THI; Thiamin; Thiamine; Vitamin B1; thiamin; thiamine; vitamin B1',
      'identifiers': [['AraCyc', 'THIAMINE'],
       ['BiGG', 'thm'],
       ['BrachyCyc', 'THIAMINE'],
       ['KEGG', 'C00378'],
       ['MetaCyc', 'THIAMINE'],
       ['inchikey', 'JZRWCGZRTZMZEH-UHFFFAOYSA-N']],
      'neutral_formula': 'C12H16N4OS',
      'charge': 1,
      'charged_formula': 'C12H17N4OS',
      'neutral_mono_mass': 264.104482,
      'SMILES': 'Cc1ncc(C[n+]2csc(CCO)c2C)c(N)n1',
      'inchi': ''}]
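As a minimal sketch of the model conversion mentioned above, one entry of `M['list_of_compounds']` can be wrapped as a `metDataModel` `Compound`. This assumes `Compound` can be instantiated without arguments and that its attributes simply mirror the JSON keys; the actual field names in `metDataModel.core` may differ, so treat this as an illustration rather than the documented API.


```python
# Sketch only: map one JSON compound entry onto a metDataModel Compound object.
# The attribute names below mirror the JSON keys and are assumptions, not a
# documented metDataModel API; adjust to the real Compound fields as needed.
from metDataModel.core import Compound   # already imported above

def compound_from_json(entry):
    cpd = Compound()                      # assumes a no-argument constructor
    cpd.id = entry['id']
    cpd.name = entry['name']
    cpd.db_ids = entry['identifiers']     # [[database, identifier], ...]
    cpd.neutral_formula = entry['neutral_formula']
    cpd.neutral_mono_mass = entry['neutral_mono_mass']
    cpd.charge = entry['charge']
    cpd.charged_formula = entry['charged_formula']
    cpd.SMILES = entry['SMILES']
    cpd.inchi = entry['inchi']
    return cpd

retinal = compound_from_json(M['list_of_compounds'][300])
print(retinal.id, retinal.neutral_formula, retinal.neutral_mono_mass)
```

Reactions and pathways from `M['list_of_reactions']` and `M['list_of_pathways']` could be wrapped the same way with `Reaction` and `Pathway`.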

```python
# Index metabolites that already have a neutral_mono_mass, keyed by compound id.
# Compounds without one are printed below (charged_formula contains R groups or is None);
# their masses would need to be calculated, e.g. via mass2chem.formula.calculate_formula_mass.
cpdDict = {}
for C in M['list_of_compounds']:
    if C['neutral_mono_mass']:
        cpdDict[C['id']] = C
    else:
        print(C['id'], C['charged_formula'])
```

    cpd00049 CO2R
    cpd00057 HOR
    cpd00109 C42H42FeN8O8R4S2
    cpd00110 C42H42FeN8O8R4S2
    cpd00124 C5H9O7PR
    cpd00148 C5H8O7PR
    cpd00167 C19H36NO3R
    cpd00173 C5H9O13P3R
    cpd00181 C53H82CoN11O14PR2
    cpd00195 CH3OR
    cpd00354 C5H9O10P2R
    cpd00375 C7H6NOR
    cpd00410 C42H42FeN8O8R4S2
    cpd00431 C24H48N2O6PR
    cpd00459 C10H15NO9R
    cpd00468 C5H8O6PR
    cpd00487 C3H4O2R
    cpd00512 C5H8O6PR
    cpd00513 C5H8O6PR
    cpd00514 C5H9O12P3R
    cpd00542 CNR
    cpd00560 W
    ...
cpd22249 C28H34N11O21P3R\n cpd22250 None\n cpd22251 C30H35O31R2\n cpd22256 C20H34NO16R\n cpd22257 C22H36O19R2\n cpd22263 C25H40N2O19R\n cpd22264 C36H56N3O27R\n cpd22266 None\n cpd22268 C8H14NO6R\n cpd22271 C15H31N3O2R2S2\n cpd22275 None\n cpd22276 C7H6NOR\n cpd22277 C21H18O10R2\n cpd22278 C5H8O11P2R2\n cpd22279 C3H6NO2R2\n cpd22280 None\n cpd22281 C28H34N11O21P3R\n cpd22284 C28H34N11O21P3R\n cpd22285 None\n cpd22286 C28H34N11O21P3R\n cpd22287 C32H39N12O24P3R\n cpd22288 C28H34N11O21P3R\n cpd22289 None\n cpd22290 None\n cpd22291 None\n cpd22292 C138H213O112R2\n cpd22293 C2H3O2R\n cpd22294 None\n cpd22295 C27H39O21R3\n cpd22296 CO2R\n cpd22297 C30H42N3O16PR2S\n cpd22298 C30H44N3O16PR2S\n cpd22299 C30H40N3O14PR2S\n cpd22300 None\n cpd22301 None\n cpd22302 None\n cpd22303 None\n cpd22304 None\n cpd22305 C7H13N2O3R\n cpd22306 CO5PR\n cpd22307 C5H6NO3R\n cpd22308 C5H7NO4R\n cpd22309 C15H25N3O9PR3S\n cpd22310 C6H13NO2R\n cpd22311 C20H22N10O22P5R2\n cpd22312 C14H19N7O9PR\n cpd22313 None\n cpd22314 C26H41O21R2\n cpd22315 C24H36O19R2\n cpd22316 None\n cpd22317 None\n cpd22318 HOR\n cpd22319 CHOR\n cpd22320 None\n cpd22321 C2H5O2R\n cpd22322 C2H2O3R\n cpd22323 C2H3O2R\n cpd22324 CH2NOR\n cpd22325 None\n cpd22326 CHNOR2\n cpd22327 CH5NR\n cpd22328 None\n cpd22329 CNR\n cpd22330 None\n cpd22331 C2H2NOR\n cpd22332 None\n cpd22333 C3H3OR\n cpd22334 CH3R\n cpd22335 CH2O3RS\n cpd22336 C3H3NO4RS2\n cpd22337 HO2R\n cpd22338 CH3ORS\n cpd22339 HRS\n cpd22340 C10H21NO7PR\n cpd22341 C6H12NO7PR2\n cpd22342 C8H14NO7PR2\n cpd22343 CH5NR\n cpd22344 None\n cpd22345 C10H14O14P3R4\n cpd22346 NR4\n cpd22347 None\n cpd22348 None\n cpd22349 C20H28O2R\n cpd22350 C20H30O2R\n cpd22351 C21H29O2R\n cpd22352 C20H28OR\n cpd22353 C2H4NO2R\n cpd22354 C6H11O6R\n cpd22355 C24H40O21R2\n cpd22356 C6H8O7R\n cpd22357 C6H11O6R\n cpd22359 C16H23N3O16P2R2\n cpd22361 C6H11O5R\n cpd22362 C20H40NO5R\n cpd22363 C18H29OR\n cpd22364 C2O4R\n cpd22366 None\n cpd22367 None\n cpd22368 H4NR\n cpd22369 C2H4NO2R\n cpd22370 C21H31N4O10PR2S\n cpd22371 None\n cpd22372 None\n cpd22373 C21H12O7R9\n cpd22374 C24H13O10R9\n cpd22375 C15H2OR9\n cpd22378 C3H6NO2R2\n cpd22379 C3H6NO2R2\n cpd22380 None\n cpd22381 C7H14N3O2R2\n cpd22382 C6H13N2O2R2\n cpd22383 None\n cpd22384 C6H13N2O2R2\n cpd22385 C15H19N5O16P3R2\n cpd22386 None\n cpd22387 C35H56O29R2\n cpd22388 C34H64N3O9PR2S\n cpd22389 CH5NR\n cpd22390 None\n cpd22391 None\n cpd22392 CO2R\n cpd22393 C2H4NO2R\n cpd22394 C2O3R\n cpd22395 CH3OR\n cpd22396 C2H2NOR\n cpd22397 None\n cpd22398 HOR\n cpd22399 CHOR\n cpd22400 C6H2NR5\n cpd22401 C6O4PR7\n cpd22402 O4RS\n cpd22403 C12H11O6R5\n cpd22404 C6O4R5S\n cpd22405 H3NR\n cpd22406 C28H47N2O21R\n cpd22407 None\n cpd22408 None\n cpd22409 None\n cpd22410 None\n cpd22411 None\n cpd22414 None\n cpd22415 C20H37N2O3R2S2\n cpd22416 C19H35N2O3R2S2\n cpd22417 C20H37N2O3R2S2\n cpd22418 C15H29N2O2R2S2\n cpd22419 C15H27N2O2R2S2\n cpd22420 None\n cpd22421 None\n cpd22422 C6H14N2OR2\n cpd22425 C14H24NO11R\n cpd22435 None\n cpd22439 None\n cpd22440 C2H4NO2R\n cpd22441 C36H68N3O9PR2S\n cpd22442 None\n cpd22444 None\n cpd22445 C15H28N2O9PRS\n cpd22446 C6H11O6R\n cpd22447 C6H11O6R\n cpd22448 None\n cpd22449 C6H8O7R\n cpd22450 C6H11O6R\n cpd22451 C6H11O5R\n cpd22452 C12H19O11R3\n cpd22453 C6H11O6R\n cpd22454 C6H11O6R\n cpd22455 C16H23N3O16P2R2\n cpd22456 C6H11O5R\n cpd22457 C5H9O5R\n cpd22458 C3H2NOR3\n cpd22459 None\n cpd22460 None\n cpd22461 None\n cpd22462 None\n cpd22463 None\n cpd22465 C16H27N4O3R2S\n cpd22466 C28H47N2O21R2\n cpd22467 None\n cpd22468 None\n cpd22469 None\n cpd22470 
None\n cpd22471 None\n cpd22472 None\n cpd22473 C28H44O6R4\n cpd22474 BrR\n cpd22475 HRS\n cpd22477 None\n cpd22478 None\n cpd22481 None\n cpd22483 None\n cpd22484 None\n cpd22486 None\n cpd22487 None\n cpd22488 None\n cpd22490 None\n cpd22491 None\n cpd22492 None\n cpd22493 None\n cpd22494 None\n cpd22497 C78H151O2R\n cpd22499 C85H167O3R\n cpd22500 C86H167O3R\n cpd22501 C86H169O3R\n cpd22502 C86H167O3R\n cpd22506 None\n cpd22508 None\n cpd22509 None\n cpd22513 C13H16O17P2R4\n cpd22514 None\n cpd22515 None\n cpd22516 C20H29N3O15P2R2\n cpd22517 C14H17N3O15P2R2\n cpd22518 C24H40O19R2\n cpd22519 C28H50N4O19R2\n cpd22521 C8H12NO9R2S\n cpd22522 C8H11NO12R2S2\n cpd22523 C25H35N7O17P3RS\n cpd22524 C25H35N7O17P3RS\n cpd22526 C31H47N8O22P3R2S\n cpd22650 C15H5O2R9\n cpd22651 C17H26O15R2\n cpd22652 C23H36O20R2\n cpd22673 C15H5O2R9\n cpd22691 C44H64N4O33R2\n cpd22692 C46H66N4O34R2\n cpd22739 C7H11NO9PR\n cpd22775 C17H29N2O12R2\n cpd22779 C16H28ORS\n cpd22875 C50H85N3O26R\n cpd22880 C11H15O16P2R2\n cpd22881 C7H12NO6R\n cpd22882 C10H17O12PR\n cpd22885 C10H15NO9R\n cpd22911 C18H20O18R\n cpd22973 C6H6NO5R2\n cpd22974 C11H7O2R\n cpd22978 C15H5O3R9\n cpd22991 None\n cpd23000 None\n cpd23001 C2H4NO2R\n cpd23012 C16H21N5O10P2R2S\n cpd23013 C16H21N5O11P2R2S\n cpd23022 None\n cpd23036 C20H24N4O18P3R2\n cpd23037 C20H24N4O18P3R2\n cpd23039 None\n cpd23041 C20H24N4O18P3R2\n cpd23042 C18H18N4O18P3R4\n cpd23064 None\n cpd23065 None\n cpd23067 None\n cpd23068 None\n cpd23152 C25H35N4O12PR2S\n cpd23154 C21H30N3O11PR2S\n cpd23182 Pt\n cpd23226 C24H35N4O11PR2S2\n cpd23227 C17H32N4O9PR2S2\n cpd23228 C24H33N4O10PR2S2\n cpd23229 C27H36N5O10PR2S3\n cpd23230 C27H38N5O10PR2S3\n cpd23231 C28H40N5O10PR2S3\n cpd23232 C17H32N4O9PR2S2\n cpd23241 None\n cpd23242 None\n cpd23243 None\n cpd23269 C15H17N3O4R4\n cpd23270 C22H18N4O4R4\n cpd23271 C12H12N2O3R4S\n cpd23362 C28H43N10O15P2R2\n cpd23379 None\n cpd23380 None\n cpd23381 None\n cpd23418 None\n cpd23442 C14H13N3O4R4S\n cpd23507 C4H4O3R\n cpd23590 C38H35FeN6O6R2S\n cpd23642 None\n cpd23655 C19H17O8R\n cpd23672 C7H9O6R\n cpd23673 C6H6O6R\n cpd23674 C6H6O6R\n cpd23676 C12H14O13R2\n cpd23677 C6H6O6R\n cpd23681 C6H8N2O5R\n cpd23691 C24H36O31R2S4\n cpd23722 C39H69N2O18R\n cpd23723 C45H79N2O23R\n cpd23750 None\n cpd23753 C30H34O4R2\n cpd23771 None\n cpd23773 None\n cpd23774 None\n cpd23775 None\n cpd23835 None\n cpd23912 None\n cpd23913 None\n cpd24008 None\n cpd24058 C59H85O49R2\n cpd24059 C62H86O51R2\n cpd24093 None\n cpd24094 None\n cpd24120 C17H17N4O9PRS\n cpd24185 None\n cpd24290 None\n cpd24313 C34H40N3O15PR2S\n cpd24314 C34H38N3O16PR2S\n cpd24315 C34H40N3O16PR2S\n cpd24328 None\n cpd24348 C7H10NO6R\n cpd24349 C9H16NO8PR2\n cpd24352 C11H16O10R2\n cpd24354 C11H15O16P2R2\n cpd24360 C19H26N4O16PR\n cpd24362 C18H30NO5R\n cpd24363 Cs\n cpd24365 Tl\n cpd24384 C7H12NO6R\n cpd24458 C8H9N3O6R3\n cpd24459 C24H43N3O5R\n cpd24463 C8H14NO8PR2\n cpd24466 C4H7O4R\n cpd24471 C16H28O7PR\n cpd24479 C3H3NO3R\n cpd24480 C6H8N2O4R\n cpd24481 C26H39N7O17P3RS\n cpd24489 C19H29NO18PR2\n cpd24552 C4H7O4R\n cpd24580 C21H33N9O14P2R2\n cpd24588 C15H30NO2R\n cpd24592 C25H26N8O8R\n cpd24595 C5H4NO5R\n cpd24604 Fe2R4S6\n cpd24609 C11H12N5O8PR\n cpd24626 C10H14NO12PR\n cpd24627 C13H20N2O7R2\n cpd24628 C5H9N2O3RS\n cpd24630 C5H7NO2R2\n cpd24666 C8H9N3O2R2\n cpd24687 C21H30NO12R2\n cpd24743 C56H95N3O31R\n cpd24760 C3H5NO3R2S\n cpd24761 C3H6NO2R2S\n cpd24762 C28H47N2O21R\n cpd24763 C16H27N2O11R\n cpd24764 C15H25O13R\n cpd24765 C10H17O9R\n cpd25081 None\n cpd25082 None\n cpd25083 C21H41O29P3R\n cpd25084 
C15H23O8PR3\n cpd25086 None\n cpd25088 None\n cpd25089 C14H24NO11R2\n cpd25247 C7H11NO9PR\n cpd25333 C10H14O14P3R4\n cpd25336 None\n cpd25337 None\n cpd25338 None\n cpd25339 C23H32N5O18P2R2\n cpd25340 C13H17N3O14P2R2\n cpd25341 C6H11O6R\n cpd25342 C12H20N3O7R2\n cpd25344 C5H9O5R\n cpd25345 C5H7O5R\n cpd25346 C3H4NO2R3\n cpd25347 C9H17N2O6R3\n cpd25348 C8H14NO6R\n cpd25350 C4H6O3R\n cpd25351 C6H14N2OR2\n cpd25352 C7H16N2OR2\n cpd25353 C6H8O7R\n cpd25354 C20H34NO15R\n cpd25355 C8H14NO6R\n cpd25368 C5H8O12P3R2\n cpd25369 C25H34N5O29P5R3\n cpd25372 C6H10NO2R2\n cpd25504 None\n cpd25559 None\n cpd25560 None\n cpd25568 None\n cpd25594 None\n cpd25620 C8H11N2O5PR\n cpd25790 None\n cpd25869 None\n cpd25892 None\n cpd25910 C11H11O3R\n cpd25922 C10H24N3OR2\n cpd25976 Fe2N2R6S4\n cpd25977 None\n cpd26004 None\n cpd26005 None\n cpd26006 C39H64N8O20R\n cpd26007 C8H14NO5R\n cpd26008 C36H57N7O18R\n cpd26009 C36H56N10O17R2\n cpd26010 C18H29N5O9R\n cpd26011 C18H29N5O9R\n cpd26012 C15H24N4O8R\n cpd26013 C33H51N9O16R2\n cpd26014 C32H52N4O21R2\n cpd26015 C30H51N4O20R2\n cpd26021 None\n cpd26084 None\n cpd26096 None\n cpd26118 None\n cpd26157 O2U\n cpd26166 None\n cpd26167 None\n cpd26195 None\n cpd26208 None\n cpd26226 None\n cpd26235 BaCl2\n cpd26237 None\n cpd26238 None\n cpd26251 None\n cpd26252 None\n cpd26254 None\n cpd26255 C15H17N5O14P2R2\n cpd26291 None\n cpd26301 None\n cpd26330 None\n cpd26333 None\n cpd26338 C12H18N3O10P2R2\n cpd26339 C13H17N5O9P2R2\n cpd26340 C11H16N3O11P2R2\n cpd26341 C12H18N3O11P2R2\n cpd26342 C12H15N5O10P2R2\n cpd26343 C13H17N5O10P2R2\n cpd26360 None\n cpd26366 None\n cpd26381 None\n cpd26382 None\n cpd26386 None\n cpd26390 None\n cpd26405 None\n cpd26406 C13H14N2O17P2R2\n cpd26418 None\n cpd26419 None\n cpd26420 None\n cpd26421 None\n cpd26422 None\n cpd26466 None\n cpd26468 C10H14O16P3R4\n cpd26469 None\n cpd26470 None\n cpd26471 None\n cpd26472 None\n cpd26474 La\n cpd26477 None\n cpd26478 None\n cpd26483 None\n cpd26484 None\n cpd26485 None\n cpd26486 None\n cpd26487 None\n cpd26488 None\n cpd26493 None\n cpd26494 None\n cpd26495 None\n cpd26497 None\n cpd26499 None\n cpd26500 None\n cpd26501 None\n cpd26502 None\n cpd26503 None\n cpd26513 C10H13N2O7R\n cpd26537 None\n cpd26546 None\n cpd26547 C6H11O5R\n cpd26549 C10H13N3O11P2R2\n cpd26572 None\n cpd26573 None\n cpd26574 None\n cpd26575 None\n cpd26576 None\n cpd26597 C19H38NO3R\n cpd26607 C4H6O2R\n cpd26660 None\n cpd26662 C20H30OR\n cpd26663 C20H28OR\n cpd26664 C20H30OR\n cpd26668 C28H34N11O21P3R\n cpd26670 None\n cpd26671 None\n cpd26672 C14H19N7O9PR\n cpd26673 None\n cpd26674 C17H26N4O5R2S\n cpd26675 CO2R\n cpd26676 CO2R2\n cpd26677 None\n cpd26678 None\n cpd26679 None\n cpd26680 C4H5N2O3R4\n cpd26681 C28H44O5R4\n cpd26682 C6H2O2R4\n cpd26683 C24H40O19R2\n cpd26684 None\n cpd26685 C4H7NO3R2\n cpd26686 C40H76N3O9PR2S\n cpd26687 C15H11O4R\n cpd26688 C31H40N12O22P3R\n cpd26689 C34H48N15O22P3R\n cpd26690 C32H40N13O23P3R\n cpd26691 C32H39N12O24P3R\n cpd26692 C31H40N12O22P3RS\n cpd26693 C33H43N13O23P3R\n cpd26694 C33H41N12O24P3R\n cpd26695 C30H38N12O22P3R\n cpd26696 C34H42N14O22P3R\n cpd26697 C34H46N12O22P3R\n cpd26698 C34H46N12O22P3R\n cpd26699 C34H48N13O22P3R\n cpd26700 C33H44N12O22P3RS\n cpd26701 C37H44N12O22P3R\n cpd26702 C33H42N12O22P3R\n cpd26703 C40H54N14O23P3R\n cpd26704 C31H39N12O22P3RSe\n cpd26705 C31H40N12O23P3R\n cpd26706 C32H42N12O23P3R\n cpd26707 C39H45N13O22P3R\n cpd26708 C37H44N12O23P3R\n cpd26709 C33H44N12O22P3R\n cpd26710 C8H12N2O3R2S\n cpd26711 C32H52N4O21R2\n cpd26712 C20H37N3O14R2\n cpd26713 
C14H25N2O8R2\n cpd26714 ClR\n cpd26715 C28H45O2R\n cpd26716 C6H7O7R2\n cpd26717 C8H12NO9R2S\n cpd26718 C8H13NO6R2\n cpd26719 C14H19NO15R2S\n cpd26720 C14H19NO15R2S\n cpd26721 C6H6O10R2S\n cpd26723 C14H18NO18R2S2\n cpd26725 C25H35N7O17P3RS\n cpd26726 C23H39N2O8PRS\n cpd26727 C25H43N2O8PRS\n cpd26728 C15H19N5O11P2R2\n cpd26729 C21H38N2O8PRS\n cpd26730 C29H54N2O8PRS\n cpd26731 None\n cpd26732 None\n cpd26736 None\n cpd26737 None\n cpd26738 None\n cpd26739 None\n cpd26740 None\n cpd26741 None\n cpd26742 None\n cpd26743 C54H80CoN14O9R2\n cpd26744 C54H80CoN14O9R2\n cpd26745 C54H80CoN14O9R2\n cpd26746 None\n cpd26747 None\n cpd26748 C4H5N2O3R4\n cpd26749 None\n cpd26750 None\n cpd26751 C11H11O4R\n cpd26752 C8H14NO6R2\n cpd26753 C3H6NO2R2\n cpd26754 None\n cpd26755 C15H26N2O8PRS\n cpd26756 None\n cpd26757 None\n cpd26760 C40H68N20O16R2\n cpd26761 C44H72N21O20R\n cpd26762 C5H7O6PR\n cpd26763 C5H7O6PR\n cpd26764 None\n cpd26765 None\n cpd26766 C5H6O9P2R2\n cpd26767 None\n cpd26768 None\n cpd26769 None\n cpd26770 C9H11N3O11P2R2\n cpd26771 C9H11N3O11P2R2\n cpd26772 C9H11N3O11P2R2\n cpd26773 C9H11N3O11P2R2\n cpd26774 C6H14N2OR2\n cpd26775 C5H10NOR2S\n cpd26776 C7H16N4OR2\n cpd26777 C7H16N2OR2\n cpd26778 C6H13NOR2S\n cpd26779 C6H14N4OR2\n cpd26780 C34H30FeN4O4R\n cpd26781 C34H30FeN4O4R\n cpd26782 None\n cpd26783 None\n cpd26784 C34H32FeN4O4R2S2\n cpd26785 C34H32FeN4O4R2S2\n cpd26786 None\n cpd26787 None\n cpd26788 C34H32FeN4O4R2S2\n cpd26789 C34H32FeN4O4R2S2\n cpd26790 FeR\n cpd26791 FeR\n cpd26792 C34H32FeN4O6R\n cpd26793 C34H32FeN4O6R\n cpd26794 C34H32FeN4O4R2S2\n cpd26795 C34H32FeN4O4R2S2\n cpd26796 C34H32FeN4O4R2S2\n cpd26797 C9H11N3O11P2R2\n cpd26798 C9H11N3O11P2R2\n cpd26799 C9H11N3O11P2R2\n cpd26800 C9H11N3O11P2R2\n cpd26801 C9H11N3O11P2R2\n cpd26802 C25H37N7O18P3RS\n cpd26804 C3H3N2O2R\n cpd26809 CHOR\n cpd26810 C2H4NO2R\n cpd26812 C37H66NO18R\n cpd26816 C5H9N2O3R\n cpd26817 C11H18N2O7R3\n cpd26819 C11H16O10R2\n cpd26820 C21H31NO19R2S\n cpd26823 None\n cpd26826 C23H36N4O9PR2S\n cpd26828 C5H9O4R\n cpd26834 C24H41O21R\n cpd26835 C6H8O7R\n cpd26839 C25H35N7O17P3RS\n cpd26841 None\n cpd26848 C10H14O14P3R4\n cpd26849 C6H7O7R2\n cpd26850 None\n cpd26853 C42H70N3O31R\n cpd26855 C5H6O5R2\n cpd26856 C5H6O11P2R2\n cpd26857 None\n cpd26863 C14H27N2O2R2S2\n cpd26871 C4H6N2O3R2\n cpd26873 C13H23N5O2R2\n cpd26874 C13H21N4O3R2\n cpd26878 None\n cpd26884 C10H14N5O10P2R2\n cpd26885 C11H13N5O9P2R2\n cpd26886 C11H13N5O10P2R2\n cpd26887 C10H11N5O9P2R2\n cpd26888 C18H18N4O18P3R4\n cpd26889 None\n cpd26890 None\n cpd26891 C20H26N4O9R4\n cpd26892 C9H11N3O10P2R2\n cpd26893 C10H11N5O10P2R2\n cpd26894 None\n cpd26895 C10H13N3O11P2R2\n cpd26896 C10H14O14P3R4\n cpd26897 C10H13N3O10P2R2\n cpd26898 None\n cpd26899 None\n cpd26900 None\n cpd26901 C10H11N5O9P2R2\n cpd26902 C10H10N4O10P2R2\n cpd26903 None\n cpd26904 C20H26N4O21P4R4\n cpd26905 None\n cpd26906 C5H8O10P2R2\n cpd26907 None\n cpd26908 C10H15O15P3R3\n cpd26909 C11H15N5O11P2R2\n cpd26910 None\n cpd26911 C10H11N5O9P2R3\n cpd26912 None\n cpd26913 C9H10N2O11P2R2\n cpd26914 None\n cpd26915 C36H56N3O9PR2S\n cpd26916 C36H58N3O9PR2S\n cpd26917 None\n cpd26923 None\n cpd26924 None\n cpd26925 C92H151N8O25P2R\n cpd26926 None\n cpd26927 None\n cpd26928 OR\n cpd26929 None\n cpd26930 C33H51N7O17P3RS\n cpd26931 C6H6O6R\n cpd26933 C27H48N2O8PRS\n cpd26934 C27H48N2O8PRS\n cpd26935 C13H17O4R\n cpd26936 HR\n cpd26937 C15H15O2R\n cpd26938 C15H13O2R\n cpd26939 C5H9O9P2R\n cpd26940 C5H8O6PR\n cpd26941 C5H9O12P3R\n cpd26942 C5H9O3R\n cpd26943 C5H9O3R\n cpd26944 C6H14N2O3R2\n cpd26945 
None\n cpd26946 C5H8O10P2R2\n cpd26947 C8H12NO9R2S\n cpd26948 C8H11NO12R2S2\n cpd26949 C8H13NO6R2\n cpd26950 C28H37N2O32R2S3\n cpd26951 C6H6O10R2S\n cpd26953 C30H44OR4\n cpd26954 None\n cpd26955 None\n cpd26956 C20H31O13R2\n cpd26957 None\n cpd26958 C3H5O3R3\n cpd26959 C12H19NO7R2\n cpd26960 C8H11NO5R3S\n cpd26961 C9H13NO7R2\n cpd26962 O4PR2\n cpd26963 None\n cpd26964 C21H29O2R\n cpd26965 C14H27N2O2R2S2\n cpd26968 C15H3O3R9\n cpd26971 C10H12O8PR6\n cpd26973 C6H8N2O5R\n cpd26974 C8H11N4O3R\n cpd26975 C7H13N2O3RS\n cpd26976 C7H11N2O3R\n cpd26977 R2S2\n cpd26978 H2R\n cpd26979 None\n cpd26980 None\n cpd26981 None\n cpd26982 C30H42O32P6R8\n cpd26983 None\n cpd26984 C6H9N2O2R3S2\n cpd26985 C6H11N2O2R3S3\n cpd26986 None\n cpd26987 None\n cpd26988 None\n cpd26990 None\n cpd26991 None\n cpd26994 C34H54N3O9PR2S\n cpd26995 None\n cpd26996 C10H24N3O2R2\n cpd26997 C6H14N2OR2\n cpd26998 None\n cpd27000 C3H3NRS\n cpd27003 None\n cpd27005 C13H10N4O2R\n cpd27006 C13H13N4O2R\n cpd27011 C6H2O2R4\n cpd27012 C6O2R4\n cpd27013 None\n cpd27014 None\n cpd27015 None\n cpd27016 None\n cpd27017 C9H11N3O11P2R2\n cpd27018 C11H13N3O12P2R2\n cpd27019 C4H2OR4\n cpd27020 C20H32N3O11PR2S\n cpd27021 C22H36N3O11PR2S\n cpd27026 None\n cpd27028 None\n cpd27029 C34H30FeN4O4R\n cpd27030 None\n cpd27031 C34H30FeN4O4R\n cpd27033 C15H2O2R10\n cpd27039 C20H21N7O6R\n cpd27043 None\n cpd27044 None\n cpd27045 None\n cpd27046 None\n cpd27047 C4H5N2O3R4\n cpd27048 None\n cpd27049 C4H5N2O3R4\n cpd27050 None\n cpd27051 None\n cpd27052 None\n cpd27053 None\n cpd27054 None\n cpd27055 None\n cpd27056 CO2R\n cpd27057 C3H5O2R\n cpd27058 CO2R\n cpd27059 C22H31N7O17P3RS\n cpd27060 CHOR\n cpd27061 C3H5O2R\n cpd27062 FeR\n cpd27063 None\n cpd27064 None\n cpd27065 C40H38FeN7O5R2\n cpd27066 C40H38FeN7O5R2\n cpd27067 C10H9O3R\n cpd27068 C4H5N2O2R4\n cpd27069 None\n cpd27070 None\n cpd27071 None\n cpd27072 C15H4O2R10\n cpd27074 C15O2R10\n cpd27075 None\n cpd27076 C26H23O12R5\n cpd27077 C24H12O11R9\n cpd27078 C21H11O8R9\n cpd27079 C21H15O8R5\n cpd27080 C21H15O8R5\n cpd27081 C27H25O12R5\n cpd27082 C21H12O9R8\n cpd27083 C42H37O20R9\n cpd27084 C27H21O13R9\n cpd27085 C33H31O18R9\n cpd27086 C15HO3R9\n cpd27088 C2H3NOR2\n cpd27089 C2H4NOR2\n cpd27090 C25H30N4O9R\n cpd27091 None\n cpd27092 None\n cpd27093 None\n cpd27095 None\n cpd27096 C15H19N5O19P4R2\n cpd27097 C22H37N2O16R\n cpd27100 C28H34N11O21P3R\n cpd27101 C28H34N11O21P3R\n cpd27104 C19H31N3O12R3\n cpd27105 C42H70N3O31R2\n cpd27106 C27H44N4O17R3\n cpd27107 C34H57N2O26R2\n cpd27108 C15H20NO7R2\n cpd27110 C7H10O7R\n cpd27111 C6H8O7R\n cpd27112 C12H18O12R\n cpd27113 C33H41N12O24P3R\n cpd27115 C28H34N11O21P3R\n cpd27116 C28H34N11O21P3R\n cpd27117 None\n cpd27118 C24H37N2O20R\n cpd27121 None\n cpd27122 None\n cpd27123 None\n cpd27124 C9H10NO2R2\n cpd27125 C12H21O10R\n cpd27130 C29H48O25R2\n cpd27131 C20H34NO16R2\n cpd27132 C34H54N2O27R2\n cpd27133 C8H14NO6R\n cpd27134 None\n cpd27135 None\n cpd27136 None\n cpd27137 C6H11O6R\n cpd27138 C6H11O6R\n cpd27139 C18H31O15R\n cpd27140 C17H26O15R2\n cpd27141 C6H11O6R\n cpd27142 C5H9N2O3R\n cpd27143 C18H29OR\n cpd27144 C6H14N2OR2\n cpd27145 C48H74O41R2\n cpd27146 None\n cpd27147 None\n cpd27148 HO4PR\n cpd27149 C4H5N2O3R4\n cpd27150 C17H28N2O6R6\n cpd27151 None\n cpd27152 C63H104N2O52R\n cpd27153 C80H133N4O62R2\n cpd27154 C26H41NO22R2\n cpd27155 C12H16NO17R2S2\n cpd27156 C12H16NO17R2S2\n cpd27157 C34H54N2O27R2\n cpd27158 None\n cpd27159 None\n cpd27160 None\n cpd27161 None\n cpd27162 C22H33N3O21P2R2\n cpd27163 None\n cpd27164 C24H40O21R2\n cpd27167 C7H11NO9RS2\n 
cpd27168 C25H48NO8R\n cpd27169 C25H46NO8R\n cpd27170 None\n cpd27171 C6H11O6R\n cpd27172 C27H41O23R2\n cpd27173 C52H81O43R2\n cpd27174 C6H8O6R\n cpd27175 C22H33O19R2\n cpd27176 C37H57O31R2\n cpd27178 None\n cpd27179 None\n cpd27180 C20H34N3O11PR2S\n cpd27181 None\n cpd27183 C3H7O6PR\n cpd27184 None\n cpd27185 None\n cpd27186 C204H340O171R2\n cpd27187 C4H6N2O2R3\n cpd27188 C4H6N2O2R3\n cpd27189 C15H21NO14R2S\n cpd27190 C4H6NO3R3\n cpd27191 C16H23N3O17P2R2\n cpd27192 None\n cpd27193 None\n cpd27194 C44H74N9O13PR2S\n cpd27195 C7H4O2R4\n cpd27196 C10H11N5O11P2R2\n cpd27197 C10H11N5O11P2R2\n cpd27198 C10H11N5O11P2R2\n cpd27199 C20H22N10O18P3R2\n cpd27200 C10H11N5O11P2R2\n cpd27201 C10H11N5O11P2R2\n cpd27202 C10H11N5O11P2R2\n cpd27203 C10H11N5O11P2R2\n cpd27204 C10H11N5O11P2R2\n cpd27205 C10H11N5O11P2R2\n cpd27206 None\n cpd27207 None\n cpd27209 C24H30N2O39R2S6\n cpd27210 C6H12NO5R2\n cpd27211 C6H11NO8R2S\n cpd27212 None\n cpd27214 None\n cpd27217 None\n cpd27218 C28H34N11O21P3R\n cpd27219 None\n cpd27220 C29H45N8O21P3R2S\n cpd27221 C22H31N7O17P3RS\n cpd27223 C7H4O4R\n cpd27224 C9H7N2O5R2S\n cpd27226 None\n cpd27227 C2H2O2R\n cpd27228 None\n cpd27229 C6H11O6R\n cpd27230 None\n cpd27231 C6H8NO14R2S3\n cpd27232 C6H9NO11R2S2\n cpd27233 C8H13NO6R2\n cpd27234 C8H12NO9R2S\n cpd27235 C8H14NO6R\n cpd27236 C7H10O6R\n cpd27237 C6H9NO11R2S2\n cpd27238 C6H10NO8R2S\n cpd27239 C6H13NO5R\n cpd27240 C12H17NO14R2S\n cpd27241 C12H16NO17R2S2\n cpd27242 C12H16NO17R2S2\n cpd27243 C12H17NO14R2S\n cpd27244 C6H10NO8R2S\n cpd27245 C6H9NO11R2S2\n cpd27246 C30H42N3O16PR2S\n cpd27247 C31H58N3O9PR2S\n cpd27248 None\n cpd27250 C11H13N5O10P2R2\n cpd27251 C10H11N5O10P2R2\n cpd27252 C6H14N4OR2\n cpd27253 C7H16N4OR2\n cpd27254 None\n cpd27255 None\n cpd27256 C14H26N3O8PR2S\n cpd27257 C14H26N3O8PR2S\n cpd27258 None\n cpd27259 C29H45N8O21P3R2S\n cpd27260 None\n cpd27261 C28H40N2O23R2\n cpd27262 C8H14NO6R\n cpd27263 C7H10O6R\n cpd27264 None\n cpd27265 None\n cpd27266 None\n cpd27267 None\n cpd27268 C51H89N2O27R\n cpd27269 C28H34N11O21P3R\n cpd27272 None\n cpd27281 None\n cpd27295 C26H50NO13PR\n cpd27296 None\n cpd27297 None\n cpd27298 None\n cpd27301 CNRS\n cpd27303 None\n cpd27304 None\n cpd27305 None\n cpd27306 None\n cpd27309 C25H48NO11PR\n cpd27310 C4H5N2O3R4\n cpd27311 None\n cpd27312 C18H31O15R\n cpd27313 IR\n cpd27314 None\n cpd27316 C21H11O8R9\n cpd27317 C15O2R10\n cpd27327 None\n cpd27329 C28H46N2O21R2\n cpd27330 C28H42N2O33R2S4\n cpd27332 C18H35N3O13PR2\n cpd27333 C18H36N3O10R2\n cpd27334 C6H9O9R2S\n cpd27335 C6H10O6R2\n cpd27336 C8H13NO6R2\n cpd27337 C8H12NO9R2S\n cpd27339 C7H12NO8PR2\n cpd27340 C8H12O10PR2\n cpd27341 C8H11O13P2R2\n cpd27342 C8H11NO10PR2\n cpd27343 C11H16O13PR2\n cpd27344 C2H4NO2R\n cpd27345 C25H37N7O18P3RS\n cpd27347 C3H3N2O2R\n cpd27352 C2H4NO2R\n cpd27353 C4H6N2O3R2\n cpd27354 C20H32O16R2\n cpd27355 C3H6NOR2S\n cpd27356 None\n cpd27363 C5H8NO3RS\n cpd27364 C5H8NO3R\n cpd27366 C7H13N2O2R2S\n cpd27368 C5H5O8PR2\n cpd27369 None\n cpd27370 C6H11O5R\n cpd27372 C31H39N12O26P4R\n cpd27374 C32H39N12O24P3R\n cpd27375 C9H16NO6R2\n cpd27376 C33H41N12O24P3R\n cpd27377 C6H13NOR\n cpd27378 C33H44N12O22P3RS\n cpd27380 C31H40N12O23P3R\n cpd27381 C3H6NO2R2\n cpd27382 C3H5NO5PR2\n cpd27383 C28H34N11O21P3R\n cpd27385 None\n cpd27387 C3H4OR2\n cpd27388 C28H34N11O21P3R\n cpd27389 C9H17N2O4R\n cpd27390 C68H113N4O52R2\n cpd27391 C76H126N5O60PR2\n cpd27394 C31H56NO13R\n cpd27395 C45H79N2O23R\n cpd27397 C24H34O39R2S6\n cpd27398 None\n cpd27399 None\n cpd27400 None\n cpd27401 C24H41O21R\n cpd27402 None\n cpd27403 None\n 
cpd27404 C38H72N3O9PR2S\n cpd27409 None\n cpd27410 C12H21O11R\n cpd27411 C18H31OR\n cpd27412 None\n cpd27413 None\n cpd27414 HR\n cpd27415 C162H281N2O89P4R\n cpd27416 None\n cpd27417 None\n cpd27418 C14H25N2O2R2S2\n cpd27419 C27H49N3O9PR3S\n cpd27420 C15H29O2R\n cpd27421 C13H25OR\n cpd27422 CO2R\n cpd27423 C2H2O2R2\n cpd27424 C13H24O2R\n cpd27425 HO16P5R\n cpd27426 C13H27OR\n cpd27427 None\n cpd27428 C15H24N5O12P2R2\n cpd27429 C3H8NO2R2\n cpd27430 C4H6O7PR2\n cpd27431 C34H48N13O22P3R\n cpd27433 C18H31O16R\n cpd27434 None\n cpd27439 C26H44NO21R\n cpd27440 None\n cpd27441 C15H27O16PR\n cpd27442 C18H31O16R\n cpd27443 C28H34N11O21P3R\n cpd27445 C69H99CoN16O15PR2\n cpd27448 C55H83CoN14O9R2\n cpd27449 C20H21N7O5R\n cpd27454 C38H70NO26P2R\n cpd27455 C32H60NO18PR\n cpd27456 C10H17O9R\n cpd27458 None\n cpd27459 None\n cpd27460 None\n cpd27461 None\n cpd27462 None\n cpd27463 None\n cpd27464 None\n cpd27465 None\n cpd27466 None\n cpd27467 None\n cpd27468 None\n cpd27469 None\n cpd27470 None\n cpd27471 None\n cpd27472 None\n cpd27473 None\n cpd27474 None\n cpd27475 None\n cpd27476 None\n cpd27477 None\n cpd27478 None\n cpd27479 None\n cpd27480 None\n cpd27481 None\n cpd27482 None\n cpd27484 None\n cpd27485 C54H80CoN14O9R2\n cpd27486 None\n cpd27487 C31H47N8O22P3R2S\n cpd27488 C32H46N8O24P3R2S\n cpd27489 None\n cpd27490 C18H31O16R\n cpd27491 C24H40O21R2\n cpd27492 C20H34NO16R\n cpd27493 C58H96N5O42R2\n cpd27496 None\n cpd27497 None\n cpd27498 None\n cpd27499 C6H11O6R\n cpd27500 C16H17O2R\n cpd27501 C16H15O2R\n cpd27503 None\n cpd27504 C26H39N2O3R2\n cpd27505 C26H39N2O2R2\n cpd27506 C24H31N4O8R\n cpd27507 C68H96CoN16O15PR2\n cpd27509 CH3RS\n cpd27510 CH3R\n cpd27511 C55H83CoN14O9R2\n cpd27512 C55H83CoN14O9R2\n cpd27513 C55H83CoN14O9R2\n cpd27514 C55H83CoN14O9R2\n cpd27515 C55H83CoN14O9R2\n cpd27516 None\n cpd27517 C55H83CoN14O9R2\n cpd27518 CH3R\n cpd27519 C19H24O17R2\n cpd27520 C3H5OR\n cpd27521 None\n cpd27522 None\n cpd27523 None\n cpd27524 None\n cpd27525 C11H16N3O11P2R2Se\n cpd27526 C14H21O9R2\n cpd27527 None\n cpd27528 None\n cpd27529 CH4NR2\n cpd27530 CO2R\n cpd27531 CH2NOR\n cpd27532 None\n cpd27533 C2H3O2R\n cpd27534 None\n cpd27535 HO7P2R\n cpd27536 C54H80CoN14O9R2\n cpd27537 C54H80CoN14O9R2\n cpd27538 C54H80CoN14O9R2\n cpd27539 C54H80CoN14O9R2\n cpd27540 C54H80CoN14O9R2\n cpd27541 None\n cpd27542 None\n cpd27543 C20H34N3O13R2S2\n cpd27544 C6H14N4OR2\n cpd27545 C7H16N4OR2\n cpd27546 None\n cpd27547 None\n cpd27548 None\n cpd27549 None\n cpd27550 None\n cpd27551 C25H48N2O8PRS\n cpd27552 C10H22N3OR2\n cpd27553 C10H22N3OR2\n cpd27554 C15H19N5O10P2R2\n cpd27555 C22H37N2O16R\n cpd27556 C22H37N2O16R\n cpd27557 C30H50N3O21R\n cpd27558 C22H37N2O16R\n cpd27559 C8H14NO6R\n cpd27560 None\n cpd27561 C53H92N3O28R\n cpd27562 C53H92N3O28R\n cpd27564 C33H54N4O22R3\n cpd27565 C8H14NO6R\n cpd27570 C11H17NO9R\n cpd27571 None\n cpd27572 C3H2NO3R2\n cpd27573 C5H4NO5R\n cpd27574 C8H4NO2R5\n cpd27575 C4H8NOR\n cpd27576 None\n cpd27577 C8H14NO6R\n cpd27578 C2H4NOR\n cpd27579 C3H2NO3R2\n cpd27580 C3H6NO2R\n cpd27581 C8H10NO9PR3\n cpd27582 C2H4NO2R\n cpd27584 None\n cpd27591 C3H4N2O3R\n cpd27592 C3H4N2O3R\n cpd27596 C12H18N3O8R3\n cpd27597 C6H2NOR5\n cpd27600 C6H6NO5RS\n cpd27601 C3H6NO2R\n cpd27604 C12H15NO20R2S3\n cpd27605 C2H3NO2R2\n cpd27606 C30H36N12O22P3R3\n cpd27609 None\n cpd27611 C8H14NO6R\n cpd27614 C34H43N12O23P3RS\n cpd27615 C17H31N4O3R\n cpd27616 C15H28N4O3R2\n cpd27617 C22H43N2O2R2\n cpd27619 C16H29N4O3R\n cpd27620 C14H26N4O3R2\n cpd27622 C22H26N10O22P5R2\n cpd27623 C11H13N5O11P2R2\n cpd27624 
C11H13N5O10P2R2\n cpd27625 C8H15N2O4R2\n cpd27628 C60H99N6O41R2\n cpd27629 C60H99N6O41R2\n cpd27630 C59H97N6O41R2\n cpd27631 C6H11N2O3R\n cpd27632 C6H11N2O3R2\n cpd27633 C20H21N7O6R\n cpd27634 C5H4N5R\n cpd27635 C11H14N5O5R\n cpd27637 C12H15N5O10P2R2\n cpd27638 C21H25N7O14P2R\n cpd27640 C21H26N7O14P2R\n cpd27642 C20H34NO16R2\n cpd27643 C14H24NO11R\n cpd27644 C91H147N7O25P2R\n cpd27645 C94H152N8O26P2R\n cpd27647 C11H16O12P2R2\n cpd27648 C11H16O12P2R2\n cpd27649 C11H14O12P2R2\n cpd27650 C11H16O13P2R2\n cpd27651 C11H18O12P2R2\n cpd27652 C11H18O14P2R2\n cpd27653 C11H18O14P2R2\n cpd27658 C18H33N4O3R\n cpd27659 C16H29N4O3R2\n cpd27660 None\n cpd27661 C17H32N4O3R2\n cpd27662 None\n cpd27663 None\n cpd27664 None\n cpd27667 None\n cpd27668 C24H37O19R\n cpd27669 C18H30O16R2\n cpd27670 CNR\n cpd27671 NO2R\n cpd27672 None\n cpd27673 None\n cpd27674 HR\n cpd27675 CHNO4RS2\n cpd27676 HOR\n cpd27677 HOR\n cpd27678 HR\n cpd27679 H3NR\n cpd27680 HOR\n cpd27681 None\n cpd27682 None\n cpd27683 None\n cpd27684 HR\n cpd27685 C5H8O9P2R2\n cpd27686 C5H7O6PR2\n cpd27687 C5H8O12P3R2\n cpd27688 C5H8O3R2\n cpd27689 C5H6O3R4\n cpd27690 C8H14NO4R\n cpd27691 None\n cpd27692 C9H15NO7R3\n cpd27694 C25H41O6R\n cpd27695 C4H5N2O3R4\n cpd27697 C9H14NO2R\n cpd27698 C9H14NO2R\n cpd27700 C31H39N12O26P4R\n cpd27701 C24H38O27P2R2\n cpd27702 None\n cpd27704 None\n cpd27705 HO6P2R\n cpd27706 C80H131O7P2R\n cpd27707 C2HO3R\n cpd27708 C2H3O3R\n cpd27709 C11H18N2O5R2\n cpd27711 None\n cpd27713 C14H27N2O2R2\n cpd27714 C14H27N2O2R2\n cpd27715 None\n cpd27716 C29H52N2O8PRS\n cpd27717 C18H33OR\n cpd27718 None\n cpd27719 C12H20O11R2\n cpd27720 None\n cpd27721 C30H42O32P6R10\n cpd27722 C6H6O6R\n cpd27723 C18H30O16R2\n cpd27724 None\n cpd27725 C6H6O6R\n cpd27726 C23H31N7O19P3RS\n cpd27727 O4RS\n cpd27728 C9H18NO7PR2\n cpd27729 HO4PR\n cpd27730 R\n cpd27731 R\n cpd27732 C10H14N5O4R4S2\n cpd27733 None\n cpd27734 None\n cpd27735 C10H13N4O4R4S2\n cpd27736 None\n cpd27737 C12H8N4O2R\n cpd27738 Fe4R16S4\n cpd27739 Fe2R8S2\n cpd27740 H2CuN2R3S\n cpd27741 R\n cpd27742 C19H19N3O11PR\n cpd27743 R\n cpd27744 C12H8N4O2R\n cpd27745 C10H14N5O4R4S2\n cpd27746 R\n cpd27747 C5H6O2R2\n cpd27748 None\n cpd27749 Fe2R8S2\n cpd27750 FeR8\n cpd27751 H2CuN2R4S2\n cpd27752 None\n cpd27753 Fe2R8S2\n cpd27754 C34H32FeN4O4R2S2\n cpd27755 C34H32FeN4O4R2S2\n cpd27756 C34H32FeN4O4R2S2\n cpd27757 Fe2R8S2\n cpd27758 R\n cpd27759 None\n cpd27760 H2FeNiR3S4\n cpd27761 None\n cpd27762 None\n cpd27763 C19H32N2O5R2S2\n cpd27764 C15H29N2O2R2S2\n cpd27765 C15H27N2O2R2S2\n cpd27767 C40H38FeN7O7R2\n cpd27770 C11H11NO8PR2\n cpd27771 C10H16O19P4R2\n cpd27772 None\n cpd27774 C30H10O4R16\n cpd27777 C18H21O17R2\n cpd27778 C9H10N2O4RS\n cpd27780 C11H11N2O3R2\n cpd27781 C11H11N2OR2\n cpd27782 C28H34N11O21P3R\n cpd27788 C11H14O19P3R2\n cpd27789 None\n cpd27791 C11H13O22P4R2\n cpd27793 None\n cpd27795 C11H20NO7PR2\n cpd27799 C10H17O7P2R\n cpd27800 C21H29NO12R3\n cpd27801 C7H7O4R\n cpd27802 None\n cpd27803 None\n cpd27805 C5H10O2R2\n cpd27808 C28H34N11O21P3R\n cpd27809 C12H24N2O7R2\n cpd27810 C6H14N2O2R2\n cpd27811 C6H14N2OR2\n cpd27814 C3H6NOR2S\n cpd27815 C20H32N2O3R2S\n cpd27816 C21H35N2O3R2S\n cpd27817 C4H5NO3R2\n cpd27818 C5H7NO3R2\n cpd27819 C6H12N3O2R2\n cpd27820 C14H25N2O2R2S2\n cpd27821 C8H16N3O2R3\n cpd27824 C28H34N11O21P3R\n cpd27830 C27H50N2O8PRS\n cpd27831 None\n cpd27832 None\n cpd27833 C14H26N3O8PR2S\n cpd27834 C14H26N3O8PR2S\n cpd27835 C21H30O17R2\n cpd27836 C4H5NO4R2\n cpd27837 C2H4NO2R\n cpd27838 None\n cpd27839 HR\n cpd27840 None\n cpd27841 C11H15N4O4R4\n cpd27842 
C82H129N6O21P2R\n cpd27843 C94H152N8O26P2R\n cpd27844 None\n cpd27845 None\n cpd27847 C3H6NOR2S2\n cpd27848 HRS2\n cpd27849 C29H53N2O8PRS\n cpd27850 C29H55N2O8PRS\n cpd27851 C6H5OR\n cpd27852 C6H4OR\n cpd27853 C8H3O2R5\n cpd27854 None\n cpd27855 C6H12NO2R\n cpd27856 C7H14NO2R\n cpd27857 None\n cpd27858 None\n cpd27859 None\n cpd27860 None\n cpd27861 None\n cpd27862 C24H39O24PR2\n cpd27863 C15H23O8PR3\n cpd27864 C14H21O8PR3\n cpd27865 C5H5O8PR3\n cpd27866 C36H61O34PR\n cpd27867 None\n cpd27868 None\n cpd27869 H2O4PR\n cpd27870 None\n cpd27871 None\n cpd27872 None\n cpd27873 None\n cpd27875 None\n cpd27876 C19H38NO4R\n cpd27877 None\n cpd27878 C19H37NO6PR\n cpd27879 C19H38NO3R\n cpd27880 C19H38NO3R\n cpd27881 C19H38NO4R\n cpd27882 C19H36NO4R\n cpd27883 None\n cpd27884 R\n cpd27885 None\n cpd27886 None\n cpd27887 None\n cpd27888 C18H28O15R4\n cpd27889 C20H22N10O16P3R2\n cpd27890 C30H39N10O27P4R\n cpd27891 C20H23N3O12R2\n cpd27892 None\n cpd27893 C12H18O7R\n cpd27894 None\n cpd27895 C5H9O7P2R\n cpd27896 C14H26N3O8PR2S\n cpd27897 C24H28O25R2\n cpd27898 None\n cpd27899 HO9P3R\n cpd27900 HR\n cpd27901 C6H8O7R\n cpd27902 C6H8O7R\n cpd27903 None\n cpd27904 C25H38O19R2\n cpd27905 C25H39O23R2S\n cpd27906 None\n cpd27907 None\n cpd27908 C20H22N5O8R5\n cpd27909 C20H20N5O8R5\n cpd27910 None\n cpd27911 C5H8O7PR2\n cpd27912 C5H6O9P2R2\n cpd27913 None\n cpd27914 CH3OR\n cpd27915 H3NR\n cpd27916 C10H13N3O11P2R2\n cpd27917 C9H11N3O11P2R2\n cpd27918 None\n cpd27919 None\n cpd27920 None\n cpd27921 None\n cpd27922 C9H13N3O4R2\n cpd27923 C17H27N5O4R2S\n cpd27924 C3H6NOR2\n cpd27925 C4H8N3O2R2\n cpd27926 C6H7N3O4PR2\n cpd27927 C2H3NOR3\n cpd27928 None\n cpd27929 C3H6NO2R2\n cpd27930 R5S2\n cpd27931 H2R5S2\n cpd27932 C5H7NO3R2\n cpd27933 C2H4NOR2\n cpd27934 C6H8N3OR2\n cpd27935 C6H14N4OR2\n cpd27936 C4H7N2O2R2\n cpd27937 C4H5NO3R2\n cpd27938 C5H10NO2R2S\n cpd27939 C5H9N2O2R2\n cpd27940 C6H14N2OR2\n cpd27941 C5H10NOR2S\n cpd27942 C5H10NO2R2S\n cpd27943 C3H6NO2R2\n cpd27944 C11H19N2O7R2\n cpd27945 C12H21N2O7R2\n cpd27946 C7H10N3OR2\n cpd27947 C6H7N3O4PR2\n cpd27948 C6H10N2O2R3\n cpd27949 C6H11N2O2R2\n cpd27950 C6H10N3O2R3S2\n cpd27951 C3H5NO5PR2\n cpd27952 C4H7NO5PR2\n cpd27953 C6H12N3O2R3S2\n cpd27954 C18H30NOR2S\n cpd27955 C4H8NOR2S\n cpd27956 C3H6NO2R2\n cpd27957 C9H10NO2R2\n cpd27958 C4H6N2O2R3\n cpd27959 C7H13N2O2R2S\n cpd27960 C13H14N3O2R2\n cpd27961 None\n cpd27962 None\n cpd27963 C19H32N3O11R2\n cpd27964 C10H20N2O4R2\n cpd27965 C12H24N2O6R2\n cpd27966 C5H7NO3R2\n cpd27967 C60H98N9O36R2\n cpd27968 C5H7NO2R2\n cpd27969 None\n cpd27970 None\n cpd27971 None\n cpd27972 C6H7N3O4PR2\n cpd27973 C10H19N2O7PR2\n cpd27974 C12H23N2O9PR2\n cpd27975 C12H23N2O9PR2\n cpd27976 C11H21N2O8PR2\n cpd27977 C12H24N2O6R2\n cpd27978 C11H22N2O5R2\n cpd27979 C25H42N3O16R2\n cpd27980 C9H9NO5R2S\n cpd27981 C9H9NO5PR2\n cpd27982 None\n cpd27983 None\n cpd27984 None\n cpd27985 None\n cpd27986 None\n cpd27987 None\n cpd27988 None\n cpd27989 None\n cpd27990 C4H8NO2R2\n cpd27991 None\n cpd27992 None\n cpd27993 C4H5N2O2R4\n cpd27994 C54H90O46R2\n cpd27995 HR\n cpd27996 C5H9O10P2R\n cpd27997 C5H8O7PR\n cpd27998 C5H9O4R\n cpd27999 C5H9O13P3R\n cpd28001 C6H11O6R\n cpd28002 HR\n cpd28003 C5H9O3R\n cpd28004 C5H8O3R2\n cpd28005 None\n cpd28008 None\n cpd28010 None\n cpd28011 C17H31N2O3R2S2\n cpd28012 C15H29N2O2R2S2\n cpd28013 C15H27N2O2R2S2\n cpd28014 None\n cpd28017 NR4\n cpd28019 C2HO2R2\n cpd28020 C2H2O3R\n cpd28021 C24H35N7O17P3RS\n cpd28023 C27H52N2O9PRS\n cpd28024 C29H54N2O9PRS\n cpd28025 C34H64N3O10PR2S\n cpd28026 C36H68N3O10PR2S\n cpd28027 
C40H76N3O10PR2S\n cpd28028 C38H72N3O10PR2S\n cpd28029 C29H53N2O9PRS\n cpd28030 C29H56N2O9PRS\n cpd28034 None\n cpd28035 HR\n cpd28036 CH2HgR\n cpd28039 C30H42O35P6R10\n cpd28040 C15H21O19P3R5\n cpd28041 C10H14O16P3R4\n cpd28042 None\n cpd28043 None\n cpd28044 None\n cpd28045 None\n cpd28046 None\n cpd28047 None\n cpd28048 None\n cpd28049 None\n cpd28050 None\n cpd28051 None\n cpd28052 C20H30OR\n cpd28054 None\n cpd28055 R\n cpd28056 R\n cpd28057 C10H16N5O4R4S2\n cpd28058 None\n cpd28059 None\n cpd28060 C10H15N4O4R4S2\n cpd28061 None\n cpd28062 Fe4R16S4\n cpd28063 Fe2R8S2\n cpd28064 H2CuN2R3S\n cpd28065 R\n cpd28066 None\n cpd28067 C19H22N3O11PR\n cpd28068 R\n cpd28069 C12H11N4O2R\n cpd28070 C12H11N4O2R\n cpd28071 C10H16N5O4R4S2\n cpd28072 None\n cpd28073 Fe2R8S2\n cpd28074 C6H2O2R4\n cpd28075 FeR8\n cpd28076 H2CuN2R4S2\n cpd28077 None\n cpd28078 Fe2R8S2\n cpd28079 C34H32FeN4O4R2S2\n cpd28080 C34H32FeN4O4R2S2\n cpd28081 C34H32FeN4O4R2S2\n cpd28082 Fe2R8S2\n cpd28083 R\n cpd28084 None\n cpd28085 H3FeNiR3S4\n cpd28086 None\n cpd28087 None\n cpd28088 None\n cpd28089 None\n cpd28090 None\n cpd28091 None\n cpd28092 None\n cpd28093 C20H28OR\n cpd28094 None\n cpd28095 None\n cpd28096 None\n cpd28097 None\n cpd28099 C6H8O7R\n cpd28100 C6H11O5R\n cpd28101 C24H34O21R2\n cpd28102 C6H11O5R\n cpd28103 C26H39N2O3R2\n cpd28104 C26H39N2O2R2\n cpd28105 C2H3NOR2\n cpd28106 C2H4NOR2\n cpd28107 C5H9O10P2R\n cpd28108 C5H8O7PR\n cpd28109 C5H9O13P3R\n cpd28110 C5H9O4R\n cpd28111 C5H8O7PR\n cpd28112 None\n cpd28113 None\n cpd28116 C32H43N4O14PR2S\n cpd28117 C29H43N4O13PR2S\n cpd28118 C49H73N4O18PR2S\n cpd28119 C31H52O7P2R\n cpd28120 None\n cpd28121 None\n cpd28123 C2HO2R2\n cpd28124 C2H2O3R\n cpd28125 C24H35N7O17P3RS\n cpd28128 C11H15N3O7RS\n cpd28129 C4H8NO3RS\n cpd28130 C3H6NO2RS\n cpd28132 C18H30NOR2S\n cpd28134 C23H38NOR2S\n cpd28139 C8H12N2O5RS\n cpd28140 C10H15N3O6RS\n cpd28141 C3H6NO2RS\n cpd28142 C5H7NO3RS\n cpd28149 None\n cpd28152 C28H34N11O21P3R\n cpd28154 C10H17O9R\n cpd28158 None\n cpd28159 C15H21O16P3R5\n cpd28160 C15H21O19P3R5\n cpd28163 C11H15O12R2S\n cpd28164 None\n cpd28165 None\n cpd28166 C7H12O7PR\n cpd28167 C18H31N3O9PR3S\n cpd28168 C25H37N7O17P3RS\n cpd28169 C3H6OR2\n cpd28170 C6HO2R4\n cpd28173 C14H18NO18R2S2\n cpd28174 C24H40O21R2\n cpd28175 C10H14O16P3R4\n cpd28176 None\n cpd28177 None\n cpd28178 HR\n cpd28179 C15H21O16P3R5\n cpd28180 None\n cpd28181 None\n cpd28182 None\n cpd28183 None\n cpd28184 None\n cpd28185 C6H11O6R\n cpd28186 C6H11O6R\n cpd28187 C3H8NO5PR\n cpd28188 C3H9NO2R\n cpd28189 C9H19N2O6PR2\n cpd28190 None\n cpd28191 None\n cpd28192 None\n cpd28193 None\n cpd28195 C19H31R\n cpd28196 C19H31OR\n cpd28197 C20H30O2R2\n cpd28198 None\n cpd28199 HO4PR\n cpd28200 None\n cpd28201 HNO3RS\n cpd28202 CH2R2S\n cpd28204 R\n cpd28205 C3H6NOR2S\n cpd28206 None\n cpd28207 HRS\n cpd28208 None\n cpd28209 None\n cpd28210 None\n cpd28215 C27H37N7O17P3RS\n cpd28218 C19H21N7O5R\n cpd28223 CHNO4RS2\n cpd28225 C28H34N11O21P3R\n cpd28227 None\n cpd28228 C9H7NO4R2\n cpd28232 C25H35N7O17P3RS\n cpd28233 None\n cpd28234 C50H83N4O36R\n cpd28237 C6H8N3O4R3\n cpd28239 C28H34N11O21P3R\n cpd28240 C12H25N3O2R2\n cpd28243 C28H34N11O21P3R\n cpd28244 None\n cpd28245 C74H121N2O22P3R\n cpd28246 C77H127N2O27P4R\n cpd28247 C83H137N2O32P4R\n cpd28248 None\n cpd28249 C2H3R\n cpd28250 C8H10N4O5R4\n cpd28251 None\n cpd28252 C30H46OR4\n cpd28253 C4H7N2O3R\n cpd28254 None\n cpd28255 C3H6NOR2S\n cpd28256 C3H6NOR2S2\n cpd28257 None\n cpd28258 None\n cpd28259 C4H7N2O2RS\n cpd28260 C4H7N2O2RS\n cpd28261 C6H11O5RS\n cpd28262 
HRS\n cpd28265 None\n cpd28266 C9H8I2NO2R2\n cpd28267 C9H9INO2R2\n cpd28268 C15H10I4NO3R2\n cpd28269 C9H10NO2R2\n cpd28270 C3H3NOR2\n cpd28271 C15H11I3NO3R2\n cpd28272 C25H35N7O17P3RS\n cpd28273 C21H38N2O8PRS\n cpd28274 C40H74N3O9PR2S\n cpd28275 C23H39N2O8PRS\n cpd28276 None\n cpd28277 C27H46N2O8PRS\n cpd28278 C6H5O6R3\n cpd28279 None\n cpd28280 None\n cpd28282 C6H14N2OR2\n cpd28283 C8H15N2O2R2\n cpd28284 C7H9N2O5R2\n cpd28285 C3H6NOR2S\n cpd28286 C3H6NOR2S2\n cpd28287 None\n cpd28288 C16H18N3O7R2\n cpd28289 None\n cpd28291 None\n cpd28296 C9H11N2O12P2R\n cpd28297 None\n cpd28298 None\n cpd28299 None\n cpd28300 C14H19O4R\n cpd28301 C14H17O4R\n cpd28302 None\n cpd28303 None\n cpd28304 None\n cpd28305 None\n cpd28306 None\n cpd28307 HR\n cpd28308 C9H10N2O12P2R2\n cpd28309 C9H10N2O12P2R2\n cpd28310 C9H10N2O12P2R2\n cpd28311 C9H10N2O12P2R2\n cpd28312 C9H10N2O12P2R2\n cpd28313 C9H10N2O12P2R2\n cpd28314 C9H10N2O12P2R2\n cpd28315 C9H10N2O12P2R2\n cpd28316 C9H10N2O12P2R2\n cpd28317 C9H10N2O12P2R2\n cpd28318 C28H34N11O21P3R\n cpd28319 None\n cpd28320 C44H75N7O17P3RS\n cpd28322 C10H11N5O10P2R2\n cpd28323 C11H13N5O10P2R2\n cpd28324 C25H37N7O18P3RS\n cpd28325 C23H47OR\n cpd28326 C23H45OR\n cpd28327 C25H35N7O17P3RS\n cpd28328 C25H35N7O18P3RS\n cpd28329 C25H37N7O17P3RS\n cpd28330 C3H6NO2R2\n cpd28331 C14H26N3O8PR2S\n cpd28332 None\n cpd28333 None\n cpd28334 W\n cpd28335 C26H50O2R2\n cpd28336 None\n cpd28337 None\n cpd28338 None\n cpd28339 None\n cpd28340 None\n cpd28341 None\n cpd28342 None\n cpd28343 None\n cpd28347 C55H90O47R2\n cpd28348 C70H100O57R2\n cpd28349 None\n cpd28350 None\n cpd28351 None\n cpd28352 None\n cpd28353 None\n cpd28354 C28H37O25R2\n cpd28355 C39H61O33R5\n cpd28356 C39H63O33R3\n cpd28357 C39H63O33R3\n cpd28358 C44H72O38R2\n cpd28359 C49H80O43R2\n cpd28360 C44H72O38R2\n cpd28361 C39H64O33R2\n cpd28362 None\n cpd28363 None\n cpd28365 None\n cpd28366 None\n cpd28367 None\n cpd28368 None\n cpd28369 None\n cpd28370 None\n cpd28371 None\n cpd28372 C5H7NO4R\n cpd28373 C52H99OR\n cpd28374 None\n cpd28375 None\n cpd28376 C3H6NO2R2\n cpd28377 C6H11O6R\n cpd28378 C23H42N2O9PRS\n cpd28379 C23H41N2O9PRS\n cpd28380 C8H14NO6R\n cpd28381 None\n cpd28382 C11H11N2O14P2R2\n cpd28383 C73H140N3O10PR2S\n cpd28384 C74H140N3O10PR2S\n cpd28385 C27H39N7O17P3RS\n cpd28386 C71H136N3O10PR2S\n cpd28387 C73H140N3O10PR2S\n cpd28388 C66H124N3O9PR2S\n cpd28389 C56H104N3O10PR2S\n cpd28390 C56H102N3O10PR2S\n cpd28391 C56H104N3O9PR2S\n cpd28392 C62H116N3O10PR2S\n cpd28393 C62H114N3O10PR2S\n cpd28394 C62H116N3O9PR2S\n cpd28395 C58H108N3O10PR2S\n cpd28396 C58H106N3O10PR2S\n cpd28397 C58H108N3O9PR2S\n cpd28398 C64H120N3O10PR2S\n cpd28399 C64H118N3O10PR2S\n cpd28400 C64H120N3O9PR2S\n cpd28401 C60H112N3O10PR2S\n cpd28402 C60H110N3O10PR2S\n cpd28403 C60H112N3O9PR2S\n cpd28404 C66H124N3O10PR2S\n cpd28405 C66H122N3O10PR2S\n cpd28406 C66H124N3O9PR2S\n cpd28407 C62H116N3O10PR2S\n cpd28408 C62H114N3O10PR2S\n cpd28409 C62H116N3O9PR2S\n cpd28410 C68H128N3O10PR2S\n cpd28411 C68H126N3O10PR2S\n cpd28412 C68H128N3O9PR2S\n cpd28413 C64H120N3O10PR2S\n cpd28414 C64H118N3O10PR2S\n cpd28415 C64H120N3O9PR2S\n cpd28416 C70H132N3O10PR2S\n cpd28417 C70H130N3O10PR2S\n cpd28418 C70H132N3O9PR2S\n cpd28419 C72H136N3O10PR2S\n cpd28420 C72H134N3O10PR2S\n cpd28421 C72H136N3O9PR2S\n cpd28422 C48H88N3O9PR2S\n cpd28423 C54H100N3O9PR2S\n cpd28424 C50H92N3O10PR2S\n cpd28425 C50H90N3O10PR2S\n cpd28426 C50H92N3O9PR2S\n cpd28427 C56H104N3O10PR2S\n cpd28428 C56H102N3O10PR2S\n cpd28429 C56H104N3O9PR2S\n cpd28430 C52H96N3O10PR2S\n cpd28431 C52H94N3O10PR2S\n 
cpd28432 C52H96N3O9PR2S\n cpd28433 C58H108N3O10PR2S\n cpd28434 C58H106N3O10PR2S\n cpd28435 C58H108N3O9PR2S\n cpd28436 C54H100N3O10PR2S\n cpd28437 C54H98N3O10PR2S\n cpd28438 C54H100N3O9PR2S\n cpd28439 C60H112N3O10PR2S\n cpd28440 C60H110N3O10PR2S\n cpd28441 C60H112N3O9PR2S\n cpd28442 C44H82N3O10PR2S\n cpd28443 C44H80N3O10PR2S\n cpd28444 C44H82N3O9PR2S\n cpd28445 C46H86N3O10PR2S\n cpd28446 C46H84N3O10PR2S\n cpd28447 C46H86N3O9PR2S\n cpd28448 C48H90N3O10PR2S\n cpd28449 C48H88N3O10PR2S\n cpd28450 C48H90N3O9PR2S\n cpd28451 C50H94N3O10PR2S\n cpd28452 C50H92N3O10PR2S\n cpd28453 C50H94N3O9PR2S\n cpd28454 C52H98N3O10PR2S\n cpd28455 C52H96N3O10PR2S\n cpd28456 C52H98N3O9PR2S\n cpd28457 C54H102N3O10PR2S\n cpd28458 C54H100N3O10PR2S\n cpd28459 C36H66N3O9PR2S\n cpd28460 C38H70N3O10PR2S\n cpd28461 C38H68N3O10PR2S\n cpd28462 C38H70N3O9PR2S\n cpd28463 C40H74N3O10PR2S\n cpd28464 C40H72N3O10PR2S\n cpd28465 C40H74N3O9PR2S\n cpd28466 C42H78N3O10PR2S\n cpd28467 C42H76N3O10PR2S\n cpd28468 C42H78N3O9PR2S\n cpd28469 C60H115O2R\n cpd28470 None\n cpd28471 C59H115O2R\n cpd28472 None\n cpd28473 C29H52N2O8PRS\n cpd28474 None\n cpd28475 None\n cpd28476 None\n cpd28477 None\n cpd28478 None\n cpd28479 None\n cpd28480 None\n cpd28481 None\n cpd28482 None\n cpd28483 None\n cpd28484 C34H56N3O9PR2S\n cpd28485 C34H58N3O9PR2S\n cpd28486 C6H8N3OR2\n cpd28487 None\n cpd28488 C2OR4\n cpd28489 None\n cpd28490 None\n cpd28491 C44H64N4O33R2\n cpd28492 None\n cpd28493 None\n cpd28494 C32H54N3O9PR2S\n cpd28495 C2H2O2R4\n cpd28496 None\n cpd28497 None\n cpd28498 None\n cpd28499 None\n cpd28500 C24H32O31R2S4\n cpd28501 None\n cpd28502 C24H34O25R2S2\n cpd28503 None\n cpd28504 None\n cpd28505 None\n cpd28506 None\n cpd28507 None\n cpd28508 None\n cpd28509 None\n cpd28510 None\n cpd28511 C16H22N5O19P4R2\n cpd28512 C17H24N5O19P4R2\n cpd28513 None\n cpd28514 None\n cpd28515 None\n cpd28516 HO7P2R\n cpd28517 HO4PR\n cpd28518 None\n cpd28519 None\n cpd28520 None\n cpd28521 None\n cpd28522 None\n cpd28523 C24H36O33R2S4\n cpd28524 C3H5OR\n cpd28525 C24H34O39R2S6\n cpd28526 C5H7O10P2R2\n cpd28527 None\n cpd28528 None\n cpd28529 None\n cpd28530 C44H64N4O33R2\n cpd28531 C15H19N5O23P5R2\n cpd28532 None\n cpd28533 None\n cpd28534 C6H10NO3R2\n cpd28535 None\n cpd28536 None\n cpd28537 None\n cpd28538 None\n cpd28539 None\n cpd28540 C15H21O19P3R5\n cpd28541 None\n cpd28542 C9H10N2O11P2R2S\n cpd28543 C10H11N5O10P2R2\n cpd28544 C10H11N5O10P2R2\n cpd28545 C10H11N5O10P2R2\n cpd28546 C10H11N5O10P2R2\n cpd28547 C10H10N4O11P2R2\n cpd28548 C11H16N3O11P2R2S\n cpd28549 C15H19N5O10P2R2\n cpd28550 C11H13N5O10P2R2\n cpd28551 C11H13N5O10P2R2\n cpd28552 C11H13N5O10P2R2\n cpd28553 C11H13N5O11P2R2\n cpd28554 C11H13N5O11P2R2\n cpd28555 C24H30N10O18P3R2\n cpd28556 C23H28N10O18P3R2\n cpd28557 C22H26N10O18P3R2\n cpd28558 C21H24N10O18P3R2\n cpd28559 C11H13N5O11P2R2\n cpd28560 C11H13N5O11P2R2\n cpd28561 C11H13N5O11P2R2\n cpd28562 C12H15N5O11P2R2\n cpd28563 C12H15N5O11P2R2\n cpd28564 C11H14N5O11P2R2\n cpd28565 None\n cpd28566 C9H12N2O12P2R2\n cpd28567 C15H21O19P3R5\n cpd28568 None\n cpd28569 None\n cpd28570 C10H12N2O12P2R2\n cpd28571 C12H11N5O11P2R2\n cpd28572 C12H15N6O11P2R2\n cpd28573 C12H15N3O13P2R2S\n cpd28574 C10H12N2O12P2R2\n cpd28575 C10H11N5O11P2R2\n cpd28576 C10H14O16P3R4\n cpd28577 C9H10N2O12P2R2\n cpd28578 C9H10N2O12P2R2\n cpd28579 C9H10N2O12P2R2\n cpd28580 C9H10N2O12P2R2\n cpd28581 C9H10N2O12P2R2\n cpd28582 C9H10N2O12P2R2\n cpd28583 C9H10N2O12P2R2\n cpd28584 C9H10N2O12P2R2\n cpd28585 C9H10N2O12P2R2\n cpd28586 C9H10N2O12P2R2\n cpd28587 C9H10N2O12P2R2\n cpd28588 
C9H10N2O12P2R2\n cpd28589 C9H10N2O12P2R2\n cpd28590 C9H10N2O12P2R2\n cpd28591 C9H10N2O12P2R2\n cpd28592 C9H10N2O12P2R2\n cpd28593 C12H16N5O11P2R2\n cpd28594 None\n cpd28595 C9H10N2O12P2R2\n cpd28596 C28H34N11O21P3R\n cpd28597 C17H22N5O13P2R2\n cpd28598 C11H13N5O10P2R2\n cpd28599 C17H22N5O14P2R2\n cpd28600 C28H34N11O21P3R\n cpd28601 C22H27N6O16P2R2\n cpd28602 C17H22N5O13P2R2\n cpd28603 C74H142N3O10PR2S\n cpd28604 C27H37N7O17P3RS\n cpd28605 C74H140N3O10PR2S\n cpd28606 C27H37N7O17P3RS\n cpd28607 C72H138N3O10PR2S\n cpd28608 C44H80N3O9PR2S\n cpd28609 C46H84N3O9PR2S\n cpd28610 C48H88N3O9PR2S\n cpd28611 C50H92N3O9PR2S\n cpd28612 C52H96N3O9PR2S\n cpd28613 C54H100N3O9PR2S\n cpd28614 C38H68N3O9PR2S\n cpd28615 C40H72N3O9PR2S\n cpd28616 C42H76N3O9PR2S\n cpd28617 C56H102N3O9PR2S\n cpd28618 C62H114N3O9PR2S\n cpd28619 C58H106N3O9PR2S\n cpd28620 C64H118N3O9PR2S\n cpd28621 C60H110N3O9PR2S\n cpd28622 C66H122N3O9PR2S\n cpd28623 C62H114N3O9PR2S\n cpd28624 C68H126N3O9PR2S\n cpd28625 C64H118N3O9PR2S\n cpd28626 C70H130N3O9PR2S\n cpd28627 C72H134N3O9PR2S\n cpd28628 C50H90N3O9PR2S\n cpd28629 C56H102N3O9PR2S\n cpd28630 C52H94N3O9PR2S\n cpd28631 C58H106N3O9PR2S\n cpd28632 C54H98N3O9PR2S\n cpd28633 C60H110N3O9PR2S\n cpd28634 C74H142N3O10PR2S\n cpd28635 C34H62N3O9PR2S\n cpd28636 C36H66N3O9PR2S\n cpd28637 C38H70N3O9PR2S\n cpd28638 None\n cpd28639 None\n cpd28640 None\n cpd28641 None\n cpd28642 None\n cpd28643 None\n cpd28645 None\n cpd28646 None\n cpd28647 None\n cpd28648 None\n cpd28649 None\n cpd28650 None\n cpd28651 None\n cpd28656 None\n cpd28661 None\n cpd28662 None\n cpd28663 None\n cpd28666 None\n cpd28674 None\n cpd28680 None\n cpd28681 None\n cpd28682 None\n cpd28683 None\n cpd28684 None\n cpd28686 None\n cpd28690 None\n cpd28691 None\n cpd28692 None\n cpd28693 None\n cpd28694 None\n cpd28695 None\n cpd28696 None\n cpd28697 None\n cpd28698 None\n cpd28699 None\n cpd28700 None\n cpd28701 None\n cpd28702 None\n cpd28703 None\n cpd28705 None\n cpd28706 None\n cpd28707 None\n cpd28708 None\n cpd28709 None\n cpd28710 C6H8N2O4R2\n cpd28712 None\n cpd28713 None\n cpd28716 None\n cpd28717 None\n cpd28719 None\n cpd28720 None\n cpd28721 None\n cpd28725 None\n cpd28727 None\n cpd28740 None\n cpd28741 None\n cpd28742 None\n cpd28743 None\n cpd28744 None\n cpd28747 None\n cpd28758 None\n cpd28763 None\n cpd28777 None\n cpd28778 None\n cpd28780 None\n cpd28781 None\n cpd28788 None\n cpd28789 C10H18O13P2R2\n cpd28791 None\n cpd28816 None\n cpd28823 None\n cpd28826 None\n cpd28827 None\n cpd28828 None\n cpd28831 None\n cpd28855 None\n cpd28863 None\n cpd28884 None\n cpd28885 None\n cpd28886 None\n cpd28887 None\n cpd28888 None\n cpd28889 None\n cpd28890 None\n cpd28891 None\n cpd28892 None\n cpd28893 None\n cpd28897 None\n cpd28937 None\n cpd28953 None\n cpd29032 None\n cpd29033 None\n cpd29034 None\n cpd29035 None\n cpd29036 None\n cpd29037 None\n cpd29038 None\n cpd29039 None\n cpd29040 None\n cpd29041 None\n cpd29042 None\n cpd29043 None\n cpd29044 None\n cpd29050 None\n cpd29101 None\n cpd29102 None\n cpd29103 None\n cpd29104 None\n cpd29105 None\n cpd29106 None\n cpd29107 None\n cpd29108 None\n cpd29109 None\n cpd29110 None\n cpd29111 None\n cpd29112 None\n cpd29113 None\n cpd29114 None\n cpd29115 None\n cpd29116 None\n cpd29117 None\n cpd29118 None\n cpd29119 None\n cpd29120 None\n cpd29121 None\n cpd29122 None\n cpd29123 None\n cpd29124 None\n cpd29125 None\n cpd29126 None\n cpd29127 None\n cpd29128 None\n cpd29129 None\n cpd29130 None\n cpd29131 None\n cpd29132 None\n cpd29133 None\n cpd29134 None\n cpd29135 None\n 
cpd29136 None\n cpd29137 None\n cpd29138 None\n cpd29139 None\n cpd29140 None\n cpd29141 None\n cpd29142 None\n cpd29143 None\n cpd29144 None\n cpd29145 None\n cpd29146 None\n cpd29147 None\n cpd29148 None\n cpd29149 None\n cpd29150 None\n cpd29151 None\n cpd29152 None\n cpd29153 None\n cpd29154 None\n cpd29155 None\n cpd29156 None\n cpd29157 None\n cpd29158 None\n cpd29159 None\n cpd29160 None\n cpd29161 None\n cpd29162 None\n cpd29163 None\n cpd29164 None\n cpd29165 None\n cpd29166 None\n cpd29167 None\n cpd29168 None\n cpd29169 None\n cpd29170 None\n cpd29171 None\n cpd29172 None\n cpd29173 None\n cpd29174 None\n cpd29175 None\n cpd29176 None\n cpd29177 None\n cpd29178 None\n cpd29179 None\n cpd29180 None\n cpd29181 None\n cpd29182 None\n cpd29183 None\n cpd29184 None\n cpd29185 None\n cpd29186 None\n cpd29187 None\n cpd29189 None\n cpd29190 None\n cpd29191 None\n cpd29192 None\n cpd29195 None\n cpd29196 None\n cpd29197 None\n cpd29201 None\n cpd29202 None\n cpd29203 None\n cpd29204 None\n cpd29205 None\n cpd29206 None\n cpd29207 None\n cpd29208 None\n cpd29209 None\n cpd29210 None\n cpd29211 None\n cpd29212 None\n cpd29214 None\n cpd29215 None\n cpd29216 None\n cpd29217 None\n cpd29218 None\n cpd29219 None\n cpd29220 None\n cpd29221 None\n cpd29222 None\n cpd29223 None\n cpd29224 None\n cpd29225 None\n cpd29226 None\n cpd29227 None\n cpd29228 None\n cpd29229 None\n cpd29230 None\n cpd29231 None\n cpd29232 None\n cpd29233 None\n cpd29234 None\n cpd29235 None\n cpd29236 None\n cpd29237 None\n cpd29238 None\n cpd29239 None\n cpd29240 None\n cpd29241 None\n cpd29242 None\n cpd29243 None\n cpd29244 None\n cpd29245 None\n cpd29246 None\n cpd29247 None\n cpd29248 None\n cpd29249 None\n cpd29250 None\n cpd29251 None\n cpd29252 None\n cpd29255 None\n cpd29256 None\n cpd29257 None\n cpd29258 None\n cpd29259 None\n cpd29261 None\n cpd29262 None\n cpd29263 None\n cpd29264 None\n cpd29265 None\n cpd29266 None\n cpd29267 None\n cpd29268 None\n cpd29269 None\n cpd29270 None\n cpd29271 None\n cpd29272 None\n cpd29273 None\n cpd29274 None\n cpd29275 None\n cpd29276 None\n cpd29277 None\n cpd29278 None\n cpd29279 None\n cpd29280 None\n cpd29281 None\n cpd29282 None\n cpd29283 None\n cpd29284 None\n cpd29285 None\n cpd29286 None\n cpd29287 None\n cpd29288 None\n cpd29289 None\n cpd29290 None\n cpd29291 None\n cpd29293 None\n cpd29294 None\n cpd29295 None\n cpd29296 None\n cpd29297 C6H8O7R\n cpd29298 None\n cpd29299 None\n cpd29300 None\n cpd29302 None\n cpd29303 None\n cpd29304 None\n cpd29306 None\n cpd29307 None\n cpd29311 None\n cpd29312 None\n cpd29313 None\n cpd29314 None\n cpd29315 None\n cpd29316 None\n cpd29320 None\n cpd29322 None\n cpd29324 None\n cpd29325 None\n cpd29326 None\n cpd29327 None\n cpd29328 None\n cpd29329 None\n cpd29330 C34H55N7O17P3RS\n cpd29331 None\n cpd29332 None\n cpd29333 None\n cpd29334 None\n cpd29335 None\n cpd29337 None\n cpd29338 None\n cpd29339 None\n cpd29340 None\n cpd29341 None\n cpd29342 None\n cpd29343 None\n cpd29344 None\n cpd29345 None\n cpd29346 None\n cpd29347 None\n cpd29348 None\n cpd29349 None\n cpd29350 None\n cpd29351 None\n cpd29352 None\n cpd29353 None\n cpd29354 None\n cpd29355 None\n cpd29356 None\n cpd29357 None\n cpd29358 None\n cpd29359 None\n cpd29360 None\n cpd29361 None\n cpd29362 None\n cpd29363 None\n cpd29364 None\n cpd29365 None\n cpd29366 None\n cpd29367 None\n cpd29368 None\n cpd29369 None\n cpd29370 None\n cpd29371 None\n cpd29372 None\n cpd29373 None\n cpd29374 None\n cpd29375 None\n cpd29376 None\n cpd29377 None\n cpd29378 None\n 
cpd29379 None\n cpd29380 None\n cpd29381 None\n cpd29382 None\n cpd29383 None\n cpd29384 None\n cpd29385 None\n cpd29386 None\n cpd29387 None\n cpd29388 None\n cpd29389 None\n cpd29390 None\n cpd29391 None\n cpd29392 None\n cpd29393 None\n cpd29394 None\n cpd29395 None\n cpd29396 None\n cpd29397 None\n cpd29398 None\n cpd29399 None\n cpd29400 None\n cpd29401 None\n cpd29402 None\n cpd29403 None\n cpd29404 None\n cpd29405 None\n cpd29406 None\n cpd29407 None\n cpd29409 None\n cpd29410 None\n cpd29411 None\n cpd29412 None\n cpd29414 None\n cpd29415 None\n cpd29416 None\n cpd29417 None\n cpd29418 None\n cpd29419 None\n cpd29420 None\n cpd29421 C86H139N7O21P2R\n cpd29422 None\n cpd29423 None\n cpd29424 None\n cpd29425 None\n cpd29426 None\n cpd29428 None\n cpd29430 None\n cpd29431 None\n cpd29432 None\n cpd29435 None\n cpd29438 None\n cpd29439 None\n cpd29440 None\n cpd29441 None\n cpd29442 None\n cpd29443 None\n cpd29444 None\n cpd29445 None\n cpd29446 None\n cpd29447 None\n cpd29448 None\n cpd29449 None\n cpd29450 None\n cpd29451 None\n cpd29452 None\n cpd29453 None\n cpd29454 None\n cpd29455 None\n cpd29456 None\n cpd29457 None\n cpd29458 None\n cpd29459 None\n cpd29460 None\n cpd29461 None\n cpd29462 None\n cpd29463 None\n cpd29464 None\n cpd29465 None\n cpd29466 None\n cpd29467 None\n cpd29468 None\n cpd29469 None\n cpd29470 None\n cpd29471 None\n cpd29472 None\n cpd29473 None\n cpd29474 None\n cpd29475 None\n cpd29476 None\n cpd29477 None\n cpd29479 None\n cpd29480 None\n cpd29481 None\n cpd29482 None\n cpd29483 None\n cpd29484 None\n cpd29485 None\n cpd29486 None\n cpd29487 None\n cpd29488 None\n cpd29489 None\n cpd29490 None\n cpd29491 None\n cpd29492 None\n cpd29493 None\n cpd29494 None\n cpd29495 None\n cpd29496 None\n cpd29497 None\n cpd29498 None\n cpd29499 None\n cpd29500 None\n cpd29501 None\n cpd29502 None\n cpd29503 None\n cpd29504 None\n cpd29505 None\n cpd29506 None\n cpd29507 None\n cpd29508 None\n cpd29509 None\n cpd29510 None\n cpd29511 None\n cpd29512 None\n cpd29513 None\n cpd29514 None\n cpd29515 None\n cpd29516 None\n cpd29517 None\n cpd29518 None\n cpd29519 None\n cpd29520 None\n cpd29521 None\n cpd29522 None\n cpd29523 None\n cpd29524 None\n cpd29525 None\n cpd29527 None\n cpd29528 None\n cpd29529 None\n cpd29530 None\n cpd29531 None\n cpd29532 C6H8N3OR2\n cpd29533 C6H7N3O4PR2\n cpd29534 None\n cpd29535 None\n cpd29536 None\n cpd29537 None\n cpd29538 None\n cpd29539 None\n cpd29540 None\n cpd29541 None\n cpd29542 None\n cpd29543 None\n cpd29544 None\n cpd29545 None\n cpd29546 None\n cpd29547 None\n cpd29548 None\n cpd29549 None\n cpd29550 None\n cpd29551 C4H6NO3R\n cpd29552 None\n cpd29553 None\n cpd29554 None\n cpd29555 None\n cpd29558 None\n cpd29559 None\n cpd29560 None\n cpd29561 None\n cpd29562 None\n cpd29563 None\n cpd29564 None\n cpd29565 None\n cpd29566 None\n cpd29567 None\n cpd29568 None\n cpd29574 None\n cpd29576 None\n cpd29577 None\n cpd29578 None\n cpd29580 None\n cpd29581 None\n cpd29582 None\n cpd29585 None\n cpd29586 None\n cpd29587 None\n cpd29588 None\n cpd29589 None\n cpd29590 None\n cpd29591 None\n cpd29592 None\n cpd29593 None\n cpd29595 None\n cpd29596 None\n cpd29597 None\n cpd29598 None\n cpd29599 None\n cpd29600 None\n cpd29601 None\n cpd29602 None\n cpd29604 None\n cpd29605 None\n cpd29606 None\n cpd29607 None\n cpd29610 None\n cpd29611 None\n cpd29612 None\n cpd29613 None\n cpd29614 None\n cpd29615 None\n cpd29616 None\n cpd29617 None\n cpd29618 None\n cpd29621 None\n cpd29623 None\n cpd29624 None\n cpd29625 None\n cpd29626 None\n 
cpd29627 None\n cpd29629 None\n cpd29630 None\n cpd29631 None\n cpd29632 None\n cpd29633 None\n cpd29634 None\n cpd29635 None\n cpd29636 None\n cpd29637 None\n cpd29638 None\n cpd29639 None\n cpd29640 None\n cpd29641 None\n cpd29642 None\n cpd29643 None\n cpd29647 None\n cpd29648 None\n cpd29649 None\n cpd29650 None\n cpd29651 None\n cpd29652 None\n cpd29658 None\n cpd29659 None\n cpd29660 None\n cpd29661 None\n cpd29662 None\n cpd29672 HOR\n cpd29680 C29H53N2O8PRS\n cpd29682 None\n cpd29683 None\n cpd29684 None\n cpd29685 None\n cpd29686 None\n cpd29687 None\n cpd29688 None\n cpd29689 C8H12O13P2R2\n cpd29707 None\n cpd29708 None\n cpd29709 None\n cpd29710 None\n cpd29711 None\n cpd29712 None\n cpd29713 None\n cpd29714 None\n cpd29715 None\n cpd29716 None\n cpd29717 None\n cpd29719 None\n cpd29720 None\n cpd29721 None\n cpd29722 None\n cpd29723 None\n cpd29724 None\n cpd29725 None\n cpd29726 None\n cpd29727 None\n cpd29728 None\n cpd29729 None\n cpd29730 None\n cpd29731 None\n cpd29734 None\n cpd29735 None\n cpd29736 None\n cpd29737 None\n cpd29738 None\n cpd29739 None\n cpd29740 None\n cpd29741 None\n cpd29742 None\n cpd29743 None\n cpd29744 None\n cpd29745 None\n cpd29746 None\n cpd29747 None\n cpd29748 None\n cpd29749 None\n cpd29750 None\n cpd29751 None\n cpd29752 None\n cpd29753 None\n cpd29754 None\n cpd29755 None\n cpd29756 None\n cpd29757 None\n cpd29758 None\n cpd29759 None\n cpd29760 None\n cpd29761 None\n cpd29762 None\n cpd29763 None\n cpd29764 None\n cpd29765 None\n cpd29766 None\n cpd29767 None\n cpd29768 None\n cpd29769 None\n cpd29770 None\n cpd29771 None\n cpd29772 None\n cpd29773 None\n cpd29774 None\n cpd29775 None\n cpd29776 None\n cpd29777 None\n cpd29778 None\n cpd29779 None\n cpd29780 None\n cpd29781 None\n cpd29782 None\n cpd29783 None\n cpd29784 None\n cpd29785 None\n cpd29786 None\n cpd29787 None\n cpd29788 None\n cpd29789 None\n cpd29790 None\n cpd29791 None\n cpd29792 None\n cpd29793 None\n cpd29794 None\n cpd29795 None\n cpd29796 None\n cpd29797 None\n cpd29798 None\n cpd29799 None\n cpd29800 None\n cpd29801 None\n cpd29802 None\n cpd29803 None\n cpd29804 None\n cpd29805 None\n cpd29806 None\n cpd29807 None\n cpd29808 None\n cpd29809 None\n cpd29810 None\n cpd29811 None\n cpd29812 None\n cpd29813 None\n cpd29814 None\n cpd29815 None\n cpd29816 None\n cpd29817 None\n cpd29818 None\n cpd29819 None\n cpd29820 None\n cpd29821 None\n cpd29822 None\n cpd29823 None\n cpd29824 None\n cpd29825 None\n cpd29826 None\n cpd29827 None\n cpd29828 None\n cpd29829 None\n cpd29830 None\n cpd29831 None\n cpd29832 None\n cpd29833 None\n cpd29834 None\n cpd29835 None\n cpd29836 None\n cpd29837 None\n cpd29838 None\n cpd29839 None\n cpd29840 None\n cpd29841 None\n cpd29842 None\n cpd29843 None\n cpd29844 None\n cpd29845 None\n cpd29863 None\n cpd29864 None\n cpd29867 None\n cpd29868 None\n cpd29869 None\n cpd29872 None\n cpd29873 None\n cpd29874 None\n cpd29876 None\n cpd29877 None\n cpd29878 None\n cpd29879 None\n cpd29880 None\n cpd29881 None\n cpd29882 None\n cpd29883 None\n cpd29884 None\n cpd29885 None\n cpd29886 None\n cpd29887 None\n cpd29888 None\n cpd29889 None\n cpd29890 None\n cpd29891 None\n cpd29892 None\n cpd29893 None\n cpd29894 None\n cpd29895 None\n cpd29896 None\n cpd29897 None\n cpd29898 None\n cpd29899 None\n cpd29900 None\n cpd29901 None\n cpd29902 None\n cpd29903 None\n cpd29904 None\n cpd29905 None\n cpd29906 None\n cpd29907 None\n cpd29908 None\n cpd29909 None\n cpd29910 None\n cpd29911 None\n cpd29912 None\n cpd29913 None\n cpd29915 None\n cpd29918 None\n 
cpd29919 None\n cpd29921 None\n cpd29922 None\n cpd29923 None\n cpd29924 None\n cpd29925 None\n cpd29931 None\n cpd29932 None\n cpd29933 None\n cpd29934 None\n cpd29935 None\n cpd29936 None\n cpd29937 None\n cpd29938 None\n cpd29939 None\n cpd29940 None\n cpd29941 None\n cpd29942 None\n cpd29943 None\n cpd29944 None\n cpd29945 None\n cpd29946 None\n cpd29947 None\n cpd29948 None\n cpd29949 None\n cpd29950 None\n cpd29951 None\n cpd29952 None\n cpd29953 None\n cpd29954 None\n cpd29955 None\n cpd29956 None\n cpd29957 None\n cpd29958 None\n cpd29959 None\n cpd29960 None\n cpd29961 None\n cpd29962 None\n cpd29963 None\n cpd29964 None\n cpd29965 None\n cpd29966 None\n cpd29967 None\n cpd29968 None\n cpd29969 None\n cpd29970 None\n cpd29971 None\n cpd29972 None\n cpd29973 None\n cpd29974 None\n cpd29975 None\n cpd29976 None\n cpd29977 None\n cpd29978 None\n cpd29979 None\n cpd29980 None\n cpd29981 None\n cpd29982 None\n cpd29983 None\n cpd29984 None\n cpd29985 None\n cpd29986 None\n cpd29987 None\n cpd29988 None\n cpd29989 None\n cpd29990 None\n cpd29991 None\n cpd29992 None\n cpd29993 None\n cpd29994 None\n cpd29995 None\n cpd29996 None\n cpd29997 None\n cpd29998 None\n cpd29999 None\n cpd30000 None\n cpd30001 None\n cpd30002 None\n cpd30003 None\n cpd30004 None\n cpd30005 None\n cpd30006 None\n cpd30007 None\n cpd30008 None\n cpd30009 None\n cpd30010 None\n cpd30011 None\n cpd30012 None\n cpd30013 None\n cpd30014 None\n cpd30015 None\n cpd30016 None\n cpd30017 None\n cpd30018 None\n cpd30034 None\n cpd30035 None\n cpd30039 None\n cpd30040 None\n cpd30041 None\n cpd30042 None\n cpd30043 None\n cpd30044 None\n cpd30045 None\n cpd30046 None\n cpd30047 None\n cpd30048 None\n cpd30049 None\n cpd30050 None\n cpd30051 None\n cpd30052 None\n cpd30053 None\n cpd30054 None\n cpd30055 None\n cpd30056 None\n cpd30057 None\n cpd30058 None\n cpd30061 None\n cpd30062 None\n cpd30064 None\n cpd30066 None\n cpd30067 None\n cpd30068 None\n cpd30070 None\n cpd30071 None\n cpd30072 None\n cpd30074 None\n cpd30083 None\n cpd30084 None\n cpd30085 None\n cpd30087 None\n cpd30116 None\n cpd30117 None\n cpd30118 None\n cpd30119 None\n cpd30120 None\n cpd30121 None\n cpd30122 None\n cpd30123 None\n cpd30125 None\n cpd30126 None\n cpd30127 None\n cpd30128 None\n cpd30129 None\n cpd30130 None\n cpd30131 None\n cpd30132 None\n cpd30133 None\n cpd30134 None\n cpd30165 None\n cpd30166 None\n cpd30167 None\n cpd30168 None\n cpd30169 None\n cpd30170 None\n cpd30171 None\n cpd30172 None\n cpd30173 None\n cpd30174 None\n cpd30175 None\n cpd30176 None\n cpd30177 None\n cpd30178 None\n cpd30179 None\n cpd30180 None\n cpd30181 None\n cpd30191 None\n cpd30192 None\n cpd30199 None\n cpd30201 None\n cpd30225 None\n cpd30227 None\n cpd30234 None\n cpd30235 None\n cpd30236 None\n cpd30237 None\n cpd30238 None\n cpd30239 None\n cpd30240 None\n cpd30241 None\n cpd30242 None\n cpd30243 None\n cpd30244 None\n cpd30245 None\n cpd30246 None\n cpd30247 None\n cpd30248 None\n cpd30249 None\n cpd30250 None\n cpd30251 None\n cpd30252 None\n cpd30253 None\n cpd30254 None\n cpd30255 None\n cpd30256 None\n cpd30257 None\n cpd30258 None\n cpd30259 None\n cpd30260 None\n cpd30261 None\n cpd30262 None\n cpd30263 None\n cpd30264 None\n cpd30265 None\n cpd30266 None\n cpd30267 None\n cpd30268 None\n cpd30269 None\n cpd30270 None\n cpd30271 None\n cpd30272 None\n cpd30273 None\n cpd30274 None\n cpd30275 None\n cpd30276 None\n cpd30277 None\n cpd30278 None\n cpd30279 None\n cpd30280 None\n cpd30288 None\n cpd30289 None\n cpd30290 None\n cpd30291 None\n 
cpd30292 None\n cpd30293 None\n cpd30294 None\n cpd30295 None\n cpd30296 None\n cpd30297 None\n cpd30298 None\n cpd30299 None\n cpd30300 None\n cpd30301 None\n cpd30310 None\n cpd30311 None\n cpd30328 None\n cpd30329 None\n cpd30347 None\n cpd30349 None\n cpd30350 None\n cpd30354 None\n cpd30359 None\n cpd30360 None\n cpd30361 None\n cpd30362 None\n cpd30369 None\n cpd30370 None\n cpd30371 None\n cpd30372 None\n cpd30373 None\n cpd30374 None\n cpd30375 None\n cpd30376 None\n cpd30377 None\n cpd30378 None\n cpd30379 None\n cpd30380 None\n cpd30381 None\n cpd30382 None\n cpd30383 None\n cpd30384 None\n cpd30385 None\n cpd30386 None\n cpd30387 None\n cpd30403 None\n cpd30409 None\n cpd30412 None\n cpd30415 None\n cpd30416 None\n cpd30418 None\n cpd30419 None\n cpd30422 None\n cpd30424 None\n cpd30425 None\n cpd30426 None\n cpd30427 None\n cpd30429 None\n cpd30430 None\n cpd30432 None\n cpd30433 None\n cpd30434 None\n cpd30435 None\n cpd30437 None\n cpd30438 None\n cpd30440 None\n cpd30442 None\n cpd30443 None\n cpd30444 None\n cpd30445 None\n cpd30446 None\n cpd30447 None\n cpd30449 None\n cpd30457 None\n cpd30459 None\n cpd30461 None\n cpd30463 None\n cpd30465 None\n cpd30467 None\n cpd30469 None\n cpd30471 None\n cpd30473 None\n cpd30475 None\n cpd30477 None\n cpd30479 None\n cpd30481 None\n cpd30483 None\n cpd30486 None\n cpd30487 None\n cpd30490 None\n cpd30491 None\n cpd30492 None\n cpd30493 None\n cpd30494 None\n cpd30495 None\n cpd30498 None\n cpd30502 None\n cpd30503 None\n cpd30504 None\n cpd30505 None\n cpd30506 None\n cpd30507 None\n cpd30508 None\n cpd30509 None\n cpd30510 None\n cpd30517 None\n cpd30519 None\n cpd30530 None\n cpd30553 None\n cpd30554 None\n cpd30556 None\n cpd30558 None\n cpd30559 None\n cpd30563 None\n cpd30564 None\n cpd30565 None\n cpd30566 None\n cpd30567 None\n cpd30568 None\n cpd30569 None\n cpd30570 None\n cpd30571 None\n cpd30572 None\n cpd30573 None\n cpd30574 None\n cpd30575 None\n cpd30576 None\n cpd30577 None\n cpd30578 None\n cpd30579 None\n cpd30580 None\n cpd30581 None\n cpd30582 None\n cpd30583 None\n cpd30584 None\n cpd30585 None\n cpd30586 None\n cpd30587 None\n cpd30588 None\n cpd30589 None\n cpd30590 None\n cpd30591 None\n cpd30592 None\n cpd30593 None\n cpd30594 None\n cpd30595 None\n cpd30596 None\n cpd30597 None\n cpd30598 None\n cpd30599 None\n cpd30600 None\n cpd30601 None\n cpd30602 None\n cpd30603 None\n cpd30604 None\n cpd30613 None\n cpd30614 None\n cpd30623 None\n cpd30624 None\n cpd30625 None\n cpd30626 None\n cpd30627 None\n cpd30633 None\n cpd30634 None\n cpd30636 None\n cpd30639 None\n cpd30640 None\n cpd30642 None\n cpd30644 None\n cpd30645 None\n cpd30646 None\n cpd30647 None\n cpd30648 None\n cpd30649 None\n cpd30650 None\n cpd30651 None\n cpd30652 None\n cpd30653 None\n cpd30654 None\n cpd30655 None\n cpd30656 None\n cpd30657 None\n cpd30660 None\n cpd30661 None\n cpd30662 None\n cpd30663 None\n cpd30664 None\n cpd30666 None\n cpd30667 None\n cpd30668 None\n cpd30669 None\n cpd30670 None\n cpd30672 None\n cpd30673 None\n cpd30674 None\n cpd30675 None\n cpd30676 None\n cpd30677 None\n cpd30680 None\n cpd30681 None\n cpd30683 None\n cpd30684 None\n cpd30686 None\n cpd30687 None\n cpd30689 None\n cpd30690 None\n cpd30691 None\n cpd30692 None\n cpd30694 None\n cpd30696 None\n cpd30698 None\n cpd30700 None\n cpd30702 None\n cpd30703 None\n cpd30707 None\n cpd30708 None\n cpd30709 None\n cpd30710 None\n cpd30711 None\n cpd30712 None\n cpd30713 None\n cpd30714 None\n cpd30716 None\n cpd30717 None\n cpd30719 None\n cpd30720 None\n 
cpd30722 None\n cpd30723 None\n cpd30725 None\n cpd30726 None\n cpd30727 None\n cpd30740 R2S\n cpd30750 None\n cpd30762 None\n cpd30763 None\n cpd30784 None\n cpd30785 None\n cpd30786 None\n cpd30787 None\n cpd30788 None\n cpd30789 C9H20N2OR2\n cpd30790 None\n cpd30791 None\n cpd30792 None\n cpd30793 None\n cpd30795 None\n cpd30796 None\n cpd31000 Z\n cpd31062 C6H7O4RS\n cpd31063 C6H9O4RS\n cpd31064 C6H7O3RS\n cpd31065 C6H9O3RS\n cpd31066 C8H11O4RS\n cpd31067 C8H13O4RS\n cpd31068 C8H11O3RS\n cpd31109 C14H18NO16R3S2\n cpd31110 C22H31N5O20P3R2\n cpd31111 C20H25N4O20P3R2\n cpd31113 C20H25N4O20P3R2\n cpd31115 C30H24O12R2\n cpd31117 C26H26O14R2\n cpd31121 C27H28O15R2\n cpd31122 C36H34O16R2\n cpd31123 C42H44O21R2\n cpd31124 C32H36O19R2\n cpd31202 C24H40N7O19P3R2\n cpd31228 None\n cpd31254 C28H38N6O22P3R2\n cpd31256 C27H35N6O22P3R2\n cpd31259 None\n cpd31261 C29H41N6O22P3R2\n cpd31262 C31H41N6O24P3R2\n cpd31263 C28H37N6O23P3R2\n cpd31264 C31H41N6O25P3R2\n cpd31271 C24H35N7O17P3RS\n cpd31280 C23H28N5O20P3R2\n cpd31299 C11H18N2O8R2\n cpd31300 C19H31N3O13R2\n cpd31304 C7H13N3O4RS\n cpd31305 C22H25N6O11PR2\n cpd31307 C17H29N2O12RS\n cpd31308 C27H44N4O18R2\n cpd31309 C4H4NO3R2\n cpd31320 C25H32N6O23P3R2\n cpd31321 C26H34N6O23P3R2S\n cpd31322 C26H36N5O19P3R2S\n cpd31323 C25H32N6O23P3R2S\n cpd31324 C25H34N5O19P3R2S\n cpd31325 C5H5N2O4R2S\n cpd31326 C6H7N2O4R2S\n cpd31353 C12H21ORS\n cpd31355 C22H30N5O19P3R2\n cpd31369 C9H10ClNO2RS\n cpd31382 None\n cpd31383 None\n cpd31384 None\n cpd31388 None\n cpd31392 C20H16O9R2\n cpd31441 C129H201N14O47P2R3\n cpd31457 CH3O4PR2\n cpd31472 None\n cpd31499 None\n cpd31568 C19H23N5O17P3R2\n cpd31582 C19H23N5O17P3R2\n cpd31673 C25H40N2O19R\n cpd31703 None\n cpd31791 None\n cpd31822 None\n cpd31837 None\n cpd31844 C55H93O4PR\n cpd31863 None\n cpd31897 None\n cpd31941 None\n cpd31960 C18H22N6O16P3R2\n cpd31999 None\n cpd32027 None\n cpd32070 C59H91N10O34R2\n cpd32110 C14H26N2O10R\n cpd32111 None\n cpd32144 C36H54N7O20R\n cpd32177 None\n cpd32217 C126H196N13O46P2R3\n cpd32267 None\n cpd32279 None\n cpd32321 None\n cpd32331 None\n cpd32335 None\n cpd32356 None\n cpd32401 None\n cpd32429 None\n cpd32432 None\n cpd32445 C20H35OR\n cpd32459 None\n cpd32462 C141H233NO78P2R\n cpd32553 None\n cpd32554 None\n cpd32557 None\n cpd32558 None\n cpd32570 C37H56N7O20R\n cpd32585 C48H73CoN11O8R\n cpd32596 None\n cpd32676 None\n cpd32692 None\n cpd32705 None\n cpd32746 None\n cpd32832 None\n cpd32905 C141H232NO81P3R\n cpd32923 None\n cpd32924 C91H144N7O27P2R\n cpd33001 C19H23N5O17P3R2\n cpd33072 None\n cpd33087 None\n cpd33098 None\n cpd33106 C18H22N6O16P3R2\n cpd33158 C37H56N7O20R\n cpd33208 None\n cpd33223 None\n cpd33281 None\n cpd33287 None\n cpd33343 None\n cpd33363 None\n cpd33380 None\n cpd33486 None\n cpd33538 C38H60N7O20R\n cpd33596 None\n cpd33638 C35H50N3O18PR2S\n cpd33639 None\n cpd33656 None\n cpd33682 None\n cpd33683 C71H106N13O39R3\n cpd33691 None\n cpd33862 None\n cpd33875 C19H23N5O17P3R2\n cpd33885 None\n cpd33900 C27H39N4O4R\n cpd33942 C23H35N7O15P2R2\n cpd33952 C15H19N5O10P2R2S\n cpd33957 C19H23N5O17P3R2\n cpd33964 None\n cpd33967 None\n cpd33986 None\n cpd33993 None\n cpd34014 None\n cpd34025 C20H33OR\n cpd34149 None\n cpd34181 None\n cpd34189 None\n cpd34197 None\n cpd34253 None\n cpd34256 None\n cpd34276 C40H44N20O27P4S4W\n cpd34287 None\n cpd34300 None\n cpd34397 None\n cpd34430 C40H61N8O21R\n cpd34529 None\n cpd34561 None\n cpd34641 C28H36N4O8R\n cpd34652 CH3O4PR2\n cpd34658 C45H57CoN6O12R\n cpd34664 C20H24N4O18P3R2\n cpd34685 C8H10NO5R4S\n cpd34687 None\n cpd34696 
C22H39N3O15R\n cpd34741 None\n cpd34800 C19H36N8O11R\n cpd34863 C45H57CoN6O12R\n cpd34910 C142H235NO81P3R\n cpd34926 C89H141N6O26P2R\n cpd35018 None\n cpd35052 None\n cpd35077 C36H63N5O24R2\n cpd35083 C74H111N14O40R3\n cpd35091 C25H37N7O18P3RS\n cpd35116 None\n cpd35126 C38H57GlO37P\n cpd35142 C34H51N6O19R\n cpd35215 C18H23N6O16P3R2\n cpd35261 C129H203N14O47P2R\n cpd35318 C48H73CoN11O8R\n cpd35334 None\n cpd35439 C35H48N3O18PR2S\n cpd35462 None\n cpd35500 C25H40N2O19R\n cpd35502 None\n cpd35534 C75H116N14O40R2\n cpd35570 C20H24N4O18P3R2\n cpd35587 None\n cpd35589 C36H64N11O20PR3S\n cpd35645 C74H113N14O40R\n cpd35660 None\n cpd35708 None\n cpd35776 None\n cpd35785 None\n cpd35809 None\n cpd35876 None\n cpd35906 None\n cpd35907 C13H13N4O3R\n cpd35916 None\n cpd35953 H7ClN2OPt\n cpd35955 None\n cpd35979 C18H29OR\n cpd36000 None\n cpd36008 None\n cpd36052 C25H44N6O12PR2S\n cpd36053 CH2ORS\n cpd36054 C20H29OR\n cpd36055 C5H8O7PR2\n cpd36056 C21H36O21PR\n cpd36057 C7H4O2R\n cpd36058 C20H36NO3R\n cpd36059 None\n cpd36060 C36H60O31R2\n cpd36061 C14H17NO5R\n cpd36062 C14H18N3O17P3R3\n cpd36063 C32H48N3O10PR2S\n cpd36064 C20H34O8PR\n cpd36065 C5H13NO4PR\n cpd36066 C15HO2R11\n cpd36068 C30H56O8PR2\n cpd36069 C31H50N2O23R\n cpd36070 C68H104N3O64P3R4\n cpd36071 C11H18N2O6R2\n cpd36072 C4H5N2O3R3\n cpd36073 C26H50O3R2\n cpd36074 C6H7N2O5R3\n cpd36075 C17H32N4O9PR2S\n cpd36076 C32H31O17R8\n cpd36077 C23H36N4O10PR2S\n cpd36078 C9H14NO9RS2\n cpd36079 C6H10NO3R2S\n cpd36080 C32H40N3O15PR2S\n cpd36081 C10H14N3O11P2R2S\n cpd36082 C6H13NOR\n cpd36083 C59H102N3O33R\n cpd36084 C72H117N5O42R\n cpd36085 C12H20N3O4RS\n cpd36086 C10H17N2O4RS\n cpd36087 C8H10N4O5R5\n cpd36088 C6H15N4OR\n cpd36089 C36H42N5O7R2S\n cpd36090 C30H52N3O13PR2S\n cpd36091 C32H54NO24R\n cpd36092 C20H35N6O8R\n cpd36093 C40H68N3O16PR2S\n cpd36094 C10H19N5O4R\n cpd36095 C15H17N6O14P2R2\n cpd36096 C9H18N2O3R2\n cpd36097 C6H14N2O2R2\n cpd36098 C17H30N3O10PR2S\n cpd36099 C36H55O34PR3\n cpd36100 C32H54NO24R\n cpd36101 C3H7NOR\n cpd36102 C72H138N3O10PR2S\n cpd36103 C36H60N3O26R\n cpd36104 C19H29R\n cpd36105 None\n cpd36106 C26H50O2R\n cpd36107 C15H19N5O15P3R2\n cpd36108 C5H8NO2R\n cpd36109 C3H4N2O2R2S\n cpd36110 C34H57N2O24R\n cpd36111 C11H22N2O2RS\n cpd36112 None\n cpd36113 C62H103N4O47R2\n cpd36114 C15H3O2R9\n cpd36115 C5H8O2R\n cpd36116 C12H20N2O5R2\n cpd36117 C25H49N8O10PR2S\n cpd36118 C3H4NO5PR3\n cpd36119 C4H8NORS2\n cpd36120 C11H24N3O2RS\n cpd36121 C11H14N5O6R5S\n cpd36122 C16H30O7PR\n cpd36123 C19H29N4O9PR2S\n cpd36124 C36H42N5O7R2S\n cpd36125 C25H49NO8PR\n cpd36126 C35H60N3O34P3R3\n cpd36127 C20H40N7O9PR2S\n cpd36128 C31H52N3O15PR2S\n cpd36129 C92H146N7O27P2R\n cpd36130 C15H2O3R10\n cpd36131 C36H42N5O7R2S\n cpd36132 C10H11N2O2R2S\n cpd36133 C13H15O9R\n cpd36134 C5H6O5PR2\n cpd36135 C62H96N9O16PR2S\n cpd36136 C12H14N2O13P2R2S\n cpd36137 C17H30N3O9PR2S\n cpd36139 C40H61N2O33R2\n cpd36140 None\n cpd36141 None\n cpd36142 C5H10NOR\n cpd36143 C61H101N4O34R\n cpd36144 C3H7NO2R\n cpd36145 C17H20N6O13P2R2\n cpd36146 None\n cpd36147 C17H17O2R\n cpd36148 C15H4O3R10\n cpd36149 C4H6N2O3R3\n cpd36150 C51H98N3O9PR2S\n cpd36151 C12H21N2O3RS\n cpd36152 C50H83N4O37R2\n cpd36153 C15H6O5R6\n cpd36154 C44H84N3O9PR2S\n cpd36155 C28H47N2O20R\n cpd36156 C20H14N3O3R4\n cpd36157 C11H20NO2R2S\n cpd36159 C23H30N5O9PR2S\n cpd36160 C8H17N3O3R\n cpd36161 C51H89N2O28R\n cpd36162 C22H33N2O17R\n cpd36163 C2H3NO2R2\n cpd36164 C19H34NO3R\n cpd36165 C48H73N3O38R3\n cpd36166 C5H11NOR\n cpd36167 C22H33N4O11PR2S\n cpd36168 C21H16O10R6\n cpd36169 C7H16N4OR2\n cpd36170 C16H15O2R\n 
cpd36171 C33H51N7O17P3RS\n cpd36172 C131H237O16R5\n cpd36173 C17H27N3O10PR3S\n cpd36174 C5H10N2O2R\n cpd36175 C35H49N8O27P3R2S\n cpd36176 C19H21N6O8PR2\n cpd36177 C3H3NO2R2\n cpd36178 C3H4NO5PR3\n cpd36179 C15H5O2R9\n cpd36180 C19H36N3O8PR2S\n cpd36181 C17H27N2O12PR2S\n cpd36182 C11H19N2O4R\n cpd36183 C16H31OR\n cpd36184 C133H214N16O45P2R2\n cpd36185 C27H42N3O10PR2S\n cpd36186 C17H27N2O9PR6\n cpd36187 C56H95N3O31R\n cpd36188 C4H8NO2RS\n cpd36189 C3H7NO2R\n cpd36190 C25H40N3O20PR3\n cpd36191 C22H39O5R\n cpd36192 C78H108CoN21O18PR2\n cpd36193 C48H73N3O38R3\n cpd36194 C33H43N4O16PR2S\n cpd36195 C4H6N2O3R3\n cpd36196 C4H7N2O3RS\n cpd36197 C23H40N3O10PR2S\n cpd36198 C49H80O41R2\n cpd36199 C29H40N7O12PR2S\n cpd36200 C72H101CoN18O17PR\n cpd36201 C16H31N8O4R4\n cpd36202 C25H47N2O8PRS\n cpd36203 C17H29N4O10PR2S\n cpd36204 C133H215N16O45P2R\n cpd36205 C12H13O4R\n cpd36206 None\n cpd36207 C22H33OR\n cpd36208 C111H169N4O51P2R\n cpd36210 C27H54O3R2\n cpd36211 C7H8O8PR\n cpd36212 C56H93N4O42R2\n cpd36213 C10H17NO8R2\n cpd36214 C39H61O33R2\n cpd36215 C6H9N2O2R3S3\n cpd36216 C5H4N2O3R4\n cpd36217 C18H27OR\n cpd36218 C32H48N3O10PR2S\n cpd36219 C6H14N4O2R2\n cpd36220 C45H67N6O13PR2S\n cpd36221 C27H22O12R9\n cpd36223 None\n cpd36224 C13H23N2O3RS\n cpd36225 C36H42N5O7R2S\n cpd36226 C52H83N7O23P3RS\n cpd36227 C88H147N3O34P4R\n cpd36228 C20H34NO15R\n cpd36229 None\n cpd36230 C96H143N12O69PR\n cpd36231 C89H147N6O63R2\n cpd36232 C53H94O16R5S\n cpd36233 C15H25NO12R3\n cpd36234 C7H9O4R\n cpd36235 C22H28O4R\n cpd36236 C8H17N5O3R\n cpd36237 C22H36O5R2\n cpd36238 None\n cpd36239 C99H177O18R6\n cpd36240 C36H71O9PR\n cpd36241 C15H16N4O3R3\n cpd36242 C27H40N5O14PR2S\n cpd36243 None\n cpd36244 None\n cpd36245 None\n cpd36246 C14H24N4O3R2\n cpd36247 C35H54N3O10PR2S\n cpd36248 C10H14N3O10P2R2\n cpd36249 C3H6NO2RS\n cpd36251 C20H38NO4R\n cpd36252 None\n cpd36253 C46H84N3O12PR2S\n cpd36254 C62H102N7O42R2\n cpd36255 C4H6N2O2R3\n cpd36256 C18H27OR\n cpd36257 C10H18N4O3R4\n cpd36258 C56H85N8O15PR2S\n cpd36259 C29H50N2O9PRS\n cpd36260 C58H95N10O38P2R2\n cpd36261 C26H39N7O17P3RS\n cpd36262 C60H113N3O9PR3S\n cpd36263 C6H10NO3R\n cpd36264 C10H11N5O10P2R\n cpd36265 C9H9N2O11P2R\n cpd36266 C19H34N4O9PR2S\n cpd36267 C14H15N5O11P2R2\n cpd36268 C21H36N6O11PR2S\n cpd36269 C20H32O20PR2\n cpd36270 C18H30O16R2\n cpd36271 C14H20NO12R2\n cpd36272 C36H59N7O19R\n cpd36273 C17H27N3O9PR3S\n cpd36274 C20H33N5O11PR3S\n cpd36276 None\n cpd36277 C10H14N2O6R\n cpd36278 C87H147N2O40P6R\n cpd36279 C46H88N3O9PR2S\n cpd36280 C7H12NO2R\n cpd36281 C74H140N3O10PR2S\n cpd36282 C54H89N6O37R2\n cpd36283 None\n cpd36284 C24H35N4O13PR2S\n cpd36285 None\n cpd36286 C20H14Ca3N3O3R4\n cpd36287 C20H33N5O10PR2S3\n cpd36288 C11H19N2O4RS\n cpd36289 C17H32N4O9PR2S\n cpd36290 C20H34NO15R\n cpd36291 C9H13N3O3R3\n cpd36292 C8H16N3O3R2\n cpd36293 C31H55N2O8PRS\n cpd36294 FeR\n cpd36295 C21H30N3O10PR2S\n cpd36296 C5H7O9P2R2\n cpd36297 None\n cpd36298 C18H31O15R\n cpd36299 None\n cpd36301 C8H18N4OR2\n cpd36302 C6H8N3O3R4\n cpd36303 C5H8NO3R\n cpd36304 C11H18NO11PR2\n cpd36305 C4H5N2O3R3\n cpd36306 C8H10NO5R3S\n cpd36307 C5H8N2O2R3S\n cpd36308 C56H93N4O42R2\n cpd36309 C30H18O9R9\n cpd36310 C16H30N4O9PR2S\n cpd36311 C30H51O4R2\n cpd36312 C76H144O5R4\n cpd36313 C18H34N4O10PR2S\n cpd36314 None\n cpd36315 C19H34N4O9PR2S\n cpd36316 C2H5NOR\n cpd36317 C4H6N2O2R3\n cpd36318 C6H14N2O2R2\n cpd36319 C65H107N6O45R2\n cpd36320 C16H29OR\n cpd36322 None\n cpd36323 C8H10N4O5R5\n cpd36324 C32H52N4O22R\n cpd36325 None\n cpd36326 C14H20N2O3RS\n cpd36327 C6H8N2O4R2\n cpd36328 C31H50N2O23R\n 
cpd36329 C14H16NO6R\n cpd36331 C26H42O8R2\n cpd36332 C39H66N3O10PR2S\n cpd36333 C47H79N7O17P3RS\n cpd36334 C2H2R2\n cpd36336 C24H38O25R2S2\n cpd36337 None\n cpd36338 C62H103N4O47R2\n cpd36339 None\n cpd36340 C28H50O5R\n cpd36341 C19H29N4O9PR2S\n cpd36342 C3H6NO4RS\n cpd36343 C29H39N6O12PR2S2\n cpd36344 C16H19N6O14P2R2S\n cpd36345 C29H48N2O8PRS\n cpd36347 C10H17N2O4RS\n cpd36348 None\n cpd36349 C19H31O2R\n cpd36350 None\n cpd36351 C4H6N2O2R3\n cpd36352 C62H103N4O47R2\n cpd36353 C16H19N8O7PR2\n cpd36354 C2H5NOR\n cpd36355 None\n cpd36356 C15H19N5O16P3R2\n cpd36357 C27H48N2O8PRS\n cpd36358 C21H36N3O10PR2S\n cpd36359 C22H34N3O15R3\n cpd36360 None\n cpd36361 C27H51O3R\n cpd36362 C2H4NORS\n cpd36363 C10H14N3O11P2R2\n cpd36364 C67H115N4O38R\n cpd36365 C22H32O5R2\n cpd36366 None\n cpd36367 C8H17N5O2R2\n cpd36368 C20H33OR\n cpd36369 C6H7N3O4R4\n cpd36370 C15H2O3R9\n cpd36371 C35H50N2O30R2\n cpd36372 C16H30N4O9PR2S\n cpd36373 C20H31OR\n cpd36374 C28H52O8PR2\n cpd36375 C4H6N2O3R2\n cpd36376 C3H7NO2RS\n cpd36377 C14H20NO12R2\n cpd36378 C39H73N3O9PR3S\n cpd36379 C5H8N2O2R3S\n cpd36380 C12H15N6O8PR\n cpd36381 C5H8NO3R\n cpd36382 None\n cpd36383 C14H19NO4R\n cpd36384 C30H54O8PR2\n cpd36385 C15H6O5R6\n cpd36386 None\n cpd36387 C25H37N5O10PR2S\n cpd36388 C5H7O9P2R2\n cpd36389 C3H5NO4R2S4\n cpd36391 C20H39N2O2R2\n cpd36392 C27H44NO23R3\n cpd36393 C47H73N7O23P3R2S\n cpd36394 C34H56NO35P3R3\n cpd36395 C9H12N3O10P2R2S\n cpd36396 C16H28N4O3RS\n cpd36397 None\n cpd36398 C12H26N5O2R\n cpd36399 C15H16N6O13P2R2\n cpd36400 C15HO2R9\n cpd36401 C20H36O5R2\n cpd36402 C40H66N3O16PR2S\n cpd36403 C19H35OR\n cpd36405 C32H44O27R2\n cpd36406 C20H35N4O12P2R2S\n cpd36407 C4H6O2R\n cpd36408 C27H43N7O12PR2S\n cpd36409 C5H10NO2R2S\n cpd36410 C4H5N2O3R3\n cpd36411 C10H20NO7PR2\n cpd36412 None\n cpd36414 C32H44N3O17PR2S\n cpd36415 C3H12FeN5O2R2S\n cpd36416 C46H75N3O44P3R3\n cpd36417 C48H76N8O14PR2S\n cpd36418 C10H19N4O4R6S2\n cpd36419 C32H53N3O15PR2S\n cpd36420 C5H7O2R\n cpd36421 C25H41N3O17R3\n cpd36422 C26H38N5O6R5S\n cpd36423 C17H29N3O9PR3S\n cpd36424 C30H50O26R2\n cpd36425 C22H34O8PR\n cpd36426 C7H9O5R\n cpd36427 C9H15N2O4RS\n cpd36428 C5H5O8PR3\n cpd36429 C11H13N5O9P2R2\n cpd36430 C39H62N8O19R\n cpd36431 C46H81N7O14PR2S\n cpd36432 C51H98N3O10PR2S\n cpd36433 C11H18N3O4RS\n cpd36434 C15H24R2\n cpd36435 C38H70N3O9PR4S\n cpd36436 C25H49N2O9PRS\n cpd36437 C30H50N3O27P2R3\n cpd36438 C25H37N4O12PR2S\n cpd36440 C19H34N3O9PR2S\n cpd36441 C100H171N19O71P5R\n cpd36442 C5H8NO2R\n cpd36443 C68H107N10O17PR2S\n cpd36444 C4H8N2O2R\n cpd36445 C7H11O4R\n cpd36446 C100H159N7O63R\n cpd36447 C19H28ClN4O9PR2S\n cpd36448 C25H41N3O17R3\n cpd36449 C6H10N2O4R4SSe\n cpd36450 C10H14N3O11P2R2\n cpd36451 C3H7NOR\n cpd36452 C15H3O3R9\n cpd36453 C22H34O5R2\n cpd36454 None\n cpd36455 C7H12N3O3R2\n cpd36456 C4H7O4R\n cpd36457 C36H34O16R2\n cpd36458 C33H47N4O18PR2S\n cpd36459 C5H7O7PR2\n cpd36460 C11H11N2O15P2R2\n cpd36461 C5H10NO4RS\n cpd36462 C15H19N5O16P3R2\n cpd36463 C20H33OR\n cpd36465 C11H13OR\n cpd36466 None\n cpd36467 C3H3O4R2\n cpd36468 C3H7NORS\n cpd36469 C16H19N6O14P2R2\n cpd36470 C40H61N2O33R2\n cpd36471 C16H27N3O9PR3S\n cpd36472 C25H40N2O19R\n cpd36473 C62H102N7O42R2\n cpd36474 C10H17NO8R2\n cpd36475 C27H49O12R\n cpd36476 C23H36N4O10PR2S\n cpd36477 C21H33O2R\n cpd36478 C19H23N5O21P4R4\n cpd36479 C5H10NO3RS\n cpd36481 C10H12N5O7PR\n cpd36482 C12H17N3O15P2R2S\n cpd36483 C45H73N3O33R\n cpd36485 C10H19NO6RS2\n cpd36486 C16H21N3O2RS\n cpd36487 C26H44NO20R\n cpd36488 C28H47N2O20R\n cpd36489 C22H34O8PR\n cpd36490 C18H22N6O13P2R2\n cpd36491 
C22H32N3O11PR2S\n cpd36492 C21H31N4O10PR2S\n cpd36493 C23H39N2O8PRS\n cpd36494 C67H111N4O39R\n cpd36495 C10H15O12R2S\n cpd36496 C7H15N3O3R\n cpd36497 C24H30N4O10PR2S\n cpd36498 C18H29OR\n cpd36499 C95H151N8O28P2R\n cpd36500 C18H33NO14R\n cpd36501 C20H21N7O7R\n cpd36502 C34H32FeN4O6R\n cpd36504 C11H18N2O7R3\n cpd36505 C18H33ClN4O10PR2S\n cpd36506 C24H35O24PR3\n cpd36507 None\n cpd36508 C94H155N8O67R2\n cpd36509 C26H44NO19R\n cpd36510 C70H118N4O41R\n cpd36511 C18H31O2R\n cpd36512 C25H48NO8R\n cpd36513 C32H51O6R\n cpd36514 C6H8N3O4R4\n cpd36515 C19H27OR\n cpd36516 C7H14NO7PR2\n cpd36517 C60H115N3O10PR3S\n cpd36518 None\n cpd36519 C9H19N5O4RS\n cpd36520 None\n cpd36521 C13H13N5O11P2R2\n cpd36522 C86H143N4O67R2\n cpd36523 C62H89CoN13O14PR\n cpd36524 None\n cpd36525 C23H36O20R2\n cpd36526 C22H32N7O18P3RS\n cpd36527 C19H28BrN4O9PR2S\n cpd36528 C10H19N3O3RS\n cpd36529 C8H17N5O2R2\n cpd36530 C19H35OR\n cpd36531 C16H24N7O7PR2\n cpd36532 C20H35N4O11PR2S\n cpd36533 None\n cpd36534 C28H38N3O14PR2S\n cpd36535 C42H70N3O31R\n cpd36536 C16H29NO2R3S\n cpd36537 C16H19N8O8PR2\n cpd36538 C5H8O4R2\n cpd36539 C79H143O6R6\n cpd36540 C19H32NO3R\n cpd36541 C23H36O23PR2\n cpd36542 C19H27Br2N4O9PR2S\n cpd36543 C34H44N3O17PR2S\n cpd36544 C4H5N2O2RS\n cpd36545 C102H167N4O48P5R\n cpd36546 C21H16O8R8\n cpd36548 C31H54N3O13PR2S\n cpd36549 C5H8N2O2R3S\n cpd36550 C25H37N7O18P3RS\n cpd36551 C19H22N8O16P3R2\n cpd36552 C10H11N5O11P2R\n cpd36553 C3H2OR2\n cpd36555 C3H5NO2R3\n cpd36556 C8H16N5O3R2\n cpd36557 C20H34NO15R\n cpd36558 C10H10N5O10P2R\n cpd36559 C20H14N3OR3\n cpd36560 C16H28N5O6R2\n cpd36561 C20H32N3O9PR2S\n cpd36562 None\n cpd36563 None\n cpd36564 C20H35N3O9PR3S\n cpd36565 C13H16N2O15P2R2\n cpd36566 C10H20N2O2RS\n cpd36567 C49H69N8O15PR2S\n cpd36568 None\n cpd36569 C4H5N2O3R3\n cpd36570 C3H5NO4R2S3\n cpd36571 C6H14N2O2R\n cpd36572 C29H46O25R2\n cpd36573 C39H69N3O12PR3S\n cpd36574 C6H7N2O5R3\n cpd36575 C13H22N2O5R2\n cpd36576 C30H51O3R\n cpd36577 C25H37N5O9PR2S\n cpd36578 C24H35N4O11PR2S2\n cpd36579 C62H103N4O47R2\n cpd36580 None\n cpd36581 C21H16O8R8\n cpd36582 None\n cpd36583 C9H12N3O4R3\n cpd36584 C8H14N3O4R2\n cpd36585 C25H44N3O10PR2S\n cpd36586 C56H92N8O17PR2S\n cpd36587 C31H54N3O14PR2S\n cpd36588 C20H33R\n cpd36589 None\n cpd36590 C12H15N5O9P2R2\n cpd36591 C12H27N4O2R2\n cpd36592 C8H18N4OR2\n cpd36593 C22H32N7O18P3RS\n cpd36594 C6H9O2R\n cpd36595 C6HOR5\n cpd36596 C20H40NO4R\n cpd36597 C129H212NO67P2R\n cpd36598 C17H21O4R\n cpd36599 C25H41N3O17R3\n cpd36600 C5H7NO3R2S\n cpd36602 C25H41O6R\n cpd36603 C14H23N4O7RS2\n cpd36604 C30H45O29PR3\n cpd36605 C74H123N4O57R2\n cpd36606 C8H11N4O3R\n cpd36607 C4H8N2O2R\n cpd36608 C37H62N2O36P3R3\n cpd36609 C89H143N6O55R\n cpd36610 C15H16N2O10PR2S2\n cpd36611 C9H18N2O3RS\n cpd36612 C36H60O9R2\n cpd36613 None\n cpd36614 C12H16Fe4N4O4R8S8\n cpd36615 C2H3NO2R2\n cpd36616 C19H34N4O9PR2S\n cpd36617 C9H16NO10RS2\n cpd36618 C5H9N2O2R2S\n cpd36619 C24H39O20R\n cpd36620 C22H36O8PR\n cpd36621 C16H21N2O4RS\n cpd36622 C29H51N6O14P2R2S\n cpd36623 C6H8O5RS\n cpd36624 None\n cpd36625 C10H12N2O11P2R2\n cpd36626 C19H27Cl2N4O9PR2S\n cpd36627 C8H15N2O2R2\n cpd36628 C26H47O10R\n cpd36629 C98H175O18R6\n cpd36630 C3H6O3R2\n cpd36631 C20H36N3O10PR2S\n cpd36632 C4H5N2O3R3\n cpd36634 C32H56N3O9PR2S\n cpd36635 C18H30N3O11PR2S\n cpd36636 C18H22N3O3RS\n cpd36637 C14H15N5O11P2R2\n cpd36638 C15H19N5O16P3R2\n cpd36639 C17H19O5R\n cpd36640 C20H38N4O9PR2S\n cpd36641 C12H18N2O5RS\n cpd36642 C3H2O2R2\n cpd36643 C62H89CoN13O14PR\n cpd36644 C10H13N2O7R\n cpd36645 None\n cpd36646 None\n cpd36647 C23H33N4O12PR2S\n 
cpd36648 C66H108N10O21PR2S\n cpd36649 C78H108CoN21O18PR2\n cpd36650 C3H3O3R2\n cpd36651 C19H31O3RS\n cpd36652 C8H11N2O6PR3\n cpd36653 C6H15N2OR\n cpd36654 C26H44NO20R\n cpd36655 None\n cpd36656 C36H61O8R\n cpd36657 C39H63N3O29R\n cpd36658 C24H40O21R2\n cpd36659 C15H18N5O17P3R3\n cpd36660 C13H23N2O3RS\n cpd36661 C4H5O3RS\n cpd36662 C21H30N3O10PR2S\n cpd36663 C16H27OR\n cpd36664 C3H6NOR2S3\n cpd36665 C26H20O11R9\n cpd36666 C48H74N3O38R2\n cpd36667 C100H179O18R6\n cpd36668 C17H32N4O9PR2S\n cpd36669 C33H55NO27R3\n cpd36670 C7H8O7PR\n cpd36671 C27H49N3O11PR3S\n cpd36672 C16H29OR\n cpd36673 C33H41N4O16PR2S\n cpd36674 None\n cpd36675 C5H11NO2RS\n cpd36676 C8H17N3O2R2\n cpd36677 C161H285N14O79P8R\n cpd36678 C36H42N5O7R2S\n cpd36679 C20H14N3O3R3\n cpd36680 C22H33OR\n cpd36681 C27H30N9O9R5\n cpd36682 C21H33OR\n cpd36683 C36H60O31R2\n cpd36684 C39H66N3O10PR2S\n cpd36685 C116H187N10O83R2\n cpd36686 C20H34N3O9PR2S\n cpd36687 C9H12N3O7PR\n cpd36688 C5H8O6PR\n cpd36689 C10H17ORS\n cpd36690 C6H8N3O4R4\n cpd36691 C15H2O3R9\n cpd36692 C34H39O33R2\n cpd36693 None\n cpd36694 C23H33N4O12PR2S\n cpd36696 C92H153N4O72R2\n cpd36697 C3H13FeN5O3R2S\n cpd36698 C7H6O7PR\n cpd36699 C37H44N12O23P3R\n cpd36700 C36H42N5O7R2S\n cpd36701 C78H130O66R2\n cpd36703 None\n cpd36704 C30H38N3O13PR2S\n cpd36705 C20H19N7O6R\n cpd36706 C43H71O6PR\n cpd36707 C6H4N5O2R\n cpd36708 C8H16N2O2RS2\n cpd36709 C74H118N11O19PR2S\n cpd36710 None\n cpd36712 C25H47N2O8PRS\n cpd36713 C18H34N4O9PR2S\n cpd36714 C18H22N6O20P4R4\n cpd36716 C18H33OR\n cpd36717 C36H61O8R\n cpd36718 C8H16N2O2RS\n cpd36719 None\n cpd36720 C68H113N4O52R2\n cpd36721 C15H29N2O2R2\n cpd36722 C24H40O21R\n cpd36723 C4H6O7PR2\n cpd36724 C22H38N5O12PR2S\n cpd36726 C18H29OR\n cpd36727 C123H223N16O84P6R3\n cpd36728 C22H40O8PR\n cpd36729 None\n cpd36730 C70H115N8O47R2\n cpd36731 C17H29N4O11PR2S\n cpd36732 C57H90N3O54P3R3\n cpd36733 C20H31N5O10PR2S3\n cpd36734 C9H10N3O10P2R\n cpd36735 C31H55N2O8PRS\n cpd36736 C12H23NO10R\n cpd36737 C37H62N3O10PR2S\n cpd36738 C18H31OR\n cpd36739 C62H89CoN13O14PR\n cpd36740 C5H8N2O2R3S\n cpd36741 C5H8O15P4R2\n cpd36742 C28H47N2O21R\n cpd36743 C5H8N2O2R3S\n cpd36744 C10H16O9R2\n cpd36745 C19H29N4O9PR2S\n cpd36746 C15H3O3R9\n cpd36747 C9H17N4O3R2\n cpd36748 C33H45N4O18PR2S\n cpd36749 C36H42N5O7R2S\n cpd36750 C10H17N2O4R\n cpd36751 C28H50O8PR2\n cpd36753 None\n cpd36754 C16H25OR\n cpd36755 C4H6N2O2R3\n cpd36756 None\n cpd36757 C6H10NO3R2\n cpd36758 C15H24N3O2R\n cpd36759 C74H123N4O57R2\n cpd36760 C87H141N12O66P4R\n cpd36761 None\n cpd36762 None\n cpd36763 C4H5NO6PR2\n cpd36764 C4H5NO6PR2\n cpd36765 None\n cpd36767 C78H128N9O52R2\n cpd36768 C9H18N4O3R2\n cpd36769 C20H38N4O9PR2S\n cpd36770 C27H49N3O10PR3S\n cpd36771 C24H42N3O10PR2S\n cpd36772 C19H25N6O13P2R2\n cpd36773 C5H9N2O3R\n cpd36774 C24H42N3O10PR2S\n cpd36775 C20H36O8PR\n cpd36776 C51H83N3O37R\n cpd36777 None\n cpd36778 C71H117N11O51P3R\n cpd36779 C16H28O10R2\n cpd36780 None\n cpd36781 None\n cpd36782 C10H14O15P3R3\n cpd36783 C21H24N7O6R3\n cpd36784 C20H31OR\n cpd36785 None\n cpd36786 C20H34O5R2\n cpd36787 C13H25N3O3RS\n cpd36788 C67H111N4O39R\n cpd36789 Ca3R\n cpd36790 C27H46N2O9PRS\n cpd36791 C7H12NO6RS\n cpd36792 C40H64O33R2\n cpd36793 C95H160N3O45P6R\n cpd36795 C4H6NO2R\n cpd36796 C5H8NO3R\n cpd36797 C5H7O10P2R2\n cpd36798 C19H36NO4R\n cpd36800 C11H21N5O4R\n cpd36801 C9H13NO10PR2\n cpd36802 C25H41N3O17R3\n cpd36803 C9H17N3O4RS2\n cpd36804 None\n cpd36805 C28H44N3O10PR2S\n cpd36806 C6H7N2O5R3\n cpd36807 C6H8N3O4R3\n cpd36808 C6H8O5RS\n cpd36809 C20H21N7O6R\n cpd36810 C10H13N2O10PR\n cpd36811 
None\n cpd36812 C106H179N5O51P6R\n cpd36813 C21H36N6O12PR2S\n cpd36814 C92H165O14R6\n cpd36815 C23H36N4O11PR2S\n cpd36816 None\n cpd36817 C31H55N2O9PRS\n cpd36818 None\n cpd36819 C14H29N5O4R2\n cpd36820 C5H8N2O3R2\n cpd36821 C68H113N4O52R2\n cpd36822 None\n cpd36823 C13H15N5O5R\n cpd36824 C19H19N7O5R\n cpd36825 C23H44O2R\n cpd36826 C5H8N2O2R3S\n cpd36827 C22H38O5R2\n cpd36829 C7H10N2O4R2\n cpd36830 C10H12N2O11P2R2S\n cpd36831 C14H20N2O2RS\n cpd36832 C15H24N5O2R\n cpd36834 C34H57N2O24R\n cpd36835 None\n cpd36836 C53H96O13R2\n cpd36837 None\n cpd36838 C21H25N5O10PR2\n cpd36839 C46H76N5O32R2\n cpd36840 C37H62N3O10PR2S\n cpd36841 C6H7N3O4R4\n cpd36842 C20H30O3R\n cpd36843 C24H40O21R2\n cpd36844 C12H14N2O15P2R2\n cpd36845 C10H12N5O14P3R\n cpd36846 C12H24N4O4R8S4\n cpd36847 C26H48O3R2\n cpd36848 C21H34NO18R3\n cpd36849 C14H25N7O10P2R2\n cpd36852 C23H33N5O25P5R3\n cpd36853 C18H27OR\n cpd36855 C8H17N5O3R\n cpd36856 C73H138N3O10PR2S\n cpd36857 C15H29N2O2R2\n cpd36858 None\n cpd36859 None\n cpd36860 C19H33N4O11PR2S\n cpd36861 None\n cpd36862 C3H6NO3RS\n cpd36863 C11H14N2O11P2R2\n cpd36864 C31H52N3O14PR2S\n cpd36865 C30H52N3O14PR2S\n cpd36866 None\n cpd36867 C14H17N2O18P3R3\n cpd36868 C18H34N4O10PR2S\n cpd36869 C3H5NO4R2S3\n cpd36870 C11H16N2O5RS\n cpd36871 C24H39O20R\n cpd36872 C19H29N4O9PR2S\n cpd36873 C23H42N3O10PR2S\n cpd36874 C16H24N7O8PR2\n cpd36875 C3H5O4R2S\n cpd36876 C12H17N3O14P2R2S2\n cpd36877 C24H34N2O27R2S2\n cpd36878 C30H51O4R\n cpd36879 None\n cpd36880 C18H27OR\n cpd36881 C42H44O21R2\n cpd36882 C36H67N3O9PR3S\n cpd36884 C38H69N3O9PR3S\n cpd36885 C13H27R\n cpd36886 None\n cpd36887 C19H34N4O9PR2S\n cpd36888 C5H7O10P2R2\n cpd36889 C9H13N3O3R3\n cpd36890 C19H36NO4R\n cpd36891 C19H14Ca3N3OR4\n cpd36892 C12H26N3O2R\n cpd36893 None\n cpd36894 C25H37N4O13PR2S\n cpd36895 None\n cpd36896 C30H50O26R2\n cpd36897 C7H10N2O3R2\n cpd36898 C5H7O9P2R2\n cpd36899 C26H45N6O14P2R2S\n cpd36900 None\n cpd36901 C13H16NO3R\n cpd36902 C6H8N2O4R\n cpd36903 C51H86N3O13PR2S\n cpd36904 C10H11N5O11P2R2\n cpd36905 C5H8N2O2R3S\n cpd36906 C9H14N4O2R4S\n cpd36907 C5H7NO3R2\n cpd36908 C40H72N3O9PR6S\n cpd36909 C26H45O10R\n cpd36911 C19H14N3OR3\n cpd36912 C23H30N10O19P4R\n cpd36913 C25H40N4O11PR2S\n cpd36914 C22H31N5O25P5R3\n cpd36915 C14H24NO11R\n cpd36916 C6H7N3O4R4\n cpd36917 C32H51N4O21R\n cpd36918 C18H25O19PR3\n cpd36919 C26H47N4O11PR2S\n cpd36920 C30H49O4R2\n cpd36921 C84H141N5O51R\n cpd36922 C24H33N4O11PR2S2\n cpd36923 C14H29N7O4R2\n cpd36924 C7H14N2O2RS\n cpd36925 C33H39N4O14PR2S\n cpd36926 C6H8N3O4R4\n cpd36927 C23H32O7PR3\n cpd36928 C25H35Cl2N4O12PR2S\n cpd36929 C105H186O18R13S\n cpd36930 C20H37OR\n cpd36931 C6H8N2O4R4SSe\n cpd36932 C26H39N7O18P3RS\n cpd36934 C4H8O6PR\n cpd36935 C34H46N3O18PR2S\n cpd36936 C25H50N2O9PRS\n cpd36937 None\n cpd36938 C14H19N4O4R4\n cpd36939 C3HO3R2\n cpd36940 C62H90CoN13O14PR2\n cpd36941 None\n cpd36942 C17H28N2O11PR2S\n cpd36943 C17H31N4O5RS2\n cpd36944 C3H5NO4R2S3\n cpd36945 C18H33O2R\n cpd36946 C30H37N12O22P3R2\n cpd36947 C11H13N5O9P2R2\n cpd36948 C15H18N3O3R2\n cpd36949 C54H104N3O10PR2S\n cpd36950 C11H19N3O3R4\n cpd36951 C34H48O38R3S3\n cpd36952 C24H35N4O12PR2S2\n cpd36953 C8H14NO2RS\n cpd36954 C7H16N2OR2\n cpd36956 C23H35ClN4O11PR2S\n cpd36957 C9H10N2O11P2R2S\n cpd36958 C22H31OR\n cpd36959 C30H53O4R2\n cpd36960 C20H14N3OR4\n cpd36961 C13H18NO3R\n cpd36962 None\n cpd36963 C31H55O4R2\n cpd36964 C23H35ClN4O10PR2S\n cpd36965 C19H29N4O9PR2S\n cpd36966 C4H5NO4R\n cpd36967 C3H5NO2R3\n cpd36968 None\n cpd36969 C30H50O26R2\n cpd36970 C9H9NO6R4S\n cpd36971 C27H46N2O8PRS\n cpd36972 
C23H36N5O8PR\n cpd36973 C16H29OR\n cpd36974 None\n cpd36975 C26H46N3O11PR2S\n cpd36976 C41H74N6O12PR2S\n cpd36977 C20H20N7O5R\n cpd36978 C33H53N8O8R4S2\n cpd36979 C17H32N4O9PR2S\n cpd36980 C4H6N2O2R3\n cpd36981 C80H133N4O62R2\n cpd36982 C25H42N2O8PRS\n cpd36983 C36H60O8R2\n cpd36984 C23H47R\n cpd36985 C16H30NO2R\n cpd36986 C19H33N4O11PR2S\n cpd36988 C9H16O11PR2\n cpd36989 C42H70O36R2\n cpd36990 C9H10N2O13P2R2\n cpd36991 C36H60O31R2\n cpd36992 None\n cpd36993 C39H45N13O22P3R\n cpd36994 None\n cpd36995 C28H38N3O14PR2S\n cpd36996 C21H13O9R6\n cpd36997 C39H51N8O12PR2S\n cpd36998 C23H40N5O13P2R2S\n cpd36999 C3H5NO3R2S2\n cpd37000 C17H32N4O10PR2S\n cpd37001 C9H10N2O12P2R\n cpd37002 C7H8N2O3R2\n cpd37003 C9H19N5O5RS\n cpd37004 C18H22N6O14P2R2\n cpd37005 None\n cpd37006 C20H14N3OR4\n cpd37007 C48H88N3O12PR2S\n cpd37008 C21H30N3O10PR2S\n cpd37009 C10H18NO9RS3\n cpd37010 C6H11N2O2R3S2\n cpd37011 None\n cpd37012 C4H7OR\n cpd37013 C36H61O9R\n cpd37014 C5H7O6PR2\n cpd37015 C29H49N2O9PRS\n cpd37016 None\n cpd37017 C68H113N4O52R2\n cpd37018 C6H6O4RS\n cpd37019 C22H28N10O19P4R\n cpd37020 C10H18N3O5R\n cpd37021 None\n cpd37022 C3H6NO3R\n cpd37023 C62H105N3O36R\n cpd37024 C10H10N5O9P2R\n cpd37025 C73H121N11O52P3R\n cpd37026 C23H33N4O11PR2S\n cpd37027 C16H27OR\n cpd37028 C9H15N4O2R2\n cpd37029 C29H48N2O8PRS\n cpd37030 C10H18NO10RS3\n cpd37031 C62H89CoN13O14PR\n cpd37032 C17H28N2O12R3\n cpd37033 C13H24NO4R\n cpd37034 C6H13N4O4PR2\n cpd37035 C15H19N5O17P3R2\n cpd37036 C32H54N8O6R4S2\n cpd37037 C15H18N6O6R\n cpd37038 C24H40O21R2\n cpd37039 C79H131N2O28P4R\n cpd37040 C25H49N2O8PRS\n cpd37041 None\n cpd37042 C80H133N4O62R2\n cpd37043 C26H42N3O10PR2S\n cpd37044 C19H22N8O17P3R2\n cpd37045 C33H52N3O10PR2S\n cpd37046 C10H15NO6R2\n cpd37047 C2H4OR2S2\n cpd37048 C86H155O10R6\n cpd37049 C20H34NO15R\n cpd37050 C22H40O5R2\n cpd37051 C19H26Br3N4O9PR2S\n cpd37052 C21H13O9R6\n cpd37053 C19H17N7O5R\n cpd37054 C7H5O4R\n cpd37055 C17H32N4O9PR2S\n cpd37056 None\n cpd37057 C21H30N3O10PR2S\n cpd37059 C16H30N4O9PR2S\n cpd37060 C19H33NO3R2\n cpd37061 C75H142O5R6\n cpd37062 C55H93O4PR\n cpd37063 C16H31N6O4R4\n cpd37064 C49H82N3O13PR2S\n cpd37065 C4H5N2O3R3\n cpd37066 C40H66N3O17PR2S\n cpd37067 C3H7NO2R\n cpd37068 C5H8N2O2R3S\n cpd37069 C21H35NO17R3\n cpd37070 None\n cpd37071 C43H82N3O9PR2S\n cpd37072 C20H26N5O21P4R3\n cpd37073 C40H68N3O38P3R3\n cpd37074 C42H71O13R\n cpd37075 C7H16N2OR2\n cpd37076 C41H78N3O9PR2S\n cpd37077 None\n cpd37078 C17H29N4O10PR2S\n cpd37079 None\n cpd37080 C20H35O2R\n cpd37081 C77H127N6O55R2\n cpd37082 C26H43O30P3R3\n cpd37084 C35H56O29R2\n cpd37085 C21H26N6O15P2R2\n cpd37086 C31H49N2O26RS\n cpd37087 C7H11N2O2R2\n cpd37088 C15H18N5O16P3R3\n cpd37089 C26H46NO24PR2\n cpd37090 C11H22N2O2RS\n cpd37091 C36H42N5O7R2S\n cpd37092 None\n cpd37093 C37H57N7O21R2\n cpd37094 C18H31O2R\n cpd37095 C6H16Fe8N2O2R4S10\n cpd37096 C17H32N4O9PR2S\n cpd37097 C19H14Ca3N3OR4\n cpd37098 C42H65O39PR3\n cpd37099 C48H80O41R2\n cpd37100 C17H32N4O10PR2S\n cpd37101 C47H81N7O17P3RS\n cpd37102 C23H33O4R3\n cpd37103 C34H50N7O21R2\n cpd37104 C28H40N3O15PR2S\n cpd37105 C14H19NO5R\n cpd37106 C36H42N5O7R2S\n cpd37107 None\n cpd37108 C56H95N3O31R\n cpd37109 C25H45NO11RS\n cpd37110 C7H11O4R\n cpd37111 C17H26O15R2\n cpd37113 C14H24N2O5R2\n cpd37114 C4H5N2O3R3\n cpd37115 C19H14N3OR3\n cpd37116 C20H24N8O16P3R2\n cpd37117 C20H24N4O22P4R4\n cpd37118 C20H37O2R\n cpd37119 C26H41O10R\n cpd37120 C73H125N4O43R\n cpd37121 C7H10N2O5R2\n cpd37122 C32H50N3O10PR2S\n cpd37123 C4H5OR\n cpd37124 C46H66N4O34R2\n cpd37125 C3H7NOR\n cpd37126 C60H99N9O19PR2S\n cpd37127 
C9H12N3O4R3\n cpd37129 C21H26N6O16P2R2\n cpd37130 C26H44NO19R\n cpd37131 C25H35N4O12PR2S\n cpd37132 C29H40N2O25R2\n cpd37133 C85H153O10R6\n cpd37134 None\n cpd37135 C9H12N2O10P2R4\n cpd37136 C30H61OR\n cpd37137 C48H75O44PR3\n cpd37138 C51H83N3O48P3R3\n cpd37139 C22H33N4O10PR2S\n cpd37140 C54H104N3O9PR2S\n cpd37141 C36H70O2R2\n cpd37142 C50H74N7O14PR2S\n cpd37143 C18H29OR\n cpd37144 None\n cpd37145 None\n cpd37146 C5H6O9P2R2\n cpd37147 None\n cpd37148 C167H297N16O81P8R\n cpd37149 C78H127N5O47R\n cpd37150 C22H32O8PR\n cpd37151 C25H43N6O13PR2S\n cpd37152 C16H19N8O8PR2\n cpd37153 C30H35O31R2\n cpd37154 C40H75N3O9PR3S\n cpd37155 C51H84N5O36R2\n cpd37156 C5H8NO2RS\n cpd37157 C30H44O41P8R8\n cpd37158 C17H28N2O12R3\n cpd37159 C8H12N2O3R3\n cpd37160 C15H5O2R9\n cpd37161 C24H40O21R2\n cpd37162 C22H33N4O10PR2S\n cpd37163 C4H9NORS\n cpd37164 C7H7O4R\n cpd37165 C5H9N2O2R2\n cpd37166 C105H171N9O75R2\n cpd37167 C31H52O4R\n cpd37168 C145H259N12O69P8R\n cpd37169 C20H36N4O9PR2S\n cpd37170 C32H39N12O24P3R\n cpd37171 C5H8N2O2R3S\n cpd37172 None\n cpd37173 C10H14N5O4R\n cpd37174 C19H25N6O14P2R2\n cpd37175 C5H9O3R\n cpd37176 C3H5NO2R3\n cpd37177 C17H30N2O11PR2S\n cpd37178 C19H36NO2R2S\n cpd37179 C56H93N4O42R2\n cpd37180 C5H9N2O3R\n cpd37181 C8H16N2O3RS\n cpd37182 C3H4NO5PR3\n cpd37183 C15H16N2O11PR2S\n cpd37184 C3H7NOR\n cpd37185 C10H17N2O3RS\n cpd37186 C32H44O27R2\n cpd37187 C10H12N2O11P2R2\n cpd37188 C15H4O2R10\n cpd37189 C36H62N2O16R\n cpd37190 C48H74N3O38R2\n cpd37191 C27H45NO22R3\n cpd37192 C16H21N2O3RS\n cpd37193 C30H37N12O22P3R2\n cpd37194 C26H43O10R\n cpd37195 C30H60O4PR\n cpd37196 C26H52O3R2\n cpd37197 C9H11N3O11P2R\n cpd37198 C85H141N2O33P4R\n cpd37199 C57H99N2O32R\n cpd37200 HO10P3R\n cpd37201 C19H31N3O12R3\n cpd37202 C7H12NO2RS\n cpd37203 C27H48N2O9PRS\n cpd37204 C31H52O8R\n cpd37205 C5H7OR\n cpd37206 C27H46N2O8PRS\n cpd37207 C43H87O6PR\n cpd37208 C27H48O15RS\n cpd37209 C3H5NO2R3\n cpd37210 C19H23N5O21P4R4\n cpd37212 C35H56N5O12PR2S\n cpd37213 C6H12N2O2R4S2\n cpd37214 C22H38O8PR\n cpd37215 C4H3O2R3\n cpd37216 C42H71O13R\n cpd37217 C186H310O155R2\n cpd37218 C6H10N2O5R3S3\n cpd37219 C18H29OR\n cpd37220 C5H8NO2R\n cpd37221 C5H6O2R2\n cpd37222 C6H10O2R\n cpd37223 C20H23N7O5R\n cpd37224 C62H88CoN13O14PR\n cpd37225 C63H104N5O46R2\n cpd37226 C38H71N3O10PR3S\n cpd37227 C20H31OR\n cpd37228 C12H11N5O9P2R2\n cpd37229 C4H9NO2R\n cpd37230 C19H29OR\n cpd37231 C103H173N4O50P6R\n cpd37232 C6H8N3O4R4\n cpd37233 C19H31OR\n cpd37234 None\n cpd37235 C17H28N2O12R3\n cpd37236 None\n cpd37237 C31H49N6O13PR2S\n cpd37238 C18H25OR\n cpd37239 C37H67N5O10PR2S\n cpd37240 C15H17N5O11P2R2\n cpd37241 C39H64O33R2\n cpd37242 C25H35N4O11PR2S\n cpd37243 C18H21O19R2\n cpd37244 CH2O4RS\n cpd37245 C17H29N4O9PR2S2\n cpd37246 C36H42N5O7R2S\n cpd37248 C40H68N3O17PR2S\n cpd37249 C28H40N3O15PR2S\n cpd37250 C36H42N5O7R2S\n cpd37251 C32H55N3O16PR2S\n cpd37252 C18H31O2R\n cpd37253 None\n cpd37254 C101H181O18R6\n cpd37255 C51H91N7O16P3RS\n cpd37256 C22H34O5R2\n cpd37257 C9H17N3O3RS\n cpd37258 C16H24N7O7PR2\n cpd37259 C22H35OR\n cpd37260 C19H33OR\n cpd37261 C26H24O11R8\n cpd37262 C97H165N18O70P5R\n cpd37263 C56H95N3O31R\n cpd37264 C73H140N3O10PR2S\n cpd37265 C28H52N2O25P2R2\n cpd37266 C31H54N2O9PRS\n cpd37267 C15H19N5O15P3R2\n cpd37268 C18H34N4O10PR2S\n cpd37269 C11H19N3O4R3\n cpd37290 C5H11NORS\n cpd37291 C6H10NO2RS\n cpd37292 C6H9N3O5R2\n\n\n\n```python\nlen(cpdDict)\n```\n\n\n\n\n 26051\n\n\n\n\n```python\ncpdDict.get('cpd00783', None)\n```\n\n\n```python\nM['list_of_reactions'][891]\n```\n\n\n\n\n {'id': 'rxn00910',\n 'reactants': ['cpd00006', 
'cpd00345'],\n 'products': ['cpd00005', 'cpd00125'],\n 'genes': [],\n 'enzymes': ['1.5.1.20']}\n\n\n\n\n```python\ndef get_delta(tuple1, tuple2):\n # tuple1,2 are (mass, formula)\n # get diff of mass and formulas. tuple2 as from products.\n # return (mdiff, formulaDiff, tuple1[1], tuple2[1])\n \n F1, F2 = tuple1[1], tuple2[1]\n mdiff = tuple2[0] - tuple1[0]\n if tuple1[0] <= tuple2[0]:\n F2, F1 = tuple1[1], tuple2[1]\n \n # F1 is the larger\n F1dict, F2dict = parse_chemformula_dict(F1), parse_chemformula_dict(F2)\n # invert F2 and calculate differential formula\n for k,v in F2dict.items():\n F2dict[k] = -v\n formulaDiff = add_formula_dict( F1dict, F2dict )\n if formulaDiff:\n formulaDiff = dict_to_hill_formula(formulaDiff)\n \n return (mdiff, formulaDiff, tuple1[1], tuple2[1])\n \n```\n\n\n```python\n# get useful rxns\n# signature_mass_diff is the mass shift btw all products and all reactants\ngood = []\nfor R in M['list_of_reactions']:\n if R['products'] and R['reactants']:\n for C1 in R['reactants']:\n for C2 in R['products']:\n if C1 != C2:\n D1 = cpdDict.get(C1, None)\n D2 = cpdDict.get(C2, None)\n if D1 and D2: \n diff = get_delta((D1['neutral_mono_mass'], D1['neutral_formula']),\n (D2['neutral_mono_mass'], D2['neutral_formula'])\n )\n good.append((R['id'], diff))\n R['signature_mass_diff'] = diff[0]\n R['signature_formula_diff'] = diff[1]\n \nprint(len(good))\n```\n\n 132622\n\n\n\n```python\ngood[40:60]\n```\n\n\n\n\n [('rxn00019', (-42.04695, 'C3H6', 'C3H7NO2', 'HNO2')),\n ('rxn00019', (-31.005813000000003, 'HNO', 'C3H7NO2', 'C3H6O')),\n ('rxn00020', (162.052823, 'C6H10O5', 'H2O', 'C6H12O6')),\n ('rxn00020', (-162.05282400000002, 'C6H10O5', 'C12H22O11', 'C6H12O6')),\n ('rxn00021', (-106.041865, 'C7H6O', 'C14H12O2', 'C7H6O')),\n ('rxn00022', (162.052823, 'C6H10O5', 'H2O', 'C6H12O6')),\n ('rxn00022', (-162.05282400000002, 'C6H10O5', 'C12H22O11', 'C6H12O6')),\n ('rxn00023', (111.928886, 'O3S2', 'H2O3S2', 'H2O6S4')),\n ('rxn00024', (165.078979, 'C9H11NO2', 'O2', 'C9H11NO4')),\n ('rxn00024', (15.994914999999992, 'O', 'C9H11NO3', 'C9H11NO4')),\n ('rxn00025', (252.22418700000003, None, 'O2', 'C20H28O')),\n ('rxn00025', (-252.22418600000003, None, 'C40H56', 'C20H28O')),\n ('rxn00026', (2.015650000000001, 'H2', 'O2', 'H2O2')),\n ('rxn00026', (230.09900300000004, None, 'O2', 'C12H14N4OS')),\n ('rxn00026', (-230.099003, None, 'C12H16N4OS', 'H2O2')),\n ('rxn00026', (-2.0156499999999937, 'H2', 'C12H16N4OS', 'C12H14N4OS')),\n ('rxn00027', (-284.113724, 'C9H21N2O6P', 'C18H34N4O12P2', 'C9H13N2O6P')),\n ('rxn00028', (-13.979264, None, 'O2', 'H2O')),\n ('rxn00028', (142.026609, 'C6H6O4', 'O2', 'C6H6O6')),\n ('rxn00028', (-158.021523, 'C6H6O5', 'C6H8O6', 'H2O'))]\n\n\n\n\n```python\ndef add_formula_dict2(dict1, dict2):\n '''\n Addition of two formulae as dictionaries.\n This allows calculating formula after a reaction, as dict2 can contain substraction of elements.\n Not as good as using real chemical structures, just a simple approximation.\n '''\n new, result = {}, {}\n for k in set(dict1.keys()).union( set(dict2.keys()) ):\n if k in dict1 and k in dict2:\n new[k] = dict1[k] + dict2[k]\n elif k in dict1:\n new[k] = dict1[k]\n else:\n new[k] = dict2[k]\n for k,v in new.items():\n if v != 0:\n result[k] = v\n return result\n```\n\n\n```python\ngood2 = []\nfor line in good:\n rid, DD = line\n if DD[0] != 0:\n F1, F2 = DD[2:4]\n F1dict, F2dict = parse_chemformula_dict(F1), parse_chemformula_dict(F2)\n for k,v in F2dict.items():\n F2dict[k] = -v\n formula_d = add_formula_dict2(F1dict, F2dict)\n 
good2.append(\n [rid] + [x for x in DD] + [str(formula_d)]\n )\n \nprint(len(good2))\n```\n\n 131266\n\n\n\n```python\ngood2[88:93]\n```\n\n\n\n\n [['rxn00038',\n 86.036779,\n 'C4H6O2',\n 'H2O',\n 'C4H8O3',\n \"{'C': -4, 'O': -2, 'H': -6}\"],\n ['rxn00038',\n -86.03678000000001,\n 'C4H6O2',\n 'C8H14O5',\n 'C4H8O3',\n \"{'C': 4, 'O': 2, 'H': 6}\"],\n ['rxn00039',\n -152.01095899999999,\n 'C7H4O4',\n 'C13H16O10',\n 'C6H12O6',\n \"{'C': 7, 'O': 4, 'H': 4}\"],\n ['rxn00039',\n 152.01095800000002,\n 'C7H4O4',\n 'C13H16O10',\n 'C20H20O14',\n \"{'C': -7, 'O': -4, 'H': -4}\"],\n ['rxn00040',\n 152.01095800000002,\n 'C7H4O4',\n 'H2O',\n 'C7H6O5',\n \"{'C': -7, 'O': -4, 'H': -4}\"]]\n\n\n\n\n```python\n# combine +/- values\nall_mds = [str(round(abs(x[1]),4)) for x in good2]\nprint(len(all_mds), len(set(all_mds)))\n```\n\n 131266 14035\n\n\n\n```python\n# so there's 5556 mz_diffs. because all mass were calculated from same source, bin in 0.0002 okay\n# make a dict of mz_diff to reaction IDs\nmzdiff_dict = {}\nfor k in set(all_mds):\n mzdiff_dict[k] = []\n for x in good2:\n if abs(abs(x[1])-float(k)) < 0.0002:\n mzdiff_dict[k].append(x[0])\n \n```\n\n\n```python\nlist(mzdiff_dict.items())[88:90]\n```\n\n\n\n\n [('1271.3391', ['rxn10531']),\n ('206.0691', ['rxn03362', 'rxn03362', 'rxn45594', 'rxn45594'])]\n\n\n\n\n```python\nfreq = list(zip(mzdiff_dict.keys(), [len(set(v)) for k,v in mzdiff_dict.items()]))\nfreq.sort(reverse=True, key=itemgetter(1))\n```\n\n\n```python\nfrom matplotlib import pyplot as plt\n```\n\n\n```python\nfreq[500]\n```\n\n\n\n\n ('328.2038', 21)\n\n\n\n\n```python\nplt.plot([x[1] for x in freq[10:500]] )\n```\n\n\n```python\nfreq[:40]\n```\n\n\n\n\n [('2.0157', 10614),\n ('2.0156', 10614),\n ('15.9949', 4110),\n ('13.9793', 4075),\n ('79.9665', 2804),\n ('79.9663', 2802),\n ('162.0528', 2463),\n ('727.0805', 1901),\n ('14.0157', 1770),\n ('14.0156', 1770),\n ('711.0856', 1401),\n ('18.0106', 1266),\n ('329.0525', 1162),\n ('159.9327', 1147),\n ('647.1142', 1091),\n ('42.0106', 1025),\n ('0.984', 919),\n ('1.0316', 679),\n ('409.0189', 641),\n ('749.1046', 616),\n ('12.0', 608),\n ('631.1193', 552),\n ('43.9898', 547),\n ('27.9949', 533),\n ('86.0004', 384),\n ('203.0794', 349),\n ('86.0368', 340),\n ('102.0317', 338),\n ('809.1258', 327),\n ('129.0426', 310),\n ('701.1013', 297),\n ('31.9898', 293),\n ('420.0521', 284),\n ('79.9568', 283),\n ('204.1878', 272),\n ('589.172', 271),\n ('132.0423', 260),\n ('68.0626', 258),\n ('177.9432', 248),\n ('22.0241', 244)]\n\n\n\n\n```python\nfreq[199], mzdiff_dict[freq[199][0]]\n```\n\n\n\n\n (('58.0419', 48),\n ['rxn00030',\n 'rxn00671',\n 'rxn00734',\n 'rxn00995',\n 'rxn01131',\n 'rxn01709',\n 'rxn02848',\n 'rxn02848',\n 'rxn03478',\n 'rxn03610',\n 'rxn03979',\n 'rxn04000',\n 'rxn04014',\n 'rxn04015',\n 'rxn04563',\n 'rxn05940',\n 'rxn06859',\n 'rxn08043',\n 'rxn08043',\n 'rxn10589',\n 'rxn11394',\n 'rxn11675',\n 'rxn12090',\n 'rxn12777',\n 'rxn12823',\n 'rxn13006',\n 'rxn13496',\n 'rxn14178',\n 'rxn15094',\n 'rxn15324',\n 'rxn15325',\n 'rxn15734',\n 'rxn17398',\n 'rxn19028',\n 'rxn19030',\n 'rxn19030',\n 'rxn23640',\n 'rxn24440',\n 'rxn25253',\n 'rxn25599',\n 'rxn27665',\n 'rxn27667',\n 'rxn27667',\n 'rxn30333',\n 'rxn30333',\n 'rxn35548',\n 'rxn40600',\n 'rxn43674',\n 'rxn43989',\n 'rxn45568',\n 'rxn45568',\n 'rxn46173',\n 'rxn48129',\n 'rxn48474'])\n\n\n\n\n```python\nfil = [x for x in freq if 2 < float(x[0]) < 100]\nprint(len(fil))\nprint(len([x for x in fil if x[1] > 5]))\n```\n\n 2084\n 474\n\n\n## With 14035 mass diff, 2084 are 
within (2, 100) amu, and 474 of them occur more than 5 times\n\n\n```python\ns = json.JSONEncoder().encode( mzdiff_dict )\nwith open(\"ModelSeed_universal_signatures_dict.json\", \"w\") as O:\n    O.write(s)\n```\n", "meta": {"hexsha": "ffc63289ef508abd779301552cede40a625f9bb1", "size": 272398, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/rxn_mass_signatures-ModelSeed.ipynb", "max_stars_repo_name": "gmhhope/JMS", "max_stars_repo_head_hexsha": "5097c1fa11bf112b71330c878455003a1326f528", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/rxn_mass_signatures-ModelSeed.ipynb", "max_issues_repo_name": "gmhhope/JMS", "max_issues_repo_head_hexsha": "5097c1fa11bf112b71330c878455003a1326f528", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/rxn_mass_signatures-ModelSeed.ipynb", "max_forks_repo_name": "gmhhope/JMS", "max_forks_repo_head_hexsha": "5097c1fa11bf112b71330c878455003a1326f528", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-16T18:58:27.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-16T18:58:27.000Z", "avg_line_length": 30.7100338219, "max_line_length": 10384, "alphanum_fraction": 0.5961864625, "converted": true, "num_tokens": 98116, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5312093733737563, "lm_q2_score": 0.5078118642792044, "lm_q1q2_score": 0.26975442221551515}}
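A minimal usage sketch, not part of the original notebook: the exported `ModelSeed_universal_signatures_dict.json` can be reloaded and queried for candidate reactions that match an observed mass difference. The file name and the 0.0002 amu tolerance come from the cells above; the helper `match_mass_diff` and the example query are hypothetical.

```python
import json

# Reload the signature dictionary written above: keys are absolute mass
# differences rounded to 4 decimals (stored as strings), values are lists
# of ModelSEED reaction IDs that produce that mass shift.
with open("ModelSeed_universal_signatures_dict.json") as f:
    mzdiff_dict = json.load(f)

def match_mass_diff(observed, signature_dict, tol=0.0002):
    """Hypothetical helper: return reaction IDs whose signature mass
    difference lies within `tol` amu of the observed difference."""
    hits = []
    for k, rxns in signature_dict.items():
        if abs(float(k) - abs(observed)) <= tol:
            hits.extend(rxns)
    return sorted(set(hits))

# Example: a hexose-like shift of ~162.0528 amu, one of the most frequent
# signatures in freq[:40] above.
print(match_mass_diff(162.0528, mzdiff_dict)[:10])
```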
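As a sanity check on what a signature value represents, here is a short self-contained calculation (the parser below is a simplified stand-in for the `parse_chemformula_dict` helper used above, which is not shown in this excerpt): the glucose/water pair reproduces the 162.0528 amu glycosyl signature that ranks near the top of `freq[:40]`.

```python
import re

# Monoisotopic masses of a few common elements.
MONO = {"C": 12.0, "H": 1.00782503, "N": 14.0030740, "O": 15.9949146,
        "P": 30.9737615, "S": 31.9720707}

def parse_formula(formula):
    # Simplified stand-in for parse_chemformula_dict; assumes each element
    # appears at most once in the formula string.
    return {el: int(n) if n else 1
            for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula)}

def mono_mass(formula):
    return sum(MONO[el] * n for el, n in parse_formula(formula).items())

# Condensation of glucose into a glycosyl unit releases water:
delta = mono_mass("C6H12O6") - mono_mass("H2O")
print(round(delta, 4))  # ~162.0528 amu, matching the frequent signature above
```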
*(Output: the first 20 rows of the dataset — 20 rows × 22 columns. Each row is one Steam review, here all for The Witcher 3: Wild Hunt, with the app id and name, review id, language, review text, creation and update timestamps, the recommendation flag, helpful/funny vote counts, purchase flags, and the author's Steam id and playtime statistics.)*
\n\n\n\n\n```python\ndataset.columns\n```\n\n\n\n\n Index(['app_id', 'app_name', 'review_id', 'language', 'review',\n 'timestamp_created', 'timestamp_updated', 'recommended',\n 'votes_helpful', 'votes_funny', 'weighted_vote_score', 'comment_count',\n 'steam_purchase', 'received_for_free', 'written_during_early_access',\n 'author.steamid', 'author.num_games_owned', 'author.num_reviews',\n 'author.playtime_forever', 'author.playtime_last_two_weeks',\n 'author.playtime_at_review', 'author.last_played'],\n dtype='object')\n\n\n\n\n```python\ndataset.shape\n```\n\n\n\n\n (21747371, 22)\n\n\n\n\n```python\ndataset.info()\n```\n\n \n Int64Index: 21747371 entries, 0 to 21747375\n Data columns (total 22 columns):\n # Column Dtype \n --- ------ ----- \n 0 app_id int64 \n 1 app_name object \n 2 review_id int64 \n 3 language object \n 4 review object \n 5 timestamp_created datetime64[ns]\n 6 timestamp_updated datetime64[ns]\n 7 recommended bool \n 8 votes_helpful int64 \n 9 votes_funny int64 \n 10 weighted_vote_score float64 \n 11 comment_count int64 \n 12 steam_purchase bool \n 13 received_for_free bool \n 14 written_during_early_access bool \n 15 author.steamid int64 \n 16 author.num_games_owned int64 \n 17 author.num_reviews int64 \n 18 author.playtime_forever float64 \n 19 author.playtime_last_two_weeks float64 \n 20 author.playtime_at_review float64 \n 21 author.last_played datetime64[ns]\n dtypes: bool(4), datetime64[ns](3), float64(4), int64(8), object(3)\n memory usage: 3.2+ GB\n\n\n# RQ1\n\n### Exploratory Data Analysis (EDA)\n\nTo try to better understand our dataset we have made a bunch of plots and tables in which we have tried to catch some information about these reviews received for the applications in Steam.\n\n\n```python\ndataset.describe()\n```\n\n\n\n\n
| | app_id | review_id | votes_helpful | votes_funny | weighted_vote_score | comment_count | author.steamid | author.num_games_owned | author.num_reviews | author.playtime_forever | author.playtime_last_two_weeks | author.playtime_at_review |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| count | 2.174737e+07 | 2.174737e+07 | 2.174737e+07 | 2.174737e+07 | 2.174737e+07 | 2.174737e+07 | 2.174737e+07 | 2.174737e+07 | 2.174737e+07 | 2.174737e+07 | 2.174737e+07 | 2.172169e+07 |
| mean | 3.928181e+05 | 5.187500e+07 | 4.044689e+05 | 1.267917e+05 | 1.654424e-01 | 1.308768e-01 | 7.656120e+16 | 1.011300e+06 | 4.044775e+05 | 1.609105e+04 | 1.555421e+02 | 8.807421e+03 |
| std | 2.480977e+05 | 2.084267e+07 | 1.333741e+09 | 2.333553e+07 | 2.434006e-01 | 2.199398e+00 | 3.179134e+08 | 2.108829e+09 | 1.333741e+09 | 3.743057e+04 | 7.300488e+02 | 2.388553e+04 |
| min | 7.000000e+01 | 4.300000e+01 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 7.656120e+16 | 0.000000e+00 | 1.000000e+00 | 0.000000e+00 | 0.000000e+00 | 1.000000e+00 |
| 25% | 2.427600e+05 | 3.639355e+07 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 7.656120e+16 | 2.200000e+01 | 2.000000e+00 | 1.250000e+03 | 0.000000e+00 | 5.590000e+02 |
| 50% | 3.595500e+05 | 5.384058e+07 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 7.656120e+16 | 6.100000e+01 | 4.000000e+00 | 4.307000e+03 | 0.000000e+00 | 1.881000e+03 |
| 75% | 5.780800e+05 | 6.928793e+07 | 1.000000e+00 | 0.000000e+00 | 4.827586e-01 | 0.000000e+00 | 7.656120e+16 | 1.450000e+02 | 1.000000e+01 | 1.491200e+04 | 0.000000e+00 | 6.823000e+03 |
| max | 1.291340e+06 | 8.521867e+07 | 4.398047e+12 | 4.294967e+09 | 9.959868e-01 | 4.893000e+03 | 7.656120e+16 | 4.398047e+12 | 4.398047e+12 | 3.744943e+06 | 2.703900e+04 | 3.228103e+06 |
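As a side note, `describe()` above only summarizes the numeric columns; the boolean flag columns can be summarized in the same way by passing `include` explicitly. A minimal sketch, assuming the same `dataset` as above:

```python
# Sketch: frequency summary (count, unique, top, freq) of the boolean columns
# recommended, steam_purchase, received_for_free and written_during_early_access.
dataset.describe(include=[bool])
```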
#### Most reviewed applications:
To start our analysis we made a pie chart of the most reviewed applications. We picked the thirty most reviewed games and looked at how the reviews are split between them; the percentages shown in the slices therefore refer not to the total number of reviews, but to the sum of reviews written for these thirty most popular games. We chose thirty to keep the plot readable and because we are mainly interested in the most popular, most talked-about games.

```python
# count the reviews per application and keep the 30 most reviewed ones
a = pd.Series(dataset.groupby("app_name").app_id.count().sort_values(ascending=False).head(30))
plt.rcParams['figure.figsize'] = (10, 10)
plt.pie(a,
        labels = a.index,
        explode = [0.1 for value in range(0, a.index.nunique())],
        shadow = True, autopct = '%.1f%%')
plt.title('Application name', fontsize = 20)
plt.axis('off')
plt.show()
```

#### Correlation matrix:
Then we built a correlation matrix to check whether any of the numeric variables are correlated with each other.

```python
fig, ax = plt.subplots(figsize=(13,13))
sns.heatmap(dataset.corr(), cbar=True, annot = True, cmap='BrBG', linewidths=.3, fmt='.1g')
```

We noticed no particularly strong correlation between columns, except among the ones related to the author's playtime, so we looked at those correlations more closely.

```python
df = pd.DataFrame(dataset, columns=['author.playtime_forever', 'author.playtime_last_two_weeks',
                                    'author.playtime_at_review'])
corrMatrix = df.corr()
sns.heatmap(corrMatrix, annot=True)
plt.show()
```

#### Time and Language:
At this point we want to extract some information about the language of the reviews and the time of day when they were written. We divided the day into three parts: morning (8am-2pm), afternoon (2pm-10pm) and night (10pm-8am). For each part of the day we grouped the reviews by language, counted them and kept the ten most common languages.

In the final barplot, each of these languages has the number of reviews written in each part of the day; we also show the underlying counts in a table.
\n\n\n```python\narr_1 = dataset['timestamp_created'].dt.time\n```\n\n\n```python\ntime_1 = [datetime.strptime('08:00:00', '%H:%M:%S').time(),\n datetime.strptime('13:59:59', '%H:%M:%S').time()]\nindex_1 = [x for x in arr_1.index if (time_1[0] <= arr_1[x] <= time_1[1])]\n```\n\n\n```python\ntime_2 = [datetime.strptime('14:00:00', '%H:%M:%S').time(),\n datetime.strptime('21:59:59', '%H:%M:%S').time()]\nindex_2 = [x for x in arr_1.index if (time_2[0] <= arr_1[x] <= time_2[1])]\n```\n\n\n```python\ntime_3 = [datetime.strptime('22:00:00', '%H:%M:%S').time(),\n datetime.strptime('23:59:59', '%H:%M:%S').time(),\n datetime.strptime('00:00:00', '%H:%M:%S').time(),\n datetime.strptime('07:59:59', '%H:%M:%S').time()]\nindex_3 = [x for x in arr_1.index\n if ((time_3[0] <= arr_1[x] <= time_3[1]) or\n (time_3[2] <= arr_1[x] <= time_3[3]))]\n```\n\n\n```python\n# counting occurrences in the languages\nmat1 = Counter((dataset['language'][index_1]).tolist())\npom1 = Counter((dataset['language'][index_2]).tolist())\nnot1 = Counter((dataset['language'][index_3]).tolist())\n```\n\n\n```python\n# sorting the occurrences\nmat2 = {k: v for k, v in sorted(mat1.items(), key=lambda item: item[1], reverse=True)}\npom2 = {k: v for k, v in sorted(pom1.items(), key=lambda item: item[1], reverse=True)}\nnot2 = {k: v for k, v in sorted(not1.items(), key=lambda item: item[1], reverse=True)}\n```\n\n\n```python\n# taking only the first 10 languages, that happens to be the same for every time slot\nmattina = list(mat2.items())[:10]\npomeriggio = list(pom2.items())[:10]\nnotte = list(not2.items())[:10]\n```\n\n\n```python\n# creating an empty dataframe with timeslots as cols and languages as indexes\ndf = pd.DataFrame(index=list(mat2.keys())[:10], columns=['8am-2pm', '2pm-10pm', '10pm-8am'])\n```\n\n\n```python\n# adding the values in the dataframe\nfor (couple1, couple2, couple3) in zip(mattina, pomeriggio, notte):\n df['8am-2pm'][couple1[0]] = couple1[1]\n df['2pm-10pm'][couple2[0]] = couple2[1]\n df['10pm-8am'][couple3[0]] = couple3[1]\n```\n\n\n```python\ndf.index.name = 'language'\ndf\n```\n\n\n\n\n
| language | 8am-2pm | 2pm-10pm | 10pm-8am |
|---|---|---|---|
| english | 1723581 | 3818056 | 4093800 |
| schinese | 1426672 | 916148 | 1422147 |
| russian | 777649 | 1099378 | 471873 |
| koreana | 238303 | 157217 | 218112 |
| german | 202319 | 418566 | 131711 |
| turkish | 188734 | 327833 | 119301 |
| polish | 148353 | 270651 | 76525 |
| french | 143964 | 281877 | 115910 |
| spanish | 107991 | 345728 | 359601 |
| brazilian | 91844 | 370417 | 375263 |
```python
ax = df.plot(y=["8am-2pm", "2pm-10pm", "10pm-8am"], kind="bar")
ax.set_yscale('log')
ax.set_xlabel('languages')
ax.set_ylabel("number of reviews")
```

In this barplot we can see that, overall, most reviews fall in the afternoon slot, and that, as expected, English is the most used language.

#### Viral Comments:
In this table we look at the ten reviews that received the most comments, since it is interesting to see which reviews attract the most discussion on Steam.

```python
# sort the reviews by number of comments, in descending order
dataset_7 = dataset.sort_values(by=['comment_count'], ascending = False)
dataset_7 = dataset_7.reset_index()
```

```python
dataset_7[["author.steamid", "language", "app_name", "review", "comment_count"]].head(10)
```
| author.steamid | language | app_name | comment_count |
|---|---|---|---|
| 76561198094505831 | schinese | For Honor | 4893 |
| 76561198144481578 | english | No Man's Sky | 1432 |
| 76561198348751745 | schinese | PLAYERUNKNOWN'S BATTLEGROUNDS | 1235 |
| 76561198301678331 | schinese | NieR:Automata™ | 1143 |
| 76561198041275847 | english | Fallout 4 | 1034 |
| 76561197976628085 | english | Fallout 4 | 1026 |
| 76561198253235623 | schinese | Mirror | 929 |
| 76561198819827956 | schinese | Mirror | 880 |
| 76561198078772275 | russian | Middle-earth™: Shadow of War™ | 810 |
| 76561198027414971 | english | Rust | 779 |

*(the `review` text column is truncated in the original output)*
Unfortunately, most of them are not written in English!

#### Games most played:
The dataset stores, for each review, the author's total playtime in that game (`author.playtime_forever`, reported by Steam in minutes). We therefore explored which games are played the most in terms of total playtime, picking the top 20 games as a good trade-off between a readable plot and a meaningful number of games.

```python
# total playtime (in minutes) accumulated by reviewers, per game
dataset_8 = pd.Series(dataset.groupby("app_name")["author.playtime_forever"].sum().sort_values(ascending=False))
ore_di_gioco = dataset_8.values
giochi = dataset_8.index
```

```python
plt.figure(figsize = ((15, 8)))
sns.barplot(x = ore_di_gioco[:20],
            y = giochi[:20], orient = 'h')
plt.title('TOP 20 most played games by total playtime', size = 20)
plt.ylabel('Games', size = 14, style = 'italic')
plt.xlabel('Total playtime (minutes)', size = 14, style = 'italic')
plt.xticks(np.arange(1000000000,60000000000,2000000000))
plt.show()
```

This barplot confirms what we expected: the most played games are often also the most reviewed games that appeared in the pie chart.

#### Active players:
To conclude this first analysis, we tried to identify the reviewers that are most useful for Steam: we selected the ten authors with the largest number of reviews voted helpful and the ten with the largest number of reviews voted funny.

```python
dataset_9 = pd.Series(dataset[(dataset.votes_helpful > 0)].groupby("author.steamid").votes_helpful.count().sort_values(ascending=False))

dataset_10 = pd.Series(dataset[(dataset.votes_funny > 0)].groupby("author.steamid").votes_funny.count().sort_values(ascending=False))
```

```python
pd.concat([dataset_9[:11], dataset_10[:11]], axis=1).reset_index().fillna(0).sort_values(by=['votes_helpful'],ascending=False).reset_index(drop = True)
```
| | author.steamid | votes_helpful | votes_funny |
|---|---|---|---|
| 0 | 76561198315585536 | 132.0 | 131.0 |
| 1 | 76561198192166873 | 105.0 | 66.0 |
| 2 | 76561198045381877 | 103.0 | 0.0 |
| 3 | 76561198239163744 | 97.0 | 83.0 |
| 4 | 76561198027973295 | 89.0 | 0.0 |
| 5 | 76561198005667066 | 80.0 | 55.0 |
| 6 | 76561198011965365 | 79.0 | 0.0 |
| 7 | 76561197969749884 | 77.0 | 0.0 |
| 8 | 76561197970761123 | 76.0 | 0.0 |
| 9 | 76561198094803808 | 73.0 | 0.0 |
| 10 | 76561198327150482 | 73.0 | 0.0 |
| 11 | 76561198040884867 | 0.0 | 63.0 |
| 12 | 76561198053843750 | 0.0 | 65.0 |
| 13 | 76561198008107842 | 0.0 | 56.0 |
| 14 | 76561198003081163 | 0.0 | 64.0 |
| 15 | 76561197987658658 | 0.0 | 53.0 |
| 16 | 76561198375029180 | 0.0 | 55.0 |
| 17 | 76561198393005716 | 0.0 | 52.0 |
The top authors by number of helpful reviews and by number of funny reviews only partially overlap: a few authors appear in both top-ten lists, but most appear in only one of the two.

#### Languages and subplots

```python
print("The total number of languages used to write reviews is ",'\033[1m' +str(len(dataset["language"].unique())) +'\033[0m')
```

    The total number of languages used to write reviews is 28

With a pair of subplots we can visualize all the languages present in the dataset together with the number of reviews written in each. Note that the two subplots use different y-scales (the first one is logarithmic)!

```python
fig=plt.figure(figsize=(25,18))
ax1=fig.add_subplot(2,1,1)
dataset['language'].value_counts().head(10).plot.bar(figsize = (18, 10),title='Top 10 Languages',xlabel='Language',ylabel='Number of Reviews', ax = ax1,rot=0, logy = True, color = "orange")
ax2=fig.add_subplot(2,1,2)
dataset['language'].value_counts().iloc[-18:].plot.bar(figsize = (18, 10),title='Other 18 Languages',xlabel='Language',ylabel='Number of Reviews', ax = ax2,rot=0, color = "orchid")
fig.tight_layout();
```

# RQ2

### Plot the number of reviews for each application in descending order.

We make a barplot of the number of reviews for the 50 most reviewed applications; 50 seemed a good trade-off between a readable plot and covering the most reviewed games.

```python
number_review = dataset.groupby("app_name").review_id.count().sort_values(ascending=False)
number_review[0:50].plot.bar(figsize = (18, 7), title='Number of reviews', xlabel='Name of application',
                             ylabel='Number of reviews', color = "coral", logy = True)
plt.show()
```

```python
# a table view of the review counts for the 50 most reviewed apps
number_review.reset_index().head(50)
```
app_namereview_id
0PLAYERUNKNOWN'S BATTLEGROUNDS1644255
1Grand Theft Auto V1019116
2Tom Clancy's Rainbow Six Siege841918
3Terraria672815
4Garry's Mod655524
5Rust549074
6Rocket League498565
7PAYDAY 2487747
8Among Us485293
9The Witcher 3: Wild Hunt469395
10Dead by Daylight418897
11ARK: Survival Evolved400009
12Euro Truck Simulator 2387553
13Stardew Valley315717
14The Elder Scrolls V: Skyrim294966
15Wallpaper Engine292790
16Monster Hunter: World290946
17Hollow Knight269854
18The Forest239734
19Don't Starve Together238636
20DARK SOULS\u2122 III235422
21Portal 2232329
22Fallout 4228957
23Phasmophobia219090
24Dying Light214431
25Arma 3182348
26No Man's Sky182045
27Tomb Raider181054
28Sid Meier's Civilization V171404
29Subnautica152414
30Doki Doki Literature Club148837
31Cities: Skylines147028
32Undertale144351
33Sid Meier's Civilization VI143789
34The Elder Scrolls V: Skyrim Special Edition142509
35DOOM139973
36The Binding of Isaac: Rebirth124389
37Mount & Blade: Warband123905
38Human: Fall Flat123604
39Sekiro\u2122: Shadows Die Twice119760
40Hades118416
41Counter-Strike: Source118081
42Hearts of Iron IV116255
43Divinity: Original Sin 2113508
44Factorio108282
45BioShock Infinite107277
46DOOM Eternal105196
47Raft97930
48Stellaris94667
49RimWorld94041
\n
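As a quick cross-check of the groupby count above, the same per-application totals can also be obtained directly from the `app_name` column with `value_counts`. A minimal sketch, assuming the same `dataset` as above:

```python
# Sketch: count reviews per application; this should reproduce the ranking in the
# table above (PLAYERUNKNOWN'S BATTLEGROUNDS first, then Grand Theft Auto V, ...).
top_apps = dataset["app_name"].value_counts()
print(top_apps.head(10))
```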
### What applications have the best Weighted Vote Score?

Each review has a **Weighted Vote Score** that represents the helpfulness score of that review. To obtain a weighted vote score for each game we computed the mean of the scores of all its reviews, which gives an idea of which applications received the most helpful reviews. We then kept only the applications whose average score is above 0.3, which we considered a reasonable threshold for the best scores.

```python
# average weighted vote score per application, keeping only averages above 0.3
medie = pd.DataFrame(dataset.groupby("app_name").weighted_vote_score.mean().sort_values(ascending=False))
medie = medie[medie.values > 0.3]
medie
```
weighted_vote_score
app_name
Hunt Down The Freeman0.502150
Urban Empire0.438623
METAL GEAR SURVIVE0.421632
Identity0.415384
Umbrella Corps0.414678
Torment: Tides of Numenera0.411199
BERSERK and the Band of the Hawk0.397092
DRAGON QUEST HEROES\u2122 II0.381110
X Rebirth0.380447
Toukiden 20.380300
Warhammer 40,000: Dawn of War III0.377935
DYNASTY WARRIORS 90.371213
RollerCoaster Tycoon World0.365161
Wolfenstein: Youngblood0.354932
Steel Division: Normandy 440.351986
SENRAN KAGURA Peach Beach Splash0.347711
Clicker Heroes 20.345768
Takedown: Red Sabre0.337137
Secret of Mana0.336822
Bless Online0.331621
Artifact0.323762
Desolate0.322193
SOS0.321467
Call of Duty: Infinite Warfare0.321253
Cube World0.319483
WWE 2K170.318298
Dead Rising 40.315530
The Wild Eight0.314761
Rapture Rejects0.314585
Deus Ex: The Fall0.310173
Down To One0.307944
Total War Saga: Thrones of Britannia0.301253
Devil May Cry HD Collection0.301180
Battle Royale Trainer0.300996
\n
\n\n\n\n### Which applications have the most and the least recommendations\n\nIn this point, we thought that for most and least recommended apps, the percentage values where the ones to be aware of, meaning that an app was the most recommended if it has the higher percentage value of the most recommended reviews\n\n\n```python\n#Most\n# recommended. group_by app_name. count all recommended,\n# count True recommended and False recommended in separate cols, and percentage of these.\n# taking only the useful cols\nnew_data = dataset[['app_name', 'recommended']]\n# count_rec col counts all recommended respectively False and True of an application\nnew_data['count_rec'] = new_data.groupby(['app_name', 'recommended'], sort=False)['recommended'].transform('count')\n```\n\n\n```python\n# all_rec col counts all recommedations, False and True together\nnew_data['all_rec'] = new_data.groupby(\"app_name\", sort=False)['count_rec'].transform('count')\n```\n\n\n```python\n# final dataframe which contains only the True recommendations\n# this means that we can calculate the most and the least recommended apps\nfinal = new_data[(new_data['recommended']==True)].drop_duplicates()\n```\n\n\n```python\n# perc_rec calculates the percentage recommendation\nfinal['perc_rec'] = (final['count_rec']/final['all_rec'])*100\n# drop not useful cols\nfinal.drop(['recommended', 'count_rec'], axis=1, inplace=True)\n```\n\n\n```python\n# most recommended, first 50\nfinal.sort_values(by='perc_rec', ascending=False).reset_index(drop=True).head(50)\n```\n\n\n\n\n
app_nameall_recperc_rec
0ULTRAKILL558499.534384
1Senren\uff0aBanka503499.404052
2A Short Hike584799.144861
3The Henry Stickmin Collection1940099.025773
4Factorio10828298.918564
5Hades11841698.875996
6Portal 223232998.769418
7People Playground2631998.768950
8Townscaper760198.710696
9Half-Life: Alyx5109998.686863
10Don't Escape: 4 Days to Survive101398.519250
11There Is No Game: Wrong Dimension840998.477821
12Grimm's Hollow534598.447147
13RimWorld9404198.300741
14Baba Is You1176298.240095
15DUSK1105098.190045
16One Finger Death Punch1550498.168215
17Wallpaper Engine29279098.167629
18Nova Drift353998.163323
19The Binding of Isaac: Rebirth12438998.134883
20Helltaker8630698.119482
21A Hat in Time3497998.110295
22OneShot1897598.102767
23The Witcher 3: Wild Hunt46939598.052174
24Just Shapes & Beats925698.044512
25Keep Talking and Nobody Explodes881398.014297
26Totally Accurate Battle Simulator5211998.010323
27Terraria67281597.908192
28Slay the Spire8602397.888937
29Stardew Valley31571797.825584
30The Room Two1133697.812279
31Finding Paradise1042997.794611
32VA-11 Hall-A: Cyberpunk Bartender Action2369497.780029
33Celeste3145497.764990
34STEINS;GATE962197.723729
35Phoenix Wright: Ace Attorney Trilogy936097.713675
36The Wolf Among Us2493197.697646
37Mount & Blade: Warband12390597.695815
38The Room1582097.648546
39Phasmophobia21909097.550778
40South Park\u2122: The Stick of Truth\u21224484997.542866
41Euro Truck Simulator 238755397.500471
42Slime Rancher5804997.484883
43Gunpoint1161097.364341
44Hotline Miami6007497.363252
45Mirror6019597.318714
46Dishonored5633797.316151
47BattleBlock Theater5931797.280712
48Persona 4 Golden4018097.262320
49Resident Evil 28265097.219601
\n
We can see that the most recommended apps are not necessarily the ones with the most reviews.

```python
# least recommended, first 50
final.sort_values(by='perc_rec', ascending=True).reset_index(drop=True).head(50)
```
app_nameall_recperc_rec
0Identity180424.334812
1RollerCoaster Tycoon World530424.509804
2SOS690027.840580
3Umbrella Corps250528.223553
4NBA 2K181871829.196495
5DYNASTY WARRIORS 9780731.228385
6Urban Empire230033.000000
7Deus Ex: The Fall361034.432133
8NBA 2K211573535.068319
9Down To One210137.934317
10Hunt Down The Freeman181537.961433
11Bless Online1019737.962146
12Cube World1633338.798751
13NBA 2K192243739.448233
14WWE 2K20363839.527213
15Takedown: Red Sabre550240.239913
16Nether1740641.290360
17Rapture Rejects160341.921397
18X Rebirth700142.422511
19Neon Hardcorps20044.500000
20Wolfenstein: Youngblood744545.520484
21ATLAS3721046.466004
22Warhammer 40,000: Dawn of War III1843546.487659
23Artifact2350947.994385
24Call of Duty: Infinite Warfare2073648.099923
25Stay Out717048.744770
26Dead Rising 4430852.112349
27PLAYERUNKNOWN'S BATTLEGROUNDS164425553.909947
28Just Cause 41811157.594832
29\u4e09\u56fd\u7fa4\u82f1\u4f208 Heroes of the Three Kingdoms 8811157.662434
30Battle Royale Trainer160058.437500
31X-Blades280561.319073
32WWE 2K17149961.507672
33BATTALION 19441600461.622094
34Rules Of Survival490261.648307
35Call of Duty: WWII2761762.262375
36Clicker Heroes 2210064.714286
37METAL GEAR SURVIVE410465.009747
38For Honor9397165.724532
39Total War Saga: Thrones of Britannia1000966.350285
40Secret of Mana190068.842105
41No Man's Sky18204568.943393
42Farm Manager 2018200269.530470
43Torment: Tides of Numenera360169.564010
44PixARK771169.640773
45Desolate550370.470652
46DRAGON QUEST HEROES\u2122 II60070.500000
47WWE 2K19290770.657035
48SCUM4491370.770156
49The Wild Eight560172.344224
\n
\n\n\n\n### How many of these applications were purchased, and how many were given for free?\n\n\n\n\n```python\n# steam_purchase\n# taking only the useful cols\nnew_data1 = dataset[['app_name', 'steam_purchase']]\n```\n\n\n```python\n# same modus operandi of counting recommendation\nnew_data1['count_pur'] = new_data1.groupby(['app_name', 'steam_purchase'], sort=False)['steam_purchase'].transform('count')\n```\n\n\n```python\n# taking only the ones purchased\nfinal1 = new_data1[(new_data1['steam_purchase']==True)].drop_duplicates()\n```\n\n\n```python\n# drop not useful col\nfinal1.drop(['steam_purchase'], axis=1, inplace=True)\n```\n\n\n```python\n# received_for_free\n# taking only the useful cols\nnew_data2 = dataset[['app_name', 'received_for_free']]\n```\n\n\n```python\n# same modus operandi\nnew_data2['count_free'] = new_data2.groupby(['app_name', 'received_for_free'], sort=False)['received_for_free'].transform('count')\n```\n\n\n```python\n# take only the ones received_for_free\nfinal2 = new_data2[(new_data2['received_for_free']==True)].drop_duplicates()\n```\n\n\n```python\n# drop not useful col\nfinal2.drop(['received_for_free'], axis=1, inplace=True)\n```\n\n\n```python\n# now it's time to calculate the final result, by doing a merge of the final dataframes\ndfs = [final, final1, final2]\nfinal_df = reduce(lambda left,right: pd.merge(left,right,on=['app_name'],\n how='outer'), dfs)\n```\n\n\n```python\n# taking the first 40 apps that are most recommended and displaying how many times were\n# purchased and how many times were received for free\nfinal_df.sort_values(by='perc_rec', ascending=False).head(40)\n```\n\n\n\n\n
app_nameall_recperc_reccount_purcount_free
134ULTRAKILL558499.5343845167.0104
139Senren\uff0aBanka503499.4040524635.049
81A Short Hike584799.1448614566.099
82The Henry Stickmin Collection1940099.02577318591.0537
60Factorio10828298.91856486472.01142
103Hades11841698.875996109447.0989
4Portal 223232998.769418182952.05779
108People Playground2631998.76895025021.01094
137Townscaper760198.7106967243.075
314Half-Life: Alyx5109998.68686342639.02701
51Don't Escape: 4 Days to Survive101398.519250771.025
146There Is No Game: Wrong Dimension840998.4778217861.091
174Grimm's Hollow534598.447147NaN589
61RimWorld9404198.30074177383.0940
135Baba Is You1176298.24009510399.0112
141DUSK1105098.1900459888.0244
130One Finger Death Punch1550498.16821511889.0264
65Wallpaper Engine29279098.167629269839.07633
97Nova Drift353998.1633233045.0229
45The Binding of Isaac: Rebirth12438998.134883107803.01518
105Helltaker8630698.119482303.04426
17A Hat in Time3497998.11029527262.0820
123OneShot1897598.10276717356.0418
0The Witcher 3: Wild Hunt46939598.052174429409.05748
158Just Shapes & Beats925698.0445128539.0314
163Keep Talking and Nobody Explodes881398.0142977280.0140
107Totally Accurate Battle Simulator5211998.01032348938.01641
208Terraria67281597.908192527762.020154
63Slay the Spire8602397.88893773074.0724
85Stardew Valley31571797.825584264253.05827
153The Room Two1133697.81227910885.0129
161Finding Paradise1042997.7946119177.0130
124VA-11 Hall-A: Cyberpunk Bartender Action2369497.78002920325.0360
15Celeste3145497.76499028773.0414
165STEINS;GATE962197.7237298837.084
168Phoenix Wright: Ace Attorney Trilogy936097.7136758016.094
126The Wolf Among Us2493197.69764620178.0147
106Mount & Blade: Warband12390597.69581595914.01508
144The Room1582097.64854615012.0171
104Phasmophobia21909097.550778178828.07145
\n
\n\n\n\n\n```python\n# least recommended\nfinal_df.sort_values(by='perc_rec').head(40)\n```\n\n\n\n\n
app_nameall_recperc_reccount_purcount_free
38Identity180424.334812513.040
23RollerCoaster Tycoon World530424.5098042893.061
229SOS690027.8405804799.0301
39Umbrella Corps250528.223553798.0173
24NBA 2K181871829.19649514722.0213
247DYNASTY WARRIORS 9780731.2283855271.0265
288Urban Empire230033.0000001814.023
26Deus Ex: The Fall361034.4321332808.024
25NBA 2K211573535.06831911006.0242
42Down To One210137.9343171633.052
40Hunt Down The Freeman181537.9614331570.0112
221Bless Online1019737.9621467435.0188
30Cube World1633338.7987518281.01045
31NBA 2K192243739.44823315291.0492
41WWE 2K20363839.5272132184.0108
35Takedown: Red Sabre550240.2399133343.064
32Nether1740641.29036014174.085
27Rapture Rejects160341.921397290.0311
5X Rebirth700142.4225114934.035
92Neon Hardcorps20044.50000029.027
33Wolfenstein: Youngblood744545.5204846292.0125
36ATLAS3721046.46600432345.0793
34Warhammer 40,000: Dawn of War III1843546.48765910269.0645
28Artifact2350947.99438519473.0619
29Call of Duty: Infinite Warfare2073648.0999239751.01988
37Stay Out717048.744770NaN1012
307Dead Rising 4430852.1123492140.097
176PLAYERUNKNOWN'S BATTLEGROUNDS164425553.9099471372721.061443
276Just Cause 41811157.59483211634.01184
88\u4e09\u56fd\u7fa4\u82f1\u4f208 Heroes of the Three Kingdoms 8811157.6624347031.065
230Battle Royale Trainer160058.4375001525.049
95X-Blades280561.319073910.0218
298WWE 2K17149961.507672987.039
236BATTALION 19441600461.62209413363.0480
260Rules Of Survival490261.6483074285.0496
206Call of Duty: WWII2761762.26237516999.0697
268Clicker Heroes 2210064.7142861625.030
249METAL GEAR SURVIVE410465.0097472743.079
226For Honor9397165.72453252502.011907
203Total War Saga: Thrones of Britannia1000966.3502858211.095
\n
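The step-by-step construction of `final_df` above (per-column `transform` counts, `drop_duplicates` and an outer merge) can also be written as a single groupby aggregation. A minimal sketch of an essentially equivalent computation, assuming the same `dataset` as above:

```python
# Sketch: per-application summary similar to final_df —
# total reviews, percentage of positive recommendations, and how many reviews
# were flagged as a Steam purchase or as received for free.
summary = (
    dataset.groupby("app_name")
    .agg(
        all_rec=("recommended", "size"),                     # total reviews for the app
        perc_rec=("recommended", lambda s: 100 * s.mean()),  # % of recommended == True
        count_pur=("steam_purchase", "sum"),                 # reviews marked as Steam purchases
        count_free=("received_for_free", "sum"),             # reviews marked as received for free
    )
    .sort_values("perc_rec", ascending=False)
)
summary.head(40)
```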
\n\n\n\n# RQ 3\n\n### What is the most common time that authors review an application? For example, authors usually write a review at 17:44.\n\nFirst of all, we take only the `timestamp_created` col and we convert in `string` the time values. Next, with a simple dictionary and a `for` cycle, we count the occurrences of every single time (HH:MM) and at the end we return only the most common time.\n\n\n```python\n# first point\n# taking only the timestamp_created col\ntimestamp_col = np.array(dataset[\"timestamp_created\"].dt.time.astype('str'))\n```\n\n\n```python\ndict_time = {}\nfor time in timestamp_col:\n # taking only hour and minute\n new_time = time[:5]\n if new_time not in list(dict_time.keys()):\n dict_time[new_time] = 1\n else:\n dict_time[new_time] += 1\n```\n\n\n```python\n# sorting the dictionary in descending order\ndict_time_sorted = {k: v for k, v in sorted(dict_time.items(), key=lambda item: item[1], reverse=True)}\n```\n\n\n```python\n# returning the most common time (without seconds)\nnext(iter(dict_time_sorted))\n```\n\n\n\n\n '14:50'\n\n\n\n### Create a function that receives as a parameter a list of time intervals and returns the plot the number of reviews for each of the intervals.\n\nUsing the function **orario** we can extract for a given list of time interval the number of reviews written in each time interval \n\n\n### Use the function that you created in the previous literal to plot the number of reviews between the following time intervals:\n\n\n```python\nintervalli = ['06:00:00', '10:59:59', '11:00:00', '13:59:59', '14:00:00', '16:59:59',\n '17:00:00', '19:59:59', '20:00:00', '23:59:59', '00:00:00', '02:59:59', '03:00:00',\n '05:59:59']\n```\n\n\n```python\nfunctions.orario(intervalli)\n```\n\nOn the x-axis for each bar is indicated the starting point of the time interval. We have observed that fewer people have written reviews during the night while the majority of people have written their reviews in the first hours of the morning and in the dinner hours\n\n# RQ4\n\n### What are the top 3 languages used to review applications?\n\n\n```python\ntop_languages = pd.DataFrame(dataset.groupby(\"language\").review_id.count().sort_values(ascending=False).head(3))\ntop_languages\n```\n\n\n\n\n
| language | review_id |
|---|---|
| english | 9635437 |
| schinese | 3764967 |
| russian | 2348900 |
\n\n\n\nAs expected the majority of the reviews are written in english, chinese and russian!\n\n\n```python\ntop_languages = list(top_languages.index)\ntop_languages\n```\n\n\n\n\n ['english', 'schinese', 'russian']\n\n\n\n### Create a function that receives as parameters both the name of a data set and a list of languages\u2019 names and returns a data frame filtered only with the reviews written in the provided languages.\n\nThere we have used the function **get_reviews_by_languages** to accomplish a dataframe where there are only reviews written in the top 3 languages\n\n\n```python\ndataset_filter = functions.get_reviews_by_languages(dataset, top_languages)\n```\n\n### Use the function created in the previous literal to find what percentage of these reviews (associated with the top 3 languages) were voted as funny?\n\nFor this request we have used the new filtered dataset and for each language we have selected the reviews that have received at least one funny vote and then we have computed the ratio between them and all the reviews written in that language.\n\nTo compute this percentage we have used **dataset_filter** that is the new dataframe obtained using the previous function **filtro**\n\n\n```python\nnumeratore_1 = []\ndenominatore_1 = []\nrapporto_1 = []\nfor i in range(len(top_languages)):\n numeratore_1.append(dataset_filter.loc[(dataset_filter.votes_funny != 0) & (dataset_filter.language == top_languages[i])].votes_funny.count())\n denominatore_1.append(dataset_filter[dataset_filter.language == top_languages[i]].votes_funny.count())\n rapporto_1.append(round((numeratore_1[i]/denominatore_1[i])*100, 2))\n print(\"The percentage of reviews written in \" + '\\033[1m' + top_languages[i] +'\\033[0m' +\n \" that has received at least a funny vote is \" +\n '\\033[1m' + str(rapporto_1[i]) + \"%\" + '\\033[0m')\n\n```\n\n The percentage of reviews written in \u001b[1menglish\u001b[0m that has received at least a funny vote is \u001b[1m11.27%\u001b[0m\n The percentage of reviews written in \u001b[1mschinese\u001b[0m that has received at least a funny vote is \u001b[1m11.82%\u001b[0m\n The percentage of reviews written in \u001b[1mrussian\u001b[0m that has received at least a funny vote is \u001b[1m16.68%\u001b[0m\n\n\nAt this point we have also wanted to compute the percentage of reviews that have received at least a funny vote among all these three languages. 
\n\n\n```python\n# same as above\nprint(\"The percentage of reviews written in one of the top 3 language that has received at \"\n \"least a funny vote is \" + '\\033[1m' + str(round((sum(numeratore_1)/sum(denominatore_1))*100, 2)) + \"%\" + '\\033[0m')\n```\n\n The percentage of reviews written in one of the top 3 language that has received at least a funny vote is \u001b[1m12.21%\u001b[0m\n\n\n### Use the function created in the literal \u201ca\u201d to find what percentage of these reviews (associated with the top 3 languages) were voted as helpful?\n\nFor this request we have used the new filtered dataset and for each language we have selected the reviews that have received at least one helpful vote and then we have computed the ratio between them and all the reviews written in that language.\n\nTo compute this percentage we have used **dataset_filter** that is the new dataframe obtained using the previous function **filtro**\n\n\n```python\nnumeratore_2 = []\ndenominatore_2 = []\nrapporto_2 = []\nfor i in range(len(top_languages)):\n numeratore_2.append(dataset_filter.loc[(dataset_filter.votes_helpful != 0) & (dataset_filter.language == top_languages[i])].votes_helpful.count())\n denominatore_2.append(dataset_filter[dataset_filter.language == top_languages[i]].votes_helpful.count())\n rapporto_2.append(round((numeratore_2[i]/denominatore_2[i])*100, 2))\n print(\"The percentage of reviews written in \" + '\\033[1m' + top_languages[i] + '\\033[0m' +\n \" that has received at least a helpful vote is \" +\n '\\033[1m' + str(rapporto_2[i]) + \"%\" + '\\033[0m')\n```\n\n The percentage of reviews written in \u001b[1menglish\u001b[0m that has received at least a helpful vote is \u001b[1m29.2%\u001b[0m\n The percentage of reviews written in \u001b[1mschinese\u001b[0m that has received at least a helpful vote is \u001b[1m25.1%\u001b[0m\n The percentage of reviews written in \u001b[1mrussian\u001b[0m that has received at least a helpful vote is \u001b[1m35.5%\u001b[0m\n\n\nAt this point we have also wanted to compute the percentage of reviews that have received at least a helpful vote among all these three languages.\n\n\n```python\n# same as above\nprint(\"The percentage of reviews written in one of the top 3 language that has received at \"\n \"least a helpful vote is \" + '\\033[1m' + str(round((sum(numeratore_2)/sum(denominatore_2))*100, 2)) + \"%\" + '\\033[0m')\n```\n\n The percentage of reviews written in one of the top 3 language that has received at least a helpful vote is \u001b[1m29.16%\u001b[0m\n\n\n# RQ5\n\n### Plot the top 10 most popular reviewers and the number of reviews.\n\n\n```python\nnum_reviewers = dataset['author.steamid'].value_counts().head(10)\n```\n\n\n```python\nnum_reviewers.plot(kind='bar',\n xlabel='TOP 10 reviewers',\n ylabel='number of reviews')\n```\n\n### What applications did the most popular author review?\n\n\nAt first, we took the previous result of the most popular author to leave only the rows of the reviews written by him/her, and then we returned all the applications reviewed by this author.\n\n\n```python\nnum_rev = pd.DataFrame({'reviewers':num_reviewers.index, 'num_reviews':num_reviewers.values})\n```\n\n\n```python\npop_auth = num_rev['reviewers'][0]\n```\n\n\n```python\napps_rev = dataset[dataset['author.steamid'] == pop_auth].app_name\n```\n\n\n```python\napp_name_rev = list(apps_rev.values)\n```\n\n\n```python\napp_name_rev = [el for el, count in Counter(app_name_rev).items()]\n```\n\n\n```python\nprint(app_name_rev)\n```\n\n ['Half-Life', 
'Counter-Strike: Source', 'Half-Life 2: Episode Two', 'Portal 2', \"Garry's Mod\", \"Sid Meier's Civilization V\", 'Dead by Daylight', \"Sid Meier's Civilization VI\", 'Subnautica', 'Human: Fall Flat', 'Banished', 'Celeste', 'Getting Over It with Bennett Foddy', 'A Hat in Time', 'The Forest', 'Axiom Verge', 'The Binding of Isaac: Rebirth', 'To the Moon', 'Cave Story+', 'Titan Souls', 'Super Meat Boy', \"Don't Escape: 4 Days to Survive\", 'Volgarr the Viking', 'Enter the Gungeon', 'Salt and Sanctuary', 'Hollow Knight', 'The End Is Nigh', 'Factorio', 'RimWorld', 'Insurgency: Sandstorm', 'Euro Truck Simulator 2', 'Foundation', 'Kenshi', 'Into the Breach', 'Warhammer: Vermintide 2', 'DOOM Eternal', 'Age of Empires: Definitive Edition', 'Void Bastards', 'Stardew Valley', 'Among Us', 'Blackwake', 'Little Nightmares', 'Bomber Crew', 'Rust', 'HITMAN\u2122 2', 'Phasmophobia', 'Mount & Blade: Warband', 'Resident Evil 2', 'Slime Rancher', 'Hotline Miami', 'Tomb Raider', 'BattleBlock Theater', 'Dishonored', 'South Park\u2122: The Stick of Truth\u2122', 'Undertale', \"Don't Starve\", 'Rocket League', 'Dead Cells', 'Broforce', 'The Wolf Among Us', 'The Walking Dead', 'One Finger Death Punch', 'Oxygen Not Included', 'Cuphead', 'ULTRAKILL', 'Castle Crashers', 'Townscaper', 'Papers, Please', 'GRIS', 'DUSK', 'Outlast', 'FTL: Faster Than Light', 'Dying Light', 'American Truck Simulator', 'Saints Row: The Third', 'STAR WARS\u2122 Empire at War: Gold Pack', 'Age of Empires II (2013)', 'Super Hexagon', 'BioShock Infinite', 'DOOM', 'Black Mesa', 'Finding Paradise', 'Keep Talking and Nobody Explodes', 'Duck Game', 'Mark of the Ninja', 'Phoenix Wright: Ace Attorney Trilogy', 'Gunpoint', \"PLAYERUNKNOWN'S BATTLEGROUNDS\", 'Monster Hunter: World', 'The Elder Scrolls Online', 'Total War: WARHAMMER II', 'Cities: Skylines', 'Stellaris', 'Black Desert Online', 'Kingdom Come: Deliverance', 'Jurassic World Evolution', 'ARK: Survival Evolved', \"No Man's Sky\", 'Frostpunk', 'Fallout 4', 'DARK SOULS\u2122 III', 'Rise of the Tomb Raider', 'Middle-earth\u2122: Shadow of War\u2122', 'Hearts of Iron IV', 'They Are Billions', 'Total War Saga: Thrones of Britannia', 'Total War: ROME II - Emperor Edition', 'Terraria', 'PAYDAY 2', 'XCOM 2', 'Deep Rock Galactic', 'Hunt: Showdown', 'Conan Exiles', 'Two Point Hospital', 'Total War: WARHAMMER', 'The Elder Scrolls V: Skyrim Special Edition', 'NieR:Automata\u2122', 'House Flipper', 'Surviving Mars', 'Ni no Kuni\u2122 II: Revenant Kingdom', 'Railway Empire', 'Rise of Industry', 'Devil May Cry HD Collection', 'Heroes of Hammerwatch', 'Ghost of a Tale', 'Ancestors Legacy', 'FAR: Lone Sails', 'Totally Accurate Battlegrounds', 'Vampyr', 'Yakuza 0', 'Thief Simulator', 'Darksiders III', 'Mutant Year Zero: Road to Eden', 'Just Cause 4', 'Planet Coaster', 'Nioh: Complete Edition', 'Europa Universalis IV', 'Just Cause 3', 'Resident Evil 7 Biohazard', 'Urban Empire', 'Youtubers Life', 'Night in the Woods', 'Northgard', 'Sniper Elite 4', 'Day of Infamy', 'SimAirport', 'Dead Rising 4', 'Styx: Shards of Darkness']\n\n\n### How many applications did he/she purchase, and how many did he/she get as free? 
Provide the number (count) and the percentage.\n\n\n```python\n# taking only the steam_purchase and received_for_free apps of the author\napp_count = dataset[dataset['author.steamid'] == pop_auth][['steam_purchase', 'received_for_free']]\n```\n\n\n```python\n# how many app did the author reviewed\ntot_app_rev = len(app_count.index)\n\n```\n\n\n```python\npurchased = dict(Counter(app_count['steam_purchase']))\nfree_apps = dict(Counter(app_count['received_for_free']))\n```\n\n\n```python\npurchased[True] = [purchased[True], \"{:.2%}\".format(purchased[True]/tot_app_rev)]\npurchased[False] = [purchased[False], \"{:.2%}\".format(purchased[False]/tot_app_rev)]\nfree_apps[True] = [free_apps[True], \"{:.2%}\".format(free_apps[True]/tot_app_rev)]\nfree_apps[False] = [free_apps[False], \"{:.2%}\".format(free_apps[False]/tot_app_rev)]\n```\n\n\n```python\npurch_df = pd.DataFrame(purchased, index=['count', 'Percentage']).T\nfree_df = pd.DataFrame(free_apps, index=['count', 'Percentage']).T\n```\n\n\n```python\npurch_df.index.name = 'App Purchased'\nfree_df.index.name = 'App given Free'\n```\n\n\n```python\npurch_df\n```\n\n\n\n\n
| App Purchased | count | Percentage |
|---|---|---|
| True | 110 | 73.83% |
| False | 39 | 26.17% |
`True` means that the apps were purchased on Steam, `False` that they were not.

```python
free_df
```
| App given Free | count | Percentage |
|---|---|---|
| False | 145 | 97.32% |
| True | 4 | 2.68% |
\n\n\n\n`True` means that the apps were given for free, `False` doesn't.\n\nThere is a significant difference between the purchased and the free apps: the first ones were mostly purchased on Steam, and the latter only 4 apps were given for free, then this means that not every app that the author reviewed was purchased on Steam, because if we assume that all the purchased apps are counted also in the \"not given for free\" ones, then we have 35 apps purchased somewhere else, and counting also the 4 apps given for free, we have all the apps not purchased on Steam, which are 39.\n\n### How many of the applications he/she purchased reviewed positively, and how many negatively? How about the applications he received for free?\n\n\n```python\n# have to use the recommended col\napp_recomm = dataset.loc[(dataset['author.steamid'] == pop_auth) & (dataset['recommended'] == True)][['steam_purchase', 'received_for_free']]\n```\n\n\n```python\npurchased_rec = dict(Counter(app_recomm['steam_purchase']))\nfree_apps_rec = dict(Counter(app_recomm['received_for_free']))\ntot_app_rec = len(app_recomm.index)\n```\n\n\n```python\nprint('{} applications purchased were reviewed positively, and {} were reviewed negatively'\n .format(purchased_rec[True], purchased_rec[False]))\nprint('{} applications given for free were reviewed positively, and {} were reviewed negatively'\n .format(free_apps_rec[True], free_apps_rec[False]))\n```\n\n 108 applications purchased were reviewed positively, and 38 were reviewed negatively\n 4 applications given for free were reviewed positively, and 142 were reviewed negatively\n\n\nComparing these results with the ones in the previous question, we can see that 3 apps were not recommended positively nor negatively, and those are, using the same hypothesis of the previous answer, 2 purchased on Steam and 1 purchased elsewhere. Also we can see that all apps given for free where recommended positively, which means that the author liked playing with them (and we assume that he/she also liked their quality of being \"free\")\n\n# RQ6 \n\n\n### What is the average time (days and minutes) a user lets pass before he updates a review?\n\nJust to start we have computed the difference between the time when the review is written and time when the review is updated and then we have transformed this difference in terms of days\n\n\n```python\ndataset['difference_days'] = (dataset['timestamp_updated'] - dataset['timestamp_created'])\ndataset['difference_days'] = dataset['difference_days']/np.timedelta64(1,'D')\n```\n\nAfter that we have deleted who did not update his review because we have thought that is meaningless consider them. Then we have computed the mean between days and the integer part of this number represents the average number of days after an author updates his review. Instead to transform the decimal part in minutes we have to multiply it for 1440 because in one day there are 1440 minutes. 
We have made a simple proportion: *1 : 1440 = x : (decimal part of our number)*\n\n\n```python\ndataset_1 = dataset[dataset.difference_days != 0]\naverage = dataset_1.difference_days.mean()\nminutes = round((average % 1) * 1440, 0)\ndays = average // 1\nprint(\"The average time a user lets pass before he updates a review is \"+\n '\\033[1m' + str(days) + '\\033[0m' + \" days and \" + '\\033[1m' + str(minutes) + '\\033[0m' + \" minutes\")\n```\n\n The average time a user lets pass before he updates a review is \u001b[1m321.0\u001b[0m days and \u001b[1m46.0\u001b[0m minutes\n\n\nOn average an author updates his review almost after a year! \n\n### Plot the top 3 authors that usually update their reviews.\n\nWe have used the dataframe **dataset_1** in which there are only the reviews that have been updated. We did not use the starting dataset because we have to extract who are the authors that usually update their reviews so authors that have updated more reviews through time.\n\n\n```python\na = pd.Series(dataset_1.groupby('author.steamid').review_id.count().sort_values(ascending=False).head(3))\na\n```\n\n\n\n\n author.steamid\n 76561198192166873 95\n 76561198206999976 61\n 76561198072450805 60\n Name: review_id, dtype: int64\n\n\n\n\n```python\n#bar plot\nplt.figure(figsize=(12, 8))\nax = a.plot(kind=\"bar\", color = [\"orchid\", \"orange\", \"green\"], alpha=0.75, rot=0)\nax.set_title(\"TOP 3 authors that have updated more reviews\")\nax.set_xlabel(\"Steam ID\")\nax.set_ylabel(\"Number of reviews updated\")\n#needed to put values on top of the bar\nfor i, v in enumerate(a.values):\n ax.text(i, v+1, str(v), color='black', fontweight='bold')\n```\n\nWe have put the number of reviews over the bars because the second and the third author have updated almost the same number of reviews.\n\n# RQ7\n\n### What\u2019s the probability that a review has a Weighted Vote Score equal to or bigger than 0.5?\n\nWe have used the definition of probability to compute these values indeed we have count the number of reviews that has a Weighted Vote Score equal to or bigger than 0.5 and this number represents the favourable case (we have stored this number in **casi_fav**)while the number of total case is represented by the number of the lines of our dataset, stored in **casi_tot**. The probability is the ratio between them. \n\n\n```python\n#filter the dataset picking only weighted_vote_score >= 0.5\n#and count the rows of filter dataset\ncasi_fav = dataset[dataset.weighted_vote_score >= 0.5].weighted_vote_score.count()\n```\n\n\n```python\n#number of rows of initial dataset\ncasi_tot = dataset.weighted_vote_score.count()\n```\n\n\n```python\nresult_1 = round(casi_fav/casi_tot, 2)\nprint(\"The probability is of a review has a Weighted Vote Score equal to or bigger than 0.5 is \"+ '\\033[1m' +str(result_1)+'\\033[0m')\n```\n\n The probability is of a review has a Weighted Vote Score equal to or bigger than 0.5 is \u001b[1m0.22\u001b[0m\n\n\n### What\u2019s the probability that a review has at least one vote as funny given that the Weighted Vote Score is bigger than 0.5?\n\nWe want to compute this conditional probability P(B|A) where B is the event: *a review has at least one vote as funny*. 
The sample space is reduced: we filter the dataset so that we look for reviews with at least one funny vote only among the reviews whose Weighted Vote Score is bigger than 0.5.

```python
# new (reduced) sample space: event A, Weighted Vote Score > 0.5
dataset_prob = dataset[dataset.weighted_vote_score > 0.5]
```

```python
# count the reviews with at least one funny vote in the filtered dataset
# (B intersect A)
casi_fav_2 = dataset_prob[dataset_prob.votes_funny != 0].votes_funny.count()
```

```python
# A
casi_tot2 = dataset_prob.weighted_vote_score.count()
# P(B|A)
result_2 = round(casi_fav_2/casi_tot2, 2)
print("The conditional probability that a review has at least one vote as funny given that the Weighted Vote Score is bigger than 0.5 is ",'\033[1m' +str(result_2)+'\033[0m')
```

    The conditional probability that a review has at least one vote as funny given that the Weighted Vote Score is bigger than 0.5 is 0.25

### Is the probability that "a review has at least one vote as funny" independent of the "probability that a review has a Weighted Vote Score equal or bigger than 0.5"?

For these two events to be independent, the probability of the event B (*a review has at least one vote as funny*) would have to be equal to the conditional probability of B given A (*the Weighted Vote Score is equal to or bigger than 0.5*): if P(B|A) = P(B), conditioning on A does not change the probability of B, and the two events are independent.

```python
# P(B|A)
casi_fav_ba = dataset[(dataset.weighted_vote_score >= 0.5) & (dataset.votes_funny != 0)].votes_funny.count()
result_3a = round(casi_fav_ba/casi_fav, 2)
print("The conditional probability that a review has at least one vote as funny given that the Weighted Vote Score is equal or bigger than 0.5 is ",'\033[1m' +str(result_3a)+'\033[0m')
```

    The conditional probability that a review has at least one vote as funny given that the Weighted Vote Score is equal or bigger than 0.5 is 0.25

```python
# count the reviews with at least one funny vote in the full dataset (event B)
casi_fav_3 = dataset[dataset.votes_funny != 0].votes_funny.count()
```

```python
# P(B)
result_3 = round(casi_fav_3/casi_tot,2)
print("The probability that a review has at least one vote as funny is "+ '\033[1m' +str(result_3)+'\033[0m')
```

    The probability that a review has at least one vote as funny is 0.12

Since P(B) = 0.12 while P(B|A) = 0.25, P(B) ≠ P(B|A), so the two events are **dependent**!

# RQ8

### Is there a significant difference in the Weighted Vote Score of reviews made in Chinese vs the ones made in Russian? Use an appropriate statistical test or technique and support your choice.

We'll use a non-parametric test (the two-sample Kolmogorov-Smirnov test) to check whether the two distributions are the same, i.e. come from the same population, since neither distribution is normally distributed.

```python
data_lang = functions.get_reviews_by_languages(dataset,["schinese","russian"])
```

First of all we compare the Chinese and the Russian weighted vote score distributions using histograms.
At first glance there does not seem to be any significant differences between the two distribution. From this plot those 2 distributions seems that distributes equally.\n\n\n```python\nplt.figure(figsize = (10,8))\ndata_lang[data_lang.language == \"schinese\"].weighted_vote_score.plot(kind = \"hist\", label = \"Chinese\",alpha = 0.3)\ndata_lang[data_lang.language == \"russian\"].weighted_vote_score.plot(kind = \"hist\", label = \"Russian\", color = \"orange\",alpha = 0.3)\nplt.legend()\n```\n\nSo we can support the choice with a statistaical test.Let's check with the KS test\n\n\n```python\nk_smir_test = ks_2samp(data_lang[data_lang.language == \"schinese\"].weighted_vote_score,\n data_lang[data_lang.language == \"russian\"].weighted_vote_score)\nif k_smir_test.pvalue <= 0.1:\n print(\"the two distributions are identical.\")\nelse:\n print(f\"the 2 distributions are different with a pvalue of {k_smir_test.pvalue}\")\n```\n\n the two distributions are identical.\n\n\nThe Kolmogorov-Smirnov test is a non-parametric test that checks the shape of sample distributions. It can be used to compare two samples and It does not in itself require any assumptions about the sample distribution, like in our case. The acceptance of the H0 hypothesis predicts that the two distributions belong to the same population.\n\n### Can you find any significant relationship between the time that a user lets pass before he updates the review and the Weighted Vote Score? Use an appropriate statistical test or technique and support your choice.\n\nWe'll discover if there is a relationship into 3 step:\n * plot\n * pearson correlations\n * Linear Regression\n\n\n```python\n# step 1: plot\nplt.figure(figsize = (10,8))\nplt.scatter(dataset.difference_days, dataset.weighted_vote_score)\nprint(\"no relationship visible\")\n```\n\n\n```python\n# step 2: pearson correlation\nprint(pearsonr(dataset.difference_days, dataset.weighted_vote_score))\nprint(\"no relations detected \")\n```\n\n (0.07204700562113138, 0.0)\n no relations detected \n\n\n\n```python\nX = dataset[[\"difference_days\"]]\nX = sm.add_constant(X).values\nmodel = sm.OLS(dataset.weighted_vote_score, X)\nres = model.fit()\n```\n\n\n```python\nres.summary()\n```\n\n\n\n\n\n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n
OLS Regression Results
==============================================================================
Dep. Variable:      weighted_vote_score   R-squared:                     0.005
Model:              OLS                   Adj. R-squared:                0.005
Method:             Least Squares         F-statistic:               1.135e+05
Date:               Sat, 30 Oct 2021      Prob (F-statistic):             0.00
Time:               14:17:34              Log-Likelihood:              -71544.
No. Observations:   21747371              AIC:                       1.431e+05
Df Residuals:       21747369              BIC:                       1.431e+05
Df Model:           1
Covariance Type:    nonrobust
------------------------------------------------------------------------------
                 coef    std err          t      P>|t|      [0.025      0.975]
const          0.1619   5.31e-05   3048.553      0.000       0.162       0.162
x1          9.792e-05   2.91e-07    336.860      0.000    9.74e-05    9.85e-05
------------------------------------------------------------------------------
Omnibus:       11489550.572   Durbin-Watson:            1.722
Prob(Omnibus):        0.000   Jarque-Bera (JB):   3686236.042
Skew:                 0.842   Prob(JB):                  0.00
Kurtosis:             1.891   Cond. No.                  186.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n\n\n\nusing Simple Linear Regression (1 X variable) is the same that using pearsonr because\n$R^{2}Score = (pearsonr)^2 $\n\n\n```python\np = pearsonr(dataset.difference_days, dataset.weighted_vote_score)\nprint(f\"pearsonr {p[0]}\\npearsonr^2 = {p[0]**2} -> same as R-squared detected above\")\n```\n\n pearsonr 0.07204700562113138\n pearsonr^2 = 0.005190771018971337 -> same as R-squared detected above\n\n\nThe second test is linear regression: also in this case there is no evidence that between two variables there is a sort of correlation.\n\n### Is there any change in the relationship of the variables mentioned in the previous literal if you include whether an application is recommended or not in the review? Use an appropriate statistical test or technique and support your choice.\n\njust adding another variable into Linear Regression\n\n\n```python\nX = dataset[[\"difference_days\",\"recommended\",\"weighted_vote_score\"]].astype({\"recommended\":int})\nmodel = smf.ols(\"weighted_vote_score ~ difference_days + C(recommended)\", data=X)\nres = model.fit()\nres.summary()\n```\n\n\n\n\n\n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n
OLS Regression Results
==============================================================================
Dep. Variable:      weighted_vote_score   R-squared:                     0.038
Model:              OLS                   Adj. R-squared:                0.038
Method:             Least Squares         F-statistic:               4.290e+05
Date:               Sat, 30 Oct 2021      Prob (F-statistic):             0.00
Time:               14:18:23              Log-Likelihood:            2.9263e+05
No. Observations:   21747371              AIC:                      -5.853e+05
Df Residuals:       21747368              BIC:                      -5.852e+05
Df Model:           2
Covariance Type:    nonrobust
------------------------------------------------------------------------------
                          coef    std err          t      P>|t|      [0.025      0.975]
Intercept               0.2792      0.000   1923.833      0.000       0.279       0.279
C(recommended)[T.1]    -0.1338      0.000   -865.303      0.000      -0.134      -0.134
difference_days      9.164e-05   2.86e-07    320.477      0.000    9.11e-05    9.22e-05
------------------------------------------------------------------------------
Omnibus:        5161093.731   Durbin-Watson:            1.752
Prob(Omnibus):        0.000   Jarque-Bera (JB):   3402870.313
Skew:                 0.856   Prob(JB):                  0.00
Kurtosis:             2.090   Cond. No.                  744.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n\n\n\nno changes in relationships\n\n### What are histograms, bar plots, scatterplots and pie charts used for?\n\nHistogram: This type of data visualization helps to interpret univariate analysis results. Simply put, it shows where data points are dense and where they are sparse in one dimension. However, instead of comparing the categorical data, it breaks down a numeric data into interval groups and shows the frequency of data fall into each group. Histogram is good at identifying the pattern of data distribution on a numeric spectrum.\n\nBar Chart: Bar chart compares the measure of categorical dimension. Bar chart is very similar to a histogram. The fundamental difference is that the x-axis of bar charts is categorical attribute instead of numeric interval in the histogram. Furthermore, bar chart is not just limited to plot one categorical data. An extension of bar chart, clustered bar chart (or group bar chart) compares two categorical attributes.\n\nScatterplot: It plots one numeric attribute against another numeric attribute and visualizes the correlation between axes. Scatter plot is commonly applied to identify regression type of relationships such as linear regression, logistic regression etc. It also provides a robust analysis of the correlation significance. We can estimate that the correlation relationship is stronger,linearly, when the data points lying on a line with a certaing degree, whereas the relationship is weak if the line is flat.\n\nPiechart: It is used to represent the percentage and weight of components belonging to one categorical attribute. The size of the pie slice is proportional to the percentage, hence it intuitively depicts how much each component occupies the whole.\n\n### What insights can you extract from a Box Plot?\n\nA boxplot shows the distribution of the data with more detailed information. from Box Plot we can \"extract\" information such as outliers, maximum, minimum, first quartile(Q1), third quartile(Q3), interquartile range(IQR), and median. It also gives you the information about the skewness of the data, how tightly closed the data is and the spread of the data.\n\n# TQ1\n## Question 1\nAs known, given a random variable $X$, the Quantile function *Q($\\cdot$)* with support $\\{ p | p \\in [0,1] \\}$ is the function that computes:\n\n\\begin{equation}\nQ(p)=s \\hspace{0.2 cm} |\\hspace{0.2 cm} \\mathcal{P}(X<=s) = p\n\\end{equation}\n\nDenoting with $A_i$ the i-th element of the vector $A$ of length $n$ and given $k \\in [0,n]$, it is possible to see that our algorithm compute:
\n\n\\begin{equation}\n alg(A,k)=s \\hspace{0.2 cm} |\\hspace{0.2 cm} \\#\\{A_i<=s\\} = k\n\\end{equation}\n\nIt is then easily possible to perform some trasformations over our algorithm parameters in order to obtain the similarities with the quantile function, i.e.:\n\n1. A shrinkage over our algorithm support space (i.e. $k'=k/n$);\n\n2. A shrinkage over our cardinality measure (i.e. $\\#\\{A_i<=s \\}'=\\frac{\\#\\{A_i<=s \\}}{n}$);\n\nSubstituting into our $alg(A,k)$ it becomes:\n\\begin{equation}\n alg(A,k')=s\\hspace{0.2 cm} |\\hspace{0.2 cm} \\frac{\\#\\{A_i<=s\\}}{n} = k'\n\\end{equation}\nIn a frequentist approach (said $A_r$ a random sample of the vector $A$) we can equal $\\frac{\\#\\{A_i<=s\\}}{n}= \\mathcal{P}(A_r <= s)$; In words, our algorithm is computing the value $s$ so that the number of elements in the array $A$ smaller or equal to $s$ will be equal to $k$: we can so somehow define our algorithm a \"quantile function over a non-normalized support\".\n## Question 2\nWe initially note that the subdivision of the array $A$ (over which we are calling $alg()$) into $L$ and $R$ requires to scan the whole vector $A$ (i.e. requires $n=len(A)$ operations). Let consider the worst case scenario, i.e. imagine that $k=n$ and that at each iteration the random sample $s$ will always be equal to $A_1$: it basically means that the $s$ satisfying the condition over $k$ will be selected at the $n_{th}-1$ call of $alg()$ (iteration at which the vector $A$ over which we are calling $alg()$ has lenght equal to 2). We are so going to remove at each call of $alg()$ a single element, i.e. the smallest element in $A$. Due to this, the number of operations needed to scan the vector $A$ will decrease of one unit at each iteration of $alg()$. So we have that:\n $$\n T(n)=n+(n-1)+(n-2)+(n-3)+...+(n-(n-1)) = \\sum_{i=0}^{i=n-1}(n-i)=\\frac{1}{2}n(n-1)\n $$ \n(We recall that the sum is executed over $n-1$ iteration because we need $n-1$ call of $alg()$ to reach the right $s$). We can so assume an asymptotical complexity in the worst case scenario (removing costant therms) equal to $\\mathcal{O}(n^2)$.\n## Question 3\nIn the best case scenario, the right $s$ will be picked up at the first iteration: we only need $n$=len($A$) operation to scan $A$ and divide it into $L$ and $R$ : the asymptotical complexity will then be equal to $\\mathcal{O}(n)$.\n\n# TQ2\n## Question 1\nLet dive into the interpretation of the given recursive algorithm's complexity. It is clear that, given a particular $n$ and $\\forall l$, and expressing with $T(n)$ the time needed to complete the algorithm called with parameter $n$:\n\n\\begin{equation}\n T(n) = T\\left(\\frac{n}{2}\\right)\\cdot 2 + \\left(\\frac{n}{2}+1\\right)\\cdot 3\n\\end{equation}\n\nIndeed, calling **splitSwap(a,l,n)** we will have to solve two times **splitSwap(a,l,n/2)** plus execute 3 operations for each of the $\\left(\\frac{n}{2}+1\\right)$ iterations of the for loop into **swapList(a,l,n)**. 
Lets compute running times after the expression of $T(n)$:\n\n\\begin{equation}\n T\\left(\\frac{n}{2}\\right) = T\\left(\\frac{n}{2^2}\\right)\\cdot 2 + \\left(\\frac{n}{2^2}+1\\right)\\cdot 3\n\\end{equation}\n\n\\begin{equation}\n T(n) = T\\left(\\frac{n}{2^2}\\right)\\cdot 2^2 + \\left(\\frac{n}{2^2}+1\\right)\\cdot2 \\cdot 3 +\\left(\\frac{n}{2}+1\\right)\\cdot 3\n\\end{equation}\n\n\\begin{equation}\n T(n) = T\\left(\\frac{n}{2^2}\\right)\\cdot 2^2 + \\left(\\frac{n}{2}+1\\right)\\cdot2 \\cdot 3 +3\n\\end{equation}\n\n\\begin{equation}\n T\\left(\\frac{n}{2^2}\\right) = T\\left(\\frac{n}{2^3}\\right)\\cdot 2 + \\left(\\frac{n}{2^3}+1\\right)\\cdot 3\n\\end{equation}\n\n\\begin{equation}\n T(n) = T\\left(\\frac{n}{2^3}\\right)\\cdot 2^3 + \\left(\\frac{n}{2}+1\\right)\\cdot 3 \\cdot 3 +7\n\\end{equation}\n\n\\begin{equation}\n T(n) = T\\left(\\frac{n}{2^k}\\right)\\cdot 2^k + \\left(\\frac{n}{2}+1\\right)\\cdot k \\cdot 3 +log_2(2^k)-1\n\\end{equation}\n\nSetting $2^k=n \\Leftrightarrow k =log_2(n)$ we obtain:\n\n\\begin{equation}\n T(n) = T(1)\\cdot n + \\left(\\frac{n}{2}+1\\right)\\cdot log_2(n) \\cdot 3 +log_2(n)-1 \\simeq n\\cdot log_2(n)\n\\end{equation}\n\nIn the latter we have removed the dependency from factors, constant terms and considered only the term with the biggest growth rate w.r.t $n$. We can than say that the asymptotical complexity of the algorithm is $\\mathcal{O}(n\\cdot log_2(n))$.\n\n## Question 2\nGiven an array **a**, an index **l** and a number **n** (considering the scenario where both **len(a)** and **n** are power of 2 numbers), the algorithm output the array **a'** built as follows:\n\n\\begin{equation}\n a'[i]=a[i] \\hspace{1cm}\\forall i \\in [0,1,...,l-1]\\hspace{1cm}\\mbox{if}\\hspace{1cm} l \\geq 1\n\\end{equation}\n\n\\begin{equation}\n a'[l+i]=a[l+n-i]\n\\end{equation}\n\nIn words, starting from an index **l** of the original array **a**, the algorithm is reversing the position of the first **n** elements of the array. Because of this of course it is required that **l+n** $\\leq$ **len(a)**, otherwise the subroutine **swapList()** will raise an error because of the out-of-range index it loops on. Let describe the algorithm's mechanism. Looking at the code, we can assess how the only part of the code actually changing the position of the array's elements is the subroutine **swapList()**. Given a triplet **(a,l,n)**, once **splitSwap()** is called, it will recursively call himself with an **n** halfed call by call (i.e. **n**$^{(1)}$ =**n/2**, **n**$^{(2)}$ =**n**$^{(1)}/2$, **n**$^{(3)}$ =**n**$^{(2)}/2$ and so on). As we can see in the (Fig.1), after $\\text{log}_2(n)-1$ steps, the function **splitSwap(a,l,2)** will be called: in its execution both **splitSwap(a,l,1)** and **splitSwap(a,l+1,1)** will **return** (being **n**=1), finally allowing the execution of **swaplist(a,l,2)** (that we will call **final-node-subroutine** $\\forall l$) that will exchange the position of the array's elements **a[l]** with **a[l+1]**. Being **splitSwap(a,l,2)** completed, **splitSwap(a,l+2,2)** will be called. Similary, at the end of the execution its **final-node-subroutine** will exchange the position of the array's elements **a[l+2]** with **a[l+3]**. 
Basically the **final-node-subroutines** consider the array (starting from the element $a[l]$) as a sequence of $\\frac{n}{2}$ couples of elements and in each couple they exchange the 1st element with the 2nd one.\n\nRecalling that **splitSwap(a,l,2)** and **splitSwap(a,l+2,2)** where called in **splitSwap(a,l,4)**, **swapList(a,l,4)** (that we will call **semi-final-node-subroutine**) will finally be executed, exchanging the position of the array's elements **a[l]** with **a[l+2]** and **a[l+1]** with **a[l+3]**. So the role of **semi-final-node-subroutines** is to consider the array (starting from the element $a[l]$) as a sequence of $\\frac{n}{4}$ couples of couples and to exchange the position of the 1st element of the 1st couple with the 1st element of the 2nd couple, and the 2nd element of the 1st couple with the 2nd element of the 2nd couple. Basically, after the execution of all the **final-node-subroutines** and of the **semi-final-node-subroutines** the position of the 1st group of 4 elements of the original array will be reversed, the same for the 2nd group of 4 elements and so on. We can so climb our recursive function tree from the **final-node-subroutines** up to the top **first-final-node-subroutine** i.e. **swapList(a,l,n)**. We can see the effect of each kind of **subroutine** level over a test array in two examples at (Fig.2,3) recalling that the output of the **first-final-node-subroutine** will be equal to the algorithm's output.\n\nHaving assessed that the algorithm complexity is $\\simeq O(n\\cdot log_2(n))$, it is possible to confirm that the algorithm it's not optimal: infact it is easily possible to write some pseudo-code with a lower complexity than the given algorithm:\n\n```python\ndef reverse(a,l,n):\n reversed_array=a\n for i in range(n):\n reversed_array[i+l]=a[l+n-i]\n return reversed_array\n```\n\nWe can easily see that the **reverse()** algorithm complexity has now become (removing costant therms and factors) $O(n)$, proving that the **splitSwap()** algorithm was not optimal.\n\nIn order:
Fig. 1: Reaching the first final-node-subroutine
Fig. 2: Test over a with len(a) = n = 16, l = 0
Fig. 3: Test over a with len(a) = 16, n = 8, l = 7
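To make the TQ2 analysis above concrete, the following is a minimal, self-contained reconstruction of the **splitSwap** / **swapList** routines as they are described in Questions 1 and 2 (the assignment's original code is not reproduced in this notebook, so the exact bodies below are assumed from that description):

```python
# Reconstruction (assumed from the TQ2 description): swapList exchanges the two halves
# of an n-element window element-wise; splitSwap recurses on each half and then swaps them.
def swapList(a, l, n):
    for i in range(n // 2):
        a[l + i], a[l + n // 2 + i] = a[l + n // 2 + i], a[l + i]

def splitSwap(a, l, n):
    if n <= 1:
        return
    splitSwap(a, l, n // 2)            # first half
    splitSwap(a, l + n // 2, n // 2)   # second half
    swapList(a, l, n)                  # exchange the two halves element-wise

# The overall effect is to reverse the n elements starting at index l,
# reproducing the two test cases of Fig. 2 and Fig. 3:
a = list(range(16))
splitSwap(a, 0, 16)
print(a)        # [15, 14, ..., 1, 0]: the whole array is reversed

b = list(range(16))
splitSwap(b, 7, 8)
print(b)        # b[7:15] is reversed, everything else is untouched
```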
\n\n# TQ3: Knapsack\nIn this theoretical question we have to face with a NP-complete problem: the Knapsack one. To solve it generally we have to use heuristic solutions but in some cases they fail to provide the optimal solution.\n* The first heuristic solution is a greedy algorithm in which we order the object in increasing order of weight and then visit them sequentially, adding them to the solution as long as the budget is not exceeded. This algorithm does not provide the optimal solution in every situation indeed in my counterexample this greedy algorithm fails: we fix the budget: **W** = 10 and we have three object.\n\n\n|i |w_i| v_i|\n|-----|---|----|\n|1 |4 |3 |\n|2 |6 |5 |\n|3 |10 |9 |\n\nWe have to visit the object sequentially so we are going to pick the first two objects, but we cannot pick the third one because we will exceed the budget. This choice is not optimal because it would be better pick only the third object because its values (9) is greater of the sum of the first two (8).\n\n* In the second heuristic solution we have to order the objects in decreasing order of values, and then visit them sequentially, adding them to the solution if the budget is not exceeded. This algorithm does not provide the optimal solution in each situation indeed in my counterexample this greedy algorithm fails: I have decided to choose the same budget **W** = 10 and the same number of object of the last counterexample.\n\n|i |w_i| v_i|\n|-----|---|----|\n|1 |9 |9 |\n|2 |7 |7 |\n|3 |3 |3 |\n\nWe have to visit the objects sequentially so we are going to pick the first object, but we cannot pick the last two because we will exceed the budget. This choice is not optimal because it would be better pick the second and the third objects because the sum of their values (10) is greater of the first object value (9).\n\n* In the third heuristic solution we have to order them in decreasing relative value ($v_1$/ $w_i$), and then visit them sequentially, adding them to the solution if the budget is not exceeded\nThis algorithm does not provide the optimal solution in each situation indeed in my counterexample this greedy algorithm fails: I have decided to choose the same budget **W** = 10 and the same number of object of the two last counterexamples.\n\n|i |w_i| v_i|\n|-----|---|----|\n|1 |7 |9 |\n|2 |6 |6 |\n|3 |4 |4 |\n\nWe have to visit the objects sequentially so we are going to pick the first object whose relative value is 1.29 while the one of the other objects is 1. We cannot pick the last two because we will exceed the budget. 
This choice is not optimal because it would be better pick the second and the third objects because the sum of their values (10) is greater of the first object value (9).\n", "meta": {"hexsha": "cb51081df4c2740c5aee626da09f61c33f8e64b6", "size": 893783, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "main.ipynb", "max_stars_repo_name": "michele1783/ADM-HW2", "max_stars_repo_head_hexsha": "2635ff78b9b2afa6f5c733a3c48ba5c2c909f681", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "main.ipynb", "max_issues_repo_name": "michele1783/ADM-HW2", "max_issues_repo_head_hexsha": "2635ff78b9b2afa6f5c733a3c48ba5c2c909f681", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "main.ipynb", "max_forks_repo_name": "michele1783/ADM-HW2", "max_forks_repo_head_hexsha": "2635ff78b9b2afa6f5c733a3c48ba5c2c909f681", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 266.0860375112, "max_line_length": 219005, "alphanum_fraction": 0.8472459199, "converted": true, "num_tokens": 40563, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5312093733737562, "lm_q2_score": 0.5078118642792044, "lm_q1q2_score": 0.2697544222155151}} {"text": "# Language Classification with Naive Bayes in Python\n\n## Recommended Prerequisites for Successful Completion\n* Intermediate level understanding of Python 3+ (e.g. list and dictionary comprehension)\n* Basics of machine learning (e.g. the distinction between training and validation data)\n* Mathematical probability (e.g. understanding Bayes' Theorem at a basic level)\n\n\n## Project Outline\nIntroduction (#intro)\n\n(#task1): Exploratory Data Analysis + Visualization\n\n(#task2): Data Cleaning and Preprocessing\n\n(#task3): Naive Bayes Model Introduction and Training\n\n(#task4): Highlighting Problems with Basic Model and Simple Fixes\n\n(#task5): Advanced Approach to Further Improve Performance\n\n\n```python\nimport matplotlib\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n\nfrom tqdm import tqdm_notebook\nimport numpy as np\nimport string\n\nfrom collections import defaultdict\n\nfrom sklearn.metrics import f1_score\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.feature_extraction.text import CountVectorizer\n\nimport joblib\nimport pickle as pkl\nfrom sklearn.metrics import confusion_matrix\nimport seaborn as sns\n\n```\n\n\n# Introduction\n\n## [Slovak Wikipedia Entry](https://sk.wikipedia.org/wiki/Jazve\u010d%C3%ADk)\nMnoh\u00ed \u013eudia, ktor\u00ed vidia na ulici jazve\u010d\u00edka s podlhovast\u00fdm telom v\u00f4bec nevedia o tom, \u017ee tento mal\u00fd \u0161tvornoh\u00fd a ve\u013emi ob\u013e\u00faben\u00fd spolo\u010dn\u00edk je pri dobrom v\u00fdcviku obratn\u00fdm, vynikaj\u00facim a spo\u013eahliv\u00fdm po\u013eovn\u00fdm psom. Ako po\u013eovn\u00fd pes je mnohostranne vyu\u017eite\u013en\u00fd, okrem in\u00e9ho ako duri\u010d na brloh\u00e1renie. Kr\u00e1li\u010d\u00ed jazve\u010d\u00edk sa dok\u00e1\u017ee obratne pohybova\u0165 v kr\u00e1li\u010dej nore. 
S in\u00fdmi psami a de\u0165mi si nie v\u017edy rozumie.\n\n## [Czech Wikipedia Entry](https://cs.wikipedia.org/wiki/Jezev\u010d%C3%ADk)\n\u00dapln\u011b prvn\u00ed zm\u00ednky o psech podobn\u00fdch dne\u0161n\u00edm jezev\u010d\u00edk\u016fm nach\u00e1z\u00edme a\u017e ve Star\u00e9m Egypt\u011b, kde jsou vyobrazeni na so\u0161k\u00e1ch a rytin\u00e1ch kr\u00e1tkonoz\u00ed psi s dlouh\u00fdm h\u0159betem a kr\u00e1tkou srst\u00ed. Jednalo se ale o neust\u00e1len\u00fd typ bez ust\u00e1len\u00e9ho jm\u00e9na. Dal\u0161\u00ed zm\u00ednky o jezev\u010d\u00edc\u00edch nach\u00e1z\u00edme a\u017e ve 14 - 15. stolet\u00ed. Jedn\u00e1 se o psa, kter\u00fd se nejv\u00edce podob\u00e1 dne\u0161n\u00edmu typu hladkosrst\u00e9ho standardn\u00edho jezev\u010d\u00edka.\n\n\n## [English Wikipedia Entry](https://en.wikipedia.org/wiki/Dachshund)\nWhile classified in the hound group or scent hound group in the United States and Great Britain, the breed has its own group in the countries which belong to the F\u00e9d\u00e9ration Cynologique Internationale (World Canine Federation). Many dachshunds, especially the wire-haired subtype, may exhibit behavior and appearance that are similar to that of the terrier group of dogs.\n\n\n# Task 1: Data Exploration and Visualization\n\n\n```python\ndef open_file(filename):\n with open(filename, encoding=\"utf8\") as f:\n data = f.readlines()\n return data\n```\n\n\n```python\ndata_raw = dict()\ndata_raw['sk'] = open_file('Data/Sentences/train_sentences.sk')\ndata_raw['cs'] = open_file('Data/Sentences/train_sentences.cs')\ndata_raw['en'] = open_file('Data/Sentences/train_sentences.en')\n```\n\n\n```python\ndef show_statistics(data):\n for language, sentences in data.items():\n \n number_of_sentences = 0\n number_of_words = 0\n number_of_unique_words = 0\n sample_extract = ''\n \n # take a few minutes to try populate these variables\n \n # here is a hint -- word_list breaks the collections of sentences into a list of words\n word_list = ' '.join(sentences).split()\n \n \n print(f'Language: {language}')\n print('-----------------------')\n print(f'Number of sentences\\t:\\t {number_of_sentences}')\n print(f'Number of words\\t\\t:\\t {number_of_words}')\n print(f'Number of unique words\\t:\\t {number_of_unique_words}')\n print(f'Sample extract\\t\\t:\\t {sample_extract}...\\n')\n```\n\n\n```python\nshow_statistics(data_raw)\n```\n\n Language: sk\n -----------------------\n Number of sentences\t:\t 0\n Number of words\t\t:\t 0\n Number of unique words\t:\t 0\n Sample extract\t\t:\t ...\n \n Language: cs\n -----------------------\n Number of sentences\t:\t 0\n Number of words\t\t:\t 0\n Number of unique words\t:\t 0\n Sample extract\t\t:\t ...\n \n Language: en\n -----------------------\n Number of sentences\t:\t 0\n Number of words\t\t:\t 0\n Number of unique words\t:\t 0\n Sample extract\t\t:\t ...\n \n\n\n\n# Task 2: Data Cleaning and Preprocessing\n\n\n```python\ndef preprocess(text):\n '''\n Removes punctuation and digits from a string, and converts all characters to lowercase. 
\n Also clears all \\n and hyphens (splits hyphenated words into two words).\n \n '''\n \n preprocessed_text = text.lower().replace('-', ' ')\n \n translation_table = str.maketrans('\\n', ' ', string.punctuation+string.digits)\n \n preprocessed_text = preprocessed_text.translate(translation_table)\n \n return preprocessed_text\n```\n\n\n```python\ndata_preprocessed = {k: [preprocess(sentence) for sentence in v] for k, v in data_raw.items()}\n```\n\n\n```python\nshow_statistics(data_preprocessed)\n```\n\n Language: sk\n -----------------------\n Number of sentences\t:\t 0\n Number of words\t\t:\t 0\n Number of unique words\t:\t 0\n Sample extract\t\t:\t ...\n \n Language: cs\n -----------------------\n Number of sentences\t:\t 0\n Number of words\t\t:\t 0\n Number of unique words\t:\t 0\n Sample extract\t\t:\t ...\n \n Language: en\n -----------------------\n Number of sentences\t:\t 0\n Number of words\t\t:\t 0\n Number of unique words\t:\t 0\n Sample extract\t\t:\t ...\n \n\n\n\n# Task 3: The Naive Bayes Model\n\n**Bayes' Theorem**\n\n\\begin{equation}\nP(A | B)=\\frac{P(B | A) \\times P(A)}{P(B)}\n\\end{equation}\n\nNow, let's translate this theory into our specific problem. In our case, where we want to categorise a sentence `my name is Ari` into one of `sk`, `cs`, or `en`, the following are the probabilities we want to determine.\n\n\\begin{equation}\nP(\\text {sk} | \\text {my name is Ari})=\\frac{P(\\text {my name is Ari} | \\text {sk}) \\times P(\\text {sk})}{P(\\text {my name is Ari})}\n\\end{equation}\n\n\\begin{equation}\nP(\\text {cs} | \\text {my name is Ari})=\\frac{P(\\text {my name is Ari} | \\text {cs}) \\times P(\\text {cs})}{P(\\text {my name is Ari})}\n\\end{equation}\n\n\\begin{equation}\nP(\\text {en} | \\text {my name is Ari})=\\frac{P(\\text {my name is Ari} | \\text {en}) \\times P(\\text {en})}{P(\\text {my name is Ari})}\n\\end{equation}\n\n## Unseen Data\n\nSince we assume conditional independence across our features, our numerator term for any of the above equations can be broken into the following.\n\n\\begin{equation}\nP(\\text {my name is Ari} | \\text {en}) = P(\\text {my} | \\text {en}) \\times P(\\text {name} | \\text {en}) \\times P(\\text {is} | \\text {en}) \\times P(\\text {Ari} | \\text {en})\n\\end{equation}\n\n## Vectorizing Training Data\n\n\n```python\ndef get_data(data):\n st , label = [],[]\n for k, v in data.items():\n for sentences in v:\n st.append(sentences)\n label.append(k)\n return st, label\n```\n\n\n```python\nsentences_train, y_train = get_data(data_preprocessed)\n\n```\n\n\n```python\nvectorizer = CountVectorizer()\n```\n\n\n```python\nX_train = vectorizer.fit_transform(sentences_train)\n```\n\n\n```python\nX_train\n```\n\n\n\n\n <210x2208 sparse matrix of type ''\n \twith 3867 stored elements in Compressed Sparse Row format>\n\n\n\n## Initializing Model Parameters and Training\n\n\n```python\nnaive_classifier = MultinomialNB()\nnaive_classifier.fit(X_train, y_train)\n```\n\n\n\n\n MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)\n\n\n\n## Vectorizing Validation Data and Evaluating Model\n\n\n```python\ndata_val = dict()\ndata_val['sk'] = open_file('Data/Sentences/val_sentences.sk')\ndata_val['cs'] = open_file('Data/Sentences/val_sentences.cs')\ndata_val['en'] = open_file('Data/Sentences/val_sentences.en')\n\ndata_val_preprocessed = {k: [preprocess(sentence) for sentence in v] for k, v in data_val.items()}\n```\n\n\n```python\nsentences_val, y_val = get_data(data_val_preprocessed)\n```\n\n\n```python\nX_val = 
vectorizer.transform(sentences_val)\n```\n\n\n```python\npredictions = naive_classifier.predict(X_val)\n```\n\n\n```python\n\ndef plot_confusion_matrix(labels , y_pred,classes):\n cm=confusion_matrix(labels,y_pred,classes)\n \n # ploting the matrix in form of heat map\n\n sns.heatmap(cm)\n```\n\n\n```python\nplot_confusion_matrix(y_val, predictions, ['sk', 'cs', 'en'])\n```\n\n\n \n\n \n\n\n\n```python\nf1_score(y_val, predictions, average='weighted')\n```\n\n\n\n\n 0.6149824401040264\n\n\n\n\n# Simple Adjustments and Highlighting Model Shortcomings\n\n\n```python\nnaive_classifier = MultinomialNB(alpha=0.0001, fit_prior=False)\nnaive_classifier.fit(X_train, y_train)\n\npredictions = naive_classifier.predict(X_val)\n\nplot_confusion_matrix(y_val, predictions, ['sk', 'cs', 'en'])\n```\n\n\n \n\n \n\n\n\n```python\nf1_score(y_val, predictions, average='weighted')\n```\n\n\n\n\n 0.8368507601649364\n\n\n\n\n# Using Subwords to Shift Perspective\n\n**Dummy Dataset**\n\nplaying ; eating ; play ; reads ; tea\n\n**Step 1**\n\nBreak each word into characters\n\nplaying > p l a y i n g\n\n\n**Step 2**\n\nFind common character sequences\n\nea, ing, play\n\n**Step 3**\n\nConvert dataset using these subwords into\n\nplay ing ; ea t ing ; play ; r ea d s ; t ea\n\n\n```python\n# taken from https://arxiv.org/abs/1508.07909\n\nimport re, collections\ndef get_stats(vocab):\n pairs = collections.defaultdict(int) \n for word, freq in vocab.items():\n symbols = word.split()\n for i in range(len(symbols)-1):\n pairs[symbols[i],symbols[i+1]] += freq \n return pairs\n\ndef merge_vocab(pair, v_in):\n v_out = {}\n bigram = re.escape(' '.join(pair))\n p = re.compile(r'(?= 2:\n merges[subword] += v\n```\n\n\n```python\nmerge_ordered = sorted(merges, key=merges.get, reverse=True)\n```\n\n\n```python\npkl.dump(merge_ordered, open('Data/Auxiliary/merge_ordered.pkl', 'wb'))\n```\n\n\n```python\ndef split_into_subwords(text):\n merges = pkl.load(open('Data/Auxiliary/merge_ordered.pkl', 'rb'))\n subwords = []\n for word in text.split():\n for subword in merges:\n subword_count = word.count(subword)\n if subword_count > 0:\n word = word.replace(subword, ' ')\n subwords.extend([subword]*subword_count)\n return ' '.join(subwords)\n```\n\n\n```python\nsplit_into_subwords('this is ari here')\n```\n\n\n\n\n 'is th is ar re'\n\n\n\n\n```python\ndata_preprocessed_subwords = {k: [split_into_subwords(sentence) for sentence in v] for k, v in data_preprocessed.items()}\n```\n\n\n```python\nshow_statistics(data_preprocessed_subwords)\n```\n\n Language: sk\n -----------------------\n Number of sentences\t:\t 0\n Number of words\t\t:\t 0\n Number of unique words\t:\t 0\n Sample extract\t\t:\t ...\n \n Language: cs\n -----------------------\n Number of sentences\t:\t 0\n Number of words\t\t:\t 0\n Number of unique words\t:\t 0\n Sample extract\t\t:\t ...\n \n Language: en\n -----------------------\n Number of sentences\t:\t 0\n Number of words\t\t:\t 0\n Number of unique words\t:\t 0\n Sample extract\t\t:\t ...\n \n\n\n\n```python\ndata_train_subwords = []\nfor sentence in sentences_train:\n data_train_subwords.append(split_into_subwords(sentence))\n```\n\n\n```python\ndata_val_subwords = []\nfor sentence in sentences_val:\n data_val_subwords.append(split_into_subwords(sentence))\n```\n\n\n```python\nvectorizer = CountVectorizer()\n```\n\n\n```python\nX_train = vectorizer.fit_transform(data_train_subwords)\nX_val = vectorizer.transform(data_val_subwords)\n```\n\n\n```python\nnaive_classifier = 
MultinomialNB(fit_prior=False)\nnaive_classifier.fit(X_train, y_train)\n```\n\n\n\n\n MultinomialNB(alpha=1.0, class_prior=None, fit_prior=False)\n\n\n\n\n```python\npredictions = naive_classifier.predict(X_val)\n```\n\n\n```python\nplot_confusion_matrix(y_val, predictions, ['sk', 'cs', 'en'])\n```\n\n\n \n\n \n\n\n\n```python\nf1_score(y_val, predictions, average='weighted')\n```\n\n\n\n\n 0.8456381060126386\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "5dffe5ffb7a18d241775df88453703548f32c1c9", "size": 70981, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Language_classifier.ipynb", "max_stars_repo_name": "HarshKhandelwal1552/Language_classifier_with_Naive_Bayes", "max_stars_repo_head_hexsha": "9807006677379ede25570f4229926c5000bb4a27", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-08-10T14:15:13.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-10T14:15:13.000Z", "max_issues_repo_path": "Language_classifier.ipynb", "max_issues_repo_name": "HarshKhandelwal1552/Language_classifier_with_Naive_Bayes", "max_issues_repo_head_hexsha": "9807006677379ede25570f4229926c5000bb4a27", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Language_classifier.ipynb", "max_forks_repo_name": "HarshKhandelwal1552/Language_classifier_with_Naive_Bayes", "max_forks_repo_head_hexsha": "9807006677379ede25570f4229926c5000bb4a27", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 73.2518059856, "max_line_length": 16442, "alphanum_fraction": 0.5865372424, "converted": true, "num_tokens": 3504, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5078118642792043, "lm_q2_score": 0.5312093733737563, "lm_q1q2_score": 0.2697544222155151}} {"text": "# Circuit\n\n\n```python\nfrom discopy import grammar\nfrom pytket.circuit.display import render_circuit_jupyter\n\nfrom lambeq.ccg2discocat import DepCCGParser\nfrom lambeq.circuit import IQPAnsatz\nfrom lambeq.core.types import AtomicType\n\nN = AtomicType.NOUN\nS = AtomicType.SENTENCE\n```\n\n\n```python\ndepccg_parser = DepCCGParser()\ndiagram = depccg_parser.sentence2diagram('Alice runs')\ndiagram.draw()\n```\n\n\n```python\nansatz = IQPAnsatz({N: 1, S: 1}, n_layers=2)\ndiscopy_circuit = ansatz(diagram)\ndiscopy_circuit.draw(figsize=(10, 15))\n```\n\n\n```python\ntket_circuit = ansatz(diagram).to_tk()\n\n# This does not render properly on GitHub, please view it at:\n# https://cqcl.github.io/lambeq/examples/circuit.html\nrender_circuit_jupyter(tket_circuit)\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n
[pytket circuit-renderer output removed: the interactive HTML/JS widget does not survive static export. The rendered circuit acts on qubits q[0], q[1], q[2] and classical bits c[0], c[1], and contains the parameterised gates Rx(2*Alice__n_0), Rz(2*Alice__n_1), Rx(2*Alice__n_2), Rz(2*runs__n.r@s_0), Rz(2*runs__n.r@s_1) together with Hadamard and X gates, controlled operations and measurements into c[0] and c[1].]
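Because the interactive renderer above does not display in a static export (as already noted in the code comments), it can be handy to dump a plain-text view of the same circuit. This is only a convenience sketch, not part of the original notebook, and assumes nothing beyond the `tket_circuit` built above and the standard pytket `Circuit` accessors (`n_qubits`, `n_gates`, `free_symbols()`, `get_commands()`).

```python
# Plain-text view of the circuit built above: size, remaining symbolic parameters,
# and one line per gate with the wires it acts on.
print(tket_circuit.n_qubits, "qubits,", tket_circuit.n_gates, "gates")
print("free symbols:", tket_circuit.free_symbols())
for command in tket_circuit.get_commands():
    print(command)
```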
\n\n\n\n\n\n```python\nfrom sympy import default_sort_key\n\n# Make sure you sort your symbols as they are returned as a set.\nparameters = sorted(tket_circuit.free_symbols(), key=default_sort_key)\n\nparam_dict = {p: i * 0.001 for i, p in enumerate(parameters)}\nparam_dict\n```\n\n\n\n\n {Alice__n_0: 0.0,\n Alice__n_1: 0.001,\n Alice__n_2: 0.002,\n runs__n.r@s_0: 0.003,\n runs__n.r@s_1: 0.004}\n\n\n\n\n```python\ntket_circuit.symbol_substitution(param_dict)\n\n# This does not render properly on GitHub, please view it at:\n# https://cqcl.github.io/lambeq/examples/circuit.html\nrender_circuit_jupyter(tket_circuit)\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n
[pytket circuit-renderer output removed: the interactive HTML/JS widget does not survive static export. The substituted circuit has the same structure as the one above, with the symbolic angles replaced by the numeric values Rx(0.0), Rz(0.002), Rx(0.004), Rz(0.006) and Rz(0.008).]
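As a quick sanity check (a small sketch, not in the original notebook): `symbol_substitution` modifies the circuit in place, so after the call above no free parameters should remain.

```python
# All symbolic angles have been replaced by the numeric values in param_dict.
assert not tket_circuit.free_symbols()
```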
\n\n\n\n", "meta": {"hexsha": "4f4e4a740e75c136295f3ab0a605c787d731afee", "size": 151213, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/examples/circuit.ipynb", "max_stars_repo_name": "kinianlo/lambeq", "max_stars_repo_head_hexsha": "86aedb24bf826c226f2845af4327cf616cd01a1e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-24T10:26:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-24T10:26:36.000Z", "max_issues_repo_path": "docs/examples/circuit.ipynb", "max_issues_repo_name": "yliu9418/lambeq", "max_issues_repo_head_hexsha": "357fb893e01e2b41d7628ceaed265356702ca5fa", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/examples/circuit.ipynb", "max_forks_repo_name": "yliu9418/lambeq", "max_forks_repo_head_hexsha": "357fb893e01e2b41d7628ceaed265356702ca5fa", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.5472706809, "max_line_length": 33656, "alphanum_fraction": 0.4776242783, "converted": true, "num_tokens": 15805, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5117166047041654, "lm_q2_score": 0.523420348936324, "lm_q1q2_score": 0.2678428837907652}} {"text": "# GaiaLab - Introductory infos\n\nToy model in which the snapshots are taken continuously, i.e. not at discrete time steps.\n\n--- \n \n## Content:\n\n- [#1.-Scanner-Object](#1.-Scanner-Object)\n- [#2.-Source-detection](#2.-Source-detection)\n - Sirio\n - Vega\n - Proxima Centauri\n- [#3.-For-more](#3.-For-more)\n\n---\n\n## Few definitions: (from wikipedia:)\n* Attitude: https://en.wikipedia.org/wiki/Attitude_control\n* Parallax: https://en.wikipedia.org/wiki/Parallax\n* B-Spline: https://en.wikipedia.org/wiki/B-spline\n* Quaternion: https://en.wikipedia.org/wiki/Quaternion\n* CCD (Charge-Coupled Device): https://en.wikipedia.org/wiki/Charge-coupled_device\n* BCRS and GCRS (Barycentric/Geocentric Celestial Reference System): https://en.wikipedia.org/wiki/Barycentric_celestial_reference_system\n* Some other things about references frames: https://www.gaia.ac.uk/science/astronomical-coordinate-systems\n* TCB (Barycentric Coordinate Time): https://en.wikipedia.org/wiki/Barycentric_Coordinate_Time\n* Topocentric function: https://en.wikipedia.org/wiki/Horizontal_coordinate_system\n\n# Abstract\n\nThis notebook explains the wotking principle of the scanner object, together with auxiliary source and satellite objects\n\n\n```python\n# To use interact -- IPython widget\nfrom __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nimport ipywidgets as widgets\n\n# Generic module imports import\nfrom IPython.display import Image\nimport sys\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# append to path the folder that contains the analytic scanner and import local files\nsys.path.append('../../GaiaLab/scan/analytic_scanner')\n\nimport constants as const\n\nfrom scanner import Scanner\nfrom satellite import Satellite\nfrom source import Source\n\nimport analytic_plots as plots\nfrom agis_functions import scanning_direction\nfrom agis_functions import get_angular_FFoV_PFoV\nfrom agis_functions import *\n\n# Ipython magics\n%load_ext autoreload\n%autoreload 2\n# for 
pseudo interactive plots\n# %matplotlib notebook \n\n# Set some user-specific variables\nMAIN_FOLDER = '../../' # relative path to project folder\nFIG_FOLDER = '../figures/' # relative path to figure\n```\n\n**About the frames we will use** \n\nHere is an image representing the frames:\n\n\n\n\nFigure 1. Definition of angles $\\epsilon$, $\\lambda$, $\\nu$, $\\xi$, and $\\Omega$ in the nominal scanning law.\n\nThe nominal scanning law of Gaia is described by the following angles:\n* $\\epsilon$: obliquity of equator. This is a $\\it{constant}$ chosen to be 23\u00ba 26' 21.448''\n* $\\xi$: revolving angle. At any time the z axis is at this $\\it{constant}$ angle from $\\vec{s}$. For Gaia, the current choice is 55\u00ba.\n* $\\lambda_{s}(t)$: nominal longitud of the sun in the ecliptic plane. This is chosen to have a constant speed: $\\dot{\\lambda}(t) = \\frac{2\\pi}{365}$ \n* $\\nu(t)$: revolving phase. Gives a nearly constant precession rate.\n* $\\Omega(t)$: spin phase. \n\nFrames of referenced used:\n* Interial Frame: $\\textbf{G} = \\big[ \\textbf{l},\\textbf{m},\\textbf{n}\\big]$ gaia non-turning wrt to sun reference system\n* Celestial frame of reference (ort): $\\textbf{N} = \\big[ \\textbf{l},\\textbf{j},\\textbf{k}\\big]$ Sun reference\n* Instrument triad frame (ort): $\\textbf{Z} = \\big[ \\textbf{x},\\textbf{y},\\textbf{z}\\big]$ Gaia reference turning\n* Ecliptic triad $\\textbf{E}= \\big[ \\textbf{l},\\textbf{j},\\textbf{k}\\big]$\n\nGiven by:\n$$\n\\textbf{k} = \\textbf{n}cos(\\epsilon) - \\textbf{m} sin(\\epsilon) \\\\ \n\\textbf{j} = \\textbf{m}cos(\\epsilon) + \\textbf{n} sin(\\epsilon) \\\\ \\textbf{s}=\\textbf{l}cos(\\lambda_{s}(t)) + \\textbf{j} sin(\\lambda_{s}(t) \n$$\n\nThe total inertial rotation of the telescope is therefore:\n$$\n\\begin{equation}\n\\vec{\\omega} = \\textbf{k} \\dot{\\lambda_{s}} + \\textbf{s} \\dot{\\nu} + \\textbf{z} \\dot{\\Omega}\n\\end{equation}\n$$\n\nThe precessional motion of the z-axis is: \n\n$$\n\\begin{equation}\n\\dot{z} = \\omega \\times z= \\big( k \\times z\\big)\\dot{\\lambda_{s}} + \\big(s \\times z\\big)\\dot{\\nu}\n\\end{equation}\n$$\n\nAnd if we consider the rate of change of the z-axis wrt time to be a constant:\n$$\n\\begin{equation}\n\\frac{dz}{dt}\\frac{dt}{d\\lambda_{s}} = \\frac{dz}{dt} \\dot{\\lambda_{s}}^{-1} = S\n\\end{equation}$$\n\n\n\nAnd keeping the condition $ \\|\\big( k \\times z \\big)\\|^{2} = 1 - sin^{2}(\\xi)sin^{2}(\\nu)$ then the following ODE for $\\nu$ is obtained:\n\n$$\n\\begin{equation}\n\\dot{\\nu} = \\dot{\\lambda_{s}}\\frac{\\sqrt{S^{2} - cos^{2}(\\nu)} + cos(\\xi)sin(\\nu)}{sin(\\xi)}\n\\end{equation}\n$$\n\nTaking the positive sign of the square root implying increasing $\\nu$.\nThen, for constant inertial spin rate about the z axis using the equation for the inertial rotation $\\omega$, we obtain:\n$$\n\\begin{equation}\n\\dot{\\Omega} = \\omega_{z} - \\dot{\\nu}cos\\xi - \\dot\\lambda_{s}sin\\xi sin\\nu\n\\end{equation}\n$$\n\nTo see how does the longitude and latitude of the z axis changes with time, according to Figure 1, the geometry gives the following equations in the ecliptic plane:\n$$\n\\begin{equation}\n\\lambda_{z}(t) = \\lambda_{s}(t) + arctan\\big[ tan\\xi cos\\nu(t) \\big]. 
\\\\\n\\beta_{z}(t) = arcsin \\big[ sin\\xi sin\\nu(t) \\big].\n\\end{equation}\n$$\n\nFrames of referenced used:\n* Celestial Frame: $\\textbf{G} = \\big[ \\textbf{l},\\textbf{m},\\textbf{n}\\big]$\n* Instrument Frame: $\\textbf{Z} = \\big[ \\textbf{x},\\textbf{y},\\textbf{z}\\big]$\n* Barycentric Reference System $\\textbf{BCRS}= \\big[ \\textbf{l'},\\textbf{m'},\\textbf{n'}\\big]$ \n\nThe BCRS is a frame of reference that has its axis all parallel to G-Frame, but that is displaced by 1AU. That is to say, it is the frame centered at the sun.\n\n# 1. Construct Object\n\n\n```python\nt_init = 0 # [days] initial time of satellite position wrt epoch J2000. This is also the time followed by the scanner.\nt_end = 365*5 # [days] end time at which we compute satellite position. \nmy_dt = 1/24 # [days] time step for the computation of the satellite attitude\n\n# Create the Satellite object \ngaia = Satellite(t_init, t_end, my_dt)\n\n# Create the Scanner object\nscanner = Scanner()\n```\n\n### 1.1 Check properties of objects\n\n##### 1.1.1 properties of the Satellite\n\nPlotting the time evolution of the attitude components, begining at $t=0$ up to $t=80$ days.\n\n\n```python\n# plot the attitude of the satellite\nplots.plot_attitude(gaia, ti=0, tf=80, n_points=1000, style='-')\n```\n\n\n```python\n\n```\n\nPlotting the scanner positions:\n\n\n```python\nmyTime = 50\nplots.plot_3D_scanner_pos(gaia, 'X', 0, myTime, 1000, elevation=30, azimuth=30)\nplots.plot_3D_scanner_pos(gaia, 'Z', 0, myTime, 1000)\ndef my_func(elev, azim):\n plots.plot_3D_scanner_pos(gaia, 'X', 0, myTime, 1000, elevation=elev, azimuth=azim)\ninteract(my_func, elev=widgets.IntSlider(min=0,max=90,step=1,value=20),\n azim=widgets.IntSlider(min=0,max=90,step=1,value=20))\n# x,y,z wrt to lmn reference\n```\n\n ##### 1.1.2 Scanner properties\n We just check some of the properties of the scanner:\n\n\n```python\nprint('Size of the field of view of the scanner: ', np.degrees(scanner.zeta_limit),'\u00b0')\nprint('Is the scanner using two telescope in a gaia-like fascion? ', scanner.double_telescope)\n```\n\n Size of the field of view of the scanner: 0.5 \u00b0\n Is the scanner using two telescope in a gaia-like fascion? True\n\n\n# 2. Source detection\n\nIn this section we create example stars from real data. 
The Source object takes as inputs:\n\n> Source('name', $\\alpha$, $\\delta$, parallax, $\\mu_{\\alpha}$, $\\mu_{\\delta}$, $\\mu_{r}$)\n\nwith units: [string, deg, deg, mas, mas/yr, mas/yr, km/s]\n\n\n\n### 2.1 Create source\n\n\n```python\n# Comments / uncomment lines to play with different sources\n\nsource = Source(\"sirio\", 101.28, -16.7161, 379.21, -546.05, -1223.14, -7.6)\n# source = Source(\"vega\", 279.2333, 38.78, 128.91, 201.03, 286.23, -13.9)\n# source = Source(\"proxima\",217.42, -62, 768.7, 3775.40, 769.33, 21.7)\n```\n\n##### 2.1.1 Visualize the evolution of the source in time\n\n\n```python\n# With respect to equatorial coordinates:\nfig = plots.plot_star_trajectory(source, gaia);\n# fig.show()\n```\n\n\n```python\nfig, ax = plots.plot_scanner_position_over_source(source=source, sat=gaia, t_init=0, t_end=1,\n num_points_of_discretization = 10000);\n```\n\n### 2.2 Scan source\n\n\n```python\ntime_taken_for_scan = scanner.scan(gaia, source, t_init, t_end)\nprint('Scan took: ', time_taken_for_scan, 'seconds')\nprint('Source observed ', len(scanner.obs_times), 'times')\n```\n\n Scan took: 1.22025728225708 seconds\n Source observed 75 times\n\n\n##### 2.2.1 Visualize scan\n\n\n```python\n# Get fields angle from scanner: \nscanner.compute_angles_eta_zeta(gaia, source)\n \n```\n\n\n```python\nas_,ds_ = ([], [])\nfor i, t in enumerate(scanner.obs_times):\n \n attitude = gaia.func_attitude(t)\n phi = scanner.eta_scanned[i]\n zeta = scanner.zeta_scanned[i]\n a, d = field_angles_to_alpha_delta(phi, zeta, attitude)\n as_.append(a)\n ds_.append(d)\n print(a,d)\nplt.figure()\nas_ = ft.zero_to_two_pi_to_minus_pi_pi(np.array(as_))\nds_ = ft.transform_twoPi_into_halfPi(np.array(ds_))\nplt.plot(as_,ds_,'+')\n```\n\n\n```python\nfig = plots.plot_star_trajectory(source, gaia, obs_times=scanner.obs_times, equatorial=False, \n show_scanning_directions=False);\n```\n\nWhere the red dots represent the position of the star at the time of observation.\n\n##### 2.2.2 Quantitative results\n\nCheck the deviation from 0 in the along scan direction:\n\n\n```python\nprint('Scan error in the along scan direction: ', scanner.scanner_error(), '[rads], \\nor equivalently,', \n scanner.scanner_error()/const.rad_per_mas, '[mas]')\n```\n\n Scan error in the along scan direction: -8.979896929409987e-14 [rads], \n or equivalently, -1.852236700263647e-05 [mas]\n\n\nCheck that all the observations are inside the limitations of the Field of View (FoV) which is [-0.5\u00b0, 0.5\u00b0]:\n\n\n```python\n# Check that all the observations are in the field of view:\nplt.figure()\nplt.title('Across scan angle of the observations')\nplt.plot(scanner.zeta_scanned, '+', label='Observation')\nplt.hlines(np.radians([-0.5, 0.5]), xmin=0, xmax=len(scanner.obs_times), label='| 0.5\u00b0|')\nplt.xlim(0, len(scanner.obs_times))\nplt.xlabel('Index of the observation')\nplt.ylabel('$\\zeta$ [rads]')\nplt.legend()\nplt.show();\n```\n\nWhere $\\zeta$ is the Across scan angle of the observation.\n\n# 3. Conclusion\n\nThank you for reading! We hope tha tthis notebook could help you understand the model and we hope you had fun discovering it! 
\nFor more about this project, visit the others notebooks as well as the gaia website.\n\n_For any errors or remarks, don't hesitate to contact us!_\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n# Tests\n\n\n```python\nimport constants as const\nimport frame_transformations as ft\nequatorial = True\nalphas_sol, deltas_sol = ([], []) # coordinates of the founds sources\ntimes_sol = []\nsat=gaia\nobs_times = scanner.obs_times\nfor t in obs_times:\n alpha_obs, delta_obs, delta_alpha_dx_mas, delta_delta_mas = source.topocentric_angles(sat, t)\n times_sol.append(t/const.days_per_year + 2000) # +2000 do the fact that the reference epoch is J2000\n if equatorial is True:\n alphas_sol.append(alpha_obs)\n deltas_sol.append(delta_obs)\n else:\n alphas_sol.append(delta_alpha_dx_mas)\n deltas_sol.append(delta_delta_mas)\n\n# Convert all in mas:\nif equatorial is True:\n if obs_times:\n alphas_sol = np.array(alphas_sol) / const.rad_per_mas\n deltas_sol = np.array(deltas_sol) / const.rad_per_mas\n```\n\n\n```python\nnp.sort(obs_times)\n```\n\n\n\n\n array([ 9.54965349, 9.62360949, 41.51333726, 41.58729534,\n 89.15177884, 89.22574167, 89.40178235, 89.47574443,\n 100.3997511 , 100.47371282, 100.64975141, 100.72371389,\n 149.34471992, 149.52076293, 149.59472506, 215.01009736,\n 267.11808211, 267.19204527, 267.36809264, 267.44205434,\n 267.61809983, 267.6920601 , 267.86810376, 267.94206263,\n 293.43740771, 293.61344987, 293.68741195, 293.86345559,\n 293.9374188 , 294.11346395, 294.18742831, 294.363475 ,\n 294.43744053, 352.54361573, 352.61757383, 391.31145956,\n 419.46080196, 419.53475965, 467.15995955, 467.33601494,\n 467.4099732 , 467.58602379, 478.83412302, 478.90808131,\n 479.08413362, 479.15809271, 526.53748509, 526.71353878,\n 556.43222579, 556.50618798, 594.20535372, 594.38140926,\n 654.30956608, 654.38353273, 654.55958575, 654.633551 ,\n 654.80960218, 654.88356605, 655.05961542, 655.13357792,\n 683.6260836 , 683.80213668, 683.87609702, 684.05215275,\n 684.12611518, 731.48404179, 731.55800951, 772.48564665,\n 772.66170136, 844.00856099, 844.08252235, 844.25857877,\n 858.5049547 , 858.57891493, 858.75497464, 858.82893571,\n 904.21915111, 904.3952057 , 938.3517184 , 938.42568381,\n 974.06410084, 974.13806796, 974.31412333, 1036.98997297,\n 1037.06394165, 1037.23999814, 1037.31396468, 1069.54637634,\n 1069.72243539, 1069.79640185, 1110.41120398, 1110.48517018,\n 1153.14654085, 1153.32259928, 1174.06152949, 1174.13549803,\n 1220.1706271 , 1220.24459041, 1220.42065477, 1282.56537183,\n 1282.63934171, 1320.5833642 , 1320.75942988, 1320.83339318,\n 1354.80815324, 1354.98421257, 1355.0581839 , 1355.23424638,\n 1417.6578794 , 1417.7318489 , 1489.07503113, 1489.14900105,\n 1533.22053456, 1533.29450219, 1550.96422652, 1551.03819142,\n 1596.64689116, 1596.82295523, 1619.63192632, 1619.80799253,\n 1661.47384654, 1661.54781681, 1704.15626196, 1704.23023244,\n 1737.46549893, 1737.64156578, 1737.7155364 , 1737.89160576,\n 1737.96557834, 1797.56314728, 1797.63712055])\n\n\n\n\n```python\ndef transform_twoPi_into_halfPi(deltas):\n deltas = np.array(deltas)\n to_modify_indices = np.where(deltas>np.pi)[0]\n deltas[to_modify_indices] -= 2*np.pi\n return deltas\n```\n\n\n```python\nfrom agis_functions import * \nras0, ras1, decs0, decs1 = ([], [], [], [])\nfor t in np.linspace(0, t_end, num=1000):\n ra0, dec0, _, _ = np.array(get_angular_FFoV_PFoV(gaia, t))\n ra1, dec1, _, _ = np.array(get_angular_FFoV_PFoV(gaia, t+1/24/4))\n \n attitude = gaia.func_attitude(t)\n ra0, dec0 = 
generate_observation_wrt_attitude(attitude)\n \n ra0, ra1 = ft.zero_to_two_pi_to_minus_pi_pi(np.array([ra0,ra1]))\n dec0, dec1 = transform_twoPi_into_halfPi(np.array([dec0, dec1]))\n ras0.append(ra0)\n ras1.append(ra1)\n decs0.append(dec0)\n decs1.append(dec1)\n \n \nplt.plot(ras0, decs0,'b,')\n# plt.plot(ras1, decs1,'r+')\nplt.grid()\n```\n\n\n```python\n\nimport helpers as helpers\nfor i, (t, a, d) in enumerate(zip(np.sort(obs_times), alphas_sol, deltas_sol)):\n if i >3:\n continue\n point = np.array([a, d])\n \n ra0, dec0, _, _ = np.array(get_angular_FFoV_PFoV(gaia, t))\n ra1, dec1, _, _ = np.array(get_angular_FFoV_PFoV(gaia, t+1/24/60/60))\n ra0, ra1 = ft.zero_to_two_pi_to_minus_pi_pi(np.array([ra0,ra1]))\n dec0, dec1 = transform_twoPi_into_halfPi(np.array([dec0, dec1]))\n \n vect_1 = np.array([ra1-ra0, dec1-dec0])/const.rad_per_mas\n \n \n \"\"\"plt.plot(ra0, dec0,'b.')\n plt.plot(ra1, dec1,'r.')\"\"\"\n \n att = sat.func_attitude(t)\n point0 = ft.rotate_by_quaternion(att, [0, 0, 0])\n point1 = ft.rotate_by_quaternion(att, [0, 1, 0])\n \n \n point0 = ft.vector_to_alpha_delta(point0)\n point1 = ft.vector_to_alpha_delta(point1)\n \n vect = np.array([point1[0]-point0[0], point1[1]-point0[1]])\n # vect = helpers.normalize(vect)\n \n \n vector = scanning_direction(source, sat, t)\n adp = ft.vector_to_adp(vector)\n dir_alpha, dir_delta = ft.vector_to_alpha_delta(vector)\n directions = np.array([dir_alpha, dir_delta])\n # directions = helpers.rescaled_direction((dir_alpha, dir_delta), length)\n to_plot_x = np.array([point[0], point[0]+dir_alpha])\n to_plot_y = np.array([point[1], point[1]+dir_delta])\n # plt.plot(to_plot_x, to_plot_y, 'k-', alpha=0.5)\n # plt.quiver(point[0], point[1], vect[0], vect[1], color=['r'])\n plt.quiver(point[0], point[1], vect_1[0], vect_1[1], color=['b'])\n # plt.quiver(point[0], point[1], directions[0], directions[1], color=['k'])\n # plt.quiver(0, 0, directions[0], directions[1], color=['r'])\n```\n\n\n```python\n\n```\n\n\n```python\n\nfor t in [0, 10, 20, 30, 40, 100, 200, 300, 600, 800]:\n att = sat.func_attitude(t)\n point0 = ft.rotate_by_quaternion(att, [0, 0, 0])\n point1 = ft.rotate_by_quaternion(att, [0, 1, 0])\n\n\n point0 = ft.vector_to_alpha_delta(point0)\n point1 = ft.vector_to_alpha_delta(point1)\n\n\n vect = np.array([point1[0]-point0[0], point1[1]-point0[1]])\n\n # vect = helpers.normalize(vect)\n\n plt.quiver(0, 0, vect[0], vect[1], color=['r'])\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "4b8da961f9c04838341227fe888b81d40682bb23", "size": 607168, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Drafts/01-Scanner-Copy1.ipynb", "max_stars_repo_name": "bombrun/GaiaLab", "max_stars_repo_head_hexsha": "186b39621cdc5d8fb165907107b1aaed933f0c8f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2018-06-21T14:36:25.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-04T18:21:18.000Z", "max_issues_repo_path": "notebooks/Drafts/01-Scanner-Copy1.ipynb", "max_issues_repo_name": "MaraBucur/GaiaLab", "max_issues_repo_head_hexsha": "186b39621cdc5d8fb165907107b1aaed933f0c8f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2018-11-26T15:01:27.000Z", "max_issues_repo_issues_event_max_datetime": "2019-07-12T14:21:34.000Z", "max_forks_repo_path": "notebooks/Drafts/01-Scanner-Copy1.ipynb", "max_forks_repo_name": "MaraBucur/GaiaLab", "max_forks_repo_head_hexsha": 
"186b39621cdc5d8fb165907107b1aaed933f0c8f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2018-07-24T07:20:47.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-02T07:17:23.000Z", "avg_line_length": 542.5987488829, "max_line_length": 206628, "alphanum_fraction": 0.9406111653, "converted": true, "num_tokens": 5465, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.523420348936324, "lm_q2_score": 0.5117166047041652, "lm_q1q2_score": 0.26784288379076515}} {"text": "\n\n# Revealing Ferroelectric Switching Character Using Deep Recurrent Neural Networks\nJoshua C. Agar1,2,3*, Brett Naul4, Shishir Pandya1, Stefan van der Walt5, Joshua Maher1, Ren Yao6, Long-Qing Chen7, Sergei V. Kalinin8, Rama K. Vasudevan8, Ye Cao6, Joshua S. Bloom4, and Lane W. Martin1,2*\n\n1 \tDepartment of Materials Science and Engineering, University of California, Berkeley, Berkeley, CA 94720, USA \n2 \tMaterials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA \n3 \tDepartment of Materials Science and Engineering, Lehigh University, Bethlehem, PA 18015, USA \n4 \tDepartment of Astronomy, University of California, Berkeley, Berkeley, CA 94720, USA \n5\tBerkeley Institute of Data Science, University of California, Berkeley, Berkeley, CA 94720, USA \n6 \tDepartment of Materials Science and Engineering, University Texas at Arlington, Arlington, TX 76019, USA \n7 \tDepartment of Materials Science and Engineering and Materials Research Institute, The Pennsylvania State University, University Park, PA 16802-5006, USA \n8 \tCenter for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA \n*joshua.agar@lehigh.edu, lwmartin@berkeley.edu\n\nKeywords: ferroelectric, switching, domains, scanning-probe microscopy, neural network\n\n# Abstract (text)\n\nThe ability to manipulate domains underpins function in applications of ferroelectrics. While there have been demonstrations of controlled nanoscale manipulation of domain structures to drive emergent properties, such approaches lack an internal feedback loop required for automatic manipulation. Here, using a deep sequence-to-sequence autoencoder we automate the extraction of latent features of nanoscale ferroelectric switching from piezoresponse force spectroscopy of tensile-strained PbZr0.2Ti0.8O3 with a hierarchical domain structure. We identify characteristic behavior in the piezoresponse and cantilever resonance hysteresis loops, which allows for the classification and quantification of nanoscale-switching mechanisms. Specifically, we identify elastic hardening events which are associated with the nucleation and growth of charged domain walls. This work demonstrates the efficacy of unsupervised neural networks in learning features of a materials physical response from nanoscale multichannel hyperspectral imagery and provides new capabilities in leveraging in operando spectroscopies that could enable the automated manipulation of nanoscale structures in materials.\n\n# Introduction: (text)\nThe ability to create and manipulate domain structures in ferroelectrics allows for the control of the phase and polarization orientation and the local and macroscale susceptibilities (e.g., electrical, thermal, mechanical, optical, etc.) thus providing a foundation for next-generation devices[1\u20133]. Early efforts in this regard have focused on deterministically creating desired domain structures. 
In, for example, tetragonal PbZr1-xTixO3, controlling the elastic boundary conditions has provided access to domains spanning simple monodomain to complex hierarchical domain structures[4\u20136]. As the field has advanced, ingenious routes, including, compositional gradients[7,8], superlattice structures[9\u201311], orientation control[12,13], and engineered octahedral rotations[14] have been leveraged to control domain structures.\nThe majority of this work, however, has focused on the static creation of desired domain structures[6,15\u201318] or functional domain walls[19,20], and thus lacks an internal self-regulating feedback loop required for automatic operation in functional devices. To deterministically manipulate ferroelectric domain structures requires the ability to measure while in operation (i.e., in operando) and automatically identify a number of features (e.g., polarization orientation, switching pathways, domain-wall geometry, etc.). Developments in multimodal spectroscopy now allow the acquisition of data at both the appropriate time- and length-scales required to glean such information from ferroelectric materials using techniques such as: transmission electron microscopy[21,22], scanning-probe microscopy[23,24], diffraction studies[25,26], etc[27]. The challenge, however, is that downstream analytical approaches which project data into a human-interpretable form remain underdeveloped and ill equipped for the complexity and magnitude of the data that can now be readily produced. In turn, despite the extensive amount of expensive experiments conducted, only an infinitesimally small fraction of the data collected is translated into knowledge. \nSolving this challenge, requires looking beyond the borders of nanoscience to fields such as social analytics[28,29], natural language processin[30,31], and sentiment analysis[32,33], where computational roadblocks are pervasive. For decades, standard practice was to develop machine-learning algorithms to create mathematical abstractions of the data based on characteristics of preconceived importance. Recently, the availability of massive datasets and specifically designed hardware has enabled features once designed by human experts to be extracted using brute-force computation. These representation learning tools generally rely on building architectures of simple non-linear mathematical functions which are optimized to relate the raw data to some information or label[34\u201336]. These so-called deep-learning-neural-network-based approaches have set new benchmarks for many common machine-learning tasks including: image[37] and speech recognition[38], language translation[39], and identification of human intention[32,33]. While these deep-learning approaches have begun to make meaningful inroads in, for example, genomics[40], high-energy physics[41], and astronomy[42], they have yet to be sufficiently embraced in experimental nanoscience[43\u201353]. \nHere, we develop a sequence-to-sequence neural network to extract inference from band-excitation piezoresponse spectroscopy (BEPS). To test our approach, we conducted BEPS on tensile-strained PbZr0.2Ti0.8O3 thin films wherein strain drives the formation of a hierarchical *c*/*a* and *a1*/*a2* domain structure. We develop and train a deep-learning-neural-network-based sparse autoencoder on piezoresponse hysteresis loops to demonstrate parity with conventional empirical-analysis approaches. 
We then apply this approach to extract insight from the resonance response which has a form too complex to be properly analyzed using techniques common in experimental materials science. Using the information \u201clearned\u201d, we identify geometrically-driven differences in the switching mechanism which are related to charged-domain-wall nucleation and growth during ferroelastic switching. This insight could not have been extracted using machine-learning approaches that have been previously applied to materials spectroscopy and provides unprecedented information about the nature of the specific domain-structure geometries that should be explored to enhance local and macroscale susceptibilities. Furthermore, the ability to automate the extraction of inference regarding ferroelectric-switching mechanisms from multichannel nanoscale spectroscopy provides the first step (i.e., machine-learned discrimination) that could be used to design real-time control systems capable of creation and verification of interconversion of functional domain structures and interfaces. The developed approach is extensible to other forms of multi-dimensional, hyperspectral (wherein there is a spectra at each pixel) images which are commonly acquired in experiments such as time-of-flight secondary-ion mass spectrometry[54,55], scanning Raman[56], electron energy loss spectroscopy[57,58], etc. To promote the utilization of this approach, we provide open access to all data and codes in the form of a Jupyter notebook[59] (Supplementary Information). Ultimately, this work represents an example of how unsupervised deep learning can highlight features relating to ferroelectric physics overlooked by human-designed-machine-learning algorithms, and how such approaches can be adapted to analyze hyperspectral data more broadly.\n\n\n## Initialization Code (code)\n\n### Importing Packages (code)\nThis code imports and installs packages that you need to run the code\n\n\n```python\n!pip install -U moviepy natsort tqdm scikit_image scikit_learn scipy\n!pip install pillow==4.2.1\n%tensorflow_version 1.x\n!pip install -U imageio==2.4.1 keras==2.2.3\n```\n\n\n```python\nimport imageio\nimageio.plugins.ffmpeg.download()\n```\n\n### Special Codes for Collaboratory (code)\n#### Provides access to google drive\nThis code provides access to your google drive folder so files can be saved\n\n\n```python\n# if running on collaboratory set = True\ncollaboratory = True\n\nif collaboratory:\n from google.colab import drive\n drive.mount('/content/drive')\nelse: \n print('Running on local systems, if running on collaboratory please change above')\n```\n\n\n```python\ncd drive/My\\ Drive\n```\n\n\n```python\nimport os\nif os.path.exists(\"./Revealing-Ferroelectric-Switching-Character-Using-Deep-Recurrent-Neural-Networks\"):\n pass\nelse:\n !git clone https://github.com/jagar2/Revealing-Ferroelectric-Switching-Character-Using-Deep-Recurrent-Neural-Networks.git\n```\n\n\n```python\ncd Revealing-Ferroelectric-Switching-Character-Using-Deep-Recurrent-Neural-Networks\n```\n\n\n```python\n!git pull\n```\n\n### Imports Packages (code)\nImports packages which are used by jupyter paper\n\n\n```python\n# imports useful packages\nimport warnings\nwarnings.filterwarnings('ignore')\nimport imp\nfrom matplotlib.ticker import FormatStrFormatter\nimport matplotlib.pyplot as plt\nimport codes.analysis.rnn as rnn\nimport codes.util as util\nimport codes.analysis.machine_learning as ml\nimport codes.analysis as an\nimport codes.processing as p\nimport codes.viz 
as viz\nimport codes.util.input_output as io_transfer\nfrom sklearn.decomposition import NMF\nfrom scipy import io\nimport numpy as np\nimport os\nimport os.path\n\n# loads the custom graphing format\nviz.format.custom_plt_format()\n\nplt.style.use('seaborn-white')\n\n\nfrom IPython.display import IFrame\nfrom IPython.display import HTML\n```\n\n### Folders (code)\nDefines a folder structure to save files\n\n\n```python\n# builds folders where the data will be saved\nfolder_structure = util.file.make_folder(\n './structure')\nfolder_BE = util.file.make_folder(\n './Band_Excitation')\nfolder_BE_Movie_files = util.file.make_folder(\n folder_BE + '/BE_Movie_Files')\nfolder_BE_all_images = util.file.make_folder(\n folder_BE + '/BE_all_images')\nfolder_BE_spectra = util.file.make_folder(\n folder_BE + '/BE_spectra')\nfolder_BE_cleaned_spectra = util.file.make_folder(\n folder_BE + '/cleaned_spectra')\nfolder_pca = util.file.make_folder(\n './pca')\nfolder_nmf = util.file.make_folder(\n './nmf')\nfolder_clustering = util.file.make_folder('./clustering')\nfolder_pca_clustering = util.file.make_folder(\n './pca_clustering')\nfolder_piezoresponse_autoencoder = util.file.make_folder(\n './piezoresponse_autoencoder')\nfolder_resonance_autoencoder = util.file.make_folder(\n './resonance_autoencoder')\nfolder_piezoresponse_autoencoder_movie = util.file.make_folder(\n folder_piezoresponse_autoencoder + '/movie')\nfolder_piezoresponse_autoencoder_training_movie = util.file.make_folder(\n folder_piezoresponse_autoencoder + '/training_movie')\nfolder_resonance_autoencoder_movie = util.file.make_folder(\n folder_resonance_autoencoder + '/movie')\nfolder_resonance_autoencoder_training_movie = util.file.make_folder(\n folder_resonance_autoencoder + '/training_movie')\nfolder_phase_field = util.file.make_folder(\n './Phase_Field')\n```\n\n### Download Data (code)\nThis downloads the full trained models and the phase-field simulations data. 
These files are >50 GB and you do not need them for most of the analysis in the Jupyter Paper\n\n\n```python\n# Downloading data for Phase Field simulations and full training data\n# note these are big files >50 gb\ndownload_data = False\n\nurl = 'https://zenodo.org/record/1482091/files/Phase_field.zip?download=1'\nfilename = 'phase_field.zip'\nsave_path = './Raw_Data/Phase_Field/'\n\nio_transfer.download_and_unzip(filename, url, save_path, download_data)\n\nurl = 'https://zenodo.org/record/1482091/files/Trained_models.zip?download=1'\nfilename = 'train_model_zip.zip'\nsave_path = './Trained Models/'\n\nio_transfer.download_and_unzip(filename, url, save_path, download_data)\n```\n\n### Settings (code)\n\n#### Export Figure Settings (code)\n\n\n```python\n# Sets what object to export\nprinting = { # exports eps vector graphics (note these files can be large)\n 'EPS': False,\n # exports png files\n 'PNG': False,\n # prints image series (note this can take some time)\n 'all_figures': False,\n # generates movies (note this can take some time)\n 'movies': False,\n # resolution of the images\n 'dpi': 300}\n```\n\n#### Plotting Format (code)\n\n\n```python\n# sets the plotting format\nplot_format = {\n # adds scalebar to image\n 'add_scalebar': True,\n # sets the dimensions for the scalebar [(size of image),(size of scalebar)]\n 'scalebar': [2000, 500],\n # selects if the image will be rotated\n 'rotation': True,\n # selects the rotation angle of the image\n 'angle': 60.46,\n # sets the fraction of the image to crop\n 'frac_rm': 0.17765042979942694,\n # sets the resolution of the image\n 'dpi': 300,\n # sets the default colormap\n 'color_map': 'viridis',\n # sets if color bars should be added\n 'color_bars': True}\n```\n\n### Loads the Data (code)\n\n\n```python\n# imports the raw band excitation data\nimported = {'data': io.matlab.loadmat('./Raw_Data/Data.mat'),\n 'validation_data': io.matlab.loadmat('Raw_Data/loop_1.mat')}\n\n# extracts the important information from the raw data\nraw = {'voltage': imported['data']['Voltagedata_mixed'],\n 'piezoresponse': imported['data']['Loopdata_mixed'],\n 'amplitude': imported['data']['OutA2_mixed'],\n 'phase': imported['data']['OutPhi1_mixed'],\n 'resonance': imported['data']['Outw2_mixed'],\n 'quality_factor': imported['data']['OutQ2_mixed'],\n 'val_piezoresponse': imported['validation_data']['piezo_1'],\n 'val_resonance': imported['validation_data']['resonance_loop_1']}\n```\n\n### Cleans the Raw Data (code)\n\n\n```python\n# adds a max min filter on the data to remove bad points\np.filters.range_filter(raw['resonance'], [1300, 1340])\np.filters.range_filter(raw['val_resonance'], [1300, 1340])\n\n# interpolates data that is non-real. 
This happens when the SHO fit fails\ninterpolated = {'voltage': raw['voltage'],\n 'piezoresponse': p.filters.clean_interpolate(raw['piezoresponse'],\n 'linear').reshape(-1, raw['piezoresponse'].shape[2]),\n 'amplitude': p.filters.clean_interpolate(raw['amplitude'],\n 'linear').reshape(-1, raw['amplitude'].shape[2]),\n 'phase': p.filters.clean_interpolate(raw['phase'],\n 'linear').reshape(-1, raw['phase'].shape[2]),\n 'resonance': p.filters.clean_interpolate(raw['resonance'],\n 'linear').reshape(-1, raw['resonance'].shape[2]),\n 'quality_factor': p.filters.clean_interpolate(raw['quality_factor'],\n 'linear').reshape(-1, raw['quality_factor'].shape[2]),\n 'val_piezoresponse': p.filters.clean_interpolate(raw['val_piezoresponse'],\n 'linear').reshape(-1, raw['val_piezoresponse'].shape[2]),\n 'val_resonance': p.filters.clean_interpolate(raw['val_resonance'],\n 'linear').reshape(-1, raw['val_resonance'].shape[2])}\n# Uses Savitzky-Golay filter to remove outlier points\nsg_filtered = {'voltage': raw['voltage'],\n 'piezoresponse': p.filters.savgol(interpolated['piezoresponse'], fit_type='linear'),\n 'amplitude': p.filters.savgol(interpolated['amplitude'], fit_type='linear'),\n 'phase': p.filters.savgol(interpolated['phase'], fit_type='linear'),\n 'resonance': p.filters.savgol(interpolated['resonance'], fit_type='linear'),\n 'quality_factor': p.filters.savgol(interpolated['quality_factor'], fit_type='linear'),\n 'val_piezoresponse': p.filters.savgol(interpolated['val_piezoresponse'], fit_type='linear'),\n 'val_resonance': p.filters.savgol(interpolated['val_resonance'], fit_type='linear')}\n\n# normalized the data. This is important for training Neural Networks\nnormalized = {'voltage': raw['voltage'],\n 'piezoresponse': p.filters.normalize(sg_filtered['piezoresponse']),\n 'amplitude': p.filters.normalize(sg_filtered['amplitude']),\n 'phase': p.filters.normalize(sg_filtered['phase']),\n 'resonance': p.filters.normalize(sg_filtered['resonance']),\n 'quality_factor': p.filters.normalize(sg_filtered['quality_factor']),\n 'val_piezoresponse': p.filters.normalize(sg_filtered['val_piezoresponse'],\n sg_filtered['piezoresponse']),\n 'val_resonance': p.filters.normalize(sg_filtered['val_resonance'],\n sg_filtered['resonance'])}\n\n# stores information which helps in making pretty axes.\nsignal_info = {'voltage': dict(\n symbol='voltage',\n format_str='%3.d',\n units='Voltage (V)',\n y_lim=None,\n x_tick=np.linspace(-15, 15, 7),\n pca_range=None),\n 'amplitude': dict(\n symbol='A',\n format_str='%.0e',\n units='Amplitude (Arb. U.)',\n y_lim=None,\n y_tick=[],\n pca_range=None),\n 'phase': dict(\n symbol='Phi',\n format_str='%3.d',\n units='Phase (${^\\circ}$)',\n y_lim=[-110, 110],\n y_tick=np.linspace(-90, 90, 5),\n pca_range=None),\n 'resonance': dict(\n symbol='w',\n format_str='%3.d',\n units='Resonance (kHz)',\n y_lim=[1326, 1329],\n y_tick=np.linspace(1320, 1329, 4),\n pca_range=None),\n 'quality_factor': dict(\n symbol='Q',\n format_str='%3.f',\n units='Quality Factor (Arb. U.)',\n y_lim=[210, 310],\n y_tick=np.linspace(215, 310, 5),\n pca_range=None),\n 'piezoresponse': dict(\n symbol='Piezoresponse',\n format_str='%.0e',\n units='Piezoresponse (Arb. 
U.)',\n y_lim=None,\n y_tick=[],\n pca_range=[-0.29, .29])\n}\n\n# builds a single dictonary to hold all the data\ndata = {'raw': raw,\n 'interpolated': interpolated,\n 'sg_filtered': sg_filtered,\n 'normalized': normalized,\n 'signal_info': signal_info}\n```\n\n# Results: Structural Characterization (Text)\nWe synthesized PbZr0.2Ti0.8O3/Ba0.5Sr0.5RuO3/ NdScO3 (110) heterostructures using pulsed-laser deposition (Methods). The resulting films have a hierarchical domain structure with a sawtooth topography on two length scales (Fig. 1a-d), as the result of primarily out-of-plane polarized *c*/*a*/*c*/*a* [with enhanced out-of-plane (Fig. 1b) and suppressed in-plane (Fig. 1c) piezoresponse] and fully in-plane polarized *a1*/*a2*/*a1*/*a2* [with suppressed out-of-plane and enhanced in-plane piezoresponse] domain bands. This hierarchical domain structure emerges due to the tensile strain which drives the *c*/*a* and *a1*/*a2* domain variants to be nearly energetically degenerate (Supplementary Fig. 1)[6].\n\n**Figure 1 | Surface topography of PbZr0.2Ti0.8O3 with hierarchical domain structures. a,** 3-dimensional tapping mode topography superimposed with topographic information presented in nm as indicated. **b, c,** 3-dimensional tapping mode topography superimposed with the **b,** vertical **c,** lateral piezoresponse amplitude. Color presented in arbitrary units defined by the color scale in the inset. Topographic line traces showing the surface topography across the **d,** *c*/*a*-*a1*/*a2* bands (indicated by the dark-blue-dashed line in a) and e, *c*/*a* bands (indicated by the dark-green-dashed line in a).\n\n## Supplementary Note 1: Additional Structural Information (text)\n\n\nTo support the structural studies provided in the manuscript we provide a modified temperature-strain phase diagram for PbZr0.2Ti0.8O3. Based on this diagram, at room temperature the film should exist with a monodomain *c* domain structure under large compressive strain. As the compressive strain decreases, it is energetically favorable to accommodate some in-plane oriented a domains. As the strain becomes more tensile in nature the domain structure transitions to a purely in-plane oriented *a1*/*a2* domain structure. In this work, we have grown films at $\\approx$1.7% tensile strain, at this intermediate strain there is competition between *c*/*a* and *a1*/*a2* domain structures, which in turn gives rise to the hierarchical domain structure observed.\nAdditionally, detailed reciprocal space mapping studies were conducted about the 220- and 002-diffraction condition of the substrate and film, respectively (Supplementary Figure 1b). From these studies, we observe two sets of five peaks each. The first set of five peaks has out-of-plane lattice parameters nearly identical to what is expected for bulk PbZr0.2Ti0.8O3 (~4.129 \u00c5). There is a central peak aligned with the out-of-plane axis of the substrate (PZTc} 002), and two sets of tilted reflections (PZTc-tilt 002). The first set of tilted reflections are tilted 0.6$^\\circ$ off axis, whereas, the second set of peaks are tilted by 1.2$^\\circ$ relative to the [001]. The second set of five peaks has an out-of-plane lattice parameter of ~3.96 \u00c5, nearly halfway in between the coherently strained a relaxed a domain peak position. Once again there is a central peak aligned with the substrate normal (PZTa 200), surrounded by two sets of peaks (PZTa-tilt 200) with varying tilts. 
The first set of reflections has a moderate tilt of ~1$^\\circ$, whereas the second set of peaks has a more significant tilt of ~1.8$^\\circ$ relative to the [001].\n

\n\n**Supplementary Figure 1 | Additional structural studies.** **a,** Temperature-strain phase diagram for PbZr0.2Ti0.8O3. **b,** Symmetric reciprocal space map of the 400 nm thick PbZr0.2Ti0.8O3/20 nm Ba0.5Sr0.5RuO3/NdScO3 (110) heterostructures. Maps obtained around the substrate 220 diffraction condition. Coherently strained and relaxed peak positions are indicated\n\n\n\n#### Reciprical Space Maps of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with Hierarchical Domain Structures (code)\n\n\n```python\nviz.plot.rsm(imported, printing, folder_structure, plot_format)\n```\n\n**Figure J1 |** Symmetric reciprocal space map of 400 nm thick PbZr${_{0.8}}$Ti${_{0.2}}$O${_{3}}$\u00a0heterostructures supported on NdScO${_3}$\u00a0(110). Map obtained around the substrate 220 diffraction condition.\n\n#### Initial PFM images (code)\n\n\n```python\n# (User) Sets the colorscale of [topography = (initial [-3e-9,3e-9]),\n#amplitude (initial [.5e-11,6.5e-11]),\n# phase (initial [40,260])]\nsignals = {'Topography': dict(\n c_lim=[-3e-9, 3e-9],\n data_loc='HeightOriginal'),\n 'Amplitude': dict(\n c_lim=[.5e-11, 6.5e-11],\n data_loc='AmpOriginal'),\n 'Phase': dict(\n c_lim=[40, 260],\n data_loc='PhaseOriginal')\n}\n\nviz.plot.pfm(signals, imported, printing, folder_structure, 'Inital PFM')\n```\n\n**Figure J2 | Piezoresponse force microscopy images prior to band excitation piezoresponse force microscopy switching.** **a,** topographic and **b,** vertical **c,** phase piezoresponse force microscopy images of as grown 400 nm thick PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ heterostructure supported on NdScO${_{3}}$ (110). \n\n#### Final PFM Images (code)\n\n\n```python\n# (User) Sets the colorscale of [topography = (initial [-3e-9,3e-9]),\n#amplitude (initial [.2e-10,1.5e-10]),\n# phase (initial [50,90])]\nsignals = {'Topography': dict(\n c_lim=[-2e-9, 2e-9],\n data_loc='HeightFinal'),\n 'Amplitude': dict(\n c_lim=[.2e-10, 1.5e-10],\n data_loc='AmpFinal'),\n 'Phase': dict(\n c_lim=[50, 90],\n data_loc='PhaseFinal')\n}\n\nviz.plot.pfm(signals, imported, printing, folder_structure, 'Final PFM')\n```\n\n**Figure J3 | Piezoresponse force microscopy images following band excitation piezoresponse force microscopy switching.** **a,** topographic and **b,** vertical **c,** phase piezoresponse force microscopy images of as grown 400 nm thick PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ heterostructure supported on NdScO${_{3}}$ (110). \n\n### Topography and Piezoresponse (code)\n\n\n```python\n# Description and properties of the plots\nsignals = {'Topography': dict(\n c_lim=[],\n data_loc='topo_mixed'),\n 'Large-Periodicity Line Trace': dict(\n data_loc='topo_ca_caca_mixed',\n x_lim=[0, 2],\n y_lim=[-4, 2],\n shift=0),\n 'Small-Periodicity Line Trace': dict(\n data_loc='topo_mixed_caca',\n x_lim=[0, .5],\n y_lim=[0, 2],\n shift=0.8),\n 'Vertical Amplitude': dict(\n c_lim=[0, 4.5e-10],\n data_loc='Vert_Amp_mixed'),\n 'Vertical Phase': dict(\n c_lim=[],\n data_loc='vert_phase_mixed'),\n 'Lateral Amplitude': dict(\n c_lim=[0, .8e-11],\n data_loc='lateral_amp_mixed'),\n 'Lateral Phase': dict(\n c_lim=[],\n data_loc='lateral_phase_mixed')\n}\n\n# plots the PFM images and line traces across those images.\nviz.plot.pfm_w_line_trace(signals, imported, printing, folder_structure)\n```\n\n**Figure J4 | Piezoresponse force microscopy images of 400 nm thick PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$\u00a0heterostructures supported on NdScO${_3}$\u00a0(110). 
a,** Topography **b,** Line trace indicating the large scale sawtooth-like topography between the *c/*a/*c/*a and *a${_1}$/*a${_2}$/*a${_1}$/*a${_2}$ domain regions. **c,** Line trace indicating the small scale sawtooth-like topography within the c/a/c/a domain bands. Images of piezoresponse vertical **d,** amplitude and **e,** phase and lateral **f,** amplitude and **g,** phase.\n\n# Results: Band Excitation Piezoresponse Force Microscopy (text)\nTo characterize the nanoscale switching processes, we conducted BEPS (Methods and Supplementary Figs. 2 and 3). Briefly, BEPS measures the piezoresponse in remanence (across a band of frequencies near the cantilever resonance) following a perturbation from a bipolar triangular switching waveform designed to fully switch the material. Following fitting, the raw data from this experiment at every pixel (x,y size = 60, 60) measures the amplitude (*A*), phase ($\\phi$), resonance frequency ($\\omega$), and quality factor (*Q*) of the cantilever resonance, which are qualitative measures of the piezoresponse, polarization direction, stiffness, and dampening, respectively, at various voltages (V length = 96). At the most basic level, visualization of the switching processes can be achieved by creating movies from the images of the signals throughout the switching process or by plotting the response curves at a specific location or within a predefined area (Supplementary Movie 1). Additionally, it is common to compute at each tip position a piezoresponse loop ($A\\cos{\\phi}$) which can then be fit to a 15-parameter empirical function. While such approaches are capable of visualizing large differences in the piezoresponse they provide only limited information into subtle differences of the response, which contains important insight (Supplementary Information Fig. 4). While in the raw form the data might occupy a N-dimensional space the information of physical significance lies on a data manifold with a much lower dimensionality, however, we have no means to predict the manifold.\n\n## Supplementary Note 2: Band Excitation Piezoresponse Force Microscopy (text)\nTraditional approaches to scanning probe microscopy and, in particular PFM, tend to rely on periodically exciting a cantilever using a single-frequency excitation at, or very near to, the cantilever resonance to perturb the sample, invoking a response in the cantilever amplified by its resonance. In single-frequency excitation, the response of the cantilever, and ultimately the signal of interest, is measured using a lock-in amplifier[s1]. While powerful for imaging highly-responsive materials with large domain features, the limitations of a single-frequency approach become obvious even when considering a simple idealized model of the cantilever resonance in the form of a simple harmonic oscillator (SHO). In this case, the response of the cantilever is primarily determined by the resonance frequency (defined by the tip-surface spring constant), amplitude (a measure of the response), and quality factor of cantilever resonance (i.e., tip-surface dissipation), all of which are convoluted by the tip-surface interaction[2]. 
While more advanced approaches based on phase-locked loops (which use external circuitry to try to maintain the system at resonance) or dual-frequency resonance tracking (which excites, measures, and then tracks the resonance using two frequencies) provide a more accurate measure of piezoresponse, they still only provide minor improvements applicable under the strictest set of assumptions[3].\nTo overcome these bandwidth limitations and accurately measure piezoresponse without artifacts imposed by changing cantilever dynamics, it is crucial to measure the cantilever response over a large bandwidth near the cantilever resonance (Supplementary Figure 2a). To do this, we used band-excitation piezoresponse force microscopy (BE-PFM), which uses a computer-generated waveform (Supplementary Figure 2b) spanning a band of frequencies near the measured cantilever resonance to electrically perturb the material (Supplementary Figure 2c)[S2]. The response of the cantilever can then be measured with a high-speed data acquisition system and subsequently Fourier transformed (Supplementary Figure 2d) into the frequency domain. Following data collection, assuming that the tip-sample interaction is weak, the amplitude and phase can be fit to a SHO model as described in equations 1-2, where *A0* and $\\omega_0$ are the amplitude and frequency at resonance.\n\n
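To make the band-excitation waveform described above concrete, the short sketch below (a minimal illustration, not the acquisition code used for these measurements) constructs a sinc-type excitation centered on an assumed contact resonance and checks that its spectral weight is confined to the chosen band; the center frequency, bandwidth, and sampling rate are illustrative assumptions only.\n\n```python\nimport numpy as np\n\n# illustrative (assumed) parameters, not the values used in the experiment\nf_center = 1.327e6   # assumed contact resonance frequency [Hz]\nbandwidth = 60e3     # width of the excited frequency band [Hz]\nf_sample = 10e6      # sampling rate of the waveform generator [Hz]\nn_points = 2 ** 14   # number of samples in the excitation waveform\n\n# sinc envelope excites a nearly flat band of width `bandwidth` centered at f_center\nt = (np.arange(n_points) - n_points / 2) / f_sample\nwaveform = np.sinc(bandwidth * t) * np.cos(2 * np.pi * f_center * t)\n\n# frequency-domain view of the excitation (compare with Supplementary Figure 2a)\nspectrum = np.abs(np.fft.rfft(waveform))\nfreqs = np.fft.rfftfreq(n_points, d=1 / f_sample)\nin_band = (freqs > f_center - bandwidth) & (freqs < f_center + bandwidth)\nprint('fraction of spectral weight inside the band: %.3f' % (spectrum[in_band].sum() / spectrum.sum()))\n```\n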

\n\n\n**Supplementary Figure 2 | Schematic illustration of workflow for band-excitation piezoresponse force microscopy (BE-PFM) imaging. a,** Typical frequency domain plot of BE signal chosen to excite the cantilever near the cantilever resonance. **b,** Example BE-excitation sinc waveform in time domain used to excite the tip at a range of frequencies. **c,** Schematic drawing of AFM cantilever in contact with surface. **d,** Typical cantilever resonance response shown in frequency domain. Red-dashed line shows the fit, based on equations 1-2. Characteristic variables which define the cantilever/material response are indicated.\n

\n\n$$\n\\begin{equation}\nA(\\omega)=\\frac{A_{0}\\omega^2_0}{\\sqrt{(\\omega^2-\\omega^2_0)^2+(\\omega\\omega_0/Q)^2}}\n\\tag{1}\n\\end{equation}\n$$\n\n$$\n\\begin{equation}\n\\tan{(\\theta(\\omega))}=\\frac{\\omega\\omega_0/Q}{\\omega^2-\\omega_0^2}\n\\tag{2}\n\\end{equation}\n$$\n\nFollowing fitting (Supplementary Figure 2d), the error of the fits can be evaluated (i.e., the quality of the deconvolution), and if good, the data yields (x,y) maps of resonance amplitude (*A0*), resonance frequency ($\\omega_0$), and quality factor (*Q*) as well as the phase ($\\theta$) of the response. Therefore, with the proper care, the application of BE-PFM enables the exclusion of cross-talk (associated with position-dependent changes in the cantilever resonance), minimizing the contribution from the tip-surface interaction.\n\nBuilding off the BE-based imaging technique, it is possible to add additional dimensionality to such measurements, providing deeper insight into the response of the material. Specifically, instead of just measuring $\\{A,\\theta\\}(x,y,\\omega)$ we can add an additional dc-voltage dimensionality to the measurement [that is, $\\{A,\\theta\\}(x,y,\\omega, V_{dc})$] enabling the measurement of local piezoelectric hysteresis loops while taking advantage of the enhanced measurement precision provided by band excitation. To do this, we superimposed an *n* x *n* grid on a previously scanned region of interest (Supplementary Figure 3a). At each point, a full bipolar triangular switching waveform is applied to the cantilever (Supplementary Figure 3b-d) and readout is conducted in the off-state (that is, remanent state) by superimposing a band-excitation waveform (sense pulse). This process happens rapidly (total elapsed time <5 ms). Following data acquisition, the data is fit using a SHO model as previously described (Supplementary Figure 3e), yielding data of the form $\\{A_0, \\omega_0, Q, \\theta_0\\}(x, y, V_{dc})$. By optimizing the rotation angle ($\\phi$) to maximize the real component of the hysteresis loop mixed-signal ($A_0\\cos{\\phi}$), it is possible to generate local piezoelectric hysteresis loops of the same general form as typical macroscopic ferroelectric hysteresis loops (Supplementary Figure 3f).\n
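\nAs a worked illustration of equations 1-2 (a synthetic sketch, not the SHO fitting routine used for the data in this notebook), the amplitude and phase response of the cantilever can be evaluated for an assumed set of resonance parameters; the numbers below are placeholders chosen only to be of the same order as the measured resonance.\n\n```python\nimport numpy as np\n\n# assumed SHO parameters, for illustration of equations 1-2 only\nA0 = 1.0      # drive amplitude (Arb. U.)\nw0 = 1327.0   # resonance frequency (kHz)\nQ = 250.0     # quality factor\n\n# frequency band around the resonance (kHz)\nw = np.linspace(1317.0, 1337.0, 501)\n\n# equation 1: amplitude of the driven simple harmonic oscillator\nA = A0 * w0 ** 2 / np.sqrt((w ** 2 - w0 ** 2) ** 2 + (w * w0 / Q) ** 2)\n\n# equation 2: phase of the response (arctan2 selects a continuous branch)\ntheta = np.arctan2(w * w0 / Q, w ** 2 - w0 ** 2)\n\n# at resonance the amplitude is enhanced by the quality factor (A = Q * A0)\nprint('amplitude at resonance: %.1f' % A[np.argmin(np.abs(w - w0))])\n```\n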

\n\n\n\n**Supplementary Figure 3 | Schematic illustration of workflow for band excitation piezoresponse spectroscopy measurements (BEPS). a,** Schematic of the sampling, and fast and slow scan direction used for imaging. **b,** Example BE chirp waveform in time domain used to excite the tip at a range of frequencies. **c,** Triangular switching waveform used to locally switch the film. **d,** Schematic drawing of AFM cantilever in contact with surface. **e,** Typical cantilever resonance response shown in frequency domain. Red dashed line shows the fit, based on equations 1-2. **f,** Typical piezoelectric hysteresis loop obtained from BEPS.\n\n

\n\nIt is worth emphasizing that all units are intentionally reported in arbitrary units. To measure the deflection of the cantilever, it is standard practice to use the beam bounce approach. In this approach, a laser is reflected off the back of the cantilever and the angular displacement, not the vertical displacement, is measured by a quadrant photodiode. To accurately convert the angular dependence into a displacement requires a number of assumptions to be made about the cantilever mode shape. While these assumptions are fairly accurate in air, they deviate from ideality when the cantilever is in contact with the surface. In turn, the measured effective piezoresponse using the beam bounce approach can easily vary by orders of magnitude depending on the laser spot position and contact resonance frequency[4]. Such values of piezoresponse commonly reported in the literature generally do not include consideration for these effects (unless specifically mentioned) and thus should be considered as being reported in arbitrary units.\n\n## Supplementary Movie 1: Band Excitation Piezoresponse Switching Spectroscopy\n\n\n```python\nHTML('')\n```\n\n**Supplementary Movie 1. Raw image series obtained from band excitation piezoresponse force microscopy of PbZr0.2Ti0.8O3 thin films with hierarchical domain structures.** Image series of the amplitude, phase, resonance, and quality factor of the cantilever resonance is shown. Bottom figure tracks the progress of switching using a bipolar triangular waveform. \n\n### Visualize Cleaned Data (code)\n\n\n```python\n# Selects a random index to plot\n#i = np.random.randint(3600)\n# if user wants to show a specific point\ni = 100\n\n# Plots the raw data (black) and cleaned data (red)\nviz.plot.cleaned_data(data, i, printing, folder_BE_cleaned_spectra)\n```\n\n**Figure J5 | Images showing preprocessing of data. a,** Piezoresponse **b,** amplitude **c,** phase **d,** resonance frequency **e,** quality factor. 
Raw data is shown in black, processed data shown in red.\n\n## Band Excitation Piezoresponse Force Microscopy - Basic Analysis (code)\n\n### Exports all images (code)\n\n\n```python\n# Checks if user selected to export all figures\nif printing['all_figures']:\n\n # (User) Sets the colorscale {Initial Amplitude = [0.0020e-3, 0.1490e-3]; Phase = [-265,-30];\n # Resonance = [1317,1330]; Quality Factor = [175,270]}\n signal_clim = {('Amplitude', 'A'): [0.0020e-3, 0.1490e-3],\n ('Phase', 'Phi'): [-265, -30],\n ('Resonance', 'w'): [1317, 1330],\n ('Quality Factor', 'Q'): [175, 270],\n }\n\n # prints all images from the switching studies\n viz.plot.band_excitation(imported['data'], signal_clim, plot_format, printing,\n folder_=folder_BE_all_images)\n```\n\n### Export Images for Movie (code)\n\n\n```python\nif printing['movies']:\n # (User) Sets the colorscale {Initial Amplitude = [0.0020e-3, 0.1490e-3]; Phase = [-265,-30];\n # Resonance = [1317,1330]; Quality Factor = [175,270]}\n signal_clim = {('Amplitude', 'A', '%.0e'): [0.0020e-3, 0.1490e-3],\n ('Phase', 'Phi', '%.0d'): [-265, -30],\n ('Resonance', 'w', '%.0d'): [1317, 1330],\n ('Quality Factor', 'Q', '%.0d'): [175, 270],\n }\n\n # creates the images used to make the movie of the switching studies\n viz.plot.band_excitation_movie(imported, signal_clim, \n plot_format, printing, folder = folder_BE_Movie_files)\n```\n\n\n```python\n# creates the movie of the switching studies\nif printing['movies']:\n util.file.make_movie('BE_Switching', folder_BE_Movie_files, folder_BE, 'png',\n 4, output_format='mp4')\n```\n\n### Plot Raw Band Excitation Spectra (code)\n\n\n```python\n# (User) selects index (index used in main manuscript as example [30,30], cycle 2)\nx = 30\ny = 30\ncycle = 2\n\n# Sets the information for plotting. (User) can adjust scales.\nsignal_clim = {'Amplitude': dict(\n symbol='A',\n format_str='%.0e',\n units='(Arb. U.)',\n y_lim=[],\n y_tick=[]),\n 'Phase': dict(\n symbol='Phi',\n format_str='%3.d',\n units='(${^\\circ}$)',\n y_lim=[-110, 110],\n y_tick=np.linspace(-90, 90, 5)),\n 'Resonance': dict(\n symbol='w',\n format_str='%3.d',\n units='(kHz)',\n y_lim=[1326, 1329],\n y_tick=np.linspace(1320, 1329, 4)),\n 'Quality Factor': dict(\n symbol='Q',\n format_str='%3.f',\n units='',\n y_lim=[210, 310],\n y_tick=np.linspace(215, 310, 5)),\n 'Piezoresponse': dict(\n symbol='Piezoresponse',\n format_str='%.0e',\n units='(Arb. U.)',\n y_lim=[],\n y_tick=[])\n}\n\n# plots the raw BE spectra\nviz.plot.band_excitation_spectra(x, y, cycle, imported['data'],\n signal_clim, printing, folder_BE_spectra)\n```\n\n**Figure J6 |** Example raw piezoresponse loops acquired during band excitation piezoresponse spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures. Showing **a,** amplitude, **b,** phase, **c,** resonance, **d,** quality factor, and **e,** piezoresponse (Acos${\\phi}$) loop.\n\n## Supplementary Note 3: Band Excitation Piezoresponse Force Microscopy Loop Fitting (text)\nSpatial maps of the various loop fitting parameters obtained from loop fitting are provided (Supplementary Figure 4). While the major characteristic features of the loops are extracted using this approach these maps provide limited insights into the subtitle differences in switching behavior which underpin the physical response. Additionally, in the exploration of the quality of the loop-fitting results, it is apparent that the function lacks the complexity to fit the varied response types observed. 
The addition of more parameters to the fitting function could resolve this issue; however, it would likely result in overfitting.\n
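\nThe 15-parameter empirical loop-fitting function referred to above is part of the upstream analysis and is not reproduced here; purely to illustrate the idea of fitting a hysteresis branch with an empirical function, the sketch below fits a synthetic branch with a much simpler saturating (hyperbolic-tangent) form using scipy. The functional form, parameter names, and data are all illustrative assumptions.\n\n```python\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef branch(v, a_low, a_high, v_c, width):\n    # simple saturating transition from a_low to a_high centered at v_c\n    return a_low + (a_high - a_low) * 0.5 * (1 + np.tanh((v - v_c) / width))\n\n# synthetic 'measured' branch for an increasing voltage sweep\nrng = np.random.default_rng(0)\nvoltage = np.linspace(-15, 15, 48)\nresponse = branch(voltage, -1.0, 1.0, 4.0, 2.0) + 0.05 * rng.standard_normal(voltage.size)\n\n# fit the branch and report the extracted parameters\npopt, _ = curve_fit(branch, voltage, response, p0=[-1.0, 1.0, 0.0, 1.0])\nprint('fitted [a_low, a_high, v_c, width]:', np.round(popt, 2))\n```\n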

\n\n**Supplementary Figure 4 | Spatial maps of loop fitting parameters obtained from band excitation switching spectroscopy of PbZr0.2Ti0.8O3 with hierarchical domain structures.**\n\n### Loop Fitting Results (code)\n\n\n```python\n# Sets the information for plotting. (User) can adjust scales.\nsignal_clim = {'a1': dict(\n label='a${_1}$',\n data_loc='a1_mixed',\n format_str='%.1e',\n c_lim=[-1.5e-4, 0]),\n 'a2': dict(\n label='a${_2}$',\n data_loc='a2_mixed',\n format_str='%.1e',\n c_lim=[0, 1.5e-4]),\n 'a3': dict(\n label='a${_3}$',\n data_loc='a3_mixed',\n format_str='%.1e',\n c_lim=[-1e-6, 3e-6]),\n 'b1': dict(\n label='b${_1}$',\n data_loc='b1_mixed',\n format_str='%.1f',\n c_lim=[0, 10]),\n 'b2': dict(\n label='b${_2}$',\n data_loc='b2_mixed',\n format_str='%.1f',\n c_lim=[0, 50]),\n 'b3': dict(\n label='b${_3}$',\n data_loc='b3_mixed',\n format_str='%.1f',\n c_lim=[0, 12]),\n 'b4': dict(\n label='b${_4}$',\n data_loc='b4_mixed',\n format_str='%.1f',\n c_lim=[0, 25]),\n 'b5': dict(\n label='b${_5}$',\n data_loc='b5_mixed',\n format_str='%.1f',\n c_lim=[0, 12]),\n 'b6': dict(\n label='b${_6}$',\n data_loc='b6_mixed',\n format_str='%.1f',\n c_lim=[0, 12]),\n 'b7': dict(\n label='b${_7}$',\n data_loc='b7_mixed',\n format_str='%.1f',\n c_lim=[-15, 15]),\n 'b8': dict(\n label='b${_8}$',\n data_loc='b8_mixed',\n format_str='%.1f',\n c_lim=[-15, 15]),\n 'Loop Area': dict(\n label='Raw Area',\n data_loc='Acosarea_mixed',\n format_str='%.1e',\n c_lim=[5e-4, 4e-3]),\n 'Fitted Loop Area': dict(\n label='Fitted Area',\n data_loc='Acosareafit_mixed',\n format_str='%.1e',\n c_lim=[5e-4, 4e-3]),\n 'Raw/Fitted Loop Difference': dict(\n label='Raw/Fitted Diff.',\n data_loc='Acosareadif_mixed',\n format_str='%.1e',\n c_lim=[0, 1.5]),\n 'Raw Amplitude Centroid': dict(\n label='Raw Amp. Cent.',\n data_loc='AcoscentAc_mixed',\n format_str='%.1e',\n c_lim=[-2e-5, 2e-5]),\n 'Fitted Amplitude Centroid': dict(\n label='Fitted Amp. Cent.',\n data_loc='AcoscentAcfit_mixed',\n format_str='%.1e',\n c_lim=[-2e-5, 2e-5]),\n 'Raw Voltage Centroid': dict(\n label='Raw Volt. Cent.',\n data_loc='AcoscentV_mixed',\n format_str='%.1f',\n c_lim=[-1, 4]),\n 'Fitted Voltage Centroid': dict(\n label='Fitted Volt. Cent.',\n data_loc='AcoscentVfit_mixed',\n format_str='%.1f',\n c_lim=[-1, 4]),\n 'Loop Height': dict(\n label='Height',\n data_loc='Acosheight_mixed',\n format_str='%.1e',\n c_lim=[5e-5, 2.5e-4]),\n 'Loop Width': dict(\n label='Width',\n data_loc='Acoswidth_mixed',\n format_str='%.1f',\n c_lim=[12, 18]),\n 'Left Coercive field': dict(\n label='Left E${_c}$',\n data_loc='Al_mixed',\n format_str='%.1f',\n c_lim=[4, 11]),\n 'Right Coercive field': dict(\n label='Right E${_c}$',\n data_loc='Au_mixed',\n format_str='%.1f',\n c_lim=[4, 11]),\n 'Negative Nucleation Bias': dict(\n label='Neg. Nuc. Bias',\n data_loc='Acosnegnuc_mixed',\n format_str='%.1f',\n c_lim=[0, 6]),\n 'Positive Nucleation Bias': dict(\n label='Pos. Nuc. Bias',\n data_loc='Acosposnuc_mixed',\n format_str='%.1f',\n c_lim=[0, 6]),\n 'Loop Twist': dict(\n label='Twist',\n data_loc='Acostwist_mixed',\n format_str='%.1e',\n c_lim=[0, 2.5e-2]),\n 'Optimum Rotation Angle': dict(\n label='Opt. Rot. Angle',\n data_loc='optrotang_mixed',\n format_str='%.1f',\n c_lim=[235, 240]),\n 'Normalized Amplitude Centroid': dict(\n label='Norm. Amp. Cent.',\n data_loc='NormAcCent_mixed',\n format_str='%.1f',\n c_lim=[-15, 15]),\n 'Normalized Voltage Centroid': dict(\n label='Norm. Volt. 
Cent.',\n data_loc='NormVCent_mixed',\n format_str='%.1f',\n c_lim=[-10, 30])}\n\nviz.plot.loopfits(imported['data'], signal_clim,\n printing, folder_BE, plot_format)\n```\n\n**Figure J7 | Spatial maps of loop fitting parameters obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures. a,** a${_1}$ - represents the lowest piezoresponse amplitude. **b,** a${_2}$ - represents the highest piezoresponse value. **c,** a${_3}$ - Loop rotation as defined by tan${\\delta}$. **d-g,** b${_{1-4}}$ - parameters specifying the curvature of the loop transitions. **h-i,** b${_{5-6}}$ - parameter specifying the rate of transitions between the curvatures of the loop. **j-k,** b${_{7-8}}$ - parameter specifying the voltage midpoint of the transitions. **l-m,** Raw (fitted) loop area the area enclosed by the raw (fitted) loop, representative of the work of switching. **n,** Area differential, the absolute difference between the area enclosed by the raw and fitted loop. **o-p,** Raw (fitted) amplitude centroid the center of mass of the amplitude of the raw (fitted) piezoresponse loop. **q-r,** Raw (fitted) voltage centroid the center of mass of the raw (fitted) piezoresponse loop. **s,** Loop height the vertical height in amplitude of the piezoelectric hysteresis loop. **t,** Loop width in volts. **u-v,** Left/Right E${_c}$ negative/positive piezoelectric coercive fields. **w-x,** Negative/positive nucleation bias, representing the voltage where the piezoresponse has changed by 3% of the loop height. **y,** Loop twist, the twists in shape of the piezoelectric hysteresis loops. **z,** Optimum rotation angle, the optimum ${\\phi}$ found which maximizes Acos${\\phi}$. **aa-ab,** Loop height (width) normalized amplitude (voltage) centroids. \n\n#Results: Linear Machine Learning Analysis (text)\nRecognizing these limitations, statistical approaches of machine learning have been applied to predict the data manifold, thus allowing more insight to be extracted from BEPS data. To demonstrate the necessity of the proposed deep-learning approach, we have conducted careful analysis using linear decomposition algorithms including: principal-component analysis and non-negative-matrix factorization, and a variety of clustering algorithms (Supplementary Figs. 5-7). All told, while these methods are able to identify the most significant differences in the response, they have minimal sensitivity to quantify subtle, yet physically significant, features. This is at least in part due to the decoupling of the voltage (i.e., the time or temporal) dependence of the spectra which are viewed as independent samples \u2013 meaning that these algorithms exclude the history of when each data point was collected.\n\n##Supplementary Note 4: Decomposition Algorithms \u2013 Principal Component Analysis (text)\nOne common approach used to visualize statistical variance in high-dimensional data (e.g., BEPS) is principal component analysis (PCA). PCA is a statistical method that converts a set of observations (in this case piezoresponse values at each voltage), to a set of principal components which are linearly uncorrelated (orthogonal to each other). In simple terms, this approach finds perpendicular directions in the data volume of maximal variance, such that the data can be projected onto lines in those direction with minimal loss of information. These principal components are ranked in the order of the variance of the data which they represent. 
Thus, if the data is highly correlated, nearly all the information of the dataset can be represented by a subset of the highest-ranked principal components. These principal components can be used to identify correlations in the data set by themselves; however, they have no physical significance and thus are difficult to interpret. Since the principal components are ranked in the order of the variance in the data, most of the information in the data set is contained within the higher-ranking principal components, whereas the lower-ranking principal components contain uncorrelated noise.\nTo represent this, we show the eigenvalues (top) and eigenvectors (bottom) of the first 16 principal components (Supplementary Fig. 5) of the piezoelectric hysteresis loops. Looking at the first eigenvalue map, it is evident that the majority of the information is represented by this first principal component and that this component has a response similar to the piezoelectric hysteresis loops. As we look at the higher-order principal components, we notice a reduction in the visual similarity between the loading maps and the film structure. The principal components beyond the first have recognizable features but are increasingly complex and thus are difficult to interpret. Beyond the first few principal components, the components have increased complexity and the loading maps show reduced correlations to the domain structure. This indicates that they represent uncorrelated information or noise and therefore are of minimal significance. Additional analysis of other signal channels is provided (Jupyter Notebook J9-13).\n\n
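For readers who want to see what the PCA cells below do conceptually, the following is a minimal scikit-learn sketch; the array shape and variable names are assumptions (random data stands in for the piezoresponse loops), and the repository helper `ml.pca` used below may differ in its details.\n\n```python\nimport numpy as np\nfrom sklearn.decomposition import PCA\n\n# random stand-in for the piezoresponse data: 60 x 60 pixels, 96 voltage steps\nn_x, n_y, n_v = 60, 60, 96\nloops = np.random.rand(n_x * n_y, n_v)\n\n# each hysteresis loop is one sample; each voltage step is one feature\npca = PCA(n_components=16)\nscores = pca.fit_transform(loops)              # per-pixel weights of each component\neigenvectors = pca.components_                 # voltage-dependent component shapes\nloading_maps = scores.reshape(n_x, n_y, -1)    # spatial maps of the scores\n\nprint('variance explained by the first component: %.2f' % pca.explained_variance_ratio_[0])\n```\n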

\n\n**Supplementary Figure 5 | Principal component analysis of piezoresponse hysteresis loops obtained from band excitation piezoresponse force microscopy of 400 nm PbZr0.2Ti0.8O3 films with hierarchical domain structures.** Figures on the top show the loading maps (eigenvectors) and bottom shows the principal components (eigenvalues).\n\n### Principal Component Analysis (code)\n\n#### Piezoresponse (code)\n\n\n```python\n# creates a dictionary to store the machine learning results\nmachine_learning = {'pca': dict(),\n 'nmf': dict(),\n 'clustering': dict(),\n 'pca_clustering': dict()}\n```\n\n\n```python\n# Computes the PCA\n# second index represents the number of components to compute\nmachine_learning['pca']['piezoresponse'], _ = ml.pca(\n sg_filtered['piezoresponse'], 16)\n\n# Plots the PCA results\nviz.plot.pca_results(machine_learning['pca']['piezoresponse'], data,\n signal_info, printing, folder_pca,\n plot_format, 'piezoresponse', filename='piezoresponse')\n```\n\n**Figure J8 | Principal component analysis the piezoresponse obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Amplitude (code)\n\n\n```python\n# Computes the PCA\n# second index represents the number of components to compute\nmachine_learning['pca']['amplitude'], _ = ml.pca(sg_filtered['amplitude'], 16)\n\n# plots the pca results\nviz.plot.pca_results(machine_learning['pca']['amplitude'], data,\n signal_info, printing, folder_pca,\n plot_format, 'amplitude', filename='amplitude')\n```\n\n**Figure J9 | Principal component analysis the amplitude obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Phase (code)\n\n\n```python\n# Computes the PCA\n# second index represents the number of components to compute\nmachine_learning['pca']['phase'], _ = ml.pca(sg_filtered['phase'], 16)\n\n# plots the pca results\nviz.plot.pca_results(machine_learning['pca']['phase'], data,\n signal_info, printing, folder_pca,\n plot_format, 'phase', filename='phase')\n```\n\n**Figure J10 | Principal component analysis the phase obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Resonance Frequency (code)\n\n\n```python\n# Computes the PCA\n# second index represents the number of components to compute\nmachine_learning['pca']['resonance'], _ = ml.pca(sg_filtered['resonance'], 16)\n\n# plots the pca results\nviz.plot.pca_results(machine_learning['pca']['resonance'], data,\n signal_info, printing, folder_pca,\n plot_format, 'resonance', filename='resonance')\n```\n\n**Figure J11 | Principal component analysis the resonance frequency obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Quality Factor (code)\n\n\n```python\n# Computes the PCA\n# second index represents the number of components to compute\nmachine_learning['pca']['quality_factor'], _ = ml.pca(\n sg_filtered['quality_factor'], 16)\n\n# plots the pca results\nviz.plot.pca_results(machine_learning['pca']['quality_factor'], data,\n signal_info, printing, folder_pca,\n plot_format, 'quality_factor', filename='quality_factor')\n```\n\n**Figure J12 | Principal component analysis the quality factor obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n##Supplementary Note 5: 
Decomposition Algorithms \u2013 Non-Negative Matrix Factorization (text)\n\nA complementary decomposition algorithm which has seen recent use in the analysis of BEPS data is non-negative matrix factorization (NMF). NMF has gained favor because of its ability to extract non-negative (i.e., all-positive abundance maps), sparse representations of data which are easier to interpret than those obtained from PCA. NMF works by finding a decomposition of positive samples *X* into two matrices, *W*, the spectral signature matrix, and *H*, the abundance matrix, where all of the elements are non-negative. This is accomplished by optimizing the distance *d* between *X* and the matrix product $WH$ using the Frobenius norm (eq. 3-4). To improve the generalization of the optimization, regularization is added to the loss function; typically, the loss function allows for control of the regularization strength ($\\alpha$) and of the $l_1$ and $l_2$ character of the regularization using the $l_1$-ratio ($\\rho$, eq. 4).\n\n$$\n\\begin{equation}\nd_{Fro}(X,WH)=\\frac{1}{2}||X-WH||^2_{Fro}=\\frac{1}{2}\\sum_{i,j}(X_{ij}-(WH)_{ij})^2\n\\tag{3}\n\\end{equation}\n$$\n\n$$\n\\begin{equation}\n\\alpha\\rho||W||_1+\\alpha\\rho||H||_1+\\frac{\\alpha(1-\\rho)}{2}||W||^2_{Fro}+\\frac{\\alpha(1-\\rho)}{2}||H||^2_{Fro}\n\\tag{4}\n\\end{equation}\n$$\n\nTo demonstrate this approach, we conducted NMF on the raw piezoresponse hysteresis loops, wherein the hysteresis loops were shifted to have all non-negative values. In this example we computed the NMF assuming the number of components *n* = 4, with $\\alpha = 1\\times10^{-7}$, and an $l_1$-ratio = 1 to impose sparsity. We show abundance maps (top) and spectral endmembers (bottom) obtained from NMF. From these maps, we again, as expected, see features resembling the underlying domain structure. Specifically, we observe abundance maps which highlight the *c* (*a*) domains (Supplementary Figure 6a-b, top, respectively) and maps which highlight some local variances in the response predominantly in the *c*/*a* (*a1*/*a2*) domains (Supplementary Figure 6c-d, top, respectively). Turning our attention to the endmember spectra, we first notice a response which looks like a classical piezoelectric hysteresis loop (Supplementary Figure 6a, bottom); however, the other endmember components, while having some resemblance to hysteresis loops, have non-classical shapes which are difficult to interpret (Supplementary Figure 6d, bottom). This complication in the interpretability of the results is due to the inability of the algorithm to impose perfect sparsity and thus the response is a linear combination of multiple endmembers. Additional analysis of other signal channels is provided (Jupyter Notebook J14-18).\n\n
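As a small numerical illustration of the objective in equation 3 (using random non-negative data as a stand-in for the shifted hysteresis loops; the decomposition of the measured loops is performed in the cells below), the Frobenius reconstruction error can be evaluated directly from the factor matrices returned by scikit-learn:\n\n```python\nimport numpy as np\nfrom sklearn.decomposition import NMF\n\n# random non-negative stand-in for the shifted piezoresponse loops\nX = np.random.rand(3600, 96)\n\n# same hyperparameters as used in the cells below\nmodel = NMF(n_components=4, init='random', random_state=0, alpha=1e-7, l1_ratio=1)\nW = model.fit_transform(X)   # first factor matrix: one row of coefficients per loop\nH = model.components_        # second factor matrix: one spectrum per component\n\n# equation 3: (one half of) the squared Frobenius distance between X and WH\nd_fro = 0.5 * np.sum((X - W @ H) ** 2)\nprint('0.5 * ||X - WH||_Fro^2 = %.2f' % d_fro)\n```\n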

\n\n**Supplementary Figure 6 | Non-Negative matrix factorization of piezoresponse hysteresis loops obtained from band excitation piezoresponse force microscopy of 400 nm PbZr0.2Ti0.8O3 films with hierarchical domain structures.** Figures on the top show the abundance maps and bottom shows the endmember. spectra. \n\n### Non-Negative Matrix Factorization (code)\n\n#### Piezoresponse (code)\n\n\n```python\n# builds the model for NMF\nmodel = NMF(n_components=4, init='random',\n random_state=0, alpha=1e-7, l1_ratio=1)\n# computes the nmf\nmachine_learning['nmf']['piezoresponse'] = ml.nmf(\n model, data['sg_filtered']['piezoresponse'])\n\n# plots the nmf results\nviz.plot.NMF(data['raw']['voltage'],\n machine_learning['nmf']['piezoresponse'],\n printing,\n plot_format,\n signal_info['piezoresponse'],\n folder=folder_nmf,\n letter_labels=True,\n custom_order=[0, 2, 3, 1])\n```\n\n**Figure J13 | Non-negative matrix factorization of the piezoresponse obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Amplitude (code)\n\n\n```python\n# builds the model for NMF\nmodel = NMF(n_components=4, init='random',\n random_state=0, alpha=1e-7, l1_ratio=1)\n# computes the nmf\nmachine_learning['nmf']['amplitude'] = ml.nmf(\n model, data['sg_filtered']['amplitude'])\n\n# plots the nmf results\nviz.plot.NMF(data['raw']['voltage'],\n machine_learning['nmf']['amplitude'],\n printing,\n plot_format,\n signal_info['amplitude'],\n folder=folder_nmf,\n letter_labels=True,\n custom_order=[0, 2, 3, 1])\n```\n\n**Figure J14 | Non-negative matrix factorization of the amplitude obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Phase (code)\n\n\n```python\n# builds the model for NMF\nmodel = NMF(n_components=4, init='random',\n random_state=0, alpha=1e-7, l1_ratio=1)\n# computes the nmf\nmachine_learning['nmf']['phase'] = ml.nmf(model, data['sg_filtered']['phase'])\n\n# plots the nmf results\nviz.plot.NMF(data['raw']['voltage'],\n machine_learning['nmf']['phase'],\n printing,\n plot_format,\n signal_info['phase'],\n folder=folder_nmf,\n letter_labels=True,\n custom_order=[0, 2, 3, 1])\n```\n\n**Figure J15 | Non-negative matrix factorization of the phase obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Resonance Frequency (code)\n\n\n```python\n# builds the model for NMF\nmodel = NMF(n_components=4, init='random',\n random_state=0, alpha=1e-7, l1_ratio=1)\n# computes the nmf\nmachine_learning['nmf']['resonance'] = ml.nmf(\n model, data['sg_filtered']['resonance'])\n\n# plots the nmf\nviz.plot.NMF(data['raw']['voltage'],\n machine_learning['nmf']['resonance'],\n printing,\n plot_format,\n signal_info['resonance'],\n folder=folder_nmf,\n letter_labels=True,\n custom_order=[0, 2, 3, 1])\n```\n\n**Figure J16 | Non-negative matrix factorization of the resonance frequency obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Quality Factor (code)\n\n\n```python\n# builds the model for NMF\nmodel = NMF(n_components=4, init='random',\n random_state=0, alpha=1e-7, l1_ratio=1)\n# computes the nmf\nmachine_learning['nmf']['quality_factor'] = ml.nmf(\n model, data['sg_filtered']['quality_factor'])\n\n# plots the nmf\nviz.plot.NMF(data['raw']['voltage'],\n 
machine_learning['nmf']['quality_factor'],\n printing,\n plot_format,\n signal_info['quality_factor'],\n folder=folder_nmf,\n letter_labels=True,\n custom_order=[0, 2, 3, 1])\n```\n\n**Figure J17 | Non-negative matrix factorization of the quality factor obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n## Supplementary Note 6: Clustering Algorithms (text)\n\nAnother common approach to distill information from BEPS data is to apply clustering algorithms to group the spectra based on the statistical variance in the response. In this work, we applied the scikit-learn implementation of *k*-means clustering[5]. Specifically, *k*-means clustering partitions *n* samples into *k* groups of equal variance by minimizing the within-cluster sum-of-squares. The algorithm achieves this by randomly initializing *k* cluster centroids and then looping through two steps. The first step assigns each sample to its nearest centroid. In the second step, each centroid position is updated to the mean of the samples assigned to it. This process continues until the difference between the old and new centroids reaches some threshold value. Generally, the number of clusters *k* is not known *a priori*; however, in the case of the hierarchical domain structure of PbZr0.2Ti0.8O3 we know there are discrete *c*/*a* and *a1*/*a2* bands. In turn, we applied divisive *k*-means clustering, wherein we first clustered the data with *k* = 2, isolating the *c*/*a* and *a1*/*a2* domain bands. The identified regions were then clustered again, varying *k* to form spatially continuous clusters which could be interpreted based on our understanding of the domain structure. Most simply, *k*-means clustering can be applied directly to the raw piezoelectric hysteresis loops, wherein each voltage step is treated as an independent dimension in a high-dimensional space. While this approach is able to identify the *c*/*a* and *a1*/*a2* bands, further clustering provides no additional insight (Supplementary Figure 7). For completeness, we explored other clustering methodologies (see Jupyter notebook Figure J19-37).\n\n
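To make the divisive scheme described above concrete, the sketch below shows one way it could be implemented directly with scikit-learn's `KMeans`; the array name `loops` is an illustrative assumption, and the cluster counts simply mirror the `clustering` dictionary used in the cells below. The analysis reported here is performed by the `ml.k_means_clustering` helper shown afterwards.\n\n```python\n# Minimal sketch of divisive k-means clustering (assumes `loops` is an\n# [n_pixels x n_voltage_steps] array); illustrative only.\nimport numpy as np\nfrom sklearn.cluster import KMeans\n\n# stage 1: k = 2 separates the two domain-band types\n# (which label corresponds to c/a versus a1/a2 is arbitrary and must be checked)\nband_labels = KMeans(n_clusters=2, random_state=42).fit_predict(loops)\n\n# stage 2: re-cluster each band separately, varying k per band\n# (5 clusters in the c/a bands and 4 in the a1/a2 bands, as in the cells below)\nsub_labels = np.zeros(len(loops), dtype=int)\nfor band, k in zip((0, 1), (5, 4)):\n    idx = np.where(band_labels == band)[0]\n    sub = KMeans(n_clusters=k, random_state=42).fit_predict(loops[idx])\n    sub_labels[idx] = sub + band * 10  # offset keeps the two bands distinct\n```\n\n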

\n\n**Supplementary Figure 7 | Divisive clustering of piezoresponse hysteresis loops obtained from band excitation piezoresponse force microscopy of 400 nm PbZr0.2Ti0.8O3 films with hierarchical domain structures. a,** Clustering map, **b,** average piezoelectric hysteresis loop within specific cluster (identified in inset).\n\n### Clustering (code)\n\n\n```python\n# Sets the number of clusters in the divisive clustering\nclustering = {'initial_clusters': 2,\n 'c_clusters': 5,\n 'a_clusters': 4}\n\n# Sets the names of the maps\nnames = {('c/a-a${_1}$/a${_2}$', 'cluster_ca'),\n ('a${_1}$/a${_2}$', 'a_map'),\n ('c/a', 'c_map')}\n```\n\n#### Piezoresponse (code)\n\n\n```python\n# clusters the piezoresponse curves\nmachine_learning['clustering']['piezoresponse'] = ml.k_means_clustering(\n data, 'piezoresponse',\n clustering, seed=42)\n\n# plots the cluster maps\nviz.plot.hierarchical_clustering(machine_learning['clustering']['piezoresponse'],\n names,\n plot_format)\n```\n\n**Figure J18 | Divisive clustering of the piezoresponse curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n\n```python\n# sets the y range for the plots\nsignal_info['piezoresponse']['y_lim'] = [-1.5e-4, 1.5e-4]\n\n# plots the cluster maps and average hysteresis loops\nviz.plot.clustered_hysteresis(data['raw']['voltage'],\n data['sg_filtered']['piezoresponse'],\n machine_learning['clustering']['piezoresponse'],\n plot_format,\n signal_info,\n 'piezoresponse',\n printing,\n folder_clustering)\n```\n\n**Figure J19 | Divisive clustering of the piezoresponse curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Amplitude (code)\n\n\n```python\n# clusters the amplitude curves\nmachine_learning['clustering']['amplitude'] = ml.k_means_clustering(\n data, 'amplitude',\n clustering, seed=42)\n\n# plots the amplitude clustering maps\nviz.plot.hierarchical_clustering(machine_learning['clustering']['amplitude'],\n names, plot_format)\n```\n\n**Figure J20 | Divisive clustering of the amplitude curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n\n```python\n# plots the clustering map and average hysteresis loop\nviz.plot.clustered_hysteresis(data['raw']['voltage'],\n data['sg_filtered']['piezoresponse'],\n machine_learning['clustering']['amplitude'],\n plot_format,\n signal_info,\n 'amplitude',\n printing,\n folder_clustering)\n```\n\n**Figure J21 | Divisive clustering of the amplitude curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Phase (code)\n\n\n```python\n# clusters the phase loops\nmachine_learning['clustering']['phase'] = ml.k_means_clustering(\n data, 'phase',\n clustering, seed=42)\n\n# plots the cluster maps\nviz.plot.hierarchical_clustering(machine_learning['clustering']['phase'],\n names, plot_format)\n```\n\n**Figure J22 | Divisive clustering of the phase curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n\n```python\n# plots the clustering map and average hysteresis loop\nviz.plot.clustered_hysteresis(data['raw']['voltage'],\n data['sg_filtered']['piezoresponse'],\n machine_learning['clustering']['phase'],\n plot_format,\n signal_info,\n 'phase',\n printing,\n 
folder_clustering)\n```\n\n#### Resonance Frequency (code)\n\n\n```python\n# clusters the resonance frequency\nmachine_learning['clustering']['resonance'] = ml.k_means_clustering(\n data, 'resonance',\n clustering, seed=42)\n\n# plots the resonance frequency maps\nviz.plot.hierarchical_clustering(machine_learning['clustering']['resonance'],\n names, plot_format)\n```\n\n**Figure J23 | Divisive clustering of the resonance frequency curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n\n```python\n# plots the clusters with average hysteresis loops\nviz.plot.clustered_hysteresis(data['raw']['voltage'],\n data['sg_filtered']['piezoresponse'],\n machine_learning['clustering']['resonance'],\n plot_format,\n signal_info,\n 'resonance',\n printing,\n folder_clustering)\n```\n\n**Figure J24 | Divisive clustering of the resonance frequency curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Quality Factor (code)\n\n\n```python\n# clusters the quality factor curves\nmachine_learning['clustering']['quality_factor'] = ml.k_means_clustering(\n data, 'quality_factor',\n clustering, seed=42)\n\n# plots the cluster maps\nviz.plot.hierarchical_clustering(machine_learning['clustering']['quality_factor'],\n names, plot_format)\n```\n\n**Figure J25 | Divisive clustering of the quality factor curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n\n```python\n# plots the cluster maps and average hysteresis loops\nviz.plot.clustered_hysteresis(data['raw']['voltage'],\n data['sg_filtered']['piezoresponse'],\n machine_learning['clustering']['quality_factor'],\n plot_format,\n signal_info,\n 'quality_factor',\n printing,\n folder_clustering)\n```\n\n**Figure J26 | Divisive clustering of the quality factor curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n### PCA + Clustering (code)\n\n#### Piezoresponse (code)\n\n\n```python\nsignal = 'piezoresponse'\n\n# computes the PCA\neigenvalues = ml.weights_as_embeddings(machine_learning['pca'][signal],\n data['sg_filtered'][signal])\n\n# clusters the PCA results\nmachine_learning['pca_clustering'][signal] = ml.k_means_clustering(\n data, signal,\n clustering, seed=42, pca_in=eigenvalues)\n\n# plots the cluster maps\nviz.plot.hierarchical_clustering(machine_learning['pca_clustering'][signal],\n names, plot_format)\n```\n\n**Figure J27 | Divisive clustering of the first 16 principal components of the piezoresponse curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n\n```python\n# plots the clustering results and average hysteresis curves\nviz.plot.clustered_hysteresis(data['raw']['voltage'],\n data['sg_filtered']['piezoresponse'],\n machine_learning['clustering'][signal],\n plot_format,\n signal_info,\n signal,\n printing,\n folder_pca_clustering)\n```\n\n**Figure J28 | Divisive clustering of the first 16 principal components of the piezoresponse curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Amplitude (code)\n\n\n```python\nsignal = 'amplitude'\n\n# computes the pca\neigenvalues = 
ml.weights_as_embeddings(machine_learning['pca'][signal],\n data['sg_filtered'][signal])\n# clusters the loops\nmachine_learning['pca_clustering'][signal] = ml.k_means_clustering(\n data, signal,\n clustering, seed=42, pca_in=eigenvalues)\n\n# plots the clustering maps\nviz.plot.hierarchical_clustering(machine_learning['pca_clustering'][signal],\n names, plot_format)\n```\n\n**Figure J29 | Divisive clustering of the first 16 principal components of the amplitude curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n\n```python\n# plots the clustering maps and average hysteresis loops\nviz.plot.clustered_hysteresis(data['raw']['voltage'],\n data['sg_filtered']['piezoresponse'],\n machine_learning['clustering'][signal],\n plot_format,\n signal_info,\n signal,\n printing,\n folder_pca_clustering)\n```\n\n**Figure J30 | Divisive clustering of the first 16 principal components of the amplitude curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Phase (code)\n\n\n```python\nsignal = 'phase'\n\n# computes the pca\neigenvalues = ml.weights_as_embeddings(machine_learning['pca'][signal],\n data['sg_filtered'][signal])\n# clusters the pca\nmachine_learning['pca_clustering'][signal] = ml.k_means_clustering(\n data, signal,\n clustering, seed=42, pca_in=eigenvalues)\n# plots the cluster maps\nviz.plot.hierarchical_clustering(machine_learning['pca_clustering'][signal],\n names, plot_format)\n```\n\n**Figure J31 | Divisive clustering of the first 16 principal components of the phase curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n\n```python\n# plots the clustering maps and average hysteresis loops\nviz.plot.clustered_hysteresis(data['raw']['voltage'],\n data['sg_filtered']['piezoresponse'],\n machine_learning['clustering'][signal],\n plot_format,\n signal_info,\n signal,\n printing,\n folder_pca_clustering)\n```\n\n**Figure J32 | Divisive clustering of the first 16 principal components of the phase curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n#### Resonance (code)\n\n\n```python\nsignal = 'resonance'\n\n# computes the pca\neigenvalues = ml.weights_as_embeddings(machine_learning['pca'][signal],\n data['sg_filtered'][signal])\n# clusters the results\nmachine_learning['pca_clustering'][signal] = ml.k_means_clustering(\n data, signal,\n clustering, seed=42, pca_in=eigenvalues)\n# plots the cluster maps\nviz.plot.hierarchical_clustering(machine_learning['pca_clustering'][signal],\n names, plot_format)\n```\n\n**Figure J33 | Divisive clustering of the first 16 principal components of the resonance frequency curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n\n```python\n# plots the clustering maps and average hysteresis loops\nviz.plot.clustered_hysteresis(data['raw']['voltage'],\n data['sg_filtered']['piezoresponse'],\n machine_learning['clustering'][signal],\n plot_format,\n signal_info,\n signal,\n printing,\n folder_pca_clustering)\n```\n\n**Figure J34 | Divisive clustering of the first 16 principal components of the resonance frequency curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical 
domain structures.**\n\n#### Quality Factor (code)\n\n\n```python\nsignal = 'quality_factor'\n\n# computes the pca\neigenvalues = ml.weights_as_embeddings(machine_learning['pca'][signal],\n data['sg_filtered'][signal])\n# computes the cluster maps\nmachine_learning['pca_clustering'][signal] = ml.k_means_clustering(\n data, signal,\n clustering, seed=42, pca_in=eigenvalues)\n# plots the cluster maps\nviz.plot.hierarchical_clustering(machine_learning['pca_clustering'][signal],\n names, plot_format)\n```\n\n**Figure J35 | Divisive clustering of the first 16 principal components of the quality factor curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n\n```python\n# plots the clustering maps and average hysteresis loops\nviz.plot.clustered_hysteresis(data['raw']['voltage'],\n data['sg_filtered']['piezoresponse'],\n machine_learning['clustering'][signal],\n plot_format,\n signal_info,\n signal,\n printing,\n folder_pca_clustering)\n```\n\n**Figure J36 | Divisive clustering of the first 16 principal components of the quality factor curves obtained from band excitation switching spectroscopy of PbZr${_{0.2}}$Ti${_{0.8}}$O${_{3}}$ with hierarchical domain structures.**\n\n# Results: Long-Short Term Memory Neurons (text)\nThus, what is required is to develop an approach that considers the temporal dependence inherent in the data. To do this, we developed a sequence-to-sequence deep learning neural network based on a long-short-term memory (LSTM) recurrent neural network (RNN) autoencoder (henceforth called the autoencoder) which acts as a feature extractor to derive inference from BEPS spectra. LSTM neurons (described in Supplemental Fig. 8) were chosen due to their success in natural language processing (which is analogous in data structure to spectra) wherein order of words (measurements) is important[60]. The autoencoder architecture consists of an encoder, which takes as an input a spectra and outputs a feature vector, and a decoder, which takes this feature vector and returns the input spectra (Fig. 2). By minimizing the mean-squared-reconstruction error of the input spectra the autoencoder \u201clearns\u201d a universal identity function. While building an arbitrarily complex identity function (where the large model capacity assures overfitting) is a fruitless task, building an identity function whose capacity is limited or strongly regularized can produce a generalizable (i.e., with a limited number of characteristic variables) function capable of broader inference. \n
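To make the architecture described above more tangible, the sketch below assembles a simplified version of such an encoder-embedding-decoder model with the Keras API; the layer sizes and regularization values are assumptions loosely based on the description in the following sections, and the actual model used in this work is built by the `rnn.rnn` helper in the code cells further below.\n\n```python\n# Simplified sketch of a sparse LSTM autoencoder (not the exact rnn.rnn model\n# used in this notebook); assumes spectra of 96 voltage steps with one channel.\nfrom tensorflow.keras import Model, layers, regularizers\n\nn_steps, n_enc, n_emb, n_dec = 96, 128, 16, 128\n\ninputs = layers.Input(shape=(n_steps, 1))\n\n# encoder: stacked bidirectional LSTM layers with dropout between layers\nx = layers.Bidirectional(layers.LSTM(n_enc // 2, return_sequences=True))(inputs)\nx = layers.Dropout(0.2)(x)\nx = layers.Bidirectional(layers.LSTM(n_enc // 2, return_sequences=False))(x)\n\n# sparse low-dimensional embedding: ReLU activations with strong l1 regularization\nembedding = layers.Dense(n_emb, activation='relu',\n                         activity_regularizer=regularizers.l1(1e-4))(x)\n\n# decoder: repeat the embedding for every time step and reconstruct the spectrum\nx = layers.RepeatVector(n_steps)(embedding)\nx = layers.Bidirectional(layers.LSTM(n_dec // 2, return_sequences=True))(x)\noutputs = layers.TimeDistributed(layers.Dense(1, activation='linear'))(x)\n\nautoencoder = Model(inputs, outputs)\nautoencoder.compile(optimizer='adam', loss='mse')\n```\n\nOnly two LSTM layers per side are shown for brevity; the sections below stack three to four layers and add batch normalization around the embedding.\n\n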

\n\n**Figure 2 | Drawing of sparse long short-term memory autoencoder. Diagram shows the three components of the neural network: the encoder, embedding layer, and decoder.** Within each of these sections the dimensionality and size of the inputs and outputs are indicated on the right. The diagram shows how temporal data (represented as linearly changing arrows) is considered through the inclusion of recurrent long short-term memory neurons. Single-colored arrows indicate that just a single time step is passed, whereas arrows with gradients imply the passing of temporally dependent vectors. In the figure *l* is the number of encoding layers, *m* is the number of decoding layers, $N_{enc}$ is the number of neurons in the encoding layer, $N_{emb}$ is the number of neurons in the embedding layer, and $N_{dec}$ is the number of neurons in the decoding layer.\n\n## Supplementary Note 7: Long Short-Term Memory Neuron Structure (text)\n\n

\n\n\n**Supplementary Figure 8 | Schematic drawing showing the workflow of a long short-term memory neuron.** The equations used for each neuron are indicated in the diagram. Different components of the neuron are indicated graphically.\n\nIn dealing with data that has a sequential or temporal dependence (e.g., BEPS) it is important that a neural network architecture can consider this temporal dependence. The most effective approaches have been to utilize some form of a gated recurrent neural network (RNN). These networks propagate through time while having a leaky unit which allows information to be passed between time steps without causing the gradient to explode or vanish, in doing so accumulating information about the data over a long duration in time. One particularly common implementation of a gated RNN is the long short-term memory (LSTM) RNN (Supplementary Figure 8). An LSTM, in addition to the memory gate within its substructure, has a second learned gate which controls when to reset or forget the memory state. In more detail, LSTMs have a triptych cell structure. The first cell, the forget cell, takes as inputs the output from the previous time step and the current input. Within the forget gate there is an internal neuron which multiplies the inputs by two weight matrices of size ($N_i \\times N_{ln}$ and $N_{ln} \\times N_{ln}$), where $N_i$ is the number of inputs from the previous layer and $N_{ln}$ is the size of the current layer. The element-wise summation of the result plus a bias $b_0$ of size $N_{ln}$ is passed through a sigmoid non-linearity, resulting in a value between 0 and 1. This output from the forget gate ($f_t$, eq. 5) is element-wise multiplied by the previous memory cell state, in doing so gating or controlling how much information from the previous memory state $M_{t-1}$ is retained.\n\n$$\n\\begin{equation}\nf_t = \\sigma(W_f\\cdot[O_{t-1},I_t]+b_0)\n\\tag{5}\n\\end{equation}\n$$\n\n\nThe second cell within the LSTM neuron, the new memory cell, builds information to pass to the next time step using two internal neurons. The first, a neuron $i_t$ (eq. 6), has the same form as the forget gate and acts to control the magnitude of the new cell state memory. The second neuron takes the same inputs and multiplies these values by two matrices of size ($N_i \\times N_{ln}$ and $N_{ln} \\times N_{ln}$). Following the addition of a bias $b_2$ of size $N_{ln}$, the element-wise summation of the results is then passed through a hyperbolic tangent non-linearity ($\\tilde{M_t}$, eq. 7), which is multiplied by the new memory gate ($i_t$) and element-wise added to the previous cell's memory ($M_{t-1}$) to form the new cell memory state ($M_t$, eq. 8).\n\n$$\n\\begin{equation}\ni_t = \\sigma(W_i\\cdot[O_{t-1},I_t]+b_1)\n\\tag{6}\n\\end{equation}\n$$\n\n\n$$\n\\begin{equation}\n\\tilde{M_t} = \\tanh{(W_m\\cdot[O_{t-1},I_t]+b_2)}\n\\tag{7}\n\\end{equation}\n$$\n\n\n$$\n\\begin{equation}\nM_t = f_t*M_{t-1} +i_t*\\tilde{M_t}\n\\tag{8}\n\\end{equation}\n$$\n\nThe final cell, the new output cell, forms the output $O_t$ for the time step by first computing an output gate of similar form to the previous gates ($h_t$, eq. 9). This gate controls what information from the cell state is passed as an output to the next layer in the model. Finally, the new cell state is passed through a hyperbolic tangent non-linearity (forcing the output to be between -1 and 1). This output is then multiplied in an element-wise fashion by the output cell gate ($h_t$) to form the output ($O_t$, eq. 10). 
In the network, the forward pass is computed for each time step through time. In the training process, the gradient is calculated through time in a process known as backpropagation through time, optimizing each of the 4($N_{ln}^2 + N_i \\times N_{ln}$) weights and 4$N_{ln}$ biases per LSTM neuron.\n\n$$\n\\begin{equation}\nh_t = \\sigma(W_0[O_{t-1},I_t]+b_3)\n\\tag{9}\n\\end{equation}\n$$\n\n\n$$\n\\begin{equation}\nO_t=h_t*\\tanh(M_t)\n\\tag{10}\n\\end{equation}\n$$\n\n# Results: More Details about Autoencoder Structure (text)\nMore explicitly, the first encoding layer accepts a time series as an input which, in this case, is the response from the 96 sequential voltage steps. Each of the encoding layers is composed of LSTM neurons ($N_{enc}$ = 256)[61] with an internal cell structure that enables the retention of long- and short-term temporal dependencies. During training, each layer outputs an abstract representation of the data for each input spectrum. To build more descriptive data abstractions, it is typical to stack encoding layers ($l$ = 3), forming so-called deep neural networks. To minimize overfitting or memorization, dropout ($d$ = 20%)[62] (which minimizes co-adaptation by randomly removing some of the network connections) was applied. The output from the encoder is then passed to an embedding layer consisting of dense neurons ($N_{emb}$ = 16)[63]. This layer creates a low-dimensional representation of different characteristic responses. To make this representation more interpretable, it is beneficial to impose sparsity [i.e., minimize the number of non-zero activations (outputs)]. To do this, two synergistic approaches were applied: first, we restrict the outputs to non-negative values by selecting a rectified-linear activation function ($f(x)=\\max(0,x)$, ReLU). Second, we add strong $l_1$ regularization, which adds an additional contribution to the loss function proportional to the sum of the weights ($\\lambda\\sum_i|W_i|$); thus, only those activations which significantly improve the model\u2019s accuracy are non-zero (this can be visualized in Supplementary Movies 2 and 3). The feature vector is then passed to the decoder, which is structured in an identical fashion to the encoder [decoding layers ($m$ = 3), each with 128 LSTM neurons ($N_{dec}$ = 128)]. The decoder takes the feature vector and transforms it back into the original spectra, such that the network can be optimized to minimize the loss function composed of the mean-squared error and the $l_1$ regularization [($loss =\\frac{1}{n}\\sum_{i=1}^n(Y_i-\\hat{Y_i})^2+\\lambda\\sum_i|W_i|$), Supplementary Fig. 9]. \n\n## Supplementary Note 8: Additional Details about the Autoencoder Architecture (text)\n\nIn our description of the autoencoder architecture, for brevity, we excluded some of the subtle details of the network structure. Here, we provide further details that can aid in the reproducibility of our method (Supplementary Figure 9). Initially, the network starts by taking as an input a mini-batch of response curves through time, where each voltage step is considered an independent time step. Generally, the mini-batches are chosen to be either \u00bc of the data set or the maximum permitted by the graphics processing unit (GPU) on-board memory. The input data is then passed to the first layer of the encoder. The encoding layer is composed of 128 LSTM neurons which function as described in *Section 7* of the supplementary information. 
Within these 128 LSTM neurons, half of the neurons propagate through the data in the forward direction (i.e., from start-to-finish) and the other half in the reverse direction (i.e., from finish-to-start). The inclusion of bidirectionality in the network helps mitigate bias associated with the first time steps in the sequence, which, in turn, tends to give better results. In the encoder, there are four encoding layers, with dropout of 20% applied between each layer. Dropout seeks to minimize co-adaptation in the network by randomly removing 20% of the network connections during training. Since the application of dropout thins the network during training, at test time the weights have to be scaled (on average, by the fraction of connections retained) to compensate for dropout. After the encoder, the output is batch normalized. Batch normalization tends to increase the stability of the neural network by adjusting the shift and the scale of the outputs, subtracting the *batch mean* and dividing by the *batch standard deviation*. This mathematical transformation cannot be done by simply subtracting the mean and dividing by the standard deviation, as this would merely increase the loss function in a way that would be erased by stochastic gradient descent. To solve this issue, two learnable parameters ($\\gamma$, $\\beta$) are used to scale and shift the normalized values such that the network can be optimized with the inclusion of this normalization. More explicitly, batch normalization takes input values $x$ from a mini-batch $\\mathfrak{B}=x_{1\u2026m}$ to compute the batch-normalized output $BN_{\\gamma,\\mathfrak{B}}(x_i)$ as described by eq. 11-14, where $x_i$ is a sample, $m$ is the number of samples, $\\mu_\\mathfrak{B}$ is the batch mean, and $\\sigma_\\mathfrak{B}^2$ is the batch variance. \n\n$$\n\\begin{equation}\n\\mu_\\mathfrak{B} = \\frac{1}{m}\\sum_{i=1}^mx_i\n\\tag{11}\n\\end{equation}\n$$\n\n$$\n\\begin{equation}\n\\sigma_\\mathfrak{B}^2 = \\frac{1}{m}\\sum_{i=1}^m(x_i-\\mu_\\mathfrak{B})^2\n\\tag{12}\n\\end{equation}\n$$\n\n$$\n\\begin{equation}\n\\hat{x_i}=\\frac{x_i-\\mu_\\mathfrak{B}}{\\sqrt{\\sigma_\\mathfrak{B}^2+\\varepsilon}}\n\\tag{13}\n\\end{equation}\n$$\n\n$$\n\\begin{equation}\nBN_{\\gamma, \\mathfrak{B}}(x_i)=\\gamma\\hat{x_i}+\\beta\n\\tag{14}\n\\end{equation}\n$$\n\n
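For readers who prefer code to equations, the sketch below implements a single LSTM forward step (eq. 5-10) and the batch-normalization transform (eq. 11-14) in plain NumPy; the variable names and shapes are illustrative assumptions and are independent of the Keras implementation used in this notebook.\n\n```python\nimport numpy as np\n\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\n\ndef lstm_step(O_prev, I_t, M_prev, W_f, W_i, W_m, W_o, b_0, b_1, b_2, b_3):\n    \"\"\"One LSTM forward step following eq. 5-10.\n\n    Each weight matrix acts on the concatenated vector [O_{t-1}, I_t], which is\n    equivalent to the pair of matrices described in Supplementary Note 7.\n    \"\"\"\n    z = np.concatenate([O_prev, I_t])\n    f_t = sigmoid(W_f @ z + b_0)           # forget gate, eq. 5\n    i_t = sigmoid(W_i @ z + b_1)           # new-memory gate, eq. 6\n    M_tilde = np.tanh(W_m @ z + b_2)       # candidate memory, eq. 7\n    M_t = f_t * M_prev + i_t * M_tilde     # new cell memory, eq. 8\n    h_t = sigmoid(W_o @ z + b_3)           # output gate, eq. 9\n    O_t = h_t * np.tanh(M_t)               # output, eq. 10\n    return O_t, M_t\n\n\ndef batch_norm(x, gamma, beta, eps=1e-5):\n    \"\"\"Batch normalization following eq. 11-14 (x is a mini-batch array).\"\"\"\n    mu = x.mean(axis=0)                    # batch mean, eq. 11\n    var = x.var(axis=0)                    # batch variance, eq. 12\n    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized values, eq. 13\n    return gamma * x_hat + beta            # scale and shift, eq. 14\n```\n\n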

\n\n\n**Supplementary Figure 9 | Schematic drawing showing a detailed view of the neural network architecture.** Arrows indicate the passing of information between different layer. Specialty layers are indicated graphically.\n\nFollowing batch normalization, the output from the hidden layer is passed to the low-dimensional embedding layer. It is important to note that only the last time step in the data is passed to the low-dimensional layer. This reduces the dimensionality of the network such that each spectra is represented by only one value per neuron. Since the encoder is composed of LSTM neurons the final time step of the encoder has information regarding both the short- and long- term time dependencies in the data. This low-dimensional embedding layer is composed of sixteen neurons which are designed to impose sparsity. This is achieved using two synergistic approaches as described in the main text: First, we constrict the outputs to non-negative values by selecting a rectified-linear activation function ($f(x)=\\max(0,x)$\u3017, ReLu). Secondly, we add strong $l_1$ regularization which adds an additional contribution to the loss function proportional to the sum of the weights ($\\lambda\\sum_i|W_i|$) thus, only those activations which significantly improve the model\u2019s accuracy are non-zero. Following the low-dimensional embedding layer, a second batch normalization layer is applied. Immediately preceding the decoder, the output from the batch-normalization layer is repeated to the original vector length. This vector is then passed to the decoder which is constructed with an identical form as the encoder (i.e., having 4 layers, each with 128 LSTM neurons). Following the decoder, the output is passed to a so-called time distributed layer. This layer has a single dense neuron with a linear activation function for each time step in the spectra. In turn, this layer converts the abstract information extracted from the decoder and converts it back to the original time series such that it can be optimized using stochastic gradient descent.\n\n# Results: Piezoresponse Autoencoder (text)\nWe begin by training the autoencoder to analyze the piezoresponse hysteresis loops extracted from BEPS (Methods). Since every spectra analyzed by the autoencoder has a known pixel position it is possible to visualize the \u201clearned\u201d information by computing the output of the low-dimensional layer to form a real space feature map. While this layer would permit an independent feature map for each neuron, the addition of sparsity results in most features having a null value. Here, we show two of the most physically meaningful features (Fig. 3a,b), where the intensity of the maps represents the degree of a \u201clearned\u201d characteristic response form. To aid in the visualization of this information we have provided a line trace of the average activation superimposed on the average topography (Fig. 3c,d). The first feature map (Fig. 3a), shows increased activation within the c/a bands which is maximized on the peak side of the topographical features (Fig. 3c). To visualize what this activation is encoding, we use the decoder of the autoencoder as a generator, allowing us to manually manipulate the activation to see how this neuron alters the piezoresponse (Supplementary Movie 4). 
From the generated piezoresponse hysteresis loops, we observe an increase in the magnitude and the squareness of the loops as we increase the activation of this neuron, as would be expected for regions with increasing c-like character (Fig. 3e). This suggests that the neuron has \u201clearned\u201d the response associated with c domains and, since the map has different magnitudes of response, it provides a way to quantify the c-like switching character spatially. In total, this reveals that the autoencoder is capable of deducing physically interpretable inference from the data in an unsupervised fashion. Moving on to the second feature map (Fig. 3b), we notice a gradient in the activation which is maximized within the $a_1$/$a_2$ bands near the valley boundary which decreases in magnitude as we transition towards the peak boundary (Fig. 3d). Visualizing the loops as we increase this activation reveals a decrease in the magnitude and the emergence of an intermediate step in the piezoresponse loop (black arrow, Fig. 3f). This intermediate step is related to a two-step, three-state (*c* $\\rightarrow$ *a* $\\rightarrow$ *c*) ferroelastic switching process[6].\n\tIt is important to reemphasize that this detailed spatial variance was not apparent using conventional machine-learning approaches. All told, this reveals that the autoencoder is capable of \u201clearning\u201d a complex identity function where each neuron controls a physically meaningful characteristic of the piezoresponse. Not only is the autoencoder readily able to differentiate regions of varied response which correlate to different domain structure variants, but it is also able to quantify the relative response character (i.e., how x-like the response is) which reveals additional complexity. This deviates from traditional approaches which generally provide only qualitative classification of behavior.\n\n
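A minimal sketch of the generator idea described above is shown below; it assumes a trained Keras decoder model (such as the `piezoresponse_decoder` loaded later in this notebook) that maps a single embedding vector to a spectrum, together with an `embeddings` array of low-dimensional activations, so the names and the expected input shape are assumptions for illustration only.\n\n```python\n# Sketch: using the decoder as a generator by sweeping a single activation.\n# Assumes `decoder` maps a [1 x n_emb] embedding to a generated spectrum and\n# `embeddings` is an [n_pixels x n_emb] array of low-dimensional activations.\nimport numpy as np\n\nneuron = 1                                  # index of the activation to sweep\nbase = embeddings.mean(axis=0)              # a typical embedding vector\nlevels = np.linspace(0, embeddings[:, neuron].max(), 8)\n\ngenerated = []\nfor level in levels:\n    vec = base.copy()\n    vec[neuron] = level                     # manually set the activation\n    loop = decoder.predict(vec[np.newaxis, :])  # generated piezoresponse loop\n    generated.append(loop.squeeze())\n```\n\n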

\n\n**Figure 3 | Features learned from low-dimensional layer of the piezoresponse autoencoder. a,d,** Feature maps extracted from low dimensional layer of autoencoder trained on piezoresponse hysteresis loops. Color indicates the magnitude of the latent feature or the activation observed in each spectra at a given pixel position. Activation is mapped in normalized units as shown in colorbar. **b,e** Average activation across the domain bands superimposed onto the average topography. **c,f,** Neural network generated piezoelectric hysteresis loops as the magnitude of the activation is increased. In all figures the color of the curves/images reflect the normalized activation from the low-dimensional representation at that location or from the generated response curve. \n\n## Supplementary Movie 2: Low-Dimensional Layer Training Piezoresponse\n\n\n```python\nHTML('')\n```\n\n**Supplementary Movie 2. Image series showing the activation or output from the piezoresponse autoencoder low-dimensional layer during training.** Images are provided when an epoch results in a reduction in the loss. This movie allows for the visualization of the learning process as well as the effect of the sparsity constraint.\n\n## Supplementary Movie 3: Piezoresponse Generator Movie\n\n\n```python\nHTML('')\n```\n\n**Supplementary Movie 4. Movie showing the generated piezoresponse curves obtained from the autoencoder as a single activation is changed.** The activation changing is represented by the activation map shown in the inset. The colors of the curve match the colors presented in their related activation maps. This movie provides a way to visualize how the activation (as shown in the activation maps) influences the piezoresponse. \n\n## Piezoresponse Autoencoder (code)\n\n### Building the model (code)\n\n\n```python\n# selects the folder where the pre-trained models are located\nmodel_folder = './Trained Models/Piezoresponse/Bidirect_lstm_size064_enc4_emb16_dec4_lr3m05_drop0.2_l1norm_1m05_batchnorm_TT_001'\n\n# Function to build the model\npiezoresponse_model, run_id = rnn.rnn('lstm', 64, 4, 4, 16,\n data['sg_filtered']['piezoresponse'].shape[1],\n lr=3e-5, drop_frac=.2, l1_norm=1e-4,\n batch_norm=[True, True])\n```\n\n### Train the model (code)\n\n\n```python\n# select if the user will train a new model.\n# Note training requires GPU access and can take a long time (1-2 days)\ntrain_model = False\n\nif train_model:\n # trains the model saving results as checkpoints\n rnn.train_model(run_id, piezoresponse_model,\n data['normalized']['piezoresponse'],\n data['normalized']['val_piezoresponse'],\n folder_piezoresponse_autoencoder)\n```\n\n### Loads Pre-Trained Model (code)\n\n\n```python\n# loading the pre-trained weights\npiezoresponse_model.load_weights(model_folder + '/weights.15179-0.00.hdf5')\n\n# Updates the decoder based on decoding optimization.\n# this was done to improve the quality of the reconstruction.\npiezoresponse_model, piezoresponse_decoder = rnn.update_decoder(piezoresponse_model,\n './Trained Models/Piezoresponse/weights.00033723-0.0022.hdf5')\n```\n\n\n```python\n# Displays the model summary\npiezoresponse_model.summary()\n```\n\n### Model Validation (code)\n\n#### Validation Loss (code)\n\n\n```python\n# loss for the training data\nprint('Training Data Set:')\nscore = piezoresponse_model.evaluate(np.atleast_3d(data['normalized']['piezoresponse']),\n np.atleast_3d(data['normalized']['piezoresponse']))\nprint('Test loss:', score)\n\n# loss for the validation data\nprint('Validation Data 
Set:')\nscore = piezoresponse_model.evaluate(np.atleast_3d(data['normalized']['val_piezoresponse']),\n np.atleast_3d(data['normalized']['val_piezoresponse']))\nprint('Validation loss:', score)\n```\n\n### Training Results (code)\n\n\n```python\n# plots the loss and an example reconstruction\n# set to plot a random loop\n# to plots a specific point add i=(pixel position)\nviz.plot.training_loss(model_folder,\n data,\n piezoresponse_model,\n 'piezoresponse',\n signal_info,\n printing, folder_piezoresponse_autoencoder)\n```\n\n**Figure J37 | Piezoresponse autoencoder traiing results. a,** Training loss (training - black) validation (red). Example hysteresis loop from the **b,** training, **c,** validation data set. Black curve shows the original measured data, red curve show the autoencoder reconstruction. \n\n#### Low Dimensional Layer (code)\n\n\n```python\n# Computes the low dimensional layer\npiezoresponse_embeddings = rnn.get_activations(piezoresponse_model,\n data['normalized']['piezoresponse'],\n 9)\n```\n\n\n```python\n# defines the ranges for the images\nranges = [0, 1.3e-2, 0, 0, 0,\n 0, 0, 6e-3, 0, 0,\n 0, 1.3e-2, 1e-2, 0, 0, 3e-3]\n\n# plots the embedding maps\n_ = viz.plot.embedding_maps(piezoresponse_embeddings,\n printing,\n plot_format,\n folder_piezoresponse_autoencoder,\n filename='./Piezoresponse_embeddings',\n ranges=ranges)\n```\n\n**Figure J38 | Output of low dimensional layer obtained from the piezoreponse autoencoder.**\n\n#### Plot Embedding and Line Trace (code)\n\n\n```python\n# rotates and crops the topography image\ncrop_topo, scale = util.core.rotate_and_crop(\n np.flipud(imported['data']['HeightFinal'].reshape(1024, 1024).T))\n\n# creates the figures and axes in a pretty way\nnum_img = 10\nfig, ax = viz.format.layout_fig(num_img,\n mod=num_img // 2)\n\n# plots the selected embeddings superimposed on the line trace\nfor i, v in enumerate([1, 7, 11, 12, 15]):\n\n viz.plot.embedding_line_trace(ax,\n i,\n crop_topo,\n piezoresponse_embeddings[:, v],\n [0, ranges[v]],\n plot_format,\n number=num_img // 2)\n\nplt.tight_layout(pad=1)\n\n# saves the figure\nutil.file.savefig(folder_piezoresponse_autoencoder +\n '/embedding_and_topography', printing)\n```\n\n**Figure J39 | Plots of selected embedding maps from piezoelectric autoencoder superimposed on average topography.**\n\n#### Exports Training Images (code)\n\nExports low dimensional layer computed after each epoch (with improvement) during training. 
This allows the visualization of the effect of L${_1}$ regularization.\n\n\n```python\n# selects to export training images\n# note this take a long time (1-2 hours)\nexport_training_images = False\n\nif export_training_images:\n if np.int(io_transfer.get_size(model_folder) / 1e8) > 1:\n # exports all low dimensional layers from training\n viz.plot.training_images(piezoresponse_model,\n data,\n model_folder,\n printing,\n plot_format,\n folder_piezoresponse_autoencoder_training_movie)\n if printing['movies']:\n # Script to making movie\n util.file.make_movie('Piezoresponse_training_movie',\n folder_piezoresponse_autoencoder_training_movie,\n './',\n 'png',\n 10,\n output_format='mp4')\n```\n\n#### Make Generator Movie (code)\n\nMakes a movie where the magnitude of the embedding is manipulated and the decoder is used to generate the piezoresponse\n\n\n```python\nif printing['movies']:\n\n # defines the ranges for the embeddings\n ranges = [1.3e-2, 6e-3, 1.3e-2, 1e-2, 3e-3]\n\n # generates images for the generator movie\n _ = viz.plot.generator_movie(piezoresponse_decoder, piezoresponse_embeddings,\n data['raw']['voltage'], 100, 500,\n ranges, folder_piezoresponse_autoencoder_movie,\n plot_format, printing,\n graph_layout=[5, 5])\n\n # Script to making movie\n util.file.make_movie('Piezoresponse_Generator_movie', folder_piezoresponse_autoencoder_movie,\n './', 'png', 10, output_format='mp4', reverse=True)\n```\n\n#### Plots Generator Results (code)\n\n\n```python\n# defines the range for the embeddings\nranges = [1.3e-2, 6e-3, 1.3e-2, 1e-2, 3e-3]\n\n# plots the embedding layer and the generated results\nviz.plot.generator_piezoresponse(piezoresponse_decoder,\n piezoresponse_embeddings,\n data['raw']['voltage'],\n ranges,\n 6,\n 100,\n printing,\n plot_format,\n folder_piezoresponse_autoencoder)\n```\n\n**Figure J40 | Plots of selected embedding maps from piezoelectric autoencoder bottom shows generated hysteresis loop obtained when varying each embedding.** The color of the piezoelectric hysteresis loop reflects the colors in the map\n\n# Results: Contact-Resonance Autoencoder (text) \nHaving proven the capabilities of this approach, we applied a similar methodology to interpret the cantilever-sample-contact resonance (henceforth the resonance response) wherein the form of the response has increased complexity which complicates statistical analysis. We show three of the most physically meaningful non-zero components (Fig. 4), obtained following training (Methods). To use the extracted insight to understand the physical mechanisms of response it is important to identify the statistical distribution of the key response characteristics *learned*. The first selected feature map (Fig. 4a) shows increased activation within the *c*/*a* bands which is maximized near the valley boundary (Fig.4b). If we again, use this autoencoder as a generator (Supplementary Movie 5), we observe that the resonance has a classic butterfly-shaped loop, however, as this activation increases there is a gradual increase in the resonance frequency as we approach the valley boundary (Fig. 4c). If we had followed the traditional approach of merely studying how the piezoresponse hysteresis loops vary as we move from the peak to the valley boundary (Fig.4d), we would have observed essentially no change, thus leaving uncovered new physical effects. \n\tMoving on to the second feature map (Fig. 
4e), we observe a gradient in the activation, which is maximized near the middle of the $a_1$/$a_2$ band and tends towards zero at the valley boundary Fig. 4f). The generated resonance loops reveal a non-traditional shape wherein upon application of bias the material undergoes elastic hardening before eventually softening when switching under both positive and negative bias (marked 1 and 2, respectively, Fig. 4g). This type of resonance behavior during switching has been related to a two-step, three-state (c $\\rightarrow$ a $\\rightarrow$ c) ferroelastic switching mechanism[6]. By computing the piezoelectric hysteresis loops, we identify that these loops have an intermediate step which plateaus at near-zero piezoresponse when switching under both positive and negative bias (1 and 2, respectively, Fig. 4h). The generated piezoresponse curves reveal that as we decrease the magnitude of this activation (i.e., traverse the $a_1$/$a_2$ band from the mid-point to the valley) we observe an increase in the prevalence (magnitude of the concavity) of this intermediate state when switching under positive bias; however, such a change is not evident when switching under negative bias.\n

\n\n**Figure 4 |. Features learned from low-dimensional layer of the resonance response autoencoder. a,e,i.** Feature maps extracted from low dimensional layer of autoencoder trained on the resonance hysteresis loops. Color indicates the magnitude of the latent feature or the activation observed in each spectra at a given pixel position. Activation is mapped in normalized units as shown in colorbar. **b,f,j,** Average activation across the domain bands superimposed onto the average topography. Neural network generated **c,g,k,** resonance hysteresis loops and **d,h,l,** piezoresponse hysteresis loops. In all figures the color of the curves/images reflect the normalized activation from the low-dimensional layer at that location or from the generated response curve. Numbers in figures represent observations of ferroelectric or ferroelastic switching events. \n

\nFinally, the third feature map (Fig. 4i) once again shows increased activation within the $a_1$/$a_2$ bands, however, the gradient in activation goes in the opposite direction from the previous map (Fig. 4e), in that it is maximized near the valley boundary and decreases as we approach the middle of the band (Fig. 4j). Upon generating the resonance (Fig. 4k) and piezoresponse (Fig. 4l) loops, we observe an asymmetric resonance switching behavior, wherein near the valley boundary (i.e., high activation) hardening only occurs when switching under positive bias (marked 1). As we move towards the peak boundary (i.e., low activation), however, hardening during switching occurs under both positive and negative bias (green curves, Fig. 4h). An analogous trend in the piezoresponse concavity is observed (Fig. 4l), where the intermediate concavity is only observed when switching under positive bias when near the valley boundary (marked 1); yet, is observed under both positive and negative bias when near the peak boundary (green curve, marked 2).\n\n\n## Supplementary Movie 4: Low Dimensional Layer Training Resonance Response\n\n\n```python\nHTML('')\n```\n\n**Supplementary Movie 3. Image series showing the activation or output from the resonance autoencoder low-dimensional layer during training.** Images are provided when an epoch results in a reduction in the loss. This movie allows for the visualization of the learning process as well as the effect of the sparsity constraint.\n\n## Supplementary Movie 5: Resonance Generator Movie\n\n\n```python\nHTML('')\n```\n\n**Supplementary Movie 5. Movie showing the generated resonance response obtained from the autoencoder as a single activation is changed.** The activation changing is represented by the activation map shown in the inset. The colors of the curve match the colors presented in their related activation maps. This movie provides a way to visualize how the activation (as shown in the activation maps) influences the resonance response. 
\n\n## Contact-Resonance Autoencoder (code)\n\n\n### Building the model (code)\n\n\n```python\n# selects the folder where the pre-trained model is saved\nmodel_folder = './Trained Models/Resonance/Bidirect_lstm_size064_enc4_emb16_dec4_lr3m05_drop0.2_l1norm_0.0001_batchnorm_TT_001'\n\n# Function to build the model\nresonance_model, run_id = rnn.rnn(\n 'lstm',\n 64,\n 4,\n 4,\n 16,\n data['sg_filtered']['resonance'].shape[1],\n lr=3e-5,\n drop_frac=.2,\n l1_norm=1e-4,\n batch_norm=[True, True])\n```\n\n### Train the model (code)\n\n\n```python\n# select if the user will train a new model.\n# Note training requires GPU access and can take a long time (1-2 days)\ntrain_model = False\n\nif train_model:\n # trains the model saving each epoch (with improvement) as a checkpoint\n rnn.train_model(\n run_id,\n resonance_model,\n data['normalized']['resonance'],\n data['normalized']['val_resonance'],\n folder_resonance_autoencoder)\n```\n\n### Loads Pre-Trained Model (code)\n\n\n```python\n# loading the pre-trained weights\nresonance_model.load_weights(model_folder + '/weights.00022570-0.0123.hdf5')\n\n# loads the pre-trained weight from an optimized decoder\n# training of the decoder was done to minimize reconstruction error\nresonance_model, resonance_decoder = rnn.update_decoder(\n resonance_model,\n './Trained Models/Resonance/weights.00013412-0.0106.hdf5')\n```\n\n\n```python\n# Displays the model summary\nresonance_model.summary()\n```\n\n### Model Validation\n\n#### Validation Loss (code)\n\n\n```python\n# computes the training loss\nprint('Training Data Set:')\nscore = resonance_model.evaluate(np.atleast_3d(data['normalized']['resonance']),\n np.atleast_3d(data['normalized']['resonance']))\nprint('Test loss:', score)\n\n# computes the validation loss\nprint('Validation Data Set:')\nscore = resonance_model.evaluate(np.atleast_3d(data['normalized']['val_resonance']),\n np.atleast_3d(data['normalized']['val_resonance']))\nprint('Validation loss:', score)\n```\n\n\n```python\n# plots the loss and an example reconstruction\n# set to plot a random loop\n# to plots a specific point add i=(pixel position)\nviz.plot.training_loss(\n model_folder,\n data,\n resonance_model,\n 'resonance',\n signal_info,\n printing,\n folder_resonance_autoencoder)\n```\n\n**Figure J42 | Resonance autoencoder traiing results. a,** Training loss (training - black) validation (red). Example hysteresis loop from the **b,** training, **c,** validation data set. Black curve shows the original measured data, red curve show the autoencoder reconstruction. 
\n\n### Training Results (code)\n\n\n```python\n# Computes the low dimensional layer\nresonance_embeddings = rnn.get_activations(\n resonance_model,\n data['normalized']['resonance'],\n 9)\n```\n\n\n```python\n# defines the ranges for the images\nranges = [0, 0, 0, 0, 6e-3,\n 0, 4e-2, 0, 6e-2, 1e-1,\n 0, 1e-3, 0, 0, 0, 1.6e-2]\n\n# plots the embedding maps\n_ = viz.plot.embedding_maps(\n resonance_embeddings,\n printing,\n plot_format,\n folder_resonance_autoencoder,\n filename='./Resonance_embeddings',\n ranges=ranges)\n```\n\n**Figure J43 | Output of low dimensional layer obtained from the resonance autoencoder.**\n\n#### Plot Embedding and Line Trace (code)\n\n\n```python\n# collects the c/a clustering results\ncluster_ca = machine_learning['clustering']['piezoresponse'][1]\n\n# makes a copy of the embeddings\nembedding_c = np.copy(resonance_embeddings)\nembedding_a = np.copy(resonance_embeddings)\n\n# splits the embeddings for the c and a domains\nembedding_c[np.where(cluster_ca == 1)] = 0\nembedding_a[np.where(cluster_ca == 0)] = 0\n\n# rotates and crops the topography image\ncrop_topo, scale = util.core.rotate_and_crop(\n np.flipud(imported['data']['HeightFinal'].reshape(1024, 1024).T))\n\n# defines the embedding ranges for the images\nranges = [0, 0, 0, 0, 6e-3,\n 0, 4e-2, 0, 6e-2, 1e-1,\n 0, 1e-3, 0, 0, 0, 1.6e-2]\n\n# creates the figures and axes in a pretty way\nfig, ax = viz.format.layout_fig(6, mod=3)\n\n# plots the embedding superimposed on the line trace\nviz.plot.embedding_line_trace(\n ax,\n 0,\n crop_topo,\n embedding_c[:, 15],\n [0, 1.6e-2],\n plot_format)\n\nviz.plot.embedding_line_trace(\n ax,\n 1,\n crop_topo,\n embedding_a[:, 4],\n [0, 4.5e-3],\n plot_format)\n\nviz.plot.embedding_line_trace(\n ax,\n 2,\n crop_topo,\n embedding_a[:, 11],\n [0, 7e-4],\n plot_format)\n\nplt.tight_layout(pad=1)\n\n# saves the figure\nutil.file.savefig(\n folder_resonance_autoencoder +\n '/embedding_and_topography',\n printing)\n```\n\n**Figure J44 | Plots of selected embedding maps from piezoelectric autoencoder superimposed on average topography.**\n\n#### Exports Training Images (code)\n\nExports low dimensional layer computed after each epoch (with improvement) during training. 
This allows the visualization of the effect of L${_1}$ regularization.\n\n\n```python\n# selects to export training images\n# note this take a long time (1-2 hours)\nexport_training_images = False\n\nif export_training_images:\n if np.int(io_transfer.get_size(model_folder) / 1e8) > 1:\n viz.plot.training_images(\n resonance_model,\n data,\n model_folder,\n printing,\n plot_format,\n folder_resonance_autoencoder_training_movie,\n data_type='resonance')\n\n if printing['movies']:\n # Script to making movie\n util.file.make_movie(\n 'resonance_training_movie',\n folder_resonance_autoencoder_training_movie,\n './',\n 'png',\n 10,\n output_format='mp4')\n```\n\n#### Make Generator Movie (code)\n\nMakes a movie where the magnitude of the embedding is manipulated and the decoder is used to generate the piezoresponse\n\n\n```python\nif printing['movies']:\n\n # collects the c/a c\n cluster_ca = machine_learning['clustering']['piezoresponse'][1]\n\n # makes a copy of the resonance embeddings\n embedding_c = np.copy(resonance_embeddings)\n embedding_a = np.copy(resonance_embeddings)\n\n # extracts the embeddings for the c/a regions\n embedding_c[np.where(cluster_ca == 1)] = 0\n embedding_a[np.where(cluster_ca == 0)] = 0\n\n # defines the embedding ranges for the images\n ranges_a = [0, 0, 0, 0, 5e-3,\n 0, 4e-2, 0, 6e-2, 1e-1,\n 0, 7e-4, 0, 0, 0, 1.6e-2]\n\n ranges_c = [0, 0, 0, 0, 2e-3,\n 0, 4e-2, 0, 6e-2, 1e-1,\n 0, .7e-3, 0, 0, 0, 1.6e-2]\n\n # selects the embeding maps to plot\n index_a = [4, 6, 11]\n index_c = [4, 11, 15]\n\n # selects the number of images (embedding levels) to make\n number = 100\n\n # selects the number of points to average the embedding between\n averaging_number = 50\n\n # generates the embedding images\n _ = viz.plot.resonance_generator_movie(\n resonance_model,\n index_c,\n index_a,\n embedding_c, data['raw']['voltage'],\n embedding_a,\n ranges_c,\n ranges_a,\n number,\n averaging_number,\n resonance_decoder,\n plot_format,\n printing,\n folder_resonance_autoencoder_movie,\n graph_layout=[12, 3])\n\n # Script to making movie\n util.file.make_movie(\n 'Resonance_Generator_movie',\n folder_resonance_autoencoder_movie,\n './',\n 'png',\n 10,\n output_format='mp4',\n reverse=True)\n```\n\n#### Autoencoder Generator (code)\n\n\n```python\n# defines the ranges for the images\nranges = [0, 0, 0, 0, 4.5e-3,\n 0, 4e-2, 0, 6e-2, 1e-1,\n 0, 7e-4, 0, 0, 0,\n 1.6e-2]\n\n# selects the embedding maps to plot\nindex_a = [4, 6, 11]\nindex_c = [4, 11, 15]\n\n# selects the number of curves to plot\nnumber = 8\n\n# selects the number of pixels to average\naveraging_number = 50\n\n# selects a subset of the generated plots\nplot_subselect = [[7, 6, 5],\n [7, 6, 5],\n [7, 6, 5]]\n\n# set the scales of the axes\nscales = [[1320, 1330],\n [-1.1, 1.1]]\n\n# plots the generated curves for the a domains\nviz.plot.resonance_generator(\n resonance_decoder,\n piezoresponse_decoder,\n index_a,\n embedding_a,\n ranges,\n number,\n averaging_number,\n plot_subselect,\n piezoresponse_embeddings,\n data['raw']['voltage'],\n data['sg_filtered']['resonance'],\n plot_format,\n printing,\n folder_resonance_autoencoder,\n scales,\n name_prefix='a_domains')\n\n\n# sets the embedding ranges for the c domains\nranges = [0, 0, 0, 0, 2e-3,\n 0, 4e-2, 0, 6e-2, 1e-1,\n 0, .7e-3, 0, 0, 0,\n 1.6e-2]\n\n# selects a subset of the generated plots\nplot_subselect = [[7, 6, 5], [7, 6, 5], [7, 5, 3, 1]]\n# set the scales of the axes\nscales = [[1320, 1330], [-1.55, 1.55]]\n\n# plots the generated curves for the a 
domains\nviz.plot.resonance_generator(\n resonance_decoder,\n piezoresponse_decoder,\n index_c,\n embedding_c,\n ranges,\n number,\n averaging_number,\n plot_subselect,\n piezoresponse_embeddings,\n data['raw']['voltage'],\n data['sg_filtered']['resonance'],\n plot_format,\n printing,\n folder_resonance_autoencoder,\n scales,\n name_prefix='c_domains')\n```\n\n**Figure J45 | Plots of selected embedding maps from resonance autoencoder.** \nTop shows embedding map, middle shows generated resonance hysteresis loop, bottom shows generated piezoelectric hysteresis loop obtained when varying each embedding. The color of the hysteresis loops reflects the colors in the map\n\n# Results: Phase-Field Simulations (text)\nTo understand the physical significance of the features identified by the autoencoder requires consideration of the domain structure, geometry, and switching processes. To guide our interpretation, we conducted phase-field switching studies (of a model film with a *c*/*a* domain structure) under a simulated tip bias (Methods, Supplementary Information Fig. 10). From these phase-field studies, we can observe the polarization, as well as the local energetics during the switching process. Prior to the switching studies, the film exists in the up-poled state where the film has nearly uniform electrostatic energy. \n\tStarting with the first \u201clearned\u201d feature, which is most pronounced when the tip is within the c domains near the valley boundary, we observe a square piezoelectric hysteresis loop (Fig. 5a) and an increase in the resonance frequency of the cantilever (i.e., an increase in the elastic modulus, Fig. 5b). From the initial state, phase-field simulations reveal that switching at this position results in the nucleation of a down-poled domain (Fig. 5c, top, Supplementary Movie 6). From the electrostatic energy, we observe a region of increased energy below the newly nucleated domain (Fig. 5c, arrow) as a result of the head-to-head charged domain wall which exists between the nucleated domain and the a domain. This electrostatic repulsion, from the growing charged-domain wall, increases the local modulus of the material, which manifests as an increase in the cantilever resonance frequency. When applying negative bias to the tip, the bias reinforces the as-poled domain structure resulting in an unremarkable change in both the domain structure and local electrostatic energy (Fig. 5d).\n

\n \n**Figure 5 | Interpretation of learned features based on phase-field simulations. a,e,i,** Generated piezoelectric hysteresis loops and **b,f,j** resonance response hysteresis loops under high activation, as indicated in the activation maps in the insets of **b,f,j.** These insets have the same colorscales as shown in Fig. 4. Arrows indicate the resonance pathway taken during switching. Phase-field simulations of (top) out-of-plane polarization and (bottom) electrostatic energy during switching under locally applied tip bias when the tip is positioned **c,d** within the *c* domain at the valley boundary, **g,h** within the *a* domain at the peak boundary, and **k,l** within the *a* domain at the valley boundary, wherein the applied tip bias is positive and negative at each position, respectively. Crystallographic orientation is indicated in the bottom left corner. Polarization direction is indicated in the right corner. Color scales for the polarization and electrostatic energy are diverging, perceptually uniform colormaps, as indicated in the colorbar.\n

\n\tShifting our attention to the second \u201clearned\u201d feature, which is most pronounced at the peak of the a domains, where we observe intermediate concavities in the piezoresponse hysteresis loop (Fig. 5e) and elastic hardening when switching under positive and negative bias Fig. 5f). Phase-field simulations reveal, that switching at this location, under positive bias, results in the nucleation of a down-poled domain (Fig. 5g, top, Supplementary Movie 7) with a head-to-head positively-charged domain wall. Turning our attention to the electrostatic energy we observe, as expected, a significant increase in the electrostatic energy near this charged domain wall (Fig. 5g, bottom, and indicated by the arrow). Thus, as the tip bias is increased, the newly nucleated domain grows laterally along the in-plane [100] and [010] resulting in an increase in the charge domain wall area. This, in turn, increases the stiffness, which manifests as an elastic hardening step. This hardening process continues until the finite volume probed by the tip is purely *c*-like, upon which application of further bias results in softening to saturation. When applying negative bias to the tip at this location, we observe the nucleation of an up-poled domain within the *a* domain (Fig. 5h, top). This up-poled domain must form a tail-to-tail negatively-charged domain wall. In turn, the presence of the charged domain wall results in an increase in electrostatic energy (arrow, Fig. 5h, bottom) near this charged domain wall, which results in elastic hardening and softening to saturation with an identical mechanism as described for the behavior when switching under positive bias. \n\tFinally, focusing on the third \u201clearned\u201d feature, which is most pronounced within the a domain near the valley, we observed intermediate concavities in the piezoresponse hysteresis loop (Fig. 5i) and elastic hardening when switching only under positive bias (Fig. 5j). Looking at the phase-field simulations under positive tip bias (Fig. 5k, Supplementary Movie 8), we observe the nucleation of a down-poled domain which is identical in form to that observed near the peak boundary within the a domain. As expected, the positive-bias branch of the piezoresponse loop and the resonance response have a similar form to the switching observed near the a-domain boundary, wherein the formation of a charged domain wall (Fig. 5k, bottom, arrow) results in an intermediate step in the piezoresponse loop and elastic hardening. When switching under negative bias, phase-field simulations reveal a different switching mechanism, wherein application of bias results in the expansion of the up-poled domain into the a domain, however, due to the geometry the domain is nominally uncharged (Fig. 5l). As a result, there is no evidence of either an intermediate concavity in the piezoresponse hysteresis loop nor a hardening step. All told, by interpreting the features learned by the autoencoder in the context of the domain structure and switching process we are able to identify features in both the piezoresponse hysteresis loops and resonance response related to the formation of charged domain walls during the switching process, which were not identified using conventional and linear machine learning analysis approaches.\n\n\n## Supplementary Movie 6: Phase Field Switching Movie in a *c*-like domain near the valley *c*/*a*/*c*/*a* / *a1*/*a2* domain wall\n\n\n```python\nHTML('')\n```\n\n**Supplementary Movie 6. 
Movie showing phase-field simulations of tip-induced ferroelectric switching in PbZr0.2Ti0.8O3 thin films when the tip is positioned within a *c*-like domain near the valley *c*/*a*/*c*/*a* / *a1*/*a2* domain wall.** The movie shows the evolution of the polarization and of the Landau, elastic, electrostatic, and total energy. The left plots indicate the position during switching on a schematic ferroelectric hysteresis loop and bipolar triangular waveform. \n\n## Supplementary Movie 7: Phase Field Switching Movie in an *a*-like domain near the peak *c*/*a*/*c*/*a* / *a1*/*a2* domain wall\n\n\n```python\nHTML('')\n```\n\n**Supplementary Movie 7. Movie showing phase-field simulations of tip-induced ferroelectric switching in PbZr0.2Ti0.8O3 thin films when the tip is positioned within an *a*-like domain near the peak *c*/*a*/*c*/*a* / *a1*/*a2* domain wall.** The movie shows the evolution of the polarization and of the Landau, elastic, electrostatic, and total energy. The left plots indicate the position during switching on a schematic ferroelectric hysteresis loop and bipolar triangular waveform. \n\n## Supplementary Movie 8: Phase Field Switching Movie in an *a*-like domain near the valley *c*/*a*/*c*/*a* / *a1*/*a2* domain wall\n\n\n```python\nHTML('')\n```\n\n**Supplementary Movie 8. Movie showing phase-field simulations of tip-induced ferroelectric switching in PbZr0.2Ti0.8O3 thin films when the tip is positioned within an *a*-like domain near the valley *c*/*a*/*c*/*a* / *a1*/*a2* domain wall.** The movie shows the evolution of the polarization and of the Landau, elastic, electrostatic, and total energy. The left plots indicate the position during switching on a schematic ferroelectric hysteresis loop and bipolar triangular waveform. \n\n## Supplementary Note 10: Phase-Field Simulations of Ferroelectric Switching (text)\nIn addition to visualizing the switching process with phase-field simulations, it is also possible to compute local ferroelectric hysteresis loops by extracting the average polarization under the tip. Computing the ferroelectric hysteresis loop at various tip locations (Supplementary Figure 10) reveals switching mechanisms similar to those seen in the measured piezoresponse hysteresis loops. Specifically, we observe square loops when the tip is within the *c*/*a* band near the valley boundary (Supplementary Figure 10a), intermediate concavities in the piezoelectric hysteresis loop when switching under both positive and negative bias in the $a_1$/$a_2$ band near the valley boundary (Supplementary Figure 10b), and intermediate concavities only when switching under positive bias within the $a_1$/$a_2$ boundary near the peak (Supplementary Figure 10c). This is the same trend observed experimentally (Figure 4). We do note some differences, particularly at the peak boundary, which likely stem from how the loops are constructed. In the phase-field simulations, the loops are calculated from the finite-volume average of the polarization in close proximity to the tip and therefore capture only the intrinsic response. In the piezoresponse measurements, we measure the change in the piezoresponse during switching, which has both long-range and extrinsic contributions. It is likely that these long-range and extrinsic contributions produce the hysteresis-like shape observed in the piezoresponse loop that is absent from the phase-field simulation results.\n\n
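To make the loop construction described above concrete, the sketch below shows one way the finite-volume average could be computed from a stack of simulated polarization snapshots, following the 5$\Delta$x $\times$ 5$\Delta$x $\times$ 6$\Delta$x cuboid stated in the Methods. This is a minimal illustration under assumed conventions (the array layout, tip indices, and bias waveform are all placeholders), not the post-processing code used for Supplementary Figure 10.

```python
import numpy as np

def local_hysteresis(p3_snapshots, bias_waveform, tip_xy, half_xy=2, depth=6):
    """Average the out-of-plane polarization P3 in a small cuboid under the tip.

    p3_snapshots  : (n_steps, nx, ny, nz) array of P3 at each bias step
    bias_waveform : (n_steps,) array of applied tip bias
    tip_xy        : (ix, iy) grid indices of the tip position
    half_xy       : half-width of the cuboid in x and y (2 -> 5x5 cells)
    depth         : number of cells below the film surface to include
    """
    ix, iy = tip_xy
    loop = [frame[ix - half_xy:ix + half_xy + 1,
                  iy - half_xy:iy + half_xy + 1,
                  -depth:].mean()          # assumes the film surface is the last z index
            for frame in p3_snapshots]
    return np.asarray(bias_waveform), np.asarray(loop)

# Synthetic example on a 128 x 128 x 32 grid with a 60-step triangular bias waveform
rng = np.random.default_rng(0)
p3 = rng.normal(size=(60, 128, 128, 32))
bias = np.concatenate([np.linspace(0, 5, 15),
                       np.linspace(5, -5, 30),
                       np.linspace(-5, 0, 15)])
voltage, polarization = local_hysteresis(p3, bias, tip_xy=(62, 64))
```

Because this is a volume average of the polarization alone, it captures only the intrinsic response, which is consistent with the differences from the measured piezoresponse loops noted above.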

\n\n\n**Supplementary Figure 10 | Ferroelectric hysteresis loops extracted from phase-field simulations at various tip positions along the domain wall geometry.** Polarization direction in film is indicated on the right. Surface topography is projected from the lattice parameters of the film.\n\n\n### Plotting Phase-Field Simultion Results (code)\n\n\n```python\n#@title\n# sets the position where the tip is located\ntip_positions = {'tip1': dict(pos=[42, 64, 20]),\n 'tip2': dict(pos=[50, 64, 20]),\n 'tip3': dict(pos=[62, 64, 20]),\n 'tip4': dict(pos=[72, 64, 20]),\n 'tip5': dict(pos=[74, 64, 20])}\n\n# sets the scale limits for the graphs\nclim = {'Polarization Z': [-1, 1],\n 'Landau Energy': [-10e7, 10e7],\n 'Elastic Energy': [-10e7, 10e7],\n 'Electrostatic Energy': [-10e7, 10e7],\n 'Gradient Energy': [-10e7, 10e7],\n 'Total Energy': [-10e7, 10e7]}\n\n# sets the information of the region to s6ho\ngraph_info = dict(top=20,\n y_cut=64,\n x_lim=[120, 360],\n y_lim=[0, 100],\n clim=clim)\n\n# collection of information used for plotting the phase feild results\nPhase_field_information = {'tips': ['tip1',\n 'tip2',\n 'tip3',\n 'tip4',\n 'tip5'],\n 'folder': dict(time_series='./Raw_Data/Phase_Field/Polarization/data-PEloop/',\n polarization='./Raw_Data/Phase_Field/Polarization/',\n energy='./Raw_Data/Phase_Field/energy/'),\n 'time_step': [60, 0, 20],\n 'tip_positions': tip_positions,\n 'graph_info': graph_info,\n 'labels': ['Polarization Z',\n 'Landau Energy',\n 'Elastic Energy',\n 'Electrostatic Energy',\n 'Gradient Energy',\n 'Total Energy'],\n 'output_folder': folder_phase_field}\n```\n\n\n```python\n#@title\n# plots the phase field results\nviz.phase_field.phase_field_switching(Phase_field_information, printing)\n```\n\n**Figure J46 | Phase-field simulations under local tip bias.** Maps show the polarization and various contributions to the energy at various tip positions. Maps show the switching under negative bias (left), initial state (center), positive bias (right). \n\n### Phase-Field Movies (code)\n\n\n```python\n#@title\nif printing['movies']:\n # exports all phase field images to create movie\n _ = viz.phase_field.movie(Phase_field_information, printing)\n```\n\n\n```python\n#@title\nif printing['movies']:\n for i, tip in enumerate(Phase_field_information['tips']):\n util.file.make_movie('Switching_movie_' + tip,\n folder_phase_field + '/movie/' + tip,\n folder_phase_field + '/movie/',\n 'png',\n 5, output_format='gif')\n```\n\n### Phase Field Hysteresis Loops (code)\n\n\n\n\n\n```python\n#@title\nviz.phase_field.phase_field_hysteresis(Phase_field_information, printing)\n```\n\n**Figure J47 | Phase-field simulations under local tip bias.** Plots show the extracted ferroelectric hysteresis loops at various tip positions.\n\n# Methods (text)\n\n## Growth of epitaxial PbZr0.2Ti0.8O3 thin films. \n\n\n\n\n400-nm-thick PbZr0.2Ti0.8O3 thin films were synthesized using pulsed-laser deposition by ablating a ceramic target of Pb1.1Zr0.2Ti0.8O3 using a KrF excimer laser (248 nm, LPX 305, Coherent), in an on-axis geometry with a 60 mm target-to-substrate spacing. The PbZr0.2Ti0.8O3 films were grown on 30 nm Ba0.5Sr0.5RuO3-buffered NdScO3 (110) single-crystal substrates which were affixed to the heater using Ag paint. The Ba0.5Sr0.5RuO3 bottom electrodes were grown at a heater temperature of 750$^\\circ$C in a dynamic oxygen pressure of 20 mTorr, by ablating a ceramic Ba0.5Sr0.5RuO3 target (Praxair) at a laser fluence and repetition rate of 1.8 J cm-2 and 2 Hz, respectively. 
The PbZr0.2Ti0.8O3 films were grown at a heater temperature of 600$^\\circ$C in a dynamic oxygen pressure of 50 mTorr, with a laser fluence and repetition frequency of 1.9 J cm-2 and 14 Hz, respectively. Following growth, all heterostructures were cooled to room temperature in a static oxygen pressure of 760 Torr at 5$^\\circ$C/min. \n\n## Band-excitation piezoresponse spectroscopy (BEPS) (text)\n\nBEPS studies were performed at the Center for Nanophase Materials Science (CNMS) at Oak Ridge National Laboratory (ORNL) using a custom Cypher (Asylum Research) atomic force microscope controlled with a LabVIEW- and MATLAB-based controller. A bipolar triangular switching waveform was applied using a conductive scanning-probe tip on a square grid, and the cantilever response to the band-excitation waveform was measured in the time domain. Following processing with a fast-Fourier transform, the cantilever resonance response was fit to a simple harmonic oscillator model, allowing the extraction of piezoresponse amplitude, phase, cantilever resonance frequency, and dissipation. The use of band excitation for these measurements is crucial, as it minimizes effects from changing tip\u2013sample contact resonances that can alter the observed response, enabling consistent measurements of piezoresponse throughout multiple dimensions (that is, frequency, spatial, voltage, time, and so on; Supplementary Information). All measurements were carried out using Pt/Ir-coated probe tips (NanoSensor PPP-EFM). Switching spectroscopy measurements were performed at a resonance frequency of ~1320 kHz (with a bandwidth of 60 kHz). The DC voltage was chosen such that the piezoelectric hysteresis loops were saturated in both the positive and negative directions. The local piezoresponse was measured at remanence (following a dwell time of 0.5 ms), with a BE waveform of sinc character (peak-to-peak voltage of 1 V).\n\n## Neural-network structure and training. \n\nThe long short-term memory recurrent neural network autoencoders were built in Keras using the TensorFlow backend. The network trained on the piezoresponse data had 4 encoding and 4 decoding layers, each of size 128. Dropout within the encoding and decoding layers was fixed at 20%. The low-dimensional embedding layer had a size of 16 and L$_1$ regularization ($\\lambda = 1 \\times 10^{-5}$). Batch-normalization layers were included prior to and following the low-dimensional embedding layer. The network was trained using Adam as an optimizer with an initial learning rate of $3 \\times 10^{-5}$ for 16,000 epochs. For the analysis of the resonance data, the network was identical to that used for the piezoresponse data, except that it was trained for 22,000 epochs. Training was completed using a local workstation equipped with an NVIDIA Titan X graphics processing unit (GPU) or on the Savio supercomputer cluster equipped with GPU nodes with NVIDIA K80 GPUs. 
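As a rough sketch of the architecture just described (this is not the authors' implementation: the sequence length, activations, use of an activity regularizer for the L1 penalty, and the reconstruction loss are all assumptions), a comparable autoencoder could be assembled in Keras with the TensorFlow backend as follows:

```python
import numpy as np
from tensorflow.keras import layers, models, optimizers, regularizers

steps, channels = 96, 1   # assumed length of each spectroscopic loop; not stated in the text

inputs = layers.Input(shape=(steps, channels))
x = inputs
# four recurrent encoding layers of size 128 with 20% dropout
for i in range(4):
    x = layers.LSTM(128, return_sequences=(i < 3), dropout=0.2)(x)

# 16-dimensional embedding with an L1 penalty, flanked by batch normalization
x = layers.BatchNormalization()(x)
embedding = layers.Dense(16, activation='relu',
                         activity_regularizer=regularizers.l1(1e-5),
                         name='embedding')(x)
x = layers.BatchNormalization()(embedding)

# decoder mirrors the encoder and reconstructs the input sequence
x = layers.RepeatVector(steps)(x)
for _ in range(4):
    x = layers.LSTM(128, return_sequences=True, dropout=0.2)(x)
outputs = layers.TimeDistributed(layers.Dense(channels))(x)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer=optimizers.Adam(learning_rate=3e-5), loss='mse')
# autoencoder.fit(loops, loops, epochs=16000)  # 16,000 epochs for the piezoresponse network
```

Fixing the learned embedding and continuing to train only the decoder, as described next, can then be done by setting `trainable = False` on the encoder layers and recompiling the model.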
To accelerate the training of the generated responses formed, after training the autoencoder sufficiently, the weights through the low-dimensional layer were fixed and the decoder was trained for 1 million epochs without dropout.\n\n## Phase-Field Simulations\n\nA three-dimensional model was applied to simulate the evolution of ferroelectric polarizations ($P_i$ (i=1,2,3)) of the PbZr0.2Ti0.8O3 (PZT) thin film by numerically solving the time-dependent Landau-Ginzburg-Devonshire (LGD) equations[64].\n\n$$\n\\begin{equation}\n\\frac{\\partial P_i(x,t)}{\\partial t} = -L \\frac{\\partial F}{\\partial P_i(x,t)}, i = 1,2,3\n\\tag{1}\n\\end{equation}\n$$\n\nin which P_i is the polarization vector, x is the spatial position, t is the time, L is the kinetic coefficient related to the domain wall mobility, and F is the total free energy as shown below[65].\n\n$$\n\\begin{equation}\nF=\\int_V[f_{\\mathrm{Land}}(P_i)+f_{\\mathrm{Grad}}(P_{i,j})+f_{\\mathrm{Elas}}(P_i,\\varepsilon_{i,j})+f_{\\mathrm{Elec}}(P_i,E_i)]dV\n\\tag{2}\n\\end{equation}\n$$\n\nin which $f_\\mathrm{Land}(P_i)$, $f_{\\mathrm{Grad}}(P_{i,j})$, $f_{\\mathrm{Elas}}(P_i,\\varepsilon_{i,j})$, and $f_{\\mathrm{Elec}}(P_i,E_i)$ represent the LGD free energy density, gradient energy density, elastic energy density and electrostatic energy density, respectively. Details of these energy density terms, as well as the coefficients related to these energy terms are collected from literature.[66] Here we adopt a sixth-order polynomial expansion of $P_i$ for $f_\\mathrm{Land}(P_i)$, and choose the dielectric constant to be $\\kappa$ = 50 for PZT. The gradient energy coefficients are set to be $G_{11}$/$G_{110}$ = 0.6, where $G_{110}$ = 1.73\u00d710-10 C-2m4N.[67] The simulation size is a realistic three-dimensional geometry sampled on a fine grid mesh of 128$\\Delta$x \u00d7 128$\\Delta$x \u00d7 32$\\Delta$x, where the grid size $\\Delta$x = 1.0nm. The film and substrate thickness are 20$\\Delta$x and 10$\\Delta$x, respectively. A semi-implicit spectral method[68] is used to solve the time-dependent LGD equation, with periodic boundary conditions applied in $x_1$ and $x_2$ directions, and thin film boundary conditions applied in $x_3$ direction. The initial structure consists of $(100)_a$ / $(001)_c$ preset domain structure. The entire thin film is subjected to a homogeneous 0.3% tensile strain by the substrate. Electric bias is modeled using a Lorentz function $\\phi(x,y)=\\frac{\\phi_0 \\gamma^2}{(r-a)^2+\\gamma^2}$ where $r$ is the distance from the tip and $\\gamma$ is the half width at half maximum (HWHM) of applied bias ($\\phi_0$). The tips are located near the *c*/*a* domain boundaries as described in the main text. The average polarization in a 5$\\Delta$x \u00d7 5$\\Delta$x \u00d7 6$\\Delta$x cuboid volume near the tip center is collected to calculate the hysteresis loop.\n\n# Acknowledgements\n\n\n\nThe authors acknowledge fruitful conversations with Tess Schmidt. J.C.A, J.B.N., and L.W.M. acknowledge primary support of the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231 (Materials Project program KC23MP) for the development of advanced functional materials and data-driven approaches to materials study. For work at Lehigh, J.C.A. 
acknowledges support from the National Science Foundation under grant TRIPODS+X:RES-1839234, and the Nano/Human Interfaces Presidential Initiative, the Institute for Functional Materials and Devices, and the Institute for Intelligent Systems and Computation at Lehigh University. B.N. and J.S.B. acknowledge the support of the Gordon and Betty Moore Foundation Data-Driven Discovery, the National Science Foundation BIGDATA grant number 1251274, and the Berkeley Institute of Data Science. S.P. acknowledges the support of the Army Research Office under grant W911NF-14-1-0104. S.v.d.W. acknowledges support by the Gordon and Betty Moore Foundation through Grant GBMF3834 and the Alfred P. Sloan Foundation through Grant 2013-10-27 to the University of California. J.M. acknowledges the support of the National Science Foundation under grant DMR-1708615. L.-Q.C. acknowledges the support of the National Science Foundation under grant DMR-1744213. R.Y. and Y.C. acknowledges the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper (URL: http://www.tacc.utexas.edu). A portion of this research was conducted at the Center for Nanophase Materials Sciences, which also supports (RKV, SVK), and is a US DOE Office of Science User Facility.\n\n# Author Contributions (text)\n\nJ.C.A. and L.W.M. designed and conceived the experiments. J.C.A., B.N., J.M., S.v.d.W., and J.B. developed the concept and trained the deep-learning neural network autoencoder. J.C.A., S.P., R.K.V. conducted the band excitation piezoresponse force microscopy experiments. S.P. and J.C.A. synthesized the films using pulsed-laser deposition. J.C.A., R.Y., Y.C., and L.Q.C. designed and conducted the phase-field simulations.\n\n# References\n\n## Main Text References\n\n1.\tMartin, L. W. & Rappe, A. M. Thin-film ferroelectric materials and their applications. Nat. Rev. Mater. 2, 16087-1\u201314 (2017).\n2.\tSchlom, D. G. D. G., Chen, L.-Q. L. Q., Fennie, C. J. C. J., Gopalan, V., Muller, D. A. D. A., Pan, X., Ramesh, R. & Uecker, R. Elastic strain engineering of ferroic oxides. MRS Bull. 39, 118\u2013130 (2014).\n3.\tDamodaran, A. R., Agar, J. C., Pandya, S., Chen, Z., Dedon, L., Xu, R., Apgar, B., Saremi, S. & Martin, L. W. New modalities of strain-control of ferroelectric thin films. J. Phys. Condens. Matter 28, 263001-1\u201336 (2016).\n4.\tLee, K. S., Choi, J. H., Lee, J. Y. & Baik, S. Domain formation in epitaxial Pb(Zr,Ti)O3 thin films. J. Appl. Phys. 90, 4095\u20134102 (2001).\n5.\tGanpule, C. S., Nagarajan, V., Hill, B. K., Roytburd, A. L., Williams, E. D., Ramesh, R., Alpay, S. P., Roelofs, A., Waser, R. & Eng, L. M. Imaging three-dimensional polarization in epitaxial polydomain ferroelectric thin films. J. Appl. Phys. 91, 1477\u20131481 (2002).\n6.\tDamodaran, A. R. R., Pandya, S., Agar, J. C. C., Cao, Y., Vasudevan, R. K. K., Xu, R., Saremi, S., Li, Q., Kim, J., Mccarter, M. R. R., Dedon, L. R. R., Angsten, T., Balke, N., Jesse, S., Asta, M., Kalinin, S. V. V & Martin, L. W. W. Three\u2010State Ferroelastic Switching and Large Electromechanical Responses in PbTiO3 Thin Films. Adv. Mater. 29, 1702069-1\u20139 (2017).\n7.\tAgar, J. C., Damodaran, A. R., Okatan, M. B., Kacher, J., Gammer, C., Vasudevan, R. K., Pandya, S., Dedon, L. R., Mangalam, R. V. K., Velarde, G. A., Jesse, S., Balke, N., Minor, A. M., Kalinin, S. V. & Martin, L. W. 
Highly mobile ferroelastic domain walls in compositionally graded ferroelectric thin films. Nat. Mater. 15, 549\u2013556 (2016).\n8.\tAgar, J. C., Damodaran, A. R., Velarde, G. A., Pandya, S., Mangalam, R. V. K. & Martin, L. W. Complex Evolution of Built-in Potential in Compositionally-Graded PbZr1-xTixO3 Thin Films. ACS Nano 9, 7332\u20137342 (2015).\n9.\tYadav, A. K., Nelson, C. T., Hsu, S. L., Hong, Z., Clarkson, J. D., Schlepueetz, C. M., Damodaran, A. R., Shafer, P., Arenholz, E., Dedon, L. R., Chen, D., Vishwanath, A., Minor, A. M., Chen, L. Q., Scott, J. F., Martin, L. W., Ramesh, R., Schlep\u00fcetz, C. M. C. M., Damodaran, A. R., et al. Observation of polar vortices in oxide superlattices. Nature 530, 198\u2013201 (2016).\n10.\tZubko, P., Wojdel, J. C., Hadjimichael, M., Fernandez-Pena, S., Sene, A., Luk\u2019yanchuk, I., Triscone, J.-M. & Iniguez, J. Negative capacitance in multidomain ferroelectric superlattices. Nature 534, 524\u2013528 (2016).\n11.\tMundy, J. A., Brooks, C. M., Holtz, M. E., Moyer, J. A., Das, H., Rebola, A. F., Heron, J. T., Clarkson, J. D., Disseler, S. M., Liu, Z., Farhan, A., Held, R., Hovden, R., Padgett, E., Mao, Q., Paik, H., Misra, R., Kourkoutis, L. F., Arenholz, E., et al. Atomically engineered ferroic layers yield a room - temperature magnetoelectric multiferroic. Nature 537, 523\u2013527 (2016).\n12.\tDaniels, J. E., Finlayson, T. R., Davis, M., Damjanovic, D., Studer, A. J., Hoffman, M. & Jones, J. L. Neutron diffraction study of the polarization reversal mechanism in [111]c-oriented Pb(Zn1/3Nb2/3)O3-XPbTiO3. J. Appl. Phys. 101, 104108-1\u20137 (2007).\n13.\tXu, R., Liu, S., Grinberg, I., Karthik, J., Damodaran, A. R., Rappe, A. M. & Martin, L. W. Ferroelectric polarization reversal via successive ferroelastic transitions. Nat. Mater. 14, 79\u201386 (2015).\n14.\tChen, Z. H., Damodaran, A. R., Xu, R., Lee, S. & Martin, L. W. Effect of asymmetry mismatch on the domain structure of rhombohedral BiFeO3 thin films. Appl. Phys. Lett. 104, 182908-1\u20135 (2014).\n15.\tKhan, A. I., Marti, X., Serrao, C., Ramesh, R. & Salahuddin, S. Voltage-Controlled Ferroelastic Switching in Pb(Zr0.2Ti0.8)O3 Thin Films. Nano Lett. 15, 2229\u20132234 (2015).\n16.\tFeigl, L., McGilly, L. J., Sandu, C. S. & Setter, N. Compliant ferroelastic domains in epitaxial Pb(Zr,Ti)O3 thin films. Appl. Phys. Lett. 104, 172904-1\u20134 (2014).\n17.\tMcGilly, L. J., Yudin, P., Feigl, L., Tagantsev, A. K. & Setter, N. Controlling domain wall motion in ferroelectric thin films. Nat. Nanotechnol. 10, 145\u2013150 (2015).\n18.\tFeigl, L., Sluka, T., McGilly, L. J., Crassous, A., Sandu, C. S. & Setter, N. Controlled creation and displacement of charged domain walls in ferroelectric thin films. Sci. Rep. 6, 31323-1\u20137 (2016).\n19.\tJiang, J., Bai, Z. L., Chen, Z. H., He, L., Zhang, D. W., Zhang, Q. H., Shi, J. A., Park, M. H., Scott, J. F., Hwang, C. S. & Jiang, A. Q. Temporary formation of highly conducting domain walls for non-destructive read-out of ferroelectric domain-wall resistance switching memories. Nat. Mater. 17, 49\u201356 (2018).\n20.\tSharma, P., Zhang, Q., Sando, D., Lei, C. H., Liu, Y., Li, J., Nagarajan, V. & Seidel, J. Nonvolatile ferroelectric domain wall memory. Sci. Adv. 3, e1700512 (2017).\n21.\tNelson, C. T., Gao, P., Jokisaari, J. R., Heikes, C., Adamo, C., Melville, A., Baek, S.-H., Folkman, C. M., Winchester, B., Gu, Y., Liu, Y., Zhang, K., Wang, E., Li, J., Chen, L.-Q., Eom, C.-B., Schlom, D. G. & Pan, X. Domain Dynamics During Ferroelectric Switching. 
Science (80-. ). 334, 968\u2013971 (2011).\n22.\tHart, J. L., Liu, S., Lang, A. C., Hubert, A., Zukauskas, A., Canalias, C., Beanland, R., Rappe, A. M., Arredondo, M. & Taheri, M. L. Electron-beam-induced ferroelectric domain behavior in the transmission electron microscope: Toward deterministic domain patterning. Phys. Rev. B 94, 174104-1\u20137 (2016).\n23.\tSomnath, S., Belianinov, A., Kalinin, S. V & Jesse, S. Full information acquisition in piezoresponse force microscopy. Appl. Phys. Lett. 107, 263102-1\u20134 (2015).\n24.\tJesse, S., Vasudevan, R. K. R. K., Collins, L., Strelcov, E., Okatan, M. B. M. B., Belianinov, A., Baddorf, A. P. A. P., Proksch, R. & Kalinin, S. V. S. V. Band excitation in scanning probe microscopy: recognition and functional imaging. Annu. Rev. Phys. Chem. 65, 519\u2013536 (2014).\n25.\tAhn, Y., Park, J., Pateras, A., Rich, M. B., Zhang, Q., Chen, P., Yusuf, M. H., Wen, H., Dawber, M. & Evans, P. G. Photoinduced Domain Pattern Transformation in Ferroelectric-Dielectric Superlattices. Phys. Rev. Lett. 119, 57601-1\u20136 (2017).\n26.\tLaanait, N., Zhang, Z., Schlep\u00fctz, C. M., Vila-Comamala, J., Highland, M. J. & Fenter, P. Full-field X-ray reflection microscopy of epitaxial thin-films. J. Synchrotron Radiat. 21, 1252\u20131261 (2014).\n27.\tKalinin, S. V, Sumpter, B. G. & Archibald, R. K. Big--deep--smart data in imaging for guiding materials design. Nat. Mater. 14, 973\u2013980 (2015).\n28.\tPerozzi, B., Al-Rfou, R. & Skiena, S. Deepwalk: Online learning of social representations. in Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining 701\u2013710 (2014).\n29.\tGkotsis, G., Oellrich, A., Velupillai, S., Liakata, M., Hubbard, T. J. P., Dobson, R. J. B. & Dutta, R. Characterisation of mental health conditions in social media using Informed Deep Learning. Sci. Rep. 7, 45141-1\u201310 (2017).\n30.\tTshitoyan, V., Dagdelen, J., Weston, L., Dunn, A., Rong, Z., Kononova, O., Persson, K. A., Ceder, G. & Jain, A. Unsupervised word embeddings capture latent knowledge from materials science literature. Nature 571, 95\u201398 (2019).\n31.\tVinyals, O., Toshev, A., Bengio, S. & Erhan, D. Show and tell: A neural image caption generator. in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. 07-12-June, 3156\u20133164 (2015).\n32.\tdos Santos, C. & Gatti, M. Deep convolutional neural networks for sentiment analysis of short texts. in Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers 69\u201378 (2014).\n33.\tSeveryn, A. & Moschitti, A. Twitter sentiment analysis with deep convolutional neural networks. in Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval 959\u2013962 (2015).\n34.\tSchmidhuber, J. Deep learning in neural networks: An overview. Neural networks 61, 85\u2013117 (2015).\n35.\tGoodfellow, I., Bengio, Y., Courville, A. & Bengio, Y. Deep learning. 1, (MIT press Cambridge, 2016).\n36.\tLeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436\u2013444 (2015).\n37.\tSzegedy, C., Ioffe, S., Vanhoucke, V. & Alemi, A. A. Inception-v4, inception-resnet and the impact of residual connections on learning. in AAAI 4, 12 (2017).\n38.\tSutskever, I., Vinyals, O. & Le, Q. V. Sequence to sequence learning with neural networks. in Advances in neural information processing systems 3104\u20133112 (2014).\n39.\tWu, Y., Schuster, M., Chen, Z., Le, Q. 
V, Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K. & others. Google\u2019s neural machine translation system: Bridging the gap between human and machine translation. arXiv Prepr. arXiv1609.08144 (2016).\n40.\tWu, C. H. & McLarty, J. W. Neural networks and genome informatics. 1, (Elsevier, 2012).\n41.\tDery, L. M., Nachman, B., Rubbo, F. & Schwartzman, A. Weakly supervised classification in high energy physics. J. High Energy Phys. 2017, 145-1\u20135 (2017).\n42.\tNaul, B., Bloom, J. S., P\u00e9rez, F. & van der Walt, S. A recurrent neural network for classification of unevenly sampled variable stars. Nat. Astron. 2, 151\u2013155 (2017).\n43.\tZhang, Y. & Kim, E.-A. Quantum loop topography for machine learning. Phys. Rev. Lett. 118, 216401-1\u20134 (2017).\n44.\tCh\u2019ng, K., Carrasquilla, J., Melko, R. G. & Khatami, E. Machine learning phases of strongly correlated fermions. Phys. Rev. X 7, 31038-1\u20139 (2017).\n45.\tBohrdt, A., Chiu, C. S., Ji, G., Xu, M., Greif, D., Greiner, M., Demler, E., Grusdt, F. & Knap, M. Classifying snapshots of the doped Hubbard model with machine learning. Nat. Phys. (2019). doi:10.1038/s41567-019-0565-x\n46.\tXie, T. & Grossman, J. C. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Phys. Rev. Lett. 120, 145301-1\u20136 (2018).\n47.\tSegler, M. H. S., Preuss, M. & Waller, M. P. Planning chemical syntheses with deep neural networks and symbolic AI. Nature 555, 604\u2013610 (2018).\n48.\tXu, W. & LeBeau, J. M. A deep convolutional neural network to analyze position averaged convergent beam electron diffraction patterns. Ultramicroscopy 188, 59\u201369 (2018).\n49.\tZiatdinov, M., Maksov, A. & Kalinin, S. V. Learning surface molecular structures via machine vision. NPJ Comput. Mater. 3, 31-1\u20139 (2017).\n50.\tBorodinov, N., Neumayer, S., Kalinin, S. V., Ovchinnikova, O. S., Vasudevan, R. K. & Jesse, S. Deep neural networks for understanding noisy data applied to physical property extraction in scanning probe microscopy. npj Comput. Mater. 5, 25-1\u20138 (2019).\n51.\tGhosh, K., Stuke, A., Todorovi\u0107, M., J\u00f8rgensen, P. B., Schmidt, M. N., Vehtari, A. & Rinke, P. Deep Learning Spectroscopy: Neural Networks for Molecular Excitation Spectra. Adv. Sci. 6, 1801367-1\u20137 (2019).\n52.\tZhang, Y., Mesaros, A., Fujita, K., Edkins, S. D., Hamidian, M. H., Ch\u2019ng, K., Eisaki, H., Uchida, S., Davis, J. C. S., Khatami, E. & Kim, E.-A. Machine learning in electronic-quantum-matter imaging experiments. Nature 570, 484\u2013490 (2019).\n53.\tRem, B. S., K\u00e4ming, N., Tarnowski, M., Asteria, L., Fl\u00e4schner, N., Becker, C., Sengstock, K. & Weitenberg, C. Identifying quantum phase transitions using artificial neural networks on experimental data. Nat. Phys. (2019). doi:10.1038/s41567-019-0554-0\n54.\tNeumayer, S. M., Ievlev, A. V., Collins, L., Vasudevan, R., Baghban, M. A., Ovchinnikova, O. S., Jesse, S., Gallo, K., Rodriguez, B. J. & Kalinin, S. V. Surface chemistry controls anomalous ferroelectric behavior in lithium niobate. ACS Appl. Mater. Interfaces 10, 29153\u201329160 (2018).\n55.\tCaprioli, R. M., Farmer, T. B. & Gile, J. Molecular imaging of biological samples: localization of peptides and proteins using MALDI-TOF MS. Anal. Chem. 69, 4751\u20134760 (1997).\n56.\tNeacsu, C. C., Dreyer, J., Behr, N. & Raschke, M. B. Scanning-probe Raman spectroscopy with single-molecule sensitivity. Phys. Rev. 
B 73, 193406-1\u20138 (2006).\n57.\tKim, Y.-M., Morozovska, A., Eliseev, E., Oxley, M. P., Mishra, R., Selbach, S. M., Grande, T., Pantelides, S. T., Kalinin, S. V & Borisevich, A. Y. Direct observation of ferroelectric field effect and vacancy-controlled screening at the BiFeO3/LaxSr1- xMnO3 interface. Nat. Mater. 13, 1019\u20131025 (2014).\n58.\tCueva, P., Hovden, R., Mundy, J. A., Xin, H. L. & Muller, D. A. Data processing for atomic resolution electron energy loss spectroscopy. Microsc. Microanal. 18, 667\u2013675 (2012).\n59.\tKluyver, T., Ragan-Kelley, B., P\u00e9rez, F., Granger, B., Bussonnier, M., Frederic, J., Kelley, K., Hamrick, J., Grout, J., Corlay, S., Ivanov, P., Avila, D., Abdalla, S. & Willing, C. Jupyter Notebooks -- a publishing format for reproducible computational workflows. in Positioning and Power in Academic Publishing: Players, Agents and Agendas (eds. Loizides, F. & Schmidt, B.) 87\u201390 (2016).\n60.\tYoung, T., Hazarika, D., Poria, S. & Cambria, E. Recent Trends in Deep Learning Based Natural Language Processing [Review Article]. IEEE Comput. Intell. Mag. 13, 55\u201375 (2018).\n61.\tHochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735\u20131780 (1997).\n62.\tSrivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929\u20131958 (2014).\n63.\tCore Layers. Keras Documentation Available at: https://keras.io/layers/core/#dense. \n64.\tChen, L.-Q. Phase-field method of phase transitions/domain structures in ferroelectric thin films: A review. J. Am. Ceram. Soc. 91, 1835\u20131844 (2008).\n65.\tLi, Y. L., Hu, S. Y., Liu, Z. K. & Chen, L. Q. Phase-field model of domain structures in ferroelectric thin films. Appl. Phys. Lett. 78, 3878\u20133880 (2001).\n66.\tHaun, M. J., Zhuang, Z. Q., Furman, E., Jang, S. J. & Cross, L. E. Thermodynamic theory of the lead zirconate-titanate solid solution system, part III: Curie constant and sixth-order polarization interaction dielectric stiffness coefficients. Ferroelectrics 99, 45\u201354 (1989).\n67.\tXue, F., Wang, J. J., Sheng, G., Huang, E., Cao, Y., Huang, H. H., Munroe, P., Mahjoub, R., Li, Y. L., Nagarajan, V. & Chen, L. Q. Phase field simulations of ferroelectrics domain structures in PbZrx<\\sub>Ti1-xO3 bilayers. Acta Mater. 61, 2909\u20132918 (2013).\n68.\tChen, L. Q. & Shen, J. Applications of semi-implicit Fourier-spectral method to phase field equations. Comput. Phys. Commun. 108, 147\u2013158 (1998).\n\n\n## Supplementary Notes References\n\n1.\tGruverman, A., Auciello, O. & Tokumoto, H. Imaging and control of domain structures in ferroelectric thin films via scanning force microscopy. Annu. Rev. Mater. Sci. 28, 101\u2013123 (1998).\n2.\tJesse, S. & Kalinin, S. V. Band excitation in scanning probe microscopy: sines of change. J. Phys. D. Appl. Phys. 44, 464006-1\u201333 (2011).\n3.\tRodriguez, B. J., Callahan, C., Kalinin, S. V. & Proksch, R. Dual-frequency resonance-tracking atomic force microscopy. Nanotechnology 18, 475504-1\u20135 (2007).\n4.\tLabuda, A. & Proksch, R. Quantitative measurements of electromechanical response with a combined optical beam and interferometric atomic force microscope. Appl. Phys. Lett. 106, 253103-1\u20134 (2015).\n5.\tPedregosa, F. et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 
12, 2825\u20132830 (2012).\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "d2c482e8ffb6c60db58178e6709757d4900d0b59", "size": 222281, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Jupyter_Paper_Revealing Ferroelectric Switching Character Using Deep Recurrent Neural Networks-Collaboratory.ipynb", "max_stars_repo_name": "jagar2/Revealing-Ferroelectric-Switching-Character-Using-Deep-Recurrent-Neural-Networks", "max_stars_repo_head_hexsha": "7c4a86f8b6b18a0eda53d2bcbd44ad29ef03aaf9", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2018-11-29T02:20:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-30T03:51:06.000Z", "max_issues_repo_path": "Jupyter_Paper_Revealing Ferroelectric Switching Character Using Deep Recurrent Neural Networks-Collaboratory.ipynb", "max_issues_repo_name": "jagar2/Revealing-Ferroelectric-Switching-Character-Using-Deep-Recurrent-Neural-Networks", "max_issues_repo_head_hexsha": "7c4a86f8b6b18a0eda53d2bcbd44ad29ef03aaf9", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Jupyter_Paper_Revealing Ferroelectric Switching Character Using Deep Recurrent Neural Networks-Collaboratory.ipynb", "max_forks_repo_name": "jagar2/Revealing-Ferroelectric-Switching-Character-Using-Deep-Recurrent-Neural-Networks", "max_forks_repo_head_hexsha": "7c4a86f8b6b18a0eda53d2bcbd44ad29ef03aaf9", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2018-11-30T00:37:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-22T15:33:53.000Z", "avg_line_length": 52.2768109125, "max_line_length": 2555, "alphanum_fraction": 0.5863928991, "converted": true, "num_tokens": 39109, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
NO\n\n", "lm_q1_score": 0.5350984434543458, "lm_q2_score": 0.5, "lm_q1q2_score": 0.2675492217271729}} {"text": "## Dependencies and Setup\n\n### Install Process Mining for Python\n\n\n```python\n!pip install pm4py\n!pip install jellyfish\n\n```\n\n Collecting pm4py\n Downloading pm4py-2.2.11.1-py3-none-any.whl (1.5 MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.5 MB 8.1 MB/s \n \u001b[?25hCollecting pulp<=2.1\n Downloading PuLP-2.1-py3-none-any.whl (40.6 MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 40.6 MB 70 kB/s \n \u001b[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from pm4py) (4.41.1)\n Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from pm4py) (3.2.2)\n Requirement already satisfied: cvxopt in /usr/local/lib/python3.7/dist-packages (from pm4py) (1.2.6)\n Requirement already satisfied: graphviz in /usr/local/lib/python3.7/dist-packages (from pm4py) (0.10.1)\n Requirement already satisfied: numpy!=1.19.4 in /usr/local/lib/python3.7/dist-packages (from pm4py) (1.19.5)\n Requirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from pm4py) (2018.9)\n Collecting deprecation\n Downloading deprecation-2.1.0-py2.py3-none-any.whl (11 kB)\n Requirement already satisfied: sympy in /usr/local/lib/python3.7/dist-packages (from pm4py) (1.7.1)\n Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from pm4py) (0.22.2.post1)\n Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from pm4py) (1.4.1)\n Collecting jsonpickle\n Downloading jsonpickle-2.0.0-py2.py3-none-any.whl (37 kB)\n Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from pm4py) (1.1.5)\n Requirement already satisfied: networkx in /usr/local/lib/python3.7/dist-packages (from pm4py) (2.5.1)\n Requirement already satisfied: intervaltree in /usr/local/lib/python3.7/dist-packages (from pm4py) (2.1.0)\n Requirement already satisfied: pydotplus in /usr/local/lib/python3.7/dist-packages (from pm4py) (2.0.2)\n Collecting stringdist\n Downloading StringDist-1.0.9.tar.gz (7.4 kB)\n Requirement already satisfied: lxml in /usr/local/lib/python3.7/dist-packages (from pm4py) (4.2.6)\n Collecting pyvis\n Downloading pyvis-0.1.9-py3-none-any.whl (23 kB)\n Requirement already satisfied: pyparsing>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from pulp<=2.1->pm4py) (2.4.7)\n Requirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from deprecation->pm4py) (21.0)\n Requirement already satisfied: sortedcontainers in /usr/local/lib/python3.7/dist-packages (from intervaltree->pm4py) (2.4.0)\n Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from jsonpickle->pm4py) (4.6.1)\n Requirement already satisfied: typing-extensions>=3.6.4 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->jsonpickle->pm4py) (3.7.4.3)\n Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->jsonpickle->pm4py) (3.5.0)\n Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->pm4py) 
(0.10.0)\n Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->pm4py) (2.8.1)\n Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->pm4py) (1.3.1)\n Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib->pm4py) (1.15.0)\n Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx->pm4py) (4.4.2)\n Requirement already satisfied: ipython>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from pyvis->pm4py) (5.5.0)\n Requirement already satisfied: jinja2>=2.9.6 in /usr/local/lib/python3.7/dist-packages (from pyvis->pm4py) (2.11.3)\n Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=5.3.0->pyvis->pm4py) (0.7.5)\n Requirement already satisfied: pexpect in /usr/local/lib/python3.7/dist-packages (from ipython>=5.3.0->pyvis->pm4py) (4.8.0)\n Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=5.3.0->pyvis->pm4py) (1.0.18)\n Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=5.3.0->pyvis->pm4py) (2.6.1)\n Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=5.3.0->pyvis->pm4py) (0.8.1)\n Requirement already satisfied: setuptools>=18.5 in /usr/local/lib/python3.7/dist-packages (from ipython>=5.3.0->pyvis->pm4py) (57.2.0)\n Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython>=5.3.0->pyvis->pm4py) (5.0.5)\n Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2>=2.9.6->pyvis->pm4py) (2.0.1)\n Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=5.3.0->pyvis->pm4py) (0.2.5)\n Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython>=5.3.0->pyvis->pm4py) (0.2.0)\n Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect->ipython>=5.3.0->pyvis->pm4py) (0.7.0)\n Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->pm4py) (1.0.1)\n Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.7/dist-packages (from sympy->pm4py) (1.2.1)\n Building wheels for collected packages: stringdist\n Building wheel for stringdist (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for stringdist: filename=StringDist-1.0.9-cp37-cp37m-linux_x86_64.whl size=23588 sha256=96514fa72d030bf1db0a6608559ecc4d6774c9e9d123796c00bf659c5e798815\n Stored in directory: /root/.cache/pip/wheels/d7/9c/d4/63bc3d8931de0980b9e4a724dea290bb40b8b1b2bd6227c8da\n Successfully built stringdist\n Installing collected packages: jsonpickle, stringdist, pyvis, pulp, deprecation, pm4py\n Successfully installed deprecation-2.1.0 jsonpickle-2.0.0 pm4py-2.2.11.1 pulp-2.1 pyvis-0.1.9 stringdist-1.0.9\n Collecting jellyfish\n Downloading jellyfish-0.8.4-py3-none-any.whl (10 kB)\n Installing collected packages: jellyfish\n Successfully installed jellyfish-0.8.4\n\n\n\n```python\nfrom pprint import pprint\nimport pm4py\nimport copy\nimport numpy\nimport jellyfish\nimport math\n```\n\n### Import dataset and save as event log\n\n\n\n```python\n\n\n\nfrom google.colab import drive\ndrive.mount('/content/drive')\nlog_file = '/content/drive/MyDrive/pathToXesLogFile'\nlog = pm4py.read_xes(log_file)\n```\n\n Mounted at /content/drive\n\n\n\n HBox(children=(FloatProgress(value=0.0, description='parsing log, completed traces :: ', max=150370.0, style=P\u2026\n\n\n \n\n\n## Preprocessing\n\n### Search for string attributes and save the according max string lenght for every attribute\n\n\n```python\n# All string keys\n\nseen_keys = []\n\nfor t in log:\n for i in range(len(t)):\n event = t[i]\n keys = event.keys()\n for key in keys:\n if (type(event[key]) == str):\n seen_keys.append(key)\n seen_keys = list(dict.fromkeys(seen_keys))\n\nstring_keys = dict.fromkeys(seen_keys)\n\nkeys2 = string_keys.keys()\nfor key in keys2:\n string_keys[key] = 0\n\n#max string_keys\nfor t in log:\n for i in range(len(t)):\n for key in keys2:\n if key in t[i]:\n if len(t[i][key]) > string_keys[key]:\n string_keys[key] = len(t[i][key])\n```\n\n\n```python\nfor t in log:\n for i in range(len(t)):\n event = t[i]\n keys = event.keys()\n for key in keys:\n if (type(event[key]) == str):\n seen_keys.append(key)\n seen_keys = list(dict.fromkeys(seen_keys))\n```\n\n### Search for all numeric attributes and save the attribute names \n\n\n```python\n\n# All numeric keys\n\nseen_keys = []\n\nfor t in log:\n for i in range(len(t)):\n event = t[i]\n keys = event.keys()\n for key in keys:\n if (type(event[key]) == float or type(event[key]) == int):\n seen_keys.append(key)\n seen_keys = list(dict.fromkeys(seen_keys))\n\nseen_keys = dict.fromkeys(seen_keys)\n```\n\n### Calculate the mean values of all numeric attributes\n\n\n```python\n# mean values\nsum = 0\n\nseen_keys = []\n\nfor t in log:\n for i in range(len(t)):\n event = t[i]\n keys = event.keys()\n for key in keys:\n if (type(event[key]) == float or type(event[key]) == int):\n seen_keys.append(key)\n seen_keys = list(dict.fromkeys(seen_keys))\n\nmean_keys = dict.fromkeys(seen_keys)\nkeys2 = mean_keys.keys()\nfor key in keys2:\n mean_keys[key] = 0\n\ncounter_keys = copy.deepcopy(mean_keys)\n\nfor t in log:\n for i in range(len(t)):\n for key in keys2:\n if key in t[i] :\n mean_keys[key] += t[i][key]\n counter_keys[key] += 1\n\nfor key in keys2:\n mean_keys[key] = mean_keys[key] / counter_keys[key]\n print(mean_keys[key])\n```\n\n 63.6816560395868\n 17.815603519745885\n 113.2466582430006\n 0.0788787657112456\n 11.948984392276309\n 46.41002731923214\n 0.0\n\n\n### Calculate the std. 
deviation of all numeric values\n\n\n```python\n\n# std deviation of values\n\ndeviation_keys = dict.fromkeys(seen_keys)\nkeys2 = deviation_keys.keys()\nfor key in keys2:\n deviation_keys[key] = []\n\nfor t in log:\n for i in range(len(t)):\n for key in keys2:\n if key in t[i]:\n deviation_keys[key].append(t[i][key])\n\nfor key in keys2:\n deviation_keys[key] = numpy.std(deviation_keys[key]) \n```\n\n 89.67779636777881\n 36.77930126804085\n 68.21924855798667\n 0.5776847046687658\n 4.023171014461607\n 39.345307223637775\n 0.0\n\n\n### Apply the z score normalisation to the log\n\n\n```python\n# z score of values\n\nnumeric_keys = dict.fromkeys(seen_keys)\n\nz_log = copy.deepcopy(log)\n\nfor t in z_log:\n for i in range(len(t)):\n for key in numeric_keys.keys():\n if key in t[i] and deviation_keys[key] != 0:\n t[i][key] = (t[i][key] - mean_keys[key] ) / deviation_keys[key]\n```\n\n### Calculate the minimum and maximum timestamp per trace\n\n\n```python\n# timestamp min max values per trace\n\nmin_timestamps = []\n\nmax_timestamps = []\n\nfor t in log:\n temp_min_timestamp = t[0]['time:timestamp']\n temp_max_timestamp = t[0]['time:timestamp']\n for i in range(len(t)):\n if temp_min_timestamp > t[i]['time:timestamp']:\n temp_min_timestamp = t[i]['time:timestamp']\n if temp_max_timestamp < t[i]['time:timestamp']:\n temp_max_timestamp = t[i]['time:timestamp']\n min_timestamps.append(temp_min_timestamp)\n max_timestamps.append(temp_max_timestamp)\n```\n\n### Apply min-max normalisation on timestamps\n\n\n```python\n# normalize timestamps\n\nnormalised_log = copy.deepcopy(z_log)\n\nfor y in range (len(log)):\n for i in range (len(log[y])):\n if max_timestamps[y] == min_timestamps[y]:\n normalised_time = 0\n else: \n normalised_time = (log[y][i]['time:timestamp'] - min_timestamps[y]) / (max_timestamps[y] - min_timestamps[y])\n normalised_log[y][i]['time:timestamp'] = normalised_time\n```\n\n### Define distance algorithm\n\n\n```python\n# distance algorithm\n\ndef similarity(dict1, dict2, treshhold, weights):\n longest_value = 0\n distance_score = 0\n distance_sum = 0\n d1_keys = (dict1.keys())\n d2_keys = (dict2.keys())\n\n # Combine the keys from both events to calculate the vector distance between both of them\n both_keys = list(dict.fromkeys(dict1))\n for key in d2_keys:\n if not (key in both_keys):\n both_keys.append(key)\n\n both_keys = dict.fromkeys(both_keys)\n for key in both_keys:\n ## Timestamp distance\n if (key == 'time:timestamp'):\n diff = (dict1[key] - dict2[key])\n both_keys[key] = pow(diff, 2)\n\n # If no value is present for the key set a default value, otherwise take value from event\n\n # Check if value is numeric\n elif key in numeric_keys:\n if key not in dict1:\n d1_score = 0 # is 0 the best solution here?\n else:\n d1_score = dict1[key] \n if key not in dict2:\n d2_score = 0 # is 0 the best solution here?\n else:\n d2_score = dict2[key]\n\n diff = d1_score - d2_score\n both_keys[key] = pow(diff, 2)\n if(key == 'totalPaymentAmount'):\n both_keys[key] = both_keys[key]\n\n\n # Otherwise value must be a string\n else:\n if key not in dict1:\n d1_string = '' # is '' the best solution here?\n else:\n d1_string = dict1[key] \n if key not in dict2:\n d2_string = '' # is '' the best solution here?\n else:\n d2_string = dict2[key]\n\n # calculate levenstein distance\n lev_dist = jellyfish.levenshtein_distance(d1_string, d2_string)\n # execute normalization\n distance_percentage = lev_dist / string_keys[key]\n both_keys[key] = pow(distance_percentage, 2)\n \n # include weights and 
calculate euclidean distance\n for key in both_keys:\n distance_sum += weights[key] * both_keys[key]\n \n distance_score = math.sqrt(distance_sum) \n\n # If distanz is under treshhold return true\n if distance_score < treshhold:\n #print(distance_score)\n return True\n else:\n return False\n```\n\n### Create weight dictionary\n\n\n```python\nseen_keys = []\n\nfor t in log:\n for i in range(len(t)):\n event = t[i]\n keys = event.keys()\n for key in keys:\n seen_keys.append(key)\n seen_keys = list(dict.fromkeys(seen_keys))\n\nweights = dict.fromkeys(seen_keys)\nfor key in seen_keys:\n weights[key] = 1\n```\n\n ['amount', 'org:resource', 'dismissal', 'concept:name', 'vehicleClass', 'totalPaymentAmount', 'lifecycle:transition', 'time:timestamp', 'article', 'points', 'expense', 'notificationType', 'lastSent', 'paymentAmount', 'matricola']\n\n\n## Example execution with weights and treshhold of 0.1\n\n\n```python\nweights1 = copy.deepcopy(weights)\n#f.i. weights1['attribute']=10\n\ntreshhold = 0.1\n\nfor x in range(len(normalised_log)):\n t = normalised_log[x]\n for i in range(len(t)):\n event_outer = t[i]\n for y in range(i+1, len(t)):\n event_inner = t[y]\n flag = similarity(event_outer, event_inner, treshhold, weights1)\n if flag == True:\n print(t)\n \n```\n", "meta": {"hexsha": "997e4895f20c8e4c5f86bf474ddd95d0eba63298", "size": 35687, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Final_Submission.ipynb", "max_stars_repo_name": "Jostafarr/duplicate-detection-algorithm", "max_stars_repo_head_hexsha": "c83190b44a638aff66d7c1e9639e90e80beda546", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Final_Submission.ipynb", "max_issues_repo_name": "Jostafarr/duplicate-detection-algorithm", "max_issues_repo_head_hexsha": "c83190b44a638aff66d7c1e9639e90e80beda546", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Final_Submission.ipynb", "max_forks_repo_name": "Jostafarr/duplicate-detection-algorithm", "max_forks_repo_head_hexsha": "c83190b44a638aff66d7c1e9639e90e80beda546", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.7585170341, "max_line_length": 245, "alphanum_fraction": 0.4969596772, "converted": true, "num_tokens": 4237, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
NO", "lm_q1_score": 0.5350984286266115, "lm_q2_score": 0.5, "lm_q1q2_score": 0.26754921431330575}} {"text": "```python\nimport pandas as pd\nimport ms3 \nfrom ms3.utils import *\nimport os \nfrom ms3 import Score\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics.pairwise import cosine_similarity\nimport numpy as np\n# include directory \nhome_dir = '/home/nulpe/Desktop/Tresillo/'\n```\n\n# Towards a Rhythmical Definition of the Tressilio beat and a Tracing of it in Popular Music \n\n## 1) Introduction & Research Question \n\n### 1.1) Research question: \n\"Can we compute to which extent a defined rhythm, which we refer to as Tresillo rhythm, is used in a given pop song and if so can we measure the intensity of Tresillo rhythm use in billboard songs over the past twenty years?\" \n\n**Discussion:** \nIn our project we would like to discuss the use of a rhythm, which we refer to as 'Tresillo rhythm', in the popular music of the last 20 years. We define this rhythm in our project, given secondary literature, and thus obtain a precise notation and formulization of the Tresillo rhythm. Given this definition we can then compute the similarity between the Tresillo rhythm and the rhythm of a given pop song. Thus, we hope to obtain a similarity coefficient which measures how similar the rythm of a given pop song is to our self defined Tresillo rythm. Given the computed similarity coefficients, we hope to measure the use of the Tresillo rhythm in the top 20 billboard songs of the past 20 years (1999-2019).\n\n\n### 1.2) Assumptions\n\n### 1.3) Data Representation\nInitially our data is represented in the MIDI file format. The representation of music in the MIDI format has the advantage, that often several voices of different instruments are represented in such files. In contrast to musescore, where often only the voice of one instrument (mostly piano) is notated. \nHowever, to obtain a list of onsets of every musical event in a given song, we have to convert our MIDI (.midi) files to Musescore (.mscx) files. \nTo convert and further analyze our files, we will use the [ms3](https://pypi.org/project/ms3/) python library. \nTo convert a directory of .midi files to .mscx files we use following command:\n\n\n\n```python\npath_midi = '/home/nulpe/Desktop/Tresillo/dataset/project_midi/tresillo/'\ntarget = '/home/nulpe/Desktop/Tresillo/dataset/project_mscx/mscx_tresillos_billboard/'\n\ndir_list = os.listdir(path_midi)\n\nfor el in dir_list:\n convert(path_midi+el, target+el[:-4]+'.mscx', MS='musescore3')\n```\n\nGiven the Musescore representation, we can now obtain a list of all musical events in a given song. Given additional information about bars and timing of the onsets, this data representation comprises all information we need for our analysis. The code below exemplifies how our final data representation looks like:\n\n\n```python\ndir_mscx = '/home/nulpe/Desktop/Tresillo/dataset/project_mscx/mscx_tresillos_billboard/Shape of you-Ed Sheran.mscx'\ns = Score(dir_mscx)\ndf = s.mscx.notes\n\ndf.head(5)\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
mcmnmc_onsetmn_onsettimesigstaffvoicedurationnominal_durationscalartiedtpcmidivoltachord_id
011004/4211/81/81<NA>761<NA>6
111004/4611/81/81<NA>761<NA>12
211004/4911/81/81<NA>761<NA>18
311004/4911/81/81<NA>464<NA>18
411004/4211/81/81<NA>868<NA>6
\n
\n\n\n\n**TA instructions**: Precision of Research Question\n\n- State the final version of your research question as you understand it now.\n\n- Make all assumptions and hypotheses explicit.\n\n- In case Milestone 2 did not include the final data representation that you are analyzing, present the representation you finally used and the necessary steps to get to it.\n\n**Thoughts Aurel:** \n - Reasearch Question: Move to a fuzzy definition of Tresillio-ness: E.g.: \n Are pop songs increasingly using a rythm pattern which is similar to a rythm pattern which we reffer to as 'Tresillio rythm'? \n \n - Assumptions are very important for them. Here we have to note our definition(s) of the Tresillio rythm pattern and how we derive them (incl sources). Furthermore, we have to discuss cases where there are Rythms which are similar to the Tresillio rythm but not equivalent (e.g.: reggaeton) and how we deal with them computationally. \n \n - Here we have to discuss the conversion of our MIDI files to the musescore3 file format. Furthermore, we have to discuss what the musescore3 format offers us, and why it is the better choice for our analysis. \n \n **Toughts Florian:**\n -Assumptions:\n \n -30 second piece of pop song is suitable to identfy the main rythm\n \n -Main rythm can be identified by counting the onsets\n \n -the great majority of pop songs comes in 4/4 (not shure if we really need that assumption)\n \n\n\n\n\n \n\n\n\n## 2) Methods\n\n### 2.1) Definition of the Tresillo rhythm\nToDo Florian (cite relevant literature to define tresillo)\n\n### 2.2) Rhythm histograms and vectors \n\nTo be able to measure the similarity between rhythm we must have a clear definition and thus following representation of rhythm. In general, one can define rhythm as \"a series of onsets and durations of musical events.\u201d (Rohrmeier, 2020). In our specific case however, we are interested in the dominat and repeating rhythm of a given song. Therefore we prefer a narrower definition of rhythm as \u201crepeated recurrences in alternate heavy and light beats\u201d (Chin and Wu, 1992). To furthermore simplify our data, we assume that the main rhythm of a song can be defined by the onsets of its musical events (notes). \nTo obtain a representation of the dominate rhythm of a song, we preceed to aggregate all musical onset to one bar. Collapsing all musical onsets to one bar and thus obtaining onset 'histograms' is a common pratice and has been used beside others to analyze western classical music (Palmer and Krumhansl, 1990) and american folk music (Huron and Ommen, 2006). \nWith the onset histogram of a song we can compile a n dimensional vector for each song, which we refer to as a 'rhythm vector'. Given that the number of songs with meters other than 4/4 is negible, we only consider songs with a 4/4 meter in our analysis. Given that we only consider songs with 4/4 meters, we obtain for each song a 16 dimensional vector. \n\n\n\n\n### 2.3) Naive approach: Rhythm simularity measured with cosine simularity \nGiven the 16 dimensional rhythm vectors we obtain following the methode described above, we can now compute simple similarity metrics. \nIn rhythm analysis a common similarity metric which is used to calculate the similarity between two rhythm vectors is the cosine distance (see: anteli et al., 2014; Parry and Essa, 2003). 
The cosine similarity metric is scale invariant, which is desirable for rhythm similarity because only the relative frequencies of onsets matter, not the absolute frequencies. \nThe cosine similarity between two vectors A and B is defined as follows: \n\\begin{equation}\n\\cos ({\\bf A},{\\bf B})= \\frac{{\\bf A}\\cdot{\\bf B}}{\\|{\\bf A}\\| \\|{\\bf B}\\|} = \\frac{ \\sum_{i=1}^{n}{{\\bf A}_i{\\bf B}_i} }{ \\sqrt{\\sum_{i=1}^{n}{({\\bf A}_i)^2}} \\sqrt{\\sum_{i=1}^{n}{({\\bf B}_i)^2}} }\n\\end{equation}\n\nGiven the definition of the cosine similarity, we can now compute the similarity between our self-defined Tresillo rhythm and the Billboard songs. \nFirst, however, we will validate this similarity metric by testing it on our self-compiled lists of songs which do contain a Tresillo rhythm and songs which do not. We then compute the mean 'Tresillo-ness' (similarity to the Tresillo rhythm) of both samples. By employing the bootstrapping method we can also obtain a measure of uncertainty, in the form of 2.5% and 97.5% confidence bounds.\n\n### 2.4) Reducing noise: (Pushkar)\n\n**TA instructions**: \n- How did you deal with the problems you mentioned in Milestone 2?\n\n- Which methods did you use to obtain the final results? Provide self-contained explanations and make sure to cite relevant literature where appropriate.\n\n- Explain your core calculations using equations.\n\n- Do not describe all the methods you tried out but only those that led to the results: the final analysis is not an exploratory analysis anymore.\n\n- Specify any adjustments you made to pre-existing methods\n\n\n**Thoughts Aurel**: \n - Talk about how we got to the bar representation of our music. Furthermore, also discuss how we get to a 'perfect Tresillo histogram'\n \n - Here I propose we try out several things and compare the results of several methods:\n \n a) The first big topic is how we define the perfect Tresillo: \n 1. Given predefined rhythm patterns (by Florian)\n 2. Given songs with high Tresillo-ness\n - All instruments collapsed\n - Only key instruments\n - Certain instruments \n \n b) The second big question is how we measure Tresillo-ness in the pop songs; I would suggest three approaches. All approaches require that we first obtain 16-dimensional rhythm vectors for each song of interest: \n 1. Compare our vanilla self-defined Tresillo rhythm vector with all vectors of our pop songs. Measure distance or similarity with some commonly used distance measure from the literature\n 2. Very similar to 1), but this time use the Tresillo rhythm vector as defined by our songs\n 3. Prior clustering of the rhythm vectors. Obtain the centroid and measure Tresillo-ness in the charts with it (method as proposed by Pushkar)\n \n- Equations should be included in the prior part\n- Discuss critically any outliers, problems and limitations of our methodology. Extra focus on the question of how we deal with related but not identical rhythms (e.g. reggaeton)\n\n\n**Thoughts Florian**\n- Following the last paper discussion, one measurement we could use is the cosine distance \n- Not sure about an instrument selection. We can mention that a reduction seems not favorable, as the Tresillo rhythm is presented by different instruments throughout the dataset\n\n## 3) Final Results\n\n### 3.1) Onset histograms and rhythm vectors \nIn this first part we will use onset histograms to compute rhythm vectors. 
\nTo obtain the onset histogram of a given song, we use the notes representation provided by the [ms3](https://pypi.org/project/ms3/) library and collapse all musical onsets into one bar. In the example below we compile the histograms for our self-defined 'Vanilla Tresillo' and for the example song 'Shape of You' by Ed Sheeran. Then we proceed to compute the rhythm vectors for both songs.\n\n\n```python\n# paths to both examples\nshape_of_you = home_dir+'dataset/project_mscx/mscx_tresillos_billboard/Shape of you-Ed Sheran.mscx'\nvanilla_tresillo = home_dir+'dataset/project_mscx/mscx_tresillos/Vanilla_Tresillo.mscx'\n\n\n# get the note scores of both examples\ndf_shape_of_you = Score(shape_of_you).mscx.notes\ndf_vanilla_tresillo = Score(vanilla_tresillo).mscx.notes\n\n# calculate the quarter-beat position (16-step grid per bar) for each note\ndf_shape_of_you['quarter_beats'] = (df_shape_of_you.mc_onset*16).astype('int32')\ndf_vanilla_tresillo['quarter_beats'] = (df_vanilla_tresillo.mc_onset*16).astype('int32')\n\n\nfig, ax = plt.subplots(1,2, figsize=(12,3))\n\nax[0].hist(df_shape_of_you['quarter_beats'], bins=16)\nax[1].hist(df_vanilla_tresillo['quarter_beats'], bins=16)\nax[0].set_xlabel('quarter_beats')\nax[1].set_xlabel('quarter_beats')\nax[0].xaxis.set_ticks(np.arange(0, 16, 1))\nax[1].xaxis.set_ticks(np.arange(0, 16, 1))\nax[0].set_ylabel('count')\nax[0].set_title('Shape of you')\nax[1].set_title('Vanilla Tresillo')\n\n# note to self: mention somewhere that we are working with 16 quarter-beat positions per 4/4 bar\n```\n\nIn a next step we compile the rhythm vector of each song from this histogram, i.e., every dimension holds the absolute frequency of onsets at one of the 16 positions of the bar. This is done as follows:\n\n\n```python\nrhythm_vector_shape_you = df_shape_of_you.groupby(['quarter_beats'])['mn'].agg(['count'])\nrhythm_vector_shape_you = rhythm_vector_shape_you.reindex(list(range(0,16)),fill_value=0).T\nrhythm_vector_shape_you\n```\n
\n\n*(Output: the 1 x 16 rhythm vector of 'Shape of You', i.e. the onset count at each of the 16 `quarter_beats` positions 0 to 15.)*\n
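\n\nAs a preview of the similarity computation described in Section 2.3, the following minimal sketch turns the two note tables from above into 16-dimensional rhythm vectors and compares them with cosine similarity. It assumes the `df_shape_of_you` and `df_vanilla_tresillo` dataframes computed in the previous cells; the helpers `to_rhythm_vector` and `cosine_similarity` are our own illustration and not part of the `ms3` library.\n\n\n```python\nimport numpy as np\n\n\ndef to_rhythm_vector(df):\n    # 16-dimensional vector of onset counts, one entry per quarter_beats position\n    counts = df.groupby('quarter_beats')['mn'].count()\n    return counts.reindex(range(16), fill_value=0).to_numpy(dtype=float)\n\n\ndef cosine_similarity(a, b):\n    # scale invariant: only the relative distribution of onsets matters\n    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n\n\nv_song = to_rhythm_vector(df_shape_of_you)\nv_tresillo = to_rhythm_vector(df_vanilla_tresillo)\nprint(cosine_similarity(v_song, v_tresillo))\n```\n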
\n\n\n\nUnder the assumption that the rhythm vectors of distinct voices within a song might contain information we want to preserve, we compiled one rhythm vector per instrument in a song as follows:\n\n\n```python\n# define instruments\nshape_of_you_score = Score(shape_of_you)\ninstrument_dict = {}\nfor key in shape_of_you_score.mscx.metadata['parts']:\n    for staff in shape_of_you_score.mscx.metadata['parts'][key].keys():\n        instrument_dict[staff] = key\n\n\n# map staff to voice/instrument\ndf_shape_of_you['instrument'] = [instrument_dict[el] if el in instrument_dict else 'na' for el in df_shape_of_you.staff]\n\n# compute rhythm vectors per voice\nrhythm_vector_shape_you_instruments = df_shape_of_you.groupby(['instrument','quarter_beats'])['mn'].agg(['count'])\nrhythm_vector_shape_you_instruments = rhythm_vector_shape_you_instruments.groupby(level=0).apply(lambda x: x.reset_index(level = 0).drop(['instrument'],axis=1).reindex(list(range(0,16)),fill_value=0).T)\nrhythm_vector_shape_you_instruments = rhythm_vector_shape_you_instruments.reset_index()\nrhythm_vector_shape_you_instruments\n```\n
\n\n*(Output: one 16-dimensional rhythm vector of onset counts per instrument of 'Shape of You': Grand Piano; Marimba, untitled; Melodic Drum; Overdrive Gtr; Percussion; Tenor Sax; Woodblock; Xylophone.)*\n
\n\n\n\nIf we want to compile rhythm vectors (per voice) for all mscx files in one directory, we can use the following loop:\n\n\n```python\ndef rythm_vectors(in_dir, out_dir):\n    list_sheet_music = os.listdir(in_dir)\n    df_rythm_vectors = []\n\n    for el in list_sheet_music:\n        if el[-4:] == 'mscx':\n\n            # get notes with onsets\n            s = Score(in_dir + el)\n            df = s.mscx.notes\n\n            # define instruments\n            instrument_dict = {}\n            for key in s.mscx.metadata['parts']:\n                for staff in s.mscx.metadata['parts'][key].keys():\n                    instrument_dict[staff] = key\n\n            # map staff to instrument\n            df['instrument'] = [instrument_dict[st] if st in instrument_dict else 'na' for st in df.staff]\n\n            # define quarter beat\n            df['quarter_beats'] = (df.mc_onset*16).astype('int32')\n\n            # make rhythm matrix & data frame\n            df_histogram = df.groupby(['instrument','quarter_beats'])['mn'].agg(['count'])\n            df_histogram = df_histogram.groupby(level=0).apply(lambda x: x.reset_index(level = 0).drop(['instrument'],axis=1).reindex(list(range(0,16)),fill_value=0).T)\n            df_histogram = df_histogram.reset_index()\n\n            df_histogram.insert(loc=0, column='song_artist', value=el[:-5])\n\n            # collect the per-song rhythm vectors (avoids adding the first song twice)\n            df_rythm_vectors.append(df_histogram)\n\n    # concatenate and write once, after the loop\n    df_rythm_vectors = pd.concat(df_rythm_vectors, axis=0)\n    df_rythm_vectors.to_csv(out_dir, index=False)\n\n\ndir_sheet_music = home_dir + '/dataset/project_mscx/mscx_billboard/'\nout_dir = home_dir + '/dataset/rythm_vectors/rythm_vectors_billboard.csv'\nrythm_vectors(dir_sheet_music, out_dir)\n```\n\n\n\n**TA instructions**: \n- Present your results in relation to your research question.\n- Present them in a logical order that does not have to be the order in which you achieved them.\n\n**Thoughts Aurel:** \n- This will be our big code part. I would propose the following order:\n 1. Histograms of the self-defined Tresillo & critical discussion/reasoning\n 2. Histogram of the Tresillo as defined by popular songs. This includes variations such as taking only some instruments, etc.\n 3. Comparison of Tresillo histograms to histograms where we can be sure that there is no Tresillo\n 4. Discussion of the clustering method, either k-means clustering or a method where we don't have to set the number of clusters\n 5. Finding Tresillo-ness in the pop charts with all three methods: a) Vanilla Tresillo-ness b) Tresillo songs vector c) \n \n \n\n## 4) Outlook on final interpretation\n\nPoints to discuss as stated by TAs: \n- Interpreting your results is the final step that you will do in preparing Milestone 4 (your presentations). Please end your submission by giving a first, preliminary outlook on this final step: what aspects of your results do you find interesting with respect to your hypotheses and previous literature? What do you think might be the main points to elaborate upon in the discussion? 
\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "5dfa293f9df49ad7df6de3436edaa5e75d629c73", "size": 47306, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/milestone_3_sugessted_structure_aurel.ipynb", "max_stars_repo_name": "pushkarjajoria/Tresillo", "max_stars_repo_head_hexsha": "9b72373746192d00d82c6e9c2de19d70648108eb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/milestone_3_sugessted_structure_aurel.ipynb", "max_issues_repo_name": "pushkarjajoria/Tresillo", "max_issues_repo_head_hexsha": "9b72373746192d00d82c6e9c2de19d70648108eb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/milestone_3_sugessted_structure_aurel.ipynb", "max_forks_repo_name": "pushkarjajoria/Tresillo", "max_forks_repo_head_hexsha": "9b72373746192d00d82c6e9c2de19d70648108eb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.7188465499, "max_line_length": 11132, "alphanum_fraction": 0.6004523739, "converted": true, "num_tokens": 6792, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5428632831725052, "lm_q2_score": 0.4921881357207956, "lm_q1q2_score": 0.2671908672959457}} {"text": "\n\n# Tutorial 2: Introduction to GANs and Density Ratio Estimation Perspective of GANs\n\n**Week 2, Day 5: Generative Models**\n\n**By Neuromatch Academy**\n\n__Content creators:__ Kai Xu, Seungwook Han, Akash Srivastava\n\n__Content reviewers:__ Polina Turishcheva, Melvin Selim Atay, Hadi Vafaei, Deepak Raya, Charles J Edelson, Kelson Shilling-Scrivo\n\n__Content editors:__ Charles J Edelson, Kelson Shilling-Scrivo, Spiros Chavlis\n\n__Production editors:__ Arush Tagade, Spiros Chavlis\n\n**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**\n\n

\n\n---\n\n## Tutorial Objectives\n\nThe goal of this tutorial is two-fold; first you will be introduced to GANs training, and you will be able to understand how GANs are connected to other generative models that we have been before. \n\nBy the end of the first part of this tutorial you will be able to:\n- Understand, at a high level, how GANs are implemented.\n- Understand the training dynamics of GANs. \n- Know about a few failure modes of GAN training.\n- Understand density ratio estimation using a binary classifier\n- Understand the connection between GANs and other generative models.\n- Implement a GAN.\n\n\n```python\n# @title Tutorial slides\n\n# @markdown These are the slides for the videos in this tutorial\n\n# @markdown If you want to locally download the slides, click [here](https://osf.io/dftym/download)\nfrom IPython.display import IFrame\nIFrame(src=f\"https://mfr.ca-1.osf.io/render?url=https://osf.io/dftym/?direct%26mode=render%26action=download%26mode=render\", width=854, height=480)\n```\n\n\n\n\n\n\n\n\n\n\n---\n# Setup\n\n\n```python\n# @title Install dependencies\n!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet\nfrom evaltools.airtable import AirtableForm\n\n# generate airtable form\natform = AirtableForm('appn7VdPRseSoMXEG','W2D5_T2','https://portal.neuromatchacademy.org/api/redirect/to/9c55f6cb-cdf9-4429-ac1c-ec44fe64c303')\n```\n\n Building wheel for evaltools (setup.py) ... \u001b[?25l\u001b[?25hdone\n\n\n\n```python\n# Imports\nimport torch\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n# @title Figure settings\nimport ipywidgets as widgets # interactive display\n%config InlineBackend.figure_format = 'retina'\nplt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle\")\n```\n\n\n```python\n# @title Plotting functions\n\nld_true = [-7.0066e-01, -2.6368e-01, -2.4250e+00, -2.0247e+00, -1.1795e+00,\n -4.5558e-01, -7.1316e-01, -1.0932e-01, -7.8608e-01, -4.5838e-01,\n -1.0530e+00, -9.1201e-01, -3.8020e+00, -1.7787e+00, -1.2246e+00,\n -6.5677e-01, -3.6001e-01, -2.2313e-01, -1.8262e+00, -1.2649e+00,\n -3.8330e-01, -8.8619e-02, -9.2357e-01, -1.3450e-01, -8.6891e-01,\n -5.9257e-01, -4.8415e-02, -3.3197e+00, -1.6862e+00, -9.8506e-01,\n -1.1871e+00, -7.0422e-02, -1.7378e+00, -1.3099e+00, -1.8926e+00,\n -3.4508e+00, -1.5696e+00, -7.2787e-02, -3.2420e-01, -2.9795e-01,\n -6.4189e-01, -1.4120e+00, -5.3684e-01, -3.4066e+00, -1.9753e+00,\n -1.4178e+00, -2.0399e-01, -2.3173e-01, -1.2792e+00, -7.2990e-01,\n -1.9872e-01, -2.9378e-03, -3.5890e-01, -5.6643e-01, -1.8003e-01,\n -1.5818e+00, -5.2227e-01, -2.1862e+00, -1.8743e+00, -1.4200e+00,\n -3.1988e-01, -3.5513e-01, -1.5905e+00, -4.2916e-01, -2.5556e-01,\n -8.2807e-01, -6.5568e-01, -4.8475e-01, -2.1049e-01, -2.0104e-02,\n -2.1655e+00, -1.1496e+00, -3.6168e-01, -8.9624e-02, -6.7098e-02,\n -6.0623e-02, -5.1165e-01, -2.7302e+00, -6.0514e-01, -1.6756e+00,\n -3.3807e+00, -5.7368e-02, -1.2763e-01, -6.6959e+00, -5.2157e-01,\n -8.7762e-01, -8.7295e-01, -1.3052e+00, -3.6777e-01, -1.5904e+00,\n -3.8083e-01, -2.8388e-01, -1.5323e-01, -3.7549e-01, -5.2722e+00,\n -1.7393e+00, -2.8814e-01, -5.0310e-01, -2.2077e+00, -1.5507e+00,\n -6.8569e-01, -1.4620e+00, -9.2639e-02, -1.4160e-01, -3.6734e-01,\n -1.0053e+00, -6.7353e-01, -2.2676e+00, -6.0812e-01, -1.0005e+00,\n -4.2908e-01, -5.1369e-01, -2.2579e-02, -1.8496e-01, -3.4798e-01,\n -7.3089e-01, -1.1962e+00, -1.6095e+00, -1.7558e-01, -3.3166e-01,\n -1.1445e+00, -2.4674e+00, -5.0600e-01, -2.0727e+00, 
-5.4371e-01,\n -8.0499e-01, -3.0521e+00, -3.6835e-02, -2.0485e-01, -4.6747e-01,\n -3.6399e-01, -2.6883e+00, -1.9348e-01, -3.1448e-01, -1.6332e-01,\n -3.2233e-02, -2.3336e-01, -2.6564e+00, -1.2841e+00, -1.3561e+00,\n -7.4717e-01, -2.7926e-01, -8.7849e-01, -3.3715e-02, -1.4933e-01,\n -2.7738e-01, -1.6899e+00, -1.5758e+00, -3.2608e-01, -6.5770e-01,\n -1.7136e+00, -5.8316e+00, -1.1988e+00, -8.3828e-01, -1.8033e+00,\n -2.3017e-01, -8.9936e-01, -1.1917e-01, -1.6659e-01, -2.7669e-01,\n -1.2955e+00, -1.2076e+00, -2.2793e-01, -1.0528e+00, -1.4894e+00,\n -5.7428e-01, -7.3208e-01, -9.5673e-01, -1.6617e+00, -3.9169e+00,\n -1.2182e-01, -3.8092e-01, -1.1924e+00, -2.4566e+00, -2.7350e+00,\n -2.8332e+00, -9.1506e-01, -6.7432e-02, -7.8965e-01, -2.0727e-01,\n -3.4615e-02, -2.8868e+00, -2.1218e+00, -1.2368e-03, -9.0038e-01,\n -5.3746e-01, -5.4080e-01, -3.1625e-01, -1.1786e+00, -2.2797e-01,\n -1.1498e+00, -1.3978e+00, -1.9515e+00, -1.1614e+00, -5.1456e-03,\n -1.9316e-01, -1.3849e+00, -9.2799e-01, -1.1649e-01, -2.3837e-01]\n\n\ndef plotting_ld(ld, true=ld_true):\n fig, ax = plt.subplots(figsize=(7, 7))\n ax.plot([-6, 1], [-6, 1], label=\"Ground Truth\")\n ax.scatter(true, ld, marker=\"x\",\n label=\"Your implementation\")\n ax.set_xlabel(\"Loss from oracle implementation\")\n ax.set_ylabel(\"Loss from your implementation\")\n ax.legend()\n ax.set_title(\"Discriminator Loss\")\n\n\nlg_true = [-7.0066e-01, -2.6368e-01, -2.4250e+00, -2.0247e+00, -1.1795e+00,\n -4.5558e-01, -7.1316e-01, -1.0932e-01, -7.8608e-01, -4.5838e-01,\n -1.0530e+00, -9.1201e-01, -3.8020e+00, -1.7787e+00, -1.2246e+00,\n -6.5677e-01, -3.6001e-01, -2.2313e-01, -1.8262e+00, -1.2649e+00,\n -3.8330e-01, -8.8619e-02, -9.2357e-01, -1.3450e-01, -8.6891e-01,\n -5.9257e-01, -4.8415e-02, -3.3197e+00, -1.6862e+00, -9.8506e-01,\n -1.1871e+00, -7.0422e-02, -1.7378e+00, -1.3099e+00, -1.8926e+00,\n -3.4508e+00, -1.5696e+00, -7.2787e-02, -3.2420e-01, -2.9795e-01,\n -6.4189e-01, -1.4120e+00, -5.3684e-01, -3.4066e+00, -1.9753e+00,\n -1.4178e+00, -2.0399e-01, -2.3173e-01, -1.2792e+00, -7.2990e-01,\n -1.9872e-01, -2.9378e-03, -3.5890e-01, -5.6643e-01, -1.8003e-01,\n -1.5818e+00, -5.2227e-01, -2.1862e+00, -1.8743e+00, -1.4200e+00,\n -3.1988e-01, -3.5513e-01, -1.5905e+00, -4.2916e-01, -2.5556e-01,\n -8.2807e-01, -6.5568e-01, -4.8475e-01, -2.1049e-01, -2.0104e-02,\n -2.1655e+00, -1.1496e+00, -3.6168e-01, -8.9624e-02, -6.7098e-02,\n -6.0623e-02, -5.1165e-01, -2.7302e+00, -6.0514e-01, -1.6756e+00,\n -3.3807e+00, -5.7368e-02, -1.2763e-01, -6.6959e+00, -5.2157e-01,\n -8.7762e-01, -8.7295e-01, -1.3052e+00, -3.6777e-01, -1.5904e+00,\n -3.8083e-01, -2.8388e-01, -1.5323e-01, -3.7549e-01, -5.2722e+00,\n -1.7393e+00, -2.8814e-01, -5.0310e-01, -2.2077e+00, -1.5507e+00]\n\n\ndef plotting_lg(lg, true=lg_true):\n fig, ax = plt.subplots(figsize=(7, 7))\n ax.plot([-6, 1], [-6, 1], label=\"Ground Truth\")\n ax.scatter(true, lg, marker=\"x\",\n label=\"Your implementation\")\n ax.set_xlabel(\"Loss from oracle implementation\")\n ax.set_ylabel(\"Loss from your implementation\")\n ax.legend()\n ax.set_title(\"Generator loss\")\n\n\ndef plotting_ratio_impl(ax, x_real, x_fake, ratio, yscale=\"linear\"):\n dist_p = torch.distributions.normal.Normal(loc=0, scale=1)\n dist_q = torch.distributions.normal.Normal(loc=-2, scale=1)\n x = torch.linspace(-3, 5, 100)\n prob_p = torch.exp(dist_p.log_prob(x))\n prob_q = torch.exp(dist_q.log_prob(x))\n trueRatio = prob_p / prob_q\n ax.plot(x, trueRatio, label=\"True tatio\")\n\n x = torch.cat([x_real, x_fake])\n ax.scatter(x[:,0][::10], 
ratio[:,0][::10], marker=\"x\",\n label=\"Ratio from discriminator\")\n ax.hist(x_real[:,0], density=True, bins=50, histtype=\"step\", label=\"Real\")\n ax.hist(x_fake[:,0], density=True, bins=50, histtype=\"step\", label=\"Fake\")\n ax.set_yscale(yscale)\n title = \"Densities and the ratio from discriminator\"\n if yscale == \"log\":\n title += \" in log scale\"\n ax.set_title(title)\n ax.legend()\n\n\ndef plotting_ratio(x_real, x_fake, ratio):\n fig, axes = plt.subplots(1, 2, figsize=(2 * 7, 7))\n plotting_ratio_impl(axes[0], x_real, x_fake, ratio, yscale=\"linear\")\n plotting_ratio_impl(axes[1], x_real, x_fake, ratio, yscale=\"log\")\n\n\nclass Interactive:\n def display_widgets(self):\n for widget in self.widgets:\n display(widget)\n\n def __init__(self, widgets, handler):\n def handler_with_extra_steps(b):\n clear_output(wait=True)\n handler(*map(lambda w: w.value, widgets))\n self.display_widgets()\n self.widgets = widgets\n for widget in self.widgets:\n widget.observe(handler_with_extra_steps,\n names=['value'])\n handler(*map(lambda w: w.value, widgets))\n self.display_widgets()\n\n\n# Using Interactive\n# All widgets: https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20List.html\n\n\ndef make_plot(xmax, title, ftype):\n fig, ax = plt.subplots()\n x = np.linspace(-2, xmax, 100)\n if ftype == \"sin\":\n y = np.sin(x)\n if ftype == \"cos\":\n y = np.cos(x)\n if ftype == \"tanh\":\n y = np.tanh(x)\n ax.scatter(x, y)\n ax.set_xlim(-2.1, 2.1)\n ax.set_ylim(-2, 2)\n if title:\n ax.set_title(f\"Range from -1 to {xmax}\")\n return fig\n```\n\n\n```python\n# @title Set random seed\n\n# @markdown Executing `set_seed(seed=seed)` you are setting the seed\n\n# for DL its critical to set the random seed so that students can have a\n# baseline to compare their results to expected results.\n# Read more here: https://pytorch.org/docs/stable/notes/randomness.html\n\n# Call `set_seed` function in the exercises to ensure reproducibility.\nimport random\nimport torch\n\ndef set_seed(seed=None, seed_torch=True):\n if seed is None:\n seed = np.random.choice(2 ** 32)\n random.seed(seed)\n np.random.seed(seed)\n if seed_torch:\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n torch.cuda.manual_seed(seed)\n torch.backends.cudnn.benchmark = False\n torch.backends.cudnn.deterministic = True\n\n print(f'Random seed {seed} has been set.')\n\n\n# In case that `DataLoader` is used\ndef seed_worker(worker_id):\n worker_seed = torch.initial_seed() % 2**32\n np.random.seed(worker_seed)\n random.seed(worker_seed)\n```\n\n\n```python\n# @title Set device (GPU or CPU). 
Execute `set_device()`\n# especially if torch modules used.\n\n# inform the user if the notebook uses GPU or CPU.\n\ndef set_device():\n device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n if device != \"cuda\":\n print(\"WARNING: For this notebook to perform best, \"\n \"if possible, in the menu under `Runtime` -> \"\n \"`Change runtime type.` select `GPU` \")\n else:\n print(\"GPU is enabled in this notebook.\")\n\n return device\n```\n\n\n```python\nSEED = 2021\nset_seed(seed=SEED)\nDEVICE = set_device()\n```\n\n Random seed 2021 has been set.\n GPU is enabled in this notebook.\n\n\n---\n# Section 1: How to train GANs\n\n*Time estimate: ~15mins*\n\n\n```python\n# @title Video 1: Generative Adversarial Networks\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = \"https://player.bilibili.com/player.html?bvid={0}&page={1}\".format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=f\"BV1o64y1i7xA\", width=854, height=480, fs=1)\n print(\"Video available at https://www.bilibili.com/video/{0}\".format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=f\"FmUbll93kms\", width=854, height=480, fs=1, rel=0)\n print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\n# add event to airtable\natform.add_event('Video 1: Generative Adversarial Networks')\n\ndisplay(out)\n```\n\n\n Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})\n\n\nGANs consist two networks: A critic or discriminator (`disc`) and a generator (`gen`) that are trained by alternating between the following two steps:\n- In step 1, we update the parameters (`disc.params`) of the discriminator by backpropagating through the discriminator loss (BCE loss) `disc.loss`.\n- In step 2, we update the parameters (`gen.params`) of the generator by backpropagating through the generator loss, `gen.loss` (-1 * BCE loss).\n\nWe will now implement a simple GAN training loop!\n\n## Coding Exercise 1: The GAN training loop\n\nTo get you started we have implemented a simple GAN in pseudocode. All you have to do is to implement the training loop.\n\n__Your goal__ is to arrange the functions given below in the correct order in the `train_gan_iter` function\n- `disc.loss(x_real, x_fake)`: Discriminator loss\n- `disc.classify(x)`: Classify `x` as real or fake\n- `gen.loss(x_fake, disc_fn)`: Generator loss\n- `disc_fn(x)` is a function to check `x` is real or fake.\n- `gen.sample(num_samples)`: Generate samples from the generator\n- `backprop(loss, model)`: Compute gradient of `loss` wrt `model`\n- `model` is either `disc` or `gen`\n\nWe have already taken care of most of these functions. 
So you only have to figure out the placement of `disc.loss` and `gen.loss` functions.\n\n__We highly recommend studying `train_gan_iter` function to understand how the GAN training loop is structured.__ \n\n\n```python\n# @markdown *Execute this cell to enable helper functions*\n\ndef get_data():\n return \"get_data\"\n\n\nclass Disc:\n\n def loss(self, x_real, x_fake):\n assert x_real == \"get_data\" and x_fake == \"gen.sample\",\"Inputs to disc.loss is wrong\"\n\n def classify(self, x):\n return \"disc.classify\"\n\n\nclass Gen:\n\n def loss(self, x_fake, disc_fn):\n assert x_fake == \"gen.sample\" and disc_fn(None) == \"disc.classify\", \"Inputs to gen.loss is wrong\"\n\n def sample(self, num_samples):\n return \"gen.sample\"\n\n\ndef backprop(loss, model):\n pass\n\n\ndef update(model, grad):\n pass\n```\n\n\n```python\ndef train_gan_iter(data, disc, gen):\n \"\"\"Update the discriminator (`disc`) and the generator (`gen`) using `data`\n\n Args:\n data (ndarray): An array of shape (N,) that contains the data\n disc (Disc): The discriminator\n gen (Gen): The generator\n\n Returns:\n \"\"\"\n #################################################\n # Intructions for students: #\n # Fill out ... in the function and remove below #\n #################################################\n\n # Number of samples in the data batch\n num_samples = 200\n\n # The data is the real samples\n x_real = data\n\n ## Discriminator training\n\n # Ask the generator nicely to generate some fake samples\n x_fake = gen.sample(num_samples)\n\n #################################################\n ## TODO for students: details of what they should do ##\n # Fill out function and remove\n # raise NotImplementedError(\"Student exercise: Write code to compute disc_loss\")\n #################################################\n # Compute the discriminator loss\n disc_loss = disc.loss(x_real, x_fake)\n\n # Compute the gradient for discriminator\n disc_grad = backprop(disc_loss, disc)\n\n # Update the discriminator\n update(disc, disc_grad)\n\n ## Generator training\n\n # Ask the generator to generate some fake samples\n x_fake = gen.sample(num_samples)\n\n #################################################\n ## TODO for students: details of what they should do ##\n # Fill out function and remove\n # raise NotImplementedError(\"Student exercise: Write code to compute gen_loss\")\n #################################################\n # Compute the generator loss\n gen_loss = gen.loss(x_fake, disc.classify)\n\n # Compute the gradient for generator\n gen_grad = backprop(gen_loss, gen)\n\n # Update the generator\n update(gen, gen_grad)\n\n print(\"Your implementation passes the check!\")\n\n return None\n\n\n# add event to airtable\natform.add_event('Coding Exercise 1: The GAN training loop')\n\ndata = get_data()\ndisc = Disc()\ngen = Gen()\n# Uncomment below to check your function\ntrain_gan_iter(data, disc, gen)\n```\n\n Your implementation passes the check!\n\n\n\n```python\n# to_remove solution\ndef train_gan_iter(data, disc, gen):\n \"\"\"Update the discriminator (`disc`) and the generator (`gen`) using `data`\n\n Args:\n data (ndarray): An array of shape (N,) that contains the data\n disc (Disc): The discriminator\n gen (Gen): The generator\n\n Returns:\n \"\"\"\n\n # Number of samples in the data batch\n num_samples = 200\n\n # The data is the real samples\n x_real = data\n\n ## Discriminator training\n\n # Ask the generator to generate some fake samples\n x_fake = gen.sample(num_samples)\n\n # Compute the discriminator loss\n 
disc_loss = disc.loss(x_real, x_fake)\n\n # Compute the gradient for discriminator\n disc_grad = backprop(disc_loss, disc)\n\n # Update the discriminator\n update(disc, disc_grad)\n\n ## Generator training\n\n # Ask the generator to generate some fake samples\n x_fake = gen.sample(num_samples)\n\n # Compute the generator loss\n gen_loss = gen.loss(x_fake, disc.classify)\n\n # Compute the gradient for generator\n gen_grad = backprop(gen_loss, gen)\n\n # Update the generator\n update(gen, gen_grad)\n\n print(\"Your implementation passes the check!\")\n\n return None\n\n\n# add event to airtable\natform.add_event('Coding Exercise 1: The GAN training loop')\n\ndata = get_data()\ndisc = Disc()\ngen = Gen()\n## Uncomment below to check your function\ntrain_gan_iter(data, disc, gen)\n```\n\n---\n# Section 2: GAN Training Objective\n\n*Time estimate: ~20mins*\n\nThe training objective of GANs consists of the losses for generators and discriminators respectively. In this section we will be implementing these objectives.\n\n\n```python\n# @title Video 2: Principles of GANs\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = \"https://player.bilibili.com/player.html?bvid={0}&page={1}\".format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=f\"BV1bo4y1U7YT\", width=854, height=480, fs=1)\n print(\"Video available at https://www.bilibili.com/video/{0}\".format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=f\"U_4z5-hX1Kg\", width=854, height=480, fs=1, rel=0)\n print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\n# add event to airtable\natform.add_event('Video 2: Principles of GANs')\n\ndisplay(out)\n```\n\n\n Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})\n\n\n## Section 2.1: Discriminator Loss\n\nThe critic or the discriminator in a vanilla GAN is trained as a binary classifier using the BCE criteria. In this section, we will implement the training objective for the discriminator. \n\n\\begin{equation}\n\\text{BCE}_\\omega = \\mathbb{E}_{x \\sim p}[\\log(\\sigma(D_\\omega(x)))] + \\mathbb{E}_{x \\sim q}[\\log(1 - \\sigma(D_\\omega(x)))]\n\\end{equation}\n\nHere, $p$ is the data distribution and $q$ is the generator distribution. $D_\\omega$ is the logit, which represents $\\log \\frac{p}{q}$. 
$\\sigma$ is the sigmoid function and therfore, $\\sigma(D_\\omega)$ represents $\\frac{p}{p+q}$.\n\n### Coding Exercise 2.1: Implement Discriminator Loss\n\nTo get you started we have implemented a simple GAN in pseudocode and partially implemented the discriminator training objective.\n\n**Your goal** is to complete the missing part in the training objective of the discriminator in the function `loss_disc`.\n\n`loss_disc` also allows you evaluate the loss function on some random samples.\nIf your implementation is correct, you will see a plot where the loss values from your implementation will match the ground truth loss values.\n\nIn practice, given $N$ samples, we estimate BCE as\n\n\\begin{equation}\n\\text{BCE}_\\omega = -\\frac{1}{N} \\sum_{i=1}^N y_i \\log(\\sigma(D_\\omega(x_i)) + (1-y_i) \\log(1-\\sigma(D_\\omega(x_i))).\n\\end{equation}\n\nHere, $y$ is the label. $y=1$ when $x \\sim p$ (real data) and $y=0$ when $x \\sim q$ (i.e., fake data).\n\nPlease note, `disc.classify` = $\\sigma(D_\\omega)$ in `loss_disc`.\n\n\n```python\n# @markdown *Execute this cell to enable helper functions*\n\ndef get_data(num_samples=100, seed=0):\n set_seed(seed)\n return torch.randn([num_samples, 1])\n\n\nclass DummyGen:\n def sample(self, num_samples=100, seed=1):\n set_seed(seed)\n return torch.randn([num_samples, 1]) + 2\n\n\nclass DummyDisc:\n def classify(self, x, seed=0):\n set_seed(seed)\n return torch.rand([x.shape[0], ])\n```\n\n\n```python\ndef loss_disc(disc, x_real, x_fake):\n \"\"\"Compute the discriminator loss for `x_real` and `x_fake` given `disc`\n\n Args:\n disc: The discriminator\n x_real (ndarray): An array of shape (N,) that contains the real samples\n x_fake (ndarray): An array of shape (N,) that contains the fake samples\n\n Returns:\n ndarray: The discriminator loss\n \"\"\"\n\n label_real = 1\n #################################################\n # TODO for students: Loss for real data\n # raise NotImplementedError(\"Student exercise: Implement loss for real samples\")\n #################################################\n loss_real = label_real * torch.log(disc.classify(x_real))\n\n label_fake = 0\n #################################################\n # TODO for students: Loss for fake data\n #raise NotImplementedError(\"Student exercise: Implement loss for fake samples\")\n #################################################\n loss_fake = (1 - label_fake) * torch.log(1 - disc.classify(x_fake))\n\n\n return torch.cat([loss_real, loss_fake])\n\n\n# add event to airtable\natform.add_event('Coding Exercise 3.1: Implement Discriminator Loss')\n\ndisc = DummyDisc()\ngen = DummyGen()\n\nx_real = get_data()\nx_fake = gen.sample()\n\n# Uncomment to check your function\nld = loss_disc(disc, x_real, x_fake)\nplotting_ld(ld)\n```\n\n\n```python\n# to_remove solution\ndef loss_disc(disc, x_real, x_fake):\n \"\"\"Compute the discriminator loss for `x_real` and `x_fake` given `disc`\n\n Args:\n disc: The discriminator\n x_real (ndarray): An array of shape (N,) that contains the real samples\n x_fake (ndarray): An array of shape (N,) that contains the fake samples\n\n Returns:\n ndarray: The discriminator loss\n \"\"\"\n\n # Loss for real data\n label_real = 1\n loss_real = label_real * torch.log(disc.classify(x_real))\n\n # Loss for fake data\n label_fake = 0\n loss_fake = (1 - label_fake) * torch.log(1 - disc.classify(x_fake))\n\n return torch.cat([loss_real, loss_fake])\n\n\n# add event to airtable\natform.add_event('Coding Exercise 3.1: Implement Discriminator Loss')\n\ndisc = 
DummyDisc()\ngen = DummyGen()\n\nx_real = get_data()\nx_fake = gen.sample()\n\n## Uncomment to check your function\nld = loss_disc(disc, x_real, x_fake)\nwith plt.xkcd():\n plotting_ld(ld)\n```\n\n**A note on numerical stability**\n\nIt is common that functions like $\\log$ throw a numerical error.\nFor $\\log$, it happens when $x$ in $\\log(x)$ is very close to 0.\nThe most common practice is to always add some very small value $\\epsilon$ to $x$, i.e. use $\\log(x + \\epsilon)$ instead.\nMost build-in functions in modern DL frameworks like TensorFlow or PyTorch handle such things in their build-in loss already, e.g., `torch.nn.BCE`, which is equivalent to the loss you implemented above.\n\n## Section 2.3: The generator loss\n\nNow that we have a trained critic, lets see how to train the generator using it.\n\n### Coding Exercise 2.3: The generator loss\n\nWe will now implement the generator loss function and evaluate it on some fixed points.\n\n**Your goal** is to complete the implementation of the function `loss_gen` using the optimal critic from above.\n\nUpon correct implementation, you shall see a plot where the loss values from generator samples align with the \"Correct\" values.\n\n**HINT:** You simply need to change the labels. \n\n\n```python\ndef loss_gen(disc, x_fake):\n \"\"\"Compute the generator loss for `x_fake` given `disc`\n\n Args:\n disc: The generator\n x_fake (ndarray): An array of shape (N,) that contains the fake samples\n\n Returns:\n ndarray: The generator loss\n \"\"\"\n\n #################################################\n # TODO for students: Loss for fake data\n # raise NotImplementedError(\"Student exercise: Implement loss for fake data\")\n #################################################\n label_fake = 1\n loss_fake = label_fake * torch.log(disc.classify(x_fake))\n\n return loss_fake\n\n\n# add event to airtable\natform.add_event('Coding Exercise 2.3: The generator loss')\n\ndisc = DummyDisc()\ngen = DummyGen()\n\nx_fake = gen.sample()\n# Uncomment below to check your function\nlg = loss_gen(disc, x_fake)\nplotting_lg(lg)\n```\n\n\n```python\n# to_remove solution\ndef loss_gen(disc, x_fake):\n \"\"\"Compute the generator loss for `x_fake` given `disc`\n\n Args:\n disc: The generator\n x_fake (ndarray): An array of shape (N,) that contains the fake samples\n\n Returns:\n ndarray: The generator loss\n \"\"\"\n\n # Loss for fake data\n label_fake = 1\n loss_fake = label_fake * torch.log(disc.classify(x_fake))\n\n return loss_fake\n\n\n# add event to airtable\natform.add_event('Coding Exercise 2.3: The generator loss')\n\ndisc = DummyDisc()\ngen = DummyGen()\n\nx_fake = gen.sample()\n## Uncomment below to check your function\nlg = loss_gen(disc, x_fake)\nwith plt.xkcd():\n plotting_lg(lg)\n```\n\n**Did you notice?**\n\nThe loss you implemented for generator is essentially the part for real data in `loss_disc`, i.e., it is saying, \"*the data I am feeding to you is real and not fake*\".\n\n---\n# Section 3: The difficulty of GAN training.\n\n*Time estimate: ~10mins*\n\nIn this section we will develop an intuition for the training dynamics of GANs.\n\n## Interactive Demo 2: Failure modes of GAN training\n\nGAN training is notoriously difficult because \nit is very sensitive to hyper-parameters such as learning rate and model architecture. To help you develop a sense of this here is a very simple GAN training demo that we have borrowed from [Andrej Karpathy's website](https://cs.stanford.edu/people/karpathy/gan/). 
\n\nThe generator $G$, pictured in red, takes inputs sampled from a uniform distribution, $z$. It attempts to transform these to match a data distribution, shown below in blue. Meanwhile, the discriminator $D$ attempts to determine whether a sample is from the data distribution or the generating distribution. In the demo, the green curve represents the output of the discriminator. Its value is high where the discriminator is more confident that a sample with that value is drawn from the data distribution.\n\nEven though the GAN in this demo is very simple and operates in either 1D or 2D spaces, it is however very sensitive to the learning rate. Try it for yourself!\n\n\n```python\n# @title Video 3: GAN generator Learning Idea\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = \"https://player.bilibili.com/player.html?bvid={0}&page={1}\".format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=f\"BV1aL411J7SE\", width=854, height=480, fs=1)\n print(\"Video available at https://www.bilibili.com/video/{0}\".format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=f\"Iqqz2_USUGs\", width=854, height=480, fs=1, rel=0)\n print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\n# add event to airtable\natform.add_event('Video 3: GAN generator Learning Idea')\n\ndisplay(out)\n```\n\n\n Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})\n\n\n\n```python\n# @title GAN training demo\n# @markdown Make sure you execute this cell to enable the widget!\n\nfrom IPython.display import IFrame\nIFrame(src='https://xukai92.github.io/gan_demo/index.html', width=900, height=600)\n```\n\n\n\n\n\n\n\n\n\n\n## Think! 2: What makes GANs hard to train?\n\nYou have played with the demo and it's time to think about a few questions\n\n1. Which target is more stable to train, 1D or 2D?\n2. If you keep increasing the learning rate, what happens? Does it happen in both the cases, i.e., 1D/2D targets?\n3. Can you think of some drawbacks of using small learning rates?\n\n\n```python\n# @title Student Response\nfrom ipywidgets import widgets\n\n\ntext=widgets.Textarea(\n value='Type your answer here and click on `Submit!`',\n placeholder='Type something',\n description='',\n disabled=False\n)\n\nbutton = widgets.Button(description=\"Submit!\")\n\ndisplay(text,button)\n\ndef on_button_clicked(b):\n atform.add_answer('q1', text.value)\n print(\"Submission successful!\")\n\n\nbutton.on_click(on_button_clicked)\n```\n\n\n```python\n# to_remove explanation\n\n\"\"\"\n 1. 2D as it is simply a more difficult distribution to model compared to the\n unimodal 1D example.\n 2. The training becomes unstable and eventually diverges. Yes.\n 3. Training is too slow and takes longer to convege.\n\nIn general, when training a GAN we need to ensure a certain balance between\nthe critic and generator training. 
As such, tuninig the learning rate\nis crucial for succesfully training GANs.\n\"\"\"\n```\n\n\n```python\n# @title Video 4: GAN Failure Models\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = \"https://player.bilibili.com/player.html?bvid={0}&page={1}\".format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=f\"BV17M4y1L7w9\", width=854, height=480, fs=1)\n print(\"Video available at https://www.bilibili.com/video/{0}\".format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=f\"fmU2UM_QzLo\", width=854, height=480, fs=1, rel=0)\n print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\n# add event to airtable\natform.add_event('Video 4: GAN Failure Models')\n\ndisplay(out)\n```\n\n\n Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})\n\n\n---\n# Section 4: GAN training in action!\n\n*Time estimate: ~4mins*\n\nIn this section we will be playing with a complete implementation of GAN.\n\n## Interactive Demo 4: GAN training in action\n\n\n\n```python\n# @title GanLab\nfrom IPython.display import HTML\nHTML('')\n```\n\n\n\n\n\n\n\n\n---\n# Summary\n\nThrough this tutorial, we have learned\n\n- How to implement the training loop of GANs.\n- Developed an intuition about the training dynamics of GANs.\n- How to implement the training objectives for the generator and discriminator of GANs.\n- How are GANs connected to density ratio estimation.\n\nNext tutorial will cover conditional GANs and ethical issues of DL.\n\n\n```python\n# @title Airtable Submission Link\nfrom IPython import display as IPydisplay\nIPydisplay.HTML(\n f\"\"\"\n
\n \n \n
\"\"\" )\n```\n\n\n\n\n\n
\n \n \n
\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "885ef2ac08cdc0518894da0dc2397bbade1c86f2", "size": 312444, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorials/W2D5_GenerativeModels/W2D5_Tutorial2.ipynb", "max_stars_repo_name": "fabxy/course-content-dl", "max_stars_repo_head_hexsha": "d2b4bf8c6d97215184d063c4dd444a99d2767ec9", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-30T08:42:05.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-30T08:42:05.000Z", "max_issues_repo_path": "tutorials/W2D5_GenerativeModels/W2D5_Tutorial2.ipynb", "max_issues_repo_name": "fabxy/course-content-dl", "max_issues_repo_head_hexsha": "d2b4bf8c6d97215184d063c4dd444a99d2767ec9", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tutorials/W2D5_GenerativeModels/W2D5_Tutorial2.ipynb", "max_forks_repo_name": "fabxy/course-content-dl", "max_forks_repo_head_hexsha": "d2b4bf8c6d97215184d063c4dd444a99d2767ec9", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 113.1633466135, "max_line_length": 77162, "alphanum_fraction": 0.8098443241, "converted": true, "num_tokens": 11143, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5467381519846138, "lm_q2_score": 0.48828339529583464, "lm_q1q2_score": 0.2669631611888173}} {"text": "\n\n\n\n\n```python\nfrom IPython.display import HTML\nhide_me = ''\nHTML('''\nTo toggle on/off the raw code, click here.''')\n```\n\n\n```python\nhide_me\n\nimport ipywidgets as widgets\nfrom IPython.display import display, Math, Latex, HTML, IFrame\nfrom ipywidgets import IntSlider, Label\nfrom ipywidgets import interact, interactive\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.animation import FuncAnimation\nimport pandas as pd\n```\n\nGrade 11\n# Uniform Motion and Uniformly Accelerated Motion\n\n> \n>\n
https://giphy.com/gifs/usain-bolt-sgjJkkJutglMY
\n\n## Introduction\n\nEverything in the universe is constantly moving. Objects can be moving incredibly slow, so slow that they appear to be at rest, or so incredibly fast that you may not even see it. Even if you're standing still on Earth, you're moving and incomprehensible speeds. On Earth, you are moving around the Sun at approximately 108,000 km/h, and the Sun is orbiting galactic center at approximately 720,000 km/h. But that's not it: our galaxy the Milky Way is moving at approximately 2,268,000 km/h. As motion is so universal, understanding motion is an important topic of physics. Motion is defined by the change of position of an object with respect to other surrounding objects. For example a car is moving with respect to trees on the roadside. Motion can be described using three important quantities: velocity, speed and acceleration. In this notebook, we will familiarize ourselves with two types of motion: uniform motion, and uniformly accelerated motion.\n\n## Concepts of Uniform Motion\n\nMotion is described by three variables: distance ($d$), velocity ($\\vec{v}$), and acceleration ($\\vec{a}$). Let's define and explore these quantities below.\n\n### Distance Vs. Displacement\nTo begin, let us outline the difference between distance and displacement. \n> Distance describes the length of the actual path traveled to travel from one point to another.\n\n> Displacement is identical to distance as it describes the amount of space between two points. However, displacement is a vector quantity which means it also specifies the _direction_ of travel, as well as the amount of space between two points\u200b.\n\nBelow is a video which demonstrates the difference between distance and displacement:\n\n\n```python\nhide_me\n\nfrom IPython.display import HTML\n# Youtube\nHTML('')\n```\n\n**Practise**\n\nCalculate the distance and displacement based on the image below. Imagine you start at point $\\textrm{A}$ and you move around the field in the following order $\\textrm{A} \\rightarrow \\textrm{B} \\rightarrow \\textrm{C} \\rightarrow \\textrm{D} \\rightarrow \\textrm{E} \\rightarrow \\textrm{A}$:\n\n\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"Distance: 2m and Displacement: 2m East\":\n display(Latex(\"Correct!\"))\n display(Latex(\"Displacement contains both measurement and direction\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(Latex(\"Calculate the distance and displacement at point B?\"))\n\na1 = 'Distance: 2m and Displacement: 2m'\na2 = \"Distance: 2m and Displacement: 2m East\"\na3 = \"Distance: 2m North and Displacement: 2m\"\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n```python\nhide_me\n\ndef q_2(val):\n if val == \"Distance: 9m North and Displacement: 2.2m NW\":\n display(Latex(\"Correct!\"))\n display(Latex(\"Distance: Actual path covered and Displacement: Shortest path covered with direction\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(Latex(\"Calculate the distance and displacement at point E?\"))\n\na1 = 'Distance: 9m and Displacement: 7m'\na2 = \"Distance: 3m and Displacement: 2.2m NW\"\na3 = \"Distance: 9m North and Displacement: 2.2m NW\"\ninteract(q_2, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n### Speed Vs. 
Velocity\n\nAnalogous to distance and displacement are speed and velocity, however now these quantities also imply that the object's distance/displacement is _changing_. Let's take a look at how speed and velocity are defined.\n\nSpeed is the rate of change of distance over time.\n\n\\begin{equation}\n\\textrm{speed } = \\frac{\\textrm{change} \\ \\textrm{of} \\ \\textrm{distance (m)}}{\\textrm{time (s)}}\n\\end{equation}\n\nVelocity is the rate of change of displacement over time.\n\n\\begin{equation}\n\\textrm{velocity, } \\vec{v} = \\frac{\\textrm{change} \\textrm{ of} \\textrm{ displacement (m)}}{\\textrm{time (s)}} \\\\\n\\textrm{m} = \\textrm{meter } \\textrm{and} \\textrm{ s} = \\textrm{second}\n\\end{equation}\n\n**Practise**\n\nNow, let's repeat a similar question as we did with displacement and distance, but now include velocity. The required time of one point to another point is given below:\n\n* $\\textrm{A} \\rightarrow \\textrm{C}: 4 \\textrm{ sec}$\n* $\\textrm{A} \\rightarrow \\textrm{D}: 10 \\textrm{ sec}$\n* $\\textrm{A} \\rightarrow \\textrm{E}: 16 \\textrm{ sec}$\n* $\\textrm{A} \\rightarrow \\textrm{A}: 20 \\textrm{ sec}$\n\n\n\n\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"Speed: 0.75 m/s and Velocity: 0.55 m/s NE\":\n display(Latex(\"Correct!\"))\n display(Latex(\"Velocity contains both measurement and direction as it relates to displacement\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(Latex(\"Calculate the speed and velocity at point C?\"))\n\na1 = 'Speed: 0.75 m/s and Velocity: 0.55 m/s NE'\na2 = \"Speed: 0.55 m/s and Velocity: 0.75 m/s NE\"\na3 = \"Speed: 1 m/s and Velocity: 0.55 m/s NE\"\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n```python\nhide_me\n\ndef q_2(val):\n if val == \"Speed: 0.60 m/s and Velocity: 0.20 m/s South\":\n display(Latex(\"Correct!\"))\n display(Latex(\"Speed and Velocity are not always equivalent\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(Latex(\"Calculate the speed and velocity at point D?\"))\n\na1 = 'Speed: 0.80 m/s and Velocity: 0.60 m/s South'\na2 = \"Speed: 0.60 m/s and Velocity: 0.20 m/s South\"\na3 = \"Speed: 0.60 m/s and Velocity: 0.60 m/s South\"\ninteract(q_2, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n### Acceleration\n\nVelocity can also change with respect to time. A change in velocity requires _acceleration_, which is defined as rate of change of velocity over time. As acceleration depends on the change in the vector quantity velocity, acceleration is also a vector. Below is an example of acceleration.\n\n\n
https://giphy.com/gifs/cell-concept-acceleration-139Qnnkbg2pefe
\n\n Acceleration is the rate of change of velocity over time. Acceleration is a vector quantity as it depends on velocity. \n\n\\begin{equation}\n\\textrm{acceleration, } \\vec{a} = \\frac{\\textrm{change} \\textrm{ of} \\textrm{ velocity } (\\frac{\\text{m}}{\\text{s}}) } {\\textrm{time (s)}} \n\\end{equation}\n\n [Here is an interactive animation demonstrating acceleration further.](https://faraday.physics.utoronto.ca/PVB/Harrison/Flash/ClassMechanics/MotionDiagram/MotionDiagram.html)\n\n**Practice**\n\nLet's go back to our field example and think about where we may see acceleration. Once again we are traveling from $\\textrm{A} \\rightarrow \\textrm{B} \\rightarrow \\textrm{C} \\rightarrow \\textrm{D} \\rightarrow \\textrm{E} \\rightarrow \\textrm{A}$\u200b: \n\n\n\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"Acceleration: 0 m/s\\u00b2 East\":\n display(Latex(\"Correct!\"))\n display(Latex(\"No change in velocity and direction is also straight line\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(Latex(\"Calculate the acceleration at point B?\"))\n\na1 = 'Acceleration: 0 m/s\\u00b2 East'\na2 = \"Acceleration: 1 m/s\\u00b2 East\"\na3 = \"Acceleration: 2 m/s\\u00b2 East\"\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"Yes\":\n display(Latex(\"Correct!\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(Latex(\"Does a change in direction imply that there was acceleration?\"))\n\na1 = 'Yes'\na2 = \"No\"\n\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2 ],value = ' ',description = 'Choose One:',disabled = False));\n\ndef q_2(val):\n if val == 'To change direction is to change your velocity, and this requires acceleration':\n display(Latex(\"Correct!\"))\n display(Latex(\"Velocity is a vector quantity. To change your direction requires acceleration in another direction.\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(Latex(\"Why?\"))\n\na1 = 'To change direction is to change your velocity, and this requires acceleration'\na2 = 'When you change direction you gain speed, and this requires acceleration'\n\n\ninteract(q_2, val = widgets.Dropdown(options=[' ',a1 ,a2],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n### Uniform Motion\n\nNow that we have an understanding of distance, displacement, velocity and acceleration, let's use those quantities to describe \"uniform motion\".\n\nUniform motion has the following two properties:\n\n* Motion is constant or steady that means object covers equal distance in equal time interval.\n\n* The object travels in a straight line.\n\nConsider the blue car in the animation below\n\n\n
http://www.ninetyeast.net/physics/grade-9-10-gcse-hsc/forces/newtons-laws-of-motion/newtons-first-law-of-motion
\n\nThe blue car is travelling at a velocity of $10 \\textrm{ ms}^{-1}$ to the right. That means every second the car is travelling $10 \\textrm{ m}$. If we record this car's displacement and velocity for $10 \\textrm{ sec}$, we get the following table. \n\n\n| Time $(sec$) | Displacement ($m$) | Velocity ($ms^{-1}$)|\n|:-------------:|:-----------------:|:--------:|\n| $ 1$ | $ 10$ | $10$ | \n| $ 2$ |$ 20$ | $10$ | \n| $ 3$ |$30$ | $10$ | \n| $4$ |$40$ | $10$ | \n| $ 5$ | $ 50$ | $10$ | \n| $ 6$ |$ 60$ | $10$ | \n| $ 7$ |$70$ | $10$ | \n| $8$ |$80$ | $10$ | \n| $ 9$ |$90$ | $10$ | \n| $10$ |$100$ | $10$ | \n\nWe can also use this table to create the animation below:\n\n\n```python\nhide_me\n\n# Data\nt = np.linspace(0,10,11)\nd = np.linspace(0,100,11)\nv = np.linspace(10,10,11)\n\n# Create a figure with two subplots\nfig, (ax1, ax2) = plt.subplots(1,2, figsize=(10, 4), dpi= 90, facecolor='w', edgecolor='k')\n\n# Same X axis limt and grid initalizations\nfor ax in [ax1, ax2]:\n ax.set_xlim(0, 10)\n ax.grid()\n \nax1.set_ylim(0,100)\nax2.set_ylim(0,20) \n\n# Initialize the plot\nl1, = ax1.plot([],[], 'go-', label='Displacement', linewidth=2)\nleg = ax1.legend(loc='best')\nl2, = ax2.plot([],[], 'rs-', label='Velocity')\nleg = ax2.legend(loc='best')\n\nfig.suptitle('Uniform Motion')\nax1.set_xlabel('Time (sec)')\nax2.set_xlabel('Time (sec)')\nax1.set_ylabel('Displacement (m)')\nax2.set_ylabel (r'$\\mathrm{Velocity} \\ (\\mathrm{ ms}^{-1})$')\n \n# Initiate the animation\ndef animate(i): \n l1.set_data(t[:i+1], d[:i+1])\n l2.set_data(t[:i+1], v[:i+1])\n \nani = FuncAnimation(fig, animate, interval = 800, frames=len(t))\nplt.close()\n\n# Convert animation to video\nfrom IPython.display import HTML\nHTML(ani.to_html5_video())\n\n```\n\nFrom the table and animations, we find that the blue car's displacement is changing and its velocity is constant. This is an example of uniform motion. The car travels equal distance in equal time.\n\n\\begin{equation}\n\\textrm{velocity, } \\vec{v} = \\frac{\\textrm{change} \\ \\textrm{of} \\ \\textrm{displacement (m)}}{\\textrm{time (s)}} = \\textrm{constant}\n\\end{equation}\n\n**Based on your knowledge of uniform motion, what is the blue car's acceleration?**\n\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"0\":\n display(Latex(\"Correct!\"))\n display(Latex(\"Speed is constant and direction is in a straight line\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\na1 = '0'\na2 = 'Constant'\na3 = 'Variable'\n\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n### Uniform Accelelerated Motion\n\n\n```python\nhide_me\n\nfrom IPython.display import HTML\n# Youtube\nHTML('')\n```\n\nUniformly accelerated motion is a little different than the uniform motion we discussed earlier. In the case of uniformly accelerated motion, your velocity is increasing constantly and equally in time. Your velocity is changing in a way similar to how displacement changes if you're traveling at constant velocity. Suppose you start at rest and begin running in order to catch a ball. In this scenario, you will need to _accelerate_. Suppose after each second, you are traveling 2 $\\frac{\\textrm{m}}{\\textrm{s}}$ faster than the second previous. In such a case, you have a uniform acceleration of 2 $\\frac{\\textrm{m}}{\\textrm{s}^2}$ (the seconds are squared in the units of acceleration as you are increasing your velocity constantly - i.e. you gain more velocity per second). 
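\n\nBefore tabulating these values, here is a minimal sketch (assuming, as in the example above, a constant acceleration of $2 \\ \\textrm{ms}^{-2}$ starting from rest) of how each displacement and velocity entry can be computed; the relations used here are introduced formally in the Equations of Motion section further down.\n\n\n```python\nimport numpy as np\n\na = 2.0                 # constant acceleration in m/s^2 (assumed, as in the example above)\nt = np.arange(0, 9)     # time stamps 0..8 s\nv = a * t               # velocity grows linearly: v = a*t\nd = 0.5 * a * t**2      # displacement grows quadratically: d = (1/2)*a*t^2\n\nfor ti, vi, di in zip(t, v, d):\n    print(f't = {ti} s: displacement = {di:.0f} m, velocity = {vi:.0f} m/s')\n```\n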
Let's write down displacement, velocity and acceleration as you try to catch this ball in a table:\n\n| Time $(\\textrm{sec}$) | Displacement ($\\textrm{m}$) | Velocity ($\\textrm{ms}^{-1}$)| Acceleration ($\\textrm{ms}^{-2}$) \n|:-------------:|:-----------------:|:--------:|:--------:|\n| $ 0$ | $ 0$ | $0$ | $2$ |\n| $ 1$ |$ 1$ | $2$ | $2$ |\n| $ 2$ |$4$ | $4$ | $2$ |\n| $3$ |$9$ | $6$ | $2$ |\n| $ 4$ |$16$ | $8$ | $2$ |\n| $5$ |$25$ | $10$ | $2$ |\n| $6$ |$36$ | $12$ | $2$ |\n| $ 7$ |$49$ | $14$ | $2$ |\n| $8$ |$64$ | $16$ | $2$ |\n\nUsing the table, we can also create an animation\n\n\n```python\nhide_me\n\n# Data\nt = np.linspace(0,8,9);\nd = ([0,1,4,9,16,25,36,49,64]);\nv = ([0,2,4,6,8,10,12,14,16]);\na = np.linspace(2,2,9);\n\n# Create a figure with three subplots\nfig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(12.5, 4), dpi= 80, facecolor='w', edgecolor='k')\n\n# Same X axis limt and grid initalizations\nfor ax in [ax1, ax2, ax3]:\n ax.set_xlim(0, 8);\n ax.grid();\n \nax1.set_ylim(0,70);\nax2.set_ylim(0,16); \nax3.set_ylim(0,5);\n\n# Initialize the plot\nl, = ax1.plot([],[], 'go-', label='Displacement', linewidth=2);\nleg = ax1.legend(loc='best');\nl1, = ax2.plot([],[], 'rs-', label='Velocity');\nleg = ax2.legend(loc='best');\nl2, = ax3.plot([],[], 'b*-', label='Acceleration', linewidth=2);\nleg = ax3.legend(loc='best');\n\nfig.suptitle('Uniformly Accelerated Motion');\nax1.set_xlabel('Time (sec)');\nax2.set_xlabel('Time (sec)');\nax3.set_xlabel('Time (sec)');\nax1.set_ylabel('Displacement (m)');\nax2.set_ylabel (r'$\\mathrm{Velocity} \\ (\\mathrm{ ms}^{-1})$');\nax3.set_ylabel (r'$\\mathrm{Acceleration} \\ (\\mathrm{ ms}^{-2})$');\n\n# Initiate the animation\ndef animate(i):\n l.set_data(t[:i+1], d[:i+1]);\n l1.set_data(t[:i+1], v[:i+1]);\n l2.set_data(t[:i+1], a[:i+1]);\n \nani=FuncAnimation(fig, animate, interval = 800, frames=len(t));\nplt.close()\n\n# Convert animation to video\nfrom IPython.display import HTML\nHTML(ani.to_html5_video())\n```\n\nNotice how with constant acceleration, your velocity increases _linearly_. As well, when you're accelerating, your displacement changes _parabolically_. These relationships are explained further with the equations of motion below.\n\n\n## Equations of Motion\n\nUsing our relationships between displacement, velocity, acceleration and time, we can define \"equations of motion\" for a moving object. There are four such equations relevant to the principles of uniform motion and uniform acceleration.\n\nSuppose an object with initial velocity $\\vec{\\textrm{v}}_i$ $\\textrm{ms}^{-1}$ is travelling with uniform acceleration $\\vec{\\textrm{a}}$ $\\textrm{ms}^{-2}$. 
After traveling a displacement $\\vec{\\textrm{s}}$ in time $\\textrm{t}$ the object's final velocity would be $\\vec{\\textrm{v}_f}$ $\\textrm{ms}^{-1}$, these quantities are described by the following equations\n\n\\begin{align}\n \\vec{\\textrm{v}_f}=\\vec{\\textrm{v}_i}+\\vec{\\textrm{a}} \\textrm{t} \\ \\ \\ \\ \\ \\ \\ \\ (1) \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \n \\vec{\\textrm{s}} = (\\frac{\\vec{\\textrm{v}_i}+\\vec{\\textrm{v}_f}}{2})\\textrm{t} \\; \\; (2)\n\\end{align}\n\n\n\n\\begin{align}\n\\vec{\\textrm{s}} = \\vec{\\textrm{v}_i}\\textrm{t}+\\frac{1}{2}\\vec{\\textrm{a}}\\textrm{t}^{2} \\; (3) \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \n\\vec{\\textrm{v}_f} = \\textrm{v}_i^{2}+2 \\vec{\\textrm{a}} \\; \\vec{\\textrm{s}} \\; \\; \\; \\; (4)\n\\end{align}\n\n\n\nThe equations above describe both uniform motion and uniformly accelerated motion. Equation (1) describes your final velocity given an intial velocity, and an acceleration over time. Notice how this is the equation of a line. This is what we see in the center plot of the animation above. Equation (3) describes your displacement moving at some initial velocity and accelerating uniformly for some time. Notice how time is squared in this equation. This means that this is the equation of a parabola. Equation (3) is what we see in the first plot of the animation above. \n\n### Mathematical Problems\n\nAs we work through the problems below, we will outline the following steps as we come to the solution. Breaking the problem down in this way makes the problem less intimidating, and makes the path to solution more straight foreward.\n\n> 1. Identify and write what information is given in the problem, and the answer that we are asked for.\n2. Using the information we know, and and the problem identified in step one, identify which equation(s) of motion we need to use to solve the problem.\n3. Ensure that all the values are in the correct units and fill them in the selected equation.\n4. Quote the answer and check the units.\n\n#### Problem 1\n\nA bus accelerates from rest at $4 \\textrm{ms}^{-2}$ until it reaches a final velocity of $40 \\ \\textrm{ms}^{-1}$. For how many seconds was the bus accelerating?\n\n#### Solution\n\n**Step 1:**\n\nGiven,\n\ninitial velocity, $\\vec{\\textrm{v}_i} = 0 \\ \\textrm{ms}^{-1}$ \n\nfinal velocity, $\\vec{\\textrm{v}_f} = 40 \\ \\textrm{ms}^{-1}$ \n\nacceleration, $\\vec{\\textrm{a}} = 4 \\ \\textrm{ms}^{-2}$\n\ntime, $\\textrm{t} = ?$\n\n**Step 2:**\n\nIn this problem, the value of $\\vec{\\textrm{v}_i}$, $\\vec{\\textrm{v}_f}$ and $\\vec{\\textrm{a}}$ are given and $\\textrm{t}$ is required. Now, check the above $4$ motion equations, find the equation is related to$\\vec{\\textrm{v}_i}$, $\\vec{\\textrm{v}_f}$ and $\\vec{\\textrm{a}}$ and $\\textrm{t}$. We find the equation and it is $\\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}+\\vec{\\textrm{a}}\\textrm{t}$.\n\n**Step 3:** After checking all the units of the given variable $\\textrm{v}_i$, $\\textrm{v}_f$ and $\\textrm{a}$, we find that they are correct. 
Then fill them in selected equation:\n\n\\begin{equation}\n\\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}+\\vec{\\textrm{a}}\\textrm{t} \\\\\n\\Rightarrow \\vec{\\textrm{a}}\\textrm{t} = \\vec{\\textrm{v}_f} - \\vec{\\textrm{v}_i} \\\\\n\\Rightarrow \\textrm{t} = \\frac{\\vec{\\textrm{v}_f}-\\vec{\\textrm{v}_i}}{\\vec{\\textrm{a}}} \\\\\n\\Rightarrow \\textrm{t} = \\frac{40\\textrm{ms}^{-1}-0\\textrm{ms}^{-1}}{4\\textrm{ms}^{-2}} \\\\\n\\Rightarrow \\textrm{t} = 10\\textrm{ s}\n\\end{equation}\n\n**Step 4:**\n\ntime = $10\\textrm{ s}$ $(Ans)$\n\nThis answer can be seen graphically in the animation below:\n\n\n\n```python\nhide_me\n\n# Data frame to create table\ndf = pd.DataFrame()\ndf['Velocity'] = np.arange(0,41,4)\ndf['Time'] = df['Velocity']/4\n\n# Create a figure with two subplots\nfig,(ax1,ax2) = plt.subplots(1,2,figsize=(6.8, 4), dpi= 100)\n\n# Limit and label axis\nax1.set_ylim(0,10)\nax1.set_xlim(0,40)\nax1.set_xlabel('Velocity ($ms^{-1}$)')\nax1.set_ylabel('Time (s)')\nax1.grid()\n\n# Initiate table properties\nfont_size=14\nbbox=[0, 0, 1, 1]\nax2.axis('off')\n\n# Initialize the plot\nl, = ax1.plot([],[], 'go-', label='Time', linewidth=2)\nleg = ax1.legend(loc='best')\n\n# Initiate the animation\ndef animate(i):\n l.set_data(df['Velocity'][:i+1], df['Time'][:i+1])\n table = ax2.table(cellText = df.values[:i+1],bbox=bbox, colLabels=df.columns)\n \n \nani = FuncAnimation(fig, animate, interval = 800, frames=len(df.index))\nplt.close()\n\n# Convert animation to video\nfrom IPython.display import HTML\nHTML(ani.to_html5_video())\n```\n\n#### Problem 2\n\nA car accelerates uniformly from $16 \\ \\textrm{ms}^{-1}$ to $38.8 \\ \\textrm{ms}^{-1}$ in $3.1$ seconds. Calculate the distance traveled by the car.\n\n#### Solution\n\n**Step 1:**\n\nGiven,\n\n$\\textrm{initial} \\ \\textrm{velocity}, \\vec{\\textrm{v}_i} = 16 \\ \\textrm{ms}^{-1}$\n\n$\\textrm{final} \\ \\textrm{velocity}, \\vec{\\textrm{v}_f} = 38.8 \\ \\textrm{ms}^{-1}$\n\n$\\textrm{time, t} = 3.1 \\ \\textrm{s}$ \n\n$\\textrm{displacement, } \\vec{\\textrm{s}} = ?$\n\n**Step 2:**\n\nTry yourself!\n\n\n```python\nhide_me\n\ndisplay(Latex('Which equation can be chosen to solve this problem?'))\n\n#Create the box to select m=Motion of Equations\na=widgets.Checkbox(\n value=False,\n description=r\"$\\vec{\\textrm{v}_f}=\\vec{\\textrm{v}_i}+\\vec{\\textrm{a}} \\textrm{t}$\",\n disabled=False\n)\n\nb=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{s}} = (\\frac{\\vec{\\textrm{v}_i}+\\vec{\\textrm{v}_f}}{2})\\textrm{t}$')\n\nc=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{s}} = \\vec{\\textrm{v}_i}\\textrm{t}+\\frac{1}{2}\\vec{\\textrm{a}}\\textrm{t}^{2}$'\n)\nd=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}^{2}+2 \\vec{\\textrm{a}} \\; \\vec{\\textrm{s}}$',\n disabled=False\n)\n\n#Display the check box\ndisplay(a)\ndisplay(b)\ndisplay(c)\ndisplay(d)\n\n#create a button to check the answer\nbutton_check = widgets.Button(description=\"check\")\ndisplay(button_check)\n\n#Check the answer\ndef check_button(x):\n if a.value==False and b.value==True and c.value==False and d.value==False:\n display(Latex(\"Correct. 
Well done!\"))\n else: \n display(Latex(\"Wrong one!\"))\n\nbutton_check.on_click(check_button)\n\n```\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"84.94 m\":\n display(Latex(\"Correct!\"))\n display(Latex(r'$\\vec{\\textrm{s}} = (\\frac{\\vec{\\textrm{v}_i}+\\vec{\\textrm{v}_f}}{2})\\textrm{t}$'))\n display(Latex(r'$\\vec{\\textrm{s}} = (\\frac{(16 + 38.8)} {2} \\times 3.1) \\textrm{ m}$'))\n display(Latex(r'$\\vec{\\textrm{s}} = 84.94 \\textrm{ m}$'))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(Latex(\"What is your final displacement?\"))\n\na1 = '84.94 m'\na2 = '65.26 m'\na3 = '89.5 m'\n\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n#### Problem 3\n\nA racing car is traveling with a velocity of $72 \\ \\textrm{kmh}^{-1}$ N and accelerates at $5 \\ \\textrm{ms}^{-2}$ for $10 \\ \\textrm{s}$ N. What is the final velocity of the car and how far will it travel as it accelerates?\n\n#### Solution\n\n**Step 1:**\n\n\nGiven\n$\\textrm{initial} \\ \\textrm{velocity,} \\vec{\\textrm{v}_i} = 72 \\ \\textrm{kmh}^{-1}$ N\n\n$\\textrm{acceleration, } \\vec{\\textrm{a}} = 5 \\ \\textrm{ms}^{-2}$ N\n\n$\\textrm{time}, \\textrm{t} = 10 \\ \\textrm{s}$\n\n$\\textrm{final} \\ \\textrm{velocity}, \\vec{\\textrm{v}_f} = ?$\n\n$\\textrm{displacement}, \\vec{\\textrm{s}} = ?$\n\n**Step 2:** Let's try,\n\n\n\n```python\nhide_me\n\ndisplay(Latex('Which equations can be chosen to solve this problem?'))\n\n#Create the box to select m=Motion of Equations\na=widgets.Checkbox(\n value=False,\n description=r\"$\\vec{\\textrm{v}_f}=\\vec{\\textrm{v}_i}+\\vec{\\textrm{a}} \\textrm{t}$\",\n disabled=False\n)\n\nb=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{s}} = (\\frac{\\vec{\\textrm{v}_i}+\\vec{\\textrm{v}_f}}{2})\\textrm{t}$')\n\nc=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{s}} = \\vec{\\textrm{v}_i}\\textrm{t}+\\frac{1}{2}\\vec{\\textrm{a}}\\textrm{t}^{2}$'\n)\nd=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}^{2}+2 \\vec{\\textrm{a}} \\; \\vec{\\textrm{s}}$',\n disabled=False\n)\n\n#Display the check box\ndisplay(a)\ndisplay(b)\ndisplay(c)\ndisplay(d)\n\n#create a button to check the answer\nbutton_check = widgets.Button(description=\"check\")\ndisplay(button_check)\n\n#Check the answer\ndef check_button(x):\n if a.value==True and b.value==True and c.value==True and d.value==True:\n display(Latex(\"Correct. Well done!\"))\n display(Latex(\"yes, all the equations can be used. 
It depends on which one you are picking.\")) \n else: \n display(Latex(\"Try Again!\"))\n display(Latex('Hint: More than one answers')) \n\nbutton_check.on_click(check_button)\n\n```\n\n**Step 3:**\n\n\\begin{equation}\n\\textrm{initial} \\ \\textrm{velocity,} \\vec{\\textrm{v}_i} = 72 \\ \\textrm{kmh}^{-1} N = \\frac{72\\times1000}{3600} \\ \\textrm{ms}^{-1} = 20 \\ \\textrm{ms}^{-1} \\; N \n\\end{equation}\n\nCalculate final velocity,\n\n\\begin{equation}\n\\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}+\\vec{\\textrm{a}}\\textrm{t} \\\\\n\\vec{\\textrm{v}_f}= 20+(5\\times10) \\\\\n\\vec{\\textrm{v}_f}= 70 \\ \\textrm{ms}^{-1} N\n\\end{equation}\n\nCalculate displacement,\n\n\\begin{equation}\n\\vec{\\textrm{s}} = \\vec{\\textrm{v}_i}\\textrm{t}+\\frac{1}{2}{\\vec{\\textrm{a}}}{\\textrm{t}^{2}} \\\\\n\\vec{\\textrm{s}}=20\\times10+\\frac{1}{2}{5\\times}{10^{2}} \\\\\n\\vec{\\textrm{s}}=(200+250)\\ \\textrm{m} \\\\\n\\vec{\\textrm{s}}=450 \\ \\textrm{m} \\; N\n\\end{equation}\n\n**Step 4:**\n\n\\begin{align}\n\\textrm{final} \\ \\textrm{velocity}, \\vec{\\textrm{v}_f}=70 \\ \\textrm{ms}^{-1}\\ \\; N \\;\\;(Ans) \\\\\n\\textrm{displacement} = 84.94 \\ \\textrm{m} \\ \\; N \\;\\; (Ans)\n\\end{align}\n\n#### Problem 4\n\nAn Air Canada plane requires a takeoff speed of $78.5 \\ \\textrm{ms}^{-1}$ and $1690 \\ \\textrm{m}$ of runway to reach that speed. Determine the acceleration of this plane and the time required to reach this speed.\n\n#### Solution\n\n**Step 1:** Let's try,\n\n\n```python\nhide_me\n\ndisplay(Latex('Which values are given in this problem?'))\n\n#Create the box to select m=Motion of Equations\na=widgets.Checkbox(\n value=False,\n description=r\"$\\vec{\\textrm{v}_i}$\",\n disabled=False\n)\n\nb=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{v}_f}$')\n\nc=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{a}}$'\n)\nd=widgets.Checkbox(\n value=False,\n description=r'$\\vec{\\textrm{s}}$',\n disabled=False\n)\n\ne=widgets.Checkbox(\n value=False,\n description=r'$t$',\n disabled=False\n)\n\n\n#Display the check box\ndisplay(a)\ndisplay(b)\ndisplay(c)\ndisplay(d)\ndisplay(e)\n\n#create a button to check the answer\nbutton_check = widgets.Button(description=\"check\")\ndisplay(button_check)\n\n#Check the answer\ndef check_button(x):\n if a.value==True and b.value==True and c.value==False and d.value==True and e.value==False:\n display(Latex(\"Correct. 
Well done!\"))\n display(Latex(\"Initial velocity is also given as initially plane will start from $0$ $ms^{-1}$\"))\n else: \n display(Latex(\"Wrong one!\"))\n \n\nbutton_check.on_click(check_button)\n\n```\n\n**Step 2:**\n\n\\begin{align}\n\\textrm{Motion} \\ \\textrm{equation, } \\vec{\\textrm{v}_f} = \\vec{\\textrm{v}_i}^{2}+2 \\vec{\\textrm{a}} \\; \\vec{\\textrm{s}} \\textrm{ and }\n\\vec{\\textrm{v}_f}=\\vec{\\textrm{v}_i}+\\vec{\\textrm{a}} \\textrm{t}\n\\end{align}\n\n**Step 3**\n\nLet's calculate,\n\n\n```python\nhide_me\n\ndef q_1(val):\n if val == \"acceleration = 1.82 ms\\u00b2 and time = 43.06 s\":\n display(Latex(\"Correct!\"))\n elif val == ' ':\n None\n else:\n display(Latex(\"Try Again!\"))\n\ndisplay(Latex(\"What is the plane's required acceleration, and how long was it accelerating?\"))\n\na1 = 'acceleration = 1.91 ms\\u00b2 and time = 48.1 s'\na2 = 'acceleration = 1.82 ms\\u00b2 and time = 43.06 s'\na3 = 'acceleration = 1.82 ms\\u00b2 and time = 45.4 s'\n\ninteract(q_1, val = widgets.Dropdown(options=[' ',a1 ,a2, a3 ],value = ' ',description = 'Choose One:',disabled = False));\n```\n\n### Exercise\n\n **Question 1**: \n A car starting from rest in straight moves with uniform acceleration of $5 \\ \\textrm{ms}^{-2}$. What will be the velocity while crossing a person at a distance $40 \\ \\textrm{m}$? (Ans: $20 \\ \\textrm{ms}^{-1}$)\n\n **Question 2**:\n A bike accelerates uniformly from rest to a speed of $1 \\ \\frac{\\textrm{km}}{\\textrm{min}}$ over a distance of $65 \\ \\textrm{m}$. Determine the acceleration of the bike. (Ans: $2.137 \\ \\textrm{ms}^{-1}$)\n\n **Question 3**:\n A bullet leaves a rifle with a velocity of $521 \\ \\textrm{ms}^{-1}$. While accelerating through the barrel of the rifle, the bullet moves a distance of $0.840 \\ \\textrm{m}$. Determine the acceleration of the bullet (Consider a uniform acceleration). (Ans: $1.62 \\times 10^{5} \\ \\textrm{ms}^{-1}$)\n\n **Question 4**:\n The Velocity of a jeep decreases uniformly from $35 \\ \\textrm{ms}^{-1}$ and after $8 \\ \\textrm{s}$ it becomes $10 \\ \\textrm{ms}^{-1}$. Find the acceleration of the car? (Ans: $-3.125 \\ \\textrm{ms}^{-2}$, You do not have worry about the '-' sign. It defines the direction as velocity is dreceasing)\n\n## Conclusion\n\nIn this notebook, we introduced two important concepts of motion: Uniform and Uniformly accelerated motion. We demonstrated how motion is related to distance, speed, velocity and acceleration. We also showed the various linear relationships between displacement, velocity and uniform acceleration using both the equations of motion, and the graphs of those functions. Using these equations, we then demonstrated several cases where they we can apply the ideas of uniform motion and uniformly accelerated motion to solve classic physics problems. 
This notebook serves as an introduction to the concept of uniform and uniformly accelerated motion, and will allow you to solve a great many problems using these concepts, and should act as a reasonable primer for more complex kinematic problems.\n\n[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)\n", "meta": {"hexsha": "94fc5060f04a14667b0dbcc1af876754e0b32d91", "size": 39669, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_sources/curriculum-notebooks/Science/UniformMotionAndUniformlyAcceleratedMotion/uniform-motion-and-uniformly-accelerated-motion.ipynb", "max_stars_repo_name": "mlamoureux/CallystoJBook", "max_stars_repo_head_hexsha": "058080e1bb6370c0a072ed1c0ded96fe1e2947de", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_sources/curriculum-notebooks/Science/UniformMotionAndUniformlyAcceleratedMotion/uniform-motion-and-uniformly-accelerated-motion.ipynb", "max_issues_repo_name": "mlamoureux/CallystoJBook", "max_issues_repo_head_hexsha": "058080e1bb6370c0a072ed1c0ded96fe1e2947de", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_sources/curriculum-notebooks/Science/UniformMotionAndUniformlyAcceleratedMotion/uniform-motion-and-uniformly-accelerated-motion.ipynb", "max_forks_repo_name": "mlamoureux/CallystoJBook", "max_forks_repo_head_hexsha": "058080e1bb6370c0a072ed1c0ded96fe1e2947de", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39669.0, "max_line_length": 39669, "alphanum_fraction": 0.5988807381, "converted": true, "num_tokens": 9525, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO\n\n", "lm_q1_score": 0.5506073655352404, "lm_q2_score": 0.4843800842769843, "lm_q1q2_score": 0.26670324212148805}} {"text": "# Patch Discord\n## The Aleatoric MIDI Router\n\n### Introduction\n\nThe aletoric MIDI router is a device that serves the purpose of erasing the labels from the control knobs of a synthesizer.\n\nIn a system comprising a controller and a synthesizer, the router can be placed in the middle, receiving a number of control signals and outputting altered control signals. As can be guessed from the name, the router randomly permutes signal destinations. However, it does not simply route its inputs to different channels. This would have resulted in simply renaming the knobs, and not yet erasing their labels. Instead, the router performs a permutation such that *all* the input channels affect *all* the output channels. This way, a musician using a controller cannot know what the result of a turn of a knob would be, and thus cannot apply his or her knowledge and experience, but has to completely rely on listening instead. Furthermore, to exclude the possibility of the musician getting accustomed to the routing with time and mentally mapping the controls to their new destinations, the internal state of the router gradually changes, so that the routing is never static. The output changes even when the musician does not provide any input. 
This side effect serves as an invitation for the musician to actively work even when just maintaining a certain timbre.\n\nThe aleatoric router is clearly not a utility device, but rather a catalyst of a creative practice.\n\n### Notes on implementation\n\nWhile the general description above does not imply a single implementation, this document develops one possible implementation of the router.\n\nThe router can be realised as a MIDI effect plugin for DAWs, as well as a standalone hardware device. Moreover, an analog variant is also concievable, replacing MIDI with CV signals. However, the current implementation focuses on MIDI.\n\n### High-level structure\n\nIn a setup with a separate controller and a sound engine, the router fits in the middle:\n\n```\n+---+ +---+ +---+\n| C | == M ==> | R | == M ==> | S |\n+---+ +---+ +---+\n\nC - controller\nM - MIDI bus\nR - router\nS - synth\n```\n\n---\n\nThe internal structure of the router is demonstated by the figure below:\n\n```\n \u23a1 \u23a4 \u23a1 \u23a4 \u23a1 \u23a4\nM ==>\u23a2I\u23a2 -> \u23a2 P \u23a2 -> \u23a2O\u23a2 ==> M\n \u23a3 \u23a6 \u23a3 \u23a6 \u23a3 \u23a6\n ^\n ^\n G\n \nM - MIDI bus\nI - input accumulator\nP - permutation\nG - internal state generator\nO - output accumulator\n```\n\n---\n\nThe principal operation performed by the router is a permutation of an input vector (permutation here is not to be understood in its combinatorial meaning). To adapt to the event-driven model of the MIDI protocol, input and output accumulators are employed. The input accumulator receives MIDI CC events and outputs a vector of the latest values on each of the preconfigured channels. The output accumulator perfoms the reverse operation: it receives the result of the permutation and emits MIDI CC events on channels on which the value has changed.\n\nOne cycle of the router's operation consists of:\n1. freezing the input vector\n2. performing a permutation\n3. sending to the bus the difference between the permutation result and the previous cycle's output\n\nAs has been already mentioned, the permutation is not static. The purpose of the internal state generator is to periodically generate the permutation parameters. The state generator is not clocked at the same rate as the main cycle.\n\nThe clock rate of the main cycle should be approximately the rate at which the knobs are scanned by the controller, so that CC events are not missed, otherwise the user will experience unresponsiveness.\n\nThe clock rate of the state generator should be substantially slower, otherwise something akin to a frantic LFO on all the configured channels will be heard. In general, the state change rate should be a user-controllable parameter.\n\n### Permutation\n\nThe main operation of the router is the permutation of an input vector. The term permutation here is not used in its combinatorial meaning, but is defined below instead.\n\nFirst of all, it should be pointed out that all the values on the MIDI bus are taken to be integers in the range $[0; 127]$. The input and the output of the permutation are vectors of values in this domain.\n\nThe term channel is used below to signify a MIDI CC number. In more conventional MIDI terminology, the router operates on a single MIDI channel, but on multiple controllers (aka CC).\n\nBefore the permutation is derived, its desired characteristics should be clarified.\n1. Number of input and output channels need not necessarily be equal.\n2. All the input channels affect all the output channels to a certain degree.\n3. 
The mapping should be continuous (in the discreet approximation, as we are operating in integers), so that close input vectors map to close output vectors.\n4. The mapping should **not** be intuitively understandable by the musician. This implies that a permutation that maps all-zero input to all-zero output and/or all-maximum input to all-maximum output is unsatisfactory.\n5. The output vector should not overflow.\n6. The full range of each output channel should be covered by the input space. In other words, for each output channel for each possible value there exists at least one input vector that maps to an output vector that contains this value.\n7. The state of the permutation should be gradually variable. This means that the permutation is parameterised, and changing the permutation parameter results in smooth changes in the output given the same input.\n\nIt should also be noted that the following properties are not specifically aimed for:\n- Covering the entire output space is not required. Any specific output vector it is not guaranteed to be reachable.\n- Unique mapping is not required. For any reachable output vector there might exist more than one input vector that maps to it.\n\nAll these requirements do not describe a single possible permutation. In the context of the musical device, a precise mathematical destination is probably not relevant. One permutation that fulfills the requirements is derived below, using some basic linear algebra and some intuitions.\n\nTo satisfy the first requirement, let's immediately consider the general case of $n$ input channels and $m$ output channels. The task is now to find a mapping from $n$ dimensional vector space to $m$ dimensional vector space.\n\n\\begin{equation}\n\\begin{bmatrix}i_{1} \\\\ i_{2} \\\\ ... \\\\ i_{n}\\end{bmatrix}\n\\mapsto\n\\begin{bmatrix}o_{1} \\\\ o_{2} \\\\ ... \\\\ o_{m}\\end{bmatrix}\n\\end{equation}\n\nWe begin with a linear mapping and introduce a matrix $P$ with $m$ rows and $n$ columns. This satisfies the second requirement, as the element $p_{mn}$ of the matrix is a weigth with which $n$-th input contributes to $m$-th output.\n\n\\begin{equation}\n\\begin{bmatrix}p_{11} & p_{12} & ... & p_{1n} \\\\ p_{21} & ... \\\\ ... & & ... \\\\ p_{m1} & & & p_{mn}\\end{bmatrix}\n\\times\n\\begin{bmatrix}i_{1} \\\\ i_{2} \\\\ ... \\\\ i_{n}\\end{bmatrix}=\n\\begin{bmatrix}o_{1} \\\\ o_{2} \\\\ ... \\\\ o_{m}\\end{bmatrix}\n\\end{equation}\n\nLinear mapping also satisfies the third requirement: there are no abrupt jumps in output.\n\nHowever, the fourth requirement, namely that the all-zero input should not map to all-zero output renders a linear mapping unsatisfactory. The fourth requirement calls for a translation. Let's introduce a translation vector that is added to the input vector before the multiplication by the matrix.\n\n\\begin{equation}\n\\begin{bmatrix}p_{11} & p_{12} & ... & p_{1n} \\\\ p_{21} & ... \\\\ ... & & ... \\\\ p_{m1} & & & p_{mn}\\end{bmatrix}\n\\times\n\\left(\n\\begin{bmatrix}i_{1} \\\\ i_{2} \\\\ ... \\\\ i_{n}\\end{bmatrix}\n+\n\\begin{bmatrix}t_{1} \\\\ t_{2} \\\\ ... \\\\ t_{n}\\end{bmatrix}\n\\right)=\n\\begin{bmatrix}o_{1} \\\\ o_{2} \\\\ ... \\\\ o_{m}\\end{bmatrix}\n\\end{equation}\n\nThe fifth requirement is that the output doesn't overflow. Let's consider each output channel separately. The value of the $k$-th output channel is defined by the input vector and the $k$-th row of the matrix.\n\n\\begin{equation}\no_{k} = p_{k1} (i_{1} + t_{k}) + p_{k2} (i_{2} + t_{k}) + ... 
+ p_{kn} (i_{n} + t_{k})\n\\end{equation}\n\nBy carefully picking the $p$ coeffiecients, it can be ensured that the output doesn't overflow. For example, if we ignore the translation, we can achieve that by choosing coefficients to be in the range of $[0; 1]$ in such a way that the sum of the row is $1$. Then each coefficient is the percentage of contribution of the corresponding input.\n\nThis is a viable solution. However, in this case every input channel affects any output channel only to a limited degree. No input channel can reach the full range of an output channel.\n\nIt appears that in order to be more musically interesting, the control should be more redundant. Some input channels should be able to reach the full range of output, and some other input channels should be able to reverse the effect of the former channels.\n\nSo the internal calculations are performed in a broader domain. The translation and multiplication results are allowed to overflow the MIDI range. The multiplication product is mapped back to the domain afterwards.\n\n\\begin{equation}\nwrap\n\\left(\n\\begin{bmatrix}p_{11} & p_{12} & ... & p_{1n} \\\\ p_{21} & ... \\\\ ... & & ... \\\\ p_{m1} & & & p_{mn}\\end{bmatrix}\n\\times\n\\left(\n\\begin{bmatrix}i_{1} \\\\ i_{2} \\\\ ... \\\\ i_{n}\\end{bmatrix}\n+\n\\begin{bmatrix}t_{1} \\\\ t_{2} \\\\ ... \\\\ t_{n}\\end{bmatrix}\n\\right)\n\\right)=\n\\begin{bmatrix}o_{1} \\\\ o_{2} \\\\ ... \\\\ o_{m}\\end{bmatrix}\n\\end{equation}\n\nWe need to employ such a way of wrapping that respects the continuity requirement. Wrapping using a modulo operation does not fulfill this requirment, as it involves a jump from one boundary to the other:\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.arange(-300, 300, 5)\ny = x % 127\n\nfig, ax = plt.subplots(figsize=(15, 2))\nax.plot(x, y, '.')\nplt.show()\n```\n\nInstead of making a jump, the value could bounce from the boundary and start moving in the opposite direction:\n\n\n```python\ndef wrap(x, limit):\n '''The lower limit is assumed to be 0.'''\n x %= 2 * limit\n if x > limit:\n x = 2 * limit - x\n return x\n\ny = np.array([wrap(xi, 127) for xi in x])\n\nfig, ax = plt.subplots(figsize=(15, 2))\nax.plot(x, y, '.')\nplt.show()\n```\n\nWrapping is applied to each element of the output vector.\n\nWith wrapping, the elements of the matrix can be picked almost arbitrarily.\n\nHowever, it is desirable that input covers the output range more than once. The value of a coefficient can be understood as the number of times an input channel covers an output channel (ignoring translation). Large values of coefficients decrease the resolution of the input controls, which is to be avoided. Intuitively, the values of the coefficients should remain approximately in the range $[-2, 2]$, where negative coefficients mean inversion of the input channel.\n\nAlso by intuition, the elements of the translation vector are in the MIDI domain.\n\nThe remaining requirement is that the permutation should be parameterized. This is detailed in the next section.\n\n### Internal State\n\nThe internal state of the router comprises the translation vector and the matrix. The state should be gradually variable, so that varying the state does not result in jumps in the output.\n\nThe translation vector can be randomly chosen at the inititialization time and fixed from then on.\n\nThe matrix should be periodically regenerated, and the next matrix should be closely related to the previous matrix. 
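\n\nBefore looking at how such a matrix is generated, it may help to make the complete mapping described in the previous section concrete. The sketch below applies one fixed state to an example input vector, using the bounce-style wrap() defined above; the matrix, translation vector and input values are placeholders chosen only for illustration.\n\n\n```python\nimport numpy as np\n\ndef wrap(x, limit):\n    '''Bounce-style wrapping into [0, limit], same as defined earlier (repeated here for self-containment).'''\n    x %= 2 * limit\n    if x > limit:\n        x = 2 * limit - x\n    return x\n\n# One fixed state: 4 input channels, 3 output channels (placeholder values)\nP = np.array([[ 1.5, -0.7,  0.3,  1.0],\n              [-1.2,  0.4,  1.8, -0.5],\n              [ 0.6,  1.1, -1.6,  0.9]]) # coefficients roughly in [-2, 2]\ntranslation = np.array([40, 90, 15, 70]) # translation vector, MIDI domain\n\ni = np.array([0, 64, 127, 32]) # example input vector (latest CC values)\no = np.array([wrap(x, 127) for x in P.dot(i + translation)])\nprint(o.astype(int)) # output vector, back in the [0, 127] MIDI domain\n```\n\n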
The elements of the matrix should be in the range $[-2; 2]$. A parameterized function that always yields values within a certain range is necessary.\n\nA simple example of such a function is the sine function:\n\\begin{equation}\nx = a \\cdot sin{(f t + \\phi)} + c\n\\end{equation}\n\nSince the range is symmetrical with respect to $0$, the constant $c$ is $0$.\n\n$a$ is the scaling factor controlling the output range. It should be set to $2$.\n\nEach row of coefficients can be picked as samples from a section of a sine wave one period long.\nThe equation for the elements of a row is:\n\n\\begin{equation}\np_{i} = 2 \\cdot sin{((i - 1) \\frac{2 \\pi}{n} + \\phi)}\n\\end{equation}\n\n$n$ is the number of coefficients, which is the number of inputs. $i$ is the index of the element in the row, in the range $[1; n]$.\n\nThis way there is one peak and one valley per row, that is per output channel. This translates to one input channel being the dominant control, and another input channel being its opposite.\n\nThe phase $\\phi$ is the variable parameter of the permutation. As it changes, the row of coefficients slightly shifts along the sine wave, but never changes abruptly. Initial phases per row can be randomly chosen at initialization time, just like the translation vector.\n\n$\\phi$ is gradually incremented. The rate of change of $\\phi$ is the rate of the state generator mentioned in the section on high-level structure. The precise value can be user-controlled, but the full cycle (change of $\\phi$ by $2\\pi$ radians) should take several minutes, so that the fact that the state is periodic is not obvious to the user.\n\nHere is a demonstration of sampling of the coefficients:\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nPHI = .333\nN = 8 # number of coefficients\n\ndef coefficient(index):\n return 2 * np.sin((index * 2 * np.pi) / (N - 1) + PHI)\n\nindices = [i for i in range(N)]\nsamples = np.array([coefficient(i) for i in indices])\n\n# detailed function for demonstration\nref_x = np.arange(0, N - 0.9, 0.1)\nref_y = np.array([coefficient(i) for i in ref_x])\n\nfig, ax = plt.subplots(figsize=(15, 2))\nax.plot(indices, samples, 'o', ref_x, ref_y, '-')\nplt.show()\n```\n\nThere is one more consideration. With the coefficients sampled from a sine wave, the user might notice a spatial pattern of the dominant control and the opposite control being $n/2$ knobs apart if there are $n$ inputs. In order to conceal this spatial pattern, one more step is added. After the coefficients are sampled, they are shuffled, so that they appear in the matrix in a different order.\n\nThe shuffling can be represented with a shuffling matrix $S$ that has the same dimensions as the permutation matrix. The element of each row of the shuffling matrix $s_{i}$ represents the column in the permutation matrix to which the $i$-th sample from the sine wave will go. 
So the rows of the permutation matrix are shufflings of numbers from 1 to $n$, where $n$ is the number of coefficients.\n\n\n```python\nimport random\n\nshuffling = random.sample(indices, len(indices))\n \ncoefficients = [0 for i in range(N)]\nfor i in indices:\n coefficients[i] = samples[shuffling[i]]\n\nprint(\"shuffling:\")\nprint(\"\\n\".join([\"from {0} to {1}\".format(i, v) for i, v in enumerate(shuffling)]))\nprint(\"\\nsamples:\")\nprint(\" | \".join([\"{:.3f}\".format(i) for i in samples]))\nprint(\"\\ncoefficients:\")\nprint(\" | \".join([\"{:.3f}\".format(i) for i in coefficients]))\n```\n\n shuffling:\n from 0 to 7\n from 1 to 0\n from 2 to 1\n from 3 to 5\n from 4 to 4\n from 5 to 3\n from 6 to 2\n from 7 to 6\n \n samples:\n 0.654 | 1.885 | 1.697 | 0.231 | -1.409 | -1.988 | -1.070 | 0.654\n \n coefficients:\n 0.654 | 0.654 | 1.885 | -1.988 | -1.409 | 0.231 | 1.697 | -1.070\n\n\nFinally, the full state of the router is represented by:\n- translation vector\n- initial phases vector\n- shuffling matrix\n- current phase shift\n\nAll of these except for the phase shift are fixed at the initialization stage.\nThe phase shift is incremented at state change rate. The permutation matrix is then regenerated with the new phase shift.\n", "meta": {"hexsha": "7b2cc849dc6d7f0bb8bdd4062c622768cda50dad", "size": 46942, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/concept.ipynb", "max_stars_repo_name": "maxklint/patch-discord", "max_stars_repo_head_hexsha": "d6ed14a6119f99e0dd07ad06a766592ab423f6e0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/concept.ipynb", "max_issues_repo_name": "maxklint/patch-discord", "max_issues_repo_head_hexsha": "d6ed14a6119f99e0dd07ad06a766592ab423f6e0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/concept.ipynb", "max_forks_repo_name": "maxklint/patch-discord", "max_forks_repo_head_hexsha": "d6ed14a6119f99e0dd07ad06a766592ab423f6e0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 109.6775700935, "max_line_length": 13044, "alphanum_fraction": 0.8188615739, "converted": true, "num_tokens": 3838, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.554470450236115, "lm_q2_score": 0.48047867804790706, "lm_q1q2_score": 0.26641122894607633}} {"text": "# 3. Imagined movement\n\nIn this tutorial we will look at imagined movement. Our movement is controlled in the motor cortex where there is an increased level of mu activity (8\u201312 Hz) when we perform movements. This is accompanied by a reduction of this mu activity in specific regions that deal with the limb that is currently moving. This decrease is called Event Related Desynchronization (ERD). By measuring the amount of mu activity at different locations on the motor cortex, we can determine which limb the subject is moving. Through mirror neurons, this effect also occurs when the subject is not actually moving his limbs, but merely imagining it.\n\n## Credits\nThe CSP code was originally written by Boris Reuderink of the Donders\nInstitute for Brain, Cognition and Behavior. 
It is part of his Python EEG\ntoolbox: https://github.com/breuderink/eegtools\n\nInspiration for this tutorial also came from the excellent code example\ngiven in the book chapter:\n \nArnaud Delorme, Christian Kothe, Andrey Vankov, Nima Bigdely-Shamlo,\nRobert Oostenveld, Thorsten Zander, and Scott Makeig. MATLAB-Based Tools\nfor BCI Research, In _(B+H)CI: The Human in Brain-Computer Interfaces and\nthe Brain in Human-Computer Interaction._ Desney S. Tan and Anton Nijholt\n(eds.), 2009, 241-259, http://dx.doi.org/10.1007/978-1-84996-272-8\n\n## About the data\nThe dataset for this tutorial is provided by the fourth BCI competition.\nBefore you continue with this tutorial, please, go to http://www.bbci.de/competition/iv/#download\nand fill in your name and email address.\nYou don't need to actually download the data, since it is installed in the virtual server already, but this way, the wonderful organizers of the competition get notified that their data is still being used and by whom.\n\n[Description of the data](http://bbci.de/competition/iv/desc_1.html)\n\n\n```python\n%pylab inline\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\nExecuting the code cell below will load the data:\n\n\n```python\nimport numpy as np\nimport scipy.io\n\nm = scipy.io.loadmat('data/BCICIV_calib_ds1d.mat', struct_as_record=True)\n\n# SciPy.io.loadmat does not deal well with Matlab structures, resulting in lots of\n# extra dimensions in the arrays. This makes the code a bit more cluttered\n\nsample_rate = m['nfo']['fs'][0][0][0][0]\nEEG = m['cnt'].T\nnchannels, nsamples = EEG.shape\n\nchannel_names = [s[0] for s in m['nfo']['clab'][0][0][0]]\nevent_onsets = m['mrk'][0][0][0]\nevent_codes = m['mrk'][0][0][1]\nlabels = np.zeros((1, nsamples), int)\nlabels[0, event_onsets] = event_codes\n\ncl_lab = [s[0] for s in m['nfo']['classes'][0][0][0]]\ncl1 = cl_lab[0]\ncl2 = cl_lab[1]\nnclasses = len(cl_lab)\nnevents = len(event_onsets)\n```\n\nNow we have the data in the following python variables:\n\n\n```python\n# Print some information\nprint('Shape of EEG:', EEG.shape)\nprint('Sample rate:', sample_rate)\nprint('Number of channels:', nchannels)\nprint('Channel names:', channel_names)\nprint('Number of events:', len(event_onsets))\nprint('Event codes:', np.unique(event_codes))\nprint('Class labels:', cl_lab)\nprint('Number of classes:', nclasses)\n```\n\n Shape of EEG: (59, 190473)\n Sample rate: 100\n Number of channels: 59\n Channel names: ['AF3', 'AF4', 'F5', 'F3', 'F1', 'Fz', 'F2', 'F4', 'F6', 'FC5', 'FC3', 'FC1', 'FCz', 'FC2', 'FC4', 'FC6', 'CFC7', 'CFC5', 'CFC3', 'CFC1', 'CFC2', 'CFC4', 'CFC6', 'CFC8', 'T7', 'C5', 'C3', 'C1', 'Cz', 'C2', 'C4', 'C6', 'T8', 'CCP7', 'CCP5', 'CCP3', 'CCP1', 'CCP2', 'CCP4', 'CCP6', 'CCP8', 'CP5', 'CP3', 'CP1', 'CPz', 'CP2', 'CP4', 'CP6', 'P5', 'P3', 'P1', 'Pz', 'P2', 'P4', 'P6', 'PO1', 'PO2', 'O1', 'O2']\n Number of events: 1\n Event codes: [-1 1]\n Class labels: ['left', 'right']\n Number of classes: 2\n\n\nThis is a large recording: 59 electrodes where used, spread across the entire scalp. The subject was given a cue and then imagined either right hand movement or the movement of his feet. 
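\n\nA couple of quick summaries of the variables loaded above give a feel for the size of this recording (a small sketch, using only what was already loaded):\n\n\n```python\n# Rough length of the recording and number of cues per class\nprint('Recording length: %.1f minutes' % (EEG.shape[1] / float(sample_rate) / 60))\nfor cl, code in zip(cl_lab, np.unique(event_codes)):\n    print('Cues for class %s: %d' % (cl, np.sum(event_codes == code)))\n```\n\n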
As can be seen from the [Homunculus](http://en.wikipedia.org/wiki/Cortical_homunculus), foot movement is controlled at the center of the motor cortex (which makes it hard to distinguish left from right foot), while hand movement is controlled more lateral.\n\n\n\n## Plotting the data\n\nThe code below cuts trials for the two classes and should look familiar if you've completed the previous tutorials. Trials are cut in the interval [0.5\u20132.5 s] after the onset of the cue.\n\n\n```python\n# Dictionary to store the trials in, each class gets an entry\ntrials = {}\n\n# The time window (in samples) to extract for each trial, here 0.5 -- 2.5 seconds\nwin = np.arange(int(0.5*sample_rate), int(2.5*sample_rate))\n\n# Length of the time window\nnsamples = len(win)\n\n# Loop over the classes (right, foot)\nfor cl, code in zip(cl_lab, np.unique(event_codes)):\n \n # Extract the onsets for the class\n cl_onsets = event_onsets[event_codes == code]\n \n # Allocate memory for the trials\n trials[cl] = np.zeros((nchannels, nsamples, len(cl_onsets)))\n \n # Extract each trial\n for i, onset in enumerate(cl_onsets):\n trials[cl][:,:,i] = EEG[:, win+onset]\n \n# Some information about the dimensionality of the data (channels x time x trials)\nprint('Shape of trials[cl1]:', trials[cl1].shape)\nprint('Shape of trials[cl2]:', trials[cl2].shape)\n```\n\n Shape of trials[cl1]: (59, 200, 100)\n Shape of trials[cl2]: (59, 200, 100)\n\n\n\n\nSince the feature we're looking for (a decrease in $\\mu$-activity) is a frequency feature, lets plot the PSD of the trials in a similar manner as with the SSVEP data. The code below defines a function that computes the PSD for each trial (we're going to need it again later on):\n\n\n```python\nfrom matplotlib import mlab\n\ndef psd(trials):\n '''\n Calculates for each trial the Power Spectral Density (PSD).\n \n Parameters\n ----------\n trials : 3d-array (channels x samples x trials)\n The EEG signal\n \n Returns\n -------\n trial_PSD : 3d-array (channels x PSD x trials)\n the PSD for each trial. \n freqs : list of floats\n Yhe frequencies for which the PSD was computed (useful for plotting later)\n '''\n \n ntrials = trials.shape[2]\n trials_PSD = np.zeros((nchannels, 101, ntrials))\n\n # Iterate over trials and channels\n for trial in range(ntrials):\n for ch in range(nchannels):\n # Calculate the PSD\n (PSD, freqs) = mlab.psd(trials[ch,:,trial], NFFT=int(nsamples), Fs=sample_rate)\n trials_PSD[ch, :, trial] = PSD.ravel()\n \n return trials_PSD, freqs\n```\n\n\n```python\n# Apply the function\npsd_r, freqs = psd(trials[cl1])\npsd_f, freqs = psd(trials[cl2])\ntrials_PSD = {cl1: psd_r, cl2: psd_f}\n```\n\nThe function below plots the PSDs that are calculated with the above function. 
Since plotting it for 118 channels will clutter the display, it takes the indices of the desired channels as input, as well as some metadata to decorate the plot.\n\n\n```python\nimport matplotlib.pyplot as plt\n\ndef plot_psd(trials_PSD, freqs, chan_ind, chan_lab=None, maxy=None):\n '''\n Plots PSD data calculated with psd().\n \n Parameters\n ----------\n trials : 3d-array\n The PSD data, as returned by psd()\n freqs : list of floats\n The frequencies for which the PSD is defined, as returned by psd() \n chan_ind : list of integers\n The indices of the channels to plot\n chan_lab : list of strings\n (optional) List of names for each channel\n maxy : float\n (optional) Limit the y-axis to this value\n '''\n plt.figure(figsize=(12,5))\n \n nchans = len(chan_ind)\n \n # Maximum of 3 plots per row\n nrows = int(np.ceil(nchans / 3))\n ncols = min(3, nchans)\n \n # Enumerate over the channels\n for i,ch in enumerate(chan_ind):\n # Figure out which subplot to draw to\n plt.subplot(nrows,ncols,i+1)\n \n # Plot the PSD for each class\n for cl in trials.keys():\n plt.plot(freqs, np.mean(trials_PSD[cl][ch,:,:], axis=1), label=cl)\n \n # All plot decoration below...\n \n plt.xlim(1,30)\n \n if maxy != None:\n plt.ylim(0,maxy)\n \n plt.grid()\n \n plt.xlabel('Frequency (Hz)')\n \n if chan_lab == None:\n plt.title('Channel %d' % (ch+1))\n else:\n plt.title(chan_lab[i])\n\n plt.legend()\n \n plt.tight_layout()\n```\n\nLets put the `plot_psd()` function to use and plot three channels:\n\n 1. C3: Central, left\n 2. Cz: Central, central\n 3. C4: Central, right\n\n\n```python\nplot_psd(\n trials_PSD,\n freqs,\n [channel_names.index(ch) for ch in ['C3', 'Cz', 'C4']],\n chan_lab=['left', 'center', 'right'],\n maxy=500\n)\n```\n\nA spike of mu activity can be seen on each channel for both classes. At the right hemisphere, the mu for the left hand movement is lower than for the right hand movement due to the ERD. At the left electrode, the mu for the right hand movement is reduced and at the central electrode the mu activity is about equal for both classes. This is in line with the theory that the left hand is controlled by the right hemisphere and the feet are controlled centrally.\n\n## Classifying the data\n\nWe will use a machine learning algorithm to construct a model that can distinguish between the right hand and foot movement of this subject. In order to do this we need to:\n\n 1. find a way to quantify the amount of mu activity present in a trial\n 2. make a model that describes expected values of mu activity for each class\n 3. finally test this model on some unseen data to see if it can predict the correct class label\n\nWe will follow a classic BCI design by Blankertz et al. [1] where they use the logarithm of the variance of the signal in a certain frequency band as a feature for the classifier.\n\n[1] Blankertz, B., Dornhege, G., Krauledat, M., M\u00fcller, K.-R., & Curio, G. (2007). The non-invasive Berlin Brain-Computer Interface: fast acquisition of effective performance in untrained subjects. *NeuroImage*, 37(2), 539\u2013550. doi:10.1016/j.neuroimage.2007.01.051\n\nThe script below designs a band pass filter using [`scipy.signal.irrfilter`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.iirfilter.html) that will strip away frequencies outside the 8--15Hz window. 
The filter is applied to all trials:\n\n\n```python\nimport scipy.signal \n\ndef bandpass(trials, lo, hi, sample_rate):\n '''\n Designs and applies a bandpass filter to the signal.\n \n Parameters\n ----------\n trials : 3d-array (channels x samples x trials)\n The EEGsignal\n lo : float\n Lower frequency bound (in Hz)\n hi : float\n Upper frequency bound (in Hz)\n sample_rate : float\n Sample rate of the signal (in Hz)\n \n Returns\n -------\n trials_filt : 3d-array (channels x samples x trials)\n The bandpassed signal\n '''\n\n # The iirfilter() function takes the filter order: higher numbers mean a sharper frequency cutoff,\n # but the resulting signal might be shifted in time, lower numbers mean a soft frequency cutoff,\n # but the resulting signal less distorted in time. It also takes the lower and upper frequency bounds\n # to pass, divided by the niquist frequency, which is the sample rate divided by 2:\n a, b = scipy.signal.iirfilter(6, [lo/(sample_rate/2.0), hi/(sample_rate/2.0)])\n\n # Applying the filter to each trial\n ntrials = trials.shape[2]\n trials_filt = np.zeros((nchannels, nsamples, ntrials))\n for i in range(ntrials):\n trials_filt[:,:,i] = scipy.signal.filtfilt(a, b, trials[:,:,i], axis=1)\n \n return trials_filt\n```\n\n\n```python\n# Apply the function\ntrials_filt = {cl1: bandpass(trials[cl1], 8, 15, sample_rate),\n cl2: bandpass(trials[cl2], 8, 15, sample_rate)}\n```\n\nPlotting the PSD of the resulting `trials_filt` shows the suppression of frequencies outside the passband of the filter:\n\n\n```python\npsd_r, freqs = psd(trials_filt[cl1])\npsd_f, freqs = psd(trials_filt[cl2])\ntrials_PSD = {cl1: psd_r, cl2: psd_f}\n\nplot_psd(\n trials_PSD,\n freqs,\n [channel_names.index(ch) for ch in ['C3', 'Cz', 'C4']],\n chan_lab=['left', 'center', 'right'],\n maxy=300\n)\n```\n\nAs a feature for the classifier, we will use the logarithm of the variance of each channel. The function below calculates this:\n\n\n```python\n# Calculate the log(var) of the trials\ndef logvar(trials):\n '''\n Calculate the log-var of each channel.\n \n Parameters\n ----------\n trials : 3d-array (channels x samples x trials)\n The EEG signal.\n \n Returns\n -------\n logvar - 2d-array (channels x trials)\n For each channel the logvar of the signal\n '''\n return np.log(np.var(trials, axis=1))\n```\n\n\n```python\n# Apply the function\ntrials_logvar = {cl1: logvar(trials_filt[cl1]),\n cl2: logvar(trials_filt[cl2])}\n```\n\nBelow is a function to visualize the logvar of each channel as a bar chart:\n\n\n```python\ndef plot_logvar(trials):\n '''\n Plots the log-var of each channel/component.\n arguments:\n trials - Dictionary containing the trials (log-vars x trials) for 2 classes.\n '''\n plt.figure(figsize=(12,5))\n \n x0 = np.arange(nchannels)\n x1 = np.arange(nchannels) + 0.4\n\n y0 = np.mean(trials[cl1], axis=1)\n y1 = np.mean(trials[cl2], axis=1)\n\n plt.bar(x0, y0, width=0.5, color='b')\n plt.bar(x1, y1, width=0.4, color='r')\n\n plt.xlim(-0.5, nchannels+0.5)\n\n plt.gca().yaxis.grid(True)\n plt.title('log-var of each channel/component')\n plt.xlabel('channels/components')\n plt.ylabel('log-var')\n plt.legend(cl_lab)\n```\n\n\n```python\n# Plot the log-vars\nplot_logvar(trials_logvar)\n```\n\nWe see that most channels show a small difference in the log-var of the signal between the two classes. The next step is to go from 118 channels to only a few channel mixtures. The CSP algorithm calculates mixtures of channels that are designed to maximize the difference in variation between two classes. 
These mixures are called spatial filters.\n\n\n```python\nfrom numpy import linalg\n\ndef cov(trials):\n ''' Calculate the covariance for each trial and return their average '''\n ntrials = trials.shape[2]\n covs = [ trials[:,:,i].dot(trials[:,:,i].T) / nsamples for i in range(ntrials) ]\n return np.mean(covs, axis=0)\n\ndef whitening(sigma):\n ''' Calculate a whitening matrix for covariance matrix sigma. '''\n U, l, _ = linalg.svd(sigma)\n return U.dot( np.diag(l ** -0.5) )\n\ndef csp(trials_r, trials_f):\n '''\n Calculate the CSP transformation matrix W.\n arguments:\n trials_r - Array (channels x samples x trials) containing right hand movement trials\n trials_f - Array (channels x samples x trials) containing foot movement trials\n returns:\n Mixing matrix W\n '''\n cov_r = cov(trials_r)\n cov_f = cov(trials_f)\n P = whitening(cov_r + cov_f)\n B, _, _ = linalg.svd( P.T.dot(cov_f).dot(P) )\n W = P.dot(B)\n return W\n\ndef apply_mix(W, trials):\n ''' Apply a mixing matrix to each trial (basically multiply W with the EEG signal matrix)'''\n ntrials = trials.shape[2]\n trials_csp = np.zeros((nchannels, nsamples, ntrials))\n for i in range(ntrials):\n trials_csp[:,:,i] = W.T.dot(trials[:,:,i])\n return trials_csp\n```\n\n\n```python\n# Apply the functions\nW = csp(trials_filt[cl1], trials_filt[cl2])\ntrials_csp = {cl1: apply_mix(W, trials_filt[cl1]),\n cl2: apply_mix(W, trials_filt[cl2])}\n```\n\nTo see the result of the CSP algorithm, we plot the log-var like we did before:\n\n\n```python\ntrials_logvar = {cl1: logvar(trials_csp[cl1]),\n cl2: logvar(trials_csp[cl2])}\nplot_logvar(trials_logvar)\n```\n\nInstead of 118 channels, we now have 118 mixtures of channels, called components. They are the result of 118 spatial filters applied to the data.\n\nThe first filters maximize the variation of the first class, while minimizing the variation of the second. The last filters maximize the variation of the second class, while minimizing the variation of the first.\n\nThis is also visible in a PSD plot. The code below plots the PSD for the first and last components as well as one in the middle:\n\n\n```python\npsd_r, freqs = psd(trials_csp[cl1])\npsd_f, freqs = psd(trials_csp[cl2])\ntrials_PSD = {cl1: psd_r, cl2: psd_f}\n\nplot_psd(trials_PSD, freqs, [0,28,-1], chan_lab=['first component', 'middle component', 'last component'], maxy=0.75 )\n```\n\nIn order to see how well we can differentiate between the two classes, a scatter plot is a useful tool. Here both classes are plotted on a 2-dimensional plane: the x-axis is the first CSP component, the y-axis is the last.\n\n\n```python\ndef plot_scatter(left, foot):\n plt.figure()\n plt.scatter(left[0,:], left[-1,:], color='b')\n plt.scatter(foot[0,:], foot[-1,:], color='r')\n plt.xlabel('Last component')\n plt.ylabel('First component')\n plt.legend(cl_lab)\n```\n\n\n```python\nplot_scatter(trials_logvar[cl1], trials_logvar[cl2])\n```\n\nWe will apply a linear classifier to this data. A linear classifier can be thought of as drawing a line in the above plot to separate the two classes. To determine the class for a new trial, we just check on which side of the line the trial would be if plotted as above.\n\nThe data is split into a train and a test set. The classifier will fit a model (in this case, a straight line) on the training set and use this model to make predictions about the test set (see on which side of the line each trial in the test set falls). 
Note that the CSP algorithm is part of the model, so for fairness sake it should be calculated using only the training data.\n\n\n```python\n# Percentage of trials to use for training (50-50 split here)\ntrain_percentage = 0.5 \n\n# Calculate the number of trials for each class the above percentage boils down to\nntrain_r = int(trials_filt[cl1].shape[2] * train_percentage)\nntrain_f = int(trials_filt[cl2].shape[2] * train_percentage)\nntest_r = trials_filt[cl1].shape[2] - ntrain_r\nntest_f = trials_filt[cl2].shape[2] - ntrain_f\n\n# Splitting the frequency filtered signal into a train and test set\ntrain = {cl1: trials_filt[cl1][:,:,:ntrain_r],\n cl2: trials_filt[cl2][:,:,:ntrain_f]}\n\ntest = {cl1: trials_filt[cl1][:,:,ntrain_r:],\n cl2: trials_filt[cl2][:,:,ntrain_f:]}\n\n# Train the CSP on the training set only\nW = csp(train[cl1], train[cl2])\n\n# Apply the CSP on both the training and test set\ntrain[cl1] = apply_mix(W, train[cl1])\ntrain[cl2] = apply_mix(W, train[cl2])\ntest[cl1] = apply_mix(W, test[cl1])\ntest[cl2] = apply_mix(W, test[cl2])\n\n# Select only the first and last components for classification\ncomp = np.array([0,-1])\ntrain[cl1] = train[cl1][comp,:,:]\ntrain[cl2] = train[cl2][comp,:,:]\ntest[cl1] = test[cl1][comp,:,:]\ntest[cl2] = test[cl2][comp,:,:]\n\n# Calculate the log-var\ntrain[cl1] = logvar(train[cl1])\ntrain[cl2] = logvar(train[cl2])\ntest[cl1] = logvar(test[cl1])\ntest[cl2] = logvar(test[cl2])\n```\n\nFor a classifier the Linear Discriminant Analysis (LDA) algorithm will be used. It fits a gaussian distribution to each class, characterized by the mean and covariance, and determines an optimal separating plane to divide the two. This plane is defined as $r = W_0 \\cdot X_0 + W_1 \\cdot X_1 + \\ldots + W_n \\cdot X_n - b$, where $r$ is the classifier output, $W$ are called the feature weights, $X$ are the features of the trial, $n$ is the dimensionality of the data and $b$ is called the offset.\n\nIn our case we have 2 dimensional data, so the separating plane will be a line: $r = W_0 \\cdot X_0 + W_1 \\cdot X_1 - b$. 
To determine a class label for an unseen trial, we can calculate whether the result is positive or negative.\n\n\n```python\ndef train_lda(class1, class2):\n '''\n Trains the LDA algorithm.\n arguments:\n class1 - An array (observations x features) for class 1\n class2 - An array (observations x features) for class 2\n returns:\n The projection matrix W\n The offset b\n '''\n nclasses = 2\n \n nclass1 = class1.shape[0]\n nclass2 = class2.shape[0]\n \n # Class priors: in this case, we have an equal number of training\n # examples for each class, so both priors are 0.5\n prior1 = nclass1 / float(nclass1 + nclass2)\n prior2 = nclass2 / float(nclass1 + nclass1)\n \n mean1 = np.mean(class1, axis=0)\n mean2 = np.mean(class2, axis=0)\n \n class1_centered = class1 - mean1\n class2_centered = class2 - mean2\n \n # Calculate the covariance between the features\n cov1 = class1_centered.T.dot(class1_centered) / (nclass1 - nclasses)\n cov2 = class2_centered.T.dot(class2_centered) / (nclass2 - nclasses)\n \n W = (mean2 - mean1).dot(np.linalg.pinv(prior1*cov1 + prior2*cov2))\n b = (prior1*mean1 + prior2*mean2).dot(W)\n \n return (W,b)\n\ndef apply_lda(test, W, b):\n '''\n Applies a previously trained LDA to new data.\n arguments:\n test - An array (features x trials) containing the data\n W - The project matrix W as calculated by train_lda()\n b - The offsets b as calculated by train_lda()\n returns:\n A list containing a classlabel for each trial\n '''\n ntrials = test.shape[1]\n \n prediction = []\n for i in range(ntrials):\n # The line below is a generalization for:\n # result = W[0] * test[0,i] + W[1] * test[1,i] - b\n result = W.dot(test[:,i]) - b\n if result <= 0:\n prediction.append(1)\n else:\n prediction.append(2)\n \n return np.array(prediction)\n```\n\nTraining the LDA using the training data gives us $W$ and $b$:\n\n\n```python\nW,b = train_lda(train[cl1].T, train[cl2].T)\n\nprint('W:', W)\nprint('b:', b)\n```\n\n W: [ 5.31347949 -5.52963938]\n b: 0.3802472099930121\n\n\nIt can be informative to recreate the scatter plot and overlay the decision boundary as determined by the LDA classifier. The decision boundary is the line for which the classifier output is exactly 0. The scatterplot used $X_0$ as $x$-axis and $X_1$ as $y$-axis. To find the function $y = f(x)$ describing the decision boundary, we set $r$ to 0 and solve for $y$ in the equation of the separating plane:\n\n
\n$$\\begin{align}\nW_0 \\cdot X_0 + W_1 \\cdot X_1 - b &= r &&\\text{the original equation} \\\\\\\nW_0 \\cdot x + W_1 \\cdot y - b &= 0 &&\\text{filling in $X_0=x$, $X_1=y$ and $r=0$} \\\\\\\nW_0 \\cdot x + W_1 \\cdot y &= b &&\\text{solving for $y$}\\\\\\\nW_1 \\cdot y &= b - W_0 \\cdot x \\\\\\\n\\\\\\\ny &= \\frac{b - W_0 \\cdot x}{W_1}\n\\end{align}$$\n
\n\nWe first plot the decision boundary with the training data used to calculate it:\n\n\n```python\n# Scatterplot like before\nplot_scatter(train[cl1], train[cl2])\ntitle('Training data')\n\n# Calculate decision boundary (x,y)\nx = np.arange(-5, 1, 0.1)\ny = (b - W[0]*x) / W[1]\n\n# Plot the decision boundary\nplt.plot(x,y, linestyle='--', linewidth=2, color='k')\nplt.xlim(-5, 1)\nplt.ylim(-2.2, 1)\n```\n\nThe code below plots the boundary with the test data on which we will apply the classifier. You will see the classifier is going to make some mistakes.\n\n\n```python\nplot_scatter(test[cl1], test[cl2])\ntitle('Test data')\nplt.plot(x,y, linestyle='--', linewidth=2, color='k')\nplt.xlim(-5, 1)\nplt.ylim(-2.2, 1)\n```\n\nNow the LDA is constructed and fitted to the training data. We can now apply it to the test data. The results are presented as a confusion matrix:\n \n\n \n \n \n \n
| ↓ Predicted labels / True labels → | Right | Foot |
|---|---|---|
| Right | | |
| Foot | | |
\n\nThe number at the diagonal will be trials that were correctly classified, any trials incorrectly classified (either a false positive or false negative) will be in the corners.\n\n\n```python\n# Print confusion matrix\nconf = np.array([\n [(apply_lda(test[cl1], W, b) == 1).sum(), (apply_lda(test[cl2], W, b) == 1).sum()],\n [(apply_lda(test[cl1], W, b) == 2).sum(), (apply_lda(test[cl2], W, b) == 2).sum()],\n])\n\nprint('Confusion matrix:')\nprint(conf)\nprint()\nprint('Accuracy: %.3f' % (np.sum(np.diag(conf)) / float(np.sum(conf))))\n```\n\n Confusion matrix:\n [[45 4]\n [ 5 46]]\n \n Accuracy: 0.910\n\n\nThe confusion matrix shows that 4 out of the 50 trials with foot movement were incorrectly classified as right hand movement and 5 out of the 50 trials with right hand movement were incorrectly classified as foot movement. In total, 91% of the trials were correctly classified, not a bad score!\n\nIf you want, you can continue with the next tutorial:\n[4. Classifiying the P300](4.%20Classifying%20the%20P300.ipynb)\n", "meta": {"hexsha": "3fe0b5e2316c9f436a1251ba95b5d22148485e92", "size": 221542, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "eeg-bci/3. Imagined movement.ipynb", "max_stars_repo_name": "wmvanvliet/neuroscience_tutorials", "max_stars_repo_head_hexsha": "d24a1134b1e2a1230e5b314a589c8d44b4a8894b", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 57, "max_stars_repo_stars_event_min_datetime": "2015-09-22T08:21:04.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T13:54:52.000Z", "max_issues_repo_path": "eeg-bci/3. Imagined movement.ipynb", "max_issues_repo_name": "wmvanvliet/neuroscience_tutorials", "max_issues_repo_head_hexsha": "d24a1134b1e2a1230e5b314a589c8d44b4a8894b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2019-10-08T07:31:07.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-20T06:05:02.000Z", "max_forks_repo_path": "eeg-bci/3. Imagined movement.ipynb", "max_forks_repo_name": "wmvanvliet/neuroscience_tutorials", "max_forks_repo_head_hexsha": "d24a1134b1e2a1230e5b314a589c8d44b4a8894b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2015-06-18T21:12:18.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-17T02:58:22.000Z", "avg_line_length": 200.3092224231, "max_line_length": 44136, "alphanum_fraction": 0.8975544141, "converted": true, "num_tokens": 6911, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5698526660244838, "lm_q2_score": 0.4649015713733885, "lm_q1q2_score": 0.26492539988609726}} {"text": "# Generative models - auto-encoders\n\n### Author: Philippe Esling (esling@ircam.fr)\n\nIn this course we will cover\n1. A brief introduction to [generative models](#generative)\n2. A formal presentation of [Auto-Encoders](#ae) (AEs)\n3. An explanation of how to [implement AEs](#implement)\n4. An [application](#application) of AEs for modeling images\n4. An practical exemple of [convolutional denoising AEs](#denoising) for image data **(exercise)**\n\n\n\n## Generative models\n\n### Supervised refresher\n\nUntil now, we have mostly discussed models that are developed for _supervised_ and _discriminative_ tasks. To formalize this problem, we have a set of data $\\{\\mathbf{x}_{i}, \\mathbf{y}_{i}\\}_{i\\in[1,n]}$, where the $\\mathbf{x}_{i}$ and $\\mathbf{y}_{i}$ are linked. 
Therefore, we want to approximate this relation through\n\\begin{equation}\n \\mathbf{\\hat{y}} = \\mathcal{F}_\\mathbf{\\theta}(\\mathbf{x}) \n\\end{equation}\nwhere we train the parameters $\\mathbf{\\theta}$ so that $\\mathbf{\\hat{y}}\\approx\\mathbf{y}$. The existence of a label $\\mathbf{y}$ (\"correct answer\") defines a _supervised_ problem\n\n### Going unsupervised\n\nIn some cases, we might only have a set of data $\\{\\mathbf{x}_{i}\\}_{i\\in[1,n]}$, and still be interested in learning some underlying properties or structure of this set. In that case, the problem is _unsupervised_ as we have to learn without a given answer. \n\nHere, we can turn to _generative_ models [[1](#reference1)], which allows to create new data instances based on the observation of existing examples. Although these models are more naturally defined in a _probabilistic way_, we will assume that we have some simple _code_ $\\mathbf{z}$, which allows to control the properties of the generation, and need to learn\n\\begin{equation}\n \\mathbf{\\hat{x}} = \\mathcal{F}_\\mathbf{\\theta}(\\mathbf{z}) \n\\end{equation}\nwhere we still need to learn $\\mathbf{\\theta}$, so that $\\mathbf{\\hat{x}}$ have similar properties to that of the examples in $\\{\\mathbf{x}_{i}\\}_{i\\in[1,n]}$.\n\nNow the problem to solve is how we can learn directly from a set of data.\n\n\n\n## Auto-encoders\n\nOne way to understand a set of data is to try to _compress_, or _simplify_ the corresponding dataset. So the idea is to learn simultaneously how to _encode_ our unlabeled input $\\{\\mathbf{x}_{i}\\}_{i\\in[1,n]}$ and to _decode_ the corresponding representation. This idea give rise to the notion of **auto-encoder**. \n\n### Architecture \n\nThe auto-encoder is an unsupervised architecture originally proposed to perform _dimensionality reduction_ [[3](#reference3)]. As its name indicates, we will try to train this model to learn an efficient _encoding_ $\\mathbf{z}$ of unlabeled input data $\\mathbf{x}$. The only way to learn efficient parameters is to also learn a _decoding_ function to _reconstruct_ $\\mathbf{x}$ from $\\mathbf{z}$.\n\n\n\nAs shown here, a first model $\\mathcal{E}_\\phi$ \\textit{encodes} the input into a \\textit{latent code} $\\mathbf{z}$ in order to provide a low-dimensional representation of the data. A second model $\\mathcal{D}_\\theta$ designated as the \\textit{decoder} aims to generate outputs from $\\mathbf{z}$ that are as close to the original inputs as possible.\n\n### Formal definition\n\nThe latent code $\\mathbf{z}$ can be seen as a compressed abstract representation, and may be used as an intermediate space for analysis or generation. This helps to govern the distribution of the data through a simpler and higher-level representation, while enhancing the \\textit{expressiveness} of the generative model.\nThe behaviour of an auto-encoder can be formalized as:\n\\begin{align}\n \\mathbf{z} &= \\mathcal{E}_\\phi(\\mathbf{x}) \\\\\n \\mathbf{\\hat{x}} &= \\mathcal{D}_\\theta(\\mathbf{z}) \n\\end{align}\nwith the \\textit{encoder} $\\mathcal{E}_\\phi$ and \\textit{decoder} $\\mathcal{D}_\\theta$ functions parameterized respectively by $\\phi$ and $\\theta$. 
As we can see this defines the reconstruction relationship\n\\begin{equation}\n \\mathbf{\\hat{x}} = \\mathcal{D}_\\theta(\\mathcal{E}_\\phi(\\mathbf{x})) \n\\end{equation}\n\n### Training\n\nThe training of an auto-encoder consists in finding the optimal functions of encoding $\\mathcal{E}^*$ and decoding $\\mathcal{D}^*$ by evaluating the \\textit{reconstruction error} $\\mathcal{L}$ between $\\mathbf{x}$ and $\\mathbf{\\hat{x}}$, such that\n\\begin{equation}\n \\mathcal{E}^*, \\mathcal{D}^* = arg\\,min_{ \\phi, \\theta}{\\mathcal{L}}(\\mathbf{x}, \\mathcal{D}_\\theta(\\mathcal{E}_\\phi(\\mathbf{x})))\n\\end{equation}\n\nAs the latent space usually has a smaller dimensionality than the input, it acts as an incentive for the network to find the main attributes of variations in the dataset (and also explains its use for _dimensionality reduction_).\n\n### Variants and discussion\n\nThere are several variants of auto-encoders, such as denoising auto-encoders or variational auto-encoders. Each address some downside of the basic AE model. For instance, the deterministic nature of the basic auto-encoder implies a point-wise mapping of the latent space, meaning that not all the latent positions can be leveraged to produce relevant reconstructions. Because of this reason, there is no way to ensure that the latent space could allow a robust generalization and that any random $\\mathbf{z}$ would generate a meaningful output.\n\n \n\n## Implementation\n\nHere, we discuss how we can implement and train a simple auto-encoder network in _TensorFlow_. As discussed earlier, an AE is composed of two parts, an **encoder** and a **decoder**. The goal of the encoder is to \"compress\" the dataset, representing its principal features with a very small code, while the goal of the decoder is to learn how to reproduce the initial input from this code. Hence, we will first need to use some basic imports and definition to setup our problem\n\n### Import TensorFlow and other libraries\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\nfrom sklearn.metrics import accuracy_score, precision_score, recall_score\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.keras import layers, losses\nfrom tensorflow.keras.datasets import fashion_mnist\nfrom tensorflow.keras.models import Model\n```\n\n### Load the dataset\n\nTo start with a pragmatic and simple to understand example, we will try to train the basic AE using the Fashon MNIST dataset. This dataset contains images of size 28x28 pixels, with different pieces of clothing. The following code allows to load (and eventually download) the dataset, by using the `tensorflow.keras.datasets` module. 
We also plot some randomly selected test examples\n\n\n```python\n# Load (and eventually download) the dataset\n(x_train, _), (x_test, _) = fashion_mnist.load_data()\n# Normalize the dataset in the [0, 1] range]\nx_train = x_train.astype('float32') / 255.\nx_test = x_test.astype('float32') / 255.\nprint('Number of training examples')\nprint (x_train.shape)\nprint('Number of testing examples')\nprint (x_test.shape)\nplt.figure(figsize=(20, 4))\n# Plot some randomly selected input\nfor i in range(10):\n ax = plt.subplot(1, 10, i + 1)\n i_val = np.random.randint(x_test.shape[0])\n plt.imshow(x_test[i_val])\n plt.gray()\n plt.title(\"Example %d\"%(i_val))\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.show()\n```\n\n## Basic auto-encoder\n\nWe recall here that in order to define an autoencoder, we will need an `encoder`, which compresses the images into a small latent vector, and a `decoder`, that reconstructs the original image from this code. Here, we will start very basic and define the encoder and decoder as simple `Dense` layers. To define the model simply, we will use the [Keras API](https://www.tensorflow.org/guide/keras/custom_layers_and_models) defined in the `tensorflow.keras` module (Note that we pre-loaded the `layers` submodule\n\n\n```python\nclass AE(Model):\n def __init__(self, encoding_dim):\n super(AE, self).__init__()\n self.latent_dim = encoding_dim \n self.encoder = tf.keras.Sequential([\n layers.Flatten(),\n layers.Dense(self.latent_dim, activation='relu'),\n ])\n self.decoder = tf.keras.Sequential([\n layers.Dense(784, activation='sigmoid'),\n layers.Reshape((28, 28))\n ])\n\n def call(self, x):\n encoded = self.encoder(x)\n decoded = self.decoder(encoded)\n return decoded\n```\n\nHere we can see that the model depends on a given `encoding_dim` variable, which defines the size of the latent code. Therefore, we can instantiate our model arbitrarliy with `64` dimensions\n\n\n```python\nlatent_dim = 64 \nautoencoder = AE(latent_dim) \n```\n\nThe only remaining part that we did not discuss yet is what type of _loss_ (defined as $\\mathcal{L}$) we can use to train our model. First, we will simply rely on the _Mean Squared Error_ (MSE) loss, which is defined as\n\\begin{equation}\n \\mathcal{L}_{MSE}(\\hat{\\mathbf{x}}, \\mathbf{x}) = \\mid \\hat{\\mathbf{x}}, \\mathbf{x} \\mid^{2}\n\\end{equation}\n\n\n```python\nautoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())\n```\n\n \n\n### Training the model\n\nTrain the model using `x_train` as both the input and the target. 
The `encoder` will learn to compress the dataset from 784 dimensions to the latent space, and the `decoder` will learn to reconstruct the original images.\n.\n\n\n```python\nautoencoder.fit(x_train, x_train, epochs=10, shuffle=True, validation_data=(x_test, x_test))\n```\n\n Epoch 1/10\n 1875/1875 [==============================] - 2s 992us/step - loss: 0.0086 - val_loss: 0.0088\n Epoch 2/10\n 1875/1875 [==============================] - 2s 966us/step - loss: 0.0086 - val_loss: 0.0088\n Epoch 3/10\n 1875/1875 [==============================] - 2s 983us/step - loss: 0.0086 - val_loss: 0.0087\n Epoch 4/10\n 1875/1875 [==============================] - 2s 993us/step - loss: 0.0086 - val_loss: 0.0087\n Epoch 5/10\n 1875/1875 [==============================] - 2s 978us/step - loss: 0.0086 - val_loss: 0.0088\n Epoch 6/10\n 1875/1875 [==============================] - 2s 1ms/step - loss: 0.0086 - val_loss: 0.0088\n Epoch 7/10\n 1875/1875 [==============================] - 2s 1ms/step - loss: 0.0086 - val_loss: 0.0087\n Epoch 8/10\n 1875/1875 [==============================] - 2s 1ms/step - loss: 0.0086 - val_loss: 0.0087\n Epoch 9/10\n 1875/1875 [==============================] - 2s 1ms/step - loss: 0.0086 - val_loss: 0.0087\n Epoch 10/10\n 1875/1875 [==============================] - 2s 1ms/step - loss: 0.0085 - val_loss: 0.0087\n\n\n\n\n\n \n\n\n\nNow that the model is trained, we can test it by encoding and decoding images from the test set.\n\n\n```python\nencoded_imgs = autoencoder.encoder(x_test).numpy()\ndecoded_imgs = autoencoder.decoder(encoded_imgs).numpy()\n```\n\nBy plotting the images, we can see that the model is able to perform an adequate (yet somewhat blurry) reconstruction of the input images. The interesting point is that this reconstruction comes from a code of only `64` dimensions, whereas the original images have `784` dimensions.\n\n\n```python\nn = 10\nplt.figure(figsize=(20, 4))\nfor i in range(n):\n # display original\n ax = plt.subplot(2, n, i + 1)\n plt.imshow(x_test[i])\n plt.title(\"original\")\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n # display reconstruction\n ax = plt.subplot(2, n, i + 1 + n)\n plt.imshow(decoded_imgs[i])\n plt.title(\"reconstructed\")\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.show()\n```\n\n### Improvements\n\nEven though this very basic example seems to work, several improvements can be made over the original model. First, we can see that the overall framework does not depend on the exact nature of the `encoder` and `decoder`. Therefore, we can rewrite the original class to accept any type of architecture for these (see code below). Another aspect that has a major impact on the results is the type of `loss` used. The `MeanSquaredError` tend to favor blurry reconstructions, and could be replaced by a `Multinoulli` loss. You can experiment with these ideas below.\n\n\n```python\nclass AE(Model):\n def __init__(self, encoder, decoder, encoding_dim):\n super(AE, self).__init__()\n self.latent_dim = encoding_dim\n self.encoder = encoder\n self.decoder = decoder\n\n def call(self, x):\n encoded = self.encoder(x)\n decoded = self.decoder(encoded)\n return decoded\n```\n\n## Exercise: Denoising AE\n\nImagine (for the sake of argument), that we choose an encoding dimension which is of same dimensionality as the input one. Then, one huge problem is that nothing prevents the AE from simply learning the _identity_ function (try to imagine why). 
An autoencoder can also be trained to remove noise from images. This type of _regularization_ prevents the model from learning this degenerate situation.\n\nIn this exercise, you will need to create your own denoising AE, by relying on a noisy version of the Fashion MNIST dataset (adding random Gaussian noise to each image). You will then train an autoencoder using the noisy image as input, and the original image as the target.\n\nLet's reimport the dataset to omit the modifications made earlier.\n\n\n```python\n(x_train, _), (x_test, _) = fashion_mnist.load_data()\nx_train = x_train.astype('float32') / 255.\nx_test = x_test.astype('float32') / 255.\nprint(x_train.shape)\n```\n\n (60000, 28, 28)\n\n\nHere, we create two new train and test sets by adding random noise to the images\n\n\n```python\nnoise_factor = 0.2\nx_train_noisy = x_train + noise_factor * tf.random.normal(shape=x_train.shape) \nx_test_noisy = x_test + noise_factor * tf.random.normal(shape=x_test.shape) \nx_train_noisy = tf.clip_by_value(x_train_noisy, clip_value_min=0., clip_value_max=1.)\nx_test_noisy = tf.clip_by_value(x_test_noisy, clip_value_min=0., clip_value_max=1.)\n```\n\nPlot the noisy images.\n\n\n\n```python\nn = 10\nplt.figure(figsize=(20, 4))\nfor i in range(n):\n ax = plt.subplot(2, n, i + 1)\n plt.title(\"original\")\n plt.imshow(tf.squeeze(x_test[i]))\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n plt.gray()\n ax = plt.subplot(2, n, i + 1 + n)\n plt.title(\"original + noise\")\n plt.imshow(tf.squeeze(x_test_noisy[i]))\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n plt.gray()\nplt.show()\n```\n\n### Define a convolutional autoencoder\n\nIn this example, you will train a convolutional autoencoder using [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) layers in the `encoder`, and [Conv2DTranspose](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2DTranspose) layers in the `decoder`.\n\n\n```python\nclass DenoisingAE(AE):\n def __init__(self):\n super(DenoisingAE, self).__init__()\n self.encoder = ... \n self.decoder = ...\n \n def call(self, x):\n encoded = ...\n decoded = ...\n return decoded\n\nautoencoder = ...\n```\n\nCompile and fit your model with appropriate losses\n\n\n```python\nautoencoder.compile(...)\nautoencoder.fit(...)\n```\n\nLet's take a look at a summary of the encoder and decoder. Notice how the images are downsampled and then upsampled back to the original input size.\n\n\n```python\nautoencoder.encoder.summary()\nautoencoder.decoder.summary()\n```\n\nPlot both the noisy images and the denoised images produced by the autoencoder to check that your implementation is correct\n\n\n```python\nencoded_imgs = ...\ndecoded_imgs = ...\nplt.figure(figsize=(20, 4))\nfor i in range(n):\n ...\nplt.show()\n```\n\n\n
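*Possible approach (spoiler):* if you get stuck, below is a minimal, hedged sketch of one way the exercise could be completed. It is not the reference solution: the class name `ConvDenoisingAE`, the filter counts and strides, and the choice to subclass `Model` directly instead of the `AE` class above are all assumptions made purely for illustration. Since `Conv2D` layers expect a channel axis, the `(N, 28, 28)` arrays are expanded to `(N, 28, 28, 1)` before fitting.


```python
# Hedged sketch of one possible solution. The class name, filter counts and
# strides are illustrative assumptions, not the reference answer.
class ConvDenoisingAE(Model):
    def __init__(self):
        super(ConvDenoisingAE, self).__init__()
        # Encoder: two strided convolutions downsample 28x28x1 -> 7x7x8
        self.encoder = tf.keras.Sequential([
            layers.Conv2D(16, (3, 3), activation='relu', padding='same', strides=2),
            layers.Conv2D(8, (3, 3), activation='relu', padding='same', strides=2),
        ])
        # Decoder: transposed convolutions upsample back to 28x28x1
        self.decoder = tf.keras.Sequential([
            layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
            layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
            layers.Conv2D(1, kernel_size=(3, 3), activation='sigmoid', padding='same'),
        ])

    def call(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded

# Conv layers need a channel axis, so add one to the (N, 28, 28) data.
conv_autoencoder = ConvDenoisingAE()
conv_autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
conv_autoencoder.fit(x_train_noisy[..., tf.newaxis], x_train[..., tf.newaxis],
                     epochs=10, shuffle=True,
                     validation_data=(x_test_noisy[..., tf.newaxis],
                                      x_test[..., tf.newaxis]))
```

In this sketch the strided convolutions in the encoder halve the spatial resolution twice (28 to 14 to 7), and the transposed convolutions in the decoder mirror that path back to the original image size.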
\n\n\n### References\n\n\n[1] Rezende, Danilo Jimenez, and Shakir Mohamed. \"Variational inference with normalizing flows.\" _arXiv preprint arXiv:1505.05770_ (2015). [link](http://arxiv.org/pdf/1505.05770)\n\n[2] Kingma, Diederik P., Tim Salimans, and Max Welling. \"Improving Variational Inference with Inverse Autoregressive Flow.\" _arXiv preprint arXiv:1606.04934_ (2016). [link](https://arxiv.org/abs/1606.04934)\n\n[3] Kingma, D. P., & Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. (2013). [link](https://arxiv.org/pdf/1312.6114)\n\n[4] Rezende, D. J., Mohamed, S., & Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082. (2014). [link](https://arxiv.org/pdf/1401.4082)\n\n[5] Gulrajani, I., Kumar, K., Ahmed, F., Taiga, A. A., Visin, F., Vazquez, D., & Courville, A. (2016). Pixelvae: A latent variable model for natural images. arXiv preprint arXiv:1611.05013. [link](https://arxiv.org/pdf/1611.05013)\n\n[6] Van den Oord, A., & Vinyals, O. (2017). Neural discrete representation learning. In NIPS 2017 (pp. 6306-6315). [link](http://papers.nips.cc/paper/7210-neural-discrete-representation-learning.pdf)\n", "meta": {"hexsha": "d2c439c63cabd3e703f1493f232465a191d771b2", "size": 143303, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01_auto_encoders.ipynb", "max_stars_repo_name": "esling/formation_scai", "max_stars_repo_head_hexsha": "4f5b3645b171262d137b5852d190ed50d0e9f602", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-10-28T22:49:38.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-28T22:49:38.000Z", "max_issues_repo_path": "01_auto_encoders.ipynb", "max_issues_repo_name": "esling/formation_scai", "max_issues_repo_head_hexsha": "4f5b3645b171262d137b5852d190ed50d0e9f602", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01_auto_encoders.ipynb", "max_forks_repo_name": "esling/formation_scai", "max_forks_repo_head_hexsha": "4f5b3645b171262d137b5852d190ed50d0e9f602", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2016-10-19T16:01:59.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-20T04:59:13.000Z", "avg_line_length": 188.06167979, "max_line_length": 51164, "alphanum_fraction": 0.8899255424, "converted": true, "num_tokens": 4573, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5583270090337583, "lm_q2_score": 0.47268347662043286, "lm_q1q2_score": 0.2639119517211647}} {"text": "\n\n# PyTorch tutorial \u306e\u5185\u5bb9\u306b\u3064\u3044\u3066(4)\n\n- date: 2020-0722\n- author: \u6d45\u5ddd\u4f38\u4e00\n\n[https://github.com/pytorch/tutorials/tree/master/beginner_source/nlp](https://github.com/pytorch/tutorials/tree/master/beginner_source/nlp) \u3092\u898b\u308b\u3068\nPyTorch \u3067 \u81ea\u7136\u8a00\u8a9e\u51e6\u7406\u3092\u884c\u3046\u5834\u5408\u306e\u30c1\u30e5\u30fc\u30c8\u30ea\u30a2\u30eb\u306f\u4ee5\u4e0b\u3068\u304a\u308a\u3067\u3042\u308b\n\n# Deep Learning for NLP with Pytorch\n\n1. [pytorch_tutorial.py](https://github.com/pytorch/tutorials/blob/master/beginner_source/nlp/pytorch_tutorial.py): \n\t[PyTorch \u5165\u9580 Introduction to PyTorch](https://pytorch.org/tutorials/beginner/nlp/pytorch_tutorial.html)\n\n2. 
[deep_learning_tutorial.py](https://github.com/pytorch/tutorials/blob/master/beginner_source/nlp/deep_learning_tutorial.py): \n\t[PyTorch \u306b\u3088\u308b\u6df1\u5c64\u5b66\u7fd2 Deep Learning with PyTorch](https://pytorch.org/tutorials/beginner/nlp/deep_learning_tutorial.html)\n\n3. [word_embeddings_tutorial.py](https://github.com/pytorch/tutorials/blob/master/beginner_source/nlp/word_embeddings_tutorial.py): \n\t[\u5358\u8a9e\u57cb\u3081\u8fbc\u307f:\u8a9e\u5f59\u7684\u610f\u5473\u306e\u7b26\u53f7\u5316 Word Embeddings: Encoding Lexical Semantics](https://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html)\n\n4. [sequence_models_tutorial.py]((https://github.com/pytorch/tutorials/blob/master/beginner_source/nlp/sequence_models_tutorial.py): \n\t[\u7cfb\u5217\u30e2\u30c7\u30eb\u3068 LSTM Sequence Models and Long-Short Term Memory Networks](https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html)\n\n5. [advanced_tutorial.py]((https://github.com/pytorch/tutorials/blob/master/beginner_source/nlp/advanced_tutorial.py): \n\t[\u52d5\u7684\u610f\u601d\u6c7a\u5b9a\u3068\u53cc\u65b9\u5411 LSTM \u6761\u4ef6\u4ed8\u304d\u78ba\u7387\u5834 Advanced: Making Dynamic Decisions and the Bi-LSTM CRF](https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html)\n\n\n\u4ee5\u4e0b\u3067\u306f\uff0c\u3053\u306e\u3046\u3061\u306e 4 \u306b\u3064\u3044\u3066\u89e3\u8aac\u3057\u3066\u3044\u308b\u3002\n\n\n\n```python\n%matplotlib inline\n```\n\n# \u7cfb\u5217\u30e2\u30c7\u30eb\u3068 LSTM \u30cd\u30c3\u30c8\u30ef\u30fc\u30af\n\u3053\u3053\u307e\u3067\uff0c\u69d8\u3005\u306a\u30d5\u30a3\u30fc\u30c9\u30d5\u30a9\u30ef\u30fc\u30c9\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3092\u898b\u3066\u304d\u307e\u3057\u305f\u3002\n\u30d5\u30a3\u30fc\u30c9\u30d5\u30a9\u30ef\u30fc\u30c9\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u306f\uff0c\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u304c\u4fdd\u6301\u3059\u308b(\u5185\u90e8) \u72b6\u614b\u304c\u3042\u308a\u307e\u305b\u3093\u3002\n\u30d5\u30a3\u30fc\u30c9\u30d5\u30a9\u30ef\u30fc\u30c9\u30e2\u30c7\u30eb\u306f\uff0c\u7cfb\u5217\u51e6\u7406\u306b\u5411\u3044\u3066\u3044\u308b\u30e2\u30c7\u30eb\u3067\u306f\u306a\u3044\u3068\u8a00\u3048\u307e\u3059\u3002\n\u4e00\u65b9\uff0c\u7cfb\u5217\u30e2\u30c7\u30eb\u306f \u81ea\u7136\u8a00\u8a9e\u51e6\u7406 (NLP) \u306e\u4e2d\u6838\u3092\u306a\u3059\u30e2\u30c7\u30eb\u3067\u3059\u3002\n\u7cfb\u5217\u30e2\u30c7\u30eb\u3068\u306f\uff0c\u5165\u529b\u60c5\u5831\u9593\u306b\u6642\u9593\u7684\u4f9d\u5b58\u95a2\u4fc2\u3092\u4eee\u5b9a\u3057\u305f\u30e2\u30c7\u30eb\u3067\u3059\u3002\n\u7cfb\u5217\u30e2\u30c7\u30eb\u306e\u53e4\u5178\u7684\u306a\u4f8b\u3068\u3057\u3066\u306f\uff0c\u54c1\u8a5e\u30bf\u30b0\u4ed8\u3051\u306e\u305f\u3081\u306e ``\u96a0\u308c\u30de\u30eb\u30b3\u30d5\u30e2\u30c7\u30eb`` \u304c\u3042\u308a\u307e\u3059\u3002\n\u4ed6\u306e\u4f8b\u3068\u3057\u3066\u306f ``\u6761\u4ef6\u4ed8\u304d\u78ba\u7387\u5834`` 
\u304c\u3042\u308a\u307e\u3059\u3002\n\n\n\n\n\u30ea\u30ab\u30ec\u30f3\u30c8\u30fb\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u306f\uff0c\u72b6\u614b\u3092\u4fdd\u6301\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3067\u3059\u3002\n\u4f8b\u3048\u3070\uff0c\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u306e\u51fa\u529b\u306f\u6b21\u306e\u5165\u529b\u306e\u4e00\u90e8\u3068\u3057\u3066\u4f7f\u7528\u3055\u308c\u307e\u3059\u3002\n\u30ea\u30ab\u30ec\u30f3\u30c8\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u306f\uff0c\u5168\u7cfb\u5217\u3092\u901a\u3057\u3066\u60c5\u5831\u304c\u4f1d\u64ad\u3059\u308b\u3088\u3046\u306b\u3059\u308b\u3053\u3068\u3082\u3067\u304d\u307e\u3059\u3002\nLSTM \u306e\u5834\u5408 \u7cfb\u5217\u306e\u5404\u8981\u7d20\u306b\u5bfe\u3057\u3066\uff0c\u5bfe\u5fdc\u3059\u308b *\u96a0\u308c\u72b6\u614b* $h_t$ \u304c\u3042\u308a\uff0c\u539f\u5247\u3068\u3057\u3066\uff0c\u7cfb\u5217\u306e\u4efb\u610f\u306e\u6642\u70b9\u306e\u60c5\u5831\u3092\u84c4\u3048\u3066\u304a\u304f\u3053\u3068\u304c\u3067\u304d\u307e\u3059\u3002\n\u96a0\u308c\u72b6\u614b\u3092\u5229\u7528\u3057\u3066\uff0c\u8a00\u8a9e\u30e2\u30c7\u30eb\u306e\u5358\u8a9e\u3084\u54c1\u8a5e\u30bf\u30b0\u4ed8\uff0c\u305d\u306e\u4ed6\u306e\u591a\u304f\u306e\u4e88\u6e2c\u3059\u308b\u3053\u3068\u304c\u53ef\u80fd\u3067\u3059\u3002\n\n\n\n### PyTorch \u306b\u304a\u3051\u308b LSTM\n\n\u4f8b\u984c\u306b\u5165\u308b\u524d\u306b \u3044\u304f\u3064\u304b\u306e\u3053\u3068\u306b\u6ce8\u610f\u3057\u3066\u304f\u3060\u3055\u3044\u3002\nPytorch \u306e LSTM \u306f\u3001\u3059\u3079\u3066\u306e\u5165\u529b\u304c 3 \u6b21\u5143\u30c6\u30f3\u30bd\u30eb\u3067\u3042\u308b\u3053\u3068\u3092\u4eee\u5b9a\u3057\u3066\u3044\u307e\u3059\u3002\n\u3053\u306e\u5165\u529b\u30c6\u30f3\u30bd\u30eb\u306e\u8ef8\u306e\u610f\u5473\u306f\u91cd\u8981\u3067\u3059\u3002\n\u7b2c 1 \u8ef8(\u6b21\u5143)\u306f\u7cfb\u5217\u305d\u306e\u3082\u306e\u3067\u3059\u3002\n\u7b2c 2 \u8ef8(\u6b21\u5143) \u306f\u30df\u30cb\u30d0\u30c3\u30c1\u5185\u306e\u5b9f\u4f53\uff0c\n\u7b2c 3 \u8ef8(\u6b21\u5143) \u306e\u8ef8\u306f\u5165\u529b\u306e\u8981\u7d20\u3092\u8868\u3057\u307e\u3059\u3002\n\u30df\u30cb\u30d0\u30c3\u30c1\u306b\u3064\u3044\u3066\u306f\u3053\u3053\u3067\u306f\u8b70\u8ad6\u3057\u3066\u3044\u307e\u305b\u3093\u3002\n\u3067\u3059\u306e\u3067 \u30df\u30cb\u30d0\u30c3\u30c1\u306f\u7121\u8996\u3057\u3066\uff0c\u7b2c 2 \u8ef8\u306b\u306f\u5e38\u306b 1\u3064\u306e\u6b21\u5143\u304c\u3042\u308b\u3068\u4eee\u5b9a\u3057\u307e\u3057\u3087\u3046\u3002\n``\u725b\u304c\u98db\u3073\u8df3\u306d\u305f The cow jumped`` \u3068\u3044\u3046\u6587\u306b\u5bfe\u3057\u3066\u7cfb\u5217\u30e2\u30c7\u30eb\u3092\u5b9f\u884c\u3057\u305f\u3044\u5834\u5408\uff0c\n\u5165\u529b\u306f\u4ee5\u4e0b\u306e\u3088\u3046\u306b\u306a\u308a\u307e\u3059:\n\n\\begin{align}\\begin{bmatrix}\n \\overbrace{q_\\text{The}}^\\text{\u884c\u30d9\u30af\u30c8\u30eb} \\\\\n q_\\text{cow} \\\\\n q_\\text{jumped}\n \\end{bmatrix}\n\\end{align}\n\n\u30b5\u30a4\u30ba 1 \u306e\u8ffd\u52a0\u3057\u305f\u7b2c 2 \u6b21\u5143\u304c\u3042\u308b\u3053\u3068\u3092\u5fd8\u308c\u306a\u3044\u3067\u304f\u3060\u3055\u3044\u3002\n\n\u3055\u3089\u306b \u7cfb\u5217\u3092\u4e00\u5ea6\u306b 1 \u3064\u305a\u3064\u9032\u3081\u308b\u3053\u3068\u3082\u3067\u304d\u307e\u3059\u3002\u3067\u3059\u304c \u305d\u306e\u5834\u5408 \u7b2c1\u8ef8 \u3082\u30b5\u30a4\u30ba\u304c 1 
\u306b\u306a\u308a\u307e\u3059\u3002\n\n\u7c21\u5358\u306a\u4f8b\u3092\u898b\u3066\u307f\u307e\u3057\u3087\u3046\u3002\n\n\n\n\n\n```python\n# Author: Robert Guthrie\n# modified by Shin Asakawa\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\ntorch.manual_seed(1)\n```\n\n\n\n\n \n\n\n\n\n```python\nlstm = nn.LSTM(3, 3) # \u5165\u529b\u6b21\u5143\u306f 3 \u51fa\u529b\u6b21\u5143\u3082 3 \u3067\u3059\ninputs = [torch.randn(1, 3) for _ in range(5)] # \u9577\u3055 5 \u306e\u7cfb\u5217\u3092\u4f5c\u6210\n\n# \u96a0\u308c\u72b6\u614b\u306e\u521d\u671f\u5316\nhidden = (torch.randn(1, 1, 3),\n torch.randn(1, 1, 3))\nfor i in inputs:\n # \u5168\u7cfb\u5217\u3092\u901a\u3058\u3066\uff0c\u4e00\u5ea6\u306b\u4e00\u3064\u3065\u3065\uff0c\u9010\u6b21\uff0cStep through the sequence one element at a time.\n # 1\u30b9\u30c6\u30c3\u30d7\u5f8c\uff0c\u4e2d\u9593\u5c64\u306f\uff0c\u4e2d\u9593\u5c64\u306e\u72b6\u614b\u3068\u5165\u529b\u60c5\u5831\u3068\u304c\u5165\u529b\u3068\u306a\u308b # after each step, hidden contains the hidden state.\n out, hidden = lstm(i.view(1, 1, -1), hidden)\n\n\n# \u4ee3\u308f\u308a\u306b \u7cfb\u5217\u5168\u4f53\u3092\u4e00\u5ea6\u306b\u5b9f\u884c\u3059\u308b\u3053\u3068\u3082\u3067\u304d\u307e\u3059\u3002\n# LSTM \u304c\u8fd4\u3059\u6700\u521d\u306e\u5024\u306f \u7cfb\u5217\u5168\u4f53\u306e \u96a0\u308c\u72b6\u614b\u306e\u3059\u3079\u3066\u3067\u3059\u3002\n# 2 \u756a\u76ee\u306e\u5024\u306f \u6700\u65b0\u306e\u96a0\u308c\u72b6\u614b\u3067\u3059\n#\uff08\"out \"\u3068 \"hidden \"\u306e\u6700\u5f8c\u306e\u30b9\u30e9\u30a4\u30b9\u3092\u6bd4\u8f03\u3057\u3066\u307f\u3066\u304f\u3060\u3055\u3044\u3002\n# \u305d\u306e\u7406\u7531\u306f\u6b21\u306e\u3088\u3046\u306b\u306a\u308a\u307e\u3059:\n# \"out \"\u306f \u7cfb\u5217\u5185\u306e\u5168\u96a0\u308c\u72b6\u614b\u306b\u30a2\u30af\u30bb\u30b9\u3067\u304d\u308b\u3088\u3046\u306b\u3057\u307e\u3059\u3002\n# \"hidden\" \u306f \u7cfb\u5217\u3092\u7d99\u7d9a\u3057\u3066\u8aa4\u5dee\u9006\u4f1d\u64ad\u3059\u308b\u3053\u3068\u3092\u53ef\u80fd\u306b\u3057\u307e\u3059\u3002\n# \u5f8c\u3067 LSTM \u306b\u5f15\u6570\u3068\u3057\u3066\u6e21\u3059\u3053\u3068\u3067 2 \u6b21\u5143\u3092\u8ffd\u52a0\u3057\u307e\u3059\u3002\n# alternatively, we can do the entire sequence all at once.\n# the first value returned by LSTM is all of the hidden states throughout\n# the sequence. 
the second is just the most recent hidden state\n# (compare the last slice of \"out\" with \"hidden\" below, they are the same)\n# The reason for this is that:\n# \"out\" will give you access to all hidden states in the sequence\n# \"hidden\" will allow you to continue the sequence and backpropagate,\n# by passing it as an argument to the lstm at a later time\n# Add the extra 2nd dimension\ninputs = torch.cat(inputs).view(len(inputs), 1, -1)\nhidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3)) # clean out hidden state\nout, hidden = lstm(inputs, hidden)\nprint(out)\nprint(hidden)\n```\n\n tensor([[[-0.0187, 0.1713, -0.2944]],\n \n [[-0.3521, 0.1026, -0.2971]],\n \n [[-0.3191, 0.0781, -0.1957]],\n \n [[-0.1634, 0.0941, -0.1637]],\n \n [[-0.3368, 0.0959, -0.0538]]], grad_fn=)\n (tensor([[[-0.3368, 0.0959, -0.0538]]], grad_fn=), tensor([[[-0.9825, 0.4715, -0.0633]]], grad_fn=))\n\n\n### \u4f8b: \u54c1\u8a5e\u30bf\u30b0\u4ed8\u3051 LSTM\n\n\n\n\n\u3053\u3053\u3067\u306f LSTM \u3092\u4f7f\u3063\u3066\u54c1\u8a5e\u30bf\u30b0\u4ed8\u3051\u3092\u884c\u3044\u307e\u3059\u3002\n\u30d3\u30bf\u30d3 (\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0) \u3084 \u524d\u5411\u304d-\u5f8c\u308d\u5411\u304d (\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0) \u306a\u3069\u306f\u4f7f\u3044\u307e\u305b\u3093\u3002\n\u3067\u3059\u304c\uff0c\u8aad\u8005\u3078\u306e (\u6311\u6226\u7684\u306a) \u7df4\u7fd2\u554f\u984c\u3068\u3057\u3066\uff0c\u8aac\u660e\u3092\u8aad\u3093\u3060\u5f8c\u3067\uff0c\u3069\u3046\u3059\u308c\u3070 \u30d3\u30bf\u30d3 \u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u304c\u4f7f\u3048\u308b\u304b\u3092\u8003\u3048\u3066\u307f\u3066\u304f\u3060\u3055\u3044\u3002\n\n\n\n\u30e2\u30c7\u30eb\u306f\u4ee5\u4e0b\u306e\u3068\u304a\u308a\u3067\u3059:\n\u5165\u529b\u7cfb\u5217\u3092\u8a9e\u5f59\u306e\u7cfb\u5217 $w_1, \\ldots, w_M$ \u3068\u3057\u307e\u3059\u3002\n\u307e\u305f $T$ \u3092\u30bf\u30b0\u96c6\u5408\uff0c$y_i$ \u3092\u5358\u8a9e $w_i$ \u306b\u5bfe\u5fdc\u3059\u308b\u30bf\u30b0\u3068\u3057\u307e\u3059\u3002\n\u5358\u8a9e $w_i$ \u306e\u54c1\u8a5e\u306e\u4e88\u6e2c\u5024\u306f $\\hat{y}_i$ \u3068\u3057\u307e\u3059\u3002\n\n\u3053\u308c\u306f\uff0c\u4e88\u6e2c\u30e2\u30c7\u30eb\u3067\u3042\u308a\u51fa\u529b\u306f $\\hat{y}_1,\\ldots,\\hat{y}_M$ \u3068\u3057\u307e\u3059\u3002\u3053\u3053\u3067 $\\hat{y}_i \\in T$ \u3068\u3057\u307e\u3059\u3002\n\n\n\n\u4e88\u6e2c\u3092\u884c\u3046\u969b\u306b\u306f \u6587\u306e\u4e0a\u3092 LSTM \u3092\u5165\u529b\u3057\u307e\u3059\u3002\n\u6642\u523b $i$ \u3067\u306e\u4e2d\u9593\u5c64\u72b6\u614b\u3092$h_i$ \u3068\u3057\u307e\u3059\u3002\n\u307e\u305f \u305d\u308c\u305e\u308c\u306e\u30bf\u30b0\u306b\u4e00\u610f\u306a\u30a4\u30f3\u30c7\u30c3\u30af\u30b9\u3092\u5272\u308a\u5f53\u3066\u308b(\u5358\u8a9e\u306e\u57cb\u3081\u8fbc\u307f\u306e\u9805\u3067 word_to\\_ix\u3068\u3057\u305f\u3088\u3046\u306b)\u3002\n\u3059\u308b\u3068 $hat{y}_i$ \u306e\u4e88\u6e2c\u306f\u4ee5\u4e0b\u306e\u3088\u3046\u306b\u306a\u308a\u307e\u3059:\n\n\n\n\\begin{align}\\hat{y}_i = \\text{argmax}_j \\ (\\log \\text{Softmax}(Ah_i + b))_j\\end{align}\n\n\n\n\u3059\u306a\u308f\u3061\uff0c\u4e2d\u9593\u5c64\u72b6\u614b\u306e\u30a2\u30d5\u30a3\u30f3\u5199\u50cf\u306e \u5bfe\u6570\u30bd\u30d5\u30c8\u30de\u30c3\u30af\u30b9\u304b\u3089\uff0c\u3053\u306e\u30d9\u30af\u30c8\u30eb\u4e2d\u306e\u6700\u5927\u5024\u3092\u6301\u3064\u30bf\u30b0\u3092\u4e88\u6e2c\u30bf\u30b0\u3068\u3057\u307e\u3059\u3002\n\u3053\u306e\u3053\u3068\u306f $A$ \u306e\u30bf\u30fc\u30b2\u30c3\u30c8\u7a7a\u9593\u306e\u6b21\u5143\u304c $|T|$ 
\u3067\u3042\u308b\u3053\u3068\u3092\u6697\u793a\u3057\u3066\u3044\u308b\u3053\u3068\u306b\u6ce8\u610f\u3057\u3066\u304f\u3060\u3055\u3044\u3002\n\n\u30c7\u30fc\u30bf\u306e\u6e96\u5099:\n\n\n\n\n\n```python\ndef prepare_sequence(seq, to_ix):\n idxs = [to_ix[w] for w in seq]\n return torch.tensor(idxs, dtype=torch.long)\n\n\ntraining_data = [\n (\"The dog ate the apple\".split(), [\"DET\", \"NN\", \"V\", \"DET\", \"NN\"]),\n # the \u306f\u5b9a\u51a0\u8a5e:DET, dog \u306f\u540d\u8a5e:NN, ate \u306f \u52d5\u8a5e:v, the \u306f\u5b9a\u51a0\u8a5e;DET, apple \u306f\u540d\u8a5e:NN\n (\"Everybody read that book\".split(), [\"NN\", \"V\", \"DET\", \"NN\"])\n # everybody \u306f \u540d\u8a5e:NN, read \u306f\u52d5\u8a5e:V, that \u306f\u5b9a\u51a0\u8a5e:DET, book \u306f\u540d\u8a5e:NN\n]\nword_to_ix = {}\nfor sent, tags in training_data:\n for word in sent:\n if word not in word_to_ix:\n word_to_ix[word] = len(word_to_ix)\nprint(word_to_ix)\ntag_to_ix = {\"DET\": 0, \"NN\": 1, \"V\": 2}\n\n# \u4ee5\u4e0b\u306f\u4f8b\u306a\u306e\u3067\u5c0f\u3055\u304f 6 \u6b21\u5143\u3068\u3057\u3066\u3044\u307e\u3059\u304c\uff0c\u901a\u5e38\u306f 32 \u3068\u304b 64 \u6b21\u5143\n# \u306b\u3057\u3066\u3044\u307e\u3059\u3002\nEMBEDDING_DIM = 6\nHIDDEN_DIM = 6\n```\n\n {'The': 0, 'dog': 1, 'ate': 2, 'the': 3, 'apple': 4, 'Everybody': 5, 'read': 6, 'that': 7, 'book': 8}\n\n\n\n\u30e2\u30c7\u30eb\u306e\u4f5c\u6210:\n\n\n\n\n```python\nclass LSTMTagger(nn.Module):\n\n def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):\n super(LSTMTagger, self).__init__()\n self.hidden_dim = hidden_dim\n\n self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)\n\n # LSTM \u3092\u7528\u3044\u305f\u8a00\u8a9e\u30e2\u30c7\u30eb\u3067\u306f\uff0c\u5358\u8a9e\u57cb\u3081\u8fbc\u307f\u8868\u73fe\u3092\u5165\u529b\u3068\u3057\uff0c\n # \u4e2d\u9593\u5c64\u306e\u72b6\u614b\u3068\u51fa\u529b\u5024\u3092\u51fa\u529b\u3057\u307e\u3059\n self.lstm = nn.LSTM(embedding_dim, hidden_dim)\n\n # \u4e2d\u9593\u5c64\u304b\u3089\u51fa\u529b\u7a7a\u9593\u3078\u306e\u5199\u50cf\u95a2\u6570\n self.hidden2tag = nn.Linear(hidden_dim, tagset_size)\n\n def forward(self, sentence):\n embeds = self.word_embeddings(sentence)\n lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))\n tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))\n tag_scores = F.log_softmax(tag_space, dim=1)\n return tag_scores\n```\n\n\n\u30e2\u30c7\u30eb\u306e\u8a13\u7df4:\n\n\n\n```python\nmodel = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), len(tag_to_ix))\nloss_function = nn.NLLLoss()\noptimizer = optim.SGD(model.parameters(), lr=0.1)\n\n# See what the scores are before training\n# Note that element i,j of the output is the score for tag j for word i.\n# Here we don't need to train, so the code is wrapped in torch.no_grad()\nwith torch.no_grad():\n inputs = prepare_sequence(training_data[0][0], word_to_ix)\n tag_scores = model(inputs)\n print(tag_scores)\n\nfor epoch in range(300): # again, normally you would NOT do 300 epochs, it is toy data\n for sentence, tags in training_data:\n # Step 1. Remember that Pytorch accumulates gradients.\n # We need to clear them out before each instance\n model.zero_grad()\n\n # Step 2. Get our inputs ready for the network, that is, turn them into\n # Tensors of word indices.\n sentence_in = prepare_sequence(sentence, word_to_ix)\n targets = prepare_sequence(tags, tag_to_ix)\n\n # Step 3. Run our forward pass.\n tag_scores = model(sentence_in)\n\n # Step 4. 
Compute the loss, gradients, and update the parameters by\n # calling optimizer.step()\n loss = loss_function(tag_scores, targets)\n loss.backward()\n optimizer.step()\n\n# See what the scores are after training\nwith torch.no_grad():\n inputs = prepare_sequence(training_data[0][0], word_to_ix)\n tag_scores = model(inputs)\n\n # The sentence is \"the dog ate the apple\". i,j corresponds to score for tag j\n # for word i. The predicted tag is the maximum scoring tag.\n # Here, we can see the predicted sequence below is 0 1 2 0 1\n # since 0 is index of the maximum value of row 1,\n # 1 is the index of maximum value of row 2, etc.\n # Which is DET NOUN VERB DET NOUN, the correct sequence!\n print(tag_scores)\n```\n\n tensor([[-1.1389, -1.2024, -0.9693],\n [-1.1065, -1.2200, -0.9834],\n [-1.1286, -1.2093, -0.9726],\n [-1.1190, -1.1960, -0.9916],\n [-1.0137, -1.2642, -1.0366]])\n tensor([[-0.0462, -4.0106, -3.6096],\n [-4.8205, -0.0286, -3.9045],\n [-3.7876, -4.1355, -0.0394],\n [-0.0185, -4.7874, -4.6013],\n [-5.7881, -0.0186, -4.1778]])\n\n\n\n### \u7df4\u7fd2: \u6587\u5b57\u30ec\u30d9\u30eb\u306e\u7279\u5fb4\u306b\u3088\u308b LSTM \u3092\u7528\u3044\u305f\u54c1\u8a5e\u30bf\u30b0\u4ed8\u3051\u30e2\u30c7\u30eb\u306e\u62e1\u5f35\n\n\n\n\u4e0a\u8a18\u306e\u4f8b\u3067\u306f \u5404\u5358\u8a9e\u306b\u306f\u57cb\u3081\u8fbc\u307f\u8868\u73fe\u304c\u3042\u308a\uff0c\u305d\u306e\u8868\u73fe\u3092\u7cfb\u5217\u30e2\u30c7\u30eb\u3078\u306e\u5165\u529b\u3068\u3057\u3066\u3044\u307e\u3059\u3002\n\u5358\u8a9e\u306e\u57cb\u3081\u8fbc\u307f\u8868\u73fe\u3092 \u5358\u8a9e\u306e\u6587\u5b57\u304b\u3089\u6d3e\u751f\u3057\u305f\u8868\u73fe\u3067\u62e1\u5f35\u3057\u3066\u307f\u307e\u3057\u3087\u3046\u3002\n\u63a5\u8f9e\u306e\u3088\u3046\u306a\u6587\u5b57\u30ec\u30d9\u30eb\u306e\u60c5\u5831\u306f\u54c1\u8a5e\u306b\u5927\u304d\u306a\u5f71\u97ff\u3092\u4e0e\u3048\u308b\u306e\u3067 \u3053\u308c\u306f\u5927\u304d\u306a\u52a9\u3051\u306b\u306a\u308b\u3068\u671f\u5f85\u3055\u308c\u307e\u3059\u3002\n\u4f8b\u3048\u3070 \u63a5\u5c3e\u8f9e *-ly* \u3092\u6301\u3064\u5358\u8a9e\u306f \u82f1\u8a9e\u3067\u306f\u307b\u3068\u3093\u3069\u5e38\u306b\u526f\u8a5e\u3068\u3057\u3066\u30bf\u30b0\u4ed8\u3051\u3055\u308c\u307e\u3059\u3002\n\n\n\n\u305d\u306e\u305f\u3081\u306b $c_w$ \u3092\u5358\u8a9e $w$ \u306e\u6587\u5b57\u30ec\u30d9\u30eb\u8868\u73fe\u3068\u3057\u307e\u3059\u3002\n$x_w$ \u3092\u5148\u306e\u4f8b\u3068\u540c\u69d8\u306e\u5358\u8a9e\u306e\u57cb\u3081\u8fbc\u307f\u8868\u73fe\u3068\u3057\u307e\u3059\u3002\n\u3059\u308b\u3068 \u7cfb\u5217\u30e2\u30c7\u30eb\u3078\u306e\u5165\u529b\u306f $x_w$ \u3068 $c_w$ \u306e\u7d50\u5408\u306b\u306a\u308a\u307e\u3059\u3002\n\u3057\u305f\u304c\u3063\u3066 $x_w$ \u306e\u6b21\u5143\u304c 5 \u3067 $c_w$ \u306e\u6b21\u5143\u304c 3 \u306e\u5834\u5408\uff0cLSTM \u306f 8 \u6b21\u5143\u306e\u5165\u529b\u3092\u53d7\u3051\u5165\u308c\u308b\u3053\u3068\u306b\u306a\u308a\u307e\u3059\u3002\n\n\n\n\u6587\u5b57\u30ec\u30d9\u30eb\u306e\u8868\u73fe\u3092\u5f97\u308b\u306b\u306f \u5358\u8a9e\u306e\u6587\u5b57\u306b\u5bfe\u3057\u3066 LSTM \u3092\u884c\u3044 $c_w$ \u3092\u3053\u306e LSTM \u306e\u6700\u7d42\u96a0\u308c\u5c64\u3068\u3057\u307e\u3059\u3002\n\u30d2\u30f3\u30c8:\n\n\n\n* \u65b0\u30e2\u30c7\u30eb\u306b\u306f 2 \u3064 LSTM \u304c\u5fc5\u8981\u3067\u3059\u3002\n \u6700\u521d\u306e\u4f8b\u306e\u30e2\u30c7\u30eb\u306f \u54c1\u8a5e\u30bf\u30b0\u306e\u30b9\u30b3\u30a2\u3092\u51fa\u529b\u3059\u308b\u3082\u306e\u3067\u3059\u3002\u65b0\u3057\u3044\u30e2\u30c7\u30eb\u306f 
\u5404\u5358\u8a9e\u306e\u6587\u5b57\u30ec\u30d9\u30eb\u306e\u8868\u73fe\u3092\u51fa\u529b\u3057\u307e\u3059\u3002\n* \u6587\u5b57\u5358\u4f4d\u306e\u3067\u7cfb\u5217\u30e2\u30c7\u30eb\u3092\u884c\u3046\u306b\u306f \u6587\u5b57\u3092\u57cb\u3081\u8fbc\u3080\u5fc5\u8981\u304c\u3042\u308a\u307e\u3059\u3002\n \u6587\u5b57\u306e\u57cb\u3081\u8fbc\u307f\u306f\u3001\u6587\u5b57 LSTM \u3078\u306e\u5165\u529b\u3068\u306a\u308a\u307e\u3059\u3002\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "2a73371d57328664435e2d30bee12383e1fb1120", "size": 24012, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/04sequence_models_tutorial.ipynb", "max_stars_repo_name": "JPA-BERT/jpa-bert.github.io", "max_stars_repo_head_hexsha": "d0acda35703d876582b90b80298cfe0fa8590512", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/04sequence_models_tutorial.ipynb", "max_issues_repo_name": "JPA-BERT/jpa-bert.github.io", "max_issues_repo_head_hexsha": "d0acda35703d876582b90b80298cfe0fa8590512", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/04sequence_models_tutorial.ipynb", "max_forks_repo_name": "JPA-BERT/jpa-bert.github.io", "max_forks_repo_head_hexsha": "d0acda35703d876582b90b80298cfe0fa8590512", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.7674023769, "max_line_length": 354, "alphanum_fraction": 0.5306096952, "converted": true, "num_tokens": 5957, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5156199157230157, "lm_q2_score": 0.5117166047041654, "lm_q1q2_score": 0.2638512725916295}} {"text": "### Imports\n\n\n```python\n# Imports\n\nimport sys\n'''\n!{sys.executable} -m pip install --user -q cirq\n!{sys.executable} -m pip install --user -q google-api-python-client==1.8.0\n'''\nimport cirq\nimport math\nimport random\nimport numpy as np\nimport sympy\n!{sys.executable} --version\nprint(\"\\nCirq version \" + cirq.__version__)\n```\n\n Python 3.7.7\n \n Cirq version 0.7.0\n\n\n# Quantum Computing Simulation using Google's cirq library for Python\n### Dmitri Friedenberg\n\n\nQuantum programming languages are a major developing topic in the field of quantum computing. As the availability of different quantum computing systems increases, the need rises for a clear syntax with which to program a series of operations on an arbitrary quantum computer. \n\n# Review of literature and high-level quantum programming proposals\n\nDiscussion about quantum pseudoprogramming began immediately after Shor's algorithm and Grover's algorithm each took the spotlight in 1994 and 1996 respectively. In \"Conventions for quantum pseudocode\" \\[1\\] in 1996, E. Knill's often cited paper began the initiative to develop a common method for programming quantum computers. Although a physical implementation was nowhere in sight, the need for adopting expressions for constructing a quantum system was immediately clear. 
A number of quantum programming language proposals followed in the decade after Knill's paper, well documented by Peter Selinger's \"A brief survey of quantum programming languages\" \\[2\\].\n\n### Imperative languages\n\nWithin a few years of Knill's paper, several proposals for *imperative* quantum programming languages emerged. The first proposed imperative language was **Quantum Computation Language (QCL)**, introduced in 1998 in B. \u00d6mer's Master's thesis, \"A Procedural Formalism for Quantum Computing\" \\[3\\]. His thesis presented a fully developed imperative language, with examples such as the QCL implementation of Coppersmith's algorithm of a fast quantum discrete Fourier Transform (Table 2.1):\n\n```java\n operator dft(qureg q) { // main operator\n const n=#q; // set n to length of input\n int i; int j; // declare loop counters\n for i=0 to n-1 {\n for j=0 to i-1 { // apply conditional phase gates\n CPhase(2*pi/2^(i-j+1),q[n-i-1] & q[n-j-1]);\n }\n Mix(q[n-i-1]); // qubit rotation\n }\n flip(q); // swap bit order of the output\n }\n```\n\nGenerally, initial proposals for quantum programming languages were imperative, likely due to the intuition that the languages are designed to manipulate physical objects (qubits and gates), more easily expressed in an object-oriented language. The imperative languages that followed included **Q Language** (an extension of C++, from \"Toward an architecture for quantum programming\", S. Bettelli 2002), which tracked quantum memory using a class \"Qreg\" to create registers; and **quantum Guarded-Command Language (qGCL)**, modeled after Edsger Dijkstra's \"Guarded Command Language\", defined by P. Zuliani in his thesis, \"Quantum Programming\" (2001).\n\n### Functional languages\n\nAs functional programming has risen back into the zeitgeist of the last decade or two among the software engineering community, quantum computation has seen its share of attempts at functional languages. Selinger defined the **Quantum Programming Language (QPL)** in another paper from 2004, \"Towards a quantum programming language\" \\[4\\]. His paper describes QPL as, \"*functional*, in the sense that each (atomic or composite) statement operates by transforming a specific set of inputs to outputs.\" The language implements natural solutions for issues like the no-cloning property, where the syntax of the language prevents an implied duplication of a quantum state. Selinger's paper also defines **Quantum flow charts (QFC)**, an excellent tool for illustrating a quantum functional program:\n\n\n\n### Quantum Computing Simulators\n\nGoing further high-level, we come to quantum circuit simulators. Quantum computing simulators not only allow for the simple construction of circuits in a programming language, but may run an input vector through a circuit to simulate the probabalistic results moment by moment. For instance, take the Quantum Forier Transform (QFT) - a simulator can simply compute the Fourier transform classically, and update the qubit states (note that it *could* also properly simulate the full circuit for QFT). This allows for advanced simulations to include features such as noise, error rates, run-times for individual operations, and so on.\n\n# Cirq\n\nFinally, we come to the simulator of focus in this notebook. Cirq was announced as an open source Python framework for Noisy Intermediate Scale Quantum (NISQ) computers on Google's AI Blog, on July 18, 2018 \\[5\\], fully available at https://github.com/quantumlib/Cirq. 
Cirq is actively used on Google's \"Bristlecone\" quantum processor, and has even recently been incorporated into TensorFlow Quantum (also by Google), \"an open-source library for the rapid prototyping of quantum ML models\" \\[6\\]. TensorFlow Quantum is a very recent development just announced on March 9, 2020.\n\n### Notable features\n\n#### Graphical ASCII printing of circuits\n\n\n```python\n# This example is modified from https://github.com/quantumlib/Cirq .\n\n# Pick a qubit.\nqubit = cirq.GridQubit(0, 0)\n\n# Create a circuit\ncircuit = cirq.Circuit(\n cirq.X(qubit)**0.5, # Square root of NOT.\n cirq.measure(qubit, key='m') # Measurement.\n)\nprint(\"Circuit:\")\nprint(circuit)\n\n# Simulate the circuit several times.\nsimulator = cirq.Simulator()\nresult = simulator.run(circuit, repetitions=20)\nprint(\"Ran circuit 20 times. Results:\")\nprint(result)\n```\n\n Circuit:\n (0, 0): \u2500\u2500\u2500X^0.5\u2500\u2500\u2500M('m')\u2500\u2500\u2500\n Ran circuit 20 times. Results:\n m=00101001001100001011\n\n\n#### Moment-based circuits\n\n\n\n(credit [cirq.readthedocs.io]())\n\nBy grouping operations into \"moments\", Cirq constructs operators in parallel and better simulation of scheduled operations on real hardware. This also permits inspection of the state of a circuit at each moment:\n\n\n```python\nqubits = cirq.LineQubit.range(3)\ncircuit = cirq.Circuit(\n cirq.IdentityGate(num_qubits = 3).on(*qubits),\n cirq.H(qubits[0]),\n cirq.H(qubits[2]),\n cirq.CNOT(qubits[0], qubits[1]),\n cirq.CNOT(qubits[1], qubits[2])\n)\nprint(circuit, \"\\n\")\n\nsimulator = cirq.Simulator()\nfor i, step in enumerate(simulator.simulate_moment_steps(circuit)):\n print('state at step %d: %s' % (i, step.dirac_notation(3)) + \"\\n\")\n```\n\n 0: \u2500\u2500\u2500I\u2500\u2500\u2500H\u2500\u2500\u2500@\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 1: \u2500\u2500\u2500I\u2500\u2500\u2500\u2500\u2500\u2500\u2500X\u2500\u2500\u2500@\u2500\u2500\u2500\n \u2502 \u2502\n 2: \u2500\u2500\u2500I\u2500\u2500\u2500H\u2500\u2500\u2500\u2500\u2500\u2500\u2500X\u2500\u2500\u2500 \n \n state at step 0: |000\u27e9\n \n state at step 1: 0.5|000\u27e9 + 0.5|001\u27e9 + 0.5|100\u27e9 + 0.5|101\u27e9\n \n state at step 2: 0.5|000\u27e9 + 0.5|001\u27e9 + 0.5|110\u27e9 + 0.5|111\u27e9\n \n state at step 3: 0.5|000\u27e9 + 0.5|001\u27e9 + 0.5|110\u27e9 + 0.5|111\u27e9\n \n\n\n### Easily extensible operations and classes\n\nCirq is designed to be extensible with abstract classes for gates/operations, simulators, etc. This allows for creation of operations such as unitary operations corresponding to a complex function. The catch is that Cirq allows such operations not to be reflective of the full circuit - that is, the operation does not have to be the sum of normally-defined gates, but can simply hide a classical operation. In the next section, we will see how Cirq can use this to abstract the hardest part of Shor's algorithm.\n\n# Shor's algorithm with Cirq\n\n### Review\n\nA brief review of the procedure for Shor's algorithm \\[7\\] is as follows:\n\n___\n\n\nWe begin by picking a number $N$ to be factored.\n\n##### Classical component\n\n1. Pick a random number $a < N$.\n2. Compute $\\gcd(a, N)$. If $\\gcd(a, N) \\neq 1$, then this common divisor is a nontrivial factor of $N$, so the algorithm is done (though for large numbers it is likely that the algorithm can be run again on those factors). Otherwise, we proceed.\n3. 
Use the **quantum component** to determine $r$, the period of the function $f(x) = a^x \\mod N$.\n4. If any of the following conditions are true, return to step 1.\n - $r = 0$\n - No candidate $r$ was found\n - $r$ is odd\n - $a^{\\frac{r}{2}} \\equiv -1 \\mod N$\n - $\\gcd(a^{\\frac{r}{2}}+1\\text{ mod }N, N) = 1$\n5. We can factor $N$ by $\\gcd(a^{\\frac{r}{2}}+1\\text{ mod }N, N)$.\n\n##### Quantum component\n\nWe receive $a, N$ with which we must solve the period $r$ for $f(x) = a^x \\mod N$. We define the unitary operation $$U_f |x, 0^q\\rangle = |x, f(x)\\rangle, \\ \\ f(x) = a^x \\text{ mod } N.$$\n\n1. Let $n_0$ be the number of bits in N. Let $n$ be the unique integer satisfying $N^2 \\leq 2^n \\leq 2N^2$. Initialize an input and output qubit register, with $n_0$ and $n$ qubits respectively, to the state $|\\psi_0\\rangle = |0\\ldots 0\\rangle_n|0\\ldots 0\\rangle_{n_0}$.\n2. Let $q=2^n$. Prepare a superposition $$|\\psi_1\\rangle = \\frac{1}{\\sqrt{q}}\\sum_{x=0}^{q-1}|x\\rangle|0\\rangle$$ by applying $q$ Hadamard gates.\n3. Apply $U_f$ so that $|\\psi_2\\rangle = U_f|\\psi_1\\rangle.$\n4. Measure the output register of $|\\psi_2\\rangle$ (recall this register has size $n_0$), and discard the result of that measurement (the point is to force the input register into a particular superposition). This puts the input register into state $$|\\psi_3\\rangle = \\frac{1}{\\sqrt{m}}\\sum_{k=0}^{m-1}|x_0 + kr\\rangle.$$\n5. Apply the Quantum Fourier Transform (QFT) to $|\\psi_3\\rangle$ to obtain $|\\psi_4\\rangle = QFT|\\psi_3\\rangle$.\n - *Note: Many sources describe incorrectly describe this as the \"inverse QFT\". This confusion is due to the fact that QFT is the quantum analog of the inverse discrete Fourier transform - but it should properly be referred to as just \"QFT\". Still, Cirq follows the trend of calling this the \"inverse QFT\".*\n6. Measure $y$ from the input register of $|\\psi_4\\rangle$.\n7. Determine the continued fraction representation of $y/q$. Test each convergent $j'/r'$ in order, where $j'/r'$ is reduced to lowest terms. If at any point $r' < N$ and $$|\\frac{y}{q}-\\frac{j'}{r'}|\\leq\\frac{1}{2q}, $$ then $r'$ is the candidate value for the period. Return $r'$ to the classical component's step 3.\n\n___\n\nWith this in mind, let's first examine how Cirq can deal with the circuit for modular exponentiation in Shor's algorithm, satisfying the unitary operation\n\n$$U_f |x, 0^q\\rangle = |x, f(x)\\rangle, \\ \\ f(x) = a^x \\text{ mod } N$$\n\nThis is not at all a simple circuit, and minimizing the number of operations to achieve this unitary function for variables $a, N$ has been the subject of much recent research and contention. Shor's landmark paper did not specify how this unitary operation could be achieved, not even discussing the number of qubits needed for the operation. Given that this operation is seen as **the bottle-neck** for applying Shor's algorithm, the debate over optimizing such a circuit is warranted.\n\nIn \u201cFast Quantum Modular Exponentiation Architecture for Shor\u2019s Factorization Algorithm\u201d \\[8\\] (2013), Pavlidis and Gizopoulos present a circuit with a depth near $2000n^2$ requiring $9n+2$ qubits, where $n$ represents the number of bits of the classical number being factored, where \"the total quantum cost of the proposed design is $1600n^3$. The implementation is complicated, and this notebook will not attempt to program that circuit into Cirq. 
Instead, we'll use a simple trick to ignore the physical circuit requirements entirely, and simply *pretend* we have such a physical device ready for use. The following code creates an ArithmeticOperation in Cirq which sweeps all the troublesome circuitry under the rug.\n\n\n```python\n# Code modified from https://github.com/quantumlib/Cirq/blob/master/examples/shor.py\n\nfrom typing import Callable, List, Optional, Sequence, Union\n\nclass ModularExp(cirq.ArithmeticOperation):\n def __init__(self, target: Sequence[cirq.Qid],\n exponent: Union[int, Sequence[cirq.Qid]], base: int,\n modulus: int) -> None:\n if len(target) < modulus.bit_length():\n raise ValueError(f'Register with {len(target)} qubits is too small '\n f'for modulus {modulus}')\n self.target = target\n self.exponent = exponent\n self.base = base\n self.modulus = modulus\n\n def registers(self) -> Sequence[Union[int, Sequence[cirq.Qid]]]:\n return self.target, self.exponent, self.base, self.modulus\n\n def with_registers(\n self,\n *new_registers: Union[int, Sequence['cirq.Qid']],\n ) -> cirq.ArithmeticOperation:\n if len(new_registers) != 4:\n raise ValueError(f'Expected 4 registers (target, exponent, base, '\n f'modulus), but got {len(new_registers)}')\n target, exponent, base, modulus = new_registers\n return ModularExp(target, exponent, base, modulus)\n\n def apply(self, *register_values: int) -> int:\n assert len(register_values) == 4\n target, exponent, base, modulus = register_values\n if target >= modulus:\n return target\n return target ^ (base**exponent) % modulus\n\n def _circuit_diagram_info_(\n self,\n args: cirq.CircuitDiagramInfoArgs,\n ) -> cirq.CircuitDiagramInfo:\n assert args.known_qubits is not None\n wire_symbols: List[str] = []\n t, e = 0, 0\n for qubit in args.known_qubits:\n if qubit in self.target:\n if t == 0:\n if isinstance(self.exponent, Sequence):\n e_str = 'e'\n else:\n e_str = str(self.exponent)\n wire_symbols.append(\n f'ModularExp(t*{self.base}**{e_str} % {self.modulus})')\n else:\n wire_symbols.append('t' + str(t))\n t += 1\n if isinstance(self.exponent, Sequence) and qubit in self.exponent:\n wire_symbols.append('e' + str(e))\n e += 1\n return cirq.CircuitDiagramInfo(wire_symbols=tuple(wire_symbols))\n```\n\nThis may seem oddly simple for such a complicated circuit. The magic is in the method:\n\n```\n def apply(self, *register_values: int) -> int:\n assert len(register_values) == 4\n target, exponent, base, modulus = register_values\n if target >= modulus:\n return target\n return (target * base**exponent) % modulus\n```\n\nHere, we see that this Cirq operation really just wraps `a**x % N`. Of course, this still means we get a classical run-time when factoring numbers, but that is to be expected.\n\n### The Algorithm\n\n\n```python\n'''\nAttempt to factor N with a single run of Shor's algorithm.\nIf this returns None, then it probablistically failed and\nmust be run again.\nIf this returns -1, then the number is prime.\n'''\n\ndef shor_factor(N, seed=None, verbose=False):\n if type(N) is not int:\n raise TypeError(\"n must be an integer.\")\n if N > (2 ** 30):\n raise ValueError(\"Number is too large. 
Try n <= 2^30.\")\n if N < 1:\n raise ValueError(\"Number must be positive integer greater than 1.\")\n \n if N % 2 == 0:\n if verbose:\n print(f\"{N} has trivial factor of 2.\")\n return 2, N // 2\n \n # using sympy.isprime is certainly 'cheating' - but Shor's algorithm\n # doesn't work on prime numbers, so we can save some wasted effort here.\n if sympy.isprime(N):\n if verbose:\n print(f\"{N} is prime. Aborting.\")\n return -1\n \n '''\n 1. Pick a random number \ud835\udc4e<\ud835\udc41\n '''\n random.seed(seed)\n a = random.randint(2, N-1)\n if verbose:\n print(f\"Chose random number a={a}.\")\n \n '''\n 2. Compute gcd(\ud835\udc4e,\ud835\udc41) . If gcd(\ud835\udc4e,\ud835\udc41)\u22601 , then this common divisor \n is a nontrivial factor of \ud835\udc41 , so the algorithm is done \n (though for large numbers it is likely that the algorithm \n can be run again on those factors). Otherwise, we proceed.\n '''\n gcd = math.gcd(N, a)\n if gcd != 1:\n if verbose:\n print(f\"gcd({N}, {a}) is {gcd}, which is a trivial factor.\")\n return gcd, N // gcd\n if verbose:\n print(f\"a is relatively prime to N.\")\n \n '''\n 3. Use the quantum component to determine \ud835\udc5f , \n the period of the function \ud835\udc53(\ud835\udc65)=\ud835\udc4e^\ud835\udc65 mod\ud835\udc41 .\n '''\n if verbose:\n print(f\"Finding order of `{a}**x % {N}`.\")\n r = quantum_find_order(a, N, seed, verbose)\n if verbose:\n print(f\"Quantum routine returned period r={r}.\")\n \n '''\n 4. If any of the following conditions are true, return to step 1.\n '''\n # No candidate \ud835\udc5f\u2032 was found\n # \ud835\udc5f is odd\n # \ud835\udc4e^(\ud835\udc5f/2)\u2261\u22121mod\ud835\udc41\n # gcd(\ud835\udc4e^(r/2),\ud835\udc41)=1\n if (r == 0 or\n r is None or\n r % 2 == 1):\n if verbose:\n print(f\"The period r={r} failed on classical step 4. Try algorithm again.\")\n return None\n \n c = (a**(r // 2)) % N\n d = math.gcd(c+1, N)\n if (c % N == N-1) or d == 1:\n if verbose:\n print(f\"The period r={r} failed on classical step 4. Try algorithm again.\")\n return None\n \n '''\n 5. We can factor \ud835\udc41 into \ud835\udc54\ud835\udc50\ud835\udc51(\ud835\udc4e^(\ud835\udc5f/2)\u00b11,\ud835\udc41) .\n '''\n if verbose:\n print(\"Algorithm succeeded. Returning factors.\")\n return d, N // d\n\n'''\nThe quantum component of Shor's algorithm. Returns 'r',\nthe candidate period.\n'''\ndef quantum_find_order(a, N, seed=None, verbose=False):\n n_0 = N.bit_length()\n n = 2 * n_0 + 3\n \n '''\n 1. Initialize an input and output qubit register, \n with \ud835\udc5b and \ud835\udc5b0 qubits respectively, to \n the state |\ud835\udf130\u27e9=|0\u20260\u27e9\ud835\udc5b|0\u20260\u27e9\ud835\udc5b0 .\n '''\n input_qubits = cirq.LineQubit.range(n)\n output_qubits = cirq.LineQubit.range(n, n + n_0)\n order_circuit = cirq.Circuit()\n\n '''\n 2. Let \ud835\udc5e=2^\ud835\udc5b . Prepare a superposition\n |\ud835\udf131\u27e9=1\ud835\udc5e\u23af\u23af\u221a\u2211\ud835\udc65=0\ud835\udc5e\u22121|\ud835\udc65\u27e9|0\u27e9\n\n by applying \ud835\udc5e Hadamard gates.\n '''\n q = 2 ** n\n order_circuit.append(cirq.H.on_each(*input_qubits))\n\n '''\n 3. Apply \ud835\udc48\ud835\udc53 so that |\ud835\udf132\u27e9=\ud835\udc48\ud835\udc53|\ud835\udf131\u27e9.\n '''\n order_circuit.append(\n ModularExp(output_qubits, input_qubits, a, N)\n )\n\n '''\n 4. 
Measure the output register of |\ud835\udf132\u27e9 \n (recall this register has size \ud835\udc5b ), \n and discard the result of that measurement \n (the point is to force the input register into \n a particular superposition). This puts the input \n register into state |\ud835\udf133\u27e9.\n '''\n order_circuit.append(cirq.measure(*output_qubits, key='output'))\n\n '''\n 5. Apply the Quantum Fourier Transform (QFT) \n to |\ud835\udf133\u27e9 to obtain |\ud835\udf134\u27e9=\ud835\udc44\ud835\udc39\ud835\udc47|\ud835\udf133\u27e9 .\n '''\n order_circuit.append(cirq.QFT(*input_qubits, inverse=True))\n\n '''\n 6. Measure \ud835\udc66 from the input register of |\ud835\udf134\u27e9\n '''\n order_circuit.append(cirq.measure(*input_qubits, key='input'))\n if verbose:\n print(\"Generating order-finding circuit:\\n\")\n print(order_circuit, \"\\n\")\n simulator = cirq.Simulator(seed=seed)\n input_result = simulator.run(order_circuit).measurements['input'][0]\n y = int(\"\".join(str(x) for x in input_result), 2)\n if verbose:\n print(f\"Circuit returned value of input register, y={y}\")\n\n '''\n 7. Determine the continued fraction representation of \ud835\udc66/\ud835\udc5e .\n Test each convergent \ud835\udc57\u2032/\ud835\udc5f\u2032 in order, where \ud835\udc57\u2032/\ud835\udc5f\u2032 is reduced \n to lowest terms. If at any point \ud835\udc5f\u2032<\ud835\udc41 and |\ud835\udc66/\ud835\udc5e\u2212\ud835\udc57\u2032/\ud835\udc5f\u2032|\u22641/2\ud835\udc5e, \n then \ud835\udc5f\u2032 is the candidate value for the period. \n Return \ud835\udc5f\u2032 to the classical component's step 3.\n '''\n def continued_fraction(num, denom):\n res = []\n quo, rem = divmod(num, denom)\n while rem != 0:\n res = res + [quo]\n quo, rem = divmod(denom, rem)\n denom = (denom-rem)//quo\n return res + [quo]\n \n def cf_to_frac(cf):\n num, denom = 1, 0\n for u in reversed(cf):\n num, denom = denom + (num * u), num\n return num, denom\n\n cf = continued_fraction(y, q)\n if verbose:\n print(f\"Continued fraction for {y}/{q} is {cf}.\")\n \n # test each convergent\n for i in range(len(cf)):\n j, r = cf_to_frac(cf[0:i+1])\n if math.fabs((y/q)-(j/r)) <= (1/(2*q)):\n if verbose:\n print(f\"Using convergent j/r = {j}/{r}. Returning to classical routine.\")\n if r == 1:\n return 0\n return r\n # all convergents failed\n if verbose:\n print(\"All convergents failed. Returning to classical routine.\")\n return None\n```\n\nNow, we are ready to run the algorithm. 
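One aside before we do: for the tiny moduli we can actually simulate, it is easy to cross-check whatever period the quantum routine reports against a brute-force classical computation. The helper below is not part of the notebook's algorithm (brute-forcing the order is exactly the step Shor's algorithm exists to avoid), but it is handy when testing on small `N`.

```python
import math

# Illustrative cross-check (not used by shor_factor): brute-force the multiplicative
# order of a modulo N, i.e. the smallest r > 0 such that a**r % N == 1.
def classical_order(a: int, N: int) -> int:
    assert math.gcd(a, N) == 1, "a must be coprime to N"
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

print(classical_order(10, 21))  # 6 -- the true period of f(x) = 10**x mod 21
```

In the sample run below, the first attempt (with $a=10$) reports a candidate period of $2$, which is only a divisor of the true period $6$; that is one of the ways a single run can fail, and why step 4 sends the algorithm back to pick a new $a$.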
Note that this is a *very expensive* algorithm on classical systems - so we use a rather low $N$ to test the algorithm, or else we wait an eternity to factor a number like 99.\n\n\n```python\nfor i in range(5):\n res = shor_factor(21, seed=i+13, verbose=True)\n if res:\n print(res)\n break\n print(\"-\"*30 + \"\\n\")\n```\n\n Chose random number a=10.\n a is relatively prime to N.\n Finding order of `10**x % 21`.\n Generating order-finding circuit:\n \n 0: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500QFT^-1\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M('input')\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 1: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e1\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 2: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#3\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 3: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e3\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 4: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#5\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 5: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e5\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#6\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 6: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e6\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#7\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 7: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e7\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#8\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 8: 
\u2500\u2500\u2500\u2500H\u2500\u2500\u2500e8\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#9\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 9: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e9\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#10\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 10: \u2500\u2500\u2500H\u2500\u2500\u2500e10\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#11\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 11: \u2500\u2500\u2500H\u2500\u2500\u2500e11\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#12\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 12: \u2500\u2500\u2500H\u2500\u2500\u2500e12\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#13\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502\n 13: \u2500\u2500\u2500\u2500\u2500\u2500\u2500ModularExp(t*10**e % 21)\u2500\u2500\u2500M('output')\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 14: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t1\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 15: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 16: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t3\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 17: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \n \n Circuit returned value of input register, y=4096\n Continued 
fraction for 4096/8192 is [0, 2].\n Using convergent j/r = 1/2. Returning to classical routine.\n Quantum routine returned period r=2.\n The period r=2 failed on classical step 4. Try algorithm again.\n ------------------------------\n \n Chose random number a=5.\n a is relatively prime to N.\n Finding order of `5**x % 21`.\n Generating order-finding circuit:\n \n 0: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500QFT^-1\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M('input')\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 1: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e1\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 2: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#3\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 3: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e3\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 4: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#5\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 5: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e5\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#6\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 6: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e6\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#7\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 7: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e7\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#8\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 8: 
\u2500\u2500\u2500\u2500H\u2500\u2500\u2500e8\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#9\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 9: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e9\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#10\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 10: \u2500\u2500\u2500H\u2500\u2500\u2500e10\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#11\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 11: \u2500\u2500\u2500H\u2500\u2500\u2500e11\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#12\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 12: \u2500\u2500\u2500H\u2500\u2500\u2500e12\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#13\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502\n 13: \u2500\u2500\u2500\u2500\u2500\u2500\u2500ModularExp(t*5**e % 21)\u2500\u2500\u2500M('output')\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 14: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t1\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 15: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 16: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t3\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 17: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \n \n Circuit returned value of input register, y=0\n Continued fraction for 0/8192 is [0].\n Using convergent j/r = 0/1. 
Returning to classical routine.\n Quantum routine returned period r=0.\n The period r=0 failed on classical step 4. Try algorithm again.\n ------------------------------\n \n Chose random number a=8.\n a is relatively prime to N.\n Finding order of `8**x % 21`.\n Generating order-finding circuit:\n \n 0: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500QFT^-1\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M('input')\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 1: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e1\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 2: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#3\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 3: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e3\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 4: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#5\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 5: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e5\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#6\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 6: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e6\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#7\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 7: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e7\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#8\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 8: \u2500\u2500\u2500\u2500H\u2500\u2500\u2500e8\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#9\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 9: 
\u2500\u2500\u2500\u2500H\u2500\u2500\u2500e9\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#10\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 10: \u2500\u2500\u2500H\u2500\u2500\u2500e10\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#11\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 11: \u2500\u2500\u2500H\u2500\u2500\u2500e11\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#12\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502 \u2502\n 12: \u2500\u2500\u2500H\u2500\u2500\u2500e12\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500#13\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502\n 13: \u2500\u2500\u2500\u2500\u2500\u2500\u2500ModularExp(t*8**e % 21)\u2500\u2500\u2500M('output')\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 14: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t1\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 15: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 16: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t3\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u2502 \u2502\n 17: \u2500\u2500\u2500\u2500\u2500\u2500\u2500t4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500M\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \n \n Circuit returned value of input register, y=4096\n Continued fraction for 4096/8192 is [0, 2].\n Using convergent j/r = 1/2. Returning to classical routine.\n Quantum routine returned period r=2.\n Algorithm succeeded. Returning factors.\n (3, 7)\n\n\n# Sources:\n\n\\[1\\]: E. Knill. Conventions for quantum pseudocode. Technical Report LAUR-96-2724, LANL, 1996. (https://pdfs.semanticscholar.org/60d1/e63ca31555ec7013c5eb9a8a63788398fd14.pdf)\n\n\\[2\\]: Selinger, Peter. (2004). 
A Brief Survey of Quantum Programming Languages. 1-6. (https://www.mscs.dal.ca/~selinger/papers/flops04.pdf)\n\n\\[3\\]: B. \u00d6mer, A Procedural Formalism for Quantum Computing, Master thesis (computing science), Technical University of Vienna, 1998. (http://tph.tuwien.ac.at/~oemer/doc/qcldoc.pdf)\n\n\\[4\\]: Selinger, Peter. \u201cTowards a Quantum Programming Language.\u201d Mathematical Structures in Computer Science, vol. 14, no. 4, Aug. 2004, pp. 527\u201386. Cambridge Core, doi:10.1017/S0960129504004256. (https://www.mscs.dal.ca/~selinger/papers/qpl.pdf)\n\n\\[5\\]: \"Announcing Cirq: An Open Source Framework for NISQ Algorithms.\" (https://ai.googleblog.com/2018/07/announcing-cirq-open-source-framework.html)\n\n\\[6\\]: \"Announcing TensorFlow Quantum: An Open Source Library for Quantum Machine Learning.\" (https://ai.googleblog.com/2020/03/announcing-tensorflow-quantum-open.html)\n\n\\[7\\]: Shor, Peter W. \u201cPolynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer.\u201d SIAM Journal on Computing, vol. 26, no. 5, Oct. 1997, pp. 1484\u2013509. arXiv.org, doi:10.1137/S0097539795293172. (https://arxiv.org/pdf/quant-ph/9508027.pdf)\n\n\\[7\\]: Pavlidis, Archimedes, and Dimitris Gizopoulos. \u201cFast Quantum Modular Exponentiation Architecture for Shor\u2019s Factorization Algorithm.\u201d ArXiv:1207.0511 [Quant-Ph], Nov. 2013. arXiv.org, http://arxiv.org/abs/1207.0511. (https://arxiv.org/pdf/1207.0511.pdf)\n\n\\[8\\]: Cirq on GitHub. (https://github.com/quantumlib/Cirq/blob/master/)\n", "meta": {"hexsha": "4c656324439782a42a56d2234c7927c9d2a492d3", "size": 149293, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Shor's Algorithm in Cirq.ipynb", "max_stars_repo_name": "metasir/Shors-Algorithm-in-Cirq", "max_stars_repo_head_hexsha": "8e5f4cdc0257d3b0e02b89cd52de216b00796423", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-10-01T04:23:45.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-01T04:23:45.000Z", "max_issues_repo_path": "Shor's Algorithm in Cirq.ipynb", "max_issues_repo_name": "metasir/Shors-Algorithm-in-Cirq", "max_issues_repo_head_hexsha": "8e5f4cdc0257d3b0e02b89cd52de216b00796423", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Shor's Algorithm in Cirq.ipynb", "max_forks_repo_name": "metasir/Shors-Algorithm-in-Cirq", "max_forks_repo_head_hexsha": "8e5f4cdc0257d3b0e02b89cd52de216b00796423", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 181.1808252427, "max_line_length": 88816, "alphanum_fraction": 0.8379562337, "converted": true, "num_tokens": 8188, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5039061705290806, "lm_q2_score": 0.523420348936324, "lm_q1q2_score": 0.2637547436094982}} {"text": "```python\nimport ipywidgets as widgets\n```\n\n# Inductive Reasoning and Deductive Reasoning\n\nThere are two forms of reasoning that that are useful when investigating a piece of mathematics.\n\n* `Inductive reasoning` involves looking for __patterns__ in evidence in order to come up with conjectures (i.e. things that are likely to be true). 
This sort of reasoning will __not__ tell you whether or not something actually _is_ true but it is still very useful for making connections and figuring out what to investigate next.\n\n* `Deductive reasoning` involves starting with what you __know__ and logically figuring out if some conjecture __must__ also be true (and why). While deductive reasoning is stronger than inductive reasoning, it can also be more difficult to use.\n\nIn practice, one will often use `inductive reasoning` to make conjectures and `deductive reasoning` to verify them. In some cases producing a conjecture will require a mix of inductive and deductive reasoning.\n\nIn this notebook we will go over some example problems to help illustrate how one would go about using `inductive` and `deductive` reasoning in problem solving while avoiding pitfalls. Being able to apply these skills will make you a more effective problem solver. Being able to distinguish between the two will help you maintain a clear understanding of what you're doing, why you're doing it, and avoid mistakes in the processs.\n\n## A Flawed Application of Inductive Reasoning\n\nIn this problem a circle is partitioned into regions by adding dots to the edges and drawing chords connecting them. The circle on the top left has only a single dot and a single region. The next one has two dots and two regions.\n\nBefore we can apply inductive reasoning we need some examples. To that end I've counted the number of regions for the first five cases:\n\n$$1, 2, 4, 8, 16.$$\n\nIt looks like there is a pattern here. Inductive reasoning might suggest that the circle with $n$ dots will have $2^{n-1}$ regions, is that true?\n\nTry counting the number of regions in the sixth case.\n\n\n\nUnfortunately, as it turns out the sixth circle will break down into $31$ regions (the last one is contained in the intersection at the center, almost inexistent because of how I drew it), not $32$. This is an example where inductive reasoning can lead you astray. Fortunately for us we managed to find a counterexample right away but there are conjectures where the first counterexample took decades to find and required numbers so large that it's virtually impossible for people to find them by hand. So we should always be very skeptical about the things inductive thinking may lead us to believe.\n\n## Some Flawed Applications of Deductive Reasoning\n\nDeductive reasoning can also fail us if we are not careful. It is possible to get caught up manipulating equations and not realize there's an underlying logical problem.\n\nHere are two flawed proofs. Try and find the problem in each one!\n\n### A Classic Flawed Proof\n\nThere are a few variations of this proof (with the same flaw) floating around. 
Every so often a student rediscovers it and thinks they've broken math.\n\nLet $a=b$.\n\nThen it follows that $b^2=ab$.\n\n$$\n\\begin{align*}\n a^2 - b^2 &= a^2 - b^2\\\\\n a^2 - b^2 &= a^2 - ab \\tag{Since $b^2=ab$}\\\\\n (a+b)(a-b) &= (a)(a-b) \\tag{Factoring}\\\\\n a+b &= a \\tag{Divide by sides by $a-b$}\\\\\n 2a &= a \\tag{Since $a=b$}\\\\\n 2 &= 1 \\tag{Divide both sides by $a$}\n\\end{align*}\n$$\n\nHint: The problem involves division.\n\nThe problem is introduced when both sides are divided by $a-b$ because $a-b=0$ and division by zero is not allowed (for reasons like this).\n\n### A Flawed Proof Involving Radicals\n\nThis one is somewhat less common but still interesting.\n\n$$\n\\begin{align*}\n -1 \n &= i^2 \\\\\n &= (i)(i) \\\\\n &= \\sqrt{-1}\\sqrt{-1} \\\\\n &= \\sqrt{(-1)(-1)} \\\\\n &= \\sqrt{1} \\\\\n &= 1\n\\end{align*}\n$$\n\nSo $-1=1$.\n\nHint: The problem involves distributing roots.\n\nThe problem occurs because $\\sqrt{ab}=\\sqrt{a}\\sqrt{b}$ only holds when $a$ and $b$ are both greater than or equal to $0$ (neither $a$ nor $b$ are allowed to be negative).\n\n## Some Applications of Inductive Reasoning\n\n### Sum of the first n odd numbers\n\nSuppose you need to compute the sum of the first $100$ odd numbers. You could do this directly but that likely wouldn't be very fun or interesting. Let's instead try applying inductive reasoning to try to come up with a better way to do it.\n\nBefore we can start looking for patterns we'll first need to generate some examples (so that we can use them as evidence later). Let's do that in Python:\n\n\n```python\n# Create a list of odd numbers from 1 to 20 (incrementing by 2 each time).\noddNumbers = range(1,20,2)\n\n# Print a nice heading.\nprint('| n | Odd | S(n)|')\nprint('------------------')\n\n# For each odd number print the step, the number, and the sum of all odd numbers so far.\nstep = 0\noddSum = 0\nfor odd in oddNumbers:\n step = step + 1\n oddSum = oddSum + odd\n print('|{:3d} | {:3d} | {:3d} |'.format(step, odd, oddSum))\n```\n\n | n | Odd | S(n)|\n ------------------\n | 1 | 1 | 1 |\n | 2 | 3 | 4 |\n | 3 | 5 | 9 |\n | 4 | 7 | 16 |\n | 5 | 9 | 25 |\n | 6 | 11 | 36 |\n | 7 | 13 | 49 |\n | 8 | 15 | 64 |\n | 9 | 17 | 81 |\n | 10 | 19 | 100 |\n\n\nFor brevity we'll use $S(n)$ to refer to the __sum of the first $n$ odd numbers__.\n\nThe code above gives us a list of the first $10$ odd numbers as well as $S(n)$ for each one (eg. for $n=3$, the $3$rd odd is $5$ and $S(3) = 1 + 3 + 5 = 9$).\n\nNow look closely at the data and try to see if there is a pattern there. Maybe consider changing the 20 in `range(1,20,2)` to a larger value to obtain more examples.\n\nHint: $1+3+5=3^2$.\n\nA good conjecture might be that\n\n$$S(n)=n^2.$$\n\nHere is a slider that tests our conjecture against a larger range of values:\n\n\n```python\nnum = widgets.IntSlider(description='n:', min=1)\ndef oddCompare(num):\n oddNumbers = range(1, num*2, 2)\n oddSum = sum(list(oddNumbers))\n print('S(n): {}'.format(oddSum))\n print('n^2: {}'.format((num*num)))\n\nout = widgets.interactive_output(oddCompare, {'num': num})\nwidgets.VBox([num, out])\n```\n\n\n

Failed to display Jupyter Widget of type VBox.

\n

\n If you're reading this message in the Jupyter Notebook or JupyterLab Notebook, it may mean\n that the widgets JavaScript is still loading. If this message persists, it\n likely means that the widgets JavaScript library is either not installed or\n not enabled. See the Jupyter\n Widgets Documentation for setup instructions.\n

\n

\n If you're reading this message in another frontend (for example, a static\n rendering on GitHub or NBViewer),\n it may mean that your frontend doesn't currently support widgets.\n

\n\n\n\nNow that we have a conjecture it is typically very helpful if we're able to take it further and come up with some guesses about __why__ the conjecture holds. In this case the trick is to realize that we can compute the sum of the first $n$ odd numbers by taking the sum of the first $n-1$ odd numbers and adding the $n$'th odd. In other words:\n\n$$S(n+1)=S(n) + (n+1)^{\\text{th}} \\text{ odd number}.$$\n\nFor instance, $S(5) = 1 + 3 + 5 + 7 + 9 = S(4) + 9$.\n\nThen combining this insight with the fact that we can represent square numbers as squares yields this visualization:\n\n\n\nUnfortunately as convincing as this visual representation may be, it isn't strong enough to prove that $S(n)=n^2$ for all numbers $n$. In order to prove that it holds for all $n$ we require a more advanced proof technique that we don't currently have access to. So we must grit our teeth and accept the fact that _as far as we know_ there could exist some number out there for which this fails.\n\n### Triangular numbers\n\nThere is famous story about the mathematician Carl Friedrich Gauss who as a child in primary school was tasked with computing the sum of the first 100 numbers as a way to keep him busy. As the story goes, Gauss quickly realized a pattern and wrote down the answer of 5050 within a few seconds.\n\nFor brevity we'll use $T(n)$ to refer to the __sum of the first $n$ numbers__.\n\nThe trick to seeing the pattern in this problem isn't as straightforward as the last one. As before we'll need to generate some examples to analyze first.\n\n\n```python\n# Create a list of the first 10 numbers.\nnumbers = range(1,11)\n\n# Print a nice heading.\nprint('| n | T(n)|')\nprint('-------------')\n\n# For each odd number print the number and the sum of all numbers so far.\ntSum = 0\nfor num in numbers:\n tSum = tSum + num\n print('| {:3d} | {:3d} |'.format(num, tSum))\n```\n\n | n | T(n)|\n -------------\n | 1 | 1 |\n | 2 | 3 |\n | 3 | 6 |\n | 4 | 10 |\n | 5 | 15 |\n | 6 | 21 |\n | 7 | 28 |\n | 8 | 36 |\n | 9 | 45 |\n | 10 | 55 |\n\n\nUnfortunately this didn't turn out to be very insightful.\n\nAnother approach we can take is to try to represent the sum differently. Taking a cue from the previous section we'll draw the sum visually:\n\n\n\n\n\nIt is because of this representation that the sum of the first $n$ numbers is often referred to as a $n$'th `triangle number`. The value of our sum is represented by the 'area' of its triangular representation. Now, while it may not be easy to compute the area of such a triangle it is easy to compute the area of a rectangle and we can produce one by setting two triangles face to face:\n\n\n\nThis representation suggests a good conjecture for computing the $n$'th triangle number:\n$$T(n)=\\frac{(n)(n+1)}{2}.$$\n\nUnfortunately we once again lack the advanced proof technique we need to prove (using deductive thinking) that this is true for all numbers $n$. So like before we've managed to obtain a really good conjecture through inductive thinking but are not able to confirm with certainty whether or not it's true.\n\n### One Weird Trick\n\nFrom time to time neat computational tricks like this will go viral on social media. Unfortunately the people presenting them will typically only show a few flashy examples and leave the readers feeling completely mystified about __why__ the trick works (or worse, feeling betrayed when it fails).\n\n\n\nBefore we start lets first rephrase what the picture is saying:\n\nTo compute $(97)(96)$:\n1. 
For each of our values, compute their difference from $100$:\n - $3=100-97$\n - $4=100-96$\n2. Multiply the differences to compute the first two digits of the result:\n - $12=(3)(4)$\n3. Add the differences and subtract the result from $100$ to compute the remaining digits of the result:\n - $93=100-(3+4)$\n4. Glue the two results together to get the final result:\n - $(97)(96)=9312$\n\nIt looks like step 3 could be simplified a bit to $93 = 97 - 4$ or $93 = 96 - 3$\n\nIn general it looks like the algorithm may be something like this:\n\nTo compute $(a)(b)$:\n1. For each of our values, compute their difference from $100$:\n - $a'=100-a$\n - $b'=100-b$\n2. Multiply the differences to compute the first two digits of the result:\n - $D=(a')(b')$\n3. Add the differences and subtract the result from $100$ to compute the remaining digits of the result:\n - $C=a-b'$\n4. Glue the two results together to get the final result:\n - $(a)(b)=C\\text{ appended with }D$\n\nNext lets have the computer generate some more examples for us so that we can get a better sense of the problem through `inductive reasoning`. The two sliders below let us choose some inputs and present the result created by the algorithm as well as the actual result with a message saying `Success!` if the algorithm gave the correct answer and `Fail!` if it gave the incorrect answer.\n\n\n```python\na = widgets.IntSlider(description='a:', min=85, max=115, value=100)\nb = widgets.IntSlider(description='b:', min=85, max=115, value=100)\n\ndef multiply(a,b):\n aDiff = 100-a\n bDiff = 100-b\n \n firstTwo = aDiff*bDiff\n lastTwo = a - bDiff\n \n result = str(lastTwo).lstrip('0') + str(firstTwo).zfill(2)\n print('Result: {}'.format(result))\n print('Actual product: {}'.format((a*b)))\n if (result == str(a*b)):\n print('Success!')\n else:\n print('Fail!')\n\nout = widgets.interactive_output(multiply, {'a': a, 'b':b})\nwidgets.VBox([a,b, out])\n```\n\n\n

Failed to display Jupyter Widget of type VBox.

\n

\n If you're reading this message in the Jupyter Notebook or JupyterLab Notebook, it may mean\n that the widgets JavaScript is still loading. If this message persists, it\n likely means that the widgets JavaScript library is either not installed or\n not enabled. See the Jupyter\n Widgets Documentation for setup instructions.\n

\n

\n If you're reading this message in another frontend (for example, a static\n rendering on GitHub or NBViewer),\n it may mean that your frontend doesn't currently support widgets.\n

\n\n\n\nPlaying around with the sliders it seems that the algorithm fails in two cases:\n1. Where the first digits in the result are greater than $100$.\n * for instance, for $(101)(99)$ it gives `100-1` instead of `9999`\n2. Where the first digits in the result are negative.\n * For instance, for $(110)(110)$ it gives `120120` instead of `12100`\n\nCan you see a pattern in the way the numbers fail, maybe a way to fix it?\n\nIt seems like both instances can be fixed by carrying values. Perhaps, instead of gluing values together like strings we're actually supposed to be multiplying the last digits by $100$ and adding the first digits! For instance, instead of saying $$9312 = 93\\text{ appended with }12$$ we would say $$9312=(93)(100)+12.$$\n\nLets update the algorithm with this change:\n\nTo compute $(a)(b)$:\n1. For each of our values, compute their difference from $100$:\n - $a'=100-a$\n - $b'=100-b$\n2. Multiply the differences to compute the first two digits of the result:\n - $D=(a')(b')$\n3. Add the differences and subtract the result from $100$ to compute the remaining digits of the result:\n - $C=a - b'$\n4. Combine the two results together to get the final result:\n - $(a)(b)=(C)(100) + D$\n\nIn other words: $$ab = [a-(100-b)](100) + (100-b)(100-a).$$\n\nNext let's create a new version of the sliders:\n\n\n```python\na = widgets.IntSlider(description='a:', min=85, max=115, value=100)\nb = widgets.IntSlider(description='b:', min=85, max=115, value=100)\n\ndef multiply(a,b):\n aDiff = 100-a\n bDiff = 100-b\n \n firstTwo = aDiff*bDiff\n lastTwo = 100 - (aDiff + bDiff)\n \n result = lastTwo*100 + firstTwo\n print('Result: {}'.format(result))\n print('Actual product: {}'.format((a*b)))\n if (result == a*b):\n print('Success!')\n else:\n print('Fail!')\n\nout = widgets.interactive_output(multiply, {'a': a, 'b':b})\nwidgets.VBox([a,b, out])\n```\n\n\n

Failed to display Jupyter Widget of type VBox.

\n

\n If you're reading this message in the Jupyter Notebook or JupyterLab Notebook, it may mean\n that the widgets JavaScript is still loading. If this message persists, it\n likely means that the widgets JavaScript library is either not installed or\n not enabled. See the Jupyter\n Widgets Documentation for setup instructions.\n

\n

\n If you're reading this message in another frontend (for example, a static\n rendering on GitHub or NBViewer),\n it may mean that your frontend doesn't currently support widgets.\n

\n\n\n\nFor the two failing examples mentioned above we now get:\n* For $(101)(99)$ we get `9999` which is correct!\n* For $(110)(110)$ we get `12100` which is correct!\n\nNow that we have a conjecture let's getting a better sense of why it works. One thing we can do is to take our equation from above:\n$$\n\\begin{align}\nab &= [a-(100-b)](100) + (100-b)(100-a) \\\\\n&= (a)(100) - (100-b)(100) + (100-b)(100-a)\n\\end{align}\n$$\n\nWe can visualize this:\n\n\nNote: This visualization assumes that $a$ and $b$ are between $0$ and $100$ (though in our conjecture we also allow them be greater than $100$).\n\n\nIn general these sorts of techniques where one performs a computation by manipulating the digits of a value is called an 'algorism' (not to be confused with algorithm). They're not really used very much these days (except for fast mental math gimicks).\n\nThis particular algorism has a lot of generalizations for dealing with larger numbers but the reasoning behind them gets quite convoluted and in the end the most important part isn't proving that a algorism works for all numbers but that it works for all the numbers for which a mental computation is fast. In this case we can be satisfied by saying that this algorism works for numbers between $91$ and $109$ since those values are the easiest to use in practice.\n\n## Some Applications of Deductive Reasoning\n\n### Fractions\n\nFractions can be difficult to manipulate. Perhaps we can use deductive reasoning to come up with some easier ways to manipulate them.\n\nFirst some observations:\n1. Every number $a$ can be written in fraction form: $$ a=\\frac{a}{1}. $$\n2. For every number $a$, except in the case where $a=0$, we can write: $$ 1=\\frac{a}{a}. $$\n3. Multiplying two fractions just multiplies the numerators and denominators: $$ \\left( \\frac{a}{b}\\right) \\left( \\frac{c}{d} \\right) = \\frac{ac}{bd}. $$\n\nThe first thing to note is that observation (3) gives us a way to factor any fraction: $$ \\frac{a}{b} = \\left( \\frac{a}{1}\\right) \\left( \\frac{1}{b} \\right). $$\n\nCancellation follows from these observations as well: $$ (b) \\left(\\frac{a}{b}\\right) = \\left(\\frac{b}{1}\\right) \\left(\\frac{a}{1}\\right)\\left(\\frac{1}{b}\\right) = \\left(\\frac{b}{b}\\right) \\left(\\frac{a}{1}\\right) = (1)(a) = a$$\n\nLet's use these to manipulate some complicated fractions.\n\n$$\n\\begin{align*}\n \\frac{a}{\\frac{1}{c}}\n & = \\left( \\frac{a}{\\frac{1}{c}}\\right) (1) \\tag{Multiply by $1$}\\\\\n & = \\left( \\frac{a}{\\frac{1}{c}}\\right) \\left(\\frac{c}{c} \\right) \\tag{By observation 2}\\\\\n & = \\frac{ac}{\\left( \\frac{1}{c}\\right) (c)} \\tag{By observation 3}\\\\\n & = \\frac{ac}{1} \\tag{By canceling}\\\\\n & = ac \\tag{By observation 1}\n\\end{align*}\n$$\n\nAnother more complicated example:\n$$\n\\begin{align*}\n \\frac{\\frac{a}{b}}{\\frac{d}{c}}\n & = \\left( \\frac{\\frac{a}{b}}{\\frac{d}{c}} \\right) (1)(1) \\tag{Multiply by $1$}\\\\\n & = \\left( \\frac{\\frac{a}{b}}{\\frac{d}{c}} \\right) \\left(\\frac{b}{b} \\right)\\left(\\frac{c}{c} \\right) \\tag{By observation 2}\\\\\n & = \\frac{\\left( \\frac{a}{b} \\right) (b)(c)}{\\left(\\frac{d}{c} \\right) (b)(c)} \\tag{By observation 3}\\\\\n & = \\frac{ac}{db} \\tag{By canceling}\n\\end{align*}\n$$\n\nWe can manipulate even the most complicated fractions by __cleverly multiplying by 1__ in this way.\n\n### Distributive Property\n\nThe distributive property is extremely useful in simplifying expressions and performing computations. 
In fact, every multiplication algorithm you encounter will at some level boil down to some clever application of the distributive property. Simply put the distributive property tells us how addition and multiplication interact: $$(a+b)c = ac + bc.$$\n\nSince multiplication is commutative this statement is equivalent: $$a(c+d) = ac + ad.$$\n\nThe FOIL mnemonic is just a special case of the distributive property:\n$$ (a+b)(c+d) = (a+b)c + (a+b)d = ac + bc + ad + bd. $$\n\nIt is important to remember that the distributive property can be read two ways. In one sense it tells us how to distribute multiplication accross addition but in another sense it tells us how to undo that distribution. For example, suppose you have something like $6x + 10xy$. If we notice that both $6x$ and $10xy$ have $2x$ as a factor then we can rewrite that as $6x + 10xy = 2x(3 + 5y)$. This technique is an extremely useful application of deductive reasoning.\n\n### Mentally Computing Simple Percentages\n\nThere are many occasions where one might be asked to compute a percentage of some value on the spot (eg. tipping at a restaurant). Fortunately there's a trick to doing it quickly.\n\nFirst notice that computing $10\\%$ is as easy as moving the decimal point one digit to the left (eg. $10\\%$ of $25.3$ is $2.53$). Similarly, $1\\%$ can be computed by moving the decimal point two digits to the left.\n\nFrom there on it's just a matter of adding, subtracting, and/or multiplying these percentages to get to the desired percentage. For instance, $18\\%$ can be computed by first computing $10\\%$, doubling the value to get $20\\%$, moving the decimal over one more time to get $2\\%$ and then subtracting the $2\\%$ value from the $20\\%$ value. It's easier than it sounds.\n\nThis is an application of deductive reasoning because we reached all of the assertions here logically, not by looking at any patterns and conjecturing.\n\n\n\n", "meta": {"hexsha": "7a48be88292c44e109744b8463a605c6fddad5ab", "size": 28974, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Mathematics/NumberSenseAndLogicalReasoning/inductive-and-deductive-reasoning.ipynb", "max_stars_repo_name": "jazwinkicks/jaz-callysto", "max_stars_repo_head_hexsha": "f9414e275c3cc38bc7266f105e0dcb57ad483ae0", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics/NumberSenseAndLogicalReasoning/inductive-and-deductive-reasoning.ipynb", "max_issues_repo_name": "jazwinkicks/jaz-callysto", "max_issues_repo_head_hexsha": "f9414e275c3cc38bc7266f105e0dcb57ad483ae0", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/NumberSenseAndLogicalReasoning/inductive-and-deductive-reasoning.ipynb", "max_forks_repo_name": "jazwinkicks/jaz-callysto", "max_forks_repo_head_hexsha": "f9414e275c3cc38bc7266f105e0dcb57ad483ae0", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.5069124424, "max_line_length": 606, "alphanum_fraction": 0.5919790157, "converted": true, "num_tokens": 5694, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
YES", "lm_q1_score": 0.4339814501625211, "lm_q2_score": 0.6076631698328916, "lm_q1q2_score": 0.26371454365443264}} {"text": "# RAW-CAPTURE on Aspen-8\n\nIn this we are going to show how to access \"raw\" measurement data with Quilt.\n\n\n```python\nfrom pyquil import Program, get_qc\n\nqc = get_qc('Aspen-8')\n```\n\n\n```python\ncals = qc.compiler.calibration_program\n```\n\n## Peeking at a MEASURE calibration\nWe first take a peek at how a measurement operation is specified. We can dot his by looking at the corresponding calibration. Below we consider the calibration for `MEASURE 0`.\n\n\n```python\nfrom pyquil.quilatom import Qubit, Frame\nfrom pyquil.quilbase import Pulse, Capture, DefMeasureCalibration\n\nqubit = Qubit(0)\n\nmeasure_defn = next(defn for defn in cals.calibrations \n if isinstance(defn, DefMeasureCalibration) and defn.qubit == qubit)\n\nprint(measure_defn)\n```\n\n DEFCAL MEASURE 0 addr:\n FENCE 0\n DECLARE q0_unclassified REAL[2]\n NONBLOCKING PULSE 0 \"ro_tx\" flat(duration: 1.68e-06, iq: 1.0, scale: 0.04466835921509615, phase: 0.0, detuning: 0.0)\n NONBLOCKING CAPTURE 0 \"ro_rx\" boxcar_kernel(duration: 1.68e-06, scale: 1.0, phase: 2.6571617075901393, detuning: 0.0) q0_unclassified[0]\n PRAGMA FILTER-NODE q0_unclassified \"{'module':'lodgepole.filters.io','filter_type':'DataBuffer','source':'q0_ro_rx/filter','publish':true,'params':{},'_type':'FilterNode'}\"\n PRAGMA LOAD-MEMORY q0_unclassified \"q0_unclassified[0]\"\n PRAGMA FILTER-NODE q0_classified \"{'module':'lodgepole.filters.classifiers','filter_type':'SingleQLinear','source':'q0_ro_rx/filter','publish':false,'params':{'a':[1.0,0.0],'threshold':0.000241237408735565},'_type':'FilterNode'}\"\n PRAGMA FILTER-NODE q0 \"{'module':'lodgepole.filters.io','filter_type':'DataBuffer','source':'q0_classified','publish':true,'params':{},'_type':'FilterNode'}\"\n PRAGMA LOAD-MEMORY q0 \"addr\"\n FENCE 0\n \n\n\nThere are a few things note about the above:\n\n1. The basic structure of `MEASURE 0 addr` is to apply a pulse on the `\"ro_tx\"` frame, and then perform a capture on the corresponding `\"ro_rx\"` frame.\n2. Although the user may perform `MEASURE 0 ro`, the memory location required for this is a bit. Under the hood, `CAPTURE` writes a complex IQ value to the `REAL[2]` region `q0_unclassified`.\n3. The wrangling in order to map from `q0_unclassified` to `addr` is controlled through `PRAGMA` operations. These are important for downstream processing and execution of the Quil program. Tamper with them at your own risk!\n\n## RAW-CAPTURE experiments\n\nThe value stored in `q0_unclassified` has already been processed on hardware: in particular, it is produced by demodulating a passband signal and then integrating against the `CAPTURE` waveform. What `RAW-CAPTURE` does is give you, the user, access to the raw values of that passband signal. In the following, execute programs with `RAW-CAPTURE`, and plot their results.\n\nBefore we begin, it will be useful to get some data associated with the above `MEASURE` calibration. 
In particular, the `PULSE` and `CAPTURE` operations, as well as the frame definition for `0 \"ro_rx\"`.\n\n\n```python\npulse = next(i for i in measure_defn.instrs if isinstance(i, Pulse))\nprint(pulse, \"\\n\")\ncapture = next(i for i in measure_defn.instrs if isinstance(i, Capture))\nprint(capture, \"\\n\")\nframe = Frame([qubit], \"ro_rx\")\nframe_defn = cals.frames[frame]\nprint(frame_defn)\n```\n\n NONBLOCKING PULSE 0 \"ro_tx\" flat(duration: 1.68e-06, iq: 1.0, scale: 0.04466835921509615, phase: 0.0, detuning: 0.0) \n \n NONBLOCKING CAPTURE 0 \"ro_rx\" boxcar_kernel(duration: 1.68e-06, scale: 1.0, phase: 2.6571617075901393, detuning: 0.0) q0_unclassified[0] \n \n DEFFRAME 0 \"ro_rx\":\n DIRECTION: \"rx\"\n INITIAL-FREQUENCY: 7262459787.78838\n HARDWARE-OBJECT: \"q0_ro_rx\"\n SAMPLE-RATE: 2000000000.0\n \n\n\n### An almost-trivial example\n\nFirst, let's just run a `RAW-CAPTURE` instruction. We will apply this to the above `CAPTURE` frame, i.e. `0 \"ro_rx\"`, and for the same duration the `CAPTURE`. The principal difference is that rather than read-out to a memory region of length 2, we will need many more. It is easy to compute the size $n$ of the output, namely\n\\begin{equation}\nn = \\left \\lceil{t \\cdot f_s}\\right \\rceil , \n\\end{equation}\nwhere $t$ is the duration in seconds, $f_s$ is the sample rate in Hz (which is part of the frame definition), and $ \\left \\lceil x \\right \\rceil $ denotes the smallest integer not less than $x$.\n\n\n\n```python\nfrom math import ceil\n\nduration = capture.kernel.duration\nsample_rate = frame_defn.sample_rate\nmemory_length = ceil(duration * sample_rate)\n\nraw_capture_no_pulse = Program( \n f'DECLARE raw REAL[{memory_length}]',\n f'RAW-CAPTURE {frame} {duration} raw'\n).wrap_in_numshots_loop(1000)\nprint(raw_capture_no_pulse)\n```\n\n DECLARE raw REAL[3360]\n RAW-CAPTURE 0 \"ro_rx\" 1.68e-06 raw[0]\n \n\n\n\n```python\nexe = qc.compiler.native_quil_to_executable(raw_capture_no_pulse)\nqc.run(exe)\n```\n\n\n```python\nraw_results_no_pulse = qc.qam.read_memory(region_name='raw')\n```\n\nRaw capture results are by default represented as integers in the interval $[-2^{15}, 2^{15}]$. For many analyses you may prefer to normalize to the range $[-1,1]$.\n\n\n```python\nprint(\"shape\", raw_results_no_pulse.shape)\nprint(\"data\", raw_results_no_pulse)\n```\n\n shape (1000, 3360)\n data [[ 156. -268. -488. ... -152. -96. 176.]\n [-704. -68. -588. ... 572. 20. 452.]\n [-304. -412. -784. ... 76. 628. 668.]\n ...\n [-408. -788. -500. ... -80. -116. 868.]\n [-736. -400. -956. ... 528. 872. 272.]\n [-388. -608. -432. ... 556. 548. 520.]]\n\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.figure()\nplt.gcf().set_size_inches(20.5, 10.5)\nplt.plot(np.arange(len(raw_results_no_pulse[0,:]))/sample_rate, raw_results_no_pulse[0,:])\nplt.show()\n```\n\n\n```python\navg_results_no_pulse = raw_results_no_pulse.mean(axis=0) / (2**15)\n```\n\n\n```python\nplt.psd(avg_results_no_pulse, Fs=sample_rate)\nplt.show()\n```\n\n### Applying a `PULSE` before `RAW-CAPTURE`\n\nRecall how measurements are usually done: first there is a pulse on the `\"ro_tx\"` frame, and then a capture on the `\"ro_rx\"` frame. 
We modify our above program by including the `PULSE` operation associated with the vanilla measurement.\n\n\n```python\nraw_capture_pulse = Program( \n f'DECLARE raw REAL[{memory_length}]',\n pulse,\n f'RAW-CAPTURE {frame} {duration} raw'\n).wrap_in_numshots_loop(1000)\nprint(raw_capture_pulse)\n```\n\n DECLARE raw REAL[3360]\n NONBLOCKING PULSE 0 \"ro_tx\" flat(duration: 1.68e-06, iq: 1.0, scale: 0.04466835921509615, phase: 0.0, detuning: 0.0)\n RAW-CAPTURE 0 \"ro_rx\" 1.68e-06 raw[0]\n \n\n\n\n```python\nexe = qc.compiler.native_quil_to_executable(raw_capture_pulse)\nqc.run(exe)\n\nraw_results_pulse = qc.qam.read_memory(region_name='raw')\navg_results_pulse = raw_results_pulse.mean(axis=0) / 2**15\n```\n\n\n```python\nplt.psd(avg_results_pulse, Fs=sample_rate)\nplt.show()\n```\n\n### Capturing an excited qubit\n\nFinally, we extend the above by first exciting the qubit, by applying a `RX(pi)` gate.\n\n\n```python\nraw_capture_excited = Program( \n f'DECLARE raw REAL[{memory_length}]',\n f'RX(pi) {qubit}',\n pulse,\n f'RAW-CAPTURE {frame} {duration} raw'\n).wrap_in_numshots_loop(1000)\nprint(raw_capture_excited)\n```\n\n DECLARE raw REAL[3360]\n RX(pi) 0\n NONBLOCKING PULSE 0 \"ro_tx\" flat(duration: 1.68e-06, iq: 1.0, scale: 0.04466835921509615, phase: 0.0, detuning: 0.0)\n RAW-CAPTURE 0 \"ro_rx\" 1.68e-06 raw[0]\n \n\n\n\n```python\nexe = qc.compiler.native_quil_to_executable(raw_capture_excited)\nqc.run(exe)\n\nraw_results_excited = qc.qam.read_memory(region_name='raw')\navg_results_excited = raw_results_excited.mean(axis=0) / 2**15\n```\n\n\n```python\nplt.psd(avg_results_excited, Fs=sample_rate)\nplt.show()\n```\n\n### TODO\n\nDiscuss readout classification.\n\n## Some Restrictions May Apply\n\nPerforming a `RAW-CAPTURE` operation places a number of demands on the underlying hardware, and thus comes with a few constraints. We demonstrate these here.\n\n### Capture duration exceeds maximum length\n\nA `RAW-CAPTURE` operation can capture at most 8192 samples per shot, which puts a limit of $\\frac{8192}{f_s}$ seconds for the duration, where $f_s$ is the frame's sample rate.\n\n\n```python\nduration = 5e-6\nsamples = ceil(sample_rate*duration)\nrrr = Program(\n f'DECLARE raw REAL[{samples}]', \n f'RAW-CAPTURE 0 \"ro_rx\" {duration} raw'\n).wrap_in_numshots_loop(1)\n\ntry:\n exe = qc.compiler.native_quil_to_executable(rrr)\nexcept Exception as e:\n print(e)\n```\n\n ERROR: QPU Compiler native_quilt_to_binary failed: RAW-CAPTURE 0 \"ro_rx\" 5e-06 raw[0] would require 10000 samples, butat most 8192 are allowed. Consider using a duration of < 4.096e-06 seconds.\n\n\n### Number of samples in a job exceeds maximum\n\nThere is a total limit of $2^{24}$ samples per job, i.e. 
`duration * sample_rate * num_shots` cannot exceed $2^24$.\n\n\n```python\nduration = 1e-06\nsamples = ceil(sample_rate*duration)\nrrr = Program(\n f'DECLARE raw REAL[{samples}]', \n f'RAW-CAPTURE 0 \"ro_rx\" {duration} raw'\n).wrap_in_numshots_loop(100000)\n\ntry:\n exe = qc.compiler.native_quil_to_executable(rrr)\nexcept Exception as e:\n print(e)\n```\n\n ERROR: QPU Compiler native_quilt_to_binary failed: RAW-CAPTURE would require DMA buffer of size 381.4697265625 MB but the maximum allowed is 32.0 MB.\n For duration 1e-06 seconds this places a limit of at most 8388 shots.\n\n\n### `RAW-CAPTURE` precludes the use of other capture operations\n\nDue to the hardware requirements associated with `RAW-CAPTURE`, the following limits are currently imposed:\n\n* there can be at most one `RAW-CAPTURE` operation per program, and\n* if a program includes `RAW-CAPTURE`, then it cannot also include `CAPTURE` operations.\n\n\n```python\nduration = 1e-06\nsamples = ceil(sample_rate*duration)\nrrr = Program(\n f'DECLARE raw REAL[{samples}]', \n 'DECLARE ro BIT',\n 'MEASURE 1 ro',\n f'RAW-CAPTURE 0 \"ro_rx\" {duration} raw'\n)\n\ntry:\n exe = qc.compiler.native_quil_to_executable(rrr)\nexcept Exception as e:\n print(e)\n```\n\n ERROR: QPU Compiler native_quilt_to_binary failed: Capture conflict: RAW-CAPTURE 0 \"ro_rx\" 1e-06 raw[0] precludes the presence of any other capture instructions, but NONBLOCKING CAPTURE 1 \"ro_rx\" boxcar_kernel(duration: 2.36e-06, scale: 1.0, phase: 1.1499233858972862, detuning: 0.0) q1_unclassified[0] was observed.\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "592ae2cd578056c4263dbf6c4f81398e0cfbc49c", "size": 368462, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/source/quilt_raw_capture.ipynb", "max_stars_repo_name": "stjordanis/pyquil", "max_stars_repo_head_hexsha": "36987ecb78d5dc85d299dd62395b7669a1cedd5a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 677, "max_stars_repo_stars_event_min_datetime": "2017-01-09T23:20:22.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-26T10:57:49.000Z", "max_issues_repo_path": "docs/source/quilt_raw_capture.ipynb", "max_issues_repo_name": "stjordanis/pyquil", "max_issues_repo_head_hexsha": "36987ecb78d5dc85d299dd62395b7669a1cedd5a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 574, "max_issues_repo_issues_event_min_datetime": "2018-11-28T05:38:40.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-23T20:38:28.000Z", "max_forks_repo_path": "docs/source/quilt_raw_capture.ipynb", "max_forks_repo_name": "stjordanis/pyquil", "max_forks_repo_head_hexsha": "36987ecb78d5dc85d299dd62395b7669a1cedd5a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 202, "max_forks_repo_forks_event_min_datetime": "2018-11-30T06:36:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T15:38:18.000Z", "avg_line_length": 616.1571906355, "max_line_length": 264288, "alphanum_fraction": 0.9474274145, "converted": true, "num_tokens": 3092, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
NO", "lm_q1_score": 0.5273165085228825, "lm_q2_score": 0.5, "lm_q1q2_score": 0.26365825426144124}} {"text": "\n\nCopyright 2017 Google LLC.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\nhttps://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n# Performance RNN\n\n### ___Ian Simon, Sageev Oore, Curtis Hawthorne___ ([blog](https://magenta.tensorflow.org/performance-rnn)) ([code](https://github.com/tensorflow/magenta/tree/master/magenta/models/performance_rnn))\n\nPerformance RNN, an LSTM-based recurrent neural network designed to model polyphonic music with expressive timing and dynamics. This notebook shows you how to generate new performed compositions from a trained model. You'll see how to download a bundle containing a pre-trained model, instantiate and initialize the model and generate new polyphonic performances. The notebook also shows some hyperparameters useful for controlling generation, such as temperature.\n\n___\n\nThis colab notebook is self-contained and should run natively on google cloud. The code and checkpoints can be downloaded separately and run locally, which is recommended if you want to train your own model. Details on how to do this can be found in the [GitHub repo](https://github.com/tensorflow/magenta/tree/master/magenta/models/performance_rnn).\n\n# Environment Setup\n\nIncludes package installation for sequence synthesis. May take a few minutes.\n\n\n```\n!apt-get update -qq && apt-get install -qq libfluidsynth1 build-essential libasound2-dev libjack-dev\n!pip install -U magenta==0.5.0 pyfluidsynth\n\n# Hack to allow python to pick up the newly-installed fluidsynth lib.\nimport ctypes.util\norig_ctypes_util_find_library = ctypes.util.find_library\ndef proxy_find_library(lib):\n if lib == 'fluidsynth':\n return 'libfluidsynth.so.1'\n else:\n return orig_ctypes_util_find_library(lib)\nctypes.util.find_library = proxy_find_library\n\n# Download Salamander piano SoundFont.\n# Samples by Alexander Holm: https://archive.org/details/SalamanderGrandPianoV3\n# Converted to sf2 by John Nebauer: https://sites.google.com/site/soundfonts4u\n!gsutil -m cp gs://download.magenta.tensorflow.org/soundfonts/Yamaha-C5-Salamander-JNv5.1.sf2 /tmp/\n```\n\n Selecting previously unselected package libfluidsynth1:amd64.\n (Reading database ... 
110842 files and directories currently installed.)\n Preparing to unpack .../libfluidsynth1_1.1.9-1_amd64.deb ...\n Unpacking libfluidsynth1:amd64 (1.1.9-1) ...\n Processing triggers for libc-bin (2.27-3ubuntu1) ...\n Setting up libfluidsynth1:amd64 (1.1.9-1) ...\n Processing triggers for libc-bin (2.27-3ubuntu1) ...\n Collecting magenta==0.5.0\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/db/fc/3cae201d5141d49f167649d23fd3136f1c180dd0da495a6d06cefffe04c6/magenta-0.5.0-py2.py3-none-any.whl (1.4MB)\n \u001b[K 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.4MB 11.1MB/s \n \u001b[?25hCollecting pyfluidsynth\n Downloading https://files.pythonhosted.org/packages/43/eb/6b10b586d727a9706b5f2320121defdcb1cb9a2e301104549c7cca4a313e/pyFluidSynth-1.2.5-py2-none-any.whl\n Requirement already satisfied, skipping upgrade: numpy>=1.11.0 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (1.14.6)\n Requirement already satisfied, skipping upgrade: python-rtmidi in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (1.1.2)\n Requirement already satisfied, skipping upgrade: scipy>=0.18.1 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (1.1.0)\n Requirement already satisfied, skipping upgrade: mir-eval>=0.4 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (0.5)\n Requirement already satisfied, skipping upgrade: tensorflow>=1.12.0 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (1.12.0)\n Requirement already satisfied, skipping upgrade: pandas>=0.18.1 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (0.22.0)\n Requirement already satisfied, skipping upgrade: pretty-midi>=0.2.6 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (0.2.8)\n Requirement already satisfied, skipping upgrade: intervaltree>=2.1.0 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (2.1.0)\n Requirement already satisfied, skipping upgrade: librosa>=0.6.2 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (0.6.2)\n Collecting apache-beam>=2.8.0; python_version == \"2.7\" (from magenta==0.5.0)\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/d4/3d/90aa15779e884feebae4b0c26cad6f52cd4040397a94deb58dad9c8b7300/apache_beam-2.9.0-cp27-cp27mu-manylinux1_x86_64.whl (2.4MB)\n \u001b[K 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.4MB 8.3MB/s \n \u001b[?25hRequirement already satisfied, skipping upgrade: backports.tempfile in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (1.0)\n Requirement already satisfied, skipping upgrade: mido==1.2.6 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (1.2.6)\n Requirement already satisfied, skipping upgrade: Pillow>=3.4.2 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (4.0.0)\n Requirement already satisfied, skipping upgrade: protobuf in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (3.6.1)\n Requirement already satisfied, skipping upgrade: tensorflow-probability>=0.5.0 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (0.5.0)\n Requirement already satisfied, skipping upgrade: IPython in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (5.5.0)\n Requirement 
already satisfied, skipping upgrade: bokeh>=0.12.0 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (1.0.2)\n Requirement already satisfied, skipping upgrade: matplotlib>=1.5.3 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (2.1.2)\n Requirement already satisfied, skipping upgrade: futures; python_version == \"2.7\" in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (3.2.0)\n Collecting sonnet (from magenta==0.5.0)\n Downloading https://files.pythonhosted.org/packages/01/aa/641d5113d301cc67e2efd802bfd566ab5390676ea1117b2b8954ea6d60c1/sonnet-0.1.6.tar.gz\n Requirement already satisfied, skipping upgrade: absl-py in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (0.6.1)\n Requirement already satisfied, skipping upgrade: wheel in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (0.32.3)\n Requirement already satisfied, skipping upgrade: joblib>=0.12 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (0.13.0)\n Requirement already satisfied, skipping upgrade: tensor2tensor>=1.10.0 in /usr/local/lib/python2.7/dist-packages (from magenta==0.5.0) (1.11.0)\n Collecting sk-video (from magenta==0.5.0)\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/dd/3f/ce848b8b2062ad1ccf1449094a740c775f6c761339f411e44f1e090f23a7/sk_video-1.1.10-py2.py3-none-any.whl (2.3MB)\n \u001b[K 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.3MB 9.0MB/s \n \u001b[?25hRequirement already satisfied, skipping upgrade: future in /usr/local/lib/python2.7/dist-packages (from pyfluidsynth) (0.16.0)\n Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python2.7/dist-packages (from mir-eval>=0.4->magenta==0.5.0) (1.11.0)\n Requirement already satisfied, skipping upgrade: grpcio>=1.8.6 in /usr/local/lib/python2.7/dist-packages (from tensorflow>=1.12.0->magenta==0.5.0) (1.15.0)\n Requirement already satisfied, skipping upgrade: mock>=2.0.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow>=1.12.0->magenta==0.5.0) (2.0.0)\n Requirement already satisfied, skipping upgrade: keras-applications>=1.0.6 in /usr/local/lib/python2.7/dist-packages (from tensorflow>=1.12.0->magenta==0.5.0) (1.0.6)\n Requirement already satisfied, skipping upgrade: enum34>=1.1.6 in /usr/local/lib/python2.7/dist-packages (from tensorflow>=1.12.0->magenta==0.5.0) (1.1.6)\n Requirement already satisfied, skipping upgrade: keras-preprocessing>=1.0.5 in /usr/local/lib/python2.7/dist-packages (from tensorflow>=1.12.0->magenta==0.5.0) (1.0.5)\n Requirement already satisfied, skipping upgrade: gast>=0.2.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow>=1.12.0->magenta==0.5.0) (0.2.0)\n Requirement already satisfied, skipping upgrade: backports.weakref>=1.0rc1 in /usr/local/lib/python2.7/dist-packages (from tensorflow>=1.12.0->magenta==0.5.0) (1.0.post1)\n Requirement already satisfied, skipping upgrade: tensorboard<1.13.0,>=1.12.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow>=1.12.0->magenta==0.5.0) (1.12.1)\n Requirement already satisfied, skipping upgrade: termcolor>=1.1.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow>=1.12.0->magenta==0.5.0) (1.1.0)\n Requirement already satisfied, skipping upgrade: astor>=0.6.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow>=1.12.0->magenta==0.5.0) (0.7.1)\n Requirement already satisfied, skipping 
upgrade: pytz>=2011k in /usr/local/lib/python2.7/dist-packages (from pandas>=0.18.1->magenta==0.5.0) (2018.7)\n Requirement already satisfied, skipping upgrade: python-dateutil in /usr/local/lib/python2.7/dist-packages (from pandas>=0.18.1->magenta==0.5.0) (2.5.3)\n Requirement already satisfied, skipping upgrade: sortedcontainers in /usr/local/lib/python2.7/dist-packages (from intervaltree>=2.1.0->magenta==0.5.0) (2.1.0)\n Requirement already satisfied, skipping upgrade: audioread>=2.0.0 in /usr/local/lib/python2.7/dist-packages (from librosa>=0.6.2->magenta==0.5.0) (2.1.6)\n Requirement already satisfied, skipping upgrade: scikit-learn!=0.19.0,>=0.14.0 in /usr/local/lib/python2.7/dist-packages (from librosa>=0.6.2->magenta==0.5.0) (0.20.1)\n Requirement already satisfied, skipping upgrade: decorator>=3.0.0 in /usr/local/lib/python2.7/dist-packages (from librosa>=0.6.2->magenta==0.5.0) (4.3.0)\n Requirement already satisfied, skipping upgrade: resampy>=0.2.0 in /usr/local/lib/python2.7/dist-packages (from librosa>=0.6.2->magenta==0.5.0) (0.2.1)\n Requirement already satisfied, skipping upgrade: numba>=0.38.0 in /usr/local/lib/python2.7/dist-packages (from librosa>=0.6.2->magenta==0.5.0) (0.40.1)\n Requirement already satisfied, skipping upgrade: pyyaml<4.0.0,>=3.12 in /usr/local/lib/python2.7/dist-packages (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0) (3.13)\n Collecting hdfs<3.0.0,>=2.1.0 (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0)\n Downloading https://files.pythonhosted.org/packages/96/4e/f82bd349c7893e1595429ecc95233369bc33c9a26e4859991439bfa01c1f/hdfs-2.2.2.tar.gz\n Requirement already satisfied, skipping upgrade: httplib2<=0.11.3,>=0.8 in /usr/local/lib/python2.7/dist-packages (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0) (0.11.3)\n Requirement already satisfied, skipping upgrade: dill<=0.2.8.2,>=0.2.6 in /usr/local/lib/python2.7/dist-packages (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0) (0.2.8.2)\n Collecting pyvcf<0.7.0,>=0.6.8 (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0)\n Downloading https://files.pythonhosted.org/packages/20/b6/36bfb1760f6983788d916096193fc14c83cce512c7787c93380e09458c09/PyVCF-0.6.8.tar.gz\n Collecting fastavro<0.22,>=0.21.4 (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0)\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/e6/81/9ecee4b61514da16e6e11b5bfd3cc35112bc9c2846d846ec7cc1cda0b106/fastavro-0.21.16-cp27-cp27mu-manylinux1_x86_64.whl (1.1MB)\n \u001b[K 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.1MB 16.5MB/s \n \u001b[?25hRequirement already satisfied, skipping upgrade: typing<3.7.0,>=3.6.0; python_version < \"3.5.0\" in /usr/local/lib/python2.7/dist-packages (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0) (3.6.6)\n Collecting avro<2.0.0,>=1.8.1 (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0)\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/eb/27/143f124a7498f841317a92ced877150c5cb8d28a4109ec39666485925d00/avro-1.8.2.tar.gz (43kB)\n \u001b[K 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 51kB 21.2MB/s \n \u001b[?25hCollecting 
pydot<1.3,>=1.2.0 (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0)\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/c3/f1/e61d6dfe6c1768ed2529761a68f70939e2569da043e9f15a8d84bf56cadf/pydot-1.2.4.tar.gz (132kB)\n \u001b[K 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 133kB 27.5MB/s \n \u001b[?25hCollecting oauth2client<4,>=2.0.1 (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0)\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/c0/7b/bc893e35d6ca46a72faa4b9eaac25c687ce60e1fbe978993fe2de1b0ff0d/oauth2client-3.0.0.tar.gz (77kB)\n \u001b[K 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 81kB 21.9MB/s \n \u001b[?25hRequirement already satisfied, skipping upgrade: crcmod<2.0,>=1.7 in /usr/local/lib/python2.7/dist-packages (from apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0) (1.7)\n Requirement already satisfied, skipping upgrade: olefile in /usr/local/lib/python2.7/dist-packages (from Pillow>=3.4.2->magenta==0.5.0) (0.46)\n Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python2.7/dist-packages (from protobuf->magenta==0.5.0) (40.6.3)\n Requirement already satisfied, skipping upgrade: simplegeneric>0.8 in /usr/local/lib/python2.7/dist-packages (from IPython->magenta==0.5.0) (0.8.1)\n Requirement already satisfied, skipping upgrade: pickleshare in /usr/local/lib/python2.7/dist-packages (from IPython->magenta==0.5.0) (0.7.5)\n Requirement already satisfied, skipping upgrade: backports.shutil-get-terminal-size; python_version == \"2.7\" in /usr/local/lib/python2.7/dist-packages (from IPython->magenta==0.5.0) (1.0.0)\n Requirement already satisfied, skipping upgrade: pathlib2; python_version == \"2.7\" or python_version == \"3.3\" in /usr/local/lib/python2.7/dist-packages (from IPython->magenta==0.5.0) (2.3.3)\n Requirement already satisfied, skipping upgrade: pexpect; sys_platform != \"win32\" in /usr/local/lib/python2.7/dist-packages (from IPython->magenta==0.5.0) (4.6.0)\n Requirement already satisfied, skipping upgrade: traitlets>=4.2 in /usr/local/lib/python2.7/dist-packages (from IPython->magenta==0.5.0) (4.3.2)\n Requirement already satisfied, skipping upgrade: pygments in /usr/local/lib/python2.7/dist-packages (from IPython->magenta==0.5.0) (2.1.3)\n Requirement already satisfied, skipping upgrade: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python2.7/dist-packages (from IPython->magenta==0.5.0) (1.0.15)\n Requirement already satisfied, skipping upgrade: Jinja2>=2.7 in /usr/local/lib/python2.7/dist-packages (from bokeh>=0.12.0->magenta==0.5.0) (2.10)\n Requirement already satisfied, skipping upgrade: packaging>=16.8 in /usr/local/lib/python2.7/dist-packages (from bokeh>=0.12.0->magenta==0.5.0) (18.0)\n Requirement already satisfied, skipping upgrade: tornado>=4.3 in /usr/local/lib/python2.7/dist-packages (from bokeh>=0.12.0->magenta==0.5.0) (4.5.3)\n Requirement already satisfied, skipping upgrade: cycler>=0.10 in /usr/local/lib/python2.7/dist-packages (from matplotlib>=1.5.3->magenta==0.5.0) (0.10.0)\n Requirement already satisfied, skipping upgrade: backports.functools-lru-cache in /usr/local/lib/python2.7/dist-packages (from matplotlib>=1.5.3->magenta==0.5.0) (1.5)\n 
Requirement already satisfied, skipping upgrade: subprocess32 in /usr/local/lib/python2.7/dist-packages (from matplotlib>=1.5.3->magenta==0.5.0) (3.5.3)\n Requirement already satisfied, skipping upgrade: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python2.7/dist-packages (from matplotlib>=1.5.3->magenta==0.5.0) (2.3.0)\n Collecting networkx==1.8.1 (from sonnet->magenta==0.5.0)\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/1e/ce/c5f0d432ab38b114a18020c1f716e08347b580b6865111ef1314995627a2/networkx-1.8.1.tar.gz (806kB)\n \u001b[K 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 808kB 18.4MB/s \n \u001b[?25hRequirement already satisfied, skipping upgrade: requests in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (2.18.4)\n Requirement already satisfied, skipping upgrade: flask in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (1.0.2)\n Requirement already satisfied, skipping upgrade: gunicorn in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (19.9.0)\n Requirement already satisfied, skipping upgrade: mesh-tensorflow in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (0.0.5)\n Requirement already satisfied, skipping upgrade: tfds-nightly in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (0.0.2.dev201812180014)\n Requirement already satisfied, skipping upgrade: google-api-python-client in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (1.6.7)\n Requirement already satisfied, skipping upgrade: tqdm in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (4.28.1)\n Requirement already satisfied, skipping upgrade: bz2file in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (0.98)\n Requirement already satisfied, skipping upgrade: gevent in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (1.3.7)\n Requirement already satisfied, skipping upgrade: sympy in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (1.1.1)\n Requirement already satisfied, skipping upgrade: gym in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (0.10.9)\n Requirement already satisfied, skipping upgrade: h5py in /usr/local/lib/python2.7/dist-packages (from tensor2tensor>=1.10.0->magenta==0.5.0) (2.8.0)\n Requirement already satisfied, skipping upgrade: funcsigs>=1; python_version < \"3.3\" in /usr/local/lib/python2.7/dist-packages (from mock>=2.0.0->tensorflow>=1.12.0->magenta==0.5.0) (1.0.2)\n Requirement already satisfied, skipping upgrade: pbr>=0.11 in /usr/local/lib/python2.7/dist-packages (from mock>=2.0.0->tensorflow>=1.12.0->magenta==0.5.0) (5.1.1)\n Requirement already satisfied, skipping upgrade: werkzeug>=0.11.10 in /usr/local/lib/python2.7/dist-packages (from tensorboard<1.13.0,>=1.12.0->tensorflow>=1.12.0->magenta==0.5.0) (0.14.1)\n Requirement already satisfied, skipping upgrade: markdown>=2.6.8 in /usr/local/lib/python2.7/dist-packages (from tensorboard<1.13.0,>=1.12.0->tensorflow>=1.12.0->magenta==0.5.0) (3.0.1)\n Requirement already satisfied, skipping upgrade: llvmlite>=0.25.0dev0 in /usr/local/lib/python2.7/dist-packages (from 
numba>=0.38.0->librosa>=0.6.2->magenta==0.5.0) (0.26.0)\n Requirement already satisfied, skipping upgrade: singledispatch in /usr/local/lib/python2.7/dist-packages (from numba>=0.38.0->librosa>=0.6.2->magenta==0.5.0) (3.4.0.3)\n Collecting docopt (from hdfs<3.0.0,>=2.1.0->apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0)\n Downloading https://files.pythonhosted.org/packages/a2/55/8f8cab2afd404cf578136ef2cc5dfb50baa1761b68c9da1fb1e4eed343c9/docopt-0.6.2.tar.gz\n Requirement already satisfied, skipping upgrade: pyasn1>=0.1.7 in /usr/local/lib/python2.7/dist-packages (from oauth2client<4,>=2.0.1->apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0) (0.4.4)\n Requirement already satisfied, skipping upgrade: pyasn1-modules>=0.0.5 in /usr/local/lib/python2.7/dist-packages (from oauth2client<4,>=2.0.1->apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0) (0.2.2)\n Requirement already satisfied, skipping upgrade: rsa>=3.1.4 in /usr/local/lib/python2.7/dist-packages (from oauth2client<4,>=2.0.1->apache-beam>=2.8.0; python_version == \"2.7\"->magenta==0.5.0) (4.0)\n Requirement already satisfied, skipping upgrade: scandir; python_version < \"3.5\" in /usr/local/lib/python2.7/dist-packages (from pathlib2; python_version == \"2.7\" or python_version == \"3.3\"->IPython->magenta==0.5.0) (1.9.0)\n Requirement already satisfied, skipping upgrade: ptyprocess>=0.5 in /usr/local/lib/python2.7/dist-packages (from pexpect; sys_platform != \"win32\"->IPython->magenta==0.5.0) (0.6.0)\n Requirement already satisfied, skipping upgrade: ipython-genutils in /usr/local/lib/python2.7/dist-packages (from traitlets>=4.2->IPython->magenta==0.5.0) (0.2.0)\n Requirement already satisfied, skipping upgrade: wcwidth in /usr/local/lib/python2.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->IPython->magenta==0.5.0) (0.1.7)\n Requirement already satisfied, skipping upgrade: MarkupSafe>=0.23 in /usr/local/lib/python2.7/dist-packages (from Jinja2>=2.7->bokeh>=0.12.0->magenta==0.5.0) (1.1.0)\n Requirement already satisfied, skipping upgrade: certifi in /usr/local/lib/python2.7/dist-packages (from tornado>=4.3->bokeh>=0.12.0->magenta==0.5.0) (2018.11.29)\n Requirement already satisfied, skipping upgrade: backports_abc>=0.4 in /usr/local/lib/python2.7/dist-packages (from tornado>=4.3->bokeh>=0.12.0->magenta==0.5.0) (0.5)\n Requirement already satisfied, skipping upgrade: idna<2.7,>=2.5 in /usr/local/lib/python2.7/dist-packages (from requests->tensor2tensor>=1.10.0->magenta==0.5.0) (2.6)\n Requirement already satisfied, skipping upgrade: urllib3<1.23,>=1.21.1 in /usr/local/lib/python2.7/dist-packages (from requests->tensor2tensor>=1.10.0->magenta==0.5.0) (1.22)\n Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python2.7/dist-packages (from requests->tensor2tensor>=1.10.0->magenta==0.5.0) (3.0.4)\n Requirement already satisfied, skipping upgrade: click>=5.1 in /usr/local/lib/python2.7/dist-packages (from flask->tensor2tensor>=1.10.0->magenta==0.5.0) (7.0)\n Requirement already satisfied, skipping upgrade: itsdangerous>=0.24 in /usr/local/lib/python2.7/dist-packages (from flask->tensor2tensor>=1.10.0->magenta==0.5.0) (1.1.0)\n Requirement already satisfied, skipping upgrade: wrapt in /usr/local/lib/python2.7/dist-packages (from tfds-nightly->tensor2tensor>=1.10.0->magenta==0.5.0) (1.10.11)\n Requirement already satisfied, skipping upgrade: promise in /usr/local/lib/python2.7/dist-packages (from tfds-nightly->tensor2tensor>=1.10.0->magenta==0.5.0) (2.2.1)\n 
Requirement already satisfied, skipping upgrade: tensorflow-metadata in /usr/local/lib/python2.7/dist-packages (from tfds-nightly->tensor2tensor>=1.10.0->magenta==0.5.0) (0.9.0)\n Requirement already satisfied, skipping upgrade: functools32 in /usr/local/lib/python2.7/dist-packages (from tfds-nightly->tensor2tensor>=1.10.0->magenta==0.5.0) (3.2.3.post2)\n Requirement already satisfied, skipping upgrade: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python2.7/dist-packages (from google-api-python-client->tensor2tensor>=1.10.0->magenta==0.5.0) (3.0.0)\n Requirement already satisfied, skipping upgrade: greenlet>=0.4.14; platform_python_implementation == \"CPython\" in /usr/local/lib/python2.7/dist-packages (from gevent->tensor2tensor>=1.10.0->magenta==0.5.0) (0.4.15)\n Requirement already satisfied, skipping upgrade: mpmath>=0.19 in /usr/local/lib/python2.7/dist-packages (from sympy->tensor2tensor>=1.10.0->magenta==0.5.0) (1.1.0)\n Requirement already satisfied, skipping upgrade: pyglet>=1.2.0 in /usr/local/lib/python2.7/dist-packages (from gym->tensor2tensor>=1.10.0->magenta==0.5.0) (1.3.2)\n Requirement already satisfied, skipping upgrade: googleapis-common-protos in /usr/local/lib/python2.7/dist-packages (from tensorflow-metadata->tfds-nightly->tensor2tensor>=1.10.0->magenta==0.5.0) (1.5.5)\n Building wheels for collected packages: sonnet, hdfs, pyvcf, avro, pydot, oauth2client, networkx, docopt\n Running setup.py bdist_wheel for sonnet ... \u001b[?25l-\b \bdone\n \u001b[?25h Stored in directory: /root/.cache/pip/wheels/c9/eb/cf/78638749d5a9065ef666af154431cff8f2e82bd2d9dc0d96b2\n Running setup.py bdist_wheel for hdfs ... \u001b[?25l-\b \bdone\n \u001b[?25h Stored in directory: /root/.cache/pip/wheels/99/3f/b2/a09631bd4e2220031fa88949f4acc010cc48cc29011cb25922\n Running setup.py bdist_wheel for pyvcf ... \u001b[?25l-\b \b\\\b \b|\b \b/\b \bdone\n \u001b[?25h Stored in directory: /root/.cache/pip/wheels/81/91/41/3272543c0b9c61da9c525f24ee35bae6fe8f60d4858c66805d\n Running setup.py bdist_wheel for avro ... \u001b[?25l-\b \bdone\n \u001b[?25h Stored in directory: /root/.cache/pip/wheels/bf/0b/75/1b3517b7d36ddc8ba5d22c0df5eb01e83979f34420066d643e\n Running setup.py bdist_wheel for pydot ... \u001b[?25l-\b \bdone\n \u001b[?25h Stored in directory: /root/.cache/pip/wheels/6a/a5/14/25541ebcdeaf97a37b6d05c7ff15f5bd20f5e91b99d313e5b4\n Running setup.py bdist_wheel for oauth2client ... \u001b[?25l-\b \bdone\n \u001b[?25h Stored in directory: /root/.cache/pip/wheels/48/f7/87/b932f09c6335dbcf45d916937105a372ab14f353a9ca431d7d\n Running setup.py bdist_wheel for networkx ... \u001b[?25l-\b \b\\\b \b|\b \b/\b \bdone\n \u001b[?25h Stored in directory: /root/.cache/pip/wheels/b0/45/7a/70a261dd19bcb094df5e3629f305a81156c9c2f15d47b29c89\n Running setup.py bdist_wheel for docopt ... 
\u001b[?25l-\b \bdone\n \u001b[?25h Stored in directory: /root/.cache/pip/wheels/9b/04/dd/7daf4150b6d9b12949298737de9431a324d4b797ffd63f526e\n Successfully built sonnet hdfs pyvcf avro pydot oauth2client networkx docopt\n \u001b[31mapache-beam 2.9.0 has requirement pytz<=2018.4,>=2018.3, but you'll have pytz 2018.7 which is incompatible.\u001b[0m\n Installing collected packages: docopt, hdfs, pyvcf, fastavro, avro, pydot, oauth2client, apache-beam, networkx, sonnet, sk-video, magenta, pyfluidsynth\n Found existing installation: pydot 1.3.0\n Uninstalling pydot-1.3.0:\n Successfully uninstalled pydot-1.3.0\n Found existing installation: oauth2client 4.1.3\n Uninstalling oauth2client-4.1.3:\n Successfully uninstalled oauth2client-4.1.3\n Found existing installation: networkx 2.2\n Uninstalling networkx-2.2:\n Successfully uninstalled networkx-2.2\n Found existing installation: magenta 0.3.19\n Uninstalling magenta-0.3.19:\n Successfully uninstalled magenta-0.3.19\n Successfully installed apache-beam-2.9.0 avro-1.8.2 docopt-0.6.2 fastavro-0.21.16 hdfs-2.2.2 magenta-0.5.0 networkx-1.8.1 oauth2client-3.0.0 pydot-1.2.4 pyfluidsynth-1.2.5 pyvcf-0.6.8 sk-video-1.1.10 sonnet-0.1.6\n Copying gs://download.magenta.tensorflow.org/soundfonts/Yamaha-C5-Salamander-JNv5.1.sf2...\n / [1/1 files][591.9 MiB/591.9 MiB] 100% Done \n Operation completed over 1 objects/591.9 MiB. \n\n\n\n```\nimport os\nfrom magenta.models.performance_rnn import performance_sequence_generator\nfrom magenta.protobuf import generator_pb2\nfrom magenta.protobuf import music_pb2\n\nimport magenta.music as mm\n\n# Necessary until pyfluidsynth is updated (>1.2.5).\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n\n# Constants.\nBUNDLE_DIR = '/tmp/'\nMODEL_NAME = 'performance_with_dynamics'\nBUNDLE_NAME = MODEL_NAME + '.mag'\n```\n\n /usr/local/lib/python2.7/dist-packages/scipy/signal/_max_len_seq.py:8: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._max_len_seq_inner import _max_len_seq_inner\n /usr/local/lib/python2.7/dist-packages/scipy/signal/_upfirdn.py:36: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._upfirdn_apply import _output_len, _apply\n /usr/local/lib/python2.7/dist-packages/scipy/signal/spectral.py:10: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._spectral import _lombscargle\n /usr/local/lib/python2.7/dist-packages/scipy/signal/_peak_finding.py:13: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._peak_finding_utils import (_argmaxima1d, _select_by_peak_distance,\n /usr/local/lib/python2.7/dist-packages/sklearn/linear_model/base.py:35: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ..utils.seq_dataset import ArrayDataset, CSRDataset\n /usr/local/lib/python2.7/dist-packages/sklearn/linear_model/least_angle.py:23: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ..utils import arrayfuncs, as_float_array, check_X_y, deprecated\n /usr/local/lib/python2.7/dist-packages/sklearn/utils/random.py:10: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. 
Expected 96, got 88\n from ._random import sample_without_replacement\n /usr/local/lib/python2.7/dist-packages/sklearn/linear_model/coordinate_descent.py:30: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import cd_fast\n /usr/local/lib/python2.7/dist-packages/sklearn/linear_model/__init__.py:22: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .sgd_fast import Hinge, Log, ModifiedHuber, SquaredLoss, Huber\n /usr/local/lib/python2.7/dist-packages/sklearn/linear_model/__init__.py:22: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .sgd_fast import Hinge, Log, ModifiedHuber, SquaredLoss, Huber\n /usr/local/lib/python2.7/dist-packages/sklearn/linear_model/sag.py:12: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .sag_fast import sag\n /usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py:8: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import libsvm, liblinear\n /usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py:8: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import libsvm, liblinear\n /usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py:9: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import libsvm_sparse\n /usr/local/lib/python2.7/dist-packages/sklearn/neighbors/__init__.py:6: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .ball_tree import BallTree\n /usr/local/lib/python2.7/dist-packages/sklearn/neighbors/__init__.py:6: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .ball_tree import BallTree\n /usr/local/lib/python2.7/dist-packages/sklearn/neighbors/__init__.py:6: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .ball_tree import BallTree\n /usr/local/lib/python2.7/dist-packages/sklearn/neighbors/__init__.py:7: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .kd_tree import KDTree\n /usr/local/lib/python2.7/dist-packages/sklearn/decomposition/online_lda.py:28: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._online_lda import (mean_change, _dirichlet_expectation_1d,\n /usr/local/lib/python2.7/dist-packages/sklearn/utils/graph.py:16: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .graph_shortest_path import graph_shortest_path # noqa\n /usr/local/lib/python2.7/dist-packages/sklearn/isotonic.py:11: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._isotonic import _inplace_contiguous_isotonic_regression, _make_unique\n /usr/local/lib/python2.7/dist-packages/sklearn/manifold/t_sne.py:26: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import _utils\n /usr/local/lib/python2.7/dist-packages/sklearn/manifold/t_sne.py:27: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . 
import _barnes_hut_tsne\n /usr/local/lib/python2.7/dist-packages/sklearn/manifold/t_sne.py:27: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import _barnes_hut_tsne\n /usr/local/lib/python2.7/dist-packages/sklearn/tree/tree.py:40: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._criterion import Criterion\n /usr/local/lib/python2.7/dist-packages/sklearn/tree/tree.py:40: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._criterion import Criterion\n /usr/local/lib/python2.7/dist-packages/sklearn/tree/tree.py:40: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._criterion import Criterion\n /usr/local/lib/python2.7/dist-packages/sklearn/tree/tree.py:40: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._criterion import Criterion\n /usr/local/lib/python2.7/dist-packages/sklearn/cluster/k_means_.py:37: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import _k_means\n /usr/local/lib/python2.7/dist-packages/sklearn/cluster/k_means_.py:38: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._k_means_elkan import k_means_elkan\n /usr/local/lib/python2.7/dist-packages/sklearn/cluster/hierarchical.py:23: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import _hierarchical\n /usr/local/lib/python2.7/dist-packages/sklearn/cluster/hierarchical.py:23: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import _hierarchical\n /usr/local/lib/python2.7/dist-packages/sklearn/cluster/dbscan_.py:20: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._dbscan_inner import dbscan_inner\n /usr/local/lib/python2.7/dist-packages/sklearn/feature_extraction/hashing.py:14: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._hashing import transform as _hashing_transform\n\n\n\n```\nmm.notebook_utils.download_bundle(BUNDLE_NAME, BUNDLE_DIR)\n```\n\n# Generate a sequence\n\n\n```\nbundle = mm.sequence_generator_bundle.read_bundle_file(os.path.join(BUNDLE_DIR, BUNDLE_NAME))\ngenerator_map = performance_sequence_generator.get_generator_map()\ngenerator = generator_map[MODEL_NAME](checkpoint=None, bundle=bundle)\ngenerator.initialize()\ngenerator_options = generator_pb2.GeneratorOptions()\ngenerator_options.args['temperature'].float_value = 1.0 # Higher is more random; 1.0 is default. 
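# Other sampling knobs that Magenta sequence generators commonly expose are
# 'beam_size', 'branch_factor' and 'steps_per_iteration' (whether they are
# honored can depend on the generator, so treat these as an illustrative
# assumption rather than part of this notebook's recorded run); for example:
# generator_options.args['beam_size'].int_value = 1
# generator_options.args['branch_factor'].int_value = 1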
\ngenerate_section = generator_options.generate_sections.add(start_time=0, end_time=30)\nsequence = generator.generate(music_pb2.NoteSequence(), generator_options)\n\n# Play and view this masterpiece.\nmm.plot_sequence(sequence)\nmm.play_sequence(sequence, mm.midi_synth.fluidsynth,\n sf2_path='/tmp/Yamaha-C5-Salamander-JNv5.1.sf2')\n```\n\n WARNING:tensorflow:The saved meta_graph is possibly from an older release:\n 'model_variables' collection should be of type 'byte_list', but instead is of type 'node_list'.\n INFO:tensorflow:Restoring parameters from /tmp/tmpVlLvYQ/model.ckpt\n INFO:tensorflow:Need to generate 2899 more steps for this sequence, will try asking for 1740 RNN steps\n INFO:tensorflow:Beam search yields sequence with log-likelihood: -3987.715332 \n\n\n\n\n
    [Output: Bokeh piano-roll plot of the generated sequence and an HTML audio player; BokehJS loading markup omitted.]
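If you want to keep the result, you can also write the generated `NoteSequence` out as a MIDI file. The cell below is a minimal sketch rather than part of the original run: it assumes the `sequence_proto_to_midi_file` helper re-exported by `magenta.music` (imported as `mm` above) is available in this Magenta version, and the output path is just an example.

```python
# Write the generated NoteSequence to disk as a MIDI file (path is arbitrary).
output_midi = '/tmp/performance_rnn_sample.mid'
mm.sequence_proto_to_midi_file(sequence, output_midi)
print('Wrote {} notes to {}'.format(len(sequence.notes), output_midi))
```

The resulting MIDI file can then be opened in any DAW or re-rendered with the Salamander SoundFont downloaded during setup.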
\n\n", "meta": {"hexsha": "9661fc52093db74e95e940655f16407f9a1f7e1e", "size": 81264, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Performance_RNN Midi Music generator.ipynb", "max_stars_repo_name": "wolfram-roemhild/deep-learning-colab", "max_stars_repo_head_hexsha": "31976e0aab5cd3d0b7c22442b6c48a1c1453d8b9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-01-03T09:01:34.000Z", "max_stars_repo_stars_event_max_datetime": "2019-01-03T09:01:34.000Z", "max_issues_repo_path": "Performance_RNN Midi Music generator.ipynb", "max_issues_repo_name": "wolfram-roemhild/deep-learning-colab", "max_issues_repo_head_hexsha": "31976e0aab5cd3d0b7c22442b6c48a1c1453d8b9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Performance_RNN Midi Music generator.ipynb", "max_forks_repo_name": "wolfram-roemhild/deep-learning-colab", "max_forks_repo_head_hexsha": "31976e0aab5cd3d0b7c22442b6c48a1c1453d8b9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 97.5558223289, "max_line_length": 14770, "alphanum_fraction": 0.6201639102, "converted": true, "num_tokens": 11656, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5389832058771035, "lm_q2_score": 0.48828339529583464, "lm_q1q2_score": 0.26317654977310595}} {"text": "# Example 2: - Analyzing dim6 operators in WBF \n\nJohann Brehmer, Felix Kling, Kyle Cranmer 2018\n\nIn this tutorial we will demonstrate further functionalities of MadMiner. In particular we show how\n- to include smearing at parton level\n- to include backgrounds\n\n## Preparations\n\nLet us first load all the python libraries again\n\n\n```python\nimport sys\nimport os\nmadminer_src_path = \"/Users/felixkling/Documents/GitHub/madminer\"\nsys.path.append(madminer_src_path)\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\nimport numpy as np\nimport matplotlib\nfrom matplotlib import pyplot as plt\nfrom scipy.optimize import curve_fit\n% matplotlib inline\n\nfrom madminer.core import MadMiner\nfrom madminer.plotting import plot_2d_morphing_basis\nfrom madminer.sampling import combine_and_shuffle\nfrom madminer.lhe import LHEProcessor\nfrom madminer.sampling import SampleAugmenter\n```\n\nPlease enter here the path to your MG5 root directory. This notebook assumes that you installed Delphes and Pythia through MG5. **This needs to be updated by the user**\n\n\n```python\nmg_dir = '/Users/felixkling/work/MG5_aMC_v2_6_2'\n```\n\n## 1. Setup\n\n### 1a) Define parameter space\n\nLet us first define the theory parameters. Note the additional option `param_card_transform`, which can transform the theory parameter in MadMiner (`theta`) to the input parameter in the param_card. In this case we re-scale from GeV$^{-2}$ to TeV$^{-2}$. 
\n\n\n```python\nminer = MadMiner()\n\nminer.add_parameter(\n    lha_block='anoinputs',\n    lha_id=3,\n    parameter_name='FW',\n    morphing_max_power=2,\n    param_card_transform=\"0.000001*theta\",\n    parameter_range=(-10,10)\n)\nminer.add_parameter(\n    lha_block='anoinputs',\n    lha_id=7,\n    parameter_name='FWW',\n    morphing_max_power=2,\n    param_card_transform=\"0.000001*theta\",\n    parameter_range=(-10,10)\n)\n```\n\n 06:41 \n 06:41 ------------------------------------------------------------\n 06:41 | |\n 06:41 | MadMiner v0.1.0 |\n 06:41 | |\n 06:41 | Johann Brehmer, Kyle Cranmer, and Felix Kling |\n 06:41 | |\n 06:41 ------------------------------------------------------------\n 06:41 \n 06:41 Added parameter FW (LHA: anoinputs 3, maximal power in squared ME: (2,), range: (-10, 10))\n 06:41 Added parameter FWW (LHA: anoinputs 7, maximal power in squared ME: (2,), range: (-10, 10))\n\n\n### 1b) Define benchmark points (evaluation points for |M|^2)\n\nDefine morphing benchmarks by hand / per scan. \n\n\n```python\nminer.add_benchmark(\n    {'FW':0., 'FWW':0.},\n    'sm'\n)\nminer.add_benchmark(\n    {'FW':2, 'FWW':2},\n    'w'\n)\n```\n\n 06:41 Added benchmark sm: FWW = 0.00e+00, FW = 0.00e+00)\n 06:41 Added benchmark w: FWW = 2.00, FW = 2.00)\n\n\n\n```python\nminer.set_benchmarks_from_morphing(\n    keep_existing_benchmarks=True,\n    n_trials=1000,\n    max_overall_power=2\n)\n```\n\n 06:41 Optimizing basis for morphing\n 06:41 Added benchmark sm: FW = 0.00e+00, FWW = 0.00e+00)\n 06:41 Added benchmark w: FW = 2.00, FWW = 2.00)\n 06:41 Added benchmark morphing_basis_vector_2: FW = 2.14, FWW = -8.58e+00)\n 06:41 Added benchmark morphing_basis_vector_3: FW = -5.61e+00, FWW = -6.12e+00)\n 06:41 Added benchmark morphing_basis_vector_4: FW = -6.05e+00, FWW = 9.47)\n 06:41 Added benchmark morphing_basis_vector_5: FW = -7.78e+00, FWW = 0.21)\n\n\n\n```python\nfig = plot_2d_morphing_basis(\n    miner.morpher,\n    xlabel=r'$c_{W} / \\Lambda^2$ [TeV$^{-2}$]',\n    ylabel=r'$c_{WW} / \\Lambda^2$ [TeV$^{-2}$]',\n    xrange=(-10,10),\n    yrange=(-10,10)\n)\n```\n\n### 1c) Save setup\n\n\n```python\nminer.save('data/madminer_example.h5')\n```\n\n 06:41 Saving setup (including morphing) to data/madminer_example.h5\n\n\n### 1d) Comment on changing the UFO model to include detector smearing\n\nTo simulate the resolution of the invariant masses due to detector smearing, we use a simple trick: we change the Higgs propagator to reproduce the wanted (smeared) invariant mass distribution. The following example illustrates how to change the UFO model `dim6_for_information_parton` to reproduce the invariant mass distribution for $H\\to\\gamma\\gamma$. \n\n1. We base our simulation of detector effects on the CMS simulation in Fig. 6 (right) of [CMS-PAS-HIG-15-005](https://cds.cern.ch/record/2140979). In particular, we describe the peak region of the $H\\to\\gamma\\gamma$ diphoton mass distribution by a Gaussian \n\n \\begin{equation}\n Gaus\\left(m~|~N,m_H,\\sigma\\right)=\\frac{N}{\\sqrt{2\\pi}\\sigma}\\text{Exp}\\left[-\\frac{1}{2} \\left(\\frac{m-m_H}{\\sigma}\\right)^2\\right].\n \\end{equation}\n \n From a fit to the normalized distribution we obtain: $N=0.92$, $m_H=124.7$ and $\\sigma=1.67$. The normalization factor $N$ accounts for the fact that a Gaussian does not describe the tails of the distribution well. However, these tails will hardly contribute to the information and we account for the loss of signal rate in these tails through the normalization factor $N$. \n\n\n2. 
We now replace the usual Breit-Wigner propagator with the (square-root) of the Gaussian distribution: \n \n \\begin{equation} \n \\frac{1}{p^2-m_H^2+i \\Gamma m_H} \\rightarrow \\big[\\frac{N}{N_{BW}}\\frac{1}{\\sqrt{2\\pi}\\sigma}\\big]^{1/2} \\text{Exp}\\left[-\\frac{1}{4} \\left(\\frac{m-m_H}{\\sigma}\\right)^2\\right].\n \\end{equation} \n \n The normalization of the Breit-Wigner is given by $N_{BW}\\approx 2 m_H^2 \\Gamma_H / \\pi $. (Here I used $\\Gamma_H \\ll m_H$. For full formula see [Wikipedia](https://en.wikipedia.org/wiki/Relativistic_Breit%E2%80%93Wigner_distribution)). \n \n This can simply be implemented in the file `propagators.py` of the UFO model. First define the new propagator. Here I use the values obtained from the fit above and choose the prefactor $(\\sqrt{2\\pi}\\sigma N_{BW} / N)^{1/2} = 12.438$: \n \n `denominator_Higgs=\"12.438*cmath.exp(0.25*(cmath.sqrt(P('mu',id)*P('mu',id))-124.7)**2/1.5245**2)\" `\n \n Assign the new propagator to the scalar particles \n \n `S = Propagator(name = \"S\",numerator = \"complex(0,1)\",denominator = denominator_Higgs)`\n\n\n3. By default, MadGraph doesn't use the content of `propagators.py`. So we have to make sure it's used. This can simply be done by changing the particle information in the `particles.py` file of the UFO model:\n\n `H = Particle(pdg_code = 25, name = 'H', ... , propagator = Prop.S)`\n \n \n4. Finally, we will change the `bwcutoff` in `run_card.dat` to ensure that MadGraph integrates over the desired range. The choice of `bwcutoff` depends on the chosen value of the Higgs width in the `param_card.dat`. For $\\Gamma_H=4.07$ MeV I would suggest choosing `bwcutoff=2000`, which then covers an integration range of $m_H \\pm 2000\\times \\Gamma_H = 116.9\\dots 133.1$ GeV. \n\n Note: if you get an error message such as `IEEE_UNDERFLOW_FLAG` or `IEEE_DENORMAL`, it probably means you have chosen too large a value for `bwcutoff`.\n\n## 2. Event Generation\n\nLoad MadMiner again.\n\n\n```python\nminer = MadMiner(debug=False)\nminer.load('data/madminer_example.h5')\n```\n\n 06:41 Found 2 parameters:\n 06:41 FW (LHA: anoinputs 3, maximal power in squared ME: (2,), range: (-10.0, 10.0))\n 06:41 FWW (LHA: anoinputs 7, maximal power in squared ME: (2,), range: (-10.0, 10.0))\n 06:41 Found 6 benchmarks:\n 06:41 sm: FW = 0.00e+00, FWW = 0.00e+00\n 06:41 w: FW = 2.00, FWW = 2.00\n 06:41 morphing_basis_vector_2: FW = 2.14, FWW = -8.58e+00\n 06:41 morphing_basis_vector_3: FW = -5.61e+00, FWW = -6.12e+00\n 06:41 morphing_basis_vector_4: FW = -6.05e+00, FWW = 9.47\n 06:41 morphing_basis_vector_5: FW = -7.78e+00, FWW = 0.21\n 06:41 Found morphing setup with 6 components\n\n\nWe now run MadMiner for the signal. Note that we call a new model file from the `ufo_model_directory`.\n\nHere we introduce a new option, `MadMiner.run_multiple()`, which allows us to start multiple runs with different run cards or different choices of `sample_benchmark`. In the following example, we will run MadMiner `nrun` times with the same run_card and the same sampling benchmark. 
\n\n\n```python\nnrun=5\n\nminer.run_multiple(\n mg_directory=mg_dir,\n log_directory='logs/wbf_signal',\n mg_process_directory='./mg_processes/wbf_signal',\n proc_card_file='cards/proc_card_wbf_signal.dat',\n param_card_template_file='cards/param_card_template.dat',\n run_card_files=['cards/run_card_wbf_signal.dat' for i in range(nrun)],\n pythia8_card_file=None,\n ufo_model_directory='cards/dim6_for_information_parton',\n sample_benchmarks=['sm'],\n initial_command='source ~/.bashrc'\n)\n```\n\n 06:41 Generating MadGraph process folder from cards/proc_card_wbf_signal.dat at ./mg_processes/wbf_signal\n 06:41 Run 0\n 06:41 Sampling from benchmark: sm\n 06:41 Original run card: cards/run_card_wbf_signal.dat\n 06:41 Original Pythia8 card: None\n 06:41 Copied run card: /madminer/cards/run_card_0.dat\n 06:41 Copied Pythia8 card: None\n 06:41 Param card: /madminer/cards/param_card_0.dat\n 06:41 Reweight card: /madminer/cards/reweight_card_0.dat\n 06:41 Log file: run_0.log\n 06:41 Creating param and reweight cards in ./mg_processes/wbf_signal//madminer/cards/param_card_0.dat, ./mg_processes/wbf_signal//madminer/cards/reweight_card_0.dat\n 06:41 Starting MadGraph and Pythia in ./mg_processes/wbf_signal\n 06:45 Run 1\n 06:45 Sampling from benchmark: sm\n 06:45 Original run card: cards/run_card_wbf_signal.dat\n 06:45 Original Pythia8 card: None\n 06:45 Copied run card: /madminer/cards/run_card_1.dat\n 06:45 Copied Pythia8 card: None\n 06:45 Param card: /madminer/cards/param_card_1.dat\n 06:45 Reweight card: /madminer/cards/reweight_card_1.dat\n 06:45 Log file: run_1.log\n 06:45 Creating param and reweight cards in ./mg_processes/wbf_signal//madminer/cards/param_card_1.dat, ./mg_processes/wbf_signal//madminer/cards/reweight_card_1.dat\n 06:45 Starting MadGraph and Pythia in ./mg_processes/wbf_signal\n 06:51 Run 2\n 06:51 Sampling from benchmark: sm\n 06:51 Original run card: cards/run_card_wbf_signal.dat\n 06:51 Original Pythia8 card: None\n 06:51 Copied run card: /madminer/cards/run_card_2.dat\n 06:51 Copied Pythia8 card: None\n 06:51 Param card: /madminer/cards/param_card_2.dat\n 06:51 Reweight card: /madminer/cards/reweight_card_2.dat\n 06:51 Log file: run_2.log\n 06:51 Creating param and reweight cards in ./mg_processes/wbf_signal//madminer/cards/param_card_2.dat, ./mg_processes/wbf_signal//madminer/cards/reweight_card_2.dat\n 06:51 Starting MadGraph and Pythia in ./mg_processes/wbf_signal\n 06:56 Run 3\n 06:56 Sampling from benchmark: sm\n 06:56 Original run card: cards/run_card_wbf_signal.dat\n 06:56 Original Pythia8 card: None\n 06:56 Copied run card: /madminer/cards/run_card_3.dat\n 06:56 Copied Pythia8 card: None\n 06:56 Param card: /madminer/cards/param_card_3.dat\n 06:56 Reweight card: /madminer/cards/reweight_card_3.dat\n 06:56 Log file: run_3.log\n 06:56 Creating param and reweight cards in ./mg_processes/wbf_signal//madminer/cards/param_card_3.dat, ./mg_processes/wbf_signal//madminer/cards/reweight_card_3.dat\n 06:56 Starting MadGraph and Pythia in ./mg_processes/wbf_signal\n 07:00 Run 4\n 07:00 Sampling from benchmark: sm\n 07:00 Original run card: cards/run_card_wbf_signal.dat\n 07:00 Original Pythia8 card: None\n 07:00 Copied run card: /madminer/cards/run_card_4.dat\n 07:00 Copied Pythia8 card: None\n 07:00 Param card: /madminer/cards/param_card_4.dat\n 07:00 Reweight card: /madminer/cards/reweight_card_4.dat\n 07:00 Log file: run_4.log\n 07:00 Creating param and reweight cards in ./mg_processes/wbf_signal//madminer/cards/param_card_4.dat, 
./mg_processes/wbf_signal//madminer/cards/reweight_card_4.dat\n 07:00 Starting MadGraph and Pythia in ./mg_processes/wbf_signal\n\n\nIt is possible to start multiple processes based on the same `MadMiner` instance. This can be used to combine samples sampled according to different benchmarks, and to add reducible backgrounds. \n\nFor the latter, a useful option is the `is_background` switch, which should be used for processes that do *not* depend on the parameters theta. `is_background=True` will disable the reweighting and re-use the same weights for all cross sections.\n\n\n```python\nminer.run_multiple(\n mg_directory=mg_dir,\n log_directory='logs/ggF_background',\n mg_process_directory='./mg_processes/ggF_background',\n proc_card_file='cards/proc_card_ggF_background.dat',\n param_card_template_file='cards/param_card_template.dat', \n run_card_files=['cards/run_card_ggF_background.dat' for i in range(nrun)],\n pythia8_card_file=None, \n ufo_model_directory='cards/dim6_for_information_parton',\n sample_benchmarks=['sm'],\n is_background=True,\n initial_command='source ~/.bashrc'\n)\n```\n\n 07:04 Generating MadGraph process folder from cards/proc_card_ggF_background.dat at ./mg_processes/ggF_background\n 07:04 Run 0\n 07:04 Sampling from benchmark: sm\n 07:04 Original run card: cards/run_card_ggF_background.dat\n 07:04 Original Pythia8 card: None\n 07:04 Copied run card: /madminer/cards/run_card_0.dat\n 07:04 Copied Pythia8 card: None\n 07:04 Param card: /madminer/cards/param_card_0.dat\n 07:04 Reweight card: /madminer/cards/reweight_card_0.dat\n 07:04 Log file: run_0.log\n 07:04 Creating param and reweight cards in ./mg_processes/ggF_background//madminer/cards/param_card_0.dat, ./mg_processes/ggF_background//madminer/cards/reweight_card_0.dat\n 07:04 Starting MadGraph and Pythia in ./mg_processes/ggF_background\n 07:17 Run 1\n 07:17 Sampling from benchmark: sm\n 07:17 Original run card: cards/run_card_ggF_background.dat\n 07:17 Original Pythia8 card: None\n 07:17 Copied run card: /madminer/cards/run_card_1.dat\n 07:17 Copied Pythia8 card: None\n 07:17 Param card: /madminer/cards/param_card_1.dat\n 07:17 Reweight card: /madminer/cards/reweight_card_1.dat\n 07:17 Log file: run_1.log\n 07:17 Creating param and reweight cards in ./mg_processes/ggF_background//madminer/cards/param_card_1.dat, ./mg_processes/ggF_background//madminer/cards/reweight_card_1.dat\n 07:17 Starting MadGraph and Pythia in ./mg_processes/ggF_background\n 07:31 Run 2\n 07:31 Sampling from benchmark: sm\n 07:31 Original run card: cards/run_card_ggF_background.dat\n 07:31 Original Pythia8 card: None\n 07:31 Copied run card: /madminer/cards/run_card_2.dat\n 07:31 Copied Pythia8 card: None\n 07:31 Param card: /madminer/cards/param_card_2.dat\n 07:31 Reweight card: /madminer/cards/reweight_card_2.dat\n 07:31 Log file: run_2.log\n 07:31 Creating param and reweight cards in ./mg_processes/ggF_background//madminer/cards/param_card_2.dat, ./mg_processes/ggF_background//madminer/cards/reweight_card_2.dat\n 07:31 Starting MadGraph and Pythia in ./mg_processes/ggF_background\n 07:48 Run 3\n 07:48 Sampling from benchmark: sm\n 07:48 Original run card: cards/run_card_ggF_background.dat\n 07:48 Original Pythia8 card: None\n 07:48 Copied run card: /madminer/cards/run_card_3.dat\n 07:48 Copied Pythia8 card: None\n 07:48 Param card: /madminer/cards/param_card_3.dat\n 07:48 Reweight card: /madminer/cards/reweight_card_3.dat\n 07:48 Log file: run_3.log\n 07:48 Creating param and reweight cards in 
./mg_processes/ggF_background//madminer/cards/param_card_3.dat, ./mg_processes/ggF_background//madminer/cards/reweight_card_3.dat\n 07:48 Starting MadGraph and Pythia in ./mg_processes/ggF_background\n 08:03 Run 4\n 08:03 Sampling from benchmark: sm\n 08:03 Original run card: cards/run_card_ggF_background.dat\n 08:03 Original Pythia8 card: None\n 08:03 Copied run card: /madminer/cards/run_card_4.dat\n 08:03 Copied Pythia8 card: None\n 08:03 Param card: /madminer/cards/param_card_4.dat\n 08:03 Reweight card: /madminer/cards/reweight_card_4.dat\n 08:03 Log file: run_4.log\n 08:03 Creating param and reweight cards in ./mg_processes/ggF_background//madminer/cards/param_card_4.dat, ./mg_processes/ggF_background//madminer/cards/reweight_card_4.dat\n 08:03 Starting MadGraph and Pythia in ./mg_processes/ggF_background\n\n\n## 3. Extract Parton Level Observables / Weights \n\n### 3a) Defining observables\n\nLet us first define observables. We will do that as a function, which we then can use for both signal and backgrounds. \n\n\n```python\ndef setup_observables(lhep):\n lhep.add_observable('pt_jhard', '(p[0].pt>p[1].pt)*p[0].pt+(p[0].ptp[1].pt)*p[1].pt')\n lhep.add_observable('dphijj' , 'abs(p[0].deltaphi(p[1]))')\n lhep.add_observable('m_aa' , '(p[2] + p[3]).m')\n lhep.add_observable('m_jj' , '(p[0] + p[1]).m')\n lhep.add_observable('m_jjaa', '(p[0] + p[1] + p[2] + p[3]).m')\n lhep.add_observable('deta_jj', 'abs(p[0].eta - p[1].eta)')\n\n lhep.add_observable('px_j1', 'p[0].px')\n lhep.add_observable('px_j2', 'p[1].px')\n lhep.add_observable('px_a1', 'p[2].px')\n lhep.add_observable('px_a2', 'p[3].px')\n lhep.add_observable('px_h' , '(p[2]+p[3]).px')\n\n lhep.add_observable('py_j1', 'p[0].py')\n lhep.add_observable('py_j2', 'p[1].py')\n lhep.add_observable('py_a1', 'p[2].py')\n lhep.add_observable('py_a2', 'p[3].py')\n lhep.add_observable('py_h' , '(p[2]+p[3]).py')\n\n lhep.add_observable('pz_j1', 'p[0].pz')\n lhep.add_observable('pz_j2', 'p[1].pz')\n lhep.add_observable('pz_a1', 'p[2].pz')\n lhep.add_observable('pz_a2', 'p[3].pz')\n lhep.add_observable('pz_h' , '(p[2]+p[3]).pz')\n\n lhep.add_observable('en_j1', 'p[0].e')\n lhep.add_observable('en_j2', 'p[1].e')\n lhep.add_observable('en_a1', 'p[2].e')\n lhep.add_observable('en_a2', 'p[3].e')\n lhep.add_observable('en_h' , '(p[2]+p[3]).e')\n\n lhep.add_observable('pt_j1', 'p[0].pt')\n lhep.add_observable('pt_j2', 'p[1].pt')\n lhep.add_observable('pt_a1', 'p[2].pt')\n lhep.add_observable('pt_a2', 'p[3].pt')\n lhep.add_observable('pt_h' , '(p[2]+p[3]).pt')\n\n lhep.add_observable('eta_j1', 'p[0].eta')\n lhep.add_observable('eta_j2', 'p[1].eta')\n lhep.add_observable('eta_a1', 'p[2].eta')\n lhep.add_observable('eta_a2', 'p[3].eta')\n lhep.add_observable('eta_h' , '(p[2]+p[3]).eta')\n\n lhep.add_observable('dphi_j1j2', 'p[0].deltaphi(p[1])')\n lhep.add_observable('dphi_j1j2', 'p[0].deltaphi(p[1])')\n lhep.add_observable('dphi_a1a2', 'p[2].deltaphi(p[3])')\n lhep.add_observable('dphi_j1h' , 'p[0].deltaphi(p[2]+p[3])')\n lhep.add_observable('dphi_j2h' , 'p[1].deltaphi(p[2]+p[3])')\n```\n\n### 3b) Setting up and run LHEProcessor for Signal\n\nSet up the processor. 
Here we include multiple LHE files\n\n\n```python\nlhep = LHEProcessor()\nfor run in range (nrun):\n run_str = str(run+1)\n if len(run_str) < 2:\n run_str = '0' + run_str\n lhep.add_lhe_sample(\n 'mg_processes/wbf_signal/Events/run_{}/unweighted_events.lhe.gz'.format(run_str),\n sampling_benchmark=\"sm\",\n rescale_factor=1./nrun)\nlhep.read_benchmark_names('data/madminer_example.h5')\n```\n\n 08:20 Adding LHE sample at mg_processes/wbf_signal/Events/run_01/unweighted_events.lhe.gz\n 08:20 Adding LHE sample at mg_processes/wbf_signal/Events/run_02/unweighted_events.lhe.gz\n 08:20 Adding LHE sample at mg_processes/wbf_signal/Events/run_03/unweighted_events.lhe.gz\n 08:20 Adding LHE sample at mg_processes/wbf_signal/Events/run_04/unweighted_events.lhe.gz\n 08:20 Adding LHE sample at mg_processes/wbf_signal/Events/run_05/unweighted_events.lhe.gz\n\n\nAdd observbales, analyse samples and save\n\n\n```python\nsetup_observables(lhep)\n```\n\n 08:20 Adding (not required) observable pt_jhard = (p[0].pt>p[1].pt)*p[0].pt+(p[0].ptp[1].pt)*p[1].pt\n 08:20 Adding (not required) observable dphijj = abs(p[0].deltaphi(p[1]))\n 08:20 Adding (not required) observable m_aa = (p[2] + p[3]).m\n 08:20 Adding (not required) observable m_jj = (p[0] + p[1]).m\n 08:20 Adding (not required) observable m_jjaa = (p[0] + p[1] + p[2] + p[3]).m\n 08:20 Adding (not required) observable deta_jj = abs(p[0].eta - p[1].eta)\n 08:20 Adding (not required) observable px_j1 = p[0].px\n 08:20 Adding (not required) observable px_j2 = p[1].px\n 08:20 Adding (not required) observable px_a1 = p[2].px\n 08:20 Adding (not required) observable px_a2 = p[3].px\n 08:20 Adding (not required) observable px_h = (p[2]+p[3]).px\n 08:20 Adding (not required) observable py_j1 = p[0].py\n 08:20 Adding (not required) observable py_j2 = p[1].py\n 08:20 Adding (not required) observable py_a1 = p[2].py\n 08:20 Adding (not required) observable py_a2 = p[3].py\n 08:20 Adding (not required) observable py_h = (p[2]+p[3]).py\n 08:20 Adding (not required) observable pz_j1 = p[0].pz\n 08:20 Adding (not required) observable pz_j2 = p[1].pz\n 08:20 Adding (not required) observable pz_a1 = p[2].pz\n 08:20 Adding (not required) observable pz_a2 = p[3].pz\n 08:20 Adding (not required) observable pz_h = (p[2]+p[3]).pz\n 08:20 Adding (not required) observable en_j1 = p[0].e\n 08:20 Adding (not required) observable en_j2 = p[1].e\n 08:20 Adding (not required) observable en_a1 = p[2].e\n 08:20 Adding (not required) observable en_a2 = p[3].e\n 08:20 Adding (not required) observable en_h = (p[2]+p[3]).e\n 08:20 Adding (not required) observable pt_j1 = p[0].pt\n 08:20 Adding (not required) observable pt_j2 = p[1].pt\n 08:20 Adding (not required) observable pt_a1 = p[2].pt\n 08:20 Adding (not required) observable pt_a2 = p[3].pt\n 08:20 Adding (not required) observable pt_h = (p[2]+p[3]).pt\n 08:20 Adding (not required) observable eta_j1 = p[0].eta\n 08:20 Adding (not required) observable eta_j2 = p[1].eta\n 08:20 Adding (not required) observable eta_a1 = p[2].eta\n 08:20 Adding (not required) observable eta_a2 = p[3].eta\n 08:20 Adding (not required) observable eta_h = (p[2]+p[3]).eta\n 08:20 Adding (not required) observable dphi_j1j2 = p[0].deltaphi(p[1])\n 08:20 Adding (not required) observable dphi_j1j2 = p[0].deltaphi(p[1])\n 08:20 Adding (not required) observable dphi_a1a2 = p[2].deltaphi(p[3])\n 08:20 Adding (not required) observable dphi_j1h = p[0].deltaphi(p[2]+p[3])\n 08:20 Adding (not required) observable dphi_j2h = 
p[1].deltaphi(p[2]+p[3])\n\n\n\n```python\nlhep.analyse_lhe_samples()\nlhep.save('data/madminer_lhedata_wbf_signal.h5', 'data/madminer_example.h5')\n```\n\n 08:20 Analysing LHE sample mg_processes/wbf_signal/Events/run_01/unweighted_events.lhe.gz\n 08:20 Analysing LHE sample mg_processes/wbf_signal/Events/run_02/unweighted_events.lhe.gz\n 08:21 Analysing LHE sample mg_processes/wbf_signal/Events/run_03/unweighted_events.lhe.gz\n 08:21 Analysing LHE sample mg_processes/wbf_signal/Events/run_04/unweighted_events.lhe.gz\n 08:21 Analysing LHE sample mg_processes/wbf_signal/Events/run_05/unweighted_events.lhe.gz\n 08:22 Loading HDF5 data from data/madminer_example.h5 and saving file to data/madminer_lhedata_wbf_signal.h5\n\n\nLet's make a quick cross check by plotting a distributions (Warning: plot's don't look very pretty)\n\n\n```python\nmycolors=[\"black\",\"red\",\"orange\",\"yellow\",\"green\",\"blue\",\"purple\"]\n\nfig = plt.figure(figsize=(5,5))\nfor i, weights in enumerate(lhep.weights):\n plt.hist(lhep.observations['m_jj'], range=(0.,5000.), bins=20, histtype='step', weights=weights,color=mycolors[i])\nplt.show()\n\nfig = plt.figure(figsize=(5,5))\nfor i, weights in enumerate(lhep.weights):\n plt.hist(lhep.observations['dphi_j1j2'], range=(-3.2,3.2), bins=20, histtype='step', weights=weights,color=mycolors[i])\nplt.show()\n```\n\n### 3c) Cross Check: Does the smearing work? \n\nHere I explicitly check the smearing of the mass peak and compare it to the wanted distribution obtained from the experimental collaboration. \n\n\n```python\n#Define my fitting function\ndef myfunction(x, mean, amplitude, standard_deviation):\n return amplitude/np.sqrt(2.0*3.1415*standard_deviation**2) * np.exp( - 0.5*((x - mean) / standard_deviation) ** 2)\n\n#Function to do the plotting\ndef smearing_validation_plot(filename,lheprocessor,inputrange,label):\n \n #Get Data from Experiment / MadMiner\n exp_data , exp_weights = np.loadtxt(filename)[:,0] , np.loadtxt(filename)[:,1]\n madminer_data , madminer_weights = lheprocessor.observations['m_aa'] , lheprocessor.weights[0]\n\n #Plot Exp. 
Data\n fig = plt.figure(figsize=(5,5))\n bin_heights, bin_borders, _ = plt.hist(exp_data, weights=exp_weights,normed=True,\n range=(inputrange[0],inputrange[1]), bins=inputrange[2], \n histtype='step',color='Red',label=label)\n\n #Fit to Experiment\n bin_centers = bin_borders[:-1] + np.diff(bin_borders) / 2\n bestfit, _ = curve_fit(myfunction, bin_centers, bin_heights, p0=[125, 1, 1])\n fitplotrange=np.arange(inputrange[0], inputrange[1], 0.01)\n plt.plot(fitplotrange,myfunction(fitplotrange,bestfit[0],bestfit[1],bestfit[2]),\n color='Black',linestyle='dashed', label='Fit to Experiment')\n \n #Plot MadMiner\n norm = sum(madminer_weights)\n print ('The total cross section is: %.6f pb'%(norm),)\n bin_heights_mm, _ , _ = plt.hist(madminer_data,\n weights = madminer_weights/norm*inputrange[2]/(inputrange[1]-inputrange[0])*bestfit[1],\n range=(inputrange[0],inputrange[1]), bins=inputrange[2], \n histtype='step',color='Blue',label='MadMiner') \n\n #Finish Plot\n plt.legend(loc='upper left')\n upperbound=max(max(bin_heights),max(bin_heights_mm))\n plt.text(inputrange[0],0.75*upperbound,\n r'$\\frac{N}{\\sqrt{2\\pi}\\sigma}\\exp\\left[-\\frac{1}{2}\\left(\\frac{m-m_H}{\\sigma}\\right)^2\\right]$',\n fontsize=14)\n plt.text(inputrange[0],0.65*upperbound,r'$m_H$=%.3f'%(bestfit[0]),fontsize=14)\n plt.text(inputrange[0],0.60*upperbound,r'$N$=%.3f'%(bestfit[1]),fontsize=14)\n plt.text(inputrange[0],0.55*upperbound,r'$\\sigma$=%.3f'%(bestfit[2]),fontsize=14)\n\n plt.show()\n\n#Call Function\nsmearing_validation_plot('cards/smearing_data/h_2_aa.txt',lhep,[115,130,30],'CMS: PAS HIG-15-005')\n```\n\n### 3d) Setting up and run LHEProcessor for Background\n\nRepeat procedure for ggF background\n\n\n```python\nlhep = LHEProcessor()\n\nfor run in range (nrun):\n run_str = str(run+1)\n if len(run_str) < 2:\n run_str = '0' + run_str\n lhep.add_lhe_sample(\n 'mg_processes/ggF_background/Events/run_{}/unweighted_events.lhe.gz'.format(run_str),\n sampling_benchmark=\"sm\",\n is_background=True,\n rescale_factor=1./nrun)\n \nlhep.read_benchmark_names('data/madminer_example.h5')\nsetup_observables(lhep)\nlhep.analyse_lhe_samples()\nlhep.save('data/madminer_lhedata_ggF_background.h5', 'data/madminer_example.h5')\n```\n\n 08:22 Adding LHE sample at mg_processes/ggF_background/Events/run_01/unweighted_events.lhe.gz\n 08:22 Adding LHE sample at mg_processes/ggF_background/Events/run_02/unweighted_events.lhe.gz\n 08:22 Adding LHE sample at mg_processes/ggF_background/Events/run_03/unweighted_events.lhe.gz\n 08:22 Adding LHE sample at mg_processes/ggF_background/Events/run_04/unweighted_events.lhe.gz\n 08:22 Adding LHE sample at mg_processes/ggF_background/Events/run_05/unweighted_events.lhe.gz\n 08:22 Adding (not required) observable pt_jhard = (p[0].pt>p[1].pt)*p[0].pt+(p[0].ptp[1].pt)*p[1].pt\n 08:22 Adding (not required) observable dphijj = abs(p[0].deltaphi(p[1]))\n 08:22 Adding (not required) observable m_aa = (p[2] + p[3]).m\n 08:22 Adding (not required) observable m_jj = (p[0] + p[1]).m\n 08:22 Adding (not required) observable m_jjaa = (p[0] + p[1] + p[2] + p[3]).m\n 08:22 Adding (not required) observable deta_jj = abs(p[0].eta - p[1].eta)\n 08:22 Adding (not required) observable px_j1 = p[0].px\n 08:22 Adding (not required) observable px_j2 = p[1].px\n 08:22 Adding (not required) observable px_a1 = p[2].px\n 08:22 Adding (not required) observable px_a2 = p[3].px\n 08:22 Adding (not required) observable px_h = (p[2]+p[3]).px\n 08:22 Adding (not required) observable py_j1 = p[0].py\n 08:22 Adding (not required) 
observable py_j2 = p[1].py\n 08:22 Adding (not required) observable py_a1 = p[2].py\n 08:22 Adding (not required) observable py_a2 = p[3].py\n 08:22 Adding (not required) observable py_h = (p[2]+p[3]).py\n 08:22 Adding (not required) observable pz_j1 = p[0].pz\n 08:22 Adding (not required) observable pz_j2 = p[1].pz\n 08:22 Adding (not required) observable pz_a1 = p[2].pz\n 08:22 Adding (not required) observable pz_a2 = p[3].pz\n 08:22 Adding (not required) observable pz_h = (p[2]+p[3]).pz\n 08:22 Adding (not required) observable en_j1 = p[0].e\n 08:22 Adding (not required) observable en_j2 = p[1].e\n 08:22 Adding (not required) observable en_a1 = p[2].e\n 08:22 Adding (not required) observable en_a2 = p[3].e\n 08:22 Adding (not required) observable en_h = (p[2]+p[3]).e\n 08:22 Adding (not required) observable pt_j1 = p[0].pt\n 08:22 Adding (not required) observable pt_j2 = p[1].pt\n 08:22 Adding (not required) observable pt_a1 = p[2].pt\n 08:22 Adding (not required) observable pt_a2 = p[3].pt\n 08:22 Adding (not required) observable pt_h = (p[2]+p[3]).pt\n 08:22 Adding (not required) observable eta_j1 = p[0].eta\n 08:22 Adding (not required) observable eta_j2 = p[1].eta\n 08:22 Adding (not required) observable eta_a1 = p[2].eta\n 08:22 Adding (not required) observable eta_a2 = p[3].eta\n 08:22 Adding (not required) observable eta_h = (p[2]+p[3]).eta\n 08:22 Adding (not required) observable dphi_j1j2 = p[0].deltaphi(p[1])\n 08:22 Adding (not required) observable dphi_j1j2 = p[0].deltaphi(p[1])\n 08:22 Adding (not required) observable dphi_a1a2 = p[2].deltaphi(p[3])\n 08:22 Adding (not required) observable dphi_j1h = p[0].deltaphi(p[2]+p[3])\n 08:22 Adding (not required) observable dphi_j2h = p[1].deltaphi(p[2]+p[3])\n 08:22 Analysing LHE sample mg_processes/ggF_background/Events/run_01/unweighted_events.lhe.gz\n 08:22 Analysing LHE sample mg_processes/ggF_background/Events/run_02/unweighted_events.lhe.gz\n 08:22 Analysing LHE sample mg_processes/ggF_background/Events/run_03/unweighted_events.lhe.gz\n 08:22 Analysing LHE sample mg_processes/ggF_background/Events/run_04/unweighted_events.lhe.gz\n 08:23 Analysing LHE sample mg_processes/ggF_background/Events/run_05/unweighted_events.lhe.gz\n 08:23 Loading HDF5 data from data/madminer_example.h5 and saving file to data/madminer_lhedata_ggF_background.h5\n\n\nCross check plots\n\n\n```python\nmycolors=[\"black\",\"red\",\"orange\",\"yellow\",\"green\",\"blue\",\"purple\"]\n\nfig = plt.figure(figsize=(5,5))\nfor i, weights in enumerate(lhep.weights):\n plt.hist(lhep.observations['m_jj'], range=(0.,5000.), bins=20, histtype='step', weights=weights,color=mycolors[i])\nplt.show()\n\nfig = plt.figure(figsize=(5,5))\nfor i, weights in enumerate(lhep.weights):\n plt.hist(lhep.observations['dphi_j1j2'], range=(-3.2,3.2), bins=20, histtype='step', weights=weights,color=mycolors[i])\nplt.show()\n```\n\nSmearing Plot\n\n\n```python\nsmearing_validation_plot('cards/smearing_data/h_2_aa.txt',lhep,[115,130,30],'CMS: PAS HIG-15-005')\n```\n\n### 3e) Plot Distributions\n\n\n```python\nsa_wbf = SampleAugmenter('data/madminer_lhedata_wbf_signal.h5', debug=False)\nsa_ggF = SampleAugmenter('data/madminer_lhedata_ggF_background.h5', debug=False)\nx_weighted_wbf, weights_wbf = sa_wbf.extract_raw_data(theta=[0.,0.])\nx_weighted_ggF, weights_ggF = sa_ggF.extract_raw_data(theta=[0.,0.])\n\nbins = 25\nn_observables = x_weighted_wbf.shape[1]\nn_cols = 3\nn_rows = (n_observables + n_cols - 1) // n_cols\nlabels = sa_wbf.observables.keys()\n\nplt.figure(figsize=(4. 
* n_cols, 4. * n_rows))\n\nfor i, label in enumerate(labels):\n xmin = np.percentile(x_weighted_wbf[:,i], 5.)\n xmax = np.percentile(x_weighted_wbf[:,i], 95.)\n xwidth = xmax - xmin\n xmin -= xwidth * 0.1\n xmax += xwidth * 0.1\n x_range = (xmin, xmax)\n \n ax = plt.subplot(n_rows, n_cols, i+1)\n \n plt.hist(x_weighted_wbf[:,i], weights=weights_wbf, histtype='step', range=x_range, bins=bins, \n lw=1.5, label=r'WBF signal', normed=False)\n plt.hist(x_weighted_ggF[:,i], weights=weights_ggF, histtype='step', range=x_range, bins=bins, \n lw=1.5, label=r'ggF background', normed=False)\n \n if i == 0:\n plt.legend()\n \n plt.xlabel(label)\n \nplt.tight_layout()\nplt.show()\n```\n\n### 3f) Combining Samples (if necessary) \n\nCombine signal and background\n\n\n```python\ncombine_and_shuffle(\n ['data/madminer_lhedata_wbf_signal.h5','data/madminer_lhedata_ggF_background.h5'],\n 'data/madminer_lhedata.h5'\n)\n```\n\n 08:23 Careful: this tool assumes that all samples are generated with the same setup, including identical benchmarks (and thus morphing setup). If it is used with samples with different settings, there will be wrong results! There are no explicit cross checks in place yet.\n 08:23 Copying setup from data/madminer_lhedata_wbf_signal.h5 to data/madminer_lhedata.h5\n 08:23 Loading samples from file 1 / 2 at data/madminer_lhedata_wbf_signal.h5\n 08:23 Loading samples from file 2 / 2 at data/madminer_lhedata_ggF_background.h5\n\n", "meta": {"hexsha": "fb28a38109fbf0705f551698df3b8fb380f2a4e8", "size": 429004, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/example2_wbf/Part1-Generation.ipynb", "max_stars_repo_name": "vischia/madminer", "max_stars_repo_head_hexsha": "98c2bcfb93d0fd84ff1872b344c4d89adf51217f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/example2_wbf/Part1-Generation.ipynb", "max_issues_repo_name": "vischia/madminer", "max_issues_repo_head_hexsha": "98c2bcfb93d0fd84ff1872b344c4d89adf51217f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/example2_wbf/Part1-Generation.ipynb", "max_forks_repo_name": "vischia/madminer", "max_forks_repo_head_hexsha": "98c2bcfb93d0fd84ff1872b344c4d89adf51217f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-01-16T14:32:54.000Z", "max_forks_repo_forks_event_max_datetime": "2019-01-16T14:32:54.000Z", "avg_line_length": 353.0897119342, "max_line_length": 259772, "alphanum_fraction": 0.925278086, "converted": true, "num_tokens": 11254, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5467381519846138, "lm_q2_score": 0.4804786780479071, "lm_q1q2_score": 0.262696024503923}} {"text": "\n\n\n# Tutorial-IllinoisGRMHD: IllinoisGRMHD_headers.h\n\n## Authors: Leo Werneck & Zach Etienne\n\n**This module is currently under development**\n\n## In this tutorial module we explain the main `IllinoisGRMHD` header file. This module will likely be absorbed by another one by the time we finish documenting the code\n\n### Required and recommended citations:\n\n* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., M\u00f6sta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 
32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).\n* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).\n* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)).\n\nIf using the version of `IllinoisGRMHD` with piecewise polytropic *or* tabulated (coming soon!) EOS support, then the following citation is also required:\n\n* **(Required)** Etienne, Z. B., Werneck, L., Paschalidis, V., Haas R., M\u00f6sta P., and Shapiro, S. L., *IllinoisGRMHD github repository* (2019). Source Code URL: https://github.com/zachetienne/nrpytutorial/tree/master/IllinoisGRMHD/.\n\n### Dependencies\n\nThe files generated in this tutorial notebook depend on the following files:\n\n* `apply_tau_floor__enforce_limits_on_primitives_and_recompute_conservs.C` \\[[**tutorial**](Tutorial-IllinoisGRMHD__apply_tau_floor__enforce_limits_on_primitives_and_recompute_conservs.ipynb)\\]\n* `IllinoisGRMHD_convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij.C` \\[[**tutorial**](Tutorial-IllinoisGRMHD__convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij.ipynb)\\]\n* `symmetry__set_gzs_staggered_gfs.C` \\[[**tutorial**](Tutorial-IllinoisGRMHD__symmetry__set_gzs_staggered_gfs.ipynb)\\]\n* `IllinoisGRMHD_EoS_lowlevel_functs.C` \\[[**tutorial**](Tutorial-IllinoisGRMHD__EoS_lowlevel_functs.ipynb)\\]\n\n\n\n# Table of Contents\n$$\\label{toc}$$\n\nThis module is organized as follows\n\n0. [Step 0](#src_dir): **Source directory creation**\n1. [Step 1](#vars_headers): **Generating `VARS` headers with NRPy+**\n 1. [Step 1.a](#load_python_nrpy_modules): *Load necessary Python/NRPy+ modules*\n 1. [Step 1.b](#adm_3metric_vars): *The `ADM_3METRIC_VARS.h` file*\n 1. [Step 1.c](#conf_metric_facevals_vars): *The `CONF_METRIC_FACEVALS_VARS.h` file*\n 1. [Step 1.d](#grmhd_vars): *The `GRMHD_VARS.h` file*\n 1. [Step 1.e](#interp_vars): *The `INTERP_VARS.h` file*\n 1. [Step 1.f](#smallb_and_conservs_vars): *The `SMALLB_VARS.h` and `CONSERV_VARS.h` files*\n 1. [Step 1.g](#tmunu_vars): *The `TMUNU_VARS.h` files*\n1. [Step 2](#igm_headers__h): **`IllinoisGRMHD_headers.h`**\n1. [Step 3](#code_validation): **Code validation**\n1. 
[Step 4](#latex_pdf_output): **Output this notebook to $\\LaTeX$-formatted PDF file**\n\n\n\n# Step 0: Source directory creation \\[Back to [top](#toc)\\]\n$$\\label{src_dir}$$\n\nWe will now use the [cmdline_helper.py NRPy+ module](Tutorial-Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.\n\n\n```python\n# Step 0: Creation of the IllinoisGRMHD source directory\n# Step 0a: Add NRPy's directory to the path\n# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory\nimport os,sys\nnrpy_dir_path = os.path.join(\"..\",\"..\")\nif nrpy_dir_path not in sys.path:\n sys.path.append(nrpy_dir_path)\n\n# Step 0b: Load up cmdline_helper and create the directory\nimport cmdline_helper as cmd\noutdir = os.path.join(\"..\",\"src\")\ncmd.mkdir(outdir)\n\n# Step 0c: Set up header file output path\nNRPy_headers_dir_path = os.path.join(outdir,\"NRPy_generated_headers\")\ncmd.mkdir(NRPy_headers_dir_path)\n```\n\n\n\n# Step 1: Generating `VARS` headers with NRPy+ \\[Back to [top](#toc)\\]\n$$\\label{vars_headers}$$\n\nWe will now use NRPy+ to generate all the `VARS` header files we need in the `IllinoisGRMHD_headers.h` file.\n\n\n\n## Step 1.a: Load necessary Python/NRPy+ modules \\[Back to [top](#toc)\\]\n$$\\label{load_python_nrpy_modules}$$\n\nWe now load all necessary Python/NRPy+ modules needed to generate the `VARS` header files.\n\n\n```python\n# Imported needed Python modules\nimport sympy as sp # Python module: symbolic expressions capabilities\n\n# Imported needed NRPy+ modules\nimport IllinoisGRMHD_output_functions as IGMout # NRPy+ module: IllinoisGRMHD output file functions\n```\n\n\n\n## Step 1.b: The `ADM_3METRIC_VARS.h` file \\[Back to [top](#toc)\\]\n$$\\label{adm_3metric_vars}$$\n\nWe will now generate the `ADM_3METRIC_VARS.h` file, which sets integers so that we can locate specific variables inside the `ADM_3METRIC` array. For example, the lapse variable $\\alpha$ is contained in the array element `ADM_3METRIC[0]`, so we define a variable \n```c\nconst static int ALPHA = 0;\n```\nwhich allow us to access the same array element via `ADM_3METRIC[ALPHA]`. 
We remind the reader that the `ADM_3METRIC` array contains the following quantities (in order):\n\n$$\n\\left(\n\\alpha,\n\\beta^{x},\\beta^{y},\\beta^{z},\n\\gamma_{xx},\\gamma_{xy},\\gamma_{xz},\\gamma_{yy},\\gamma_{yz},\\gamma_{zz},\n\\gamma^{xx},\\gamma^{xy},\\gamma^{xz},\\gamma^{yy},\\gamma^{yz},\\gamma^{zz},\n\\sqrt{\\gamma}\n\\right)\\ .\n$$\n\n\n```python\n# Step 1.b: Declare basic ADM variables to be used by IllinoisGRMHD\n# Step 1.b.i: Set spatial dimension to 3\nDIM = 3\n\n# Step 1.b.ii: Set up alpha\ngfslist = [[\"ALPHA\"]]\n# Step 1.b.iii: Set up beta^{i}\nfor i in range(DIM):\n gfslist.append([\"BETA\"+chr(ord('X')+i)])\n\n# Step 1.b.iv: Set up gamma_{ij}\nfor i in range(DIM):\n for j in range(i,DIM):\n gfslist.append([\"GAMMA\"+chr(ord('X')+i)+chr(ord('X')+j)])\n\n# Step 1.b.v: Set up gamma^{ij}\nfor i in range(3):\n for j in range(i,3):\n gfslist.append([\"GAMMAUP\"+chr(ord('X')+i)+chr(ord('X')+j)])\n\n# Step 1.b.vi: Set up \\sqrt{\\gamma}\ngfslist.append([\"SQRTGAMMA\"])\n\n# Step 1.b.vii: Set up NUMVARS_FOR_ADM_3METRIC, which is the\n# number of variables in the gfslist array\ngfslist.append([\"NUMVARS_FOR_ADM_3METRIC\"])\n\n# Step 1.b.viii: Output to file\ncomment = \"/* ADM_3METRIC variables */\\n\"\nfilename = \"ADM_3METRIC_VARS.h\"\nIGMout.generate_variable_definition_file(gfslist,filename,comment=comment)\n```\n\n Just generated the file: ../src/NRPy_generated_headers/ADM_3METRIC_VARS.h\n\n\n\n\n## Step 1.c: The `CONF_METRIC_FACEVALS_VARS.h` file \\[Back to [top](#toc)\\]\n$$\\label{conf_metric_facevals_vars}$$\n\nWe will now generate the `CONF_METRIC_FACEVALS_VARS.h` file, which sets integers so that we can locate specific variables inside the `CONF_METRIC` and `FACEVALS` arrays. This header file replaces the following piece of code\n\n```c\n// The order here MATTERS, as we assume that GAMMAUPXX+1=GAMMAUPYY, etc.\nstatic const int PHI=0,PSI=1,GAMMATILDEXX=2,GAMMATILDEXY=3,GAMMATILDEXZ=4,GAMMATILDEYY=5,GAMMATILDEYZ=6,GAMMATILDEZZ=7,\n LAPM1=8,SHIFTX=9,SHIFTY=10,SHIFTZ=11,GAMMATILDEUPXX=12,GAMMATILDEUPYY=13,GAMMATILDEUPZZ=14,\n NUMVARS_FOR_METRIC_FACEVALS=15; //<-- Be _sure_ to set this correctly, or you'll have memory access bugs!\n\n// These are not used for facevals in the reconstruction step, but boy are they useful anyway. \nstatic const int GAMMAUPXY=15,GAMMAUPXZ=16,GAMMAUPYZ=17,\n NUMVARS_FOR_METRIC=18; //<-- Be _sure_ to set this correctly, or you'll have memory access bugs!\n```\n\nwith a new addition: we rename *all* conformal metric array quantities by adding a `CM_` string before their names. 
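For orientation, the header written by the next cell is expected to contain declarations along the following lines (a sketch only: the exact layout is whatever `IGMout.generate_variable_definition_file` emits, and the index values simply follow the ordering of the `gfslist` set up in the next cell):

```c
/* Illustrative sketch of the expected CONF_METRIC_FACEVALS_VARS.h contents
   (the actual file is written by the cell below; formatting may differ).  */
static const int CM_PHI = 0, CM_PSI = 1,
                 CM_GAMMATILDEXX = 2, CM_GAMMATILDEXY = 3, CM_GAMMATILDEXZ = 4,
                 CM_GAMMATILDEYY = 5, CM_GAMMATILDEYZ = 6, CM_GAMMATILDEZZ = 7,
                 CM_LAPM1 = 8, CM_SHIFTX = 9, CM_SHIFTY = 10, CM_SHIFTZ = 11,
                 CM_GAMMATILDEUPXX = 12, CM_GAMMATILDEUPYY = 13, CM_GAMMATILDEUPZZ = 14,
                 NUMVARS_FOR_CONF_METRIC_FACEVALS = 15;
/* Other useful variables */
static const int CM_GAMMATILDEUPXY = 15, CM_GAMMATILDEUPXZ = 16, CM_GAMMATILDEUPYZ = 17,
                 NUMVARS_FOR_CONF_METRIC = 18;
```

The numerical values match the original hand-written header shown above, so the layout of the `CONF_METRIC` and `FACEVALS` arrays is unchanged; only the symbol names gain the `CM_` prefix.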
\n\n\n```python\n# Start setting up the gridfunction indices with phi and psi\ngfslist = [[\"CM_PHI\"],[\"CM_PSI\"]]\n\n# Add the indices for \\tilde{\\gamma}_{ij}\nfor i in range(DIM):\n for j in range(i,DIM):\n gfslist.append([\"CM_GAMMATILDE\"+chr(ord('X')+i)+chr(ord('X')+j)])\n\n# Add alpha, \\beta^{i}, and \\tilde{\\gamma}^{ii}\ngfslist.append([\"CM_LAPM1\"])\nfor i in range(DIM):\n gfslist.append([\"CM_SHIFT\"+chr(ord('X')+i)])\n\nfor i in range(DIM):\n gfslist.append([\"CM_GAMMATILDEUP\"+chr(ord('X')+i)+chr(ord('X')+i)])\n\ngfslist.append([\"NUMVARS_FOR_CONF_METRIC_FACEVALS\"])\nothervars = [[\"CM_GAMMATILDEUPXY\"]]\nothervars.append([\"CM_GAMMATILDEUPXZ\"])\nothervars.append([\"CM_GAMMATILDEUPYZ\"])\n\n# Set up extra code, not supported by the file generating function\nextra = \"/* Other useful variables */\\n\"\nfor j in range(len(othervars)):\n extra += \"static const int \"+othervars[j][0]\n for k in range(len(\"NUMVARS_FOR_CONF_METRIC_FACEVALS\") - len(othervars[j][0])):\n extra += \" \"\n extra += \" = \"+str(j+len(gfslist)-1)+\";\\n\"\nextra += \"static const int NUMVARS_FOR_CONF_METRIC = \"+str(len(gfslist)+len(othervars)-1)+\";\\n\\n\"\n\n# Set up comments\ncomment = \"/* Variables used for face value reconstructions */\\n\"\n# Set up output file name\nfilename = \"CONF_METRIC_FACEVALS_VARS.h\"\n# Generate variable definition file\nIGMout.generate_variable_definition_file(gfslist,filename,comment=comment,extra=extra)\n```\n\n Just generated the file: ../src/NRPy_generated_headers/CONF_METRIC_FACEVALS_VARS.h\n\n\n\n\n## Step 1.d: The `GRMHD_VARS.h` file \\[Back to [top](#toc)\\]\n$$\\label{grmhd_vars}$$\n\nWe will now generate the `GRMHD_VARS.h` file, which sets integers so that we can locate specific variables inside the many variations of the primitive arrays (e.g. `prims`, `PRIMS`, `IN_PRIMS`, `OUT_PRIMS_R`, etc). This header file replaces the following piece of code\n\n```c\n// The order here MATTERS, and must be consistent with the order in the IN_PRIMS[] array in driver_evaluate_MHD_rhs.C.\nstatic const int RHOB=0,PRESSURE=1,VX=2,VY=3,VZ=4,\n BX_CENTER=5,BY_CENTER=6,BZ_CENTER=7,BX_STAGGER=8,BY_STAGGER=9,BZ_STAGGER=10,\n VXR=11,VYR=12,VZR=13,VXL=14,VYL=15,VZL=16,MAXNUMVARS=17; //<-- Be _sure_ to define MAXNUMVARS appropriately!\nstatic const int UT=0,UX=1,UY=2,UZ=3;\n```\n\n\n```python\n# Add primitives\ngfslist = [[\"RHOB\"],[\"PRESSURE\"],\n [\"VX\"],[\"VY\"],[\"VZ\"],\n [\"BX_CENTER\"],[\"BY_CENTER\"],[\"BZ_CENTER\"],\n [\"BX_STAGGER\"],[\"BY_STAGGER\"],[\"BZ_STAGGER\"],\n [\"VXR\"],[\"VYR\"],[\"VZR\"],\n [\"VXL\"],[\"VYL\"],[\"VZL\"],\n [\"MAXNUMVARS\"]]\n\n# Finally, the 4-velocity\nu4list = [\"UT\",\"UX\",\"UY\",\"UZ\"]\n\nextra = \"/* 4-velocity */\\n\"\nfor mu in range(4):\n extra += \"static const int \"+u4list[mu]\n for k in range(len(\"MAXNUMVARS\")-len(u4list[mu])):\n extra += \" \"\n extra += \" = \"+str(mu)+\";\\n\"\n\n# Set up comments\ncomment = \"/* GRMHD variables */\\n\"\n# Set up output file name\nfilename = \"GRMHD_VARS.h\"\n# Generate variable definition file\nIGMout.generate_variable_definition_file(gfslist,filename,comment=comment,extra=extra)\n```\n\n Just generated the file: ../src/NRPy_generated_headers/GRMHD_VARS.h\n\n\n\n\n## Step 1.e: The `INTERP_VARS.h` file \\[Back to [top](#toc)\\]\n$$\\label{interp_vars}$$\n\nWe will now generate the `GRMHD_VARS.h` file, which sets integers so that we can locate specific variables inside the many variations of the interpolation array `INTERP_VARS`. 
This header file replaces the following piece of code\n\n```c\n// The \"I\" suffix denotes interpolation. In other words, these\n// definitions are used for interpolation ONLY. The order here\n// matters as well!\nstatic const int SHIFTXI=0,SHIFTYI=1,SHIFTZI=2,GAMMAUPXXI=3,GAMMAUPXYI=4,GAMMAUPXZI=5,GAMMAUPYYI=6,GAMMAUPYZI=7,GAMMAUPZZI=8,\n PSII=9,LAPM1I=10,A_XI=11,A_YI=12,A_ZI=13,LAPSE_PSI2I=14,LAPSE_OVER_PSI6I=15,MAXNUMINTERP=16;\n```\n\n\n```python\n# Set interpolation variables names\ngfslist = [[\"INTERP_SHIFTX\"],[\"INTERP_SHIFTY\"],[\"INTERP_SHIFTZ\"],\n [\"INTERP_GAMMATILDEUPXX\"],[\"INTERP_GAMMATILDEUPXY\"],[\"INTERP_GAMMATILDEUPXZ\"],\n [\"INTERP_GAMMATILDEUPYY\"],[\"INTERP_GAMMATILDEUPYZ\"],[\"INTERP_GAMMATILDEUPZZ\"],\n [\"INTERP_PSI\"],[\"INTERP_LAPM1\"],\n [\"INTERP_AX\"],[\"INTERP_AY\"],[\"INTERP_AZ\"],\n [\"INTERP_LAPSE_PSI2\"],[\"INTERP_LAPSE_OVER_PSI6\"],\n [\"MAXNUMINTERP\"]]\n\n# Define the indices in the GRMHD_VARS.h header file\n# Start with the variables used in the face value reconstructions\ncomment = \"/* Interpolation variables */\\n\"\nfilename = \"INTERP_VARS.h\"\nIGMout.generate_variable_definition_file(gfslist,filename,comment=comment)\n```\n\n Just generated the file: ../src/NRPy_generated_headers/INTERP_VARS.h\n\n\n\n\n## Step 1.f: The `SMALLB_VARS.h` and `CONSERV_VARS.h` files \\[Back to [top](#toc)\\]\n$$\\label{smallb_and_conservs_vars}$$\n\nWe now set up the `SMALLB_VARS.h` and `CONSERV_VARS.h` files to substitute the following lines in the old version of the `IllinoisGRMHD_headers.h` file:\n\n```c\n// Again, the order here MATTERS, since we assume in the code that, e.g., smallb[0]=b^t, smallb[3]=b^z, etc.\nstatic const int SMALLBT=0,SMALLBX=1,SMALLBY=2,SMALLBZ=3,SMALLB2=4,NUMVARS_SMALLB=5;\n\n// Again, the order here MATTERS, since we assume in the code that, CONSERV[STILDEX+1] = \\tilde{S}_y\nstatic const int RHOSTAR=0,STILDEX=1,STILDEY=2,STILDEZ=3,TAUENERGY=4,NUM_CONSERVS=5;\n```\n\n\n```python\n# b^{\\mu} quantities\ngfslist = [[\"SMALLBT\"],[\"SMALLBX\"],[\"SMALLBY\"],[\"SMALLBZ\"],[\"SMALLB2\"],[\"NUMVARS_SMALLB\"]]\n\n# Set up the b^{\\mu} string\ncomment = \"/* smallb (b^{\\mu}) variables */\\n\"\nfilename = \"SMALLB_VARS.h\"\nIGMout.generate_variable_definition_file(gfslist,filename,comment=comment)\n\n# Conservative quantities\ngfslist = [[\"RHOSTAR\"],[\"STILDEX\"],[\"STILDEY\"],[\"STILDEZ\"],[\"TAUENERGY\"],[\"NUM_CONSERVS\"]]\n\n# Set up the b^{\\mu} string\ncomment = \"/* Interpolation variables */\\n\"\nfilename = \"CONSERV_VARS.h\"\nIGMout.generate_variable_definition_file(gfslist,filename,comment=comment)\n```\n\n Just generated the file: ../src/NRPy_generated_headers/SMALLB_VARS.h\n Just generated the file: ../src/NRPy_generated_headers/CONSERV_VARS.h\n\n\n\n\n## Step 1.g: The `TMUNU_VARS.h` files \\[Back to [top](#toc)\\]\n$$\\label{tmunu_vars}$$\n\nThe `TMUNU_VARS.h` files sets the position of $T^{\\mu\\nu}$ components to be access within all $T^{\\mu\\nu}$ arrays, regardless if it is contravariant, covariant, or \"mixed\" (i.e. 
$T^{\\mu}_{\\ \\ \\nu}$)\n\n\n```python\n# Then, the right/left values of the 3-velocity\ngfslist = [[\"TMUNU_TT\"]]\nfor i in range(DIM):\n gfslist.append([\"TMUNU_T\"+chr(ord('X')+i)])\n\nfor i in range(DIM):\n for j in range(i,DIM):\n gfslist.append([\"TMUNU_\"+chr(ord('X')+i)+chr(ord('X')+j)])\n\n# Set up comments\ncomment = \"/* Define TMUNU variables (valid for all variants) */\\n\"\n# Set up output file name\nfilename = \"TMUNU_VARS.h\"\n# Generate variable definition file\nIGMout.generate_variable_definition_file(gfslist,filename,comment=comment)\n```\n\n Just generated the file: ../src/NRPy_generated_headers/TMUNU_VARS.h\n\n\n\n\n# Step 2: The `IllinoisGRMHD_headers.h` file \\[Back to [top](#toc)\\]\n$$\\label{igm_headers__h}$$\n\nWe will now document the `IllinoisGRMHD_headers.h` file, even though this is probably one of the most straightforward files in `IllinoisGRMHD`. We start by going over the following three macros:\n\n```c\n#define MIN(a,b) ( ((a) < (b)) ? (a) : (b) )\n#define MAX(a,b) ( ((a) > (b)) ? (a) : (b) )\n#define SQR(x) ((x) * (x))\n```\n\nwhich are basically used for our convenience. Using `SQR(x)`, for example, is faster than using `pow(x,2)` and completely equivalent to doing `x*x`, but is very convenient when dealing with large expressions. The `MIN` and `MAX` return the minimum and maximum value of two input numbers, respectively.\n\nWe have also a few more definitions\n\n```c\n#define ONE_OVER_SQRT_4PI 0.282094791773878143474039725780\n\n#define VERR_DEF_PARAMS __LINE__, __FILE__, CCTK_THORNSTRING\n\n#define TINYDOUBLE 1e-100\n```\n\nwhich are there for convenience. The `ONE_OVER_SQRT_4PI` macro precomputes $\\frac{1}{\\sqrt{4\\pi}}$ to many significant digits. The `TINYDOUBLE` macro is used to set a nonzero, yet extremely small value to avoid division by zero in a few computations.\n\nNext we include all files we have generated in [Step 1](#vars_headers).\n\nThen we set the ghostzones struct,\n\n```c\nstruct gf_and_gz_struct {\n CCTK_REAL *gf;\n int gz_lo[4],gz_hi[4];\n};\n```\n\nwhich is extremely useful to set the ghostzones of each gridfunction and keep track of it in different functions. Note that so that we keep things consistent, we update the values of `gz_lo` and `gz_hi` appropriately within each function.\n\nThen comes the equation of state (EOS) struct,\n\n```c\nstruct eos_struct {\n int neos;\n CCTK_REAL rho_ppoly_tab[MAX_EOS_PARAMS-1];\n CCTK_REAL eps_integ_const[MAX_EOS_PARAMS],K_ppoly_tab[MAX_EOS_PARAMS],Gamma_ppoly_tab[MAX_EOS_PARAMS];\n};\n```\n\nwhich sets all EOS parameters. 
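For a simple single-polytrope EOS, P = K rho^Gamma, one would fill it with a single piece, e.g. (a minimal fragment with made-up numbers, assuming the `eos_struct`, `CCTK_REAL`, and `MAX_EOS_PARAMS` definitions given here are in scope):

```c
/* Hypothetical example values -- not taken from the IllinoisGRMHD sources */
struct eos_struct eos;
eos.neos               = 1;     /* one polytropic piece                  */
eos.K_ppoly_tab[0]     = 100.0; /* polytropic constant K (made-up value) */
eos.Gamma_ppoly_tab[0] = 2.0;   /* adiabatic index Gamma (made-up value) */
/* rho_ppoly_tab[] (densities separating the pieces) is only needed when neos > 1 */
```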
This struct is currently specialized to simple and piecewise polytropic EOSs.\n\nNext comes the stats struct,\n\n```c\nstruct output_stats {\n int font_fixed,vel_limited,failure_checker,rho_star_fix_applied;\n long n_iter;\n};\n```\n\nwhich is used to identify when particularly fixes are applied in the conservative-to-primitive routine.\n\nThen we define the [Kronecker delta](https://en.wikipedia.org/wiki/Kronecker_delta)\n\n```c\nconst int kronecker_delta[4][3] = { { 0,0,0 },\n { 1,0,0 },\n { 0,1,0 },\n { 0,0,1 } };\n```\n\nFinally we set the prototypes of the `IllinoisGRMHD_enforce_limits_on_primitives_and_recompute_conservs()` ([**tutorial**](Tutorial-IllinoisGRMHD__apply_tau_floor__enforce_limits_on_primitives_and_recompute_conservs.ipynb)), `IllinoisGRMHD_convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij()` ([**tutorial**](Tutorial-IllinoisGRMHD__convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij.ipynb)), and `IllinoisGRMHD_set_symmetry_gzs_staggered()` ([**tutorial**](Tutorial-IllinoisGRMHD__symmetry__set_gzs_staggered_gfs.ipynb)) functions.\n\n\n```python\n%%writefile $outdir/IllinoisGRMHD_headers.h\n// To safeguard against double-including this header file:\n#ifndef ILLINOISGRMHD_HEADERS_H_\n#define ILLINOISGRMHD_HEADERS_H_\n\n#define MIN(a,b) ( ((a) < (b)) ? (a) : (b) )\n#define MAX(a,b) ( ((a) > (b)) ? (a) : (b) )\n#define SQR(x) ((x) * (x))\n#define ONE_OVER_SQRT_4PI 0.282094791773878143474039725780\n\n#define VERR_DEF_PARAMS __LINE__, __FILE__, CCTK_THORNSTRING\n\n#define TINYDOUBLE 1e-100\n\n#include \"NRPy_generated_headers/ADM_3METRIC_VARS.h\"\n\n#include \"NRPy_generated_headers/CONF_METRIC_FACEVALS_VARS.h\"\n\n#include \"NRPy_generated_headers/GRMHD_VARS.h\"\n\n#include \"NRPy_generated_headers/INTERP_VARS.h\"\n\n#include \"NRPy_generated_headers/SMALLB_VARS.h\"\n\n#include \"NRPy_generated_headers/CONSERV_VARS.h\"\n\n#include \"NRPy_generated_headers/TMUNU_VARS.h\"\n\n// Keeping track of ghostzones between routines is a nightmare, so\n// we instead attach ghostzone info to each gridfunction and set\n// the ghostzone information correctly within each routine.\nstruct gf_and_gz_struct {\n CCTK_REAL *gf;\n int gz_lo[4],gz_hi[4];\n};\n\n#define MAX_EOS_PARAMS 10\nstruct eos_struct {\n int neos;\n CCTK_REAL rho_ppoly_tab[MAX_EOS_PARAMS-1];\n CCTK_REAL eps_integ_const[MAX_EOS_PARAMS],K_ppoly_tab[MAX_EOS_PARAMS],Gamma_ppoly_tab[MAX_EOS_PARAMS];\n};\n\nstruct output_stats {\n int font_fixed,vel_limited,failure_checker,rho_star_fix_applied;\n long n_iter;\n};\n\n\n// FIXME: For cosmetic purposes, we might want to make everything either zero-offset or one-offset, instead of a mixture.\nconst int kronecker_delta[4][3] = { { 0,0,0 },\n { 1,0,0 },\n { 0,1,0 },\n { 0,0,1 } };\n\n/* PUBLIC FUNCTIONS, USED OUTSIDE IllinoisGRMHD AS WELL */\nvoid IllinoisGRMHD_enforce_limits_on_primitives_and_recompute_conservs(const int already_computed_physical_metric_and_inverse,CCTK_REAL *U,struct output_stats &stats,eos_struct &eos,\n CCTK_REAL *METRIC,CCTK_REAL g4dn[4][4],CCTK_REAL g4up[4][4], CCTK_REAL *TUPMUNU,CCTK_REAL *TDNMUNU,CCTK_REAL *CONSERVS);\n\nvoid IllinoisGRMHD_convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij\n(const cGH *cctkGH,const int *cctk_lsh,\n CCTK_REAL *gxx,CCTK_REAL *gxy,CCTK_REAL *gxz,CCTK_REAL *gyy,CCTK_REAL *gyz,CCTK_REAL *gzz,CCTK_REAL *alp,\n CCTK_REAL *gtxx,CCTK_REAL *gtxy,CCTK_REAL *gtxz,CCTK_REAL *gtyy,CCTK_REAL *gtyz,CCTK_REAL *gtzz,\n CCTK_REAL *gtupxx,CCTK_REAL *gtupxy,CCTK_REAL *gtupxz,CCTK_REAL *gtupyy,CCTK_REAL *gtupyz,CCTK_REAL 
*gtupzz,\n CCTK_REAL *phi,CCTK_REAL *psi,CCTK_REAL *lapm1);\n\nvoid IllinoisGRMHD_set_symmetry_gzs_staggered(const cGH *cctkGH, const int *cctk_lsh,CCTK_REAL *X,CCTK_REAL *Y,CCTK_REAL *Z, CCTK_REAL *gridfunc,\n CCTK_REAL *gridfunc_syms,int stagger_x,int stagger_y,int stagger_z);\n\n#include \"IllinoisGRMHD_EoS_lowlevel_functs.C\"\n#endif // ILLINOISGRMHD_HEADERS_H\n\n\n```\n\n Overwriting ../src/IllinoisGRMHD_headers.h\n\n\n\n\n# Step 3: Code validation \\[Back to [top](#toc)\\]\n$$\\label{code_validation}$$\n\nFirst we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.\n\n\n```python\n# # Verify if the code generated by this tutorial module\n# # matches the original IllinoisGRMHD source code\n\n# # First download the original IllinoisGRMHD source code\n# import urllib\n# from os import path\n\n# original_IGM_file_url = \"https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/IllinoisGRMHD_headers.h\"\n# original_IGM_file_name = \"IllinoisGRMHD_headers-original.h\"\n# original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)\n\n# # Then download the original IllinoisGRMHD source code\n# # We try it here in a couple of ways in an attempt to keep\n# # the code more portable\n# try:\n# original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode(\"utf-8\")\n# # Write down the file the original IllinoisGRMHD source code\n# with open(original_IGM_file_path,\"w\") as file:\n# file.write(original_IGM_file_code)\n# except:\n# try:\n# original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode(\"utf-8\")\n# # Write down the file the original IllinoisGRMHD source code\n# with open(original_IGM_file_path,\"w\") as file:\n# file.write(original_IGM_file_code)\n# except:\n# # If all else fails, hope wget does the job\n# !wget -O $original_IGM_file_path $original_IGM_file_url\n\n# # Perform validation\n# Validation__IllinoisGRMHD_headers__h = !diff $original_IGM_file_path $outfile_path__IllinoisGRMHD_headers__h\n\n# if Validation__IllinoisGRMHD_headers__h == []:\n# # If the validation passes, we do not need to store the original IGM source code file\n# !rm $original_IGM_file_path\n# print(\"Validation test for IllinoisGRMHD_headers.h: PASSED!\")\n# else:\n# # If the validation fails, we keep the original IGM source code file\n# print(\"Validation test for IllinoisGRMHD_headers.h: FAILED!\")\n# # We also print out the difference between the code generated\n# # in this tutorial module and the original IGM source code\n# print(\"Diff:\")\n# for diff_line in Validation__IllinoisGRMHD_headers__h:\n# print(diff_line)\n```\n\n\n\n# Step 4: Output this notebook to $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n$$\\label{latex_pdf_output}$$\n\nThe following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. 
After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n[Tutorial-IllinoisGRMHD__IllinoisGRMHD_headers.pdf](Tutorial-IllinoisGRMHD__IllinoisGRMHD_headers.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).\n\n\n```python\nlatex_nrpy_style_path = os.path.join(nrpy_dir_path,\"latex_nrpy_style.tplx\")\n#!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__IllinoisGRMHD_headers.ipynb\n#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__IllinoisGRMHD_headers.tex\n#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__IllinoisGRMHD_headers.tex\n#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__IllinoisGRMHD_headers.tex\n!rm -f Tut*.out Tut*.aux Tut*.log\n```\n", "meta": {"hexsha": "61c1916bd109aa9b7f7b55b31f1f47d456535b76", "size": 33189, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__IllinoisGRMHD_headers.ipynb", "max_stars_repo_name": "leowerneck/NRPyIGM", "max_stars_repo_head_hexsha": "f483d6123424fb3e6860dfac4325dd232b223005", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__IllinoisGRMHD_headers.ipynb", "max_issues_repo_name": "leowerneck/NRPyIGM", "max_issues_repo_head_hexsha": "f483d6123424fb3e6860dfac4325dd232b223005", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__IllinoisGRMHD_headers.ipynb", "max_forks_repo_name": "leowerneck/NRPyIGM", "max_forks_repo_head_hexsha": "f483d6123424fb3e6860dfac4325dd232b223005", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.3329081633, "max_line_length": 561, "alphanum_fraction": 0.6016752538, "converted": true, "num_tokens": 7527, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5506073655352404, "lm_q2_score": 0.47657965106367595, "lm_q1q2_score": 0.2624082661398747}} {"text": "```python\n\n```\n\n\n```python\nimport pandas as pd\n%matplotlib inline\nimport glob, os\nfrom kinetics.ala_kinetics import *\nfrom kinetics.waiting_time import *\nfrom kinetics.sorted_lifetimes import *\nimport pyprind\nfrom kinetics.check_expon import *\n```\n\n\n```python\ncl = sns.color_palette()\n```\n\n\n```python\n! 
ls ../dt*/md_dt*/ | tail \n```\n\n md_st2md_dt5000__27jan16_blNone_pop.pickle\r\n md_st2md_dt5000__27jan16_blNone_rates.pickle\r\n md_st2md_dt5000__27jan16_blNone_tba.pickle\r\n md_st2md_dt5000__27jan16_blNone_tp.pickle\r\n \r\n ../dt5000/md_dt5000_st3/:\r\n md_st3md_dt5000__27jan16_blNone_pop.pickle\r\n md_st3md_dt5000__27jan16_blNone_rates.pickle\r\n md_st3md_dt5000__27jan16_blNone_tba.pickle\r\n md_st3md_dt5000__27jan16_blNone_tp.pickle\r\n\n\n\n```python\ndt_key_l = glob.glob(\"../dt*\")\ndt_key_l = [ k[5:] for k in dt_key_l]\ndt_key_l\n```\n\n\n\n\n ['1',\n '10',\n '100',\n '1000',\n '2',\n '20',\n '200',\n '25',\n '2500',\n '5',\n '50',\n '500',\n '5000']\n\n\n\n# Lag-time dependency of lifetimes from MD\n\n* load MD transition paths and transition-based state assignments\n\n## Calculate lifetimes MD\n\n\n```python\nmd_dt_tp_d = {}\nmd_dt_state_d = {}\n\n#for key in md_r_dt_d.keys():\nfor key in dt_key_l:\n #print \"../dt{}/md*/*blNone_tp.pickle\".format(dt)\n #dt_tp_l = [glob.glob(\"../dt{}/md*/md_st{}*blNone_tp.pickle\".format(dt,st))[0] for st in range(1,4)]\n _dt_tp_l = []\n _dt_st_l = []\n for st in range(1,4):\n _dt_tp_l.extend(glob.glob(\"../dt{}/md*_st{}/md_st{}md*_27jan16_blNone_tp.pickle\".format(key,st,st)))\n _dt_st_l.extend(glob.glob(\"../dt{}/md*_st{}/md_st{}md*_27jan16_blNone_tba.pickle\".format(key,st,st)))\n # print _dt_st_l\n md_dt_tp_d[key] = _dt_tp_l\n md_dt_state_d[key] = _dt_st_l\n```\n\n\n```python\ntrans_from_h = [t for t in possible_transitions(4) if t[:2] == (0,0,)]\ntrans_from_c = [t for t in possible_transitions(4) if t[:2] == (0,1,)]\n\ntrans_from_11 = [t for t in possible_transitions(4) if t[:2] == (1,1,)]\ntrans_from_10 = [t for t in possible_transitions(4) if t[:2] == (1,0,)]\n```\n\n\n```python\nmd_dt_hdw_d = {}\nmd_dt_cdw_d = {}\nmd_dt_11dw_d = {}\nmd_dt_10dw_d = {}\nbar = pyprind.ProgBar(len(dt_key_l)*3)\n\nfor k, v in md_dt_tp_d.items():\n _tba = md_dt_state_d[k]\n print k\n md_h_l = []\n md_c_l = []\n md_11_l = []\n md_10_l = []\n \n for i, _tp_fn in enumerate(v):\n _st_fn = _tba[i]\n \n _tp_df = pd.read_pickle(_tp_fn)\n _st_df = pd.read_pickle(_st_fn)\n \n _tp_df0 = _tp_df[_tp_df.temperature==0]\n _st_df0 = _st_df[_st_df.temperature==0]\n \n _dw_h = loop_dwell_trans_temp(_tp_df0,\n _st_df0, trans_from_h, tu=float(k))\n md_h_l.append(_dw_h)\n _dw_c = loop_dwell_trans_temp(_tp_df0,\n _st_df0, trans_from_c, tu=float(k))\n md_c_l.append(_dw_c)\n \n # lifetimes in the minor states\n _dw_10 = loop_dwell_trans_temp(_tp_df0,\n _st_df0, trans_from_10, tu=float(k))\n md_10_l.append(_dw_10)\n \n _dw_11 = loop_dwell_trans_temp(_tp_df0,\n _st_df0, trans_from_11, tu=float(k))\n md_11_l.append(_dw_11)\n \n \n md_dt_hdw_d[k] = pd.concat(md_h_l)\n md_dt_cdw_d[k] = pd.concat(md_c_l)\n md_dt_10dw_d[k] = pd.concat(md_10_l)\n md_dt_11dw_d[k] = pd.concat(md_11_l)\n \n bar.update()\n```\n\n 0% 100%\n [ ]\n\n 10\n 5000\n\n\n [# ] | ETA: 00:05:04\n\n 20\n\n\n [## ] | ETA: 00:05:58\n\n 200\n\n\n [### ] | ETA: 00:05:04\n\n 50\n 1\n\n\n [#### ] | ETA: 00:06:51\n\n 2\n\n\n [##### ] | ETA: 00:07:21\n\n 5\n\n\n [###### ] | ETA: 00:07:22\n\n 25\n 100\n\n\n [####### ] | ETA: 00:06:35\n\n 2500\n\n\n [######## ] | ETA: 00:05:48\n\n 500\n\n\n [######### ] | ETA: 00:05:13\n\n 1000\n\n\n## Quick comparison of MD lifetime distributions\n\n\n```python\n_blog = np.logspace(-3,1, 5000)\n```\n\n\n```python\nfig, ax = plt.subplots(2,2, figsize=(8,4))\n\ntf = 1.0 / 1000\n\n_ = fit_plot_cdf(ax[0,0], md_dt_hdw_d['1'].wait_T * tf, bins=_blog,\n dist_label=\"$h$, dt=1 ps\", )\n_ = 
fit_plot_cdf(ax[0,1], md_dt_hdw_d['5'].wait_T * tf, bins=_blog, plot_fit=False,\n dist_label=\"$h$, dt=5 ps\")\n\n \n_ = fit_plot_cdf(ax[1,0], md_dt_hdw_d['10'].wait_T * tf, bins=_blog, plot_fit=True,\n dist_label=\"$h$, dt=10 ps\")\n\n_ = fit_plot_cdf(ax[1,1], md_dt_hdw_d['25'].wait_T * tf, bins=_blog, plot_fit=True,\n dist_label=\"$h$, dt=25 ps\")\n#_ = fit_plot_cdf(ax[2,1], md_dt_hdw_d['50'].wait_T * tf, bins=_blog, plot_fit=True)\n\n\n_ = fit_plot_cdf(ax[0,0], md_dt_cdw_d['1'].wait_T * tf, bins=_blog,\n dist_label=\"$c$, dt=1 ps\")\n_ = fit_plot_cdf(ax[0,1], md_dt_cdw_d['5'].wait_T * tf, bins=_blog, plot_fit=True,\n dist_label=\"$c$, dt=5 ps\")\n_ = fit_plot_cdf(ax[1,0], md_dt_cdw_d['10'].wait_T * tf, bins=_blog, plot_fit=True,\n dist_label=\"$c$, dt=10 ps\")\n\n_ = fit_plot_cdf(ax[1,1], md_dt_cdw_d['25'].wait_T * tf, bins=_blog, plot_fit=True,\n dist_label=\"$c$, dt=25 ps\")\n#_ = fit_plot_cdf(ax[2,1], md_dt_cdw_d['50'].wait_T * tf, bins=_blog, plot_fit=True)\n\n\nfor a in ax.flat:\n a.semilogx()\n a.legend(loc=2)\n```\n\n## Uncertainty in $\\tau$ REMD from counts\n\n\n```python\nmd_r_dt_d = {}\nfor dt_fn_name in glob.glob(\"../dt*/c_md_dt*/rates_md_st1-3_sym_ln__27jan16.pickle\"):\n dt = dt_fn_name.split(\"/\")[1][2:]\n md_r_dt_d[dt] = pd.read_pickle(dt_fn_name)\n```\n\n# Lag-time dependency of lifetimes from REMD\n\n## Calculate \n\n\n```python\nremd_dt_tp_l = glob.glob(\"../dt*/remd_dt*/*_27jan16_tp.pickle\")\n```\n\n\n```python\nremd_dt_st_l = glob.glob(\"../dt*/remd_dt*/*_27jan16_tba.pickle\")\n```\n\n\n```python\nremd_dt_tp_d = {}\nremd_dt_st_d = {}\n\nfor fn in remd_dt_tp_l:\n dt = fn.split(\"/\")[1][2:]\n #print dt\n remd_dt_tp_d[dt] = pd.read_pickle(fn)\n\nfor fn in remd_dt_st_l:\n dt = fn.split(\"/\")[1][2:]\n remd_dt_st_d[dt] = pd.read_pickle(fn)\n```\n\n\n```python\nremd_dt_hdw_d = {}\nremd_dt_cdw_d = {}\nremd_dt_10w_d = {}\nremd_dt_11dw_d = {}\n\nbar = pyprind.ProgBar(len(remd_dt_tp_l)*4)\n\nfor k, _tp in remd_dt_tp_d.items():\n _tba = remd_dt_st_d[k]\n _tp0 = _tp[_tp.temperature==0]\n _tba0 = _tba[_tba.temperature==0]\n \n _dw_h = loop_dwell_trans_temp(_tp0,\n _tba0, trans_from_h, tu=float(k))\n \n _dw_c = loop_dwell_trans_temp(_tp0,\n _tba0, trans_from_c, tu=float(k))\n remd_dt_hdw_d[k] = _dw_h\n remd_dt_cdw_d[k] = _dw_c\n \n remd_dt_10w_d[k] = loop_dwell_trans_temp(_tp0, _tba0,\n trans_from_10,\n tu=float(k))\n remd_dt_11dw_d[k] = loop_dwell_trans_temp(_tp0, _tba0,\n trans_from_11,\n tu=float(k))\n \n bar.update()\n\n\n```\n\n 0% 100%\n [####### ] | ETA: 00:03:32\n\n## Uncertainty in $\\tau$ REMD from counts\n\n\n```python\nremd_r_dt_d = {}\nfor dt_fn_name in glob.glob(\"../dt*/remd_dt*/ala_remd_st1_dt*ps__27jan16_rates_sym_ln__27jan16.pickle\"):\n dt = dt_fn_name.split(\"/\")[1][2:]\n #print dt\n remd_r_dt_d[dt] = pd.read_pickle(dt_fn_name)\n```\n\n# Comparing lag-time dependences from MD and REMD\n\n## Lagtime independent rate coefficients\n\nassume that Ala2 is a quasi two-state system.\n\nFirst fit $k$\n\n\\begin{equation}\n\\frac{1}{<\\tau_F>_{\\text{app}}} + \\frac{1}{<\\tau_U>_{\\text{app}}} = \\frac{1 - \\exp(-k t) }{ t}\n\\end{equation}\n\nThen fit the relative populations.\n\n\\begin{equation}\n\\frac{1}{<\\tau_F>_{\\text{app}}} = \\frac{pU' (1 - \\exp(-kt)}{t} = k_{U \\leftarrow F}\n\\end{equation}\n\n### rate arrays\n\n\n```python\nmd_dt_cdw_d['1'].head() # wait in the coil state\n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
temperaturetypetrajstartstopweightwait_Twaitprev_state
00.0(0.0, 1.0, 0.0, 0.0)0.00.0145.01.080.0145.0NaN
10.0(0.0, 1.0, 0.0, 0.0)0.0148.0699.01.0401.0551.0NaN
20.0(0.0, 1.0, 0.0, 0.0)0.0706.0929.01.0111.0223.0NaN
30.0(0.0, 1.0, 0.0, 0.0)0.0939.01068.01.039.0129.0NaN
40.0(0.0, 1.0, 0.0, 0.0)0.01073.01355.01.089.0282.0NaN
\n
\n\n\n\n\n```python\nmd_dt_hdw_d['1'].head() # wait in the helix state\n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
temperaturetypetrajstartstopweightwait_Twaitprev_state
00.0(0.0, 0.0, 0.0, 1.0)0.00.063.01.063.063.0NaN
10.0(0.0, 0.0, 0.0, 1.0)0.068.0295.01.0149.0227.0NaN
20.0(0.0, 0.0, 0.0, 1.0)0.0301.0816.01.0114.0515.0NaN
30.0(0.0, 0.0, 0.0, 1.0)0.0821.01027.01.093.0206.0NaN
40.0(0.0, 0.0, 0.0, 1.0)0.01032.01264.01.0194.0232.0NaN
\n
\n\n\n\n\n```python\ntf = 1.0 / 1000.0\n```\n\n\n```python\nmd_lagtime_l = []\nmd_kc_l, md_kh_l = [], []\n\nfor dt in sorted([int(k) for k in md_dt_cdw_d.keys()]):\n md_lagtime_l.append(dt)\n _c = md_dt_cdw_d[str(dt)]\n _h = md_dt_hdw_d[str(dt)]\n md_kh_l.extend([_c[_c.temperature==0].wait_T.mean()])\n md_kc_l.extend([_h[_h.temperature==0].wait_T.mean()]) \nmd_kc_ar = 1.0 / (np.array(md_kc_l)*tf)\nmd_kh_ar = 1.0/ (np.array(md_kh_l)*tf)\nmd_kc_kh_ar = md_kc_ar + md_kh_ar\n\nmd_lagtime_ar = np.array(md_lagtime_l)\n```\n\n\n```python\nmd_kc_kh_ar\n```\n\n\n\n\n array([ 9.38203042, 9.25406793, 9.04817276, 8.72595441, 8.14774794,\n 7.87115053, 6.76873198, 5.07903558, 3.21880651, 1.44015157,\n 0.67970888, 0.33750258, 0.16809117])\n\n\n\n\n```python\n 1/ (md_dt_cdw_d['1'][md_dt_cdw_d['1'].temperature==0].wait_T.mean() / 1000.0)\n```\n\n\n\n\n 3.7332389118955613\n\n\n\n\n```python\n 1/ (md_dt_hdw_d['1'][md_dt_hdw_d['1'].temperature==0].wait_T.mean() / 1000.0)\n```\n\n\n\n\n 5.648791509837093\n\n\n\n\n```python\nmd_r_dt_d['1'][(md_r_dt_d['1'].temperature==0) & (md_r_dt_d['1'].type==(0,1,0,0))]\n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
temperaturetyperateeventssum_weightrev_eventsrev_sum_weightsym_weightstd_pstd_merr_merr_p
360.0(0, 1, 0, 0)3.676874964.0964.0965.0965.01929.03.7615513.5941030.0827710.084677
\n
\n\n\n\n\n```python\nremd_lagtime_l = []\nremd_kc_l, remd_kh_l = [], []\n\nfor dt in sorted([int(k) for k in remd_dt_hdw_d.keys()]):\n print dt\n remd_lagtime_l.append(dt)\n _c = remd_dt_cdw_d[str(dt)]\n _h = remd_dt_hdw_d[str(dt)]\n # tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)\n _c0 = _c[_c.temperature==0]\n _h0 = _h[_h.temperature==0]\n remd_kh_l.extend([np.average(_c0.wait_T / _c0.weight, weights=_c0.weight)])\n remd_kc_l.extend([np.average(_h0.wait_T / _h0.weight, weights=_h0.weight)]) \nremd_kc_ar = 1.0 / (np.array(remd_kc_l)*tf)\nremd_kh_ar = 1.0/ (np.array(remd_kh_l)*tf)\nremd_kc_kh_ar = remd_kc_ar + remd_kh_ar\n\nremd_lagtime_ar = np.array(remd_lagtime_l)\n```\n\n 1\n 2\n 5\n 10\n 20\n 25\n 50\n 100\n 200\n 500\n 1000\n 2500\n 5000\n\n\n\n```python\n1/ (remd_dt_hdw_d['1'][remd_dt_hdw_d['1'].temperature==0].wait_T.mean() / 1000.0)\n```\n\n\n\n\n 6.813287676068768\n\n\n\n\n```python\n1/ (remd_dt_cdw_d['1'][remd_dt_cdw_d['1'].temperature==0].wait_T.mean() / 1000.0)\n```\n\n\n\n\n 4.470001829156759\n\n\n\n\n```python\nmd_kc_kh_ar\n```\n\n\n\n\n array([ 9.38203042, 9.25406793, 9.04817276, 8.72595441, 8.14774794,\n 7.87115053, 6.76873198, 5.07903558, 3.21880651, 1.44015157,\n 0.67970888, 0.33750258, 0.16809117])\n\n\n\n\n```python\nremd_kc_kh_ar\n```\n\n\n\n\n array([ 9.93467867, 9.80515758, 9.4242392 , 9.11657163, 8.39944827,\n 7.99680929, 7.05139893, 5.20324584, 3.51125646, 1.32326228,\n 0.57495685, 0.28231217, 0.12527778])\n\n\n\n\n```python\nremd_lagtime_ar\n```\n\n\n\n\n array([ 1, 2, 5, 10, 20, 25, 50, 100, 200, 500, 1000,\n 2500, 5000])\n\n\n\n\n```python\npopt_kex_remd = curve_fit(sum_inv_lifetimes_two_state_lagtime,\n remd_lagtime_ar[:]*tf, remd_kc_kh_ar[:], p0=10)\nkex_fit_remd = popt_kex_remd[0][0]\nprint popt_kex_remd\n```\n\n (array([ 9.44258411]), array([[ 0.04464302]]))\n\n\n\n```python\npopt = curve_fit(sum_inv_lifetimes_two_state_lagtime, md_lagtime_ar[:]*tf, md_kc_kh_ar[:], p0=[10.0]) \nprint popt\nk_ex_fit = popt[0][0]\n```\n\n (array([ 9.00931758]), array([[ 0.03963532]]))\n\n\n\n```python\nfig, ax = plt.subplots(figsize=(3,3))\nplt.plot(md_lagtime_ar / 1000.0, md_kc_kh_ar, \".\")\nplt.plot(md_lagtime_ar/ 1000.0, sum_inv_lifetimes_two_state_lagtime(md_lagtime_ar*tf, k_ex_fit))\n\nplt.plot(remd_lagtime_ar / 1000.0, remd_kc_kh_ar, \".\")\nplt.plot(remd_lagtime_ar/ 1000.0, sum_inv_lifetimes_two_state_lagtime(remd_lagtime_ar*tf, kex_fit_remd))\n\nplt.loglog()\n```\n\n### $p'$ fit to determine lag time independent rate coefficients and populations\n#### MD\n\n\n```python\nf = lambda t, p : inv_lifetime_two_state_lagtime(t, p, k_ex_fit)\n```\n\n\n```python\npopt_pc = curve_fit(f, md_lagtime_ar[:] / 1000.0, md_kc_ar[:])\nprint popt_pc\n\npopt_ph = curve_fit(f, md_lagtime_ar[:]/1000.0, md_kh_ar[:])\nprint popt_ph\n```\n\n (array([ 0.59058842]), array([[ 0.00018217]]))\n (array([ 0.39729415]), array([[ 4.00681632e-05]]))\n\n\n\n```python\npc = popt_pc[0][0] / (popt_pc[0][0] + popt_ph[0][0])\nph = popt_ph[0][0] / (popt_pc[0][0] + popt_ph[0][0])\n```\n\n\n```python\nk_ex_fit * pc, k_ex_fit * ph\n```\n\n\n\n\n (5.3860638692643912, 3.6232537105041218)\n\n\n\n\n```python\nmd_r_dt_d['1'][(md_r_dt_d['1'].temperature==0) & (md_r_dt_d['1'].type.isin([(0,0,0,1), (0,1,0,0)]))]\n```\n\n\n\n\n
</div>

        temperature          type      rate  events  sum_weight  rev_events  rev_sum_weight  sym_weight     std_p     std_m     err_m     err_p
    0           0.0  (0, 0, 0, 1)  5.565691   965.0       965.0       964.0           964.0      1929.0  5.693867  5.440401  0.125291  0.128176
    36          0.0  (0, 1, 0, 0)  3.676874   964.0       964.0       965.0           965.0      1929.0  3.761551  3.594103  0.082771  0.084677

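The comparison above rests on the standard two-state relations: with populations $p_c$, $p_h$ and exchange rate $k_\mathrm{ex}$, detailed balance gives

$$k_\mathrm{ex} = k_{c\rightarrow h} + k_{h\rightarrow c}, \qquad k_{h\rightarrow c} = p_c\, k_\mathrm{ex}, \qquad k_{c\rightarrow h} = p_h\, k_\mathrm{ex},$$

so the products $k_\mathrm{ex}\, p_c$ and $k_\mathrm{ex}\, p_h$ printed above should be directly comparable to the transition rates listed in the table.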
\n\n\n\n#### REMD\n\n\n```python\nf_remd = lambda t, p: inv_lifetime_two_state_lagtime(t, p, kex_fit_remd)\n```\n\n\n```python\npopt_pc_remd = curve_fit(f, remd_lagtime_ar[:]*tf, remd_kc_ar[:])\nprint popt_pc_remd\npopt_ph_remd = curve_fit(f, remd_lagtime_ar[:]*tf, remd_kh_ar[:])\nprint popt_ph_remd\npc_remd = popt_pc_remd[0][0] / (popt_pc_remd[0][0] + popt_ph_remd[0][0])\nph_remd = popt_ph_remd[0][0] / (popt_pc_remd[0][0] + popt_ph_remd[0][0])\nprint pc_remd, ph_remd\n```\n\n (array([ 0.61844082]), array([[ 0.00022834]]))\n (array([ 0.41136535]), array([[ 6.47087531e-05]]))\n 0.600540993681 0.399459006319\n\n\n\n```python\nkex_fit_remd * pc, kex_fit_remd * ph\n```\n\n\n\n\n (5.6450847289329289, 3.7974993774089545)\n\n\n\n## Load minor state lag-time dependences expected for ideal rate kinetics\n\n\n```python\nsyn_ar = np.genfromtxt(\"../from-syn-trj/syn_dt_ala_st3_st4_c.txt\")\n```\n\n## Comparison plot\n\n\n```python\n! ls ../dt10/c_md_dt10/\n! mkdir -p plot\n```\n\n pt_md_st1-3__27jan16.txt rates_md_st1-3_sym_ln__27jan16.pickle\r\n rates_md_st1-3__27jan16.txt rep_ar_md_st1-3__27jan16.txt\r\n\n\n* error estimates from the count statistics\n\n\n```python\nremd_r_dt_d['1'][(remd_r_dt_d['1'].type.isin(trans_from_10))].head()\n```\n\n\n\n\n
</div>

        temperature          type      rate  events  sum_weight  rev_events  rev_sum_weight  sym_weight     std_p     std_m     err_m     err_p
    72          0.0  (1, 0, 0, 0)  0.148457     1.0         1.0         0.0             0.0         1.0  0.403548  0.054614  0.093843  0.255091
    73          1.0  (1, 0, 0, 0)  0.000000     NaN         NaN         0.0             0.0         0.0       NaN  0.000000  0.000000       NaN
    74          2.0  (1, 0, 0, 0)  0.270858     2.0         2.0         0.0             0.0         2.0  0.549331  0.133551  0.137306  0.278473
    75          3.0  (1, 0, 0, 0)  0.000000     NaN         NaN         0.0             0.0         0.0       NaN  0.000000  0.000000       NaN
    76          4.0  (1, 0, 0, 0)  0.000000     NaN         NaN         0.0             0.0         0.0       NaN  0.000000  0.000000       NaN

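The helper defined in the next cell turns these counts into asymmetric error bounds by assigning a relative uncertainty of $1/\sqrt{N}$ on a logarithmic scale, where $N$ is the summed symmetrized weight (`sym_weight`) of the observed transitions:

$$k_{\pm} = k\, e^{\pm 1/\sqrt{N}} \;\approx\; k\left(1 \pm \tfrac{1}{\sqrt{N}}\right) \quad \text{for large } N.$$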
\n\n\n\n\n```python\ndef _count_err_from_rate_calc(tau, r_df, trans):\n md_r = r_df[(r_df.type.isin(trans))]\n _sum = md_r.rate.sum()\n if _sum > 0:\n _N = md_r.sym_weight.sum()\n return tau*np.exp(1/_N**0.5), tau*np.exp(-1/_N**0.5) \n\n\ndef _count_err_at_temp(r_df, trans, temp):\n p, m = _count_err_from_rate_calc(r_df[r_df.temperature==temp], trans)\n return m, p\n```\n\n\n```python\nfig, ax = plt.subplots(2,2, figsize=(6.5,6))\nsns.set_style(\"ticks\")\n\nfor k, v in remd_dt_hdw_d.items():\n dt_r = remd_r_dt_d[str(k)]\n md_r = dt_r[(dt_r.temperature==0) & (dt_r.type==(0,0,0,1))]\n tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)\n lg1_remd = ax[0,1].errorbar(int(k)*tf, 1.0/tau_av*1000.0, yerr=[md_r.err_m.values ,\n md_r.err_p.values ],c=cl[1], fmt=\".\",\n label=r\"$h \\rightarrow c$\")\n \nfor k, v in remd_dt_cdw_d.items():\n dt_r = remd_r_dt_d[str(k)]\n md_r = dt_r[(dt_r.temperature==0) & (dt_r.type==(0,1,0,0))]\n tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)\n #lg2_remd, = ax[0,1].plot(int(k)*tf, 1.0 / tau_av*1000.0 , \"s\", c=cl[3]) \n lg2_remd = ax[0,1].errorbar(int(k)*tf, 1.0/tau_av*1000.0, yerr=[md_r.err_m.values ,\n md_r.err_p.values ],c=cl[3], fmt=\".\",\n label=r\"$c \\rightarrow h$\")\n \nfor k, v in remd_dt_10w_d.items():\n dt_r = remd_r_dt_d[str(k)]\n md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_10))]\n \n if np.any(v.wait_T > 0) and md_r.rate.sum() > 0 :\n tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)\n \n _m, _p = _count_err_from_rate_calc( 1.0/ tau_av * 1000.0,\n md_r, trans_from_10)\n ax[1,1].plot([int(k)*tf]*2, [ _m, _p], \"-\", c=cl[4] )\n lg3_remd, = ax[1,1].plot(int(k)*tf, 1.0/ tau_av * 1000.0 , \".\", c=cl[4], mec=cl[4]) # mew=1.0, mfc=\"None\"\n \nfor k, v in remd_dt_11dw_d.items():\n dt_r = remd_r_dt_d[str(k)]\n md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_11))]\n if np.any(v.wait_T.values > 0) and md_r.rate.sum() > 0 :\n\n tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)\n _m, _p = _count_err_from_rate_calc( 1.0/ tau_av * 1000.0,\n md_r, trans_from_11)\n ax[1,1].plot([int(k)*tf]*2, [ _m, _p], \"-\", c=cl[5] )\n lg4_remd, = ax[1,1].plot(int(k)*tf, 1.0/ tau_av * 1000.0 , \".\", c=cl[5], mec=cl[5]) # mew=1.0, mfc=\"None\" \n \n# mfpt h\nax[0,1].plot(remd_lagtime_ar*tf, inv_lifetime_two_state_lagtime(\n remd_lagtime_ar*tf, popt_ph[0][0], kex_fit_remd),\n c=cl[3])\n# mfpt c\nax[0,1].plot(remd_lagtime_ar*tf, inv_lifetime_two_state_lagtime(\n remd_lagtime_ar*tf, popt_pc[0][0], kex_fit_remd),\n c=cl[1])\n \n \nfor k, v in md_dt_hdw_d.items():\n dt_r = md_r_dt_d[str(k)]\n md_r = dt_r[(dt_r.temperature==0) & (dt_r.type==(0,0,0,1))]\n \n _temp =v.wait_T / v.weight\n if np.any(_temp.values > 0):\n tau_av, var_tau = weights_first_second_moment(v.weight, v.wait_T)\n lg1_md = ax[0,0].errorbar(int(k)*tf, 1.0/tau_av*1000.0, yerr=[md_r.err_m.values ,\n md_r.err_p.values ],c=cl[0], fmt=\".\",\n label=r\"$h \\rightarrow c$\")\n \n\n \nfor k, v in md_dt_cdw_d.items():\n dt_r = md_r_dt_d[str(k)]\n md_r = dt_r[(dt_r.temperature==0) & (dt_r.type==(0,1,0,0))]\n _temp =v.wait_T / v.weight\n if np.any(_temp.values > 0):\n tau_av, var_tau = weights_first_second_moment(v.weight, v.wait_T)\n #lg2_md, = ax[0,0].plot(int(k)*tf, 1.0/tau_av*1000.0, \"o\", c=cl[2])\n lg2_md = ax[0,0].errorbar(int(k)*tf, 1.0/tau_av*1000.0, yerr=[md_r.err_m.values ,\n md_r.err_p.values ],c=cl[2], fmt=\".\",\n label=r\"$c \\rightarrow h$\")\n \nax[0,0].legend([lg1_md, lg2_md], [r\"MD $h \\rightarrow c$\",\n r\"MD $c \\rightarrow h$\"],\n 
loc=3)\n \nax[0,1].legend([lg1_remd, lg2_remd], [r\"REMD $h \\rightarrow c$\",\n r\"REMD $c \\rightarrow h$\"], loc=3) \n \nfor k, v in md_dt_10dw_d.items():\n dt_r = md_r_dt_d[str(k)]\n md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_10))]\n if np.any(v.wait_T > 0) and md_r.rate.sum() > 0 :\n tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)\n \n _m, _p = _count_err_from_rate_calc( 1.0/ tau_av * 1000.0,\n md_r, trans_from_10)\n ax[1,0].plot([int(k)*tf]*2, [ _m, _p], \"-\", c=cl[4] )\n #tau_av, var_tau = weights_first_second_moment(v.weight, v.wait_T) \n lg3_md, = ax[1,0].plot(int(k)*tf, 1.0/tau_av*1000.0, \".\", c=cl[4], mfc=cl[4], mec=cl[4])\n \n\n\n\nfor k, v in md_dt_11dw_d.items():\n \n dt_r = md_r_dt_d[str(k)]\n md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_11))]\n \n if np.any(v.wait_T > 0) and md_r.rate.sum() > 0 :\n tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)\n \n _m, _p = _count_err_from_rate_calc( 1.0/ tau_av * 1000.0,\n md_r, trans_from_11)\n ax[1,0].plot([int(k)*tf]*2, [ _m, _p], \"-\", c=cl[5] )\n #tau_av, var_tau = weights_first_second_moment(v.weight, v.wait_T)\n lg4_md, = ax[1,0].plot(int(k)*tf, 1.0/tau_av*1000.0, \".\", c=cl[5], mfc=cl[5], mec=cl[5])\n\n \nax[1,0].legend([lg3_md, lg4_md], [r'MD $\\sum_{l(\\neq)3}k_{l3}$',\n r'MD $\\sum_{l(\\neq)4}k_{l4}$'],\n loc=3, handletextpad=-0.5,handleheight=0.5,\n borderaxespad=0.01)\n\n#mfpt h\nax[0,0].plot(md_lagtime_ar*tf, inv_lifetime_two_state_lagtime(\n md_lagtime_ar*tf, popt_ph[0][0], k_ex_fit), \"-\",\n c=cl[2])\n#mfpt c\nax[0,0].plot(md_lagtime_ar*tf, inv_lifetime_two_state_lagtime(\n md_lagtime_ar*tf, popt_pc[0][0], k_ex_fit),\n c=cl[0]) \n \nax[1,1].legend([lg3_remd, lg4_remd], [r'REMD $\\sum_{l(\\neq)3}k_{l3}$',\n r'REMD $\\sum_{l(\\neq)4}k_{l4}$'],\n loc=3, handletextpad=-0.5,handleheight=0.5,\n borderaxespad=0.01)\n \n \n\nfor a in ax.flat:\n a.loglog()\n a.set_xlabel(\"$\\mathregular{\\Delta t \\, [ns]}$\", fontsize=12)\n a.set_xlim(10**-3.2, 10**1)\n a.tick_params(axis='both', which='major', labelsize=12)\n a.tick_params(axis='both', which='both', top=True , right=True)\n \n\n \nfor i, a in enumerate([\"A\", \"B\"]):\n ax.flat[i].text(10**-5, 10**0.7, a, fontsize=20)\n ax.flat[i].set_ylabel(r\"$\\mathregular{k \\, [ns^{-1}]}$\", fontsize=12)\n\n \nfor i, a in enumerate([\"C\", \"D\"]):\n ax.flat[i+2].text(10**-5, 10**1.7, a, fontsize=20)\n ax.flat[i+2].set_ylabel(r\"$\\mathregular{k \\, [ns^{-1}]}$\", fontsize=12)\n \n \nax[1,0].plot(syn_ar[:,0], syn_ar[:,1], c='grey')\nax[1,0].plot(syn_ar[:,0], syn_ar[:,2], c='black') \nax[1,1].plot(syn_ar[:,0], syn_ar[:,1], c='grey')\nax[1,1].plot(syn_ar[:,0], syn_ar[:,2], c='black')\n\nfor a in ax[0,:]:\n a.set_yticks([10**-2, 10**-1, 10**0, 10**1])\n a.set_xticks([10**-3, 10**-2, 10**-1, 10**0, 10**1])\n \nfor a in ax[1,:]:\n a.set_yticks([10**-2, 10**-1, 10**0, 10**1, 10**2])\n a.set_xticks([10**-3, 10**-2, 10**-1, 10**0, 10**1])\n \nfig.tight_layout()\n\nfig.savefig(\"plot/ala_dt-rate.png\")\nfig.savefig(\"plot/ala_dt-rate.pdf\")\n```\n\n# Lag-time dependence of $\\mathrm{var}(\\tau)$\n\n\n```python\nvar_mcmc_h = np.genfromtxt(\"../from-syn-trj/var_dt/var_lifetimes_h_ns.txt\")\nvar_mcmc_c = np.genfromtxt(\"../from-syn-trj/var_dt/var_lifetimes_c_ns.txt\")\nvar_mcmc_st3 = np.genfromtxt(\"../from-syn-trj/var_dt/var_lifetimes_st3_ns.txt\")\nvar_mcmc_st4 = np.genfromtxt(\"../from-syn-trj/var_dt/var_lifetimes_st4_ns.txt\")\n```\n\n\n```python\nfig_l = [\"A\", \"B\", \"C\", \"D\"]\n```\n\n\n```python\nfig, ax = plt.subplots(2,2, 
figsize=(6.5,6))\n\nfor k, v in md_dt_hdw_d.items():\n _exp = check_moments_w(v.wait_T*tf, v.weight)\n lg_md, = ax[0,0].plot(int(k)*tf, _exp[1] , \"o\", c=cl[0])\n \nfor k, v in md_dt_cdw_d.items():\n _exp = check_moments_w(v.wait_T*tf, v.weight)\n lg2_md, = ax[0,0].plot(int(k)*tf, _exp[1] , \"o\", c=cl[2], label=\"$c \\rightarrow h$\")\n\nfor k, v in md_dt_10dw_d.items(): \n if np.any(v.wait_T > 0) :\n _exp = check_moments_w(v.wait_T*tf , v.weight)\n lg3_md, = ax[1,0].plot(int(k)*tf, _exp[1], \"o\", c=cl[4])\n \n\nfor k, v in md_dt_11dw_d.items(): \n if np.any(v.wait_T > 0) :\n _exp = check_moments_w(v.wait_T*tf , v.weight)\n lg4_md, = ax[1,0].plot(int(k)*tf, _exp[1], \"o\", c=cl[5])\n\n# exclude empty arrays \n \nfor k, v in remd_dt_hdw_d.items():\n _exp = check_moments_w( v.wait_T*tf, v.weight)\n lg1_remd, = ax[0,1].plot(int(k)*tf, _exp[1] , \"s\", c=cl[1]) \n \nfor k, v in remd_dt_cdw_d.items():\n _exp = check_moments_w(v.wait_T*tf , v.weight)\n lg2_remd, = ax[0,1].plot(int(k)*tf, _exp[1] , \"s\", c=cl[3], label=\"$c \\rightarrow h$\") \n \nfor k, v in remd_dt_10w_d.items():\n dt_r = remd_r_dt_d[str(k)]\n md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_10))]\n \n if np.any(v.wait_T > 0) and md_r.rate.sum() > 0 :\n _exp = check_moments_w(v.wait_T*tf , v.weight)\n lg3_remd, = ax[1,1].plot(int(k)*tf, _exp[1], \"s\", c=cl[4])\n \nfor k, v in remd_dt_11dw_d.items():\n dt_r = remd_r_dt_d[str(k)]\n md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_11))]\n \n if np.any(v.wait_T > 0) and md_r.rate.sum() > 0 :\n _exp = check_moments_w(v.wait_T*tf , v.weight)\n lg4_remd, = ax[1,1].plot(int(k)*tf, _exp[1], \"s\", c=cl[5])\n \n\nfor a in ax[0,:]:\n a.plot(var_mcmc_h[:,0]/1000.0, var_mcmc_h[:,2], c=\"k\")\n a.plot(var_mcmc_c[:,0]/1000.0, var_mcmc_c[:,2], c=\"grey\")\n a.set_ylim(10**-2, 10**3)\n \nfor a in ax[1,:]:\n a.plot(var_mcmc_st3[:,0]/1000.0, var_mcmc_st3[:,2], \"-D\", c='k',\n mfc=\"None\", mec='k', mew=1.0, zorder=1)\n a.plot(var_mcmc_st4[:,0]/1000.0, var_mcmc_st4[:,2], \"-D\", c='grey',\n mfc=\"None\", mec='grey', mew=1.0, zorder=1)\n a.set_ylim(10**-4, 10**3)\n \n \nfor a in ax.flat:\n a.loglog()\n a.set_xlabel(\"$\\mathregular{\\Delta t \\, [ns]}$\", fontsize=12)\n a.set_xlim(10**-3.2, 10**1) \n a.tick_params(axis='both', which='major', labelsize=12)\n a.tick_params(axis='both', which='both', top=True , right=True)\n \nfor i, a in enumerate(fig_l):\n ax.flat[i].text(10**-5, 10**3, a, fontsize=20)\n ax.flat[i].set_ylabel(r\"$\\mathregular{ var(\\tau) \\, [ns^2]}$\", fontsize=12)\n \n ax.flat[i].set_xticks([10**-3, 10**-2, 10**-1, 10**0, 10**1])\n\n \nax[0,1].legend([lg1_remd, lg2_remd], [r\"REMD $h \\rightarrow c$\",\n r\"REMD $c \\rightarrow h$\"], loc=2, borderaxespad=-0.1)\nax[0,0].legend([lg1_md, lg2_md], [r\"MD $h \\rightarrow c$\",\n r\"MD $c \\rightarrow h$\"], loc=2, borderaxespad=-0.1)\n\nax[1,1].legend([lg3_remd, lg4_remd], [r\"REMD state 3\",\n r\"REMD state 4\"], loc=2, borderaxespad=-0.1)\nax[1,0].legend([lg3_md, lg4_md], [r\"MD state 3\",\n r\"MD state 4\"], loc=2, borderaxespad=-0.1)\n\n\nfig.tight_layout()\n\nfig.savefig(\"plot/ala_dt-var_tau.png\")\nfig.savefig(\"plot/ala_dt-var_tau.pdf\")\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "7536eb9867908aa295fb7d354b6b71eb76d318e4", "size": 301977, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ala2-lag-time/comparison-dt/ala_dt-rates.ipynb", "max_stars_repo_name": "lukas-stelzl/kinetics-remd", "max_stars_repo_head_hexsha": 
"b915729f7bc069091738d008d75c29f432a2f797", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2017-07-20T19:04:15.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-27T14:18:33.000Z", "max_issues_repo_path": "ala2-lag-time/comparison-dt/ala_dt-rates.ipynb", "max_issues_repo_name": "lukas-stelzl/kinetics-remd", "max_issues_repo_head_hexsha": "b915729f7bc069091738d008d75c29f432a2f797", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-02-09T14:59:02.000Z", "max_issues_repo_issues_event_max_datetime": "2018-02-09T14:59:02.000Z", "max_forks_repo_path": "ala2-lag-time/comparison-dt/ala_dt-rates.ipynb", "max_forks_repo_name": "lukas-stelzl/kinetics-remd", "max_forks_repo_head_hexsha": "b915729f7bc069091738d008d75c29f432a2f797", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-07-06T12:31:43.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-13T06:34:33.000Z", "avg_line_length": 144.763662512, "max_line_length": 86602, "alphanum_fraction": 0.8488792193, "converted": true, "num_tokens": 12259, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5350984286266115, "lm_q2_score": 0.48828339529583464, "lm_q1q2_score": 0.2612796775472677}} {"text": "```python\n\"\"\"\nIPython Notebook v4.0 para python 2.7\nLibrer\u00edas adicionales: Ninguna.\nContenido bajo licencia CC-BY 4.0. C\u00f3digo bajo licencia MIT. (c) Sebastian Flores.\n\"\"\"\n\n# Configuracion para recargar m\u00f3dulos y librer\u00edas \n%reload_ext autoreload\n%autoreload 2\n\nfrom IPython.core.display import HTML\n\nHTML(open(\"style/iwi131.css\", \"r\").read())\n```\n\n\n\n\n\n\n\n\n\n
\n# IWI131\n## Programaci\u00f3n de Computadores\n\n### Sebasti\u00e1n Flores\n\nhttp://progra.usm.cl/ \n\nhttps://www.github.com/usantamaria/iwi131\n\n\n## \u00bfQu\u00e9 contenido aprenderemos?\n\n* Ciclo ***for***\n\n## \u00bfPorqu\u00e9 aprenderemos ese contenido?\n\n* Ciclo ***for***\n\nPorque la mayor parte de los ciclos while simplemente recorren una lista de n\u00fameros o elementos, y conviene simplificar su notaci\u00f3n.\n\n## Ciclo for\n\nEl ciclo for no permite hacer nada nuevo, s\u00f3lo representa una forma compacta de representar un ciclo while.\n\nAparte de ahorrar un par de l\u00edneas, \u00a1permite concentrar la atenci\u00f3n en lo que realmente importa!\n\n\n```python\nj = 0\nwhile j<10:\n print j, \n j += 1\n```\n\n 0 1 2 3 4 5 6 7 8 9\n\n\n\n```python\n# Toda la informaci\u00f3n de los valores utilizados est\u00e1 en range(10)\nfor j in range(10): \n print j, \n```\n\n 0 1 2 3 4 5 6 7 8 9\n\n\n## Ciclo for\n\nLa estructura del ciclo for es la siguiente:\n\n for element in lista:\n hacer algo con elemento\n \nLos elementos de la lista (o tupla) se van obteniendo de manera ordenada y secuencial.\n\nTambi\u00e9n su pueden utilizar break y continue (pero no es habitual).\n\n#### Ciclo for\n## Iterando con range\nLa funci\u00f3n ***range*** entrega una lista de valores, que puede utilizarse directamente con range:\n\n\n```python\nprint range(30)\nfor i in range(30):\n print i**2, \n```\n\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]\n 0 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841\n\n\n\n```python\nfor i in range(30, 60):\n print i, \n```\n\n 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59\n\n\n\n```python\nfor i in tuple(range(30,60,4)):\n print i, \n```\n\n 30 34 38 42 46 50 54 58\n\n\n#### Ciclo for\n## Comparaci\u00f3n con while\nEjercicio: Imprima todos los multiplos de 3 entre 1 y 80\n\n\n```python\nj = 1\nwhile j<80:\n if j%3==0:\n print j, \n j += 1\n```\n\n 3 6 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78\n\n\n\n```python\nfor j in range(3,80,3):\n print j, \n```\n\n 3 6 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78\n\n\n#### Ciclo for\n## Iterando con listas conocidas\nSi la lista es conocida podemos iterar sobre los \u00edndices con range o sobre los elementos directamente.\n\n\n```python\nmi_lista = [\"sebastian\", \"carlos\", \"jose\"]\n# Iterando con indices\nfor i in range(len(mi_lista)):\n print i, mi_lista[i]\n mi_lista[i] = str(i) + \"holamundo\"+mi_lista[i] * 2\n \nprint mi_lista\n```\n\n 0 sebastian\n 1 carlos\n 2 jose\n ['0holamundosebastiansebastian', '1holamundocarloscarlos', '2holamundojosejose']\n\n\n\n```python\nmi_lista = [\"sebastian\", \"carlos\", \"jose\"]\n# Iterando con indices\nfor nombre in mi_lista:\n for letra in nombre:\n print letra,\n print \"\"\n #print nombre\n #nombre = nombre * 2\n \nprint mi_lista\n```\n\n s e b a s t i a n \n c a r l o s \n j o s e \n ['sebastian', 'carlos', 'jose']\n\n\n#### Ciclo for\n## Iterando con tuplas conocidas\nSi la tupla es conocida podemos iterar sobre los \u00edndices con range o sobre los elementos directamente.\nNo hay diferencias a recorrer una lista.\n\n\n```python\nmi_tupla = (\"sebastian\", \"carlos\", \"jose\")\n\n# Iterando con indices\nfor i in range(len(mi_tupla)):\n print i, mi_tupla[i]\n #mi_tupla[i] = str(i) + \"holamundo\"+mi_tupla[i] * 2\n \nprint mi_tupla\n```\n\n 0 
sebastian\n 1 carlos\n 2 jose\n ('sebastian', 'carlos', 'jose')\n\n\n\n```python\nmi_tupla = (\"sebastian\", \"carlos\", \"jose\")\n# Iterando con indices\nfor nombre in mi_tupla:\n print nombre\n nombre = nombre*2\n\nprint mi_tupla\n```\n\n sebastian\n carlos\n jose\n ('sebastian', 'carlos', 'jose')\n\n\n#### Ciclo for\n## Ejemplo\nDadas las listas con nombres y horarios de los ramos, imprima el nombre de cada ramo con su horario.\n\n\n```python\nramos = ['Progra', 'Mate', 'Fisica']\nhoras = ['8:00', '10:00', '12:00',\"21:00\"]\nfor i in range(len(ramos)):\n #if ramos[i]==\"Progra\":\n # horas[i] = \"todo el dia\"\n print ramos[i] + \" a las \" + horas[i]\n\nprint \"\\nProducto cartesiano:\"\nfor ramo in ramos:\n for hora in horas:\n print ramo + \" a las \" + hora\n```\n\n Progra a las 8:00\n Mate a las 10:00\n Fisica a las 12:00\n \n Producto cartesiano:\n Progra a las 8:00\n Progra a las 10:00\n Progra a las 12:00\n Progra a las 21:00\n Mate a las 8:00\n Mate a las 10:00\n Mate a las 12:00\n Mate a las 21:00\n Fisica a las 8:00\n Fisica a las 10:00\n Fisica a las 12:00\n Fisica a las 21:00\n\n\n#### Ciclo for\n## Ejercicio 1: Promedio y Desviaci\u00f3n Estandar Revisitados\n\nCalcule el promedio y la desviaci\u00f3n est\u00e1ndar de $N$ datos, $x_1$, ..., $x_n$, ingresados por el usuario:\n\n$$ \\begin{align}\nmean &= \\frac{1}{n} \\sum_{i=1} x_i \\\\\n(std)^2 &= \\frac{1}{n} \\sum_{i=1} (x_i- mean)^2\n\\end{align}\n$$\n\nTransformando desde ciclo ***while*** a ciclo ***for***\n\n\n```python\ndef leer_datos(cantidad):\n datos = []\n for i in range(cantidad):\n datos.append( float(raw_input('Dato ' + str(i+1) + ': ')) )\n print \"datos\", datos\n return datos\n\ndef promedio(valores):\n return sum(valores) / float(len(valores))\n\ndef desviacion(valores):\n p = promedio(valores)\n suma = 0.0\n for numero in valores:\n suma += (numero - p) ** 2\n return ( suma / len(valores) ) ** 0.5\n\nn = int(raw_input('Cuantos datos? '))\ndatos = leer_datos(n)\nprint 'El promedio es ', promedio(datos)\nprint 'La desviacion estandar es', desviacion(datos)\n```\n\n Cuantos datos? 5\n Dato 1: 4\n datos [4.0]\n Dato 2: 5\n datos [4.0, 5.0]\n Dato 3: 3\n datos [4.0, 5.0, 3.0]\n Dato 4: 8\n datos [4.0, 5.0, 3.0, 8.0]\n Dato 5: 10\n datos [4.0, 5.0, 3.0, 8.0, 10.0]\n El promedio es 6.0\n La desviacion estandar es 2.60768096208\n\n\n#### Ciclo for\n## Ejercicio\nUn pol\u00edgono est\u00e1 determinado por la lista de sus v\u00e9rtices\nEscriba una funci\u00f3n perimetro(vertices) que entregue el per\u00edmetro del pol\u00edgono definido por la lista vertices:\n\n p = [(4, 1), (7, 2), (7, 4), (5, 9)]\n print perimetro(p) # 18.609700215601432\n\n\n```python\n# Definicion de funcion(es)\n\n# Uso de funciones\np = [(4, 1), (7, 2), (7, 4), (5, 9)]\nprint perimetro(p) # 18.609700215601432\n```\n\n#### Ciclo for\n## Ejercicio\nUn pol\u00edgono est\u00e1 determinado por la lista de sus v\u00e9rtices.\n\nEscriba una funci\u00f3n perimetro(vertices) que entregue el per\u00edmetro del pol\u00edgono definido por la lista vertices:\n\n p = [(4, 1), (7, 2), (7, 4), (5, 9)]\n print perimetro(p) # 18.609700215601432\n\n\n```python\n# Definicion de funcion(es)\ndef distancia(p1, p2):\n x1, y1 = p1\n x2, y2 = p2\n return 0. 
+ ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** .5\n\ndef perimetro(vertices):\n n = len(vertices)\n suma = 0.0\n for i in range(n):\n a = vertices[i]\n b = vertices[(i + 1) % n]\n suma += distancia(a, b)\n return suma\n\n# Uso de funciones\np = [(4, 1), (7, 2), (7, 4), (5, 9)]\nprint perimetro(p) # 18.609700215601432\n```\n\n 18.6097002156\n\n\n#### Ciclo for\n## Ejercicio Tipo Certamen: C2 1S 2015\n\nUna nueva cadena de cines creada por emprendedores de la USM, est\u00e1 ingresando al mercado cinematogr\u00e1fico. Por eso necesita de su ayuda para implementar ciertas funciones de la cartelera en python y con ellas manejar la cartelera. \n\nPara ello se cuenta con la informaci\u00f3n de cine en una lista de tuplas como cartelera. A modo de ejemplo, en cartelera la pel\u00edcula \u2019Gloria\u2019 (Chilena), creada en 2013, se exhibir\u00e1 el mes de \u2019enero\u2019 en las \u2019sala1\u2019 y \u2019sala2\u2019.\n\nCada elemento de la lista tiene la siguiente estructura:\n\n (mes, \n pais, \n nombre_pelicula, \n a\u00f1o_filmacion, \n [sala1, sala2, ...])\n\n\n```python\ncartelera = [\n('febrero', 'FRANCIA', 'El muelle', 1962, ['sala1', 'sala3']),\n('febrero', 'FRANCIA', 'La dama de honor', 2004, ['sala1', 'sala4']),\n('abril', 'RUSIA', 'Padre del soldado', 1964, ['sala3', 'sala2', 'sala4']),\n('enero', 'CHILE', 'Gloria', 2013, ['sala1', 'sala2']),\n('mayo', 'MEXICO', 'Cumbres', 2013, ['sala3', 'sala2']),\n('julio', 'FRANCIA', 'Melo', 1986, ['sala3', 'sala1']),\n('junio', 'BELGICA', 'Rondo', 2012, ['sala4', 'sala2']),\n('marzo', 'ALEMANIA', 'Tiempo de Canibales', 2014, ['sala1', 'sala2']),\n('marzo', 'ALEMANIA', 'Soul Kitchen', 2009, ['sala3', 'sala4']),\n]\n```\n\n#### Ejercicio Tipo Certamen: C2 1S 2015\n## Pregunta 1\n\nDesarrolle la funci\u00f3n pelicula_por_pais(cartelera, pais) que recibe la lista de la cartelera y el nombre de un pa\u00eds, y que retorne la lista con las pel\u00edculas realizadas en dicho pa\u00eds. Cada elemento de esta lista resultante es una tupla con el nombre de la pel\u00edcula y el a\u00f1o de filmaci\u00f3n.\n\n >>> pelicula_por_pais(cartelera, 'FRANCIA')\n [('El muelle', 1962), ('La dama de honor', 2004), ('Melo', 1986)]\n\n\n```python\n# Solucion estudiantes\n```\n\n#### Ejercicio Tipo Certamen: C2 1S 2015\n## Soluci\u00f3n pregunta 1\n\nDesarrolle la funci\u00f3n pelicula_por_pais(cartelera, pais) que recibe la lista de la cartelera y el nombre de un pa\u00eds, y que retorne la lista con las pel\u00edculas realizadas en dicho pa\u00eds. Cada elemento de esta lista resultante es una tupla con el nombre de la pel\u00edcula y el a\u00f1o de filmaci\u00f3n.\n\n >>> pelicula_por_pais(cartelera, 'FRANCIA')\n [('El muelle', 1962), ('La dama de honor', 2004), ('Melo', 1986)]\n\n\n```python\n# Solucion con while\ndef pelicula_por_pais(cartelera, pais):\n n = len(cartelera)\n j = 0\n peliculas_pais = []\n while j\n
\n \nQuantum Challenge\u306b\u6700\u9069\u306a\u74b0\u5883\u3067\u53d6\u308a\u7d44\u3093\u3067\u3044\u305f\u3060\u304f\u305f\u3081\u306b\u3001\u53f3\u4e0a\u306e\u30a2\u30ab\u30a6\u30f3\u30c8\u30e1\u30cb\u30e5\u30fc\u3088\u308a **light** \u30e2\u30fc\u30c9\u3092\u9078\u629e\u3055\u308c\u308b\u3053\u3068\u3092\u304a\u52e7\u3081\u3057\u307e\u3059\u3002\n\n## \u306f\u3058\u3081\u306b\uff1a\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u3068\u306f\uff1f\n\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u6700\u9069\u5316\u306f\u3001\u6295\u8cc7\u304b\u3089\u5f97\u3089\u308c\u308b\u30ea\u30bf\u30fc\u30f3\u3092\u6700\u5927\u5316\u3057\u305f\u3044\u3068\u8003\u3048\u308b\u4eba\u306b\u3068\u3063\u3066\u5927\u5909\u91cd\u8981\u306a\u30d7\u30ed\u30bb\u30b9\u3067\u3059\u3002\n\u6295\u8cc7\u306f\u901a\u5e38\u3001\u3044\u308f\u3086\u308b\u8cc7\u7523\uff08\u682a\u5f0f\u3001\u50b5\u6a29\u3001\u50b5\u5238\u3001\u30c7\u30ea\u30d0\u30c6\u30a3\u30d6\u3001\u30b3\u30fc\u30eb\u3001\u30d7\u30c3\u30c8\u306a\u3069\uff09\u306e\u96c6\u307e\u308a\u3067\u3042\u308a\u3001\u3053\u306e\u8cc7\u7523\u306e\u96c6\u307e\u308a\u3092**\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa**\u3068\u547c\u3073\u307e\u3059\u3002\n
\n\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u6700\u9069\u5316\u306e\u30b4\u30fc\u30eb\u306f\u3001\u30ea\u30b9\u30af\uff08\u91d1\u92ad\u7684\u640d\u5931\uff09\u3092\u6700\u5c0f\u5316\u3057\u3001\u30ea\u30bf\u30fc\u30f3\uff08\u91d1\u92ad\u7684\u5229\u76ca\uff09\u3092\u6700\u5927\u5316\u3059\u308b\u3053\u3068\u3067\u3059\u3002\u3057\u304b\u3057\u3001\u3053\u306e\u30d7\u30ed\u30bb\u30b9\u306f\u305d\u3046\u5358\u7d14\u3067\u306f\u3042\u308a\u307e\u305b\u3093\u3002\u30ea\u30b9\u30af\u3068\u30ea\u30bf\u30fc\u30f3\u306f\u901a\u5e38\u30c8\u30ec\u30fc\u30c9\u30aa\u30d5\u306e\u95a2\u4fc2\u306b\u3042\u308a\u3001\u3053\u308c\u304c\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u6700\u9069\u5316\u3092\u5c11\u3057\u8907\u96d1\u306b\u3057\u3066\u3044\u307e\u3059\u3002\u30cf\u30ea\u30fc\u30fb\u30de\u30fc\u30b3\u30a6\u30a3\u30c3\u30c4\u535a\u58eb\u304c1952\u5e74\u306b\u767a\u8868\u3057\u305f\u300e\u73fe\u4ee3\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u7406\u8ad6\u300f\u3067\u306f\u3001\u300c\u30ea\u30b9\u30af\u306f\u3001\u3088\u308a\u9ad8\u3044\u5831\u916c\u3092\u5f97\u308b\u305f\u3081\u306e\u672c\u8cea\u7684\u306a\u8981\u7d20\u3067\u3042\u308b\u300d\u3068\u8ff0\u3079\u3066\u3044\u307e\u3059\u3002\n\n
\n
\n \n**Modern Portfolio Theory** \n\nAn investment theory based on risk aversion: given two portfolios with the same expected return, investors prefer the less risky one. Investors can construct a portfolio that maximizes expected return for a given level of market risk, and the theory emphasizes that risk is an inherent part of higher reward. Modern portfolio theory was introduced by Dr. Harry Markowitz in 1952 as one of the most important and influential economic theories of finance and investment, and for this work Markowitz received the Nobel Prize in Economics in 1990.

\nReference: [**Modern Portfolio Theory**](https://en.wikipedia.org/wiki/Modern_portfolio_theory)\n\n## Challenge\n\n<div>
\n\n**Goal**\n\n\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u6700\u9069\u5316\u306f\u3001\u6295\u8cc7\u304b\u3089\u5f97\u3089\u308c\u308b\u30ea\u30bf\u30fc\u30f3\u3092\u6700\u5927\u5316\u3057\u305f\u3044\u3068\u8003\u3048\u308b\u4eba\u306b\u3068\u3063\u3066\u5927\u5909\u91cd\u8981\u306a\u30d7\u30ed\u30bb\u30b9\u3067\u3059\u3002\u3053\u306e\u6700\u521d\u306e\u30c1\u30e3\u30ec\u30f3\u30b8\u3067\u306f\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u306e\u57fa\u672c\u7684\u306a\u7406\u8ad6\u3068\u3001\u91cf\u5b50\u30b3\u30f3\u30d4\u30e5\u30fc\u30bf\u30fc\u3067\u89e3\u3051\u308b\u3088\u3046\u306b\u554f\u984c\u3092\u5b9a\u5f0f\u5316\u3059\u308b\u65b9\u6cd5\u3092\u5b66\u3073\u307e\u3059\u3002\u305d\u306e\u904e\u7a0b\u3067\u3001Qiskit\u306eFinance\u30a2\u30d7\u30ea\u30b1\u30fc\u30b7\u30e7\u30f3\u30af\u30e9\u30b9\u3092\u3064\u304b\u3063\u3066\u3001\u554f\u984c\u3092\u52b9\u7387\u7684\u306b\u89e3\u6c7a\u3059\u308b\u65b9\u6cd5\u3092\u5b66\u3073\u307e\u3059\u3002\n\n1. **Challenge 1a**: Qiskit\u306eFinance\u30e2\u30b8\u30e5\u30fc\u30eb\u306ePortfolioOptimization()\u30e1\u30bd\u30c3\u30c9\u3092\u4f7f\u3063\u3066\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u3092\u4e8c\u6b21\u30d7\u30ed\u30b0\u30e9\u30e0\u306b\u5909\u63db\u3059\u308b\u65b9\u6cd5\u3092\u5b66\u3073\u307e\u3059\u3002\n \n2. **Challenge 1b**: Challenge 1a\u3067\u4f5c\u6210\u3057\u305f\u30a4\u30f3\u30b9\u30bf\u30f3\u30b9\u306b\u57fa\u3065\u3044\u3066\u30014\u9298\u67c4\u306e\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u554f\u984c\u3092VQE (Variational Quantum Eigensolver:\u5909\u5206\u91cf\u5b50\u56fa\u6709\u5024\u30bd\u30eb\u30d0\u30fc) \u3092\u3064\u304b\u3063\u3066\u89e3\u304d\u307e\u3059\u3002\n \n \n3. **Challenge 1c**: QAOA(Quantum Approximate Optimazation Algorithm:\u91cf\u5b50\u8fd1\u4f3c\u6700\u9069\u5316\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0)\u3092\u4f7f\u3063\u3066\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u304c\u3068\u308a\u3046\u308b\uff14\u3064\u306e\u30a2\u30bb\u30c3\u30c8\u304b\u3089\u3001\u4e88\u7b97=3\u304b\u3064\u3001\u3072\u3068\u3064\u306e\u9298\u67c4\u306b\u3064\u304d\u30c0\u30d6\u30eb\u30a6\u30a7\u30a4\u30c8\u307e\u3067\u3092\u9078\u629e\u3067\u304d\u308b\u5834\u5408\u306e\u554f\u984c\u3092\u89e3\u304d\u307e\u3059\u3002\n\n
\n
\n\n\u4e8b\u524d\u5b66\u7fd2\u3068\u3057\u3066[**Qiskit Finance Demo Session with Julien Gacon**](https://youtu.be/UtMVoGXlz04?t=2022)\u306e\u8996\u8074\u3068\u5bfe\u5fdc\u3059\u308b[**demo notebook**](https://github.com/qiskit-community/qiskit-application-modules-demo-sessions/tree/main/qiskit-finance)\u3092\u30c1\u30a7\u30c3\u30af\u3057\u3066\u3001Qiskit\u306eFinance\u30e2\u30b8\u30e5\u30fc\u30eb\u3068\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u3078\u306e\u5fdc\u7528\u306b\u3064\u3044\u3066\u5b66\u3076\u3053\u3068\u3092\u304a\u52e7\u3081\u3057\u307e\u3059\u3002\n\n
\n\n## 1. \u52b9\u7387\u7684\u30d5\u30ed\u30f3\u30c6\u30a3\u30a2\u3092\u6c42\u3081\u3066\n\u73fe\u4ee3\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u7406\u8ad6\uff08MPT\uff09\u306f\u3001\u6295\u8cc7\u5bb6\u306b\u3068\u3063\u3066\u7406\u60f3\u7684\u306a\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u3092\u6c7a\u5b9a\u3059\u308b\u305f\u3081\u306e\u7406\u8ad6\u7684\u306a\u67a0\u7d44\u307f\u3092\u63d0\u4f9b\u3057\u307e\u3059\u3002\u540c\u7406\u8ad6\u306f\u5e73\u5747\u5206\u6563\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u7406\u8ad6\u3068\u3082\u547c\u3070\u308c\u3001\u6295\u8cc7\u5bb6\u304c\u4ee5\u4e0b\u306e\u3088\u3046\u306a\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u96c6\u5408\u304b\u3089\u6700\u9069\u306a\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u3092\u9078\u629e\u3059\u308b\u3053\u3068\u3092\u60f3\u5b9a\u3057\u3066\u3044\u307e\u3059\u3002\n- \u4e0e\u3048\u3089\u308c\u305f\u30ea\u30b9\u30af\u306b\u5bfe\u3057\u3066\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u3092\u6700\u5927\u5316\u3059\u308b\u3002\n- \u4e0e\u3048\u3089\u308c\u305f\u30ec\u30d9\u30eb\u306e\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u306b\u5bfe\u3057\u3066\u3001\u30ea\u30b9\u30af\u3092\u6700\u5c0f\u5316\u3059\u308b\u3002\n\n\u4e0b\u56f3\u306f\u3001\u6a2a\u8ef8\u304c\u30ea\u30b9\u30af\u3001\u7e26\u8ef8\u304c\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u3092\u793a\u3059\u3001\u73fe\u4ee3\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u7406\u8ad6\u306e\u6700\u5c0f\u5206\u6563\u30d5\u30ed\u30f3\u30c6\u30a3\u30a2\u3067\u3059\u3002 \n\n
\n\nA\u3068B\u306e2\u3064\u306e\u9298\u67c4\u304c\u3042\u308a\u3001\u3053\u306e2\u3064\u306e\u9298\u67c4\u306e\u3069\u3061\u3089\u304b\u306b\u6295\u8cc7\u3059\u308b\u3068\u3044\u3046\u72b6\u6cc1\u3092\u8003\u3048\u3066\u307f\u307e\u3057\u3087\u3046\u3002\u3042\u308b\u3044\u306f\u3001A\u306b10\uff05\u3001B\u306b90\uff05\u3001A\u306b20\uff05\u3001B\u306b80\uff05\u3001A\u306b70\uff05\u3001B\u306b30\uff05\u306a\u3069\u3068\u6295\u8cc7\u3059\u308b\u3053\u3068\u3082\u3067\u304d\u307e\u3059\u3002\u3053\u3046\u3057\u305f2\u3064\u306e\u9298\u67c4\u3092\u8003\u3048\u308b\u5834\u5408\u306e\u5358\u7d14\u306a\u30b1\u30fc\u30b9\u4e0b\u3067\u3082\u305f\u304f\u3055\u3093\u306e\u6295\u8cc7\u30d1\u30bf\u30fc\u30f3\u306e\u7d44\u307f\u5408\u308f\u305b\u304c\u8003\u3048\u3089\u308c\u307e\u3059\u3002\u3055\u3089\u306b\u4f55\u5343\u3082\u306e\u9298\u67c4\u3092\u691c\u8a0e\u3059\u308b\u5834\u5408\u306b\u306f\u3001\u81a8\u5927\u306a\u6570\u306e\u7d44\u307f\u5408\u308f\u305b\u306b\u306a\u308a\u307e\u3059\u3002\n\n\u6700\u5c0f\u5206\u6563\u30d5\u30ed\u30f3\u30c6\u30a3\u30a2\u306f\u3001\u3042\u308b\u60f3\u5b9a\u3055\u308c\u308b\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u306b\u5bfe\u3057\u3066\u9054\u6210\u53ef\u80fd\u306a\u6700\u5c0f\u5206\u6563\u3092\u793a\u3057\u307e\u3059\u3002\u3042\u308b\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u6700\u5c0f\u5206\u6563\u30d5\u30ed\u30f3\u30c6\u30a3\u30a2\u3092\u69cb\u7bc9\u3059\u308b\u306b\u306f\n\n- \u904e\u53bb\u306e\u30c7\u30fc\u30bf\u3092\u7528\u3044\u3066\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u5404\u500b\u5225\u9298\u67c4\u306e\u5e73\u5747\u3001\u5206\u6563\u3001\u304a\u3088\u3073\u5404\u7d44\u306e\u9298\u67c4\u306e\u76f8\u95a2\u3092\u63a8\u5b9a\u3059\u308b\u3002\n- \u30b3\u30f3\u30d4\u30e5\u30fc\u30bf\u30d7\u30ed\u30b0\u30e9\u30e0\u3092\u4f7f\u3063\u3066\u3001\u4e8b\u524d\u306b\u8a2d\u5b9a\u3057\u305f\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u3054\u3068\u306b\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u5206\u6563\u3092\u6700\u5c0f\u5316\u3059\u308b\u5168\u9298\u67c4\u306e\u91cd\u307f\u3092\u6c42\u3081\u308b\u3002\n- 
\u4e0a\u8a18\u3067\u6c7a\u5b9a\u3057\u305f\u6700\u5c0f\u5206\u6563\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u3059\u3079\u3066\u306b\u3064\u3044\u3066\u3001\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u3068\u5206\u6563\u3092\u8a08\u7b97\u3057\u30012\u3064\u306e\u5909\u6570\u3092\u30b0\u30e9\u30d5\u5316\u3059\u308b\u3002\n\n\u3053\u306e\u3068\u304d\u3001\u6295\u8cc7\u5bb6\u306f\u6700\u5c0f\u5206\u6563\u70b9\u4ee5\u4e0b\u306e\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u3092\u4fdd\u6709\u3057\u305f\u3044\u3068\u306f\u601d\u308f\u306a\u3044\u3067\u3057\u3087\u3046\u3002\u6295\u8cc7\u5bb6\u306f\u6700\u5c0f\u5206\u6563\u30d5\u30ed\u30f3\u30c6\u30a3\u30a2\u306e\u6b63\u306e\u65b9\u5411\u306b\u50be\u659c\u3057\u305f\u90e8\u5206\u306b\u6cbf\u3063\u3066\u3001\u5e38\u306b\u9ad8\u3044\u30ea\u30bf\u30fc\u30f3\u3092\u5f97\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u305f\u3081\u3001\u6700\u5c0f\u5206\u6563\u30d5\u30ed\u30f3\u30c6\u30a3\u30a2\u306e\u6b63\u306e\u50be\u659c\u90e8\u5206\u306f\u3001**\u52b9\u7387\u7684\u306a\u30d5\u30ed\u30f3\u30c6\u30a3\u30a2**\u3068\u547c\u3070\u308c\u3066\u3044\u307e\u3059\u3002\n\n\u6700\u9069\u306a\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u304c\u3069\u3053\u306b\u3042\u308b\u304b\u3092\u793a\u3059**\u52b9\u7387\u7684\u30d5\u30ed\u30f3\u30c6\u30a3\u30a2**\u3092\u6c42\u3081\u308b\u3053\u3068\u306b\u3088\u3063\u3066\u3001\u6295\u8cc7\u5bb6\u306f\u3088\u308a\u30ea\u30bf\u30fc\u30f3\u3092\u671f\u5f85\u3067\u304d\u305d\u3046\u306a\u69d8\u3005\u306a\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306f\u3069\u308c\u304b\u3092\u7d5e\u308a\u8fbc\u3080\u3053\u3068\u304c\u3067\u304d\u307e\u3059\u3002\n\n## 2. \u6f14\u7fd2\u306e\u76ee\u7684\n\u3053\u306e\u6f14\u7fd2\u306e\u76ee\u7684\u306f\u3001\u91cf\u5b50\u7684\u30a2\u30d7\u30ed\u30fc\u30c1\u3092\u7528\u3044\u3066\u3001\u3042\u308b\u7279\u5b9a\u306e\u30ea\u30b9\u30af\u306b\u5bfe\u3059\u308b\u52b9\u7387\u7684\u30d5\u30ed\u30f3\u30c6\u30a3\u30a2\u3092\u898b\u3064\u3051\u308b\u3053\u3068\u3067\u3059\u3002Qiskit\u306eFinance\u30a2\u30d7\u30ea\u30b1\u30fc\u30b7\u30e7\u30f3\u30e2\u30b8\u30e5\u30fc\u30eb\u3092\u4f7f\u7528\u3057\u3066\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u554f\u984c\u3092\u4e8c\u6b21\u8a08\u753b\u554f\u984c\u306b\u5909\u63db\u3059\u308b\u3053\u3068\u3067\u3001VQE\u3084QAOA\u306a\u3069\u306e\u5909\u5206\u91cf\u5b50\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u4f7f\u7528\u3057\u3066\u6700\u9069\u5316\u554f\u984c\u3092\u89e3\u6c7a\u3057\u3066\u3044\u304d\u307e\u3059\u3002\u305d\u308c\u3067\u306f\u65e9\u901f\u3001\u4eca\u56de\u306e\u30c1\u30e3\u30ec\u30f3\u30b8\u554f\u984c\u3092\u898b\u3066\u3044\u304d\u307e\u3057\u3087\u3046\u3002\n\n## 3. \u56db\u9298\u67c4\u306e\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u554f\u984c\n\n\u3053\u3053\u3067\u30014\u9298\u67c4(e.g. STOCK0, STOCK1, STOCK2, STOCK3) \u306e\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u554f\u984c\u3092\u8003\u3048\u3066\u307f\u307e\u3057\u3087\u3046\u3002\u76ee\u6a19\u306f\u3001\u30ea\u30b9\u30af\u3068\u30ea\u30bf\u30fc\u30f3\u9593\u306e\u30c8\u30ec\u30fc\u30c9\u30aa\u30d5\u3092\u6700\u5c0f\u5316\u3059\u308b2\u3064\u306e\u8cc7\u7523\u306e\u7d44\u307f\u5408\u308f\u305b\u3092\u898b\u3064\u3051\u308b\u3053\u3068\u3067\u3059\u3002\n\n## 4. \u554f\u984c\u306e\u5b9a\u5f0f\u5316\n\n\u6700\u9069\u5316\u554f\u984c\u306b\u3064\u3044\u3066\u306f\u3001\u691c\u8a0e\u3057\u3066\u3044\u308b\u554f\u984c\u306e\u5b9a\u5f0f\u5316\u304b\u3089\u884c\u3044\u307e\u3059\u3002
\nThe function that describes the efficient frontier can be formulated as a quadratic program with linear constraints, as shown below.\nThe first term of the objective (weighted by the risk factor) relates to risk, and the second term relates to return. In general, the function to be optimized is called the objective function, and the goal here is to minimize this trade-off between risk and return.

\n\n
$\\min_{x \\in \\{0, 1\\}^n}: $ $q x^n\\Sigma x$ - $\\mu^n x$
\n\n
subject to: $1^T x = B$
\n\n\n- $x$ \u306f\u8cc7\u7523\u914d\u5206\u3092\u793a\u3059\u3002\n- $\u03a3$ (sigma) \u306f\u3001\u5171\u5206\u6563\u884c\u5217\u3067\u3059\u3002\n\u5171\u5206\u6563\u884c\u5217\u306f\u30012\u3064\u306e\u8cc7\u7523\u4fa1\u683c\u306e\u5024\u52d5\u304d\u304c\u4e92\u3044\u306b\u3069\u306e\u3088\u3046\u306b\u76f8\u95a2\u3057\u3066\u3044\u308b\u304b\u3092\u7d71\u8a08\u7684\u306b\u793a\u3059\u6307\u6a19\u3067\u3001\u91d1\u878d\u5de5\u5b66\u3067\u5e83\u304f\u5fdc\u7528\u3055\u308c\u3066\u3044\u307e\u3059\u3002\u5171\u5206\u6563\u304c\u9ad8\u3044\u3068\u3044\u3046\u3053\u3068\u306f\u3001\u682a\u4fa1\u306e\u5024\u52d5\u304d\u304c\u6fc0\u3057\u304f\u3001\u30dc\u30e9\u30c6\u30a3\u30ea\u30c6\u30a3\u30fc\u304c\u9ad8\u3044\u3053\u3068\u3092\u610f\u5473\u3057\u307e\u3059\u3002\u305d\u306e\u305f\u3081\u3001\u3053\u306e\u6307\u6570\u306f\u3001\u5168\u4f53\u306e\u30ea\u30b9\u30af\u3092\u6e1b\u3089\u3059\u305f\u3081\u306b\u3069\u306e\u3088\u3046\u306a\u8cc7\u7523\u3092\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306b\u542b\u3081\u308b\u3079\u304d\u304b\u3092\u5224\u65ad\u3059\u308b\u306e\u306b\u5f79\u7acb\u3061\u307e\u3059\u3002\n- $q$ i\u306f\u30ea\u30b9\u30af\u30d5\u30a1\u30af\u30bf\u30fc\uff08\u30ea\u30b9\u30af\u8a31\u5bb9\u5ea6\uff09\u3068\u547c\u3070\u308c\u3001\u500b\u4eba\u306e\u30ea\u30b9\u30af\u3092\u53d6\u308b\u610f\u601d\u3084\u80fd\u529b\u3092\u8a55\u4fa1\u3057\u305f\u3082\u306e\u3067\u3059\u3002\n\u4f8b\u3048\u3070\u3001\u81ea\u52d5\u5316\u3055\u308c\u305f\u30d5\u30a1\u30a4\u30ca\u30f3\u30b7\u30e3\u30eb\u30fb\u30a2\u30c9\u30d0\u30a4\u30b8\u30f3\u30b0\u30fb\u30b5\u30fc\u30d3\u30b9\u3001\u3044\u308f\u3086\u308b\u30ed\u30dc\u30fb\u30a2\u30c9\u30d0\u30a4\u30b8\u30f3\u30b0\u3092\u5229\u7528\u3059\u308b\u969b\u306b\u306f\u3001\u901a\u5e38\u3001\u7570\u306a\u308b\u30ea\u30b9\u30af\u8a31\u5bb9\u5ea6\u304c\u8868\u793a\u3055\u308c\u307e\u3059\u3002\u3053\u306eq\u5024\u306f\u305d\u306e\u3088\u3046\u306a\u3082\u306e\u3068\u540c\u3058\u3067\u30010\u304b\u30891\u306e\u9593\u306e\u5024\u3092\u3068\u308a\u307e\u3059\u3002\n- $\ud835\udf41$ (mu) \u306f\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u3067\u3001\u5f53\u7136\u306a\u304c\u3089\u6700\u5927\u5316\u3092\u76ee\u6307\u3057\u307e\u3059\u3002\n- $n$ \u306f\u9078\u629e\u53ef\u80fd\u306a\u30a2\u30bb\u30c3\u30c8\u306e\u6570\u3067\u3059\u3002\n- $B$ \u306fBudget\uff08\u4e88\u7b97\uff09\u306e\u7565\u3067\u3059\u304c\u3001\u3053\u3053\u3067\u3044\u3046\u300c\u4e88\u7b97\u300d\u3068\u306f\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306b\u5272\u308a\u5f53\u3066\u308b\u3053\u3068\u306e\u3067\u304d\u308b\u8cc7\u7523\u306e\u6570\u3092\u610f\u5473\u3057\u307e\u3059\u3002\n\n#### \u30b4\u30fc\u30eb:\n\u30b4\u30fc\u30eb\u306f**x**\u5024\u3092\u898b\u3064\u3051\u308b\u3053\u3068\u3067\u3059\u3002\u3053\u3053\u3067\u3044\u3046x\u5024\u3068\u306f\u3001\u3069\u306e\u8cc7\u7523\u3092\u9078\u3076\u304b\uff08\ud835\udc65[\ud835\udc56]=1\uff09\u3001\u3069\u306e\u8cc7\u7523\u3092\u9078\u3070\u306a\u3044\u304b\uff08\ud835\udc65[\ud835\udc56]=0\uff09\u3092\u793a\u3059\u3082\u306e\u3067\u3059\u3002\n\n\n#### \u4eee\u5b9a:\n\u3053\u306e\u6f14\u7fd2\u3067\u306f\u3001\u7c21\u5358\u306e\u305f\u3081\u306b\u4ee5\u4e0b\u3092\u4eee\u5b9a\u3057\u3066\u3044\u307e\u3059\u3002\n- \u3059\u3079\u3066\u306e\u8cc7\u7523\u306e\u4fa1\u683c\u306f\u540c\u3058\u3067\u3042\u308b\uff081\u306b\u6b63\u898f\u5316\uff09\u3002\n- 
\u4e88\u7b97$B$\u3092\u3059\u3079\u3066\u4f7f\u308f\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3001\u3064\u307e\u308a\u3001\u6b63\u78ba\u306b$B$\u500b\u306e\u8cc7\u7523\u3092\u9078\u629e\u3057\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3002\n- \u7b49\u5f0f\u5236\u7d04 $1^n x = B$ \u306f\u30da\u30ca\u30eb\u30c6\u30a3\u9805 $(1^n x - B)^2$ \u306b\u30de\u30c3\u30d4\u30f3\u30b0\u3055\u308c\u3001\u3053\u306e\u30da\u30ca\u30eb\u30c6\u30a3\u9805\u306f\u30d1\u30e9\u30e1\u30fc\u30bf\u3067\u30b9\u30b1\u30fc\u30ea\u30f3\u30b0\u3055\u308c\u3001\u76ee\u7684\u95a2\u6570\u304b\u3089\u6e1b\u7b97\u3055\u308c\u308b\u3002\n\n\n\n## Step 1. \u5fc5\u8981\u306a\u30e9\u30a4\u30d6\u30e9\u30ea\u3092\u30a4\u30f3\u30dd\u30fc\u30c8\u3057\u307e\u3059\u3002\n\n\n```python\n#Let us begin by importing necessary libraries.\nfrom qiskit import Aer\nfrom qiskit.algorithms import VQE, QAOA, NumPyMinimumEigensolver\nfrom qiskit.algorithms.optimizers import *\nfrom qiskit.circuit.library import TwoLocal\nfrom qiskit.utils import QuantumInstance\nfrom qiskit.utils import algorithm_globals\nfrom qiskit_finance import QiskitFinanceError\nfrom qiskit_finance.applications.optimization import PortfolioOptimization\nfrom qiskit_finance.data_providers import *\nfrom qiskit_optimization.algorithms import MinimumEigenOptimizer \nfrom qiskit_optimization.applications import OptimizationApplication\nfrom qiskit_optimization.converters import QuadraticProgramToQubo\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport datetime\nimport warnings\nfrom sympy.utilities.exceptions import SymPyDeprecationWarning\nwarnings.simplefilter(\"ignore\", SymPyDeprecationWarning)\n```\n\n## Step 2. \u6642\u7cfb\u5217\u30c7\u30fc\u30bf\uff08\u91d1\u878d\u30c7\u30fc\u30bf\uff09\u306e\u751f\u6210\n\u307e\u305a\u3001\u5168\u9298\u67c4\u6570n=4\u306e\u30e9\u30f3\u30c0\u30e0\u306a\u6642\u7cfb\u5217\u91d1\u878d\u30c7\u30fc\u30bf\u3092\u751f\u6210\u3057\u3066\u307f\u307e\u3057\u3087\u3046\u3002\u3053\u308c\u306b\u306fRandomDataProvider\u3092\u4f7f\u3044\u307e\u3059\u3002\u3053\u3053\u3067\u306f\u3042\u308b4\u9298\u67c4\u306b\u3064\u3044\u3066\u3001\u904e\u53bb\u306b\u3055\u304b\u306e\u307c\u3063\u3066\u30011955\u5e7411\u67085\u65e5\u304b\u30891985\u5e7410\u670826\u65e5\u307e\u3067\u306e\u91d1\u878d\u30c7\u30fc\u30bf\u3092\u53d6\u5f97\u3057\u3066\u3044\u307e\u3059\u3002\n\n\n```python\n# Set parameters for assets and risk factor\nnum_assets = 4 # set number of assets to 4\nq = 0.5 # set risk factor to 0.5\nbudget = 2 # set budget as defined in the problem\nseed = 132 #set random seed\n\n# Generate time series data\nstocks = [(\"STOCK%s\" % i) for i in range(num_assets)]\ndata = RandomDataProvider(tickers=stocks,\n start=datetime.datetime(1955,11,5), \n end=datetime.datetime(1985,10,26), \n seed=seed)\ndata.run()\n```\n\n\n```python\n# Let's plot our finanical data\nfor (cnt, s) in enumerate(data._tickers):\n plt.plot(data._data[cnt], label=s)\nplt.legend()\nplt.xticks(rotation=90)\nplt.xlabel('days')\nplt.ylabel('stock value')\nplt.show()\n```\n\n
\n
\n \n**Note:** Do not change the start and end dates set for RandomDataProvider in this challenge notebook. If they are changed, your submitted answer will not be graded correctly.\n</div>
\n\n## Step 3. \u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u3068\u5171\u5206\u6563\u884c\u5217\u306e\u53d6\u5f97\n\n\u307e\u305a\u3001\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u3068\u5171\u5206\u6563\u884c\u5217\u3092\u8a08\u7b97\u3057\u307e\u3057\u3087\u3046\u3002\n\n## \u671f\u5f85\u30ea\u30bf\u30fc\u30f3 \u03bc\n\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u3068\u306f\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u304c\u751f\u307f\u51fa\u3059\u53ef\u80fd\u6027\u306e\u3042\u308b\u30ea\u30bf\u30fc\u30f3\u306e\u4e88\u60f3\u5024\u3067\u3042\u308a\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u53ef\u80fd\u306a\u30ea\u30bf\u30fc\u30f3\u5206\u5e03\u306e\u5e73\u5747\uff08\u5e73\u5747\u5024\uff09\u3068\u306a\u308a\u307e\u3059\u3002\n\u4f8b\u3048\u3070\u3001\u9298\u67c4A\u3001B\u3001C\u304c\u305d\u308c\u305e\u308c50\uff05\u300120\uff05\u300130\uff05\u306e\u5272\u5408\u3067\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306b\u7d44\u307f\u8fbc\u307e\u308c\u3066\u3044\u305f\u3068\u3057\u307e\u3059\u3002 \u5404\u9298\u67c4\u306e\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u304c\u305d\u308c\u305e\u308c15\uff05\u30016\uff05\u30019\uff05\u3060\u3063\u305f\u5834\u5408\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u306e\u8a08\u7b97\u5f0f\u306f\u4ee5\u4e0b\u306e\u3088\u3046\u306b\u306a\u308a\u307e\u3059\u3002
\n\n\n
\u03bc = (50% x 15%) + (20% x 6%) + (30% x 9%) = 11.4%
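As a quick numerical check, the weighted sum above can be reproduced with a few lines of NumPy (the weights and returns below are the illustrative numbers from this example, not data generated by this notebook):


```python
import numpy as np

# Illustrative numbers from the example above (not this notebook's data):
weights = np.array([0.50, 0.20, 0.30])           # portfolio weights of stocks A, B, C
expected_returns = np.array([0.15, 0.06, 0.09])  # expected return of each stock

portfolio_return = np.dot(weights, expected_returns)
print(portfolio_return)  # 0.114, i.e. 11.4%
```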
\n\n\nQiskit\u306eRandomDataProvider\u304c\u63d0\u4f9b\u3059\u308b\u4ee5\u4e0b\u306e`get_period_return_mean_vector()`\u30e1\u30bd\u30c3\u30c9\u3092\u4f7f\u7528\u3059\u308b\u3053\u3068\u3067\u3001\u5148\u307b\u3069\u751f\u6210\u3057\u305f\u6642\u7cfb\u5217\u30c7\u30fc\u30bf\u306b\u3064\u3044\u3066\u3001\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u3092\u8a08\u7b97\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u307e\u3059\u3002\n\n\n```python\n#Let's calculate the expected return for our problem data\n\nmu = data.get_period_return_mean_vector() # Returns a vector containing the mean value of each asset's expected return.\n\nprint(mu)\n```\n\n### \u5171\u5206\u6563\u884c\u5217 \u03a3\n\u5171\u5206\u6563\u03a3\u306f\u30012\u3064\u306e\u8cc7\u7523\u306e\u5e73\u5747\u30ea\u30bf\u30fc\u30f3\u304c\u4e92\u3044\u306b\u3069\u306e\u3088\u3046\u306b\u5909\u5316\u3059\u308b\u304b\u3092\u793a\u3059\u7d71\u8a08\u7684\u306a\u5c3a\u5ea6\u3067\u3001\u6295\u8cc7\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u89b3\u70b9\u304b\u3089\u30ea\u30b9\u30af\u306e\u5927\u304d\u3055\u3092\u7406\u89e3\u3057\u3001\u682a\u5f0f\u306e\u58f2\u8cb7\u3092\u6c7a\u5b9a\u3059\u308b\u306e\u306b\u5f79\u7acb\u3061\u307e\u3059\u3002\n\n\u81ea\u5206\u306e\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306bn\u500b\u306e\u9298\u67c4\u304c\u3042\u308b\u5834\u5408\u3001\u5171\u5206\u6563\u884c\u5217\u306e\u5927\u304d\u3055\u306fn\u00d7n\u3068\u306a\u308a\u307e\u3059\u3002\n\u4eca\u56de\u306e4\u9298\u67c4\u306e\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u5171\u5206\u6563\u884c\u5217\u3092\u30d7\u30ed\u30c3\u30c8\u3059\u308b\u3068\u30014\u00d74\u306e\u884c\u5217\u306b\u306a\u308a\u307e\u3059\u3002\n\n\n```python\n# Let's plot our covariance matrix \u03a3\uff08sigma\uff09\nsigma = data.get_period_return_covariance_matrix() #Returns the covariance matrix of the four assets\nprint(sigma)\nfig, ax = plt.subplots(1,1)\nim = plt.imshow(sigma, extent=[-1,1,-1,1])\nx_label_list = ['stock3', 'stock2', 'stock1', 'stock0']\ny_label_list = ['stock3', 'stock2', 'stock1', 'stock0']\nax.set_xticks([-0.75,-0.25,0.25,0.75])\nax.set_yticks([0.75,0.25,-0.25,-0.75])\nax.set_xticklabels(x_label_list)\nax.set_yticklabels(y_label_list)\nplt.colorbar()\nplt.clim(-0.000002, 0.00001)\nplt.show()\n```\n\n\u5de6\u304b\u3089\u53f3\u3078\u306e\u5bfe\u89d2\u6210\u5206\uff08\u4e0b\u56f3\u306e\u9ec4\u8272\u3067\u793a\u3055\u308c\u305f\u5024\uff09\u306f\u3001\u3042\u308b\u9298\u67c4\u306e\u300c\u81ea\u8eab\u300d\u3068\u306e\u95a2\u4fc2\u3092\u793a\u3057\u3066\u3044\u307e\u3059\u3002\u307e\u305f\u3001\u975e\u5bfe\u89d2\u7dda\u4e0a\u306e\u5024\u306f\u3001\u5404\u9298\u67c4\u306e\u5e73\u5747\u671f\u5f85\u53ce\u76ca\u7387\u306e\u4e92\u3044\u306e\u504f\u5dee\u3092\u793a\u3057\u3066\u3044\u307e\u3059\u3002 \u5171\u5206\u6563\u884c\u5217\u306e\u7c21\u5358\u306a\u898b\u65b9\u306f\u6b21\u306e\u901a\u308a\u3067\u3059\u3002\n\n - 2\u3064\u306e\u9298\u67c4\u304c\u540c\u6642\u306b\u5897\u52a0\u30fb\u6e1b\u5c11\u3059\u308c\u3070\u3001\u5171\u5206\u6563\u306e\u5024\u306f\u6b63\u306e\u5024\u306b\u306a\u308a\u307e\u3059\u3002\n - \u4e00\u65b9\u304c\u4e0a\u6607\u3057\u3001\u4ed6\u65b9\u304c\u4e0b\u964d\u3057\u305f\u5834\u5408\u3001\u5171\u5206\u6563\u306e\u5024\u306f\u8ca0\u306b\u306a\u308a\u307e\u3059\u3002\n\n
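To make this sign convention concrete, here is a small self-contained sketch (toy return series, not this notebook's data): two series that move together give a positive covariance, while two series that move in opposite directions give a negative one.


```python
import numpy as np

# Toy daily returns (illustrative only)
a = np.array([ 0.01, -0.02,  0.03, -0.01,  0.02])
b = np.array([ 0.02, -0.01,  0.02, -0.02,  0.01])   # tends to move with a
c = np.array([-0.01,  0.02, -0.03,  0.01, -0.02])   # tends to move against a

print(np.cov(a, b)[0, 1])  # positive covariance: a and b rise and fall together
print(np.cov(a, c)[0, 1])  # negative covariance: c moves opposite to a
```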
\n\n\"Don't Put All Your Eggs in One Basket \uff08\u5375\u306f\u4e00\u3064\u306e\u30ab\u30b4\u306b\u76db\u308b\u306a\uff09\"\u3068\u3044\u3046\u8a00\u8449\u3092\u805e\u3044\u305f\u3053\u3068\u304c\u3042\u308b\u304b\u3082\u3057\u308c\u307e\u305b\u3093\u3002\u5e38\u306b\u540c\u3058\u65b9\u5411\u306b\u52d5\u304f\u3082\u306e\u306b\u6295\u8cc7\u3057\u3066\u3044\u308b\u3068\u3001\u540c\u6642\u306b\u3059\u3079\u3066\u306e\u8cc7\u91d1\u3092\u5931\u3046\u30ea\u30b9\u30af\u304c\u3042\u308a\u307e\u3059\u3002\u5171\u5206\u6563\u884c\u5217\u306f\u3001\u305d\u306e\u3088\u3046\u306a\u30ea\u30b9\u30af\u3092\u6e1b\u3089\u3059\u305f\u3081\u306b\u3001\u6295\u8cc7\u5bb6\u304c\u8cc7\u7523\u3092\u5206\u6563\u3055\u305b\u308b\u306e\u306b\u5f79\u7acb\u3064\u6307\u6a19\u3067\u3059\u3002\n\n\u3053\u308c\u3067\u3001\u6700\u9069\u5316\u306e\u305f\u3081\u306e\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u3092\u69cb\u7bc9\u3059\u308b\u305f\u3081\u306b\u5fc5\u8981\u306a\u3059\u3079\u3066\u306e\u5024\u304c\u63c3\u3063\u305f\u306e\u3067\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u3092\u4e8c\u6b21\u8a08\u753b\u554f\u984c\u306b\u5b9a\u5f0f\u5316\u3057\u3001\u6700\u9069\u5316\u554f\u984c\u306b\u843d\u3068\u3057\u8fbc\u3093\u3067\u304f\u308c\u308b\u306e\u3092\u624b\u52a9\u3051\u3057\u3066\u304f\u308c\u308bQiskit Finance\u306e\u30a2\u30d7\u30ea\u30b1\u30fc\u30b7\u30e7\u30f3\u30af\u30e9\u30b9\u3092\u307f\u3066\u3044\u304d\u307e\u3057\u3087\u3046\u3002\n\n### Qiskit Finance \u306e\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u30af\u30e9\u30b9\n\n\u30b9\u30c6\u30c3\u30d72\u304b\u3089\u30b9\u30c6\u30c3\u30d74\u307e\u3067\u306e\u306a\u304b\u3067\u8a08\u7b97\u3057\u305f\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u3092\u7279\u5fb4\u3065\u3051\u308b\u5024\u3092\u5165\u529b\u3068\u3057\u3048\u3001[`PortfolioOptimization(\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u30af\u30e9\u30b9)`](https://qiskit.org/documentation/finance/stubs/qiskit_finance.applications.PortfolioOptimization.html#qiskit_finance.applications.PortfolioOptimization) \u3092\u3064\u304b\u3063\u3066\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u30fb\u30a4\u30f3\u30b9\u30bf\u30f3\u30b9\u3092\u3064\u304f\u308a\u307e\u3059\u3002\n\nPortfolioOptimization \u30af\u30e9\u30b9\u306f\u4ee5\u4e0b **\uff15\u3064\u306e\u5f15\u6570** \u3092\u3068\u308a\u3001\u305d\u306e\u30a4\u30f3\u30b9\u30bf\u30f3\u30b9\u3092\u4e8c\u6b21\u8a08\u753b\u554f\u984c\u306b\u5909\u63db\u3059\u308b\u6e96\u5099\u3092\u884c\u3044\u307e\u3059\u3002\n- expected_returns\n- covariances\n- risk_factor\n- budget\n- bounds\n\n\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u30a4\u30f3\u30b9\u30bf\u30f3\u30b9\u304c\u4e8c\u6b21\u8a08\u753b\u554f\u984c\u306b\u5909\u63db\u3055\u308c\u308c\u3070\u3001\u5909\u5206\u91cf\u5b50\u56fa\u6709\u5024\u30bd\u30eb\u30d0\u30fc\uff08VQE\uff09\u3084\u91cf\u5b50\u8fd1\u4f3c\u6700\u9069\u5316\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\uff08QAOA\uff09\u306a\u3069\u306e\u91cf\u5b50\u5909\u5206\u6cd5\u3092\u7528\u3044\u3066\u3001\u554f\u984c\u306e\u6700\u9069\u89e3\u3092\u6c42\u3081\u308b\u3053\u3068\u304c\u3067\u304d\u307e\u3059\u3002
\n\n\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u306e\u8cc7\u7523\u9078\u629e\u306e\u5909\u6570\u304c\u30d0\u30a4\u30ca\u30ea\u5909\u6570\u306e\u5834\u5408\u3001\u300cbounds = None\u300d\u3068\u8a2d\u5b9a\u3057\u3066\u304f\u3060\u3055\u3044\u3002\n\n\u3059\u3067\u306b\u30b9\u30c6\u30c3\u30d73\u3067\u671f\u5f85\u30ea\u30bf\u30fc\u30f3\u3068\u5171\u5206\u6563\u3092\u5f97\u3066\u304a\u308a\u3001\u30ea\u30b9\u30af\u30d5\u30a1\u30af\u30bf\u30fc\u3068\u30d0\u30b8\u30a7\u30c3\u30c8\u3082\u4e8b\u524d\u306b\u5b9a\u7fa9\u3055\u308c\u3066\u3044\u307e\u3059\u3002\u305d\u308c\u3067\u306f\u3001 [`PortfolioOptimization`](https://qiskit.org/documentation/finance/stubs/qiskit_finance.applications.PortfolioOptimization.html#qiskit_finance.applications.PortfolioOptimization) \u30af\u30e9\u30b9\u3092\u4f7f\u3063\u3066\u3001\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u3092\u69cb\u7bc9\u3057\u3066\u307f\u307e\u3057\u3087\u3046\u3002\n\n## Challenge 1a: PortfolioOptimization\u30af\u30e9\u30b9\u3092\u4f7f\u7528\u3057\u305f\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u30a4\u30f3\u30b9\u30bf\u30f3\u30b9\u306e\u4f5c\u6210\n
\n
\n\n**Challenge 1a**
\nComplete the code so that it generates a portfolio instance using the [PortfolioOptimization](https://qiskit.org/documentation/finance/stubs/qiskit_finance.applications.PortfolioOptimization.html#qiskit_finance.applications.PortfolioOptimization) class. Substitute the values obtained in the previous steps into the **five arguments**, and convert the instance into the quadratic program **qp**.\n</div>
\n\n\n```python\n##############################\n# \u4ee5\u4e0b\u306b\u30b3\u30fc\u30c9\u3092\u5165\u529b\n\nportfolio =\nqp = \n\n##############################\nprint(qp)\n```\n\nIf you were able to successfully generate the code, you should see a standard representation of the formulation of our qudratic program. \n\n\n```python\n# \u7b54\u3048\u3092\u78ba\u8a8d\u3057\u3066\u4ee5\u4e0b\u306e\u30b3\u30fc\u30c9\u3067\u63d0\u51fa\u3057\u307e\u3059\nfrom qc_grader import grade_ex1a\ngrade_ex1a(qp)\n```\n\n## Minimum Eigen Optimizer\uff08\u6700\u5c0f\u56fa\u6709\u5024\u30aa\u30d7\u30c6\u30a3\u30de\u30a4\u30b6\u30fc\uff09\n\n\u8208\u5473\u6df1\u3044\u3053\u3068\u306b\u3001\u3053\u306e\u30dd\u30fc\u30c8\u30d5\u30a9\u30ea\u30aa\u6700\u9069\u5316\u554f\u984c\u306f\u3001\u30cf\u30df\u30eb\u30c8\u30cb\u30a2\u30f3\u306e\u57fa\u5e95\u72b6\u614b\u3092\u6c42\u3081\u308b\u554f\u984c\u3068\u3057\u3066\u89e3\u304f\u3053\u3068\u304c\u3067\u304d\u307e\u3059\u3002\u30cf\u30df\u30eb\u30c8\u30cb\u30a2\u30f3\u3068\u306f\u3001\u30b7\u30df\u30e5\u30ec\u30fc\u30b7\u30e7\u30f3\u3057\u305f\u3044\u7269\u7406\u7cfb\u306e\u5168\u30a8\u30cd\u30eb\u30ae\u30fc\u3092\u8868\u3059\u30a8\u30cd\u30eb\u30ae\u30fc\u95a2\u6570\u3068\u8003\u3048\u308b\u3053\u3068\u304c\u3067\u304d\u307e\u3059\u3002\u3053\u306e\u7269\u7406\u7cfb\u306f\u3001\u3055\u3089\u306b [**\u30a4\u30b8\u30f3\u30b0\u30e2\u30c7\u30eb**](https://en.wikipedia.org/wiki/Ising_model) \u3068\u547c\u3070\u308c\u308b\u6570\u5b66\u30e2\u30c7\u30eb\u3067\u8868\u73fe\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u307e\u3059\u3002\u3053\u306e\u6570\u5b66\u30e2\u30c7\u30eb\u306f\u3001\u30d0\u30a4\u30ca\u30ea\u5909\u6570\u3092\u30b9\u30d4\u30f3\u30a2\u30c3\u30d7(+1)\u307e\u305f\u306f\u30b9\u30d4\u30f3\u30c0\u30a6\u30f3(-1)\u3068\u547c\u3070\u308c\u308b\u72b6\u614b\u306b\u5909\u63db\u3059\u308b\u305f\u3081\u306e\u30d5\u30ec\u30fc\u30e0\u30ef\u30fc\u30af\u3092\u63d0\u4f9b\u3057\u307e\u3059\u3002\n \n\u6700\u9069\u5316\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u9069\u7528\u3059\u308b\u969b\u306b\u306f\u3001\u901a\u5e38\u3001\u9069\u6027\u306a\u30d5\u30a9\u30fc\u30de\u30c3\u30c8\u306b\u5909\u63db\u3057\u3066\u9069\u7528\u53ef\u80fd\u306a\u72b6\u614b\u306b\u3059\u308b\u5fc5\u8981\u304c\u3042\u308a\u307e\u3059\u3002\u4eca\u56de\u9069\u7528\u3059\u308bVQE\u3084QAOA\u306e\u3088\u3046\u306a\u5909\u5206\u91cf\u5b50\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306f[**\u4e8c\u6b21\u975e\u5236\u7d04\u4e8c\u6b21\u6700\u9069\u5316(QUBO)**](https://en.wikipedia.org/wiki/Quadratic_unconstrained_binary_optimization)\u554f\u984c\u306b\u9069\u7528\u3059\u308b\u305f\u3081, Qiskit\u306f\u81ea\u52d5\u7684\u306b\u9069\u5207\u306a\u30d5\u30a9\u30fc\u30de\u30c3\u30c8\u306b\u30de\u30c3\u30d4\u30f3\u30b0\u3059\u308b\u30b3\u30f3\u30d0\u30fc\u30bf\u3092\u63d0\u4f9b\u3057\u307e\u3059\u3002\n\n
\n\nSolving a QUBO is equivalent to finding the ground state of a Hamiltonian. The Minimum Eigen Optimizer converts the quadratic program into a Hamiltonian, calls a given minimum eigensolver such as VQE or QAOA to compute the ground state, and returns the result of the optimization.\n \nWith this approach we can use ground-state computation to solve the optimization problem. Let us implement this procedure in the tasks of the following steps.\n\n## Step 5. Obtain a reference value with a classical optimizer\n\nAs a reference, we first obtain the exact solution of this optimization problem using NumPyMinimumEigensolver. The problem is handled in its 'ising' form. Since it is computed classically rather than on a quantum computer, no backend is needed. The result is returned as a dictionary.\n\n\n```python\nexact_mes = NumPyMinimumEigensolver()\nexact_eigensolver = MinimumEigenOptimizer(exact_mes)\nresult = exact_eigensolver.solve(qp)\n\nprint(result)\n```\n\n
\n
\n \n**Note:** The Optimal Value displayed here indicates which assets were selected. For example, an Optimal Value of [1. 1. 0. 0.] means a portfolio in which STOCK2 and STOCK3 were selected.\n \n
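\nIf you prefer not to decode the bit vector by hand, a hedged aside: the application class offers an `interpret` method that maps an optimization result back to the indices of the selected assets (here `portfolio` is the instance created in Challenge 1a and `result` the output of the cell above).\n\n```python\n# Hedged illustration: recover the selected asset indices from the result object\nselected_assets = portfolio.interpret(result)\nprint(selected_assets)  # list of indices of the chosen stocks\n```\n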
\n\n## Challenge 1b: Solution using VQE\n\nThe **Variational Quantum Eigensolver (VQE)** is a classical-quantum hybrid algorithm that delegates part of the computational load to a classical computer in order to efficiently compute the ground-state energy (the lowest energy state) of a [**Hamiltonian**](https://en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)). As explained above, the quadratic program can be recast as a ground-state energy search problem solved with [**VQE**](https://qiskit.org/documentation/stubs/qiskit.algorithms.VQE.html). In this challenge, we use VQE to find the optimal solution.

\n\n\n
\n
\n\n**Challenge 1b**
\nUse the Variational Quantum Eigensolver (VQE) to find the same optimal solution. The optimizer and the variational form to use are specified below.\n
\n\n
\n
\n\n**Hint**: If you get stuck, refer to this Qiskit tutorial and adapt it as needed to the problem setting: https://qiskit.org/documentation/finance/tutorials/01_portfolio_optimization.html\n\n
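\nAs a hedged sketch of what such a construction typically looks like with this version of Qiskit (the `TwoLocal` ansatz choice and the `num_assets` variable below are assumptions taken from the linked tutorial and the data-generation step, not a prescribed answer):\n\n```python\n# Hedged sketch: a VQE instance assembled from an assumed TwoLocal ansatz,\n# the SLSQP optimizer, and a statevector simulator backend.\nfrom qiskit import Aer\nfrom qiskit.algorithms import VQE\nfrom qiskit.algorithms.optimizers import SLSQP\nfrom qiskit.circuit.library import TwoLocal\nfrom qiskit.utils import QuantumInstance\n\nexample_ansatz = TwoLocal(num_assets, 'ry', 'cz', reps=3, entanglement='full')\nexample_qi = QuantumInstance(backend=Aer.get_backend('statevector_simulator'),\n                             seed_simulator=1234, seed_transpiler=1234)\nexample_vqe = VQE(ansatz=example_ansatz, optimizer=SLSQP(maxiter=1000),\n                  quantum_instance=example_qi)\n```\n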
\n\nStarter code is provided below. Enter your own code in the section delimited by ######.\n\n\n```python\noptimizer = SLSQP(maxiter=1000) \nalgorithm_globals.random_seed = 1234\nbackend = Aer.get_backend('statevector_simulator')\n\n\n##############################\n# Provide your code here\n\nvqe = \n\n\n##############################\n\nvqe_meo = MinimumEigenOptimizer(vqe) #please do not change this code\n\nresult = vqe_meo.solve(qp) #please do not change this code\n\nprint(result) #please do not change this code\n```\n\n\n```python\n# Check your answer and submit it with the following code\nfrom qc_grader import grade_ex1b\ngrade_ex1b(vqe, qp)\n```\n\nVQE should find the same optimal solution as the reference solution obtained from the classical optimizer.\n\n## Challenge 1c: Portfolio optimization for B=3 with n=4 assets\n\nIn this challenge exercise we consider the same problem, but this time with B=3 and with the option of holding two units of a single asset. (For example, a portfolio holding two units of STOCK3 and one unit of STOCK2 is written as [2, 1, 0, 0]. Likewise, holding one share each of STOCK0, STOCK1, and STOCK2 gives the portfolio [0, 1, 1, 1].)
\nUsing this new constraint, find the optimal portfolio that minimizes the trade-off between risk and return.\n\n
\n
\n\n**Challenge 1c:**
\nComplete the code that generates a portfolio instance using the PortfolioOptimization class.
\nFor a budget of 3, find the optimal portfolio in which twice the weight can be assigned to one asset.
\nFinally, use QAOA to find the optimal solution and submit your answer.\n \n
\n\n
\n
\n \n**Hint:** This is the case in which any one of STOCK0, STOCK1, STOCK2, STOCK3 can carry twice the weight in the portfolio. How should the code be changed to handle integer variables?
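\nA hedged pointer in that direction: the `bounds` argument mentioned at the top of this exercise controls the variable type, and passing per-asset integer bounds instead of `None` is one way to allow holdings larger than one (the exact values below are an illustrative assumption, not the graded answer).\n\n```python\n# Hedged illustration: integer decision variables with holdings between 0 and 2 per asset\nexample_bounds = [(0, 2)] * 4   # one (lower, upper) pair per asset\n# e.g. PortfolioOptimization(..., budget=3, bounds=example_bounds)\n```\n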
\n
\n\n## Step 1: Import the required libraries\n\n\n```python\n#Step 1: Let us begin by importing necessary libraries\nimport qiskit\nfrom qiskit import Aer\nfrom qiskit.algorithms import VQE, QAOA, NumPyMinimumEigensolver\nfrom qiskit.algorithms.optimizers import *\nfrom qiskit.circuit.library import TwoLocal\nfrom qiskit.utils import QuantumInstance\nfrom qiskit.utils import algorithm_globals\nfrom qiskit_finance import QiskitFinanceError\nfrom qiskit_finance.applications.optimization import *\nfrom qiskit_finance.data_providers import *\nfrom qiskit_optimization.algorithms import MinimumEigenOptimizer\nfrom qiskit_optimization.applications import OptimizationApplication\nfrom qiskit_optimization.converters import QuadraticProgramToQubo\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport datetime\nimport warnings\nfrom sympy.utilities.exceptions import SymPyDeprecationWarning\nwarnings.simplefilter(\"ignore\",SymPyDeprecationWarning)\n```\n\nStarter code is provided below. Enter your own code in the sections delimited by ######.\n\n## Step 2: Generate time-series (financial) data\n\n\n```python\n# Step 2. Generate time series data for four assets. \n# Do not change start/end dates specified to generate problem data.\nseed = 132 \nnum_assets = 4\nstocks = [(\"STOCK%s\" % i) for i in range(num_assets)]\ndata = RandomDataProvider(tickers=stocks,\n start=datetime.datetime(1955,11,5), \n end=datetime.datetime(1985,10,26), \n seed=seed)\ndata.run()\n```\n\n\n```python\n# Let's plot our financial data (We are generating the same time series data as in the previous example.)\nfor (cnt, s) in enumerate(data._tickers):\n plt.plot(data._data[cnt], label=s)\nplt.legend()\nplt.xticks(rotation=90)\nplt.xlabel('days')\nplt.ylabel('stock value')\nplt.show()\n```\n\n## Step 3: Obtain the expected returns and the covariance matrix\n\n\n```python\n# Step 3. Calculate mu and sigma for this problem\n\nmu2 = data.get_period_return_mean_vector() #Returns a vector containing the mean value of each asset.\nsigma2 = data.get_period_return_covariance_matrix() #Returns the covariance matrix associated with the assets.\nprint(mu2, sigma2)\n```\n\n## Step 4: Set the variables according to the Challenge 1c problem setting\n\n\n```python\n# Step 4. Set parameters and constraints based on this challenge 1c\n\n##############################\n# Provide your code here\n\nq2 = #Set risk factor to 0.5\nbudget2 = #Set budget to 3\n\n##############################\n```\n\n## Step 5: Complete the code that generates the portfolio instance\n\n\n```python\n# Step 5. 
Complete code to generate the portfolio instance\n\n\n##############################\n# Provide your code here\n\nportfolio2 =\nqp2 = \n\n\n##############################\n```\n\n## Step 6: Solution using QAOA\n\nThe **Quantum Approximate Optimization Algorithm (QAOA)** is another variational quantum algorithm, applied to solving combinatorial optimization problems on small and medium-sized noisy quantum devices. The algorithm is also used to find the ground state of a Hamiltonian and can be implemented easily with Qiskit's [**QAOA**](https://qiskit.org/documentation/stubs/qiskit.algorithms.QAOA.html) application. (A more detailed explanation of QAOA is planned for the fourth exercise of this challenge; here we first focus on a basic implementation of QAOA with Qiskit.)\n\n\n```python\n# Step 6. Now let's use QAOA to solve this problem. \n\noptimizer = SLSQP(maxiter=1000) \nalgorithm_globals.random_seed = 1234\nbackend = Aer.get_backend('statevector_simulator')\n\n##############################\n# Provide your code here \n\nqaoa = \n\n\n##############################\n\nqaoa_meo = MinimumEigenOptimizer(qaoa) #please do not change this code\n\nresult2 = qaoa_meo.solve(qp2) #please do not change this code\n\nprint(result2) #please do not change this code\n```\n\nNote: Running QAOA can take up to a few minutes.\n\n# Submit your answer\n\n\n```python\n# Check your answer and submit it with the following code\nfrom qc_grader import grade_ex1c\ngrade_ex1c(qaoa, qp2)\n```\n\n### For those who want to learn more:\n**Congratulations** to everyone who has successfully solved this first introductory-level challenge!
\nWe hope you have learned about portfolio optimization and about how to solve these exercises using Qiskit's Finance module.
If you would like to learn more about optimization with the variational quantum algorithms introduced here, please have a look at the following references.\n
\n1. [**Quantum optimization using variational algorithms on near-term quantum devices. Moll et al. 2017**](https://arxiv.org/abs/1710.01022)
\n2. [**Improving Variational Quantum Optimization using CVaR. Barkoutsos et al. 2019.**](https://arxiv.org/abs/1907.04769)
\n\n### Good luck and have fun with the challenge!\n\n## Additional information\n\n**Created by:** Yuri Kobayashi\n\n**Version:** 1.0.0\n", "meta": {"hexsha": "8c5a678c99b297802d26c480f2a948c0537fb724", "size": 29402, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "content/challenge-1/challenge-1-ja.ipynb", "max_stars_repo_name": "HuangJunye/ibm-quantum-challenge-fall-2021", "max_stars_repo_head_hexsha": "ab084ba386b6d6b2638eeff1b580ca0c6a65fe92", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-03T16:08:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-03T16:08:11.000Z", "max_issues_repo_path": "content/challenge-1/challenge-1-ja.ipynb", "max_issues_repo_name": "HuangJunye/ibm-quantum-challenge-fall-2021", "max_issues_repo_head_hexsha": "ab084ba386b6d6b2638eeff1b580ca0c6a65fe92", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/challenge-1/challenge-1-ja.ipynb", "max_forks_repo_name": "HuangJunye/ibm-quantum-challenge-fall-2021", "max_forks_repo_head_hexsha": "ab084ba386b6d6b2638eeff1b580ca0c6a65fe92", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-02T12:45:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-02T12:45:21.000Z", "avg_line_length": 30.819706499, "max_line_length": 368, "alphanum_fraction": 0.6147540984, "converted": true, "num_tokens": 10096, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5698526514141572, "lm_q2_score": 0.4571367168274948, "lm_q1q2_score": 0.26050057014291067}} {"text": "# Load a model and conduct a simulation\n\n# Purpose\nIt is now possible to load simulation model from disk. 
This notebook explores this functionality.\n\n# Methodology\n* The simulation model has been stored to disk.\n* Load one model and simulate one model test.\n\n# Setup\n\n\n```python\n# %load imports.py\n## Local packages:\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n%config Completer.use_jedi = False ## (To fix autocomplete)\n\n## External packages:\nimport pandas as pd\npd.options.display.max_rows = 999\npd.options.display.max_columns = 999\npd.set_option(\"display.max_columns\", None)\nimport numpy as np\nnp.set_printoptions(linewidth=150)\n\nimport numpy as np\nimport os\nimport matplotlib.pyplot as plt\n#if os.name == 'nt':\n# plt.style.use('presentation.mplstyle') # Windows\n\nimport plotly.express as px \nimport plotly.graph_objects as go\n\nimport seaborn as sns\nimport sympy as sp\nfrom sympy.physics.mechanics import (dynamicsymbols, ReferenceFrame,\n Particle, Point)\nfrom sympy.physics.vector.printing import vpprint, vlatex\nfrom IPython.display import display, Math, Latex\nfrom src.substitute_dynamic_symbols import run, lambdify\n\nimport pyro\n\nimport sklearn\nimport pykalman\nfrom statsmodels.sandbox.regression.predstd import wls_prediction_std\nimport statsmodels.api as sm\n\nfrom scipy.integrate import solve_ivp\n\n## Local packages:\nfrom src.data import mdl\n\nfrom src.symbols import *\nfrom src.parameters import *\nimport src.symbols as symbols\nfrom src import prime_system\nfrom src.models import regression\nfrom src.visualization.regression import show_pred\nfrom src.visualization.plot import track_plot\n\n## Load models:\n# (Uncomment these for faster loading):\nimport src.models.vmm_abkowitz as vmm \nfrom src.models.vmm import ModelSimulator\n\n```\n\n\n```python\nfrom src.to_mlflow import mlflow\nimport multiprocessing\n```\n\n\n```python\nfrom src.data.mdl import load_test\n```\n\n## Parameters\n\n\n```python\nrun_params = {\n 'experiment' : 'VCT_linear',\n 'model_test_id' : 22770,\n 'model_test_dir_path' : os.path.abspath('../../../data/processed/kalman'),\n 'model' : os.path.abspath('../../../models/model_VCT_linear.pkl'),\n 'run_name' : '22770',\n}\n```\n\n## Run\n\n\n```python\nartifact_dir = 'artifacts'\nif not os.path.exists(artifact_dir):\n os.mkdir(artifact_dir)\n\nmlflow.set_experiment(run_params['experiment'])\n\nwith mlflow.start_run(run_name=run_params['run_name']) as run:\n \n ## Meta data (log everything except the experiment and run name)\n log_params = run_params.copy()\n log_params.pop('experiment')\n log_params.pop('run_name')\n mlflow.log_params(log_params)\n \n ## Load and resimulate\n df, meta_data = load_test(**run_params)\n mlflow.log_params(meta_data.dropna())\n \n model = ModelSimulator.load(run_params['model'])\n result = model.simulate(df)\n \n ## Result to MLFlow\n result.to_mlflow()\n \n```\n", "meta": {"hexsha": "294062d09fee8683f029d2ac913b54bac859d365", "size": 5213, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/model_simulate/templates/16.03_model_simulate.ipynb", "max_stars_repo_name": "martinlarsalbert/wPCC", "max_stars_repo_head_hexsha": "16e0d4cc850d503247916c9f5bd9f0ddb07f8930", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/model_simulate/templates/16.03_model_simulate.ipynb", "max_issues_repo_name": "martinlarsalbert/wPCC", "max_issues_repo_head_hexsha": "16e0d4cc850d503247916c9f5bd9f0ddb07f8930", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/model_simulate/templates/16.03_model_simulate.ipynb", "max_forks_repo_name": "martinlarsalbert/wPCC", "max_forks_repo_head_hexsha": "16e0d4cc850d503247916c9f5bd9f0ddb07f8930", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.9425837321, "max_line_length": 102, "alphanum_fraction": 0.551889507, "converted": true, "num_tokens": 688, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5078118642792044, "lm_q2_score": 0.5117166047041654, "lm_q1q2_score": 0.25985576301744695}} {"text": "```python\n%matplotlib inline\n```\n\n\nSequence Models and Long-Short Term Memory Networks\n===================================================\n\nAt this point, we have seen various feed-forward networks. That is,\nthere is no state maintained by the network at all. This might not be\nthe behavior we want. Sequence models are central to NLP: they are\nmodels where there is some sort of dependence through time between your\ninputs. The classical example of a sequence model is the Hidden Markov\nModel for part-of-speech tagging. Another example is the conditional\nrandom field.\n\nA recurrent neural network is a network that maintains some kind of\nstate. For example, its output could be used as part of the next input,\nso that information can propogate along as the network passes over the\nsequence. In the case of an LSTM, for each element in the sequence,\nthere is a corresponding *hidden state* $h_t$, which in principle\ncan contain information from arbitrary points earlier in the sequence.\nWe can use the hidden state to predict words in a language model,\npart-of-speech tags, and a myriad of other things.\n\n\nLSTM's in Pytorch\n~~~~~~~~~~~~~~~~~\n\nBefore getting to the example, note a few things. Pytorch's LSTM expects\nall of its inputs to be 3D tensors. The semantics of the axes of these\ntensors is important. The first axis is the sequence itself, the second\nindexes instances in the mini-batch, and the third indexes elements of\nthe input. We haven't discussed mini-batching, so lets just ignore that\nand assume we will always have just 1 dimension on the second axis. 
If\nwe want to run the sequence model over the sentence \"The cow jumped\",\nour input should look like\n\n\\begin{align}\\begin{bmatrix}\n \\overbrace{q_\\text{The}}^\\text{row vector} \\\\\n q_\\text{cow} \\\\\n q_\\text{jumped}\n \\end{bmatrix}\\end{align}\n\nExcept remember there is an additional 2nd dimension with size 1.\n\nIn addition, you could go through the sequence one at a time, in which\ncase the 1st axis will have size 1 also.\n\nLet's see a quick example.\n\n\n\n```python\n# Author: Robert Guthrie\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\ntorch.manual_seed(1)\n```\n\n\n```python\nlstm = nn.LSTM(3, 3) # Input dim is 3, output dim is 3\ninputs = [torch.randn(1, 3) for _ in range(5)] # make a sequence of length 5\n\n# initialize the hidden state.\nhidden = (torch.randn(1, 1, 3),\n torch.randn(1, 1, 3))\nfor i in inputs:\n # Step through the sequence one element at a time.\n # after each step, hidden contains the hidden state.\n out, hidden = lstm(i.view(1, 1, -1), hidden)\n\n# alternatively, we can do the entire sequence all at once.\n# the first value returned by LSTM is all of the hidden states throughout\n# the sequence. the second is just the most recent hidden state\n# (compare the last slice of \"out\" with \"hidden\" below, they are the same)\n# The reason for this is that:\n# \"out\" will give you access to all hidden states in the sequence\n# \"hidden\" will allow you to continue the sequence and backpropagate,\n# by passing it as an argument to the lstm at a later time\n# Add the extra 2nd dimension\ninputs = torch.cat(inputs).view(len(inputs), 1, -1)\nhidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3)) # clean out hidden state\nout, hidden = lstm(inputs, hidden)\nprint(out)\nprint(hidden)\n```\n\nExample: An LSTM for Part-of-Speech Tagging\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIn this section, we will use an LSTM to get part of speech tags. We will\nnot use Viterbi or Forward-Backward or anything like that, but as a\n(challenging) exercise to the reader, think about how Viterbi could be\nused after you have seen what is going on.\n\nThe model is as follows: let our input sentence be\n$w_1, \\dots, w_M$, where $w_i \\in V$, our vocab. Also, let\n$T$ be our tag set, and $y_i$ the tag of word $w_i$.\nDenote our prediction of the tag of word $w_i$ by\n$\\hat{y}_i$.\n\nThis is a structure prediction, model, where our output is a sequence\n$\\hat{y}_1, \\dots, \\hat{y}_M$, where $\\hat{y}_i \\in T$.\n\nTo do the prediction, pass an LSTM over the sentence. Denote the hidden\nstate at timestep $i$ as $h_i$. Also, assign each tag a\nunique index (like how we had word\\_to\\_ix in the word embeddings\nsection). Then our prediction rule for $\\hat{y}_i$ is\n\n\\begin{align}\\hat{y}_i = \\text{argmax}_j \\ (\\log \\text{Softmax}(Ah_i + b))_j\\end{align}\n\nThat is, take the log softmax of the affine map of the hidden state,\nand the predicted tag is the tag that has the maximum value in this\nvector. 
Note this implies immediately that the dimensionality of the\ntarget space of $A$ is $|T|$.\n\n\nPrepare data:\n\n\n\n\n```python\ndef prepare_sequence(seq, to_ix):\n idxs = [to_ix[w] for w in seq]\n return torch.tensor(idxs, dtype=torch.long)\n\n\ntraining_data = [\n (\"The dog ate the apple\".split(), [\"DET\", \"NN\", \"V\", \"DET\", \"NN\"]),\n (\"Everybody read that book\".split(), [\"NN\", \"V\", \"DET\", \"NN\"])\n]\nword_to_ix = {}\nfor sent, tags in training_data:\n for word in sent:\n if word not in word_to_ix:\n word_to_ix[word] = len(word_to_ix)\nprint(word_to_ix)\ntag_to_ix = {\"DET\": 0, \"NN\": 1, \"V\": 2}\n\n# These will usually be more like 32 or 64 dimensional.\n# We will keep them small, so we can see how the weights change as we train.\nEMBEDDING_DIM = 6\nHIDDEN_DIM = 6\n```\n\nCreate the model:\n\n\n\n\n```python\nclass LSTMTagger(nn.Module):\n\n def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):\n super(LSTMTagger, self).__init__()\n self.hidden_dim = hidden_dim\n\n self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)\n\n # The LSTM takes word embeddings as inputs, and outputs hidden states\n # with dimensionality hidden_dim.\n self.lstm = nn.LSTM(embedding_dim, hidden_dim)\n\n # The linear layer that maps from hidden state space to tag space\n self.hidden2tag = nn.Linear(hidden_dim, tagset_size)\n\n def forward(self, sentence):\n embeds = self.word_embeddings(sentence)\n lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))\n tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))\n tag_scores = F.log_softmax(tag_space, dim=1)\n return tag_scores\n```\n\nTrain the model:\n\n\n\n\n```python\nmodel = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), len(tag_to_ix))\nloss_function = nn.NLLLoss()\noptimizer = optim.SGD(model.parameters(), lr=0.1)\n\n# See what the scores are before training\n# Note that element i,j of the output is the score for tag j for word i.\n# Here we don't need to train, so the code is wrapped in torch.no_grad()\nwith torch.no_grad():\n inputs = prepare_sequence(training_data[0][0], word_to_ix)\n tag_scores = model(inputs)\n print(tag_scores)\n\nfor epoch in range(300): # again, normally you would NOT do 300 epochs, it is toy data\n for sentence, tags in training_data:\n # Step 1. Remember that Pytorch accumulates gradients.\n # We need to clear them out before each instance\n model.zero_grad()\n\n # Step 2. Get our inputs ready for the network, that is, turn them into\n # Tensors of word indices.\n sentence_in = prepare_sequence(sentence, word_to_ix)\n targets = prepare_sequence(tags, tag_to_ix)\n\n # Step 3. Run our forward pass.\n tag_scores = model(sentence_in)\n\n # Step 4. Compute the loss, gradients, and update the parameters by\n # calling optimizer.step()\n loss = loss_function(tag_scores, targets)\n loss.backward()\n optimizer.step()\n\n# See what the scores are after training\nwith torch.no_grad():\n inputs = prepare_sequence(training_data[0][0], word_to_ix)\n tag_scores = model(inputs)\n\n # The sentence is \"the dog ate the apple\". i,j corresponds to score for tag j\n # for word i. 
The predicted tag is the maximum scoring tag.\n # Here, we can see the predicted sequence below is 0 1 2 0 1\n # since 0 is index of the maximum value of row 1,\n # 1 is the index of maximum value of row 2, etc.\n # Which is DET NOUN VERB DET NOUN, the correct sequence!\n print(tag_scores)\n```\n\nExercise: Augmenting the LSTM part-of-speech tagger with character-level features\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIn the example above, each word had an embedding, which served as the\ninputs to our sequence model. Let's augment the word embeddings with a\nrepresentation derived from the characters of the word. We expect that\nthis should help significantly, since character-level information like\naffixes have a large bearing on part-of-speech. For example, words with\nthe affix *-ly* are almost always tagged as adverbs in English.\n\nTo do this, let $c_w$ be the character-level representation of\nword $w$. Let $x_w$ be the word embedding as before. Then\nthe input to our sequence model is the concatenation of $x_w$ and\n$c_w$. So if $x_w$ has dimension 5, and $c_w$\ndimension 3, then our LSTM should accept an input of dimension 8.\n\nTo get the character level representation, do an LSTM over the\ncharacters of a word, and let $c_w$ be the final hidden state of\nthis LSTM. Hints:\n\n* There are going to be two LSTM's in your new model.\n The original one that outputs POS tag scores, and the new one that\n outputs a character-level representation of each word.\n* To do a sequence model over characters, you will have to embed characters.\n The character embeddings will be the input to the character LSTM.\n\n\n\n", "meta": {"hexsha": "0a190b2532db81b2668a0794223787a4841c8e22", "size": 11746, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/_downloads/5edaebfc06ec3968b8c1da100da2253d/sequence_models_tutorial.ipynb", "max_stars_repo_name": "leejh1230/PyTorch-tutorials-kr", "max_stars_repo_head_hexsha": "ebbf44b863ff96c597631e28fc194eafa590c9eb", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-05T05:16:44.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-05T05:16:44.000Z", "max_issues_repo_path": "docs/_downloads/5edaebfc06ec3968b8c1da100da2253d/sequence_models_tutorial.ipynb", "max_issues_repo_name": "leejh1230/PyTorch-tutorials-kr", "max_issues_repo_head_hexsha": "ebbf44b863ff96c597631e28fc194eafa590c9eb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/_downloads/5edaebfc06ec3968b8c1da100da2253d/sequence_models_tutorial.ipynb", "max_forks_repo_name": "leejh1230/PyTorch-tutorials-kr", "max_forks_repo_head_hexsha": "ebbf44b863ff96c597631e28fc194eafa590c9eb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 93.2222222222, "max_line_length": 2061, "alphanum_fraction": 0.6517963562, "converted": true, "num_tokens": 2406, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.5078118642792044, "lm_q2_score": 0.5117166047041652, "lm_q1q2_score": 0.2598557630174469}} {"text": "# Calculation of the Feasible Joint Reaction Loads\n\nThe importance of evaluating the feasible muscle forces is demonstrated in the\ncontext of joint reaction analysis. 
An accurate estimation of the muscle forces\nis essential for the assessment of joint reaction loads. Consequently, the null\nspace muscle forces can significantly alter the reaction forces without\naffecting the movement. The process is separated into four steps:\n\n1. Perform an inverse kinematics (IK), static optimization (SO) and joint\nreaction analysis (JRA) using OpenSim to generate the required data for the next\nsteps.\n\n2. Calculate the feasible muscle forces that satisfy both the action and the\nphysiological constraints of the muscles.\n\n3. For each distinct muscle force realization perform a joint reaction analysis.\n\n4. Post-process the reaction loads.\n\n\n```python\n# notebook general configuration\n%load_ext autoreload\n%autoreload 2\n\nimport os\nimport pickle\nimport numpy as np\nimport sympy as sp\nfrom IPython.display import display\nsp.interactive.printing.init_printing()\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (10.0, 6.0)\n%matplotlib inline\n```\n\n The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n\n\n## Subject Data\n\nPlease provide the following information for the subject under study:\n\n- **.osim** the scaled OpenSim model (after scaling the model using a static\n pose)\n\n- **.trc** the marker trajectories of the movement (based on OpenSim file\n requirements)\n\n- **.mot** the ground reaction forces (based on the OpenSim file format)\n\n- **.xml** describing how to apply the ground reaction forces (based on OpenSim\n force requirements)\n\n- **.xml** file with the reserve actuator for the SO (based on OpenSim requirements)\n\n- **subject mass** (used for scaling the reaction loads, and is not a\n requirement)\n\n- **heel strike** and **toe off** events (used for annotation of the results,\n and are not so important)\n\n\nThe following calculations are performed for the *Gait10dof18musc* dataset. \n\n\n```python\n# subject data\nmass = 72.6 # kg\ng = 9.8 # m/s^2\nbody_weight = mass * g\n\n# time events of toe off and heel strike\nheel_strike_right = [0.65, 1.85]\ntoe_off_right = [0.15, 1.4]\nheel_strike_left = [0.0, 1.25]\ntoe_off_left = [0.8, 2]\n\n# required files and directories\nsubject_dir = os.getcwd() + '/../dataset/Gait10dof18musc/'\nmodel_file = subject_dir + 'subject01.osim'\ntrc_file = subject_dir + 'subject01_walk.trc'\ngrf_file = subject_dir + 'subject01_walk_grf.mot'\ngrf_xml_file = subject_dir + 'subject01_walk_grf.xml'\nreserve_actuators_file = subject_dir + 'reserve_actuators.xml'\nresults_dir = subject_dir + 'notebook_results/'\nfeasible_set_dir = results_dir + 'feasible_force_set/'\njra_results_dir = results_dir + 'joint_reaction_analyses/'\nfigures_dir = results_dir + 'fig/'\n\nif not (os.path.isfile(model_file) and\n os.path.isfile(trc_file) and\n os.path.isfile(grf_file) and\n os.path.isfile(grf_xml_file) and\n os.path.isfile(reserve_actuators_file)):\n raise RuntimeError('required files do not exist')\n \nif not (os.path.isdir(results_dir) and \n os.path.isdir(feasible_set_dir) and\n os.path.isdir(jra_results_dir) and\n os.path.isdir(figures_dir)):\n raise RuntimeError('required folders do not exist')\n```\n\n## Step 1: Perform Required OpenSim Analyses\n\nIn order to perform the feasible muscle force calculations, the kinematics and\nkinetics that satisfy the experimental measured motion and ground reaction\nforces must be calculated. Furthermore, OpenSim JRA is performed so as to\ncompare the feasible reaction loads in step 4. 
The following functions make use\nof the OpenSim IK, SO and JRA analyses.\n\n\n```python\nfrom opensim_utils import perform_ik, perform_so, perform_jra, plot_sto\n\n# perform OpenSim inverse kinematics\nik_file = perform_ik(model_file, trc_file, results_dir)\nplot_sto(ik_file, 4, save=True)\n\n# perform OpenSim static optimization\n(so_force_file, so_activation_file) = perform_so(model_file, ik_file, grf_file, \n grf_xml_file, reserve_actuators_file, \n results_dir)\nplot_sto(so_force_file, 4, save=True)\n\n# perform OpenSim joint reaction analysis\njra_file = perform_jra(model_file, ik_file, grf_file, grf_xml_file,\n reserve_actuators_file, so_force_file, results_dir)\n\n# store file names so that they can be loaded without running this section\npickle.dump((ik_file, so_force_file, jra_file), file(results_dir + 'opensim_files.dat', 'w'))\n```\n\n\n```python\n# get file names from step 1\n(ik_file, so_force_file, jra_file) = pickle.load(file(results_dir + 'opensim_files.dat', 'r'))\n```\n\n## Step 2: Calculate the Feasible Muscle Forces\n\nThe feasible muscle forces are calculated below. Initially, the moment arm and\nmaximum muscle force quantities are computed for each instance of the\nmovement. Then the following inequality is formed assuming a linear muscle model\n\n\\begin{equation}\\label{equ:linear-muscle-null-space-inequality}\n \\begin{gathered}\n f_m = f_{max} \\circ a_m = f_m^{\\parallel} +\n N_{R} f_{m0},\\; 0 \\preceq a_m \\preceq 1\n \\rightarrow \\\\\n \\begin{bmatrix}\n - N_{R} \\\\\n \\hdashline\n N_{R}\n \\end{bmatrix}\n f_{m0} \\preceq\n \\begin{bmatrix}\n f_m^{\\parallel} \\\\\n \\hdashline\n f_{max} - f_m^{\\parallel}\n \\end{bmatrix} \\\\\n Z f_{m0} \\preceq \\beta\n \\end{gathered}\n\\end{equation}\n\nwhere $a_m \\in \\Re^{m}$ represents a vector of muscle activations, $f_{max} \\in\n\\Re^{m}$ a vector specifying the maximum muscle forces, $\\circ$ the Hadamard\n(elementwise) product, $f_m^{\\parallel}$ the particular muscle force solution\nthat satisfies the action, $N_{R}$ the moment arm null space and $f_{m0}$ the\nnull space forces.\n\n\n```python\n# since this section of code may be called independently after step 1\n# ensure that cells 2, 5 and 9 are evaluated\nfrom IPython.display import Javascript\nJavascript(\"Jupyter.notebook.execute_cells([2, 5, 9])\")\n```\n\n\n```python\nfrom opensim_utils import calculate_muscle_data\nfrom util import null_space, construct_muscle_space_inequality, \\\n convex_bounded_vertex_enumeration, readMotionFile, \\\n index_containing_substring, write_as_sto_file\n \n# calculate the moment arm and maximum muscle forces assuming a linear muscle\n# model (in a future version this can calculate the properties for a nonlinear\n# muscle model)\nmoment_arm, max_force = calculate_muscle_data(model_file, ik_file)\n\n# read SO results\nso_header, so_labels, so_data = readMotionFile(so_force_file)\nso_data = np.array(so_data)\ncoordinates = moment_arm[0].shape[0]\nmuscles = moment_arm[0].shape[1]\ntime = so_data[:, 0]\nentries = time.shape[0]\n\n# collect quantities for computing the feasible muscle forces\nNR = []\nZ = []\nb = []\nfm_par = []\nfor t in tqdm(range(0, entries)):\n # get tau, R, Fmax\n fm = so_data[t, 1:(muscles + 1)] # skip the time column (first column)\n RT_temp = moment_arm[t, :, :]\n fmax_temp = max_force[t, :]\n\n # calculate the reduced rank (independent columns) null space to avoid\n # singularities\n NR_temp = null_space(RT_temp)\n \n # fm_par = fm is used instead of fm_par = -RBarT * tau because the\n # muscle may not be able
to satisfy the action. In OpenSim reserve\n # actuators are used to ensure that Static Optimization can satisfy the\n # action. In this case, we ignore the residual forces and assume that fm\n # is the minimum effort solution. If the model is able to satisfy the\n # action without needing reserve forces then we can use fm_par = -RBarT\n # * tau as obtained from Inverse Dynamics.\n fm_par_temp = fm\n\n Z_temp, b_temp = construct_muscle_space_inequality(NR_temp,\n fm_par_temp,\n fmax_temp)\n\n # append results\n NR.append(NR_temp)\n Z.append(Z_temp)\n b.append(b_temp)\n fm_par.append(fm_par_temp)\n```\n\nThe next step is to sample the inequality $Z f_{m0} \\leq \\beta$. This is the\nbottleneck of the analysis. The *convex_bounded_vertex_enumeration* uses the\nlrs method, which is a vertex enumeration algorithm for finding the vertices\nof a polytope in $O(v m^3)$.\n\n\n```python\n# calculate the feasible muscle force set by sampling the inequality\nf_set = []\nfor t in tqdm(range(0, entries)):\n try:\n fs = convex_bounded_vertex_enumeration(Z[t], b[t][:, 0], 0, method='lrs')\n except:\n print('inequality is infeasible thus append previous iteration')\n f_set.append(f_set[-1])\n continue\n \n temp = []\n for i in range(0, fs.shape[0]):\n temp.append(fm_par[t] + NR[t].dot(fs[i, :]))\n\n f_set.append(temp)\n\n# serialization f_set -> [time x feasible force set x muscles] so as to\n# avoid recomputing it\npickle.dump(f_set, file(feasible_set_dir + 'f_set.dat', 'w'))\n```\n\nFinally, store the feasible muscle force set into multiple .sto files that can\nbe used by the JRA.\n\n\n```python\n# keep only muscle forces\nidx = index_containing_substring(so_labels, 'FX')[0]\nlabels = so_labels[:idx]\n\n# find largest feasible set\nS = len(f_set[0])\nfor fs in f_set:\n if len(fs) > S:\n S = len(fs)\n\n# export muscle force realizations\nfor j in tqdm(range(0, S)):\n data_temp = []\n for i in range(0, so_data.shape[0]):\n data_temp.append(f_set[i][j % len(f_set[i])])\n\n write_as_sto_file(feasible_set_dir + str(j) + '.sto', labels,\n time, np.array(data_temp))\n```\n\n## Step 3: Perform Joint Reaction Analyses on Feasible Muscle Forces\n\nFor each distinct muscle force realization computed in the previous step,\ninitiate a JRA so as to calculate the influence of the muscle forces on the\njoint reaction loads. Due to memory leaks in the OpenSim JRA implementation this\nprocess must be restarted since it can cause problems with the RAM usage. 
The if\nstatement is used so as to continue from the last performed analysis.\n\n\n```python\n# since this section of code may be called independently after step 2\n# ensure that cells 2, 5 and 9 are evaluated\nfrom IPython.display import Javascript\nJavascript(\"Jupyter.notebook.execute_cells([2, 5, 9])\")\n```\n\n\n```python\nfrom opensim_utils import perform_jra\n\n# get all files in the directory\nfeasible_force_files = os.listdir(feasible_set_dir)\n\n# remove files that are not .sto\nfeasible_force_files = [e for e in feasible_force_files if '.sto' in e]\n\n# perform joint reaction analyses\nprevious_iteration = 0\nfor i, force_file in enumerate(tqdm(feasible_force_files)):\n # change the previous_iteration variable\n if i < previous_iteration:\n continue\n\n if i > previous_iteration + 200:\n print('please shutdown this notebook and reopen (RAM usage problem)')\n print('set previous_iteration=' + str(previous_iteration + 200))\n break\n \n perform_jra(model_file, ik_file, grf_file, grf_xml_file,\n reserve_actuators_file, feasible_set_dir + force_file,\n jra_results_dir, prefix=str(i) + '_')\n```\n\n## Step 4: Postprocess Joint Reaction Loads\n\nThis section collects the joint reaction loads calculated in the previous step.\n\n\n```python\n# since this section of code may be called independently after step 3\n# ensure that cells 2, 5 and 9 are evaluated\nfrom IPython.display import Javascript\nJavascript(\"Jupyter.notebook.execute_cells([2, 5, 9])\")\n```\n\n\n\n\n \n\n\n\n\n```python\nfrom util import readMotionFile, index_containing_substring\n\n# select the joint of interest\njoint = 'hip_r'\n\n# load OpenSim JRA results (step 1)\nos_header, os_labels, os_data = readMotionFile(jra_file)\nos_data = np.array(os_data)\njoint_index = index_containing_substring(os_labels, joint)\nassert(joint_index != -1)\n\n# get all files from the JRA batch simulation (step 3)\njra_files = os.listdir(jra_results_dir)\n\n# remove files that are not joint reactions\njra_files = [e for e in jra_files if 'ReactionLoads' in e]\n \n# allocate the necessary space to collect all results\nsolutions_to_keep = len(jra_files)\nsimulationData = np.empty([solutions_to_keep,\n os_data.shape[0],\n os_data.shape[1]],\n dtype=float)\n\n# collect reaction loads\nfor i, f in enumerate(tqdm(jra_files)):\n header, labels, data = readMotionFile(jra_results_dir + f)\n simulationData[i, :, :] = np.array(data)\n```\n\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 16524/16524 [01:59<00:00, 138.28it/s]\n\n\nThe following section of code compares the feasible and OpenSim reaction loads\nfor the joint of interest. The reaction loads are normalized using the body\nweight of the subject. 
The heel strike and toe off events are annotated\naccordingly.\n\n\n```python\n# select the heel strike and toe off events\nif '_l' in joint:\n heel_strike = heel_strike_left\n toe_off = toe_off_left\nelse:\n heel_strike = heel_strike_right\n toe_off = toe_off_right\n\n# plot data min/max reactions vs OpenSim JRA\njoints = 3\nfig, ax = plt.subplots(nrows=1, ncols=joints, figsize=(15, 5))\nfor i in range(0, joints):\n # plot feasible reaction loads\n min_reaction = np.min(simulationData[1:, 1:, joint_index[i]] / body_weight, \n axis=0)\n max_reaction = np.max(simulationData[1:, 1:, joint_index[i]] / body_weight, \n axis=0)\n ax[i].fill_between(os_data[1:, 0], min_reaction, max_reaction, \n color='b', alpha=0.2, label='Feasible Reactions')\n # plot OpenSim reaction loads\n ax[i].plot(os_data[1:, 0], os_data[1:, joint_index[i]] / body_weight,\n '-.r', label='OpenSim JRA')\n # annotate the heel strike and toe off regions\n min_min = np.min(min_reaction)\n max_max = np.max(max_reaction)\n ax[i].vlines(x=heel_strike, ymin=min_min, ymax=max_max,\n color='c', linestyle='--', label='HS')\n ax[i].vlines(x=toe_off, ymin=min_min, ymax=max_max,\n color='m', linestyle=':', label='TO')\n # figure settings\n ax[i].set_title(os_labels[joint_index[i]])\n ax[i].set_xlabel('time (s)')\n ax[i].set_ylabel('reaction / body weight')\n if i == joints - 1:\n ax[i].legend()\n\n# export results\nfig.tight_layout()\nfig.savefig(figures_dir + joint + '.pdf',\n format='pdf', dpi=300)\nfig.savefig(figures_dir + joint + '.png',\n format='png', dpi=300)\n# fig.show()\n```\n\nWe observe that the results obtained from OpenSim joint reaction analysis\npredict low reaction load levels, since the minimum muscle effort criterion is\nused to compute the muscle forces, ignoring muscle co-contraction. On the\ncontrary, it is possible to calculate the feasible reactions without making any\nprior assumption, which can limit the scope and extent of the analysis. 
Last and\nperhaps most importantly, the large range of possible values confirms that\nmisinterpretation of the results is possible if the null space solutions are\nignored.\n", "meta": {"hexsha": "22fb189b8341aa175c95b264162a1d8b73f58ad3", "size": 119842, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "feasible_joint_reaction_loads/python/.ipynb_checkpoints/feasible_joint_reaction_loads-checkpoint.ipynb", "max_stars_repo_name": "mitkof6/musculoskeletal-redundancy", "max_stars_repo_head_hexsha": "331ae7ab01e768c6a8c20ec8090464eeef547eea", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-01-08T19:11:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T10:20:46.000Z", "max_issues_repo_path": "feasible_joint_reaction_loads/python/feasible_joint_reaction_loads.ipynb", "max_issues_repo_name": "mitkof6/musculoskeletal-redundancy", "max_issues_repo_head_hexsha": "331ae7ab01e768c6a8c20ec8090464eeef547eea", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "feasible_joint_reaction_loads/python/feasible_joint_reaction_loads.ipynb", "max_forks_repo_name": "mitkof6/musculoskeletal-redundancy", "max_forks_repo_head_hexsha": "331ae7ab01e768c6a8c20ec8090464eeef547eea", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 148.6873449132, "max_line_length": 94928, "alphanum_fraction": 0.8741426211, "converted": true, "num_tokens": 3797, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5774953651858118, "lm_q2_score": 0.4493926344647596, "lm_q1q2_score": 0.2595221635520404}} {"text": "```python\n%matplotlib inline\n```\n\n\nWord Embeddings: Encoding Lexical Semantics\n===========================================\n**Translated by**: `\uc784\uc131\uc5f0 `_\n\n\nA word embedding is a set of dense real-valued vectors, one corresponding to each word in a corpus (or the act of obtaining such vectors). In natural language processing, where words are mainly used as features, converting words into a computer-friendly form is essential, because it is quite hard for a computer to understand words directly. So, what is a good way to represent a word? 
Of course, we could use the ASCII code corresponding to each character, but an ASCII code only tells us *what* the word is, not what it *means*. (We could exploit grammatical features such as word endings with rules, or capitalization in English, but that is not sufficient.) Not only how to represent a word, but also how to compute with that representation is a big problem. The neural network models usually used to obtain such dense vectors have a huge input dimension of $|V|$ (the number of words in the corpus) and a small output dimension of just a few values (for example, when classifying text). In other words, computation over words is essential. How can we transform this huge-dimensional space into a smaller one?\n\nFirst of all, instead of the ASCII codes mentioned above, how about using one-hot encoding? One-hot encoding means representing a single word $w$ with the following vector:\n\n\\begin{align}\\overbrace{\\left[ 0, 0, \\dots, 1, \\dots, 0, 0 \\right]}^\\text{|V| elements}\\end{align}\n\nHere the 1 sits at the single position corresponding to the word the vector represents (all remaining entries are 0). A vector representing a different word has its 1 in a different place.\n\nOne-hot encoding has the advantage of being easy to construct, but it is as limited as it is simple. For a start, a single word vector must be large enough to represent every word; considering how many different kinds of words we use, that is an enormously large vector. And that is not all. One-hot vectors treat every word as an independent entity. 
That is, each word lives on a completely different axis of the space, so relationships between words cannot be represented. But we do want to somehow compute the *similarity* between words. Why does similarity matter? Consider the following example.\n\nSuppose our goal is to build a language model, and the following sentences are given as training data:\n\n* The mathematician ran to the store.\n* The physicist ran to the store.\n* The mathematician proved the Riemann hypothesis.\n\nNow suppose we encounter the following sentence, which is not in the training data:\n\n* The physicist proved the Riemann hypothesis.\n\nA language model based on ASCII codes or one-hot encoding could handle these sentences to some extent, but is there not room for improvement? First, consider the following two facts:\n\n* 'mathematician' and 'physicist' play the same role in their sentences; these two words must be semantically related somehow.\n* In the training data, we have seen 'mathematician' play the role that 'physicist' plays in the new sentence.\n
\uc5b8\uc5b4\ud559\uc801\uc73c\ub85c\ub294 `\ubd84\uc0b0 \uc758\ubbf8 \uac00\uc124(distributional\nhypothesis) `__ \uc774\ub77c\uace0\ub3c4 \ud569\ub2c8\ub2e4.\n\n\n\ubc00\uc9d1\ub41c \ub2e8\uc5b4 \uc784\ubca0\ub529 \uad6c\ud558\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uc5b4\ub5bb\uac8c \ub2e8\uc5b4\uc758 \uc758\ubbf8\uc801 \uc720\uc0ac\ub3c4\ub97c \uc778\ucf54\ub529 \ud560 \uc218 \uc788\uc744\uae4c\uc694? \ub2e4\uc2dc \ub9d0\ud574, \uc5b4\ub5bb\uac8c \ud574\uc57c \ub2e8\uc5b4\uc758 \uc720\uc0ac\ub3c4\ub97c\n\ub2e8\uc5b4 \ubca1\ud130\uc5d0 \ubc18\uc601\ud560 \uc218 \uc788\uc744\uae4c\uc694? \ub2e8\uc5b4 \ub370\uc774\ud130\uc5d0 \uc758\ubbf8\uc801 \uc18d\uc131(attribute)\uc744 \ubd80\uc5ec\ud558\ub294 \uac74 \uc5b4\ub5a4\uac00\uc694?\n\uc608\ub97c \ub4e4\uc5b4 '\uc218\ud559\uc790'\uc640 '\ubb3c\ub9ac\ud559\uc790'\uac00 \ubaa8\ub450 \ub6f8 \uc218 \uc788\ub2e4\uba74, \ud574\ub2f9 \ub2e8\uc5b4\uc758 '\ub6f8 \uc218 \uc788\uc74c' \uc18d\uc131\uc5d0 \ub192\uc740 \uc810\uc218\ub97c \uc8fc\ub294 \uac81\ub2c8\ub2e4.\n\uacc4\uc18d \ud574\ubd05\uc2dc\ub2e4. \ub2e4\ub978 \ub2e8\uc5b4\ub4e4\uc5d0 \ub300\ud574\uc11c\ub294 \uc5b4\ub5a0\ud55c \uc18d\uc131\uc744 \ub9cc\ub4e4 \uc218 \uc788\uc744\uc9c0 \uc0dd\uac01\ud574\ubd05\uc2dc\ub2e4.\n\n\ub9cc\uc57d \uac01 \uc18d\uc131\uc744 \ud558\ub098\uc758 \ucc28\uc6d0\uc774\ub77c\uace0 \ubcf8\ub2e4\uba74 \ud558\ub098\uc758 \ub2e8\uc5b4\uc5d0 \uc544\ub798\uc640 \uac19\uc740 \ubca1\ud130\ub97c \ubc30\uc815\ud560 \uc218 \uc788\uc744\uac81\ub2c8\ub2e4.\n\n\\begin{align}q_\\text{\uc218\ud559\uc790} = \\left[ \\overbrace{2.3}^\\text{\ub6f8 \uc218 \uc788\uc74c},\n \\overbrace{9.4}^\\text{\ucee4\ud53c\ub97c \uc88b\uc544\ud568}, \\overbrace{-5.5}^\\text{\ubb3c\ub9ac \uc804\uacf5\uc784}, \\dots \\right]\\end{align}\n\n\\begin{align}q_\\text{\ubb3c\ub9ac\ud559\uc790} = \\left[ \\overbrace{2.5}^\\text{\ub6f8 \uc218 \uc788\uc74c},\n \\overbrace{9.1}^\\text{\ucee4\ud53c\ub97c \uc88b\uc544\ud568}, \\overbrace{6.4}^\\text{\ubb3c\ub9ac \uc804\uacf5\uc784}, \\dots \\right]\\end{align}\n\n\uadf8\ub7ec\uba74 \uc544\ub798\uc640 \uac19\uc774 \ub450 \ub2e8\uc5b4 \uc0ac\uc774\uc758 \uc720\uc0ac\ub3c4\ub97c \uad6c\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. ('\uc720\uc0ac\ub3c4'\ub77c\ub294 \ud568\uc218\ub97c \uc815\uc758\ud558\ub294 \uac81\ub2c8\ub2e4)\n\n\\begin{align}\\text{\uc720\uc0ac\ub3c4}(\\text{\ubb3c\ub9ac\ud559\uc790}, \\text{\uc218\ud559\uc790}) = q_\\text{\ubb3c\ub9ac\ud559\uc790} \\cdot q_\\text{\uc218\ud559\uc790}\\end{align}\n\n\ubb3c\ub860 \ubcf4\ud1b5\uc740 \uc774\ub807\uac8c \ubca1\ud130\uc758 \uae38\uc774\ub85c \ub098\ub220\uc8fc\uc9c0\ub9cc\uc694.\n\n\\begin{align}\\text{\uc720\uc0ac\ub3c4}(\\text{\ubb3c\ub9ac\ud559\uc790}, \\text{\uc218\ud559\uc790}) = \\frac{q_\\text{\ubb3c\ub9ac\ud559\uc790} \\cdot q_\\text{\uc218\ud559\uc790}}\n {\\| q_\\text{\ubb3c\ub9ac\ud559\uc790} \\| \\| q_\\text{\uc218\ud559\uc790} \\|} = \\cos (\\phi)\\end{align}\n\n$\\phi$ \ub294 \ub450 \ubca1\ud130 \uc0ac\uc774\uc758 \uac01\uc785\ub2c8\ub2e4. \uc774\ub7f0 \uc2dd\uc774\uba74 \uc815\ub9d0 \ube44\uc2b7\ud55c \ub2e8\uc5b4\ub294 \uc720\uc0ac\ub3c4 1\uc744 \uac16\uace0,\n\uc815\ub9d0 \ub2e4\ub978 \ub2e8\uc5b4\ub294 \uc720\uc0ac\ub3c4 -1\uc744 \uac16\uaca0\uc8e0. 
Intuitively, the more alike two words' meanings are, the more their vectors should point in the same direction.\n\nYou can quickly see that the sparse one-hot vectors introduced at the beginning of this tutorial are really just a\nspecial case of the semantic vectors we have just defined, where each element of the word vector expresses one\nsemantic attribute of that word and the similarity between every pair of distinct words is 0. The semantic vectors\ndefined above are *dense*; that is, they have far fewer zero entries than one-hot vectors do.\n\nThese vectors, however, are really hard to come up with. How would we decide on the semantic attributes that\ndetermine the similarity between words, and even if we settled on the attributes, on what basis would we choose\nthe value for each one? Could we instead derive the attributes and their values from data and build the word\nvectors automatically? We can, by using deep learning. Deep learning uses artificial neural networks to learn how\nto represent attributes automatically, without human intervention. To exploit this, we make the word vectors model\nparameters and update them along with everything else during training. In this way our neural network will find\n*latent semantic attributes* that it can, at least in theory, learn well. Keep in mind that vectors made of these\nlatent semantic attributes are quite hard for a human to interpret. Unlike the hand-assigned attributes above, such\nas mathematicians and physicists both liking coffee, if a neural network finds a word's attributes automatically it\nwill be hard to know what those attributes and values actually mean. 
For example, suppose that the representation vectors the neural network found for 'mathematician' and\n'physicist' both have a large second element. We can tell that the two words are similar, but it is very hard to say\nwhat the second element actually means. Beyond the fact that they are close in the representation vector space, the\nvectors probably cannot tell us much more.\n\nIn summary, **word embeddings are a way of representing a word's *meaning*; they efficiently encode the semantic\ninformation that will be useful for the problem you later want to solve with them.** You can also encode things other\nthan word meaning, such as part-of-speech tags or parse trees! The important thing is to grasp the idea of feature\nembeddings.\n\n\nWord Embeddings in Pytorch\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nBefore we get to actual code and examples, let us look briefly at how word embeddings are used when programming in\nPyTorch and in deep learning programming more generally. Just as we assigned an index to each word when we defined\none-hot vectors at the very top, we also need to give every word an index when using word embeddings. These indices\nwill be used in a look-up table: the word embeddings are stored in a $|V| \\times D$ matrix, where the $D$-dimensional\nembedding vector for the word with index $i$ is stored in the $i$-th row of the matrix, so that $i$ can be used to\nlook up the embedding vector. In all of the code below, the dictionary mapping words to indices is called\nword\\_to\\_ix.\n\nTo make embeddings easy to use, PyTorch provides the look-up table functionality described above in\ntorch.nn.Embedding (a hand-rolled version of the same look-up is sketched below for comparison). 
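\n\nAs an illustration only (this sketch is not part of the original tutorial), the look-up table could be written by\nhand like this, with row $i$ of a $|V| \\times D$ matrix holding the vector of the word with index $i$:\n\n```python\nimport torch\n\nword_to_ix = {'hello': 0, 'world': 1}\nembedding_table = torch.randn(len(word_to_ix), 5)    # a |V| x D matrix, here 2 x 5\nhello_vector = embedding_table[word_to_ix['hello']]  # plain row indexing does the look-up\nprint(hello_vector)\n```\n\nnn.Embedding wraps this same pattern in a trainable module.\n\n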
The torch.nn.Embedding module takes two arguments in total: the number of words in the vocabulary and the\ndimensionality of the embeddings.\n\nTo look up an embedding in the torch.nn.Embedding table, you must use an index variable of type torch.LongTensor\n(because the indices are integers, not floating point numbers).\n\n\n\n```python\n# Author: Robert Guthrie\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\ntorch.manual_seed(1)\n```\n\n\n```python\nword_to_ix = {\"hello\": 0, \"world\": 1}\nembeds = nn.Embedding(2, 5)  # 2 words in vocab, 5 dimensional embeddings\nlookup_tensor = torch.tensor([word_to_ix[\"hello\"]], dtype=torch.long)\nhello_embed = embeds(lookup_tensor)\nprint(hello_embed)\n```\n\nExample: N-Gram Language Modeling\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRecall that in an n-gram language model, given a sequence of words $w$, we want to compute\n\n\\begin{align}P(w_i | w_{i-1}, w_{i-2}, \\dots, w_{i-n+1} )\\end{align}\n\nwhere $w_i$ is the i-th word of the sequence.\n\nIn this example, we will compute the loss function on some training data and update the parameters with\nbackpropagation.\n\n\n\n\n\n```python\nCONTEXT_SIZE = 2\nEMBEDDING_DIM = 10\n# We will use Shakespeare's Sonnet 2.\ntest_sentence = \"\"\"When forty winters shall besiege thy brow,\nAnd dig deep trenches in thy beauty's field,\nThy youth's proud livery so gazed on now,\nWill be a totter'd weed of small worth held:\nThen being asked, where all thy beauty lies,\nWhere all the treasure of thy lusty days;\nTo say, within thine own deep sunken eyes,\nWere an all-eating shame, and thriftless praise.\nHow much more praise deserv'd thy beauty's use,\nIf thou couldst answer 'This fair child of mine\nShall sum my count, and make my old excuse,'\nProving his beauty by succession thine!\nThis were to be new made when thou art old,\nAnd see thy blood warm when thou feel'st it cold.\"\"\".split()\n# We should really tokenize the input properly, but we will keep things simple here.\n# We will build a list of tuples. 
Each tuple is ([ word i-2, word i-1 ], target word).\ntrigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])\n            for i in range(len(test_sentence) - 2)]\n# Print the first 3 tuples so we can see what the data looks like.\nprint(trigrams[:3])\n\nvocab = set(test_sentence)\nword_to_ix = {word: i for i, word in enumerate(vocab)}\n\n\nclass NGramLanguageModeler(nn.Module):\n\n    def __init__(self, vocab_size, embedding_dim, context_size):\n        super(NGramLanguageModeler, self).__init__()\n        self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n        self.linear1 = nn.Linear(context_size * embedding_dim, 128)\n        self.linear2 = nn.Linear(128, vocab_size)\n\n    def forward(self, inputs):\n        embeds = self.embeddings(inputs).view((1, -1))\n        out = F.relu(self.linear1(embeds))\n        out = self.linear2(out)\n        log_probs = F.log_softmax(out, dim=1)\n        return log_probs\n\n\nlosses = []\nloss_function = nn.NLLLoss()\nmodel = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)\noptimizer = optim.SGD(model.parameters(), lr=0.001)\n\nfor epoch in range(10):\n    total_loss = 0\n    for context, target in trigrams:\n\n        # Step 1. Prepare the inputs to be passed to the model (i.e. turn the words\n        # into integer indices and wrap them in PyTorch tensors).\n        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)\n\n        # Step 2. Torch *accumulates* gradients, so before passing in a new instance\n        # we zero out the gradients from the old instance.\n        model.zero_grad()\n\n        # Step 3. Run the forward pass to get log probabilities over the next word.\n        log_probs = model(context_idxs)\n\n        # Step 4. Compute the loss function (PyTorch wants the target word wrapped in a tensor).\n        loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))\n\n        # Step 5. 
Do the backward pass and update the gradients.\n        loss.backward()\n        optimizer.step()\n\n        # Get the Python number from a one-element tensor by calling tensor.item().\n        total_loss += loss.item()\n    losses.append(total_loss)\nprint(losses)  # The loss should decrease on every iteration over the training data!\n```\n\nExample: Computing Word Embeddings: Continuous Bag-of-Words\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe Continuous Bag-of-Words (CBOW) model is used a lot in deep learning for NLP.\nIt predicts a particular word from the surrounding words in a sentence, that is, from a few words before and a few\nwords after it; unlike language modeling, it is neither sequential nor probabilistic.\nCBOW is mostly used to quickly train word embeddings that then serve as the initial inputs of a more complex model.\nThese are called *pre-trained embeddings*, and the technique can usually be counted on for a few percentage points\nof extra performance.\n\nThe CBOW model works as follows. Given a target word $w_i$ and, on each side of it, $N$ context words\n$w_{i-1}, \\dots, w_{i-N}$ and $w_{i+1}, \\dots, w_{i+N}$ (collectively call the context words $C$), CBOW tries to\nminimize\n\n\\begin{align}-\\log p(w_i | C) = -\\log \\text{Softmax}(A(\\sum_{w \\in C} q_w) + b)\\end{align}\n\nwhere $q_w$ is the embedding of word $w$.\n\nLook at the class template below and implement CBOW in PyTorch. Some hints:\n\n* Think about which parameters you need to define.\n* Make sure you know the shape of the variables handled in each operation.\n  Use .view() if you need to reshape a tensor.\n\n\n\n\n\n```python\nCONTEXT_SIZE = 2  # 2 words to the left, 2 to the right\nraw_text = \"\"\"We are about to study the idea of a computational process.\nComputational processes are abstract beings that inhabit computers.\nAs they evolve, processes manipulate other abstract things called data.\nThe evolution of a process is directed by a pattern of rules\ncalled a program. People create programs to direct processes. 
In effect,\nwe conjure the spirits of the computer with our spells.\"\"\".split()\n\n# Turn `raw_text` into a set to remove duplicate words.\nvocab = set(raw_text)\nvocab_size = len(vocab)\n\nword_to_ix = {word: i for i, word in enumerate(vocab)}\ndata = []\nfor i in range(2, len(raw_text) - 2):\n    context = [raw_text[i - 2], raw_text[i - 1],\n               raw_text[i + 1], raw_text[i + 2]]\n    target = raw_text[i]\n    data.append((context, target))\nprint(data[:5])\n\n\nclass CBOW(nn.Module):\n\n    def __init__(self):\n        pass\n\n    def forward(self, inputs):\n        pass\n\n# Create your model and train it. Below is a helper function\n# to make the data preparation easier.\n\n\ndef make_context_vector(context, word_to_ix):\n    idxs = [word_to_ix[w] for w in context]\n    return torch.tensor(idxs, dtype=torch.long)\n\n\nmake_context_vector(data[0][0], word_to_ix)  # example\n```\n\n\n```python\n%matplotlib inline\n```\n\n\nAdversarial Example Generation\n==============================\n\n**Author:** `Nathan Inkawhich `__\n\nIf you are reading this, hopefully you can appreciate how effective some\nmachine learning models are. Research is constantly pushing ML models to\nbe faster, more accurate, and more efficient. However, an often\noverlooked aspect of designing and training models is security and\nrobustness, especially in the face of an adversary who wishes to fool\nthe model.\n\nThis tutorial will raise your awareness to the security vulnerabilities\nof ML models, and will give insight into the hot topic of adversarial\nmachine learning. 
You may be surprised to find that adding imperceptible\nperturbations to an image *can* cause drastically different model\nperformance. Given that this is a tutorial, we will explore the topic\nvia example on an image classifier. Specifically we will use one of the\nfirst and most popular attack methods, the Fast Gradient Sign Attack\n(FGSM), to fool an MNIST classifier.\n\n\n\n\nThreat Model\n------------\n\nFor context, there are many categories of adversarial attacks, each with\na different goal and assumption of the attacker\u2019s knowledge. However, in\ngeneral the overarching goal is to add the least amount of perturbation\nto the input data to cause the desired misclassification. There are\nseveral kinds of assumptions of the attacker\u2019s knowledge, two of which\nare: **white-box** and **black-box**. A *white-box* attack assumes the\nattacker has full knowledge and access to the model, including\narchitecture, inputs, outputs, and weights. A *black-box* attack assumes\nthe attacker only has access to the inputs and outputs of the model, and\nknows nothing about the underlying architecture or weights. There are\nalso several types of goals, including **misclassification** and\n**source/target misclassification**. A goal of *misclassification* means\nthe adversary only wants the output classification to be wrong but does\nnot care what the new classification is. A *source/target\nmisclassification* means the adversary wants to alter an image that is\noriginally of a specific source class so that it is classified as a\nspecific target class.\n\nIn this case, the FGSM attack is a *white-box* attack with the goal of\n*misclassification*. With this background information, we can now\ndiscuss the attack in detail.\n\nFast Gradient Sign Attack\n-------------------------\n\nOne of the first and most popular adversarial attacks to date is\nreferred to as the *Fast Gradient Sign Attack (FGSM)* and is described\nby Goodfellow et. al.\u00a0in `Explaining and Harnessing Adversarial\nExamples `__. The attack is remarkably\npowerful, and yet intuitive. It is designed to attack neural networks by\nleveraging the way they learn, *gradients*. The idea is simple, rather\nthan working to minimize the loss by adjusting the weights based on the\nbackpropagated gradients, the attack *adjusts the input data to maximize\nthe loss* based on the same backpropagated gradients. In other words,\nthe attack uses the gradient of the loss w.r.t the input data, then\nadjusts the input data to maximize the loss.\n\nBefore we jump into the code, let\u2019s look at the famous\n`FGSM `__ panda example and extract\nsome notation.\n\n.. figure:: /_static/img/fgsm_panda_image.png\n :alt: fgsm_panda_image\n\nFrom the figure, $\\mathbf{x}$ is the original input image\ncorrectly classified as a \u201cpanda\u201d, $y$ is the ground truth label\nfor $\\mathbf{x}$, $\\mathbf{\\theta}$ represents the model\nparameters, and $J(\\mathbf{\\theta}, \\mathbf{x}, y)$ is the loss\nthat is used to train the network. The attack backpropagates the\ngradient back to the input data to calculate\n$\\nabla_{x} J(\\mathbf{\\theta}, \\mathbf{x}, y)$. Then, it adjusts\nthe input data by a small step ($\\epsilon$ or $0.007$ in the\npicture) in the direction (i.e.\n$sign(\\nabla_{x} J(\\mathbf{\\theta}, \\mathbf{x}, y))$) that will\nmaximize the loss. 
The resulting perturbed image, $x'$, is then\n*misclassified* by the target network as a \u201cgibbon\u201d when it is still\nclearly a \u201cpanda\u201d.\n\nHopefully now the motivation for this tutorial is clear, so lets jump\ninto the implementation.\n\n\n\n\n\n```python\nfrom __future__ import print_function\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\nImplementation\n--------------\n\nIn this section, we will discuss the input parameters for the tutorial,\ndefine the model under attack, then code the attack and run some tests.\n\nInputs\n~~~~~~\n\nThere are only three inputs for this tutorial, and are defined as\nfollows:\n\n- **epsilons** - List of epsilon values to use for the run. It is\n important to keep 0 in the list because it represents the model\n performance on the original test set. Also, intuitively we would\n expect the larger the epsilon, the more noticeable the perturbations\n but the more effective the attack in terms of degrading model\n accuracy. Since the data range here is $[0,1]$, no epsilon\n value should exceed 1.\n\n- **pretrained_model** - path to the pretrained MNIST model which was\n trained with\n `pytorch/examples/mnist `__.\n For simplicity, download the pretrained model `here `__.\n\n- **use_cuda** - boolean flag to use CUDA if desired and available.\n Note, a GPU with CUDA is not critical for this tutorial as a CPU will\n not take much time.\n\n\n\n\n\n```python\nepsilons = [0, .05, .1, .15, .2, .25, .3]\npretrained_model = \"data/lenet_mnist_model.pth\"\nuse_cuda=True\n```\n\nModel Under Attack\n~~~~~~~~~~~~~~~~~~\n\nAs mentioned, the model under attack is the same MNIST model from\n`pytorch/examples/mnist `__.\nYou may train and save your own MNIST model or you can download and use\nthe provided model. The *Net* definition and test dataloader here have\nbeen copied from the MNIST example. The purpose of this section is to\ndefine the model and dataloader, then initialize the model and load the\npretrained weights.\n\n\n\n\n\n```python\n# LeNet Model definition\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = self.fc2(x)\n return F.log_softmax(x, dim=1)\n\n# MNIST Test dataset and dataloader declaration\ntest_loader = torch.utils.data.DataLoader(\n datasets.MNIST('../data', train=False, download=True, transform=transforms.Compose([\n transforms.ToTensor(),\n ])), \n batch_size=1, shuffle=True)\n\n# Define what device we are using\nprint(\"CUDA Available: \",torch.cuda.is_available())\ndevice = torch.device(\"cuda\" if (use_cuda and torch.cuda.is_available()) else \"cpu\")\n\n# Initialize the network\nmodel = Net().to(device)\n\n# Load the pretrained model\nmodel.load_state_dict(torch.load(pretrained_model, map_location='cpu'))\n\n# Set the model in evaluation mode. 
In this case this is for the Dropout layers\nmodel.eval()\n```\n\nFGSM Attack\n~~~~~~~~~~~\n\nNow, we can define the function that creates the adversarial examples by\nperturbing the original inputs. The ``fgsm_attack`` function takes three\ninputs, *image* is the original clean image ($x$), *epsilon* is\nthe pixel-wise perturbation amount ($\\epsilon$), and *data_grad*\nis gradient of the loss w.r.t the input image\n($\\nabla_{x} J(\\mathbf{\\theta}, \\mathbf{x}, y)$). The function\nthen creates perturbed image as\n\n\\begin{align}perturbed\\_image = image + epsilon*sign(data\\_grad) = x + \\epsilon * sign(\\nabla_{x} J(\\mathbf{\\theta}, \\mathbf{x}, y))\\end{align}\n\nFinally, in order to maintain the original range of the data, the\nperturbed image is clipped to range $[0,1]$.\n\n\n\n\n\n```python\n# FGSM attack code\ndef fgsm_attack(image, epsilon, data_grad):\n # Collect the element-wise sign of the data gradient\n sign_data_grad = data_grad.sign()\n # Create the perturbed image by adjusting each pixel of the input image\n perturbed_image = image + epsilon*sign_data_grad\n # Adding clipping to maintain [0,1] range\n perturbed_image = torch.clamp(perturbed_image, 0, 1)\n # Return the perturbed image\n return perturbed_image\n```\n\nTesting Function\n~~~~~~~~~~~~~~~~\n\nFinally, the central result of this tutorial comes from the ``test``\nfunction. Each call to this test function performs a full test step on\nthe MNIST test set and reports a final accuracy. However, notice that\nthis function also takes an *epsilon* input. This is because the\n``test`` function reports the accuracy of a model that is under attack\nfrom an adversary with strength $\\epsilon$. More specifically, for\neach sample in the test set, the function computes the gradient of the\nloss w.r.t the input data ($data\\_grad$), creates a perturbed\nimage with ``fgsm_attack`` ($perturbed\\_data$), then checks to see\nif the perturbed example is adversarial. In addition to testing the\naccuracy of the model, the function also saves and returns some\nsuccessful adversarial examples to be visualized later.\n\n\n\n\n\n```python\ndef test( model, device, test_loader, epsilon ):\n\n # Accuracy counter\n correct = 0\n adv_examples = []\n\n # Loop over all examples in test set\n for data, target in test_loader:\n\n # Send the data and label to the device\n data, target = data.to(device), target.to(device)\n\n # Set requires_grad attribute of tensor. 
Important for Attack\n data.requires_grad = True\n\n # Forward pass the data through the model\n output = model(data)\n init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability\n\n # If the initial prediction is wrong, dont bother attacking, just move on\n if init_pred.item() != target.item():\n continue\n\n # Calculate the loss\n loss = F.nll_loss(output, target)\n\n # Zero all existing gradients\n model.zero_grad()\n\n # Calculate gradients of model in backward pass\n loss.backward()\n\n # Collect datagrad\n data_grad = data.grad.data\n\n # Call FGSM Attack\n perturbed_data = fgsm_attack(data, epsilon, data_grad)\n\n # Re-classify the perturbed image\n output = model(perturbed_data)\n\n # Check for success\n final_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability\n if final_pred.item() == target.item():\n correct += 1\n # Special case for saving 0 epsilon examples\n if (epsilon == 0) and (len(adv_examples) < 5):\n adv_ex = perturbed_data.squeeze().detach().cpu().numpy()\n adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )\n else:\n # Save some adv examples for visualization later\n if len(adv_examples) < 5:\n adv_ex = perturbed_data.squeeze().detach().cpu().numpy()\n adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )\n\n # Calculate final accuracy for this epsilon\n final_acc = correct/float(len(test_loader))\n print(\"Epsilon: {}\\tTest Accuracy = {} / {} = {}\".format(epsilon, correct, len(test_loader), final_acc))\n\n # Return the accuracy and an adversarial example\n return final_acc, adv_examples\n```\n\nRun Attack\n~~~~~~~~~~\n\nThe last part of the implementation is to actually run the attack. Here,\nwe run a full test step for each epsilon value in the *epsilons* input.\nFor each epsilon we also save the final accuracy and some successful\nadversarial examples to be plotted in the coming sections. Notice how\nthe printed accuracies decrease as the epsilon value increases. Also,\nnote the $\\epsilon=0$ case represents the original test accuracy,\nwith no attack.\n\n\n\n\n\n```python\naccuracies = []\nexamples = []\n\n# Run test for each epsilon\nfor eps in epsilons:\n acc, ex = test(model, device, test_loader, eps)\n accuracies.append(acc)\n examples.append(ex)\n```\n\nResults\n-------\n\nAccuracy vs Epsilon\n~~~~~~~~~~~~~~~~~~~\n\nThe first result is the accuracy versus epsilon plot. As alluded to\nearlier, as epsilon increases we expect the test accuracy to decrease.\nThis is because larger epsilons mean we take a larger step in the\ndirection that will maximize the loss. Notice the trend in the curve is\nnot linear even though the epsilon values are linearly spaced. For\nexample, the accuracy at $\\epsilon=0.05$ is only about 4% lower\nthan $\\epsilon=0$, but the accuracy at $\\epsilon=0.2$ is 25%\nlower than $\\epsilon=0.15$. Also, notice the accuracy of the model\nhits random accuracy for a 10-class classifier between\n$\\epsilon=0.25$ and $\\epsilon=0.3$.\n\n\n\n\n\n```python\nplt.figure(figsize=(5,5))\nplt.plot(epsilons, accuracies, \"*-\")\nplt.yticks(np.arange(0, 1.1, step=0.1))\nplt.xticks(np.arange(0, .35, step=0.05))\nplt.title(\"Accuracy vs Epsilon\")\nplt.xlabel(\"Epsilon\")\nplt.ylabel(\"Accuracy\")\nplt.show()\n```\n\nSample Adversarial Examples\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRemember the idea of no free lunch? In this case, as epsilon increases\nthe test accuracy decreases **BUT** the perturbations become more easily\nperceptible. 
In reality, there is a tradeoff between accuracy\ndegradation and perceptibility that an attacker must consider. Here, we\nshow some examples of successful adversarial examples at each epsilon\nvalue. Each row of the plot shows a different epsilon value. The first\nrow is the $\\epsilon=0$ examples which represent the original\n\u201cclean\u201d images with no perturbation. The title of each image shows the\n\u201coriginal classification -> adversarial classification.\u201d Notice that the\nperturbations start to become evident at $\\epsilon=0.15$ and are\nquite evident at $\\epsilon=0.3$. However, in all cases humans are\nstill capable of identifying the correct class despite the added noise.\n\n\n\n\n\n```python\n# Plot several examples of adversarial samples at each epsilon\ncnt = 0\nplt.figure(figsize=(8,10))\nfor i in range(len(epsilons)):\n    for j in range(len(examples[i])):\n        cnt += 1\n        plt.subplot(len(epsilons),len(examples[0]),cnt)\n        plt.xticks([], [])\n        plt.yticks([], [])\n        if j == 0:\n            plt.ylabel(\"Eps: {}\".format(epsilons[i]), fontsize=14)\n        orig,adv,ex = examples[i][j]\n        plt.title(\"{} -> {}\".format(orig, adv))\n        plt.imshow(ex, cmap=\"gray\")\nplt.tight_layout()\nplt.show()\n```\n\nWhere to go next?\n-----------------\n\nHopefully this tutorial gives some insight into the topic of adversarial\nmachine learning. There are many potential directions to go from here.\nThis attack represents the very beginning of adversarial attack research,\nand since then there have been many subsequent ideas for how to attack and\ndefend ML models from an adversary. In fact, at NIPS 2017 there was an\nadversarial attack and defense competition and many of the methods used\nin the competition are described in this paper: `Adversarial Attacks and\nDefences Competition `__. The work\non defense also leads into the idea of making machine learning models\nmore *robust* in general, to both naturally perturbed and adversarially\ncrafted inputs.\n\nAnother direction to go is adversarial attacks and defense in different\ndomains. Adversarial research is not limited to the image domain, check\nout `this `__ attack on\nspeech-to-text models. But perhaps the best way to learn more about\nadversarial machine learning is to get your hands dirty. Try to\nimplement a different attack from the NIPS 2017 competition, and see how\nit differs from FGSM; a minimal sketch of one such iterative variant is given below. 
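\n\nAs one possible starting point, here is a rough sketch (not part of the original tutorial) of an iterative,\nFGSM-style attack. It simply reuses the ``fgsm_attack`` helper defined above to take several small steps and then\nprojects the result back into an $\\epsilon$-ball around the clean image; the step size ``alpha`` and the number of\nsteps are illustrative choices, not values taken from the competition.\n\n```python\ndef iterative_fgsm(model, data, target, epsilon, alpha=0.01, steps=10):\n    orig = data.detach()\n    perturbed = orig.clone()\n    for _ in range(steps):\n        perturbed.requires_grad_(True)\n        loss = F.nll_loss(model(perturbed), target)\n        model.zero_grad()\n        loss.backward()\n        # One small FGSM step using the helper defined earlier in the tutorial\n        perturbed = fgsm_attack(perturbed, alpha, perturbed.grad.data).detach()\n        # Project back into the epsilon-ball around the original image and re-clip to [0,1]\n        perturbed = torch.max(torch.min(perturbed, orig + epsilon), orig - epsilon)\n        perturbed = torch.clamp(perturbed, 0, 1)\n    return perturbed\n```\n\n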
Then, try to defend the model from your own\nattacks.\n\n\n\n", "meta": {"hexsha": "e67d3baadeb9cb7de80c79957906048882e61516", "size": 19697, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/_downloads/8c22803172fb62b19326a346e208ba61/fgsm_tutorial.ipynb", "max_stars_repo_name": "leejh1230/PyTorch-tutorials-kr", "max_stars_repo_head_hexsha": "ebbf44b863ff96c597631e28fc194eafa590c9eb", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-05T05:16:44.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-05T05:16:44.000Z", "max_issues_repo_path": "docs/_downloads/8c22803172fb62b19326a346e208ba61/fgsm_tutorial.ipynb", "max_issues_repo_name": "leejh1230/PyTorch-tutorials-kr", "max_issues_repo_head_hexsha": "ebbf44b863ff96c597631e28fc194eafa590c9eb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/_downloads/8c22803172fb62b19326a346e208ba61/fgsm_tutorial.ipynb", "max_forks_repo_name": "leejh1230/PyTorch-tutorials-kr", "max_forks_repo_head_hexsha": "ebbf44b863ff96c597631e28fc194eafa590c9eb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 101.5309278351, "max_line_length": 3311, "alphanum_fraction": 0.6722851196, "converted": true, "num_tokens": 3915, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5156199157230156, "lm_q2_score": 0.5, "lm_q1q2_score": 0.2578099578615078}} {"text": "# Multi-TX/TL\n\nMatthieu Kratz\n\n## Why this model?\n\nLike most genetic circuit models, you typically start with a model that captures the two fundamental processes of transcription and translation. These processes can modelled with varying degrees of complexity, ranging from a basic tx/tl model (__Equation 1__) to complex models that simulate tx/tl at a base pair level. For my project, it was of particular importance that I accurately model the sequestration of transcriptional machinery. Further, I was working without RPU data, making typical models that assume some maximum steady state saturation e.g. positive proportional hill intractable for my system. \n\n\\begin{align}\n\\\\ \\\\\n&G \\xrightarrow{ktx} T + G \\\\ \\\\ \n&\\textbf{Equation 1.} \\ \\ \\text{Simple Transcription}\n\\\\\n\\end{align}\n\n\n\nHence, I needed a model that gave reasonable steady state transcription and translation dynamics from non-RPU parameters, all while accurately tracking the sequestration of the relevant machinery. \n\n## The Multi-TX Model\n\nThe model CRN is below and the mechanism effectively relies on accounting for all possible RNAp occupancy states, including the various transitions between these occupancy states.\n\n\\begin{align}\n\\\\ \\\\\n&\\textbf{1. Binding} \\\\ \n&T7_p + T7_p:G_n \\underset{kT72}{\\overset{kT71}\\rightleftharpoons} T7_p:G_{n\\alpha} \\ \\ , \\ \\text{n $\\neq$ $n_{max}$} \\\\ \\\\\n&\\textbf{2. Closed --> Open} \\\\ \n&T7_p:G_{n\\alpha} \\xrightarrow{k_{iso}} T7_p:G_{n+1} \\ \\ , \\ \\text{n $\\neq$ $n_{max}$} \\\\ \\\\\n&\\textbf{3. 
TX} \\ \n&T7_p:G_{n\\alpha} \\xrightarrow{ktx} nT7_m + nT7_p + T7_p:G_{0\\alpha} \\ \\ , \\ \\text{n $\\neq$ 0} \\\\ \n&T7_p:G_n \\xrightarrow{ktx} nT7_m + nT7_p + T7_p:G_{0} \\ \\ , \\ \\text{n $\\neq$ 0} \\\\\n\\end{align}\n\nkT71 --> Promoter binding rate constant (bimolecular)\n\nkT72 --> Promoter unbinding rate constant (unimolecular)\n\nk_iso --> Closed to open complex transition rate constant (unimolecular)\n\nktx --> Single polymerase mRNA synthesis rate constant (unimolecular)\n\nA couple of comments:\n- We explicitly model the process of transitioning from the closed complex ($T7_p:G_{n\\alpha}$) to the open complex ($T7_p:G_{n+1}$). This is a pretty slow reaction in vivo, and across the various iterations of this model, including this step seemed to be key to accurately reflecting transcription dynamics.\n\n- $n_{max}$ refers to the maximum possible occupancy of the gene. This is the physical limit, ultimately determined by the footprint of the polymerase and the length of the gene. In relation to the point above, without the explicit isomerization this physical limit is always met; with it, this physical saturation is not reached (with my parameter set at least).\n\n- Polymerase can only be added one at a time to existing genes, i.e. a gene cannot have multiple binding events or closed complexes simultaneously.\n\n- We have two TX reactions, one from the closed state and the other from the open state, both with $N$ actively transcribing polymerases. We decided to allow release from the closed state because there is no reason why one polymerase in the closed form should inhibit the other actively transcribing polymerases. Further, at a modelling level, not allowing release from the closed state results in excessive sequestration of polymerase due to the long time scale of isomerization.\n\n- In both TX reactions, all $N$ polymerases (bar the one in the closed state) are simultaneously released, along with $N$ transcripts and the unoccupied gene ($T7_p:G_{0}$).\n\n## Biocrnpyler multi_tx Mechanism subclass\n\nThe subclass should be available via `from biocrnpyler import mechanism`; the code is below for clarity.\n\nA couple of comments:\n- The subclass has been designed to be used in concert with the Promoter and DNAassembly subclasses.\n\n- You have to define a maximum occupancy (int) and a cognate polymerase (species or str) when instantiating.\n\n- The various complex species are generated within the subclass from the names of the polymerase and dna objects in the DNAassembly object. They are defined as species with DNA material types.\n\n\n## Example: T7 Polymerase Transcription of GFP mRNA\n\nNow we define a DNA assembly that uses our mechanism, in the following steps:\n- Create a species for the relevant polymerase\n- Create the multi_tx Mechanism, giving it a maximum occupancy and the polymerase (must be a species)\n- Associate this mechanism with a promoter\n- Place this promoter into a DNA assembly\n\nAnd voila, the cassette regulated by T7p is ready to use!\n\nThis DNAassembly will be placed in a SimpleTxTlDilutionMixture which only dilutes mRNA, not proteins. I haven't used protein dilution as it makes it easier to glean the degree of sequestration present with this mechanism. 
In practical use, you would of course allow your polymerase to be diluted.\n\n\n```python\nfrom biocrnpyler import *\n\n#the most important parameters for multi_tx\n#max_occ is the number of ribosomes that can bind to a transcript\n#k_iso is the rate of isomerization\nparameters = {\"max_occ\":10, \"k_iso\":10}\n\n# Define Polymerase, and max occupancy and instatiate Mechanism Object\nT7P = Species('T7p','protein')\n#Multi-tx mechanism\nMX = multi_tx(pol=T7P,name='MX')\n\n# create promoter object, associated MX and params with it\npT7_mx = Promoter(name='pT7',mechanisms={'transcription':MX})\n\n# place promoter object into DNA assembly with GFP reporter\nGFP = Species(\"GFP\", material_type = \"protein\")\nPFL_mx = DNAassembly('PFL', promoter=pT7_mx, rbs = \"weak\", protein=GFP)\n\n# create simple promoter and DNA assembly objects that synthesize polymerase\nSC = DNAassembly('T7_const', promoter=\"weak\", rbs = \"weak\", protein=T7P)\n\n# make extract with T7p source and GFP and compile CRN \nmixture = SimpleTxTlDilutionMixture(components=[PFL_mx, SC], parameter_file = \"default_parameters.txt\", parameters = parameters)\n\nCRN1 = mixture.compile_crn()\n\nprint(CRN1.pretty_print(show_rates = False))\nprint(mixture.components)\n```\n\n Species (26) = {0. dna[PFL] init_conc = 0, 1. complex[dna[PFL]:protein[T7p]] init_conc = 0, 2. complex[dna[PFL]:2x_protein[T7p]] init_conc = 0, 3. complex[dna[PFL]:3x_protein[T7p]] init_conc = 0, 4. complex[dna[PFL]:4x_protein[T7p]] init_conc = 0, 5. complex[dna[PFL]:5x_protein[T7p]] init_conc = 0, 6. complex[dna[PFL]:6x_protein[T7p]] init_conc = 0, 7. complex[dna[PFL]:7x_protein[T7p]] init_conc = 0, 8. complex[dna[PFL]:8x_protein[T7p]] init_conc = 0, 9. complex[dna[PFL]:9x_protein[T7p]] init_conc = 0, 10. complex[dna[PFL]:10x_protein[T7p]] init_conc = 0, 11. complex[dna[PFL]:protein[T7p](closed)] init_conc = 0, 12. complex[dna[PFL]:2x_protein[T7p](closed)] init_conc = 0, 13. complex[dna[PFL]:3x_protein[T7p](closed)] init_conc = 0, 14. complex[dna[PFL]:4x_protein[T7p](closed)] init_conc = 0, 15. complex[dna[PFL]:5x_protein[T7p](closed)] init_conc = 0, 16. complex[dna[PFL]:6x_protein[T7p](closed)] init_conc = 0, 17. complex[dna[PFL]:7x_protein[T7p](closed)] init_conc = 0, 18. complex[dna[PFL]:8x_protein[T7p](closed)] init_conc = 0, 19. complex[dna[PFL]:9x_protein[T7p](closed)] init_conc = 0, 20. complex[dna[PFL]:10x_protein[T7p](closed)] init_conc = 0, 21. protein[T7p] init_conc = 0, 22. rna[PFL] init_conc = 0, 23. protein[GFP] init_conc = 0, 24. dna[T7_const] init_conc = 0, 25. rna[T7_const] init_conc = 0}\n \n Reactions (48) = [\n 0. protein[T7p]+complex[dna[PFL]:protein[T7p]] <--> complex[dna[PFL]:2x_protein[T7p](closed)]\n 1. protein[T7p]+complex[dna[PFL]:2x_protein[T7p]] <--> complex[dna[PFL]:3x_protein[T7p](closed)]\n 2. protein[T7p]+complex[dna[PFL]:3x_protein[T7p]] <--> complex[dna[PFL]:4x_protein[T7p](closed)]\n 3. protein[T7p]+complex[dna[PFL]:4x_protein[T7p]] <--> complex[dna[PFL]:5x_protein[T7p](closed)]\n 4. protein[T7p]+complex[dna[PFL]:5x_protein[T7p]] <--> complex[dna[PFL]:6x_protein[T7p](closed)]\n 5. protein[T7p]+complex[dna[PFL]:6x_protein[T7p]] <--> complex[dna[PFL]:7x_protein[T7p](closed)]\n 6. protein[T7p]+complex[dna[PFL]:7x_protein[T7p]] <--> complex[dna[PFL]:8x_protein[T7p](closed)]\n 7. protein[T7p]+complex[dna[PFL]:8x_protein[T7p]] <--> complex[dna[PFL]:9x_protein[T7p](closed)]\n 8. protein[T7p]+complex[dna[PFL]:9x_protein[T7p]] <--> complex[dna[PFL]:10x_protein[T7p](closed)]\n 9. 
complex[dna[PFL]:protein[T7p](closed)] --> complex[dna[PFL]:protein[T7p]]\n 10. complex[dna[PFL]:2x_protein[T7p](closed)] --> complex[dna[PFL]:2x_protein[T7p]]\n 11. complex[dna[PFL]:3x_protein[T7p](closed)] --> complex[dna[PFL]:3x_protein[T7p]]\n 12. complex[dna[PFL]:4x_protein[T7p](closed)] --> complex[dna[PFL]:4x_protein[T7p]]\n 13. complex[dna[PFL]:5x_protein[T7p](closed)] --> complex[dna[PFL]:5x_protein[T7p]]\n 14. complex[dna[PFL]:6x_protein[T7p](closed)] --> complex[dna[PFL]:6x_protein[T7p]]\n 15. complex[dna[PFL]:7x_protein[T7p](closed)] --> complex[dna[PFL]:7x_protein[T7p]]\n 16. complex[dna[PFL]:8x_protein[T7p](closed)] --> complex[dna[PFL]:8x_protein[T7p]]\n 17. complex[dna[PFL]:9x_protein[T7p](closed)] --> complex[dna[PFL]:9x_protein[T7p]]\n 18. complex[dna[PFL]:10x_protein[T7p](closed)] --> complex[dna[PFL]:10x_protein[T7p]]\n 19. complex[dna[PFL]:protein[T7p]] --> protein[T7p]+rna[PFL]+dna[PFL]\n 20. complex[dna[PFL]:2x_protein[T7p]] --> 2protein[T7p]+2rna[PFL]+dna[PFL]\n 21. complex[dna[PFL]:3x_protein[T7p]] --> 3protein[T7p]+3rna[PFL]+dna[PFL]\n 22. complex[dna[PFL]:4x_protein[T7p]] --> 4protein[T7p]+4rna[PFL]+dna[PFL]\n 23. complex[dna[PFL]:5x_protein[T7p]] --> 5protein[T7p]+5rna[PFL]+dna[PFL]\n 24. complex[dna[PFL]:6x_protein[T7p]] --> 6protein[T7p]+6rna[PFL]+dna[PFL]\n 25. complex[dna[PFL]:7x_protein[T7p]] --> 7protein[T7p]+7rna[PFL]+dna[PFL]\n 26. complex[dna[PFL]:8x_protein[T7p]] --> 8protein[T7p]+8rna[PFL]+dna[PFL]\n 27. complex[dna[PFL]:9x_protein[T7p]] --> 9protein[T7p]+9rna[PFL]+dna[PFL]\n 28. complex[dna[PFL]:10x_protein[T7p]] --> 10protein[T7p]+10rna[PFL]+dna[PFL]\n 29. complex[dna[PFL]:2x_protein[T7p](closed)] --> protein[T7p]+rna[PFL]+complex[dna[PFL]:protein[T7p](closed)]\n 30. complex[dna[PFL]:3x_protein[T7p](closed)] --> 2protein[T7p]+2rna[PFL]+complex[dna[PFL]:protein[T7p](closed)]\n 31. complex[dna[PFL]:4x_protein[T7p](closed)] --> 3protein[T7p]+3rna[PFL]+complex[dna[PFL]:protein[T7p](closed)]\n 32. complex[dna[PFL]:5x_protein[T7p](closed)] --> 4protein[T7p]+4rna[PFL]+complex[dna[PFL]:protein[T7p](closed)]\n 33. complex[dna[PFL]:6x_protein[T7p](closed)] --> 5protein[T7p]+5rna[PFL]+complex[dna[PFL]:protein[T7p](closed)]\n 34. complex[dna[PFL]:7x_protein[T7p](closed)] --> 6protein[T7p]+6rna[PFL]+complex[dna[PFL]:protein[T7p](closed)]\n 35. complex[dna[PFL]:8x_protein[T7p](closed)] --> 7protein[T7p]+7rna[PFL]+complex[dna[PFL]:protein[T7p](closed)]\n 36. complex[dna[PFL]:9x_protein[T7p](closed)] --> 8protein[T7p]+8rna[PFL]+complex[dna[PFL]:protein[T7p](closed)]\n 37. complex[dna[PFL]:10x_protein[T7p](closed)] --> 9protein[T7p]+9rna[PFL]+complex[dna[PFL]:protein[T7p](closed)]\n 38. dna[PFL]+protein[T7p] <--> complex[dna[PFL]:protein[T7p](closed)]\n 39. rna[PFL] --> rna[PFL]+protein[GFP]\n 40. dna[T7_const] --> dna[T7_const]+rna[T7_const]\n 41. rna[T7_const] --> rna[T7_const]+protein[T7p]\n 42. protein[T7p] --> \n 43. rna[PFL] --> \n 44. protein[GFP] --> \n 45. rna[T7_const] --> \n 46. rna[PFL] --> \n 47. 
rna[T7_const] --> \n ]\n [DNAassembly: PFL\n \tPromoter: pT7\n \ttranscript = rna_PFL\n \tRBS: weak\n \tprotein = protein_GFP, DNAassembly: T7_const\n \tPromoter: weak\n \ttranscript = rna_T7_const\n \tRBS: weak\n \tprotein = protein_T7p]\n\n\n\n```python\ntry:\n import numpy as np\n import matplotlib.pyplot as plt\n\n ts = np.arange(0,5000,1)\n\n x0_dict = {repr(PFL_mx.dna):1.,repr(SC.dna):.1}\n\n R1 = CRN1.simulate_with_bioscrape_via_sbml(ts, initial_condition_dict = x0_dict, stochastic = False)\n if R1 is not None:\n fig, ax = plt.subplots(1,2,figsize=(18,8))\n ax[0].set_title('Polymerase Levels',pad=20,fontdict={'fontsize':18})\n ax[0].plot(R1[str(T7P)],linewidth=3)\n ax[0].set_xlabel('Time (sec)',labelpad=15,fontdict={'fontsize':14})\n ax[0].set_ylabel('Polymerase Count',labelpad=15,fontdict={'fontsize':14})\n\n ax[1].set_title('GFP Levels',pad=20,fontdict={'fontsize':18})\n ax[1].plot(R1[str(GFP)],linewidth=3,c='k')\n ax[1].set_xlabel('Time (sec)',labelpad=15,fontdict={'fontsize':14})\n ax[1].set_ylabel('[GFP]',labelpad=15,fontdict={'fontsize':14})\nexcept ModuleNotFoundError:\n print('please install the plotting libraries: pip install biocrnpyler[all]')\n\n\n```\n\n## Comparison of Transcript SS with RPU Data\n\nHere we will do a head to head comparison with a simple transcription model built using RPU data. I consider this to effectively be the ground truth for the SS Transcript Count, although it should be noted this comparison does not necessary validate any of the pre-SS dynamics as the simple transcription model assumes immediate saturation. RPU data is from Qi et al.(2012) and RPU standard is from supplement of Nielsen et al. (2016).\n\n\n```python\n# place promoter object into DNA assembly\nmech_default = Transcription_MM(rnap=T7P,name='MX')\npT7 = Promoter(\"pT7\", mechanisms = [mech_default])\nPFL_default = DNAassembly('PFL',dna='T7',promoter=pT7,rbs = \"weak\", protein = GFP)\n\n# make extract with T7p source and GFP and compile CRN \nTest_EX = SimpleTxTlDilutionMixture(components=[PFL_default, SC],parameter_file = \"default_parameters.txt\")\nCRN2 = Test_EX.compile_crn()\n```\n\n\n```python\ntry:\n import numpy as np\n import matplotlib.pyplot as plt\n\n ts = np.arange(0,5000,1)\n x0_dict = {PFL_default.dna:1, SC.dna:.1}\n\n R2 = CRN2.simulate_with_bioscrape_via_sbml(ts, initial_condition_dict = x0_dict, stochastic = False,)\n if R2 is not None:\n fig = plt.figure(figsize=(10,6))\n plt.title('GFP Levels',pad=20,fontdict={'fontsize':18})\n plt.plot(R1[str(GFP)],linewidth=3,c='k',label='MTX Model')\n plt.plot(R2[str(GFP)],linewidth=3,c='b', label='RPU Model')\n plt.xlabel('Time (sec)',labelpad=15,fontdict={'fontsize':14})\n plt.ylabel('[GFP]',labelpad=15,fontdict={'fontsize':14})\n plt.legend(fontsize=14)\n\n print('\\n \\n')\n print(f\"MX Model predicts {np.round(R1[str(str(GFP))].iloc[-1])} GFP and RPU Model predicts {np.round(R2[str(str(GFP))].iloc[-1])} GFP at SS \\n \\n\")\nexcept ModuleNotFoundError:\n print('please install the plotting libraries: pip install biocrnpyler[all]')\n\n\n```\n\nThe two models seem to agree pretty well, with some minor difference. This is pretty cool given the fact that MX model uses indirect parameters (promoter affinity, isomerization and tx rates) to predict this SS value. Of course, it's also important to keep in mind that alternative models typically do to not account for the constant sequestration of the polymerase machinery to sustain this SS transcript level. 
As such, they may not reflect key dynamics resulting from machinery allocation/sharing e.g. retroactivity in tx/tl.\n\n## Putative Multi-TL Model\n\nThe general model of binding, isomerization and production can be readily extended to the process of translation. However, there are several caveats for translation. \n\n- First of all, I don't have any direct data like RPU experiments to compare my model with. All I can say is that with biologically reasonable parameter sets, you get something like a few 100 proteins per mRNA (RBZ in excess) and e. coli has a protein to mRNA ratio in that range (100:1 - 1000:1) (Taniguchi et al.(2010) or bionumbers book). \n\n- Second of all, since we're no longer working with DNA complexes, the mRNA-RBZ complexes should be subject to degredation/dilution. However, it seems that at the level of the biology, initiation can have a stabilizing effect and for elongation it is uncertain (Roy et al. (2013)). In my simulations, I've effectively done as if only dilution is applied to these complexes, but there is an argument to include some form of active degredation (at a reduced rate). Depending on how this implemented, one may need to make a new degredation mechanism where the complexes effectively releases the ribosomes i.e. only mRNA degraded. This is also complicated by the fact that using subclasses like ComplexSpecies() results in inheritance of degredation/material properties. \n\n## Quick Example \nPFL is transcribed through simple transcription and translated through my multi_tl mechanism. We also have a constitutive source of our RBZ being produced to provide a constant saturating concentration of RBZ.\n\n\n```python\n#the most important parameters for multi_tl\n#max_occ is the number of ribosomes that can bind to a transcript\n#k_iso is the rate of isomerization\nparameters = {\"max_occ\":2, \"k_iso\":10}\n\n#Create a DNA assembly and Mixture\nPFL = DNAassembly('PFL', dna='T7', rbs='medium', promoter='medium',transcript='GFP',protein='GFP')\nEM = TxTlDilutionMixture(name = 'e coli',components=[PFL], parameter_file = \"default_parameters.txt\", parameters = parameters)\n\n# Instantiate mechanism and pass it the ribosome\nML = multi_tl(EM.ribosome.get_species())\n\n#Overwrite the translation mechanism\nEM.add_mechanism(ML, overwrite = True)\n\nCRN3 = EM.compile_crn()\nprint(CRN3.pretty_print(show_rates = False))\n```\n\n Species (29) = {0. dna[T7] init_conc = 0, 1. protein[RNAP(machinery)] init_conc = 0, 2. rna[GFP] init_conc = 0, 3. complex[dna[T7]:protein[RNAP]] init_conc = 0, 4. complex[protein[Ribo]:rna[GFP]] init_conc = 0, 5. complex[2x_protein[Ribo]:rna[GFP]] init_conc = 0, 6. complex[protein[Ribo]:rna[GFP](closed)] init_conc = 0, 7. complex[2x_protein[Ribo]:rna[GFP](closed)] init_conc = 0, 8. protein[Ribo(machinery)] init_conc = 0, 9. protein[GFP] init_conc = 0, 10. protein[RNAase(machinery)] init_conc = 0, 11. dna[cellular_processes] init_conc = 0, 12. rna[cellular_processes] init_conc = 0, 13. complex[dna[cellular_processes]:protein[RNAP]] init_conc = 0, 14. complex[protein[Ribo]:rna[cellular_processes]] init_conc = 0, 15. complex[2x_protein[Ribo]:rna[cellular_processes]] init_conc = 0, 16. complex[protein[Ribo]:rna[cellular_processes](closed)] init_conc = 0, 17. complex[2x_protein[Ribo]:rna[cellular_processes](closed)] init_conc = 0, 18. protein[cellular_processes] init_conc = 0, 19. complex[protein[RNAase]:rna[GFP]] init_conc = 0, 20. complex[complex[protein[Ribo]:rna[GFP]]:protein[RNAase]] init_conc = 0, 21. 
complex[complex[2x_protein[Ribo]:rna[GFP]]:protein[RNAase]] init_conc = 0, 22. complex[complex[protein[Ribo]:rna[GFP]]:protein[RNAase]] init_conc = 0, 23. complex[complex[2x_protein[Ribo]:rna[GFP]]:protein[RNAase]] init_conc = 0, 24. complex[protein[RNAase]:rna[cellular_processes]] init_conc = 0, 25. complex[complex[protein[Ribo]:rna[cellular_processes]]:protein[RNAase]] init_conc = 0, 26. complex[complex[2x_protein[Ribo]:rna[cellular_processes]]:protein[RNAase]] init_conc = 0, 27. complex[complex[protein[Ribo]:rna[cellular_processes]]:protein[RNAase]] init_conc = 0, 28. complex[complex[2x_protein[Ribo]:rna[cellular_processes]]:protein[RNAase]] init_conc = 0}\n \n Reactions (42) = [\n 0. dna[T7]+protein[RNAP(machinery)] <--> complex[dna[T7]:protein[RNAP]]\n 1. complex[dna[T7]:protein[RNAP]] --> dna[T7]+rna[GFP]+protein[RNAP(machinery)]\n 2. protein[Ribo(machinery)]+complex[protein[Ribo]:rna[GFP]] <--> complex[2x_protein[Ribo]:rna[GFP](closed)]\n 3. complex[protein[Ribo]:rna[GFP](closed)] --> complex[protein[Ribo]:rna[GFP]]\n 4. complex[2x_protein[Ribo]:rna[GFP](closed)] --> complex[2x_protein[Ribo]:rna[GFP]]\n 5. complex[protein[Ribo]:rna[GFP]] --> protein[Ribo(machinery)]+protein[GFP]+rna[GFP]\n 6. complex[2x_protein[Ribo]:rna[GFP]] --> 2protein[Ribo(machinery)]+2protein[GFP]+rna[GFP]\n 7. complex[2x_protein[Ribo]:rna[GFP](closed)] --> protein[Ribo(machinery)]+protein[GFP]+complex[protein[Ribo]:rna[GFP](closed)]\n 8. rna[GFP]+protein[Ribo(machinery)] <--> complex[protein[Ribo]:rna[GFP](closed)]\n 9. dna[cellular_processes]+protein[RNAP(machinery)] <--> complex[dna[cellular_processes]:protein[RNAP]]\n 10. complex[dna[cellular_processes]:protein[RNAP]] --> dna[cellular_processes]+rna[cellular_processes]+protein[RNAP(machinery)]\n 11. protein[Ribo(machinery)]+complex[protein[Ribo]:rna[cellular_processes]] <--> complex[2x_protein[Ribo]:rna[cellular_processes](closed)]\n 12. complex[protein[Ribo]:rna[cellular_processes](closed)] --> complex[protein[Ribo]:rna[cellular_processes]]\n 13. complex[2x_protein[Ribo]:rna[cellular_processes](closed)] --> complex[2x_protein[Ribo]:rna[cellular_processes]]\n 14. complex[protein[Ribo]:rna[cellular_processes]] --> protein[Ribo(machinery)]+protein[cellular_processes]+rna[cellular_processes]\n 15. complex[2x_protein[Ribo]:rna[cellular_processes]] --> 2protein[Ribo(machinery)]+2protein[cellular_processes]+rna[cellular_processes]\n 16. complex[2x_protein[Ribo]:rna[cellular_processes](closed)] --> protein[Ribo(machinery)]+protein[cellular_processes]+complex[protein[Ribo]:rna[cellular_processes](closed)]\n 17. rna[cellular_processes]+protein[Ribo(machinery)] <--> complex[protein[Ribo]:rna[cellular_processes](closed)]\n 18. rna[GFP]+protein[RNAase(machinery)] <--> complex[protein[RNAase]:rna[GFP]]\n 19. complex[protein[RNAase]:rna[GFP]] --> protein[RNAase(machinery)]\n 20. complex[protein[Ribo]:rna[GFP]]+protein[RNAase(machinery)] <--> complex[complex[protein[Ribo]:rna[GFP]]:protein[RNAase]]\n 21. complex[complex[protein[Ribo]:rna[GFP]]:protein[RNAase]] --> protein[Ribo(machinery)]+protein[RNAase(machinery)]\n 22. complex[2x_protein[Ribo]:rna[GFP]]+protein[RNAase(machinery)] <--> complex[complex[2x_protein[Ribo]:rna[GFP]]:protein[RNAase]]\n 23. complex[complex[2x_protein[Ribo]:rna[GFP]]:protein[RNAase]] --> 2protein[Ribo(machinery)]+protein[RNAase(machinery)]\n 24. complex[protein[Ribo]:rna[GFP](closed)]+protein[RNAase(machinery)] <--> complex[complex[protein[Ribo]:rna[GFP]]:protein[RNAase]]\n 25. 
complex[complex[protein[Ribo]:rna[GFP]]:protein[RNAase]] --> protein[Ribo(machinery)]+protein[RNAase(machinery)]\n 26. complex[2x_protein[Ribo]:rna[GFP](closed)]+protein[RNAase(machinery)] <--> complex[complex[2x_protein[Ribo]:rna[GFP]]:protein[RNAase]]\n 27. complex[complex[2x_protein[Ribo]:rna[GFP]]:protein[RNAase]] --> 2protein[Ribo(machinery)]+protein[RNAase(machinery)]\n 28. rna[cellular_processes]+protein[RNAase(machinery)] <--> complex[protein[RNAase]:rna[cellular_processes]]\n 29. complex[protein[RNAase]:rna[cellular_processes]] --> protein[RNAase(machinery)]\n 30. complex[protein[Ribo]:rna[cellular_processes]]+protein[RNAase(machinery)] <--> complex[complex[protein[Ribo]:rna[cellular_processes]]:protein[RNAase]]\n 31. complex[complex[protein[Ribo]:rna[cellular_processes]]:protein[RNAase]] --> protein[Ribo(machinery)]+protein[RNAase(machinery)]\n 32. complex[2x_protein[Ribo]:rna[cellular_processes]]+protein[RNAase(machinery)] <--> complex[complex[2x_protein[Ribo]:rna[cellular_processes]]:protein[RNAase]]\n 33. complex[complex[2x_protein[Ribo]:rna[cellular_processes]]:protein[RNAase]] --> 2protein[Ribo(machinery)]+protein[RNAase(machinery)]\n 34. complex[protein[Ribo]:rna[cellular_processes](closed)]+protein[RNAase(machinery)] <--> complex[complex[protein[Ribo]:rna[cellular_processes]]:protein[RNAase]]\n 35. complex[complex[protein[Ribo]:rna[cellular_processes]]:protein[RNAase]] --> protein[Ribo(machinery)]+protein[RNAase(machinery)]\n 36. complex[2x_protein[Ribo]:rna[cellular_processes](closed)]+protein[RNAase(machinery)] <--> complex[complex[2x_protein[Ribo]:rna[cellular_processes]]:protein[RNAase]]\n 37. complex[complex[2x_protein[Ribo]:rna[cellular_processes]]:protein[RNAase]] --> 2protein[Ribo(machinery)]+protein[RNAase(machinery)]\n 38. rna[GFP] --> \n 39. protein[GFP] --> \n 40. rna[cellular_processes] --> \n 41. protein[cellular_processes] --> \n ]\n\n\n\n```python\ntry:\n\n import numpy as np\n import matplotlib.pyplot as plt\n\n ts = np.arange(0,10000,1)\n x0_dict = {PFL.dna:1, EM.ribosome.get_species():100, EM.rnap.get_species():20}\n\n R3 = CRN3.simulate_with_bioscrape_via_sbml(ts, initial_condition_dict = x0_dict, stochastic = False,)\n\n if R3 is not None:\n fig, ax = plt.subplots(1,2,figsize=(18,8))\n ax[0].set_title('GFP Protein Levels',pad=20,fontdict={'fontsize':18})\n ax[0].plot(R3[str(PFL.protein)],linewidth=3)\n #ax[0].plot(R3[str(EM.ribosome.get_species())],linewidth=3)\n ax[0].set_xlabel('Time (sec)',labelpad=15,fontdict={'fontsize':14})\n ax[0].set_ylabel('GFP Protein Count',labelpad=15,fontdict={'fontsize':14})\n\n ax[1].set_title('GFP Transcript Levels',pad=20,fontdict={'fontsize':18})\n ax[1].plot(R3[str(PFL.transcript)],linewidth=3,c='k')\n ax[1].set_xlabel('Time (sec)',labelpad=15,fontdict={'fontsize':14})\n ax[1].set_ylabel('GFP Transcript Count',labelpad=15,fontdict={'fontsize':14})\nexcept ModuleNotFoundError:\n print('please install the plotting libraries: pip install biocrnpyler[all]')\n\n\n```\n\n# Future Work\n\n- Want to do more rigorous validation of multi-tx mechanism. As part of that, I first plan on comparing RPU data of existing consitutive promoters and non-native polymerase expression systems e.g. T5 with what the multi-tx model would predict. 
\n- Is there data out there that would help validate the predicted RNAp occupancy of genes from the multi-tx model (same story for multi-tl model)?\n- Eventually want to develop a multi-tx mechanism that can be used for TF-mediated transcription.\n- Start looking towards validating multi-tl model, currently plan on seeing what data and parameters I could scrape from existing resource e.g. BCD RBS binding rates from biocrnpyler. \n- Do some model comparison using bioscrape inference, where I generate parameters with RPU data model and fit parameters (vary known and unknown parameters) from the MTX model. Definitely very interesting for deriving isomerization rates as that seems hard to come by in literature and seems to be a key part of the model in determining system behaviour (Pre-SS and SS dynamics).\n\n# Mentioned Papers and Parameter Resources\n\nT7 parameters:\n- Promoter Binding and Unbinding: Jia, Y., Kumar, A., & Patel, S. S. (1996). Equilibrium and Stopped-flow Kinetic Studies of Interaction between T7 RNA Polymerase and Its Promoters Measured by Protein and 2-Aminopurine Fluorescence Changes. Journal of Biological Chemistry , 271(48), 30451\u201330458. https://doi.org/10.1074/jbc.271.48.30451 \n- Isomerization Rate: Skinner, G. M., Baumann, C. G., Quinn, D. M., Molloy, J. E., & Hoggett, J. G. (2004). Promoter Binding, Initiation, and Elongation By Bacteriophage T7 RNA Polymerase: A SINGLE-MOLECULE VIEW OF THE TRANSCRIPTION CYCLE . Journal of Biological Chemistry , 279(5), 3239\u20133244. https://doi.org/10.1074/jbc.M310471200 \n- Translation Rate (T7p is VERY fast): Kochetkov, S. N., Rusakova, E. E., & Tunitskaya, V. L. (1998). Recent studies of T7 RNA polymerase mechanism. FEBS Letters, 440(3), 264\u2013267. https://doi.org/https://doi.org/10.1016/S0014-5793(98)01484-7\n \nTranslation Parameters:\n- RBS Binding and Unbinding: Chandra, F., & Del Vecchio, D. (2016). The Effects of Ribosome Autocatalysis and Negative Feedback in Resource Competition. bioRxiv. https://doi.org/10.1101/042127\n- Isomerization Rate: Draper, D. E. (1993). Mechanisms of Translational Initiation and Repression in Prokaryotes BT - The Translational Apparatus: Structure, Function, Regulation, Evolution. In K. H. Nierhaus, F. Franceschi, A. R. Subramanian, V. A. Erdmann, & B. Wittmann-Liebold (Eds.) (pp. 197\u2013207). Boston, MA: Springer US. https://doi.org/10.1007/978-1-4615-2407-6_19\n\nMisc:\n- mRNA Degredation: Roy, B., & Jacobson, A. (2013). The intimate relationships of mRNA decay and translation. Trends in Genetics : TIG, 29(12), 691\u2013699. https://doi.org/10.1016/j.tig.2013.09.002\n- RPU Data: Nielsen, A. A. K., Der, B. S., Shin, J., Vaidyanathan, P., Paralanov, V., Strychalski, E. A., \u2026 Voigt, C. A. (2016). Genetic circuit design automation. Science, 352(6281), aac7341. https://doi.org/10.1126/science.aac7341 and Qi, L., Haurwitz, R. E., Shao, W., Doudna, J. A., & Arkin, A. P. (2012). RNA processing enables predictable programming of gene expression. Nature Biotechnology, 30(10), 1002\u20131006. https://doi.org/10.1038/nbt.2355\n- Protein:mRNA ratios: Taniguchi, Y., Choi, P. J., Li, G.-W., Chen, H., Babu, M., Hearn, J., \u2026 Xie, X. S. (2010). Quantifying E. coli Proteome and Transcriptome with Single-Molecule Sensitivity in Single Cells. Science, 329(5991), 533 LP \u2013 538. 
https://doi.org/10.1126/science.1188308\n \n", "meta": {"hexsha": "59d0542b9dd51d3945e231f6e99ac3a125e77548", "size": 154929, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/Specialized Tutorials/3, MultiOccupancyTxTl_Demo.ipynb", "max_stars_repo_name": "WilliamIX/BioCRNPyler", "max_stars_repo_head_hexsha": "737e87fc99510071bb4b1b6141b2043243c25673", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 22, "max_stars_repo_stars_event_min_datetime": "2019-08-28T09:01:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-16T11:28:05.000Z", "max_issues_repo_path": "examples/Specialized Component Tutorials/MultiOccupancyTxTl_Demo.ipynb", "max_issues_repo_name": "BuildACell/BioCRNPyler", "max_issues_repo_head_hexsha": "a0a395b358f66a72a74052d69a40ca2d80a5ba24", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 211, "max_issues_repo_issues_event_min_datetime": "2019-08-14T16:59:03.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-08T02:21:09.000Z", "max_forks_repo_path": "examples/Specialized Component Tutorials/MultiOccupancyTxTl_Demo.ipynb", "max_forks_repo_name": "zjuradoq/BioCRNPyler", "max_forks_repo_head_hexsha": "30a7324b77e7bf88664e5a70920fa21bb4354acf", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2019-08-15T22:35:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-08T14:40:05.000Z", "avg_line_length": 259.0785953177, "max_line_length": 45736, "alphanum_fraction": 0.8924862356, "converted": true, "num_tokens": 8820, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5195213219520929, "lm_q2_score": 0.49609382947091957, "lm_q1q2_score": 0.2577313220990083}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nToggle cell visibility here.\n\n\n\n```python\n%matplotlib notebook\nimport pylab\nimport matplotlib.pyplot as plt\nimport math\nimport sympy as sym\nimport numpy as np\nimport ipywidgets as widgets\nimport control as control\nimport math as math\nfrom ipywidgets import interact\nfrom IPython.display import Latex, display, Markdown\n```\n\n## Linearization of a function\n\n### Introduction\n\n> Linearization is defined as a process of finding a linear approximation of a function at a certain point. The linear approximation of a function is obtained by the Taylor expansion around the point of interest in which only the first two terms are kept. Linearization is an effective method for approximating the output of a function $y=f(x)$ at any $x=x_0+\\Delta x$ based on the value and the slope of the function at $x=x_0+\\Delta x$, given that $f(x)$ is differentiable on $[x_0,x_0+\\Delta x]$ (or $[x_0+\\Delta x,x_0]$) and that $x_0$ is close to $x_0+\\Delta x$. In short, linearization approximates the output of a function near $x=x_0$. 
(source: [Wikipedia](https://en.wikipedia.org/wiki/Linearization))\n\nIn this example, linearization is defined as: \n\n\\begin{equation}\n f(x)\\approx f(x_0)+f^{\\prime}(x_0) \\cdot (x-x_0),\n\\end{equation}\n\nwhere $f^{\\prime}=\\frac{f(x_0+h)-f(x_0)}{h}$ ($h$ is set to $0.01$ in order to calculate the derivative).\n\nUnit step function is defined as: \n\n\\begin{equation}\n u(x) =\n \\begin{cases}\n 0; & \\text{$x<0$}\\\\\n 1; & \\text{$x\\geq0$}\n \\end{cases},\n\\end{equation}\n\nand unit ramp function: \n\n\\begin{equation}\n r(x) =\n \\begin{cases}\n 0; & \\text{$x<0$}\\\\\n x; & \\text{$x\\geq0$} \n \\end{cases}.\n\\end{equation} \n \n---\n\n### How to use this notebook?\nMove the slider to change the value of $x_0$, i.e. the $x$ value at which you want to linearize the function.\n\n\n```python\n# sinus, step, ramp, x^2, sqrt(x)\nfunctionSelect = widgets.ToggleButtons(\n options=[('sine function', 0), ('unit step function', 1), ('unit ramp function', 2), ('parabolic function', 3), ('square root function', 4)],\n description='Select: ')\n\nfig = plt.figure(num='Linearization of a function')\nfig.set_size_inches((9.8, 3))\nfig.set_tight_layout(True)\nf1 = fig.add_subplot(1, 1, 1)\n\nf1.grid(which='both', axis='both', color='lightgray')\n\nf1.set_xlabel('$x$')\nf1.set_ylabel('$f(x)$')\n\nf1.axhline(0,Color='black',linewidth=0.5)\nf1.axvline(0,Color='black',linewidth=0.5)\n\nfunc_plot, = f1.plot([],[])\ntang_plot, = f1.plot([],[])\npoint_plot, = f1.plot([],[])\n\nf1.set_xlim((-5,5))\nf1.set_ylim((-6,6))\n\ndef create_draw_functions(x0,index):\n x=np.linspace(-5,5,1001)\n h=0.001 # equal to \\Delta x\n \n global func_plot, tang_plot, point_plot\n \n if index==0:\n y=np.sin(x)\n fprime=(np.sin(x0+h)-np.sin(x0))/h\n tang=np.sin(x0)+fprime*(x-x0)\n fx0=np.sin(x0) \n elif index==1:\n y=np.zeros(1001)\n y[510:1001]=1\n elif index==2:\n y=np.zeros(1001)\n y[500:1001]=np.linspace(0,5,501)\n elif index==3:\n y=x*x\n fprime=((x0+h)*(x0+h)-(x0*x0))/h\n tang=x0*x0+fprime*(x-x0)\n fx0=x0*x0 \n elif index==4:\n x1=np.linspace(0,5,500)\n y=np.sqrt(x1)\n if x0>=0:\n fprime=(np.sqrt(x0+h)-np.sqrt(x0))/h\n tang=np.sqrt(x0)+fprime*(x-x0)\n fx0=np.sqrt(x0)\n \n f1.lines.remove(func_plot)\n f1.lines.remove(tang_plot)\n f1.lines.remove(point_plot)\n \n if index == 0:\n func_plot, = f1.plot(x,y,label='$f(x)=sin(x)$',color='C0')\n tang_plot, = f1.plot(x,tang,'--r',label='tangent')\n point_plot, = f1.plot(x0,fx0,'om',label='$x_0$')\n for txt in f1.texts: \n txt.set_visible(False)\n elif index == 1: # in case of the unit step function\n if x0==0:\n func_plot, = f1.step(x,y,label='$f(x)=u(x)$',color='C0')\n tang_plot, = f1.plot([],[])\n point_plot, = f1.plot([],[]) \n f1.text(0.1,1.3,'Linearization at $x_0=0$ is not possible!',fontsize=14)\n elif x0<0:\n tang=np.zeros(1001)\n func_plot, = f1.step(x,y,label='$f(x)=u(x)$',color='C0')\n tang_plot, = f1.plot(x,tang,'--r',label='tangent')\n point_plot, = f1.plot(x0,[0],'om',label='$x_0$')\n for txt in f1.texts: \n txt.set_visible(False)\n elif x0>0:\n tang=np.ones(1001)\n func_plot, = f1.step(x,y,label='$f(x)=u(x)$',color='C0')\n tang_plot, = f1.plot(x,tang,'--r',label='tangent')\n point_plot, = f1.plot(x0,[1],'om',label='$x_0$')\n for txt in f1.texts: \n txt.set_visible(False)\n elif index==2: # in case of the ramp\n if x0<0:\n tang=np.zeros(1001)\n func_plot, = f1.plot(x,y,label='$f(x)=R(x)$',color='C0')\n tang_plot, = f1.plot(x,np.zeros(1001),'--r',label='tangent')\n point_plot, = f1.plot(x0,[0],'om',label='$x_0$')\n for txt in f1.texts: \n 
txt.set_visible(False)\n elif x0>=0:\n tang=x\n func_plot, = f1.plot(x,y,label='$f(x)=R(x)$',color='C0')\n tang_plot, = f1.plot(x,tang,'--r',label='tangent')\n point_plot, = f1.plot(x0,x0,'om',label='$x_0$')\n for txt in f1.texts: \n txt.set_visible(False)\n elif index==3:\n func_plot, = f1.plot(x,y,label='$f(x)=x^2$',color='C0')\n tang_plot, = f1.plot(x,tang,'--r',label='tangent')\n point_plot, = f1.plot(x0,fx0,'om',label='$x_0$')\n for txt in f1.texts: \n txt.set_visible(False) \n elif index==4: #in case of the square root function\n if x0<0:\n for txt in f1.texts: \n txt.set_visible(False)\n func_plot, = f1.plot(x1,y,label='$f(x)=\\sqrt{x}$',color='C0')\n tang_plot, = f1.plot([],[])\n point_plot, = f1.plot([],[])\n f1.text(-4.9,1.3,'Square root function is not defined for $x<0$!',fontsize=14)\n else:\n func_plot, = f1.plot(x1,y,label='$f(x)=\\sqrt{x}$',color='C0')\n tang_plot, = f1.plot(x,tang,'--r',label='tangent')\n point_plot, = f1.plot(x0,fx0,'om',label='$x_0$')\n for txt in f1.texts: \n txt.set_visible(False)\n \n if (index==1) and x0==0 or (index==4 and x0<0):\n display(Markdown('See comment on the figure.'))\n else:\n k=round(((tang[-1]-tang[0])/(x[-1]-x[0])),3)\n n=round(((tang[-1]-k*x[-1])),3)\n display(Markdown('Equation of the tangent: $y=%.3fx+%.3f$.'%(k,n)))\n \n f1.legend()\n \n f1.relim()\n f1.relim()\n f1.autoscale_view()\n f1.autoscale_view() \n\n \nx0_slider = widgets.FloatSlider(value=1, min=-5, max=5, step=0.2, description='$x_0$',\n continuous_update=True, layout=widgets.Layout(width='auto', flex='5 5 auto'),readout_format='.1f')\n\ninput_data = widgets.interactive_output(create_draw_functions, {'x0':x0_slider, 'index':functionSelect})\n\ndef update_sliders(index):\n global x0_slider\n \n x0val = [0.5, 0.5, 1, 1, 5, 10]\n x0slider.value = x0val[index]\n \ninput_data2 = widgets.interactive_output(update_sliders, {'index':functionSelect})\n\ndisplay(functionSelect)\n\ndisplay(x0_slider,input_data)\n\n# display(Markdown(\"The system can be represented as $f(x)=5$ for small excursions of x about x0.\"))\n```\n\n\n \n\n\n\n\n\n\n\n ToggleButtons(description='Select: ', options=(('sine function', 0), ('unit step function', 1), ('unit ramp fu\u2026\n\n\n\n FloatSlider(value=1.0, description='$x_0$', layout=Layout(flex='5 5 auto', width='auto'), max=5.0, min=-5.0, r\u2026\n\n\n\n Output()\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "9d66d9df6715719d59be53f1086beaf6aa06541d", "size": 95032, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_en/examples/02/.ipynb_checkpoints/TD-05-Linearization-Functions-checkpoint.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT/ENG/examples/02/TD-05-Linearization-Functions.ipynb", "max_issues_repo_name": "tuxsaurus/ICCT", "max_issues_repo_head_hexsha": "30d1aea4fb056c9736c9b4c5a0f50fff14fa6382", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT/ENG/examples/02/TD-05-Linearization-Functions.ipynb", "max_forks_repo_name": "tuxsaurus/ICCT", "max_forks_repo_head_hexsha": "30d1aea4fb056c9736c9b4c5a0f50fff14fa6382", "max_forks_repo_licenses": ["BSD-3-Clause"], 
"max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 82.1365600691, "max_line_length": 47317, "alphanum_fraction": 0.7221988383, "converted": true, "num_tokens": 2507, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5312093733737563, "lm_q2_score": 0.4843800842769844, "lm_q1q2_score": 0.25730724104350416}} {"text": "```python\n%run notebook_setup\n```\n\n# Getting started with The Joker\n\n*The Joker* (pronounced Yo-kurr) is a highly specialized Monte Carlo (MC) sampler that is designed to generate converged posterior samplings for Keplerian orbital parameters, even when your data are sparse, non-uniform, or very noisy. This is *not* a general MC sampler, and this is *not* a Markov Chain MC sampler like `emcee`, or `pymc3`: This is fundamentally a [rejection sampler](https://en.wikipedia.org/wiki/Rejection_sampling) with some tricks that help improve performance for the two-body problem.\n\n*The Joker* shines over more conventional MCMC sampling methods when your radial velocity data is imprecise, non-uniform, sparse, or has a short baseline: In these cases, your likelihood function will have many, approximately equal-height modes that are often spaced widely, all properties that make conventional MCMC bork when applied to this problem. In this tutorial, we will not go through the math behind the sampler (most of that is covered [in the original paper](https://arxiv.org/abs/1610.07602)). However, some terminology is important to know for the tutorial below or for reading the documentation. Most relevant, the parameters in the two-body problem (Kepler orbital parameters) split into two sets: nonlinear and linear parameters. The nonlinear parameters are always the same in each run of The Joker: period $P$, eccentricity $e$, argument of pericenter $\\omega$, and a phase $M_0$. The default linear parameters are the velocity semi-ampltude $K$, and a systemtic velocity $v_0$. However, there are ways to add additional linear parameters into the model (as described in other tutorials).\n\nFor this tutorial, we will set up an inference problem that is common to binary star or exoplanet studies, show how to generate posterior orbit samples from the data, and then demonstrate how to visualize the samples. Other tutorials demonstrate more advanced or specialized functionality included in *The Joker*, like:\n- [fully customizing the parameter prior distributions](2-Customize-prior.ipynb), \n- [allowing for a long-term velocity trend in the data](3-Polynomial-velocity-trend.ipynb), \n- [continuing sampling with standard MCMC methods](4-Continue-sampling-mcmc.ipynb) when *The Joker* returns one or few samples,\n- [simultaneously inferring constant offsets between data sources](5-Calibration-offsets.ipynb) (i.e. when using data from multiple instruments that may have calibration offsets)\n\nBut let's start here with the most basic functionality!\n\nFirst, imports we will need later:\n\n\n```python\nimport astropy.table as at\nfrom astropy.time import Time\nimport astropy.units as u\nfrom astropy.visualization.units import quantity_support\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\n\nimport thejoker as tj\n```\n\n\n```python\n# set up a random generator to ensure reproducibility\nrnd = np.random.default_rng(seed=42)\n```\n\n## Loading radial velocity data\n\nTo start, we need some radial velocity data to play with. 
Our ultimate goal is to construct or read in a `thejoker.RVData` instance, which is the main data container object used in *The Joker*. For this tutorial, we will use a simulated RV curve that was generated using a separate script and saved to a CSV file, and we will create an `RVData` instance manually. \n\nBecause we previously saved this data as an Astropy [ECSV](http://docs.astropy.org/en/latest/api/astropy.io.ascii.Ecsv.html#astropy.io.ascii.Ecsv) file, the units are provided with the column data and read in automatically using the [astropy.table read/write interface](http://docs.astropy.org/en/latest/table/index.html):\n\n\n```python\ndata_tbl = at.QTable.read('data.ecsv')\ndata_tbl[:2]\n```\n\nThe full simulated data table has many rows (256), so let's randomly grab 4 rows to work with:\n\n\n```python\nsub_tbl = data_tbl[rnd.choice(len(data_tbl), size=4, replace=False)]\n```\n\n\n```python\nsub_tbl\n```\n\nIt looks like the time column is given in Barycentric Julian Date (BJD), so in order to create an `RVData` instance, we will need to create an `astropy.time.Time` object from this column:\n\n\n```python\nt = Time(sub_tbl['bjd'], format='jd', scale='tcb') \ndata = tj.RVData(t=t, rv=sub_tbl['rv'], rv_err=sub_tbl['rv_err'])\n```\n\nWe now have an `RVData` object, so we could continue on with the tutorial. But as a quick aside, there is an alternate, more automatic (automagical?) way to create an `RVData` instance from tabular data: `RVData.guess_from_table`. This classmethod attempts to guess the time format and radial velocity column names from the columns in the data table. It is very much an experimental feature, so if you think it can be improved, please open an issue in the [GitHub repo for The Joker](https://github.com/adrn/thejoker/issues). In any case, here it successfully works:\n\n\n```python\ndata = tj.RVData.guess_from_table(sub_tbl)\n```\n\nOne of the handy features of `RVData` is the `.plot()` method, which generates a quick view of the data:\n\n\n```python\n_ = data.plot()\n```\n\nThe data are clearly variable! But what orbits are consistent with these data? I suspect many, given how sparse they are! Now that we have the data in hand, we need to set up the sampler by specifying prior distributions over the parameters in *The Joker*.\n\n## Specifying the prior distributions for The Joker parameters\n\nThe prior *pdf* (probability distribution function) for *The Joker* is controlled and managed through the `thejoker.JokerPrior` class. The prior for *The Joker* is fairly customizable and the initializer for `JokerPrior` is therefore pretty flexible; usually too flexible for typical use cases. We will therefore start by using an alternate initializer defined on the class, `JokerPrior.default()`, that provides a simpler interface for creating a `JokerPrior` instance that uses the default prior distributions assumed in *The Joker*. 
In the default prior:\n\n$$\n\\begin{align}\n&p(P) \\propto \\frac{1}{P} \\quad ; \\quad P \\in (P_{\\rm min}, P_{\\rm max})\\\\\n&p(e) = B(a_e, b_e)\\\\\n&p(\\omega) = \\mathcal{U}(0, 2\\pi)\\\\\n&p(M_0) = \\mathcal{U}(0, 2\\pi)\\\\\n&p(K) = \\mathcal{N}(K \\,|\\, \\mu_K, \\sigma_K)\\\\\n&\\sigma_K = \\sigma_{K, 0} \\, \\left(\\frac{P}{P_0}\\right)^{-1/3} \\, \\left(1 - e^2\\right)^{-1/2}\\\\\n&p(v_0) = \\mathcal{N}(v_0 \\,|\\, \\mu_{v_0}, \\sigma_{v_0})\\\\\n\\end{align}\n$$\n\nwhere $B(.)$ is the beta distribution, $\\mathcal{U}$ is the uniform distribution, and $\\mathcal{N}$ is the normal distribution.\n\nMost parameters in the distributions above are set to reasonable values, but there are a few required parameters for the default case: the range of allowed period values (``P_min`` and ``P_max``), the scale of the ``K`` prior variance ``sigma_K0``, and the standard deviation of the $v_0$ prior ``sigma_v``. Let's set these to some arbitrary numbers. Here, I chose the value for ``sigma_K0`` to be typical of a binary star system; if using The Joker for exoplanet science, you will want to adjust this correspondingly.\n\n\n```python\nprior = tj.JokerPrior.default(\n P_min=2*u.day, P_max=1e3*u.day,\n sigma_K0=30*u.km/u.s,\n sigma_v=100*u.km/u.s)\n```\n\nOnce we have the prior instance, we need to generate some prior samples that we will then use *The Joker* to rejection sample down to a set of posterior samples. To generate prior samples, use the `JokerSamples.sample()` method. Here, we'll generate a lare number of samples to use:\n\n\n```python\nprior_samples = prior.sample(size=250_000,\n random_state=rnd)\nprior_samples\n```\n\nThis object behaves like a Python dictionary in that the parameter values can be accessed via their key names:\n\n\n```python\nprior_samples['P']\n```\n\n\n```python\nprior_samples['e']\n```\n\nThey can also be written to disk or re-loaded using this same class. For example, to save these prior samples to the current directory to the file \"prior_samples.hdf5\":\n\n\n```python\nprior_samples.write(\"prior_samples.hdf5\", overwrite=True)\n```\n\nWe could then load the samples from this file using:\n\n\n```python\ntj.JokerSamples.read(\"prior_samples.hdf5\")\n```\n\n## Running The Joker\n\nNow that we have a set of prior samples, we can create an instance of The Joker and use the rejection sampler:\n\n\n```python\njoker = tj.TheJoker(prior, random_state=rnd)\njoker_samples = joker.rejection_sample(data, prior_samples, \n max_posterior_samples=256)\n```\n\nThis works by either passing in an instance of `JokerSamples` containing the prior samples, or by passing in a filename that contains `JokerSamples` written to disk. So, for example, this is equivalent:\n\n\n```python\njoker_samples = joker.rejection_sample(data, \"prior_samples.hdf5\", \n max_posterior_samples=256)\n```\n\nThe ``max_posterior_samples`` argument above specifies the maximum number of posterior samples to return. 
It is often helpful to set a threshold here in cases when your data are very uninformative to avoid generating huge numbers of samples (which can slow down the sampler considerably).\n\nIn either case above, the ``joker_samples`` object returned from ``rejection_sample()`` is also an instance of the ``JokerSamples`` class, but now contains posterior samples for all nonlinear and linear parameters in the model:\n\n\n```python\njoker_samples\n```\n\n## Plotting The Joker orbit samples over the input data\n\nWith posterior samples in Keplerian orbital parameters in hand for our data set, we can now plot the posterior samples over the input data to get a sense for how constraining the data are. *The Joker* comes with a convenience plotting function, ``plot_rv_curves``, for doing just this:\n\n\n```python\n_ = tj.plot_rv_curves(joker_samples, data=data)\n```\n\nIt has various options to allow customizing the style of the plot:\n\n\n```python\nfig, ax = plt.subplots(1, 1, figsize=(8, 4))\n_ = tj.plot_rv_curves(joker_samples, data=data, \n plot_kwargs=dict(color='tab:blue'),\n data_plot_kwargs=dict(color='tab:red'),\n relative_to_t_ref=True, ax=ax)\nax.set_xlabel(f'BMJD$ - {data.t.tcb.mjd.min():.3f}$')\n```\n\nAnother way to visualize the samples is to plot 2D projections of the sample values, for example, to plot period against eccentricity:\n\n\n```python\nfig, ax = plt.subplots(1, 1, figsize=(8, 5))\n\nwith quantity_support():\n ax.scatter(joker_samples['P'], \n joker_samples['e'],\n s=20, lw=0, alpha=0.5)\n \nax.set_xscale('log')\nax.set_xlim(prior.pars['P'].distribution.a,\n prior.pars['P'].distribution.b)\nax.set_ylim(0, 1)\n\nax.set_xlabel('$P$ [day]')\nax.set_ylabel('$e$')\n```\n\nBut is the true period value included in those distinct period modes returned by *The Joker*? When generating the simulated data, I also saved the true orbital parameters used to generate the data, so we can load and over-plot it:\n\n\n```python\nimport pickle\nwith open('true-orbit.pkl', 'rb') as f:\n truth = pickle.load(f)\n```\n\n\n```python\nfig, ax = plt.subplots(1, 1, figsize=(8, 5))\n\nwith quantity_support():\n ax.scatter(joker_samples['P'], \n joker_samples['e'],\n s=20, lw=0, alpha=0.5)\n \n ax.axvline(truth['P'], zorder=-1, color='tab:green')\n ax.axhline(truth['e'], zorder=-1, color='tab:green')\n ax.text(truth['P'], 0.95, 'truth', fontsize=20, \n va='top', ha='left', color='tab:green')\n \nax.set_xscale('log')\nax.set_xlim(prior.pars['P'].distribution.a,\n prior.pars['P'].distribution.b)\nax.set_ylim(0, 1)\n\nax.set_xlabel('$P$ [day]')\nax.set_ylabel('$e$')\n```\n\nIt indeed looks like there are posterior samples from *The Joker* in the vicinity of the true value. Deciding what to do next depends on the problem you would like to solve. For example, if you just want to get a sense of how multi-modal the posterior *pdf* over orbital parameters is, you might be satisfied with the number of samples we generated and the plots we made in this tutorial. However, if you want to fully propagate the uncertainty in these orbital parameters through some other inference (for example, to transform the samples into constraints on companion mass or other properties), you may want or need to generate a lot more samples. To start, you could change ``max_posterior_samples`` to be a much larger number in the ``rejection_sample()`` step above. But I have found that in many cases, you need to run with many, many more (e.g., 500 million) prior samples. 
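One practical recipe (a sketch that only reuses calls shown earlier in this tutorial; the file name and sample size below are placeholders to adjust for your own problem) is to cache a much larger prior library on disk once, then rerun the rejection sampler against that file:\n\n\n```python\n# generate and cache a larger prior library -- increase `size` as needed,\n# possibly to hundreds of millions of samples for very uninformative data\nlarge_prior_samples = prior.sample(size=10_000_000, random_state=rnd)\nlarge_prior_samples.write(\"prior_samples_large.hdf5\", overwrite=True)\n\n# rejection sample against the cached file, allowing more posterior samples back\njoker_samples = joker.rejection_sample(data, \"prior_samples_large.hdf5\", \n                                       max_posterior_samples=4096)\n```\n\n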
To read more, check out the next tutorial!\n", "meta": {"hexsha": "1d67a2dfeecc9714ce770c43e3d3e92b50657c44", "size": 18337, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/examples/1-Getting-started.ipynb", "max_stars_repo_name": "adrn/thejoker", "max_stars_repo_head_hexsha": "e77182bdb368e20127a17cc76ba1083ab77746ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 22, "max_stars_repo_stars_event_min_datetime": "2016-09-05T00:01:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-14T19:28:23.000Z", "max_issues_repo_path": "docs/examples/1-Getting-started.ipynb", "max_issues_repo_name": "adrn/thejoker", "max_issues_repo_head_hexsha": "e77182bdb368e20127a17cc76ba1083ab77746ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 111, "max_issues_repo_issues_event_min_datetime": "2016-09-04T18:21:00.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-13T06:38:27.000Z", "max_forks_repo_path": "docs/examples/1-Getting-started.ipynb", "max_forks_repo_name": "adrn/thejoker", "max_forks_repo_head_hexsha": "e77182bdb368e20127a17cc76ba1083ab77746ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2016-09-04T17:12:34.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-18T13:12:09.000Z", "avg_line_length": 37.4224489796, "max_line_length": 1117, "alphanum_fraction": 0.6228390685, "converted": true, "num_tokens": 3065, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5467381372136563, "lm_q2_score": 0.46879062662624377, "lm_q1q2_score": 0.2563057139448552}} {"text": "```python\nimport sys\nsys.path.append('..') # Add src to path\nimport os\nos.environ[\"HDF5_USE_FILE_LOCKING\"]='FALSE'\nimport datetime\nimport h5py\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom matplotlib.animation import FuncAnimation\nfrom IPython.display import Image\n\nimport tensorflow as tf\n\nfrom src.display import get_cmap\nfrom src.utils import make_log_dir\n\n# comment these out if you don't have cartopy\nimport cartopy.feature as cfeature\nfrom src.display.cartopy import make_ccrs,make_animation\n\nfrom make_dataset import NowcastGenerator,get_nowcast_train_generator,get_nowcast_test_generator\n\nfrom unet_benchmark import create_model\nfrom unet_benchmark import nowcast_mae, nowcast_mse\nfrom IPython.core.debugger import set_trace\n```\n\n# Radar Nowcast Challenge\n\nThis notebook will describe the SEVIR radar nowcasting challenge. Nowcasts (https://en.wikipedia.org/wiki/Nowcasting_(meteorology)) are short-term forecast of weather variables typically measured by weather radar or satellite. Nowcasts are different from traditional weather forecasts (like those you see on the news, or on your phones weather app) in that they are based (mostly) on statistical extrapolations of recent data, rather than full physics-based numerical weather prediction (NWP) models. The advantages of using statistical extrapolation techniques is that they generally run much faster than NWP models (seconds compared to hours), and because of this, nowcasts are able to leverage the most recently observed data and use it for forecasting. Because of this extremely low latency, nowcast out-perform traditional weather models in terms of accuracy and precision for short look-aheads (typically 1-2 hours at most). \n\nBelow is an example that a nowcast generated using the baseline model created in this notebook. The first half of the movie is observed radar VIL. 
The second half of the movie is the forecast generated by a model trained in this notebook. The challenge is to create the best possible nowcast algorithm given the data in SEVIR.\n\n\n\n### Contents:\n* [Python environment](#env)\n* [Problem Statement](#problem)\n* [Training & testing dataset](#datasets)\n * [Visualizing training samples](#vistraining)\n* [Model Training](#training)\n* [Visualize Result](#visualize)\n* [Forecast Scoring](#scoring)\n* [Applying to full-sized images](#fillimgs)\n\n## Python Environment \n\nRunning this notebook requires a python environment that includes the modules described here. You can setup the environment with anaconda via\n\n```\nconda create --name sevir_challenges\nconda activate sevir_challenges\nconda install --file requirements.txt\n```\n\n\n## Problem Statement \n\nThe goal of this challenge is to predict 12 future frames of radar images in 5 minute steps (`vil` in this case) given the previous 13 images of `vil` (also in 5 minute steps) in addition to other modalities in SEVIR. \n\n\n\n## Training and testing datasets \n\n### MIT Supercloud\n\nIf you have access to MIT supercloud, the data for this challenge can be accessed directly:\n\n### AWS Download\n\nSample training and testing datasets are provided on the SEVIR S3 bucket. These can be obtained by running the following cell. \n\n**This downloads and decompresses files that will take up roughly 40 GB so make sure you have sufficient disk space**\n\n\n```python\ndata_path=\"/scratch/li.baol/SEVIR\"\n```\n\n\n```python\n# Target locations of sample training & testing data\nDEST_TRAIN_FILE= os.path.join(data_path,'data/processed/nowcast_training_000.h5')\nDEST_TRAIN_META=os.path.join(data_path, 'data/processed/nowcast_training_000_META.csv')\nDEST_TEST_FILE= os.path.join(data_path, 'data/processed/nowcast_testing_000.h5')\nDEST_TEST_META= os.path.join(data_path, 'data/processed/nowcast_testing_000_META.csv')\n```\n\n\n```python\n# THIS DOWNLOADS APPROXIMATELY 40 GB DATASETS (AFTER DECOMPRESSION)\nimport boto3\nfrom botocore.handlers import disable_signing\nimport tarfile\nresource = boto3.resource('s3')\nresource.meta.client.meta.events.register('choose-signer.s3.*', disable_signing)\nbucket=resource.Bucket('sevir')\n\nprint('Dowloading sample training data')\nif not os.path.exists(DEST_TRAIN_FILE):\n bucket.download_file('data/processed/nowcast_training_000.h5.tar.gz',DEST_TRAIN_FILE+'.tar.gz')\n bucket.download_file('data/processed/nowcast_training_000_META.csv',DEST_TRAIN_META)\n with tarfile.open(DEST_TRAIN_FILE+'.tar.gz') as tfile:\n tfile.extract('data/processed/nowcast_training_000.h5',data_path)\nelse:\n print('Train file %s already exists' % DEST_TRAIN_FILE)\nprint('Dowloading sample testing data')\nif not os.path.exists(DEST_TEST_FILE):\n bucket.download_file('data/processed/nowcast_testing_000.h5.tar.gz',DEST_TEST_FILE+'.tar.gz')\n bucket.download_file('data/processed/nowcast_testing_000_META.csv',DEST_TEST_META)\n with tarfile.open(DEST_TEST_FILE+'.tar.gz') as tfile:\n tfile.extract('data/processed/nowcast_testing_000.h5',data_path)\nelse:\n print('Test file %s already exists' % DEST_TEST_FILE)\n```\n\n Dowloading sample training data\n Train file /scratch/li.baol/SEVIR/data/processed/nowcast_training_000.h5 already exists\n Dowloading sample testing data\n Test file /scratch/li.baol/SEVIR/data/processed/nowcast_testing_000.h5 already exists\n\n\nSee the contents of the training data file using `h5ls`:\n\n\n```python\n#!h5ls 
/scratch/li.baol/SEVIR/data/processed/nowcast_training_000.h5\n```\n\nThis file contains four modalities you can use as input to a nowcast model (keys start with `IN_` -- `vil`,`ir_ir107`,`ir069`, or `lght`). The challenge is to use these inputs to predict the output vil `OUT_vil`. \n\nIf you have downloaded the full SEVIR dataset, you can extract even larger datasets of this format, or even modify the dataset for your puprose. We assume SEVIR is located in the directory `$SEVIR_ROOT`. To make your own dataset, run the script `make_dataset.py`:\n\n```\npython make_dataset.py --input_types vil --output_types vil --sevir_data $SEVIR_ROOT/data/ --sevir_catalog $SEVIR_ROOT/CATALOG.csv --output_location ../data/processed/\n```\n\nThe rest of this notebook assumes you are using the sample training and test files located in the directory `../data/processed/` relative to this notebook. Adjust as needed.\n\n### Exploring the training dataset \n\nFirst load some samples from the training data and assocaited metadata for georeferencing and plotting. If the full training set is too large to fit into memory, adjust `N_TRAIN` and `N_TEST` below. This will use `TRAIN_VAL_FRAC * N_TRAIN` training samples for a validation set.\n\nNOTE: For this baseline we will only use `IN_vil` frames as inputs. However the other modalities may be useful for nowcasting.\n\n\n\n```python\n# Control how many samples are read. Set to -1 to read all 5000 samples.\nN_TRAIN=-1\nTRAIN_VAL_FRAC=0.8\n#set_trace()\nN_TEST=-1\n```\n\n\n```python\n# Loading data takes a few minutes\nwith h5py.File(DEST_TRAIN_FILE,'r') as hf:\n Nr = N_TRAIN if N_TRAIN>=0 else hf['IN_vil'].shape[0]\n X_train = hf['IN_vil'][:Nr]\n Y_train = hf['OUT_vil'][:Nr]\n training_meta = pd.read_csv(DEST_TRAIN_META).iloc[:Nr]\n X_train,X_val=np.split(X_train,[int(TRAIN_VAL_FRAC*Nr)])\n Y_train,Y_val=np.split(Y_train,[int(TRAIN_VAL_FRAC*Nr)])\n training_meta,val_meta=np.split(training_meta,[int(TRAIN_VAL_FRAC*Nr)])\n#set_trace() \nwith h5py.File(DEST_TEST_FILE,'r') as hf:\n Nr = N_TEST if N_TEST>=0 else hf['IN_vil'].shape[0]\n X_test = hf['IN_vil'][:Nr]\n Y_test = hf['OUT_vil'][:Nr]\n testing_meta=pd.read_csv(DEST_TEST_META).iloc[:Nr]\n```\n\nGet a sample batch of image sequences. Note the shape`[N,L,L,T]` where `N` is the batch size, `L` is the size of the image patch, and `T` is the number of time frames in the video (1 time step = 5 minutes). The VIL images in SEVIR are scaled to the range [0-255], and can be decoded back to orginal units following the [SEVIR Tutorial](https://nbviewer.jupyter.org/github/MIT-AI-Accelerator/eie-sevir/blob/master/examples/SEVIR_Tutorial.ipynb).\n\n\n```python\nbatch_size,batch_num=8,0\nbs,be=batch_size*batch_num,batch_size*(batch_num+1)\nX,Y,meta = X_train[bs:be],Y_train[bs:be],training_meta.iloc[bs:be]\nprint('Input X:',X.shape)\nprint('Output Y:',Y.shape)\nprint('META:',meta.shape)\n```\n\n Input X: (8, 384, 384, 13)\n Output Y: (8, 384, 384, 12)\n META: (8, 14)\n\n\nWe'll start by visualizing some of the data. This includes some colormaps for visualization. 
The following plots a few frames with the custom colormap provided for the `vil` type:\n\n\n```python\nbatch_index=1\ncmap,norm,vmin,vmax=get_cmap('vil')\nfig,axs=plt.subplots(1,4,figsize=(12,4))\nfor i in range(4):\n axs[i].imshow(X[batch_index,:,:,4*i],origin='lower',cmap=cmap,norm=norm,vmin=vmin,vmax=vmax) # every 4th frame\n axs[i].set_axis_off()\n```\n\nThe variable `meta` contains additional information about the event pictured: \n\n\n```python\nmeta.iloc[batch_index]\n```\n\n\n\n\n Unnamed: 0 19989\n id S819072\n time_utc 2019-04-09 17:10:00\n episode_id 136472.0\n event_id 819072.0\n event_type Thunderstorm Wind\n minute_offsets -60:-55:-50:-45:-40:-35:-30:-25:-20:-15:-10:-5...\n llcrnrlat 41.434316\n llcrnrlon -77.367946\n urcrnrlat 43.830965\n urcrnrlon -71.510706\n proj +proj=laea +lat_0=38 +lon_0=-98 +units=m +a=63...\n height_m 384000.0\n width_m 384000.0\n Name: 1, dtype: object\n\n\n\nSince this event has an associated `event_id`, more information can be obtained from NCEI storm event database associated to the year of the event (2019 in this case). Look for the files named `StormEvents_details` for the particular years of interest here: \n\n[https://www1.ncdc.noaa.gov/pub/data/swdi/stormevents/csvfiles](https://www1.ncdc.noaa.gov/pub/data/swdi/stormevents/csvfiles)\n\nIf you download relevant files, additional information for the event can be obtained by searching these files for the `episode_id` or `event_id` found above.\n\nThe metadata can be used to georeference the patches shown above. If cartopy is installed in your environment, map features can be added to these plots:\n\n\n```python\n# REQUIRES CARTOPY (and an internet connection if this is your first time using cartopy)\nproj,img_extent = make_ccrs(meta.iloc[batch_index])\nxll,xur=img_extent[0],img_extent[1]\nyll,yur=img_extent[2],img_extent[3]\nfig,axs=plt.subplots(1,4,figsize=(12,4), subplot_kw={'projection':proj})\nfor i in range(4):\n axs[i].set_xlim((xll,xur))\n axs[i].set_ylim((yll,yur))\n axs[i].imshow(X[batch_index,:,:,4*i],interpolation='nearest',\n origin='lower',transform=proj,extent=[xll,xur,yll,yur],\n cmap=cmap,norm=norm,vmin=vmin,vmax=vmax) # every 4th frame\n axs[i].add_feature(cfeature.STATES)\n\n```\n\nThe function `make_animation` creates a short movie loop:\n\n\n```python\n%%capture off \nanim = make_animation(X[batch_index],meta.iloc[batch_index],title='Inputs')\nanim.save('imgs/input_animation.gif', writer='imagemagick', fps=6)\nanim = make_animation(Y[batch_index],meta.iloc[batch_index],title='Target')\nanim.save('imgs/output_animation.gif', writer='imagemagick', fps=6)\n```\n\n\n```python\nImage('imgs/input_animation.gif')\n```\n\n\n\n\n \n\n\n\n\n```python\nImage('imgs/output_animation.gif')\n```\n\n\n\n\n \n\n\n\n## Create baseline model\n\nIn this section we will initialize a baseline model and train it.\n\n### Initialize U-Net model\n\nThe baseline model used in this notebook is the U-Net model based on the model described [here](https://proceedings.neurips.cc//paper/2020/hash/fa78a16157fed00d7a80515818432169-Abstract.html).\n\nThis model has the following structure, which takes a sequence of VIL images, and outputs another sequence that covers the next hour of time:\n\n\nCode for the model is defined in `unet_benchmark.py`. \n\nThe following cell defines training hyperparamters we'll use to start. 
These can be adjusted as needed.\n\n\n```python\n# Add more as needed\nparams={\n 'start_neurons' :16, # Controls size of hidden layers in CNN, higher = more complexity \n 'activation' :'relu', # Activation used throughout the U-Net, see https://www.tensorflow.org/api_docs/python/tf/keras/activations\n 'loss' :'mae', # Either 'mae' or 'mse', or others as https://www.tensorflow.org/api_docs/python/tf/keras/losses\n 'loss_weights' :0.021, # Scale for loss. Recommend squaring this if using MSE\n 'opt' :tf.keras.optimizers.Adam, # optimizer, see https://www.tensorflow.org/api_docs/python/tf/keras/optimizers\n 'learning_rate' :0.001, # Learning rate for optimizer\n 'num_epochs' :10, # Number of epochs to train for\n 'batch_size' :8 # Size of batches during training\n}\n```\n\n\n```python\nunet = create_model(start_neurons=params['start_neurons'],activation=params['activation']) \nunet.summary()\n```\n\nFor a loss function, we'll start with mean absolute error scaled by an estimate of the standard deviation computed over the training data ($\\sigma^{-1} \\approx 0.021$)\n\\begin{equation}\nL(Y,\\hat{Y})=\\frac{1}{\\sigma}\\sum_{i} ||Y-\\hat{Y}||_1\n\\end{equation}\n\n\n```python\nopt=params['opt'](learning_rate=params['learning_rate'])\nunet.compile(optimizer=opt, loss=params['loss'],loss_weights=[params['loss_weights']])\n```\n\n### Train Model \n\nWe'll set up two callbacks. The first saves checkpoints whenever `val_loss` is minimized. The other sets up tensorboard.\n\n**NOTE: Model checkpoints will be saved under the `experiments` directory in a newly created time-stamped directory. A link named `experiments/latest` is also created that points to the most recent experiment.**\n\n\n\n```python\n# Training 10 epochs takes around 10-20 minutes on GPU\nnum_epochs=params['num_epochs']\nbatch_size=params['batch_size']\nexprmt_dir=make_log_dir('experiments')\n\ncallbacks=[\n tf.keras.callbacks.ModelCheckpoint(exprmt_dir+'/nowcast-unet-{epoch:04d}-{val_loss:04f}.hdf5', \n monitor='val_loss',save_best_only=True),\n tf.keras.callbacks.TensorBoard(log_dir=exprmt_dir+'/tboardlogs')\n]\n\nhistory = unet.fit(x=X_train, y=Y_train,\n batch_size=batch_size,\n epochs=num_epochs,\n callbacks=callbacks,\n validation_data=(X_val, Y_val))\n```\n\nPlot model training performance for the first 10 epochs\n\n\n```python\nplt.plot(history.history['loss'],label='Train loss')\nplt.plot(history.history['val_loss'],label='Val loss')\nplt.legend()\n```\n\n\n```python\n# save model for later use\nunet.save('unet_10epochs.h5')\n```\n\n\n```python\n# Reload previously saved model\n# Make sure to use `custom_objects` for custom loss functions\n\n# NOTE: The best performing model (on val set) across all epochs was saved under experiments/latest, \n# so you can also load that instead\n\nunet = tf.keras.models.load_model('unet_10epochs.h5',custom_objects={})\n```\n\n## Visualize model on test samples \n\nStart by visualizing the resulting model on test samples\n\n\n\n```python\nbatch_size,batch_num=8,4\nbs,be=batch_size*batch_num,batch_size*(batch_num+1)\nx_test,y_test,meta = X_test[bs:be],Y_test[bs:be],testing_meta.iloc[bs:be]\n```\n\n\n```python\n%%capture off \nbidx=5\nanim = make_animation(x_test[bidx],meta.iloc[bidx],title='Inputs')\nanim.save('imgs/input_test_animation.gif', writer='imagemagick', fps=6)\nanim = make_animation(y_test[bidx],meta.iloc[bidx],title='Target')\nanim.save('imgs/output_test_animation.gif', writer='imagemagick', 
fps=6)\n```\n\n\n```python\nImage('imgs/input_test_animation.gif')\n```\n\n\n```python\nImage('imgs/output_test_animation.gif')\n```\n\nNow compute the prediction\n\n\n```python\n%%capture off \ny_pred = unet.predict(x_test)\nanim = make_animation(y_pred[bidx],meta.iloc[bidx],title='Prediction')\nanim.save('imgs/pred_test_animation.gif', writer='imagemagick', fps=6)\n```\n\n\n```python\nImage('imgs/pred_test_animation.gif')\n```\n\nWhat do you think? You can adjust `batch_num` and `bidx` to view other cases from the test set.\n\nThe good:\n* Seems to be moving the weather in the correct direction, and retaining some of the structure\n\nThe bad:\n* A lot of the detail gets washed out, and for longer leads, the more intense sections of the storm wash away.\n\n\n## Forecast Scoring \n\nLet's apply some more quantitative verifications to establish baseline performance. \n\nTo define metrics used, let $T_i(t)$ and $F_i(t)$ represent target and forecast images, respectively, for test sample $i$ at lead times $t=5,10,...,60$ minutes. The first metric to consider is mean-absolute error (MAE): \n\n\\begin{equation}\nMAE(t) = \\frac{1}{L^2 N_{test}}\\sum_{i=1}^{N_{test}} ||T_i(t)-F_i(t)||_1 \n\\end{equation}\n\nMAE is nice because it is simple, however it often fails to capture forecast skill in predicting fine detail in storm, such as severe storm cores. Therefore, we'll also apply more standard forecast verification metrics (see [https://www.cawcr.gov.au/projects/verification/](https://www.cawcr.gov.au/projects/verification/)).\n\nGiven a threshold $\\tau$, we binarize both the target and forecast images, and label each pixel as a \"hit\" ($H$) if both target and forecast are $\\geq \\tau$, a \"miss\" ($M$) if $T\\geq\\tau$ and $F<\\tau$, a \"false alarm\" ($FA$) if $T<\\tau$ and $F\\geq \\tau$, and a correct rejection if both $T$ and $F$ are less than $\\tau$. \n\nThese pixel scores are rolled up into Critical Success Index (CSI) (which is similar to Intersection over Union, which is another common name for this metric) and is computed as \n\n\\begin{equation}\nCSI(t;\\tau) = \\frac{\\#H}{\\#H + \\#M + \\#FA}\n\\end{equation}\n\nA CSI of 1 represents a perfect pixel-to-pixel forecast. For scoring VIL nowcasts, we'll average this over three thresholds that are used in practice [16,74,133] representing low, medium, and high strom intensity. We'll refer to this average as \"mCSI\", computed for each lead time $t$.\n\n\\begin{equation}\nmCSI(t) = \\frac{1}{3}(CSI(t;16) + CSI(t;74) + CSI(t;133))\n\\end{equation}\n\nA \"final\" test scores can be computed by averaging $MAE$ and $mCSI$ over all lead times in 0-60 minutes.\n\nThis repo contains some code to compute CSI for the test set.\n\n\n\n```python\n# Run unet over the test set\ny_pred = unet.predict(X_test,batch_size=4)\n```\n\n\n```python\n# Compute metrics over the test set separately for each lead (takes a few minutes)\nfrom tqdm import tqdm\nfrom src.metrics import critical_success_index\nCSI = lambda yt,yp,tau: critical_success_index(yt.astype(np.float32),\n yp.astype(np.float32),np.array([tau],dtype=np.float32)).numpy()\n\nTHRESHOLDS=np.array([16,74,133],dtype=np.float32)\nnT=Y_test.shape[3]\nmae_t=[]\ncsi_t = []\nfor t in tqdm(range(nT)):\n mae_t.append(np.mean(tf.keras.losses.MAE(Y_test[:,:,:,t:t+1],y_pred[:,:,:,t:t+1])))\n csi_t.append(np.mean([CSI(Y_test[:,:,:,t:t+1], y_pred[:,:,:,t:t+1],tau) for tau in THRESHOLDS]))\n \n```\n\nWe'll also include another standard baseline model in nowcasting, the *persistence model*. 
This model represents the \"do nothing\" forecast where the last image in the input sequence is just repeated for every future frame.\n\n\n```python\n# Also compute scores of persistence model\nnT=Y_test.shape[3]\npers_mae_t=[]\npers_csi_t = []\nfor t in tqdm(range(nT)):\n pers_mae_t.append(np.mean(tf.keras.losses.MAE(Y_test[:,:,:,t:t+1],X_test[:,:,:,-1:].astype(np.float32))))\n pers_csi_t.append(np.mean([CSI(Y_test[:,:,:,t:t+1], X_test[:,:,:,-1:],tau) for tau in THRESHOLDS]))\n```\n\n\n```python\nt=np.arange(5,65,5)\nfig,ax=plt.subplots(1,2,figsize=(15,5))\nax[0].plot(t,mae_t,label='U-Net Baseline')\nax[0].plot(t,pers_mae_t,label='Persistence')\nax[0].set_xlabel('Lead time (minutes)')\nax[0].set_ylabel('Nowcast MAE (lower is better)')\nax[0].set_title('MAE')\nax[0].legend()\n\nax[1].plot(t,csi_t,label='U-Net Baseline')\nax[1].plot(t,pers_csi_t,label='Persistence')\nax[1].set_xlabel('Lead time (minutes)')\nax[1].set_ylabel('Nowcast mCSI (higher is better)')\nax[1].set_title('mCSI')\nax[1].legend()\nplt.show()\n\nprint('Average MAE of U-Net on test set: ',np.mean(mae_t))\nprint('Average mCSI of U-Net on test set: ',np.mean(csi_t))\n\n```\n\nThe baseline model beats persistence (phew!) but it can still be better! \n\n##### Can you create a model that improves upon the above test scores?\n\n## (Optional) Applying to US-sized images \n\nThe model trained above is fully convolutional and can be apllied to full-sized radar mosaics. Sample US radar mosaics of VIL can be obtained from the MRMS system [https://mrms.ncep.noaa.gov/data/2D/VIL/](https://mrms.ncep.noaa.gov/data/2D/VIL/). The past 24 hours of files are avaialble online for download. The model *may* also work for other radar fields, e.g. reflectivity, so long as they are first normalized to the range [0-255] first.\n\nMRMS data is not a part of this repo so you'll need to download and prepared this data yourself. Before feeding this data to the CNN, some preparation is required first. First, the MRMS files are available in `grib2` format and need to be converted to a numpy array. Second, the units need to be converted to the [0-255] scale. Code for doing these steps can be found in [this gist](https://gist.github.com/markveillette/048d7e6b6be35d26bac37378bd66ae01), which provides a script for doing this preprocessing. \n\n\n\n```python\n# MRMS updates every 2 minutes. We will approximate 5 minute updates by switching between 4 and 6 minute \n# steps. 
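\n#\n# A sketch (not from the original repo) of the grib2 -> .npy conversion mentioned above,\n# assuming the third-party `pygrib` package is installed; rescaling to the 0-255 range the\n# model expects is a separate step -- see the gist linked above for the exact mapping used.\n#\n# import pygrib\n# grbs = pygrib.open('../data/MRMS/MRMS_VIL_00.50_20201125-160039.grib2')\n# vil_field = grbs[1].values   # single-product MRMS files carry one GRIB message\n# np.save('../data/MRMS/MRMS_VIL_00.50_20201125-160039.grib2.npy', vil_field)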
\n\n# UPDATE WITH YOUR OWN LOCATION AND FILES\nmrms_root='../data/MRMS'\nfiles=[\n 'MRMS_VIL_00.50_20201125-160039.grib2.npy','MRMS_VIL_00.50_20201125-160436.grib2.npy',\n 'MRMS_VIL_00.50_20201125-161039.grib2.npy','MRMS_VIL_00.50_20201125-161442.grib2.npy',\n 'MRMS_VIL_00.50_20201125-162039.grib2.npy','MRMS_VIL_00.50_20201125-162434.grib2.npy',\n 'MRMS_VIL_00.50_20201125-163037.grib2.npy','MRMS_VIL_00.50_20201125-163436.grib2.npy',\n 'MRMS_VIL_00.50_20201125-164041.grib2.npy','MRMS_VIL_00.50_20201125-164438.grib2.npy',\n 'MRMS_VIL_00.50_20201125-165039.grib2.npy','MRMS_VIL_00.50_20201125-165436.grib2.npy',\n 'MRMS_VIL_00.50_20201125-170042.grib2.npy']\nvil=[]\nfor f in files:\n vil.append(np.load(mrms_root+'/'+f)[np.newaxis,:,:,np.newaxis])\nvil=np.concatenate(vil,axis=3)\nvil = np.pad(vil, ((0,0),(2,2),(4,4),(0,0))) # needed to make sure downsample layers can be done cleanly\nm=vil[0:1,2:-2,4:-4,0]==255\nvil[vil==255]=0\n```\n\n\n```python\n# Load weights of your own saved model, but make sure input_shape is None so it works on full sized images\nunet = create_model(start_neurons=params['start_neurons'],\n activation=params['activation'],input_shape=(None,None,13))\nunet.load_weights('experiments/unet_baseline/nowcast-unet-0039-0.133934.hdf5')\n```\n\n\n```python\n# If this overwhelms your GPU, you might need to compute forecast in smaller subsections\nforecast = unet.predict(vil)\n```\n\n\n```python\n%%capture off \nimport cartopy.crs as crs\nfrom cartopy.crs import Globe\nimport cartopy.feature as cfeature\nfrom matplotlib import animation, rc\ndef make_mrms_animation(frames,img_type='vil',fig=None,\n interval=100,title=None,**kwargs):\n \"\"\"\n Makes animation of MRMS data\n \"\"\"\n ellps='WGS84'\n globe=Globe(ellipse=ellps)\n proj=crs.PlateCarree(globe=globe)\n img_extent=(-14471586.0, -6679194.0322265625, 2226397.75, 6122594.1611328125) # MRMS extent\n xll,xur=img_extent[0],img_extent[1]\n yll,yur=img_extent[2],img_extent[3]\n if fig is None:\n fig=plt.gcf()\n ax=fig.add_subplot(1,1,1,projection=proj)\n ax.set_xlim((xll,xur))\n ax.set_ylim((yll,yur))\n cmap,norm,vmin,vmax=get_cmap(img_type)\n ax.add_feature(cfeature.STATES)\n ax.add_feature(cfeature.LAKES, alpha=0.5)\n ax.add_feature(cfeature.RIVERS, alpha=0.5)\n ax.add_feature(cfeature.COASTLINE)\n ax.add_feature(cfeature.BORDERS )\n \n im=ax.imshow(frames[:,:,0], interpolation='nearest',\n origin='upper', extent=[xll,xur,yll,yur],\n transform=proj,cmap=cmap,norm=norm,vmin=vmin,vmax=vmax);\n\n def init():\n return (im,)\n def animate(i):\n im.set_data(frames[:,:,i]);\n ftype='Analysis' if i<13 else 'Forecast'\n ax.set_title(ftype)\n return (im,)\n return animation.FuncAnimation(fig, animate, init_func=init,\n frames=range(frames.shape[2]), \n interval=interval, blit=True);\n\nframes = np.concatenate((vil[:,2:-2,4:-4,:],forecast[:,2:-2,4:-4,:]),axis=3)\nfor i in range(frames.shape[3]):\n frames[:,:,:,i][m]=np.nan\nplt.figure(figsize=(15,7))\nanim = make_mrms_animation(frames[0])\nanim.save('imgs/mrms_animation.gif', writer='imagemagick', fps=6)\n```\n\n\n```python\nImage('imgs/mrms_animation.gif')\n```\n", "meta": {"hexsha": "22db41bdde853a15eeedc0d7f791219db37d1317", "size": 481928, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "radar_nowcasting/RadarNowcastBenchmarks.ipynb", "max_stars_repo_name": "boringlee24/sevir_challenges", "max_stars_repo_head_hexsha": "be5e42795246f791932ada2c7a92e18df0b5d8b7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": 
null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "radar_nowcasting/RadarNowcastBenchmarks.ipynb", "max_issues_repo_name": "boringlee24/sevir_challenges", "max_issues_repo_head_hexsha": "be5e42795246f791932ada2c7a92e18df0b5d8b7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "radar_nowcasting/RadarNowcastBenchmarks.ipynb", "max_forks_repo_name": "boringlee24/sevir_challenges", "max_forks_repo_head_hexsha": "be5e42795246f791932ada2c7a92e18df0b5d8b7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 448.7225325885, "max_line_length": 191404, "alphanum_fraction": 0.946749307, "converted": true, "num_tokens": 6849, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5156199157230157, "lm_q2_score": 0.4960938294709195, "lm_q1q2_score": 0.25579585854250364}} {"text": "# Pawnee Fire Analysis\n\n> * \ud83d\udd2c Data Science\n* \ud83d\udda5\ufe0f Requires RasterAnalytics Portal Configuration\n* \ud83d\udda5\ufe0f Requires GeoEnrichment Portal Configuration\n* \ud83d\udda5\ufe0f Requires GeoAnalytics Portal Configuration\n\nThe Pawnee Fire was a large wildfire that burned in Lake County, California. The fire started on June 23, 2018 and burned a total of 15,185 acres (61 km2) before it was fully contained on July 8, 2018.\n\n\n\n


\n\n\n## Remote Sensing using Sentinel-2 data\n\n\n```python\nfrom datetime import datetime\nimport warnings\n\nfrom IPython.display import HTML\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport arcgis\nfrom arcgis import GIS\nfrom arcgis.raster.functions import *\nfrom arcgis.geoanalytics.use_proximity import create_buffers\nfrom arcgis.geoenrichment import enrich\nfrom arcgis.features import SpatialDataFrame\nfrom arcgis.raster.analytics import create_image_collection\nfrom arcgis.raster.analytics import list_datastore_content\n\ngis= GIS(\"home\")\n```\n\n### Data Preparation\n\nIn this analysis, we will be using Sentinel-2 data.\n\nSentinel-2 is an Earth observation mission developed by ESA as part of the Copernicus Programme to perform terrestrial observations in support of services such as forest monitoring, land cover change detection, and natural disaster management.\n\nIn this analysis data downloaded from https://earthexplorer.usgs.gov/ is used for creating hosted image service. \nWe add the data to the datastore and we then run the create_image_collection function which creates a collection with the input_rasters specified and publishes the collection as an image service. \n\n\nWe use list_datasore_content() in order to see the contents in the rasterstore.\n\n\n```python\nlist_datastore_content(\"/rasterStores/LocalRasterStore\")\n```\n\n\n\n\n ['/rasterStores/LocalRasterStore/Hosted_GeneratedRasterProduct_AANS52.crf/',\n '/rasterStores/LocalRasterStore/L1C_T10SEJ_A015697_20180624T190108.zip',\n '/rasterStores/LocalRasterStore/me.txt',\n '/rasterStores/LocalRasterStore/m_7013bfcd7273e0a4a779fce061167d5c/',\n '/rasterStores/LocalRasterStore/m_b5f745ad6ef601d5e6adf104c8b4ef70/',\n '/rasterStores/LocalRasterStore/m_d0221b176ab5ed2398828b8079d62ef8/',\n '/rasterStores/LocalRasterStore/pawnee_fire_multispectral/',\n '/rasterStores/LocalRasterStore/pool_chips_1/',\n '/rasterStores/LocalRasterStore/pool_chips_2/',\n '/rasterStores/LocalRasterStore/S2A_MSIL1C_20180624T184921_N0206_R113_T10SEJ_20180624T234856.SAFE/',\n '/rasterStores/LocalRasterStore/S2B_MSIL1C_20180622T185919_N0206_R013_T10SEJ_20180622T205930.SAFE/',\n '/rasterStores/LocalRasterStore/sentinel_data/']\n\n\n\n\n```python\nsentinel_collection = create_image_collection(image_collection=\"pawnee_fire_multispectral\",\n input_rasters=[\"/rasterStores/LocalRasterStore/S2A_MSIL1C_20180624T184921_N0206_R113_T10SEJ_20180624T234856.SAFE\",\n \"/rasterStores/LocalRasterStore/S2B_MSIL1C_20180622T185919_N0206_R013_T10SEJ_20180622T205930.SAFE\"],\n raster_type_name=\"Sentinel-2\", \n raster_type_params={\"productType\":\"All\",\"processingTemplate\":\"Multispectral\"},\n context={\"image_collection_properties\":{\"imageCollectionType\":\"Satellite\"},\"byref\":True}, gis = gis)\n```\n\n\n```python\nsentinel = sentinel_collection.layers[0]\n```\n\n### Select before and after rasters\n\n\n```python\naoi = {'spatialReference': {'latestWkid': 3857, 'wkid': 102100},\n 'xmax': -13643017.100720055,\n 'xmin': -13652113.10708598,\n 'ymax': 4739654.477447927,\n 'ymin': 4731284.622850712}\narcgis.env.analysis_extent = aoi\nsentinel.extent = aoi\n```\n\n\n```python\nselected = sentinel.filter_by(where=\"acquisitiondate BETWEEN timestamp '2018-06-15 00:00:00' AND timestamp '2018-06-24 19:59:59'\",\n geometry=arcgis.geometry.filters.intersects(aoi))\n\ndf = selected.query(out_fields=\"*\", order_by_fields=\"OBJECTID ASC\").df\ndf['AcquisitionDate'] = pd.to_datetime(df['AcquisitionDate'], 
unit='ms')\ndf.tail(40)\n```\n\n\n\n\n
[Output table: 2 rows × 26 columns. Row 0: AcquisitionDate 2018-06-24 18:49:21, SensorName Sentinel-2A; Row 1: AcquisitionDate 2018-06-22 18:59:19, SensorName Sentinel-2B.]
\n\n\n\n\n```python\nprefire = sentinel.filter_by('OBJECTID=2') \nmidfire = sentinel.filter_by('OBJECTID=1')\n```\n\n## Visual Assessment\n\n\n```python\ntruecolor = extract_band(midfire, [4,3,2])\ntruecolor\n```\n\n\n\n\n \n\n \n\n\n\n### Visualize Burn Scars\n\nWe extract the [13, 12, 4] bands to improve visibility of fire and burn scars. This band combination pushes further into the SWIR range of the electromagnetic spectrum, where there is less susceptibility to smoke and haze generated by a burning fire.\n\n\n```python\nextract_band(midfire, [13,12,4])\n\n```\n\n\n\n\n \n\n \n\n\n\nFor comparison, the same area before the fire started shows no burn scar.\n\n\n```python\nextract_band(prefire, [13,12,4])\n\n```\n\n\n\n\n \n\n \n\n\n\n## Quantitative Assessment\n\nThe **Normalized Burn Ratio (NBR)** can be used to delineate the burnt areas and identify the severity of the fire. \n\nThe formula for the NBR is very similar to that of NDVI except that it uses near-infrared band 9 and the short-wave infrared band 13:\n\\begin{align}\n{\\mathbf{NBR}} = \\frac{\\mathbf{B9} - \\mathbf{B13}}{\\mathbf{B9} + \\mathbf{B13} + \\mathbf{WS}} \\\\ \n\\end{align}\n\nThe NBR equation was designed to be calcualted from reflectance, but it can be calculated from radiance and digital_number_(dn) with changes to the burn severity table below. The WS parameter is used for water suppression, and is typically 2000. \n\nFor a given area, NBR is calculated from an image just prior to the burn and a second NBR is calculated for an image immediately following the burn. Burn extent and severity is judged by taking the difference between these two index layers:\n\n\\begin{align}\n{\\Delta \\mathbf{NBR}} = \\mathbf{NBR_{prefire}} - \\mathbf{NBR_{postfire}} \\\\ \n\\end{align}\n\nThe meaning of the \u2206NBR values can vary by scene, and interpretation in specific instances should always be based on some field assessment. However, the following table from the USGS FireMon program can be useful as a first approximation for interpreting the NBR difference:\n\n\n| \\begin{align}{\\Delta \\mathbf{NBR}} \\end{align} | Burn Severity |\n| ------------- |:-------------:|\n| 0.1 to 0.27 | Low severity burn |\n| 0.27 to 0.44 | Medium severity burn |\n| 0.44 to 0.66 | Moderate severity burn |\n| > 0.66 | High severity burn |\n\n[Source: http://wiki.landscapetoolbox.org/doku.php/remote_sensing_methods:normalized_burn_ratio]\n\n### Use Band Arithmetic and Map Algebra \n\nIn order to perform raster analysis on raw pixel value, we filter out the scenes from the sentinel image service again and create new layers\n\n\n```python\nnbr_prefire = band_arithmetic(prefire, \"(b9 - b13) / (b9 + b13 + 2000)\")\nnbr_postfire = band_arithmetic(midfire, \"(b9 - b13) / (b9 + b13 + 2000)\")\n\nnbr_diff = nbr_prefire - nbr_postfire\n```\n\n\n```python\nburnt_areas = colormap(remap(nbr_diff, \n input_ranges=[0.1, 0.27, # low severity \n 0.27, 0.44, # medium severity\n 0.44, 0.66, # moderate severity\n 0.66, 1.00], # high severity burn\n output_values=[1, 2, 3, 4], \n no_data_ranges=[-1, 0.1], astype='u8'), \n colormap=[[4, 0xFF, 0xC3, 0], [3, 0xFA, 0x8E, 0], [2, 0xF2, 0x55, 0], [1, 0xE6, 0, 0]])\n```\n\n\n```python\n# Visualize burnt areas\nburnt_areas\n```\n\n\n\n\n \n\n \n\n\n\nWith this, we have computed the NBR on scenes from before and after the burn, and computed the NBR difference to identify places that have been affected by the fire. 
We've also normalized the values to match a burn severity index, and applied a color map that brings out the extent of fire damage.\n\n\n### Area calculation\n\n\n```python\npixx = (aoi['xmax'] - aoi['xmin']) / 1200.0\npixy = (aoi['ymax'] - aoi['ymin']) / 450.0\n\nres = burnt_areas.compute_histograms(aoi, pixel_size={'x':pixx, 'y':pixy})\n\nnumpix = 0\nhistogram = res['histograms'][0]['counts'][1:]\nfor i in histogram:\n numpix += i\n```\n\n### Report burnt area\n\n\n```python\nsqmarea = numpix * pixx * pixy # in sq. m\nacres = 0.00024711 * sqmarea # in acres\n\nHTML('

Fire has consumed {:,} acres till {}

.' \\\n .format(int(acres), df.iloc[-1]['AcquisitionDate'].date()))\n```\n\n\n\n\n

Fire has consumed 3,569 acres till 2018-06-22

.\n\n\n\n\n```python\n%matplotlib inline\n\nplt.title('Distribution by severity', y=-0.1)\nplt.pie(histogram, labels=['Low Severity', 'Medium Severity', 'Moderate Severity', 'High Severity']);\nplt.axis('equal');\n```\n\n### Visualize burnt areas\n\n\n```python\nfiremap = gis.map()\nfiremap.extent = aoi\nfiremap.add_layer([truecolor, burnt_areas])\n\nfiremap\n```\n\n\n\n\n\n\n\n\n### Persist the burnt areas layer in the GIS\n\nIf required, using the save(), we can persist the output in the gis as a new layer. This uses distributed raster analysis to perform the analysis at the source resolution.\n\n\n```python\nburnt_areas = burnt_areas.save()\n```\n\n## Raster to Feature layer conversion\n\nUse Raster Analytics and Geoanalytics to convert the burnt area raster to a feature layer. The `to_features()` method converts the raster to a feature layer and `create_buffers()` fills holes in the features and dissolves them to output one feature that covers the extent of the Pawnee Fire.\n\n\n```python\nburnt_areas = burnt_areas.layers[0]\nfire_item = burnt_areas.to_features(output_name='Pawnee_Fire_Feature_Layer', gis=gis)\nfire_layer = fire_item.layers[0]\n\n```\n\n\n```python\nfire = create_buffers(fire_layer, 100, 'Meters', dissolve_option='All', multipart=True, output_name='PawneeFireArea_Buffer')\nfire = fire.layers[0]\n```\n\n## Visualize Feature Layer\n\n\n```python\nvectormap = gis.map()\nvectormap.basemap = 'dark-gray'\nvectormap.extent = aoi\n\nvectormap.add_layer(fire)\nvectormap\n```\n\n\n\n\n\n\n\n\n## Impact Assessment\n\n### Assess Human Impact\n\n\n```python\nfrom arcgis import geometry \n \nsdf = SpatialDataFrame.from_layer(fire)\n\nfire_geometry = sdf.iloc[0].SHAPE\nsa_filter = geometry.filters.intersects(geometry=fire_geometry, sr=4326)\n\ndef age_pyramid(df):\n %matplotlib inline\n warnings.simplefilter(action='ignore', category=FutureWarning)\n pd.options.mode.chained_assignment = None \n plt.style.use('ggplot')\n\n df = df[[x for x in impacted_people.columns if 'MALE' in x or 'FEM' in x]]\n sf = pd.DataFrame(df.sum())\n age = sf.index.str.extract('(\\d+)').astype('int64')\n f = sf[sf.index.str.startswith('FEM')]\n m = sf[sf.index.str.startswith('MALE')]\n sf = sf.reset_index(drop = True)\n f = f.reset_index(drop = True)\n m = m.reset_index(drop = True)\n sf['age'] = age\n f[\"age\"] = age\n m[\"age\"] = age\n f = f.sort_values(by='age', ascending=False).set_index('age')\n m = m.sort_values(by='age', ascending=False).set_index('age')\n \n\n popdf = pd.concat([f, m], axis=1)\n popdf.columns = ['F', 'M']\n popdf['agelabel'] = popdf.index.map(str) + ' - ' + (popdf.index+4).map(str)\n popdf.M = -popdf.M\n \n sns.barplot(x=\"F\", y=\"agelabel\", color=\"#CC6699\", label=\"Female\", data=popdf, edgecolor='none')\n sns.barplot(x=\"M\", y=\"agelabel\", color=\"#008AB8\", label=\"Male\", data=popdf, edgecolor='none')\n plt.ylabel('Age group')\n plt.xlabel('Number of people');\n return plt;\n```\n\n### Age Pyramid of Affected Population\n\n\n```python\nimpacted_people = enrich(sdf, 'Age')\nage_pyramid(impacted_people);\n```\n\n# Conclusion\n\nIn this notebook example, we used Sentinel-2 data in order to perform remote sensing. For this we filtered out pre and post fire scenes. Using extract_band() we carried out visual assessment of the burnt area. We then computed the NBR on these scenes and computed the NBR difference to identify places that have been affected by the fire, using raster functions. 
We also normalized the values to match the burn severity index, applied a color map raster function that brings out the extent of fire damage and calculated the burnt area. Finally, we carried out a human impact assessment by plotting the age pyramid of affected population \n\n\n\n\n\n", "meta": {"hexsha": "eb64ddf4f8f73ec8ff64270372492822b38f1082", "size": 771173, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "samples/04_gis_analysts_data_scientists/wildfire_analysis_using_sentinel-2_imagery.ipynb", "max_stars_repo_name": "HONGcalmJIN/arcgis-python-api", "max_stars_repo_head_hexsha": "f8c9b4f0b505ad68ff58f33cbe8480d5d0408ae6", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-22T08:56:50.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-22T08:56:50.000Z", "max_issues_repo_path": "samples/04_gis_analysts_data_scientists/wildfire_analysis_using_sentinel-2_imagery.ipynb", "max_issues_repo_name": "HONGcalmJIN/arcgis-python-api", "max_issues_repo_head_hexsha": "f8c9b4f0b505ad68ff58f33cbe8480d5d0408ae6", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "samples/04_gis_analysts_data_scientists/wildfire_analysis_using_sentinel-2_imagery.ipynb", "max_forks_repo_name": "HONGcalmJIN/arcgis-python-api", "max_forks_repo_head_hexsha": "f8c9b4f0b505ad68ff58f33cbe8480d5d0408ae6", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 888.448156682, "max_line_length": 210211, "alphanum_fraction": 0.9605458179, "converted": true, "num_tokens": 4355, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5350984286266115, "lm_q2_score": 0.47657965106367595, "lm_q1q2_score": 0.25501702239959184}} {"text": "```python\nfrom IPython.display import Image\nImage('../../Python_probability_statistics_machine_learning_2E.png',width=200)\n```\n\nIt is sometimes very difficult to unequivocally attribute outcomes to causal\nfactors. For example, did your experiment generate the outcome you were hoping\nfor or not? Maybe something did happen, but the effect is not pronounced\nenough\nto separate it from inescapable measurement errors or other\nfactors in the\nambient environment? Hypothesis testing is a powerful\nstatistical method to\naddress these questions. Let's begin by again\nconsidering our coin-tossing\nexperiment with unknown parameter $p$. Recall\nthat the individual coin-flips\nare Bernoulli distributed. The first step is\nto establish separate hypotheses.\nFirst, $H_0$ is the so-called null\nhypothesis. In our case this can be\n\n$$\nH_0 \\colon \\theta < \\frac{1}{2}\n$$\n\n and the alternative hypothesis is then\n\n$$\nH_1 \\colon \\theta \\geq \\frac{1}{2}\n$$\n\n With this set up, the question now boils down to figuring out which\nhypothesis\nthe data is most consistent with. To choose between these, we need\na\nstatistical test that is a function, $G$, of the sample set\n$\\mathbf{X}_n=\\left\\{ X_i \\right\\}_n $ into the real line, where $X_i$ is the\nheads or tails outcome ($X_i \\in \\lbrace 0,1 \\rbrace$). In other words, we\ncompute $G(\\mathbf{X}_n)$ and check if it exceeds a threshold $c$. If not, then\nwe declare $H_0$ (otherwise, declare $H_1$). 
Notationally, this is the\nfollowing:\n\n$$\n\\begin{align*}\n G(\\mathbf{X}_n) < c & \\Rightarrow H_0 \\\\\\\n G(\\mathbf{X}_n)\n\\geq c & \\Rightarrow H_1\n\\end{align*}\n$$\n\n In summary, we have the observed data $\\mathbf{X}_n$ and a function\n$G$ that\nmaps that data onto the real line. Then, using the\nconstant $c$ as a threshold,\nthe inequality effectively divides the real line\ninto two parts, one\ncorresponding to each of the hypotheses.\n\nWhatever this test $G$ is, it will\nmake mistakes of two types --- false\nnegatives and false positives. The false\npositives arise from the case where we\ndeclare $H_0$ when the test says we\nshould declare $H_1$. This is\nsummarized in the Table [1](#tbl:decision).\n\n\n
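Before looking at the table, here is a tiny sketch (not from the original text) of such a threshold test in code; the statistic `G` and the threshold `c` are placeholders here, and a concrete choice of `G` (counting heads) is developed below.

```python
# Hedged sketch of a generic threshold test: G maps the observed sample onto the
# real line and the constant c splits that line between the two hypotheses.
def decide(x, G, c):
    return 'H1' if G(x) >= c else 'H0'

# For instance, with G = sum (the count of heads) and c = 5, only an all-heads
# sample of five tosses would lead us to declare H1.
decision = decide([1, 1, 0, 1, 1], sum, 5)   # -> 'H0'
```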
\n\n$$\n\\begin{table}\n\\footnotesize\n\\centering\n\\begin{tabular}{l|p{1.3in}|p{1.3in}}\n\\multicolumn{1}{c}{ } & \\multicolumn{1}{c}{Declare $H_0$ } & \\multicolumn{1}{c}{\nDeclare $H_1$ } \\\\\n\\hline\n$H_0\\:$ True & Correct & False\npositive (Type I error) \\\\\n\\hline\n$H_1\\:$ True & False negative (Type II error)\n& Correct (true-detect) \\\\\n\\hline\n\\end{tabular}\n\\caption{Truth table for\nhypotheses testing.}\n\\label{tbl:decision} \\tag{1}\n\\end{table}\n$$\n\n For this example, here are the false positives (aka false alarms):\n\n$$\nP_{FA} = \\mathbb{P}\\left( G(\\mathbf{X}_n) > c \\mid \\theta \\leq \\frac{1}{2}\n\\right)\n$$\n\n Or, equivalently,\n\n$$\nP_{FA} = \\mathbb{P}\\left( G(\\mathbf{X}_n) > c \\mid H_0 \\right)\n$$\n\n Likewise, the other error is a false negative, which we can write\nanalogously\nas\n\n$$\nP_{FN} = \\mathbb{P}\\left( G(\\mathbf{X}_n) < c \\vert H_1\\right)\n$$\n\n By choosing some acceptable values for either of these errors,\nwe can solve for\nthe other one. The practice is usually to pick a value of\n$P_{FA}$ and then\nfind the corresponding value of $P_{FN}$. Note that it is\ntraditional in\nengineering to speak about *detection probability*, which is\ndefined as\n\n$$\nP_{D} = 1- P_{FN} = \\mathbb{P}\\left( G(\\mathbf{X}_n) > c \\mid H_1\\right)\n$$\n\n In other words, this is the probability of declaring $H_1$ when the\ntest\nexceeds the threshold. This is otherwise known as the *probability of a\ntrue\ndetection* or *true-detect*.\n\n## Back to the Coin Flipping Example\n\nIn our\nprevious maximum likelihood discussion, we wanted to derive an\nestimator for the\n*value* of the probability of heads for the coin\nflipping experiment. For\nhypthesis testing, we want to ask a softer\nquestion: is the probability of heads\ngreater or less than $\\nicefrac{1}{2}$? As we\njust established, this leads to\nthe two hypotheses:\n\n$$\nH_0 \\colon \\theta < \\frac{1}{2}\n$$\n\n versus,\n\n$$\nH_1 \\colon \\theta > \\frac{1}{2}\n$$\n\n Let's assume we have five observations. Now we need the $G$ function\nand a\nthreshold $c$ to help pick between the two hypotheses. Let's count the\nnumber of\nheads observed in five observations as our\ncriterion. Thus, we have\n\n$$\nG(\\mathbf{X}_5) := \\sum_{i=1}^5 X_i\n$$\n\n and, suppose further that we pick $H_1$ only if exactly five out of\nfive\nobservations are heads. We'll call this the *all-heads* test.\n\nNow, because all\nof the $X_i$ are random variables, so is $G$ and we must\nfind the corresponding\nprobability mass function for $G$. Assuming the\nindividual coin tosses are\nindependent, the probability of five heads is $\\theta^5$.\nThis means that the\nprobability of rejecting the $H_0$ hypothesis (and choosing\n$H_1$, because there\nare only two choices here) based on the unknown underlying\nprobability is\n$\\theta^5$. In the parlance, this is known and the *power function*\nas in\ndenoted by $\\beta$ as in\n\n$$\n\\beta(\\theta) = \\theta^5\n$$\n\n Let's get a quick plot this in [Figure](#fig:Hypothesis_testing_001).\n\n\n\n\n```python\n%matplotlib inline\n\nfrom matplotlib.pylab import subplots\nimport numpy as np\nfig,ax=subplots()\nfig.set_size_inches((6,3))\nxi = np.linspace(0,1,50)\n_=ax.plot(xi, (xi)**5,'-k',label='all heads')\n_=ax.set_xlabel(r'$\\theta$',fontsize=22)\n_=ax.plot(0.5,(0.5)**5,'ko')\nfig.tight_layout()\nfig.savefig('fig-statistics/Hypothesis_Testing_001.png')\n```\n\n\n\n
[Figure fig:Hypothesis_testing_001: Power function for the all-heads test. The dark circle indicates the value of the function indicating $\\alpha$.]
\n\n\n\n\n Now, we have the following false alarm probability,\n\n$$\nP_{FA} = \\mathbb{P}( G(\\mathbf{X}_n)= 5 \\vert H_0) =\\mathbb{P}( \\theta^5\n\\vert H_0)\n$$\n\n Notice that this is a function of $\\theta$, which means there are\nmany false\nalarm probability values that correspond to this test. To be on the\nconservative\nside, we'll pick the supremum (i.e., maximum) of this function,\nwhich is known\nas the *size* of the test, traditionally denoted by $\\alpha$,\n\n$$\n\\alpha = \\sup_{\\theta \\in \\Theta_0} \\beta(\\theta)\n$$\n\n with domain $\\Theta_0 = \\lbrace \\theta < 1/2 \\rbrace$ which in our case is\n\n$$\n\\alpha = \\sup_{\\theta < \\frac{1}{2}} \\theta^5 = \\left(\\frac{1}{2}\\right)^5 =\n0.03125\n$$\n\n Likewise, for the detection probability,\n\n$$\n\\mathbb{P}_{D}(\\theta) = \\mathbb{P}( \\theta^5 \\vert H_1)\n$$\n\n which is again a function of the parameter $\\theta$. The problem with\nthis test\nis that the $P_{D}$ is pretty low for most of the domain of\n$\\theta$. For\ninstance, values in the nineties for $P_{D}$\nonly happen when $\\theta > 0.98$.\nIn other words, if the coin produces\nheads 98 times out of 100, then we can\ndetect $H_1$ reliably. Ideally, we want\na test that is zero for the domain\ncorresponding to $H_0$ (i.e., $\\Theta_0$) and\nequal to one otherwise.\nUnfortunately, even if we increase the length of the\nobserved sequence, we\ncannot escape this effect with this test. You can try\nplotting $\\theta^n$ for\nlarger and larger values of $n$ to see this.\n\n### Majority Vote Test\n\nDue to the\nproblems with the detection probability in the all-heads test, maybe\nwe can\nthink of another test that will have the performance we want? Suppose we\nreject\n$H_0$ if the majority of the observations are heads. Then, using the\nsame\nreasoning as above, we have\n\n$$\n\\beta(\\theta) = \\sum_{k=3}^5 \\binom{5}{k} \\theta^k(1-\\theta)^{5-k}\n$$\n\n[Figure](#fig:Hypothesis_testing_002) shows the power function\nfor both the\nmajority vote and the all-heads tests.\n\n\n```python\nfig,ax=subplots()\nfig.set_size_inches((6,3))\nfrom sympy.abc import theta,k # get some variable symbols\nimport sympy as S\nxi = np.linspace(0,1,50)\nexpr=S.Sum(S.binomial(5,k)*theta**(k)*(1-theta)**(5-k),(k,3,5)).doit()\n_=ax.plot(xi, (xi)**5,'-k',label='all heads')\n_=ax.plot(xi, S.lambdify(theta,expr)(xi),'--k',label='majority vote')\n_=ax.plot(0.5, (0.5)**5,'ko')\n_=ax.plot(0.5, S.lambdify(theta,expr)(0.5),'ko')\n_=ax.set_xlabel(r'$\\theta$',fontsize=22)\n_=ax.legend(loc=0)\nfig.tight_layout()\nfig.savefig('fig-statistics/Hypothesis_Testing_002.png')\n```\n\n\n\n
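As a quick numerical companion to the two curves plotted above, the following sketch (not part of the original text) evaluates both power functions at a few illustrative values of $\\theta$.

```python
# Hedged sketch: evaluate the all-heads and the majority-vote power functions at
# a few illustrative theta values; binom(5, theta).sf(2) is P(at least 3 heads in 5).
from scipy.stats import binom
for theta in (0.5, 0.75, 0.9, 0.98):
    print(theta, theta**5, binom(5, theta).sf(2))
```

At $\\theta=0.5$ the all-heads test rejects only about 3% of the time while the majority-vote test rejects half the time, which previews the size calculation that follows.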
[Figure fig:Hypothesis_testing_002: Compares the power function for the all-heads test with that of the majority-vote test.]
\n\n\n\nIn this case, the new test has *size*\n\n$$\n\\alpha = \\sup_{\\theta < \\frac{1}{2}} \\theta^{5} + 5 \\theta^{4} \\left(- \\theta\n+ 1\\right) + 10 \\theta^{3} \\left(- \\theta + 1\\right)^{2} = \\frac{1}{2}\n$$\n\n As before we only get to upwards of 90% for detection\nprobability only when the\nunderlying parameter $\\theta > 0.75$. \nLet's see what happens when we consider\nmore than five samples. For\nexample, let's suppose that we have $n=100$ samples\nand we want to\nvary the threshold for the majority vote test. For example, let's\nhave\na new test where we declare $H_1$ when $k=60$ out of the 100 trials\nturns\nout to be heads. What is the $\\beta$ function in this case?\n\n$$\n\\beta(\\theta) = \\sum_{k=60}^{100} \\binom{100}{k} \\theta^k(1-\\theta)^{100-k}\n$$\n\n This is too complicated to write by hand, but the statistics module\nin Sympy\nhas all the tools we need to compute this.\n\n\n```python\nfrom sympy.stats import P, Binomial\ntheta = S.symbols('theta',real=True)\nX = Binomial('x',100,theta)\nbeta_function = P(X>60)\nprint (beta_function.subs(theta,0.5)) # alpha\nprint (beta_function.subs(theta,0.70))\n```\n\n 0.0176001001088524\n 0.979011423996075\n\n\nThese results are much better than before because the $\\beta$\nfunction is much\nsteeper. If we declare $H_1$ when we observe 60 out of 100\ntrials are heads,\nthen we wrongly declare heads approximately 1.8% of the\ntime. Otherwise, if it\nhappens that the true value for $p>0.7$, we will\nconclude correctly\napproximately 97% of the time. A quick simulation can sanity\ncheck these results\nas shown below:\n\n\n```python\nfrom scipy import stats\nrv=stats.bernoulli(0.5) # true p = 0.5\n# number of false alarms ~ 0.018\nprint (sum(rv.rvs((1000,100)).sum(axis=1)>60)/1000.)\n```\n\n 0.015\n\n\nThe above code is pretty dense so let's unpack it. In the first line, we use\nthe `scipy.stats` module to define the\nBernoulli random variable for the coin\nflip. Then, we use the `rvs` method of\nthe variable to generate 1000 trials of\nthe experiment where each trial\nconsists of 100 coin flips. This generates a\n$1000 \\times 100$ matrix where the\nrows are the individual trials and the\ncolumns are the outcomes of each\nrespective set of 100 coin flips. The\n`sum(axis=1)` part computes the sum across the\ncolumns. Because the values of\nthe embedded matrix are only `1` or `0` this\ngives us the count of flips that\nare heads per row. The next `>60` part\ncomputes the boolean 1000-long vector of\nvalues that are bigger than 60. The\nfinal `sum` adds these up. Again, because\nthe entries in the array are `True`\nor `False` the `sum` computes the count of\ntimes the number of heads has\nexceeded 60 per 100 coin flips in each of 1000\ntrials. Then, dividing this\nnumber by 1000 gives a quick approximation of false\nalarm probability we\ncomputed above for this case where the true value of\n$p=0.5$.\n\n## Receiver Operating Characteristic\n\nBecause the majority vote test\nis a binary test, we can compute the *Receiver\nOperating Characteristic* (ROC)\nwhich is the graph of the $(P_{FA},\nP_D)$. The term comes from radar systems but\nis a very general method for\nconsolidating all of these issues into a single\ngraph. Let's consider a typical\nsignal processing example with two hypotheses.\nIn $H_0$, there is noise but no\nsignal present at the receiver,\n\n$$\nH_0 \\colon X = \\epsilon\n$$\n\n where $\\epsilon \\sim \\mathcal{N}(0,\\sigma^2)$ represents additive\nnoise. 
In the\nalternative hypothesis, there is a deterministic signal at the receiver,\n\n$$\nH_1 \\colon X = \\mu + \\epsilon\n$$\n\n Again, the problem is to choose between these two hypotheses. For\n$H_0$, we\nhave $X \\sim \\mathcal{N}(0,\\sigma^2)$ and for $H_1$, we have $ X \\sim\n\\mathcal{N}(\\mu,\\sigma^2)$. Recall that we only observe values for $x$ and\nmust\npick either $H_0$ or $H_1$ from these observations. Thus, we need a\nthreshold,\n$c$, to compare $x$ against in order to distinguish the two\nhypotheses.\n[Figure](#fig:Hypothesis_testing_003) shows the probability density\nfunctions\nunder each of the hypotheses. The dark vertical line is the threshold\n$c$. The\ngray shaded area is the probability of detection, $P_D$ and the shaded\narea is\nthe probability of false alarm, $P_{FA}$. The test evaluates every\nobservation\nof $x$ and concludes $H_0$ if $x
<c$ and $H_1$ otherwise.\n\n\n

[Figure fig:Hypothesis_testing_003: The two density functions for the $H_0$ and $H_1$ hypotheses. The shaded gray area is the detection probability and the shaded dark gray area is the probability of false alarm. The vertical line is the decision threshold.]
\n\n\n\n**Programming Tip.**\n\nThe shading shown in [Figure](#fig:Hypothesis_testing_003)\ncomes from\nMatplotlib's `fill_between` function. This function has a `where`\nkeyword\nargument to specify which part of the plot to apply shading with\nspecified\n`color` keyword argument. Note there is also a `fill_betweenx`\nfunction that\nfills horizontally. The `text` function can place formatted\ntext\nanywhere in the plot and can utilize basic \\LaTeX{} formatting.\n\n\n\nAs we slide\nthe threshold left and right along the horizontal axis, we naturally change the\ncorresponding areas under\neach of the curves shown in\n[Figure](#fig:Hypothesis_testing_003) and thereby\nchange the values of $P_D$ and\n$P_{FA}$. The contour that emerges from sweeping\nthe threshold this way is the\nROC as shown in [Figure](#fig:Hypothesis_testing_004). This figure also shows\nthe diagonal line which\ncorresponds to making decisions based on the flip of a\nfair coin. Any\nmeaningful test must do better than coin flipping so the more the\nROC bows up\nto the top left corner of the graph, the better. Sometimes ROCs are\nquantified\ninto a single number called the *area under the curve* (AUC), which\nvaries from\n0.5 to 1.0 as shown. In our example, what separates the two\nprobability density\nfunctions is the value of $\\mu$. In a real situation, this\nwould be determined\nby signal processing methods that include many complicated\ntrade-offs. The key\nidea is that whatever those trade-offs are, the test itself\nboils down to the\nseparation between these two density functions --- good tests\nseparate the two\ndensity functions and bad tests do not. Indeed, when there is\nno separation, we\narrive at the diagonal-line coin-flipping situation we just\ndiscussed.\n\nWhat values for $P_D$ and $P_{FA}$ are considered *acceptable*\ndepends on the\napplication. For example, suppose you are testing for a fatal\ndisease. It could\nbe that you are willing to except a relatively high $P_{FA}$\nvalue if that\ncorresponds to a good $P_D$ because the test is relatively cheap\nto administer\ncompared to the alternative of missing a detection. On the other\nhand,\nmay be a false alarm triggers an expensive response, so that minimizing\nthese alarms is more important than potentially missing a detection. These\ntrade-offs can only be determined by the application and design factors.\n\n\n\n\n\n
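To make the threshold sweep concrete, here is a minimal sketch (not part of the original text) that traces out the $(P_{FA}, P_D)$ pairs for the two-Gaussian setup above; the values $\\mu=1$ and $\\sigma=1$ are assumptions chosen only for illustration.

```python
# Hedged sketch with assumed mu and sigma: sweep the threshold c and collect the
# (P_FA, P_D) pairs that trace out the ROC for H0: X~N(0,sigma^2) vs H1: X~N(mu,sigma^2).
import numpy as np
from scipy import stats
mu, sigma = 1.0, 1.0
c = np.linspace(-4.0, 5.0, 200)        # candidate thresholds
P_FA = stats.norm(0, sigma).sf(c)      # P(X > c | H_0)
P_D = stats.norm(mu, sigma).sf(c)      # P(X > c | H_1)
auc = np.trapz(P_D[::-1], P_FA[::-1])  # area under the ROC curve
```

Plotting `P_D` against `P_FA` reproduces the bowed curve discussed above, and increasing the separation $\\mu$ between the two densities pushes the curve toward the upper-left corner and the area under the curve toward one.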

[Figure fig:Hypothesis_testing_004: The Receiver Operating Characteristic (ROC) corresponding to [Figure](#fig:Hypothesis_testing_003).]
\n\n\n\n\n\n##\nP-Values\n\nThere are a lot of moving parts in hypothesis testing. What we need\nis\na way to consolidate the findings. The idea is that we want to find\nthe minimum\nlevel at which the test rejects $H_0$. Thus, the p-value\nis the probability,\nunder $H_0$, that the test-statistic is at least\nas extreme as what was actually\nobserved. Informally, this means\nthat smaller values imply that $H_0$ should be\nrejected, although\nthis doesn't mean that large values imply that $H_0$ should\nbe\nretained. This is because a large p-value can arise from either $H_0$\nbeing\ntrue or the test having low statistical power.\n\nIf $H_0$ is true, the p-value is\nuniformly distributed in the interval $(0,1)$.\nIf $H_1$ is true, the\ndistribution of the p-value will concentrate closer to\nzero. For continuous\ndistributions, this can be proven rigorously and implies\nthat if we reject $H_0$\nwhen the corresponding p-value is less than $\\alpha$,\nthen the probability of a\nfalse alarm is $\\alpha$. Perhaps it helps to\nformalize this a bit before\ncomputing it. Suppose $\\tau(X)$ is a test\nstatistic that rejects $H_0$ as it\ngets bigger. Then, for each sample $x$,\ncorresponding to the data we actually\nhave on-hand, we define\n\n$$\np(x) = \\sup_{\\theta \\in \\Theta_0} \\mathbb{P}_{\\theta}(\\tau(X) > \\tau(x))\n$$\n\n This equation states that the supremum (i.e., maximum)\nprobability that the\ntest statistic, $\\tau(X)$, exceeds the value for\nthe test statistic on this\nparticular data ($\\tau(x)$) over the\ndomain $\\Theta_0$ is defined as the\np-value. Thus, this embodies a\nworst-case scenario over all values of $\\theta$.\nHere's one way to think about this. Suppose you rejected $H_0$, and someone\nsays\nthat you just got *lucky* and somehow just drew data that happened to\ncorrespond\nto a rejection of $H_0$. What p-values provide is a way to address\nthis by\ncapturing the odds of just a favorable data-draw. Thus, suppose that\nyour\np-value is 0.05. Then, what you are showing is that the odds of just\ndrawing\nthat data sample, given $H_0$ is in force, is just 5%. This means that\nthere's a\n5% chance that you somehow lucked out and got a favorable draw of\ndata.\n\nLet's\nmake this concrete with an example. Given, the majority-vote rule above,\nsuppose\nwe actually do observe three of five heads. Given the $H_0$, the\nprobability of\nobserving this event is the following:\n\n$$\np(x) =\\sup_{\\theta \\in \\Theta_0} \\sum_{k=3}^5\\binom{5}{k}\n\\theta^k(1-\\theta)^{5-k} = \\frac{1}{2}\n$$\n\n For the all-heads test, the corresponding computation is the following:\n\n$$\np(x) =\\sup_{\\theta \\in \\Theta_0} \\theta^5 = \\frac{1}{2^5} = 0.03125\n$$\n\nFrom just looking at these p-values, you might get the feeling that the second\ntest is better, but we still have the same detection probability issues we\ndiscussed above; so, p-values help in summarizing some aspects of our\nhypothesis\ntesting, but they do *not* summarize all the salient aspects of the\n*entire*\nsituation.\n\n## Test Statistics\n\nAs we have seen, it is difficult to derive good\ntest statistics for hypothesis\ntesting without a systematic process. The\nNeyman-Pearson Test is derived from\nfixing a false-alarm value ($\\alpha$) and\nthen maximizing the detection\nprobability. 
This results in the Neyman-Pearson\nTest,\n\n$$\nL(\\mathbf{x}) = \\frac{f_{X|H_1}(\\mathbf{x})}{f_{X|H_0}(\\mathbf{x})}\n\\stackrel[H_0]{H_1}{\\gtrless} \\gamma\n$$\n\n where $L$ is the likelihood ratio and where the threshold\n$\\gamma$ is chosen\nsuch that\n\n$$\n\\int_{x:L(\\mathbf{x})>\\gamma} f_{X|H_0}(\\mathbf{x}) d\\mathbf{x}=\\alpha\n$$\n\n The Neyman-Pearson Test is one of a family of tests that use\nthe likelihood\nratio.\n\n**Example.** Suppose we have a receiver and we want to distinguish\nwhether just noise ($H_0$) or signal pluse noise ($H_1$) is received.\nFor the\nnoise-only case, we have $x\\sim \\mathcal{N}(0,1)$ and for the\nsignal pluse\nnoise case we have $x\\sim \\mathcal{N}(1,1)$. In other\nwords, the mean of the\ndistribution shifts in the presence of the\nsignal. This is a very common problem\nin signal processing and\ncommunications. The Neyman-Pearson Test then boils down\nto the\nfollowing,\n\n$$\nL(x)= e^{-\\frac{1}{2}+x}\\stackrel[H_0]{H_1}{\\gtrless}\\gamma\n$$\n\n Now we have to find the threshold $\\gamma$ that solves the\nmaximization problem\nthat characterizes the Neyman-Pearson Test. Taking\nthe natural logarithm and\nre-arranging gives,\n\n$$\nx\\stackrel[H_0]{H_1}{\\gtrless} \\frac{1}{2}+\\log\\gamma\n$$\n\n The next step is find $\\gamma$ corresponding to the desired\n$\\alpha$ by\ncomputing it from the following,\n\n$$\n\\int_{1/2+\\log\\gamma}^{\\infty} f_{X|H_0}(x)dx = \\alpha\n$$\n\n For example, taking $\\alpha=1/100$, gives\n$\\gamma\\approx 6.21$. To summarize\nthe test in this case, we have,\n\n$$\nx\\stackrel[H_0]{H_1}{\\gtrless} 2.32\n$$\n\n Thus, if we measure $X$ and see that its value\nexceeds the threshold above, we\ndeclare $H_1$ and otherwise\ndeclare $H_0$. The following code shows how to\nsolve\nthis example using Sympy and Scipy. First, we\nset up the likelihood ratio,\n\n\n```python\nimport sympy as S\nfrom sympy import stats\ns = stats.Normal('s',1,1) # signal+noise\nn = stats.Normal('n',0,1) # noise\nx = S.symbols('x',real=True)\nL = stats.density(s)(x)/stats.density(n)(x)\n```\n\nNext, to find the $\\gamma$ value,\n\n\n```python\ng = S.symbols('g',positive=True) # define gamma\nv=S.integrate(stats.density(n)(x),\n (x,S.Rational(1,2)+S.log(g),S.oo))\n```\n\n**Programming Tip.**\n\nProviding additional information regarding the Sympy\nvariable by using the\nkeyword argument `positive=True` helps the internal\nsimplification algorithms\nwork faster and better. This is especially useful when\ndealing with complicated\nintegrals that involve special functions. Furthermore,\nnote that we used the\n`Rational` function to define the `1/2` fraction, which is\nanother way of\nproviding hints to Sympy. Otherwise, it's possible that the\nfloating-point\nrepresentation of the fraction could disguise the simple\nfraction and\nthereby miss internal simplification opportunities.\n\n\n\n We want to\nsolve for `g` in the above expression. 
Sympy has some\nbuilt-in numerical solvers\nas in the following,\n\n\n```python\nprint (S.nsolve(v-0.01,3.0)) # approx 6.21\n```\n\n 6.21116124253284\n\n\nNote that in this situation it is better to use the numerical\nsolvers because\nSympy `solve` may grind along for a long time to\nresolve this.\n\n### Generalized\nLikelihood Ratio Test\n\nThe likelihood ratio test can be generalized using the\nfollowing statistic,\n\n$$\n\\Lambda(\\mathbf{x})= \\frac{\\sup_{\\theta\\in\\Theta_0}\nL(\\theta)}{\\sup_{\\theta\\in\\Theta}\nL(\\theta)}=\\frac{L(\\hat{\\theta}_0)}{L(\\hat{\\theta})}\n$$\n\n where $\\hat{\\theta}_0$ maximizes $L(\\theta)$ subject to\n$\\theta\\in\\Theta_0$ and\n$\\hat{\\theta}$ is the maximum likelihood estimator.\nThe intuition behind this\ngeneralization of the Likelihood Ratio Test is that\nthe denomimator is the usual\nmaximum likelihood estimator and the numerator is\nthe maximum likelihood\nestimator, but over a restricted domain ($\\Theta_0$).\nThis means that the ratio\nis always less than unity because the maximum\nlikelihood estimator over the\nentire space will always be at least as maximal\nas that over the more restricted\nspace. When this $\\Lambda$ ratio gets small\nenough, it means that the maximum\nlikelihood estimator over the entire domain\n($\\Theta$) is larger which means\nthat it is safe to reject the null hypothesis\n$H_0$. The tricky part is that\nthe statistical distribution of $\\Lambda$ is\nusually eye-wateringly difficult.\nFortunately, Wilks Theorem says that with\nsufficiently large $n$, the\ndistribution of $-2\\log\\Lambda$ is approximately\nchi-square with $r-r_0$ degrees\nof freedom, where $r$ is the number of free\nparameters for $\\Theta$ and $r_0$ is\nthe number of free parameters in\n$\\Theta_0$. With this result, if we want an\napproximate test at level\n$\\alpha$, we can reject $H_0$ when $-2\\log\\Lambda \\ge\n\\chi^2_{r-r_0}(\\alpha)$\nwhere $\\chi^2_{r-r_0}(\\alpha)$ denotes the $1-\\alpha$\nquantile of the\n$\\chi^2_{r-r_0}$ chi-square distribution. However, the problem\nwith this\nresult is that there is no definite way of knowing how big $n$ should\nbe. The\nadvantage of this generalized likelihood ratio test is that it \ncan test\nmultiple hypotheses simultaneously, as illustrated\nin the following example.\n**Example.** Let's return to our coin-flipping example, except now we have\nthree\ndifferent coins. The likelihood function is then,\n\n$$\nL(p_1,p_2,p_3) =\n\\texttt{binom}(k_1;n_1,p_1)\\texttt{binom}(k_2;n_2,p_2)\\texttt{binom}(k_3;n_3,p_3)\n$$\n\n where $\\texttt{binom}$ is the binomial distribution with \nthe given parameters.\nFor example,\n\n$$\n\\texttt{binom}(k;n,p) =\\sum_{k=0}^n \\binom{n}{k} p^k(1-p)^{n-k}\n$$\n\n The null hypothesis is that all three coins have the\nsame probability of\nheads, $H_0:p=p_1=p_2=p_3$. The alternative hypothesis is\nthat at least one of\nthese probabilites is different. Let's consider the\nnumerator of the $\\Lambda$\nfirst, which will give us the maximum likelihood\nestimator of $p$. Because the\nnull hypothesis is that all the $p$ values are\nequal, we can just treat this as\none big binomial distribution with\n$n=n_1+n_2+n_3$ and $k=k_1+k_2+k_3$ is the\ntotal number of heads observed for\nany coin. Thus, under the null hypothesis,\nthe distribution of $k$ is binomial\nwith parameters $n$ and $p$. Now, what is\nthe maximum likelihood estimator for\nthis distribution? 
We have worked this\nproblem before and have the following,\n\n$$\n\\hat{p}_0= \\frac{k}{n}\n$$\n\n In other words, the maximum likelihood estimator under the null\nhypothesis is\nthe proportion of ones observed in the sequence of $n$ trials\ntotal. Now, we\nhave to substitute this in for the likelihood under the null\nhypothesis to\nfinish the numerator of $\\Lambda$,\n\n$$\nL(\\hat{p}_0,\\hat{p}_0,\\hat{p}_0) =\n\\texttt{binom}(k_1;n_1,\\hat{p}_0)\\texttt{binom}(k_2;n_2,\\hat{p}_0)\\texttt{binom}(k_3;n_3,\\hat{p}_0)\n$$\n\nFor the denomimator of $\\Lambda$, which represents the case of maximizing over\nthe entire space, the maximum likelihood estimator for each separate binomial\ndistribution is likewise,\n\n$$\n\\hat{p}_i= \\frac{k_i}{n_i}\n$$\n\n which makes the likelihood in the denominator the following,\n\n$$\nL(\\hat{p}_1,\\hat{p}_2,\\hat{p}_3) =\n\\texttt{binom}(k_1;n_1,\\hat{p}_1)\\texttt{binom}(k_2;n_2,\\hat{p}_2)\\texttt{binom}(k_3;n_3,\\hat{p}_3)\n$$\n\n for each of the $i\\in \\lbrace 1,2,3 \\rbrace$ binomial distributions. Then, the\n$\\Lambda$ statistic is then the following,\n\n$$\n\\Lambda(k_1,k_2,k_3) =\n\\frac{L(\\hat{p}_0,\\hat{p}_0,\\hat{p}_0)}{L(\\hat{p}_1,\\hat{p}_2,\\hat{p}_3)}\n$$\n\n Wilks theorems states that $-2\\log\\Lambda$ is chi-square\ndistributed. We can\ncompute this example with the statistics tools in Sympy and\nScipy.\n\n\n```python\nfrom scipy.stats import binom, chi2\nimport numpy as np\n# some sample parameters\np0,p1,p2 = 0.3,0.4,0.5\nn0,n1,n2 = 50,180,200\nbrvs= [ binom(i,j) for i,j in zip((n0,n1,n2),(p0,p1,p2))]\ndef gen_sample(n=1):\n 'generate samples from separate binomial distributions'\n if n==1:\n return [i.rvs() for i in brvs]\n else:\n return [gen_sample() for k in range(n)]\n```\n\n**Programming Tip.**\n\nNote the recursion in the definition of the `gen_sample`\nfunction where a\nconditional clause of the function calls itself. This is a\nquick way to reusing\ncode and generating vectorized output. Using `np.vectorize`\nis another way, but\nthe code is simple enough in this case to use the\nconditional clause. In\nPython, it is generally bad for performance to have code\nwith nested recursion\nbecause of how the stack frames are managed. However,\nhere we are only\nrecursing once so this is not an issue.\n\n\n\n Next, we compute\nthe logarithm of the numerator of the $\\Lambda$\nstatistic,\n\n\n```python\nnp.random.seed(1234)\n```\n\n\n```python\nk0,k1,k2 = gen_sample()\nprint (k0,k1,k2)\npH0 = sum((k0,k1,k2))/sum((n0,n1,n2))\nnumer = np.sum([np.log(binom(ni,pH0).pmf(ki)) \n for ni,ki in \n zip((n0,n1,n2),(k0,k1,k2))])\nprint (numer)\n```\n\n 12 68 103\n -15.545863836567879\n\n\nNote that we used the null hypothesis estimate for the $\\hat{p}_0$.\nLikewise,\nfor the logarithm of the denominator we have the following,\n\n\n```python\ndenom = np.sum([np.log(binom(ni,pi).pmf(ki)) \n for ni,ki,pi in \n zip((n0,n1,n2),(k0,k1,k2),(p0,p1,p2))])\nprint (denom)\n```\n\n -8.424106480792402\n\n\nNow, we can compute the logarithm of the $\\Lambda$ statistic as\nfollows and see\nwhat the corresponding value is according to Wilks theorem,\n\n\n```python\nchsq=chi2(2)\nlogLambda =-2*(numer-denom)\nprint (logLambda)\nprint (1- chsq.cdf(logLambda))\n```\n\n 14.243514711550954\n 0.0008073467083287156\n\n\nBecause the value reported above is less than the 5% significance\nlevel, we\nreject the null hypothesis that all the coins have the same\nprobability of\nheads. 
Note that there are two degrees of freedom because the\ndifference in the\nnumber of parameters between the null hypothesis ($p$) and\nthe alternative\n($p_1,p_2,p_3$) is two. We can build a quick Monte\nCarlo simulation to check the\nprobability of detection for this example using\nthe following code, which is\njust a combination of the last few code blocks,\n\n\n```python\nc= chsq.isf(.05) # 5% significance level\nout = []\nfor k0,k1,k2 in gen_sample(100):\n pH0 = sum((k0,k1,k2))/sum((n0,n1,n2))\n numer = np.sum([np.log(binom(ni,pH0).pmf(ki)) \n for ni,ki in \n zip((n0,n1,n2),(k0,k1,k2))])\n denom = np.sum([np.log(binom(ni,pi).pmf(ki)) \n for ni,ki,pi in \n zip((n0,n1,n2),(k0,k1,k2),(p0,p1,p2))])\n out.append(-2*(numer-denom)>c)\n\nprint (np.mean(out)) # estimated probability of detection\n```\n\n 0.59\n\n\nThe above simulation shows the estimated probability of\ndetection, for this set\nof example parameters. This relative low\nprobability of detection means that\nwhile the test is unlikely (i.e.,\nat the 5% significance level) to mistakenly\npick the null hypothesis,\nit is likewise missing many of the $H_1$ cases (i.e.,\nlow probability\nof detection). The trade-off between which is more important is\nup to\nthe particular context of the problem. In some situations, we may\nprefer\nadditional false alarms in exchange for missing fewer $H_1$\ncases.\n\n\n###\nPermutation Test\n\n\n\n\n\n\n\nThe Permutation Test is good way to test whether or not\nsamples\nsamples come from the same distribution. For example, suppose that\n\n$$\nX_1, X_2, \\ldots, X_m \\sim F\n$$\n\n and also,\n\n$$\nY_1, Y_2, \\ldots, Y_n \\sim G\n$$\n\n That is, $Y_i$ and $X_i$ come from different distributions. Suppose\nwe have\nsome test statistic, for example\n\n$$\nT(X_1,\\ldots,X_m,Y_1,\\ldots,Y_n) = \\vert\\overline{X}-\\overline{Y}\\vert\n$$\n\n Under the null hypothesis for which $F=G$, any of the\n$(n+m)!$ permutations are\nequally likely. Thus, suppose for\neach of the $(n+m)!$ permutations, we have the\ncomputed\nstatistic,\n\n$$\n\\lbrace T_1,T_2,\\ldots,T_{(n+m)!} \\rbrace\n$$\n\n Then, under the null hypothesis, each of these values is equally\nlikely. The\ndistribution of $T$ under the null hypothesis is the *permutation\ndistribution*\nthat puts weight $1/(n+m)!$ on each $T$-value. Suppose $t_o$ is\nthe observed\nvalue of the test statistic and assume that large $T$ rejects the\nnull\nhypothesis, then the p-value for the permutation test is the following,\n\n$$\nP(T>t_o)= \\frac{1}{(n+m)!} \\sum_{j=1}^{(n+m)!} I(T_j>t_o)\n$$\n\n where $I()$ is the indicator function. For large $(n+m)!$, we can\nsample\nrandomly from the set of all permutations to estimate this p-value.\n**Example.** Let's return to our coin-flipping example from last time, but\nnow\nwe have only two coins. The hypothesis is that both coins\nhave the same\nprobability of heads. We can use the built-in\nfunction in Numpy to compute the\nrandom permutations.\n\n\n```python\nx=binom(10,0.3).rvs(5) # p=0.3\ny=binom(10,0.5).rvs(3) # p=0.5\nz = np.hstack([x,y]) # combine into one array\nt_o = abs(x.mean()-y.mean()) \nout = [] # output container\nfor k in range(1000):\n perm = np.random.permutation(z)\n T=abs(perm[:len(x)].mean()-perm[len(x):].mean())\n out.append((T>t_o))\n\nprint ('p-value = ', np.mean(out))\n```\n\n p-value = 0.0\n\n\nNote that the size of total permutation space is\n$8!=40320$ so we are taking\nrelatively few (i.e., 100) random\npermutations from this space.\n\n### Wald Test\nThe Wald Test is an asympotic test. 
Suppose we have $H_0:\\theta=\\theta_0$ and\notherwise $H_1:\\theta\\ne\\theta_0$, the corresponding statistic is defined as\nthe\nfollowing,\n\n$$\nW=\\frac{\\hat{\\theta}_n-\\theta_0}{\\texttt{se}}\n$$\n\n where $\\hat{\\theta}$ is the maximum likelihood estimator and\n$\\texttt{se}$ is\nthe standard error,\n\n$$\n\\texttt{se} = \\sqrt{\\mathbb{V}(\\hat{\\theta}_n)}\n$$\n\n Under general conditions, $W\\overset{d}{\\to} \\mathcal{N}(0,1)$.\nThus, an\nasympotic test at level $\\alpha$ rejects when $\\vert W\\vert>\nz_{\\alpha/2}$ where\n$z_{\\alpha/2}$ corresponds to $\\mathbb{P}(\\vert\nZ\\vert>z_{\\alpha/2})=\\alpha$\nwith $Z \\sim \\mathcal{N}(0,1)$. For our favorite\ncoin-flipping example, if\n$H_0:\\theta=\\theta_0$, then\n\n$$\nW = \\frac{\\hat{\\theta}-\\theta_0}{\\sqrt{\\hat{\\theta}(1-\\hat{\\theta})/n}}\n$$\n\n We can simulate this using the following code at the usual\n5% significance\nlevel,\n\n\n```python\nfrom scipy import stats\ntheta0 = 0.5 # H0\nk=np.random.binomial(1000,0.3)\ntheta_hat = k/1000. # MLE\nW = (theta_hat-theta0)/np.sqrt(theta_hat*(1-theta_hat)/1000)\nc = stats.norm().isf(0.05/2) # z_{alpha/2}\nprint (abs(W)>c) # if true, reject H0\n```\n\n True\n\n\nThis rejects $H_0$ because the true $\\theta=0.3$ and the null hypothesis\nis\nthat $\\theta=0.5$. Note that $n=1000$ in this case which puts us well inside\nthe\nasympotic range of the result. We can re-do this example to estimate\nthe\ndetection probability for this example as in the following code,\n\n\n```python\ntheta0 = 0.5 # H0\nc = stats.norm().isf(0.05/2.) # z_{alpha/2}\nout = []\nfor i in range(100):\n k=np.random.binomial(1000,0.3)\n theta_hat = k/1000. # MLE\n W = (theta_hat-theta0)/np.sqrt(theta_hat*(1-theta_hat)/1000.)\n out.append(abs(W)>c) # if true, reject H0\n\nprint (np.mean(out)) # detection probability\n```\n\n 1.0\n\n\n## Testing Multiple Hypotheses\n\nThus far, we have focused primarily on two\ncompeting hypotheses. Now, we\nconsider multiple comparisons. The general\nsituation is the following. We test\nthe null hypothesis against a sequence of\n$n$ competing hypotheses $H_k$. We\nobtain p-values for each hypothesis so now\nwe have multiple p-values to\nconsider $\\lbrace p_k \\rbrace$. To boil this\nsequence down to a single\ncriterion, we can make the following argument. Given\n$n$ independent hypotheses\nthat are all untrue, the probability of getting at\nleast one false alarm is the\nfollowing,\n\n$$\nP_{FA} = 1-(1-p_0)^n\n$$\n\n where $p_0$ is the individual p-value threshold (say, 0.05). The\nproblem here\nis that $P_{FA}\\rightarrow 1$ as $n\\rightarrow\\infty$. If we want\nto make many\ncomparisons at once and control the overall false alarm rate the\noverall p-value\nshould be computed under the assumption that none of the\ncompeting hypotheses is\nvalid. The most common way to address this is with the\nBonferroni correction\nwhich says that the individual significance level should\nbe reduced to $p/n$.\nObviously, this makes it much harder to declare\nsignificance for any particular\nhypothesis. The natural consequence of this\nconservative restriction is to\nreduce the statistical power of the experiment,\nthus making it more likely the\ntrue effects will be missed.\n\nIn 1995, Benjamini and Hochberg devised a simple\nmethod that tells which\np-values are statistically significant. 
The procedure is\nto sort the list of\np-values in ascending order, choose a false-discovery rate\n(say, $q$), and then\nfind the largest p-value in the sorted list such that $p_k\n\\le k q/n$, where\n$k$ is the p-value's position in the sorted list. Finally,\ndeclare that $p_k$\nvalue and all the others less than it statistically\nsignificant. This procedure\nguarantees that the proportion of false-positives is\nless than $q$ (on\naverage). The Benjamini-Hochberg procedure (and its\nderivatives) is fast and\neffective and is widely used for testing hundreds of\nprimarily false hypotheses\nwhen studying genetics or diseases. Additionally,\nthis\nprocedure provides better statistical power than the Bonferroni correction.\n\n\n\n\n\n\n## Fisher Exact Test\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n$$\n\\begin{table}[]\n\\centering\n\\caption{Example Contingency Table}\n\\label{tab:contingencyTable} \\tag{2}\n\\begin{tabular}{lllll}\n\\cline{1-4}\n\\multicolumn{1}{|l|}{} & \\multicolumn{1}{l|}{Infection} &\n\\multicolumn{1}{l|}{No infection} & \\multicolumn{1}{l|}{Total} & \\\\ \\cline{1-4}\n\\multicolumn{1}{|l|}{Male} & \\multicolumn{1}{c|}{13} &\n\\multicolumn{1}{c|}{11} & \\multicolumn{1}{c|}{24} & \\\\ \\cline{1-4}\n\\multicolumn{1}{|l|}{Female} & \\multicolumn{1}{c|}{12} &\n\\multicolumn{1}{c|}{1} & \\multicolumn{1}{c|}{13} & \\\\ \\cline{1-4}\n\\multicolumn{1}{|l|}{Total} & \\multicolumn{1}{c|}{25} &\n\\multicolumn{1}{c|}{12} & \\multicolumn{1}{c|}{37} & \\\\ \\cline{1-4}\n\\end{tabular}\n\\end{table}\n$$\n\nContingency tables represent the partitioning of a sample population of two\ncategories between two different classifications as shown in the following\nTable\n[2](#tab:contingencyTable). The question is whether or not the observed\ntable\ncorresponds to a random partition of the sample population, constrained\nby the\nmarginal sums. Note that because this is a two-by-two table, a change in\nany of\nthe table entries automatically affects all of the other terms because\nof the\nrow and column sum constraints. This means that equivalent questions\nlike \"Under\na random partition, what is the probability that a particular\ntable entry is at\nleast as large as a given value?\" can be meaningfully posed. \n\n\nThe Fisher Exact Test addresses this question. The idea is to compute the\nprobability of a particular entry of the table, conditioned upon the marginal\nrow and column sums,\n\n$$\n\\mathbb{P}(X_{i,j}\\vert r_1,r_2,c_1,c_2)\n$$\n\n where $X_{i,j}$ is $(i,j)$ table entry, $r_1$ represents the sum of\nthe first\nrow, $r_2$ represents the sum of the second row, $c_1$ represents the\nsum of the\nfirst column, and $c_2$ is the sum of the second column. This\nprobability is\ngiven by the *hypergeometric distribution*. Recall that the\nhypergeometric\ndistribution gives the probability of sampling (without\nreplacement) $k$ items\nfrom a population of $N$ items consisting of exactly two\ndifferent kinds of\nitems,\n\n$$\n\\mathbb{P}(X=k) = \\frac{\\binom{K}{k}\\binom{N-K}{n-k}}{\\binom{N}{n}}\n$$\n\n where $N$ is the population size, $K$ is the total number of possible\nfavorable\ndraws, $n$ is the number of draws, and $k$ is the number of observed\nfavorable\ndraws. With the corresponding identification of variables, the\nhypergeometric\ndistribution gives the desired conditional probability: $K=r_1,\nk=x, n= c_1,\nN=r_1+r_2$. \n\nIn the example of the Table [2](#tab:contingencyTable), the\nprobability for\n$x=13$ male infections among a population of $r_1=24$ males in a\ntotal\npopulation of $c_1=25$ infected persons, including $r_2=13$ females. The\n`scipy.stats` module has the Fisher Exact Test implemented as shown below,\n\n\n```python\nimport scipy.stats\ntable = [[13,11],[12,1]]\nodds_ratio, p_value=scipy.stats.fisher_exact(table)\nprint(p_value)\n```\n\n 0.02718387758955712\n\n\nThe default for `scipy.stats.fisher_exact` is the two-sided\ntest. The following\nresult is for the `less` option,\n\n\n```python\nimport scipy.stats\nodds_ratio, p_value=scipy.stats.fisher_exact(table,alternative='less')\nprint(p_value)\n```\n\n 0.018976707519532877\n\n\nThis means that the p-value is computed by summing over the\nprobabilities of\ncontingency tables that are *less* extreme than the \ngiven table. 
To undertand\nwhat this means, we can use \nthe `scipy.stats.hypergeom` function to compute the\nprobabilities of\nthese with the number of infected men is less than or equal to\n13.\n\n\n```python\nhg = scipy.stats.hypergeom(37, 24, 25)\nprobs = [(hg.pmf(i)) for i in range(14)]\nprint (probs)\nprint(sum(probs))\n```\n\n [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0014597467322717626, 0.017516960787261115]\n 0.018976707519532877\n\n\nThis is the same as the prior p-value result we obtained from\n`scipy.stats.fisher_exact`. Another option is `greater` which derives from the\nfollowing analogous summation,\n\n\n```python\nodds_ratio, p_value=scipy.stats.fisher_exact(table,alternative='greater')\nprobs = [hg.pmf(i) for i in range(13,25)]\nprint(probs)\nprint(p_value)\nprint(sum(probs))\n```\n\n [0.017516960787261115, 0.08257995799708828, 0.2018621195484381, 0.28386860561499044, 0.24045340710916852, 0.12467954442697629, 0.039372487713781906, 0.00738234144633414, 0.0007812001530512284, 4.261091743915799e-05, 1.0105355914424832e-06, 7.017608273906114e-09]\n 0.9985402532677288\n 0.9985402532677288\n\n\nFinally, the two-sided version excludes those individual\ntable probabilities\nthat are less that of the given table\n\n\n```python\n_,p_value=scipy.stats.fisher_exact(table)\nprobs = [ hg.pmf(i) for i in range(25) ]\nprint(sum(i for i in probs if i<= hg.pmf(13)))\nprint(p_value)\n```\n\n 0.027183877589557117\n 0.02718387758955712\n\n\nThus, for this particular contingency table, we \ncould reasonably conclude that\n13 infected males in this total\npopulation is statistically significant with a\np-value less than\nfive percent.\n\nPerforming this kind of analysis for tables\nlarger than `2x2` easily becomes\ncomputationally challenging due to the nature\nof the underlying combinatorics and \nusually requires specialized\napproximations.\n\nIn this section, we discussed the structure of statistical\nhypothesis testing\nand defined the various terms that are commonly used for\nthis process, along\nwith the illustrations of what they mean in our running\ncoin-flipping example.\nFrom an engineering standpoint, hypothesis testing is not\nas common as\nconfidence-intervals and point estimates. On the other hand,\nhypothesis testing\nis very common in social and medical science, where one must\ndeal with\npractical constraints that may limit the sample size or other aspects\nof the\nhypothesis testing rubric. In engineering, we can usually have much more\ncontrol over the samples and models we employ because they are typically\ninanimate objects that can be measured repeatedly and consistently. 
This is\nobviously not so with human studies, which generally have other ethical and\nlegal considerations.\n", "meta": {"hexsha": "0bce87756282d9253c0086ad88ae91d3338308b8", "size": 268220, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter/statistics/Hypothesis_Testing.ipynb", "max_stars_repo_name": "derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E", "max_stars_repo_head_hexsha": "9d12a298d43ae285d9549a79bb5544cf0a9b7516", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 224, "max_stars_repo_stars_event_min_datetime": "2019-05-07T08:56:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T15:50:41.000Z", "max_issues_repo_path": "chapter/statistics/Hypothesis_Testing.ipynb", "max_issues_repo_name": "derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E", "max_issues_repo_head_hexsha": "9d12a298d43ae285d9549a79bb5544cf0a9b7516", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2019-08-27T12:57:17.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-21T15:45:13.000Z", "max_forks_repo_path": "chapter/statistics/Hypothesis_Testing.ipynb", "max_forks_repo_name": "derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E", "max_forks_repo_head_hexsha": "9d12a298d43ae285d9549a79bb5544cf0a9b7516", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 73, "max_forks_repo_forks_event_min_datetime": "2019-05-25T07:15:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-07T00:22:37.000Z", "avg_line_length": 144.8272138229, "max_line_length": 176652, "alphanum_fraction": 0.8746253076, "converted": true, "num_tokens": 12903, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. YES", "lm_q1_score": 0.47657963619520866, "lm_q2_score": 0.5350984286266116, "lm_q1q2_score": 0.2550170144434984}} {"text": "\n\n\n# PHY321: Classical Mechanics 1\n**Homework 7, due Monday March 25**\n\nDate: **Mar 16, 2022**\n\n### Practicalities about homeworks and projects\n\n1. You can work in groups (optimal groups are often 2-3 people) or by yourself. If you work as a group you can hand in one answer only if you wish. **Remember to write your name(s)**!\n\n2. Homeworks are available the week before the deadline. \n\n3. How do I(we) hand in? Due to the corona virus and many of you not being on campus, we recommend that you scan your handwritten notes and upload them to D2L. If you are ok with typing mathematical formulae using say Latex, you can hand in everything as a single jupyter notebook at D2L. The numerical exercise(s) should always be handed in as a jupyter notebook by the deadline at D2L.\n\n### Introduction to homework 7\n\nIn this week's homework we will apply our insights about harmonic\noscillations. The relevant material to survey is chapter 5 of Taylor.\nSee also the slides from [week 11](https://mhjensen.github.io/Physics321/doc/pub/week10/html/week10.html).\n\nWe have also added an exercise (exercise 2) related to our discussion of two-body problems. \nThe relevant reading background for exercise 2 is given by sections 8.1-8.2 of Taylor.\n\n### Exercise 1 (80 pts), the mathematical pendulum\n\nRelevant reading here is Taylor chapter 5 and the lecture notes on oscillations from [week 11](https://mhjensen.github.io/Physics321/doc/pub/week10/html/week10.html).\n\nThe angular equation of motion of the pendulum is given by\nNewton's equation and with no external force it reads\n\n\n
\n\n$$\n\\begin{equation}\n ml\\frac{d^2\\theta}{dt^2}+mg\\sin(\\theta)=0,\n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\nwith an angular velocity and acceleration given by\n\n\n
\n\n$$\n\\begin{equation}\n v=l\\frac{d\\theta}{dt},\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nand\n\n\n
\n\n$$\n\\begin{equation}\n a=l\\frac{d^2\\theta}{dt^2}.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\nWe do, however, expect that the motion will gradually come to an end\ndue to a viscous drag torque acting on the pendulum. \nIn the presence of the drag, the above equation becomes\n\n\n
\n\n$$\n\\begin{equation}\n ml\\frac{d^2\\theta}{dt^2}+\\nu\\frac{d\\theta}{dt} +mg\\sin(\\theta)=0,\n\\label{eq:pend1} \\tag{4}\n\\end{equation}\n$$\n\nwhere $\\nu$ is now a positive constant parameterizing the viscosity\nof the medium in question. In order to maintain the motion against\nviscosity, it is necessary to add some external driving force. \nWe choose here a periodic driving force. The last equation then becomes\n\n\n
\n\n$$\n\\begin{equation}\n ml\\frac{d^2\\theta}{dt^2}+\\nu\\frac{d\\theta}{dt} +mgsin(\\theta)=Asin(\\omega t),\n\\label{eq:pend2} \\tag{5}\n\\end{equation}\n$$\n\nwith $A$ and $\\omega$ two constants representing the amplitude and \nthe angular frequency respectively. The latter is called the driving frequency.\n\n* 1a (10pts)\n\nRewrite Eqs. ([4](#eq:pend1)) and ([5](#eq:pend2)) as dimensionless\nequations in time. \n\n* 1b (40pts)\n\nWrite then a code which solves Eq. ([4](#eq:pend1)) using the\nEuler-Cromer method or for example the fourth-order Runge Kutta method. Perform\ncalculations for at least ten periods with $N=100$, $N=1000$ and\n$N=10000$ mesh points and values of $\\nu = 1$, $\\nu = 5$ and $\\nu\n=10$. Set $l=1.0$ m, $g=1$ m/s$^2$ and $m=1$ kg. Choose as initial\nconditions $\\theta(0) = 0.2$ (radians) and $v(0) = 0$ (radians/s).\nMake plots of $\\theta$ (in radians) as function of time and phase\nspace plots of $\\theta$ versus the velocity $v$. Check the stability\nof your results as functions of time and number of mesh points. Which\ncase corresponds to damped, underdamped and overdamped oscillatory\nmotion? Comment your results.\n\n* 1c (30pts) \n\nNow we switch to Eq. ([5](#eq:pend2)) for the rest of the exercise. Add\nan external driving force and set $l=g=1$, $m=1$, $\\nu = 1/2$ and\n$\\omega = 2/3$. Choose as initial conditions $\\theta(0) = 0.2$ and\n$v(0) = 0$ and $A=0.5$ and $A=1.2$. Make plots of $\\theta$ (in\nradians) as function of time for at least 300 periods and phase space\nplots of $\\theta$ versus the velocity $v$. Choose an appropriate time\nstep. Comment and explain the results for the different values of $A$.\n\n* 1d **optional exercise** (20pts bonus) \n\nKeep now the constants from the previous exercise fixed but\nset now $A=1.35$, $A=1.44$ and $A=1.465$. Plot $\\theta$ (in radians)\nas function of time for at least 300 periods for these values of $A$\nand comment your results.\n\n* 1e **optional exercise** (20pts bonus)\n\nWe want to analyse further these results by making phase space plots\nof $\\theta$ versus the velocity $v$ using only the points where we\nhave $\\omega t=2n\\pi$ where $n$ is an integer. These are normally\ncalled the drive periods. This is an example of what is called a\nPoincare section and is a very useful way to plot and analyze the\nbehavior of a dynamical system. Comment your results.\n\n### Exercise 2 (20pt), Center-of-Mass and Relative Coordinates and Reference Frames\n\nWe define the two-body center-of-mass coordinate and relative coordinate by expressing the trajectories for\n$\\boldsymbol{r}_1$ and $\\boldsymbol{r}_2$ into the center-of-mass coordinate\n$\\boldsymbol{R}_{\\rm cm}$\n\n$$\n\\boldsymbol{R}_{\\rm cm}\\equiv\\frac{m_1\\boldsymbol{r}_1+m_2\\boldsymbol{r}_2}{m_1+m_2},\n$$\n\nand the relative coordinate\n\n$$\n\\boldsymbol{r}\\equiv\\boldsymbol{r}_1-\\boldsymbol{r_2}.\n$$\n\nHere, we assume the two particles interact only with one another, so $\\boldsymbol{F}_{12}=-\\boldsymbol{F}_{21}$ (where $\\boldsymbol{F}_{ij}$ is the force on $i$ due to $j$.\n\n* 2a (5pt) Show that the equations of motion then become $\\ddot{\\boldsymbol{R}}_{\\rm cm}=0$ and $\\mu\\ddot{\\boldsymbol{r}}=\\boldsymbol{F}_{12}$, with the reduced mass $\\mu=m_1m_2/(m_1+m_2)$.\n\nThe first expression simply states that the center of mass coordinate $\\boldsymbol{R}_{\\rm cm}$ moves at a fixed velocity. 
The second expression can be rewritten in terms of the reduced mass $\\mu$.\n\n* 2b (5pt) Show that the linear momenta for the center-of-mass $\\boldsymbol{P}$ motion and the relative motion $\\boldsymbol{q}$ are given by $\\boldsymbol{P}=M\\dot{\\boldsymbol{R}}_{\\rm cm}$ with $M=m_1+m_2$ and $\\boldsymbol{q}=\\mu\\dot{\\boldsymbol{r}}$. The linear momentum of the relative motion is defined $\\boldsymbol{q} = (m_2\\boldsymbol{p}_1-m_1\\boldsymbol{p}_2)/(m_1+m_2)$.\n\n* 2c (5pt) Show then the kinetic energy for two objects can then be written as\n\n$$\nK=\\frac{P^2}{2M}+\\frac{q^2}{2\\mu}.\n$$\n\n* 2d (5pt) Show that the total angular momentum for two-particles in the center-of-mass frame $\\boldsymbol{R}=0$, is given by\n\n$$\n\\boldsymbol{L}=\\boldsymbol{r}\\times \\mu\\dot{\\boldsymbol{r}}.\n$$\n\n### Classical Mechanics Extra Credit Assignment: Scientific Writing and attending Talks\n\nThe following gives you an opportunity to earn **five extra credit\npoints** on each of the remaining homeworks and **ten extra credit points**\non the midterms and finals. This assignment also covers an aspect of\nthe scientific process that is not taught in most undergraduate\nprograms: scientific writing. Writing scientific reports is how\nscientist communicate their results to the rest of the field. Knowing\nhow to assemble a well written scientific report will greatly benefit\nyou in you upper level classes, in graduate school, and in the work\nplace.\n\nThe full information on extra credits is found at . There you will also find examples on how to write a scientific article. \nBelow you can also find a description on how to gain extra credits by attending scientific talks.\n\nThis assignment allows you to gain extra credit points by practicing\nyour scientific writing. For each of the remaining homeworks you can\nsubmit the specified section of a scientific report (written about the\nnumerical aspect of the homework) for five extra credit points on the\nassignment. For the two midterms and the final, submitting a full\nscientific report covering the numerical analysis problem will be\nworth ten extra points. For credit the grader must be able to tell\nthat you put effort into the assignment (i.e. well written, well\nformatted, etc.). If you are unfamiliar with writing scientific\nreports, [see the information here](https://github.com/mhjensen/Physics321/blob/master/doc/Homeworks/ExtraCredits/IntroductionScientificWriting.md)\n\nThe following table explains what aspect of a scientific report is due\nwith which homework. You can submit the assignment in any format you\nlike, in the same document as your homework, or in a different one.\nRemember to cite any external references you use and include a\nreference list. There are no length requirements, but make sure what\nyou turn in is complete and through. If you have any questions,\nplease contact Julie Butler at butler@frib.msu.edu.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
| HW/Project | Due Date | Extra Credit Assignment |
| --- | --- | --- |
| HW 3 | 2-8 | Abstract |
| HW 4 | 2-15 | Introduction |
| HW 5 | 2-22 | Methods |
| HW 6 | 3-1 | Results and Discussion |
| **Midterm 1** | **3-12** | *Full Written Report* |
| HW 7 | 3-22 | Abstract |
| HW 8 | 3-29 | Introduction |
| HW 9 | 4-5 | Results and Discussion |
| **Midterm 2** | **4-16** | *Full Written Report* |
| HW 10 | 4-26 | Abstract |
| **Final** | **4-30** | *Full Written Report* |
\n\nYou can also gain extra credits if you attend scientific talks.\nThis is described here.\n\n### Integrating Classwork With Research\n\nThis opportunity will allow you to earn up to 5 extra credit points on a Homework per week. These points can push you above 100% or help make up for missed exercises.\nIn order to earn all points you must:\n\n1. Attend an MSU research talk (recommended research oriented Clubs is provided below)\n\n2. Summarize the talk using at least 150 words\n\n3. Turn in the summary along with your Homework.\n\nApproved talks:\nTalks given by researchers through the following clubs:\n* Research and Idea Sharing Enterprise (RAISE)\u200b: Meets Wednesday Nights Society for Physics Students (SPS)\u200b: Meets Monday Nights\n\n* Astronomy Club\u200b: Meets Monday Nights\n\n* Facility For Rare Isotope Beam (FRIB) Seminars: \u200bOccur multiple times a week\n\nIf you have any questions please consult Jeremy Rebenstock, rebensto@msu.edu.\n\nAll the material on extra credits is at .\n", "meta": {"hexsha": "bd9942c2133938441628d03589c11969337d1e01", "size": 17500, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/Homeworks/hw7/ipynb/hw7.ipynb", "max_stars_repo_name": "Shield94/Physics321", "max_stars_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/Homeworks/hw7/ipynb/hw7.ipynb", "max_issues_repo_name": "Shield94/Physics321", "max_issues_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/Homeworks/hw7/ipynb/hw7.ipynb", "max_forks_repo_name": "Shield94/Physics321", "max_forks_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.4732334047, "max_line_length": 400, "alphanum_fraction": 0.5813142857, "converted": true, "num_tokens": 3405, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. YES", "lm_q1_score": 0.4571367168274948, "lm_q2_score": 0.5544704649604273, "lm_q1q2_score": 0.25346880792982424}} {"text": "Lambda School Data Science\n\n*Unit 4, Sprint 2, Module 2*\n\n---\n\n# Train (Prepare)\n__*Neural Network Foundations*__\n\n## Learning Objectives\n* Part 1: Student should be able to explain the intuition behind backpropagation and gradient descent\n* Part 2: Student should be able to discuss the importance of batch size\n* Part 3: Student should be able to discuss the importance of learning rate\n\n## Summary of Yesterday\n\nYesterday, we learned about some of the principal components of Neural Networks: Neurons, Weights, Activation Functions, and layers (input, output, & hidden). Today, we will reinforce our understanding of those components and introduce the mechanics of training a neural network. Feed-forward neural networks, such as multi-layer perceptrons (MLPs), are almost always trained using some variation of gradient descent where the gradient has been calculated by backpropagation.\n\n
\n\n- There are three kinds of layers: input, hidden, and output layers.\n- Each layer is made up of **n** individual neurons (aka activation units) which have a corresponding weight and bias.\n- Signal is passed from layer to layer through a network by:\n - Taking in inputs from the training data (or previous layer)\n - Multiplying each input by its corresponding weight (think arrow/connecting line)\n - Adding a bias to this weighted some of inputs and weights\n - Activating this weighted sum + bias by squishifying it with sigmoid or some other activation function. With a single perceptron with three inputs, calculating the output from the node is done like so:\n\\begin{align}\n y = sigmoid(\\sum(weight_{1}input_{1} + weight_{2}input_{2} + weight_{3}input_{3}) + bias)\n\\end{align}\n - this final activated value is the signal that gets passed onto the next layer of the network.\n \n\n## Training a Neural Network: *Formal Summary*\n\n0. Pick a network architecture\n - No. of input units = No. of features\n - No. of output units = Number of Classes (or expected targets)\n - Select the number of hidden layers and number of neurons within each hidden layer\n1. Randomly initialize weights\n2. Implement forward propagation to get $h_{\\theta}(x^{(i)})$ for any $x^{(i)}$\n3. Implement code to compute a cost function $J(\\theta)$\n4. Implement backpropagation to compute partial derivatives $\\frac{\\delta}{\\delta\\theta_{jk}^{l}}{J(\\theta)}$\n5. Use gradient descent (or other advanced optimizer) with backpropagation to minimize $J(\\theta)$ as a function of parameters $\\theta\\$\n6. Repeat steps 2 - 5 until cost function is 'minimized' or some other stopping criteria is met. One pass over steps 2 - 5 is called an iteration or epoch.\n\n# Backpropagation & Gradient Descent (Learn)\n\n\n## Overview\n\nBackpropagation is short for [\"Backwards Propagation of errors\"](https://en.wikipedia.org/wiki/Backpropagation) and refers to a specific (rather calculus intensive) algorithm for how weights in a neural network are updated in reverse order at the end of each training epoch. Our purpose today is to demonstrate the backpropagation algorithm on a simple Feedforward Neural Network and in so doing help you get a grasp on the main process. If you want to understand all of the underlying calculus of how the gradients are calculated then you'll need to dive into it yourself, [3Blue1Brown's video is a great starting place](https://www.youtube.com/watch?v=tIeHLnjs5U8). I also highly recommend this Welch Labs series [Neural Networks Demystified](https://www.youtube.com/watch?v=bxe2T-V8XRs) if you want a rapid yet orderly walk through of the main intuitions and math behind the backpropagation algorithm. \n\n### What is a Gradient?\n\n> In vector calculus, the gradient is a multi-variable generalization of the derivative. \n\nThe gradients that we will deal with today will be vector representations of the derivative of the activation function. \n\n## Follow Along\n\nIn this section, we will again a simple neural network using base TensorFlow. We'll focus on using a __Feed Forward Neural Network__ to predict test scores. \n\n
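As a quick illustration of the single-neuron computation summarized at the top of this notebook, here is a minimal NumPy sketch (not part of the graded TensorFlow workflow; the input, weight, and bias values are made up for illustration). It simply multiplies the inputs by their weights, adds the bias, and squashes the result with sigmoid:\n\n```python\nimport numpy as np\n\ndef sigmoid(z):\n    # squash any real number into the interval (0, 1)\n    return 1.0 / (1.0 + np.exp(-z))\n\n# a single neuron with three inputs, three weights, and one bias\ninputs = np.array([0.5, -1.2, 3.0])\nweights = np.array([0.1, 0.4, -0.2])\nbias = 0.05\n\n# weighted sum of inputs and weights, plus bias, then activation\noutput = sigmoid(np.dot(weights, inputs) + bias)\nprint(output)\n```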
\n\n### Generate some Fake Data\n\n\n```python\nimport tensorflow as tf\n\n# Imagine that our data is drawn from a linear function\n# y = 3*hours_studying + 50\n\nTRUE_W = 3.5\nTRUE_b = 50.0\nNUM_EXAMPLES = 1000\n\ninputs = tf.random.normal(shape=[NUM_EXAMPLES])\nnoise = tf.random.normal(shape=[NUM_EXAMPLES])\n\noutputs = inputs * TRUE_W + TRUE_b + noise\n```\n\n### Loss Function\nHere we will use Mean Squared Error (MSE), because this is a regression problem. We are trying to predict a continuous target.\n\n\n```python\ndef loss(target_y, predicted_y):\n \"MSE\"\n return tf.reduce_mean(tf.square(target_y - predicted_y))\n```\n\n### Neural Network Architecture\nLets create a Neural Network class called \"Model\" to contain this functionality. Note: This is essentially a linear regression whose coefficients are trained by gradient descent. In practice, gradient descent works on much more complex function like the multi-layer networks we constructed yesterday.\n\n\n```python\nclass Model(object):\n\n def __init__(self):\n self.W = tf.Variable(8.0)\n self.b = tf.Variable(40.0)\n\n def __call__(self, x):\n return self.W * x + self.b\n\nmodel = Model()\n\nassert model(3.0).numpy() == 64.0\n```\n\n### Initial Weights\nThe initial weights in our model were arbitrary. In practice, weights are initialized randomly. \n\n\n```python\nimport matplotlib.pyplot as plt\n\nplt.scatter(inputs, outputs, c='b')\nplt.scatter(inputs, model(inputs), c='r')\nplt.show()\n\nprint('Current loss: %1.6f' % loss(model(inputs), outputs).numpy())\n```\n\n### Update Weights Based on Gradient\n\n> *Assigning blame for bad predictions and delivering justice - repeatedly and a little bit at a time*\n\nYou should also know that with neural networks it is common to have gradients that are not convex (like what we saw when we applied gradient descent to linear regression). 
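\n\nFor reference, the update that the `train` function below applies is plain gradient descent: each parameter is moved a small step against its own gradient,\n\n\\begin{align}\nW \\leftarrow W - \\eta \\frac{\\partial J}{\\partial W}, \\qquad b \\leftarrow b - \\eta \\frac{\\partial J}{\\partial b},\n\\end{align}\n\nwhere $\\eta$ is the `learning_rate` argument and $J$ is the MSE loss defined above (the symbols $\\eta$ and $J$ are just notation for this note). In the code, this is exactly what `assign_sub(learning_rate * dW)` and `assign_sub(learning_rate * db)` do.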
\n\nDue to the high complexity of these models and their nonlinearity, it is common for gradient descent to get stuck in a local minimum, but there are ways to combat this:\n\n1) Stochastic Gradient Descent\n\n2) More advanced Gradient-Descent-based \"Optimizers\" - See Stretch Goals on assignment.\n\n\n```python\n def train(model, inputs, outputs, learning_rate):\n with tf.GradientTape() as t: \n current_loss = loss(outputs, model(inputs))\n dW, db = t.gradient(current_loss, [model.W, model.b])\n model.W.assign_sub(learning_rate * dW)\n model.b.assign_sub(learning_rate * db)\n```\n\n### Train the Network\n\n\n```python\nmodel = Model()\n\n# Store Some history of weights\nWs, bs = [], []\nepochs = range(10)\nfor epoch in epochs:\n Ws.append(model.W.numpy())\n bs.append(model.b.numpy())\n current_loss = loss(outputs, model(inputs))\n\n train(model, inputs, outputs, learning_rate=0.1)\n print('Epoch %2d: W=%1.2f b=%1.2f loss=%2.5f' % (epoch, Ws[-1], bs[-1], current_loss))\n```\n\n Epoch 0: W=8.00 b=40.00 loss=127.54656\n Epoch 1: W=6.97 b=42.05 loss=80.43711\n Epoch 2: W=6.17 b=43.68 loss=50.89282\n Epoch 3: W=5.56 b=44.98 loss=32.35456\n Epoch 4: W=5.08 b=46.02 loss=20.71602\n Epoch 5: W=4.71 b=46.84 loss=13.40525\n Epoch 6: W=4.43 b=47.50 loss=8.81047\n Epoch 7: W=4.21 b=48.02 loss=5.92111\n Epoch 8: W=4.04 b=48.44 loss=4.10316\n Epoch 9: W=3.91 b=48.77 loss=2.95870\n\n\n\n```python\nimport matplotlib.pyplot as plt\n\nplt.plot(epochs, Ws, 'r', epochs, bs, 'b')\nplt.plot([TRUE_W] * len(epochs), 'r--',\n [TRUE_b] * len(epochs), 'b--')\nplt.legend(['W', 'b', 'True W', 'True b'])\nplt.show()\n```\n\n## Challenge\n\nIn the module project, you will be asked to explain the logic of backpropagation and gradient descent.\n\n### In the following two sections we'll look at batch size and learning rate hyperparameters in isolation. \nHowever, it's important to know that recent research found some interesting relationships between batch sizes and learning rates. The best available suggestion today is to scale batch size proportionally to a learning rate:\n- https://openreview.net/pdf?id=B1Yy1BxCZ\n- https://papers.nips.cc/paper/8398-control-batch-size-and-learning-rate-to-generalize-well-theoretical-and-empirical-evidence.pdf\n\n# Batch Size (Learn)\n\n## Overview\n\nThe What - Stochastic Gradient Descent calculates an approximation of the gradient over the entire dataset by reviewing the predictions of a random sample. \n\nThe Why - *Speed*. Calculating the gradient over the entire dataset is extremely expensive computationally. \n\n### Batch Size\nBatches are the number of observations our model is shown to make predictions and update the weights. Batches are selected randomly during epoch. 
All observations are considered when passing thru an epoch at some point.\n\n### Baseline Model\n\n\n```python\nfrom tensorflow.keras.datasets import mnist\n\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n\nX_train = X_train / 255.\nX_test = X_test / 255.\n\nX_train = X_train.reshape((60000, 784))\nX_test = X_test.reshape((10000, 784))\n```\n\n\n```python\n# Our Model\n\n```\n\n## Follow Along\nLet's run a series of experiments for a default, small, and large batch size.\n\n### Default\nBatch Size is 32\n\n\n```python\n\n```\n\n### Small Batch Size\nBatch Size is 8\n\n\n```python\n\n```\n\n### Large Batch Size\nBatch Size is 512\n\n\n```python\n\n```\n\n### Visualization of Results\n\n\n```python\n\n```\n\n## Challenge\n\nYou will be expected to experiment with batch size on today's assignment.\n\n# Learning Rate (Learn)\n\n## Overview\n\nLearning Rate controls the size of the update to our weights that the optimization algorithm makes. VERY IMPORTANT hyperparameter.\n\n* Too high of a learning rate causes unstable results\n* Too Low of a learning rate the model will underfit\n* Goldy Locks parameters - it needs be \"just right\"\n* Scale of 0-1\n\n## Follow Along\n\nSame experiment with Batch but different learning rates:\n* High Learning = .75\n* Default Learning = .01\n* Low Learning Rate = .0001\n\n### Default Learning Rate\n\n\n```python\n\n```\n\n### High Learning Rate\n\n\n```python\n\n```\n\n### Low Learning Rate\n\n\n```python\n\n```\n\n### Visualization of Results\n\n\n```python\n\n```\n\n## Challenge\n\nYou will be expected to experiment with different learning rates today.\n\n---\n", "meta": {"hexsha": "4c979b16757b7901e82631ef9d198518cd51413d", "size": 42852, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "DataScience/DS-Unit-4-Sprint-2-Neural-Networks/module2-Train/LS_DS_422_Train_Lecture.ipynb", "max_stars_repo_name": "dustin-py/Education", "max_stars_repo_head_hexsha": "96ad77b9859278fdd9d3fc6a37f1568696af4dfa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-12-04T22:01:58.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-28T18:13:44.000Z", "max_issues_repo_path": "DataScience/DS-Unit-4-Sprint-2-Neural-Networks/module2-Train/LS_DS_422_Train_Lecture.ipynb", "max_issues_repo_name": "dustin-py/Education", "max_issues_repo_head_hexsha": "96ad77b9859278fdd9d3fc6a37f1568696af4dfa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DataScience/DS-Unit-4-Sprint-2-Neural-Networks/module2-Train/LS_DS_422_Train_Lecture.ipynb", "max_forks_repo_name": "dustin-py/Education", "max_forks_repo_head_hexsha": "96ad77b9859278fdd9d3fc6a37f1568696af4dfa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.9690976514, "max_line_length": 11380, "alphanum_fraction": 0.7658452348, "converted": true, "num_tokens": 2544, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
YES", "lm_q1_score": 0.476579651063676, "lm_q2_score": 0.5273165233795671, "lm_q1q2_score": 0.2513083247123449}} {"text": "```python\n%matplotlib inline\n```\n\n\nTraining Transformer models using Distributed Data Parallel and Pipeline Parallelism\n====================================================================================\n\n**Author**: `Pritam Damania `_\n\nThis tutorial demonstrates how to train a large Transformer model across\nmultiple GPUs using `Distributed Data Parallel `__ and\n`Pipeline Parallelism `__. This tutorial is an extension of the\n`Sequence-to-Sequence Modeling with nn.Transformer and TorchText `__ tutorial\nand scales up the same model to demonstrate how Distributed Data Parallel and\nPipeline Parallelism can be used to train Transformer models.\n\nPrerequisites:\n\n * `Pipeline Parallelism `__\n * `Sequence-to-Sequence Modeling with nn.Transformer and TorchText `__\n * `Getting Started with Distributed Data Parallel `__\n\n\nDefine the model\n----------------\n\n\n\n\n``PositionalEncoding`` module injects some information about the\nrelative or absolute position of the tokens in the sequence. The\npositional encodings have the same dimension as the embeddings so that\nthe two can be summed. Here, we use ``sine`` and ``cosine`` functions of\ndifferent frequencies.\n\n\n\n\n```python\nimport sys\nimport os\nimport math\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport tempfile\nfrom torch.nn import TransformerEncoder, TransformerEncoderLayer\n\nclass PositionalEncoding(nn.Module):\n\n def __init__(self, d_model, dropout=0.1, max_len=5000):\n super(PositionalEncoding, self).__init__()\n self.dropout = nn.Dropout(p=dropout)\n\n pe = torch.zeros(max_len, d_model)\n position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)\n div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))\n pe[:, 0::2] = torch.sin(position * div_term)\n pe[:, 1::2] = torch.cos(position * div_term)\n pe = pe.unsqueeze(0).transpose(0, 1)\n self.register_buffer('pe', pe)\n\n def forward(self, x):\n x = x + self.pe[:x.size(0), :]\n return self.dropout(x)\n```\n\nIn this tutorial, we will split a Transformer model across two GPUs and use\npipeline parallelism to train the model. In addition to this, we use\n`Distributed Data Parallel `__\nto train two replicas of this pipeline. We have one process driving a pipe across\nGPUs 0 and 1 and another process driving a pipe across GPUs 2 and 3. Both these\nprocesses then use Distributed Data Parallel to train the two replicas. The\nmodel is exactly the same model used in the `Sequence-to-Sequence Modeling with nn.Transformer and TorchText\n`__ tutorial,\nbut is split into two stages. The largest number of parameters belong to the\n`nn.TransformerEncoder `__ layer.\nThe `nn.TransformerEncoder `__\nitself consists of ``nlayers`` of `nn.TransformerEncoderLayer `__.\nAs a result, our focus is on ``nn.TransformerEncoder`` and we split the model\nsuch that half of the ``nn.TransformerEncoderLayer`` are on one GPU and the\nother half are on another. 
To do this, we pull out the ``Encoder`` and\n``Decoder`` sections into seperate modules and then build an nn.Sequential\nrepresenting the original Transformer module.\n\n\n\n\n```python\nif sys.platform == 'win32':\n print('Windows platform is not supported for pipeline parallelism')\n sys.exit(0)\nif torch.cuda.device_count() < 4:\n print('Need at least four GPU devices for this tutorial')\n sys.exit(0)\n\nclass Encoder(nn.Module):\n def __init__(self, ntoken, ninp, dropout=0.5):\n super(Encoder, self).__init__()\n self.pos_encoder = PositionalEncoding(ninp, dropout)\n self.encoder = nn.Embedding(ntoken, ninp)\n self.ninp = ninp\n self.init_weights()\n\n def init_weights(self):\n initrange = 0.1\n self.encoder.weight.data.uniform_(-initrange, initrange)\n\n def forward(self, src):\n # Need (S, N) format for encoder.\n src = src.t()\n src = self.encoder(src) * math.sqrt(self.ninp)\n return self.pos_encoder(src)\n\nclass Decoder(nn.Module):\n def __init__(self, ntoken, ninp):\n super(Decoder, self).__init__()\n self.decoder = nn.Linear(ninp, ntoken)\n self.init_weights()\n\n def init_weights(self):\n initrange = 0.1\n self.decoder.bias.data.zero_()\n self.decoder.weight.data.uniform_(-initrange, initrange)\n\n def forward(self, inp):\n # Need batch dimension first for output of pipeline.\n return self.decoder(inp).permute(1, 0, 2)\n```\n\nStart multiple processes for training\n-------------------------------------\n\n\n\n\nWe start two processes where each process drives its own pipeline across two\nGPUs. ``run_worker`` is executed for each process.\n\n\n\n\n```python\ndef run_worker(rank, world_size):\n```\n\nLoad and batch data\n-------------------\n\n\n\n\nThe training process uses Wikitext-2 dataset from ``torchtext``. The\nvocab object is built based on the train dataset and is used to numericalize\ntokens into tensors. 
Starting from sequential data, the ``batchify()``\nfunction arranges the dataset into columns, trimming off any tokens remaining\nafter the data has been divided into batches of size ``batch_size``.\nFor instance, with the alphabet as the sequence (total length of 26)\nand a batch size of 4, we would divide the alphabet into 4 sequences of\nlength 6:\n\n\\begin{align}\\begin{bmatrix}\n \\text{A} & \\text{B} & \\text{C} & \\ldots & \\text{X} & \\text{Y} & \\text{Z}\n \\end{bmatrix}\n \\Rightarrow\n \\begin{bmatrix}\n \\begin{bmatrix}\\text{A} \\\\ \\text{B} \\\\ \\text{C} \\\\ \\text{D} \\\\ \\text{E} \\\\ \\text{F}\\end{bmatrix} &\n \\begin{bmatrix}\\text{G} \\\\ \\text{H} \\\\ \\text{I} \\\\ \\text{J} \\\\ \\text{K} \\\\ \\text{L}\\end{bmatrix} &\n \\begin{bmatrix}\\text{M} \\\\ \\text{N} \\\\ \\text{O} \\\\ \\text{P} \\\\ \\text{Q} \\\\ \\text{R}\\end{bmatrix} &\n \\begin{bmatrix}\\text{S} \\\\ \\text{T} \\\\ \\text{U} \\\\ \\text{V} \\\\ \\text{W} \\\\ \\text{X}\\end{bmatrix}\n \\end{bmatrix}\\end{align}\n\nThese columns are treated as independent by the model, which means that\nthe dependence of ``G`` and ``F`` can not be learned, but allows more\nefficient batch processing.\n\n\n\n\n\n```python\n# In 'run_worker'\n def print_with_rank(msg):\n print('[RANK {}]: {}'.format(rank, msg))\n\n from torchtext.datasets import WikiText2\n from torchtext.data.utils import get_tokenizer\n from torchtext.vocab import build_vocab_from_iterator\n\n train_iter = WikiText2(split='train')\n tokenizer = get_tokenizer('basic_english')\n vocab = build_vocab_from_iterator(map(tokenizer, train_iter), specials=[\"\"])\n vocab.set_default_index(vocab[\"\"])\n\n def data_process(raw_text_iter):\n data = [torch.tensor(vocab(tokenizer(item)), dtype=torch.long) for item in raw_text_iter]\n return torch.cat(tuple(filter(lambda t: t.numel() > 0, data)))\n\n train_iter, val_iter, test_iter = WikiText2()\n train_data = data_process(train_iter)\n val_data = data_process(val_iter)\n test_data = data_process(test_iter)\n\n device = torch.device(2 * rank)\n\n def batchify(data, bsz, rank, world_size, is_train=False):\n # Divide the dataset into bsz parts.\n nbatch = data.size(0) // bsz\n # Trim off any extra elements that wouldn't cleanly fit (remainders).\n data = data.narrow(0, 0, nbatch * bsz)\n # Evenly divide the data across the bsz batches.\n data = data.view(bsz, -1).t().contiguous()\n # Divide the data across the ranks only for training data.\n if is_train:\n data_per_rank = data.size(0) // world_size\n data = data[rank * data_per_rank : (rank + 1) * data_per_rank]\n return data.to(device)\n\n batch_size = 20\n eval_batch_size = 10\n train_data = batchify(train_data, batch_size, rank, world_size, True)\n val_data = batchify(val_data, eval_batch_size, rank, world_size)\n test_data = batchify(test_data, eval_batch_size, rank, world_size)\n```\n\nFunctions to generate input and target sequence\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\n\n\n``get_batch()`` function generates the input and target sequence for\nthe transformer model. It subdivides the source data into chunks of\nlength ``bptt``. For the language modeling task, the model needs the\nfollowing words as ``Target``. For example, with a ``bptt`` value of 2,\nwe\u2019d get the following two Variables for ``i`` = 0:\n\n\n\n\nIt should be noted that the chunks are along dimension 0, consistent\nwith the ``S`` dimension in the Transformer model. 
The batch dimension\n``N`` is along dimension 1.\n\n\n\n\n\n```python\n# In 'run_worker'\n bptt = 35\n def get_batch(source, i):\n seq_len = min(bptt, len(source) - 1 - i)\n data = source[i:i+seq_len]\n target = source[i+1:i+1+seq_len].view(-1)\n # Need batch dimension first for pipeline parallelism.\n return data.t(), target\n```\n\nModel scale and Pipe initialization\n-----------------------------------\n\n\n\n\nTo demonstrate training large Transformer models using pipeline parallelism,\nwe scale up the Transformer layers appropriately. We use an embedding\ndimension of 4096, hidden size of 4096, 16 attention heads and 8 total\ntransformer layers (``nn.TransformerEncoderLayer``). This creates a model with\n**~1 billion** parameters.\n\nWe need to initialize the `RPC Framework `__\nsince Pipe depends on the RPC framework via `RRef `__\nwhich allows for future expansion to cross host pipelining. We need to\ninitialize the RPC framework with only a single worker since we're using a\nsingle process to drive multiple GPUs.\n\nThe pipeline is then initialized with 8 transformer layers on one GPU and 8\ntransformer layers on the other GPU. One pipe is setup across GPUs 0 and 1 and\nanother across GPUs 2 and 3. Both pipes are then replicated using DistributedDataParallel.\n\n\n\n\n```python\n# In 'run_worker'\n ntokens = len(vocab) # the size of vocabulary\n emsize = 4096 # embedding dimension\n nhid = 4096 # the dimension of the feedforward network model in nn.TransformerEncoder\n nlayers = 8 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder\n nhead = 16 # the number of heads in the multiheadattention models\n dropout = 0.2 # the dropout value\n\n from torch.distributed import rpc\n tmpfile = tempfile.NamedTemporaryFile()\n rpc.init_rpc(\n name=\"worker\",\n rank=0,\n world_size=1,\n rpc_backend_options=rpc.TensorPipeRpcBackendOptions(\n init_method=\"file://{}\".format(tmpfile.name),\n # Specifying _transports and _channels is a workaround and we no longer\n # will have to specify _transports and _channels for PyTorch\n # versions >= 1.8.1\n _transports=[\"ibv\", \"uv\"],\n _channels=[\"cuda_ipc\", \"cuda_basic\"],\n )\n )\n\n # Num gpus for model parallelism.\n num_gpus = 2\n partition_len = ((nlayers - 1) // num_gpus) + 1\n\n # Add encoder in the beginning.\n tmp_list = [Encoder(ntokens, emsize, dropout).cuda(2 * rank)]\n module_list = []\n\n # Add all the necessary transformer blocks.\n for i in range(nlayers):\n transformer_block = TransformerEncoderLayer(emsize, nhead, nhid, dropout)\n if i != 0 and i % (partition_len) == 0:\n module_list.append(nn.Sequential(*tmp_list))\n tmp_list = []\n device = i // (partition_len)\n tmp_list.append(transformer_block.to(2 * rank + device))\n\n # Add decoder in the end.\n tmp_list.append(Decoder(ntokens, emsize).cuda(2 * rank + num_gpus - 1))\n module_list.append(nn.Sequential(*tmp_list))\n\n # Need to use 'checkpoint=never' since as of PyTorch 1.8, Pipe checkpointing\n # doesn't work with DDP.\n from torch.distributed.pipeline.sync import Pipe\n chunks = 8\n model = Pipe(torch.nn.Sequential(\n *module_list), chunks = chunks, checkpoint=\"never\")\n\n # Initialize process group and wrap model in DDP.\n from torch.nn.parallel import DistributedDataParallel\n import torch.distributed as dist\n os.environ['MASTER_ADDR'] = 'localhost'\n os.environ['MASTER_PORT'] = '29500'\n dist.init_process_group(\n backend=\"nccl\", rank=rank, world_size=world_size)\n model = DistributedDataParallel(model)\n\n def get_total_params(module: 
torch.nn.Module):\n total_params = 0\n for param in module.parameters():\n total_params += param.numel()\n return total_params\n\n print_with_rank('Total parameters in model: {:,}'.format(get_total_params(model)))\n```\n\nRun the model\n-------------\n\n\n\n\n`CrossEntropyLoss `__\nis applied to track the loss and\n`SGD `__\nimplements stochastic gradient descent method as the optimizer. The initial\nlearning rate is set to 5.0. `StepLR `__ is\napplied to adjust the learn rate through epochs. During the\ntraining, we use\n`nn.utils.clip_grad_norm\\_ `__\nfunction to scale all the gradient together to prevent exploding.\n\n\n\n\n\n```python\n# In 'run_worker'\n criterion = nn.CrossEntropyLoss()\n lr = 5.0 # learning rate\n optimizer = torch.optim.SGD(model.parameters(), lr=lr)\n scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)\n\n import time\n def train():\n model.train() # Turn on the train mode\n total_loss = 0.\n start_time = time.time()\n ntokens = len(vocab)\n\n # Train only for 50 batches to keep script execution time low.\n nbatches = min(50 * bptt, train_data.size(0) - 1)\n\n for batch, i in enumerate(range(0, nbatches, bptt)):\n data, targets = get_batch(train_data, i)\n optimizer.zero_grad()\n # Since the Pipe is only within a single host and process the ``RRef``\n # returned by forward method is local to this node and can simply\n # retrieved via ``RRef.local_value()``.\n output = model(data).local_value()\n # Need to move targets to the device where the output of the\n # pipeline resides.\n loss = criterion(output.view(-1, ntokens), targets.cuda(2 * rank + 1))\n loss.backward()\n torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)\n optimizer.step()\n\n total_loss += loss.item()\n log_interval = 10\n if batch % log_interval == 0 and batch > 0:\n cur_loss = total_loss / log_interval\n elapsed = time.time() - start_time\n print_with_rank('| epoch {:3d} | {:5d}/{:5d} batches | '\n 'lr {:02.2f} | ms/batch {:5.2f} | '\n 'loss {:5.2f} | ppl {:8.2f}'.format(\n epoch, batch, nbatches // bptt, scheduler.get_last_lr()[0],\n elapsed * 1000 / log_interval,\n cur_loss, math.exp(cur_loss)))\n total_loss = 0\n start_time = time.time()\n\n def evaluate(eval_model, data_source):\n eval_model.eval() # Turn on the evaluation mode\n total_loss = 0.\n ntokens = len(vocab)\n # Evaluate only for 50 batches to keep script execution time low.\n nbatches = min(50 * bptt, data_source.size(0) - 1)\n with torch.no_grad():\n for i in range(0, nbatches, bptt):\n data, targets = get_batch(data_source, i)\n output = eval_model(data).local_value()\n output_flat = output.view(-1, ntokens)\n # Need to move targets to the device where the output of the\n # pipeline resides.\n total_loss += len(data) * criterion(output_flat, targets.cuda(2 * rank + 1)).item()\n return total_loss / (len(data_source) - 1)\n```\n\nLoop over epochs. Save the model if the validation loss is the best\nwe've seen so far. 
Adjust the learning rate after each epoch.\n\n\n\n\n```python\n# In 'run_worker'\n best_val_loss = float(\"inf\")\n epochs = 3 # The number of epochs\n best_model = None\n\n for epoch in range(1, epochs + 1):\n epoch_start_time = time.time()\n train()\n val_loss = evaluate(model, val_data)\n print_with_rank('-' * 89)\n print_with_rank('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '\n 'valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time),\n val_loss, math.exp(val_loss)))\n print_with_rank('-' * 89)\n\n if val_loss < best_val_loss:\n best_val_loss = val_loss\n best_model = model\n\n scheduler.step()\n```\n\nEvaluate the model with the test dataset\n-------------------------------------\n\nApply the best model to check the result with the test dataset.\n\n\n\n\n```python\n# In 'run_worker'\n test_loss = evaluate(best_model, test_data)\n print_with_rank('=' * 89)\n print_with_rank('| End of training | test loss {:5.2f} | test ppl {:8.2f}'.format(\n test_loss, math.exp(test_loss)))\n print_with_rank('=' * 89)\n\n# Main execution\nimport torch.multiprocessing as mp\n\nif __name__==\"__main__\":\n world_size = 2\n mp.spawn(run_worker, args=(world_size, ), nprocs=world_size, join=True)\n```\n\nOutput\n------\n\n\n\n\n.. code-block:: py\n\n [RANK 0]: | epoch 1 | 10/ 50 batches | lr 5.00 | ms/batch 778.97 | loss 43.31 | ppl 6432469059895903232.00\n [RANK 1]: | epoch 1 | 10/ 50 batches | lr 5.00 | ms/batch 778.90 | loss 44.50 | ppl 21245447128217366528.00\n [RANK 0]: | epoch 1 | 20/ 50 batches | lr 5.00 | ms/batch 699.89 | loss 44.50 | ppl 21176949187407757312.00\n [RANK 1]: | epoch 1 | 20/ 50 batches | lr 5.00 | ms/batch 699.87 | loss 44.62 | ppl 23975861229620961280.00\n [RANK 0]: | epoch 1 | 30/ 50 batches | lr 5.00 | ms/batch 698.86 | loss 41.62 | ppl 1193312915629888256.00\n [RANK 1]: | epoch 1 | 30/ 50 batches | lr 5.00 | ms/batch 698.87 | loss 40.69 | ppl 471605759847546240.00\n [RANK 0]: | epoch 1 | 40/ 50 batches | lr 5.00 | ms/batch 698.34 | loss 45.20 | ppl 42812308420836458496.00\n [RANK 1]: | epoch 1 | 40/ 50 batches | lr 5.00 | ms/batch 698.33 | loss 45.68 | ppl 68839569686012223488.00\n [RANK 1]: -----------------------------------------------------------------------------------------\n [RANK 1]: | end of epoch 1 | time: 40.08s | valid loss 0.80 | valid ppl 2.22\n [RANK 1]: -----------------------------------------------------------------------------------------\n [RANK 0]: -----------------------------------------------------------------------------------------\n [RANK 0]: | end of epoch 1 | time: 40.09s | valid loss 0.80 | valid ppl 2.22\n [RANK 0]: -----------------------------------------------------------------------------------------\n [RANK 0]: | epoch 2 | 10/ 50 batches | lr 4.75 | ms/batch 768.51 | loss 36.34 | ppl 6063529544668166.00\n [RANK 1]: | epoch 2 | 10/ 50 batches | lr 4.75 | ms/batch 769.23 | loss 37.41 | ppl 17651211266236086.00\n [RANK 0]: | epoch 2 | 20/ 50 batches | lr 4.75 | ms/batch 699.57 | loss 28.97 | ppl 3798441739584.11\n [RANK 1]: | epoch 2 | 20/ 50 batches | lr 4.75 | ms/batch 699.56 | loss 29.28 | ppl 5203636967575.47\n [RANK 0]: | epoch 2 | 30/ 50 batches | lr 4.75 | ms/batch 699.04 | loss 28.43 | ppl 2212498693571.25\n [RANK 1]: | epoch 2 | 30/ 50 batches | lr 4.75 | ms/batch 699.05 | loss 28.33 | ppl 2015144761281.48\n [RANK 0]: | epoch 2 | 40/ 50 batches | lr 4.75 | ms/batch 699.10 | loss 23.30 | ppl 13121380184.92\n [RANK 1]: | epoch 2 | 40/ 50 batches | lr 4.75 | ms/batch 699.09 | loss 23.41 | ppl 14653799192.87\n 
[RANK 0]: -----------------------------------------------------------------------------------------\n [RANK 0]: | end of epoch 2 | time: 39.97s | valid loss 0.24 | valid ppl 1.27\n [RANK 0]: -----------------------------------------------------------------------------------------\n [RANK 1]: -----------------------------------------------------------------------------------------\n [RANK 1]: | end of epoch 2 | time: 39.98s | valid loss 0.24 | valid ppl 1.27\n [RANK 1]: -----------------------------------------------------------------------------------------\n [RANK 0]: | epoch 3 | 10/ 50 batches | lr 4.51 | ms/batch 769.36 | loss 12.80 | ppl 361681.11\n [RANK 1]: | epoch 3 | 10/ 50 batches | lr 4.51 | ms/batch 768.97 | loss 12.57 | ppl 287876.61\n [RANK 0]: | epoch 3 | 20/ 50 batches | lr 4.51 | ms/batch 698.27 | loss 12.01 | ppl 164364.60\n [RANK 1]: | epoch 3 | 20/ 50 batches | lr 4.51 | ms/batch 698.30 | loss 11.98 | ppl 159095.89\n [RANK 0]: | epoch 3 | 30/ 50 batches | lr 4.51 | ms/batch 697.75 | loss 10.90 | ppl 54261.91\n [RANK 1]: | epoch 3 | 30/ 50 batches | lr 4.51 | ms/batch 697.72 | loss 10.89 | ppl 53372.39\n [RANK 0]: | epoch 3 | 40/ 50 batches | lr 4.51 | ms/batch 699.49 | loss 10.78 | ppl 47948.35\n [RANK 1]: | epoch 3 | 40/ 50 batches | lr 4.51 | ms/batch 699.50 | loss 10.79 | ppl 48664.42\n [RANK 0]: -----------------------------------------------------------------------------------------\n [RANK 0]: | end of epoch 3 | time: 39.96s | valid loss 0.38 | valid ppl 1.46\n [RANK 0]: -----------------------------------------------------------------------------------------\n [RANK 1]: -----------------------------------------------------------------------------------------\n [RANK 1]: | end of epoch 3 | time: 39.96s | valid loss 0.38 | valid ppl 1.46\n [RANK 1]: -----------------------------------------------------------------------------------------\n [RANK 0]: =========================================================================================\n [RANK 0]: | End of training | test loss 0.33 | test ppl 1.39\n [RANK 0]: =========================================================================================\n [RANK 1]: =========================================================================================\n [RANK 1]: | End of training | test loss 0.33 | test ppl 1.39\n [RANK 1]: =========================================================================================\n\n\n\n", "meta": {"hexsha": "b54bddbd971806a866d43361f6b81b1ee3bc6e41", "size": 27875, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/_downloads/e081556e6a716888948cd45335518ed9/ddp_pipeline.ipynb", "max_stars_repo_name": "woojinsong/PyTorch-tutorials-kr", "max_stars_repo_head_hexsha": "36fefd556f45c2b1f5db912793172c0369430fd4", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 221, "max_stars_repo_stars_event_min_datetime": "2018-04-06T01:42:58.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-28T10:12:45.000Z", "max_issues_repo_path": "docs/_downloads/e081556e6a716888948cd45335518ed9/ddp_pipeline.ipynb", "max_issues_repo_name": "woojinsong/PyTorch-tutorials-kr", "max_issues_repo_head_hexsha": "36fefd556f45c2b1f5db912793172c0369430fd4", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 280, "max_issues_repo_issues_event_min_datetime": "2018-05-25T08:53:21.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-02T05:37:25.000Z", "max_forks_repo_path": "docs/_downloads/e081556e6a716888948cd45335518ed9/ddp_pipeline.ipynb", 
"max_forks_repo_name": "woojinsong/PyTorch-tutorials-kr", "max_forks_repo_head_hexsha": "36fefd556f45c2b1f5db912793172c0369430fd4", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 181, "max_forks_repo_forks_event_min_datetime": "2018-05-25T02:00:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-19T11:56:39.000Z", "avg_line_length": 106.8007662835, "max_line_length": 5099, "alphanum_fraction": 0.5776143498, "converted": true, "num_tokens": 6150, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. NO", "lm_q1_score": 0.5273165233795671, "lm_q2_score": 0.47657965106367595, "lm_q1q2_score": 0.2513083247123448}} {"text": "# Project 4 - Reinforcement Learning\n### Deadline - December 5, 11.59pm\n\nWelcome to your last assignment for CSE4/574 course! For this assignment you will implement a reinforcement learing algorithm - DQN, that will train the agent to play a game.\n\nAll the code deliverables has to be provided within this notebook.\n\n## 1 - Packages\n\nLet's first import all the packages that you will need during this assignment.\n\n- [random](https://docs.python.org/3/library/random.html) - generates pseudo-random numbers\n- [math](https://docs.python.org/3/library/math.html?highlight=math#module-math) - provides access to the a big variety of mathematical functions\n- [time](https://docs.python.org/3/library/time.html?highlight=time#module-time) - will be used to track how much time each computation takes\n- [numpy](http://www.numpy.org/) - is the main package for scientific computing with Python\n- [keras](https://keras.io/) - is a high-level neural networks API, we will use to biuild a neural network for our agent\n- [matplotlib](https://matplotlib.org/) - is a plotting library\n- [IPython](https://ipython.org/) - is an interactive shell, that will help us to display our framework\n\n\n```python\n#######################################################################\n# Authors: #\n# Nathan Margaglio (nathanmargaglio@gmail.com) #\n# Alina Vereshchaka (avereshc@buffalo.edu) #\n#######################################################################\n\nimport random, math, time\nimport numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import *\nfrom keras.optimizers import *\n\nimport matplotlib\n#matplotlib.use(\"Agg\")\nimport matplotlib.pyplot as plt\nfrom matplotlib.image import imread\nfrom matplotlib import rc, animation\nfrom IPython import display\nfrom IPython.display import HTML\n%matplotlib inline\n\ntry:\n from google.colab import files\nexcept:\n print(\"Could not import Google Colab.\")\n```\n\n Using TensorFlow backend.\n\n\n## 2 - Environment\nHere we define our grid-world environment.\nNo need to make any changes!\n\n\n```python\nclass Environment:\n\n def __init__(self, grid_size):\n self.grid_size = grid_size\n \n self.cat = imread('https://image.ibb.co/btGeAA/tom.png')\n self.mouse = imread('https://image.ibb.co/njNNxq/jerry.png')\n self.confetti = imread('https://image.ibb.co/ganuAA/tom-and-jerry.png')\n self.dim = 1.5\n \n self.rewards = []\n \n def _update_state(self, action):\n state = self.state\n # 0 = left\n # 1 = right\n # 2 = down\n # 3 = up\n\n fy, fx, py, px = state\n old_d = abs(fx - px) + abs(fy - py)\n\n if action == 0:\n if px > 0:\n px -= 1\n if action == 1:\n if px < self.grid_size-1:\n px += 1\n if action == 2:\n if py > 0:\n py-= 1\n if action == 3:\n if py < self.grid_size-1:\n py += 1\n\n new_d = abs(fx - px) + abs(fy - py)\n self.d = old_d-new_d\n self.time = self.time - 1\n return np.array([fy, fx, 
py, px])\n\n def _get_reward(self):\n fruit_y, fruit_x, player_y, player_x = self.state\n if fruit_x == player_x and fruit_y == player_y: return 1\n if self.d == 1: return 1\n if self.d == 0: return -1\n if self.d == -1: return -1\n\n def _is_over(self):\n fruit_y, fruit_x, player_y, player_x = self.state\n if self.time == 0: return True\n if fruit_x == player_x and fruit_y == player_y: return True\n return False\n\n def step(self, action):\n self.state = self._update_state(action)\n reward = self._get_reward()\n self.rewards.append(reward)\n game_over = self._is_over()\n return self.state, reward, game_over\n \n def render(self):\n # Note: there's no promises of efficieny with this method\n # If things are slow, remove it\n \n im_size = (self.grid_size,)*2\n state = self.state\n \n self.fig = plt.figure(figsize=(8, 6), dpi=80)\n self.ax = self.fig.add_subplot(111)\n \n self.ax.clear()\n self.ax.set_ylim((-1, self.grid_size))\n self.ax.set_xlim((-1, self.grid_size))\n #self.ax.axis('off') # uncomment to turn off axes\n self.ax.get_xaxis().set_ticks(range(self.grid_size))\n self.ax.get_yaxis().set_ticks(range(self.grid_size))\n \n xc = state[2]\n yc = state[3]\n xm = state[0]\n ym = state[1]\n \n if state[0] == state[2] and state[1] == state[3]:\n self.ax.imshow(self.confetti, \n extent=(-1, self.grid_size,\n -1, self.grid_size))\n else:\n self.ax.imshow(self.mouse, \n extent=(xm-self.dim/4, xm+self.dim/4,\n ym-self.dim/4, ym+self.dim/4))\n self.ax.imshow(self.cat, \n extent=(xc-self.dim/2, xc+self.dim/2,\n yc-self.dim/2, yc+self.dim/2))\n self.fig.canvas.draw()\n return np.array(self.fig.canvas.renderer._renderer)\n\n def reset(self, deterministic=True):\n if deterministic:\n # this is an easier environment setup\n fruit_x = 0\n fruit_y = 0\n player_x = self.grid_size - 1\n player_y = self.grid_size - 1\n time = self.grid_size*2\n else:\n generated = False\n while not generated\\\n or abs(fruit_x - player_x) + abs(fruit_y - player_y) < self.grid_size/2:\n fruit_x = np.random.randint(0, self.grid_size-1)\n fruit_y = np.random.randint(0, self.grid_size-1)\n player_x = np.random.randint(0, self.grid_size-1)\n player_y = np.random.randint(0, self.grid_size-1)\n time = abs(fruit_x - player_x) + abs(fruit_y - player_y)\n time *= 2\n generated = True\n\n self.time = time\n self.d = 0\n self.state = np.asarray([fruit_y, fruit_x, player_y, player_x])\n\n return self.state\n```\n\n### Random actions\nThis runs the environment using random actions. Try to run it!\n\n\n```python\n\"\"\"\nThis runs the environment using random actions\n\"\"\"\n\nprint('Setting up environment')\nenv = Environment(5)\nnum_episodes = 1 # number of games we want the agent to play\nenv.reset()\nframes = []\nRENDER = True\nprint('Running random simulation')\nfor episode in range(num_episodes):\n print('Resetting environment')\n s = env.reset() # Initial state\n while True: \n a = np.random.choice(range(4)) # choose a random action\n s_, r, done = env.step(a) # apply random action\n \n if RENDER:\n fig = env.render()\n plt.imshow(fig)\n plt.show()\n frames.append(fig)\n\n if done:\n break\n```\n\n## 3 - Brain\n\nThe 'brain' of the agent is where the model is created and held.\n\n### Neural Network structure for our task\n\n\n\n\n\nOur DQN takes a stack of six-tuple as an input. It is passed through two hidden networks, and output a vector of Q-values for each action possible in the given state. 
\\\\\n** Example: **\n$Q(s_t, a_1)$ - q-value for a given state $s$, if we choose action $a_1$ \\\\\nWe need to choose such an action, that will return the higest Q-value.\n\nIn the beginning, the agent does really badly. But over time, it begins to associate states with best actions to do.\n \n \n### Task 1: Build a 3-layer neural network, using Keras liblary.
\n**Instructions:**\n- Build a three-layer neural network with two hidden layers\n- The model's structure is: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR.\n- Activation function for the first and second hidden layers is 'relu'\n- Activation function for the final layer is 'linear' (that returns real values)\n- Input dimentions for the first hidden layer equals to the size of your observation space (state_dim)\n- Number of hidden nodes is 128\n- Number of the output should be the same as the size of the action space (action_dim)\n\n\n\n```python\n#-------------------- BRAIN ---------------------------\n\nclass Brain:\n \"\"\"The 'brain' of the agent, where the model is created and held.\n \n state_dim (int): the size of the observation space\n action_dim (int): the size of the action space\n \n \"\"\"\n def __init__(self, state_dim, action_dim, weights=None):\n self.state_dim = state_dim\n self.action_dim = action_dim\n\n self.model = self._createModel()\n if weights:\n self.model.load_weights(\"brain.h5\")\n\n def _createModel(self):\n # Creates a Sequential Keras model\n # This acts as the Deep Q-Network (DQN)\n \n model = Sequential()\n\n ### START CODE HERE ### (\u2248 3 lines of code)\n\n model.add(Dense(units=128, activation='relu', input_shape=(self.state_dim,)))\n model.add(Dense(units=128, activation='relu'))\n model.add(Dense(units=self.action_dim, activation='linear')) \n \n ### END CODE HERE ###\n\n opt = RMSprop(lr=0.00025)\n model.compile(loss='mse', optimizer=opt)\n\n return model\n\n def train(self, x, y, epoch=1, verbose=0):\n self.model.fit(x, y, batch_size=64, nb_epoch=epoch, verbose=verbose)\n\n def predict(self, s):\n return self.model.predict(s)\n\n def predictOne(self, s):\n return self.predict(s.reshape(1, self.state_dim)).flatten()\n```\n\n## 4 - Memory\n\nIn this block we are defining the main functions that will be used to store the exeriences of our agent.
\nNo need to make any modifications!\n\n\n```python\n#-------------------- MEMORY --------------------------\nclass Memory: # stored as ( s, a, r, s_ )\n \"\"\"The agent's 'memory', where experiences are stored\n \"\"\"\n\n def __init__(self, capacity):\n self.capacity = capacity\n self.samples = []\n\n def add(self, sample):\n # a sample should be an array [s, a, r, s_]\n # s: current state\n # a: current action\n # r: current reward\n # s_: next state\n self.samples.append(sample) \n\n if len(self.samples) > self.capacity:\n self.samples.pop(0)\n\n def sample(self, n):\n n = min(n, len(self.samples))\n return random.sample(self.samples, n)\n```\n\n## 5 - Agent\n\n[np.amax](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.amax.html) - \nReturns the maximum of an array \n\n### Epsilon\n\nOur agent will randomly select its action at first by a certain percentage, called \u2018exploration rate\u2019 or \u2018epsilon\u2019. This is because at first, it is better for the agent to try all kinds of things before it starts to see the patterns. When it is not deciding the action randomly, the agent will predict the reward value based on the current state and pick the action that will give the highest reward. We want our agent to descrese the number of random action, as it goes, so we indroduce an exponential-decay epsilon, that eventually will allow our agent to explore the evironment. \\\\\n\n**Exponential-decay formula for epsilon:** \\\\\n\n\\begin{equation}\n\\epsilon = \\epsilon_{min} + (\\epsilon_{max} - \\epsilon_{min})*e^{-\\lambda|S|},\n\\end{equation}\n\nwhere $\\epsilon_{min}, \\epsilon_{max} \\in [0, 1]$ \\\\\n$\\lambda$ - hyperparameter for epsilon \\\\\n$|S|$ - total number of steps\n\n\n### Task 2: Implement exponential-decay formula for epsilon.
\n**Instructions:**\n- On line 50, implement the formula provided above to calculate epsilon.\n- Please note that the names of all the variables should start with self, thus
\n\nepsilon $\\rightarrow$ self.epsilon
\nmin_epsilon $\\rightarrow$ self.min_epsilon\n\n### Task 3: Implement Q-function
\n**Instructions:**\n \\begin{align} \\notag\n Q_t = \\begin{cases} r_t, & \\mbox{if episode terminates at step } t+1 \\\\ r_t + \\gamma max_aQ(s_t, a_t; \\Theta), & \\mbox{otherwise} \\end{cases}\n \\end{align} \n\n\n```python\n#-------------------- AGENT ---------------------------\n\nclass Agent:\n \"\"\"The agent, which learns to navigate the environment\n \n \"\"\"\n \n def __init__(self, state_dim, action_dim, memory_capacity = 10000,\n batch_size = 64, gamma = 0.99, lamb = 0.001,\n max_epsilon = 1., min_epsilon = 0.01):\n self.state_dim = state_dim\n self.action_dim = action_dim\n\n self.batch_size = batch_size\n self.gamma = gamma # discount rate, to calculate the future discounted reward\n self.lamb = lamb\n self.max_epsilon = max_epsilon\n self.epsilon = max_epsilon\n self.min_epsilon = min_epsilon\n\n self.brain = Brain(state_dim, action_dim)\n self.memory = Memory(memory_capacity)\n self.steps = 0\n self.epsilons = []\n\n def act(self, s, verbose=False):\n \"\"\"The policy of the agent:\n Here, we determine if we explore (take a random action) based on epsilon.\n If not, we have the model predict the Q-Values for the state,\n then take the action which maximizes those values.\n \"\"\"\n if random.random() < self.epsilon:\n if verbose:\n print(\"Random Action.\")\n return random.randint(0, self.action_dim-1)\n else:\n actions = self.brain.predictOne(s)\n if verbose:\n print(\"Actions:\", actions)\n return np.argmax(actions)\n\n def observe(self, sample): # in (s, a, r, s_) format\n \"\"\"The agent observes an event.\n We pass a sample (state, action, reward, next state) to be stored in memory.\n We then increment the step count and adjust epsilon accordingly.\n \"\"\"\n self.memory.add(sample) \n\n # slowly decrease Epsilon based on our eperience\n self.steps += 1\n \n ### START CODE HERE ### (\u2248 1 line of code)\n \n self.epsilon = self.min_epsilon + (self.max_epsilon - self.min_epsilon)*np.exp(-self.lamb*self.steps)\n \n ### END CODE HERE ###\n \n self.epsilons.append(self.epsilon)\n\n def replay(self):\n \"\"\"The agent learns based on previous experiences.\n We sample observations (state, action, reward, next state) from memory.\n We train the model based on these observations.\n \"\"\"\n \n # Random sample of experiences\n batch = self.memory.sample(self.batch_size)\n batch_size = len(batch)\n\n # Extracting states ('current' and 'next') from samples\n no_state = np.zeros(self.state_dim)\n states = np.array([ o[0] for o in batch ])\n states_next = np.array([ (no_state if o[3] is None else o[3]) for o in batch ])\n\n # Estimating Q-Values for states\n q_vals = self.brain.predict(states)\n q_vals_next = self.brain.predict(states_next)\n\n # Setting up training data\n x = np.zeros((batch_size, self.state_dim))\n y = np.zeros((batch_size, self.action_dim))\n \n for i in range(batch_size):\n # Observation\n obs = batch[i]\n \n # State, Action, Reward, Next State\n st = obs[0]; act = obs[1]; rew = obs[2]; st_next = obs[3]\n \n # Estimated Q-Values for Observation\n t = q_vals[i]\n\n ### START CODE HERE ### (\u2248 4 line of code)\n \n if (st_next is None):\n t[act] = rew\n else:\n t[act] = rew + self.gamma*np.max(q_vals_next[i])\n \n ### END CODE HERE ###\n\n # Set training data\n x[i] = st\n y[i] = t\n\n # Train\n self.brain.train(x, y)\n```\n\n## 6 - Running the game\n\n### Environment\nFirst, we initialize our environment. 
The environment, loosely structured like [OpenAI's Gym Environments](https://gym.openai.com/), has three main methods: `reset`, `step` and `render`.

- When we call **reset**, we initialize the environment with a fresh episode. This allows us to run through episodes efficiently (only needing to call reset at the beginning of an episode), but, more importantly, `reset()` returns the environment's initial state.

- The **step** method accepts an action as a parameter (which, for this example, is an integer in [0, 3]), processes the action, and returns the new state, the reward for performing the action, and a boolean indicating whether the episode is over.

- The **render** method simply displays the state in a "human-readable" way. In this example, it renders an image of the environment as well as diagnostic data. This method is used solely for debugging purposes and can be omitted to speed up training.

### Agent
When we initialize the agent, we must pass both a `state_dim` and an `action_dim` into its constructor. These values tell the agent the dimensions of the input and output of the neural network. The agent has three main methods: `act`, `observe`, and `replay`.

- The **act** method takes the current state as input and returns the agent's selected action. Optionally, you can pass a boolean to its `verbose` parameter to print the resulting Q-values. Within the method, we first check whether we should choose a random action (in order to explore) by comparing a randomly generated number to epsilon. If the agent decides to explore, it simply returns a randomly selected action from the action space (in this case, an integer in [0, 3]).

  If the agent doesn't choose a random action, the state is passed to the agent's neural network (i.e., its `Brain` object). This results in an array of expected discounted rewards corresponding to each action (so, for example, if the resulting array is `[0.1, 2.3, 1.5, -0.7]`, the agent expects action `0` to lead to a discounted reward of `0.1`, action `1` to lead to `2.3`, etc.). Since we are greedy with our choices, we simply choose the action corresponding to the largest expected reward.

- The **observe** method receives an observation tuple `(s, a, r, s_)` as input and doesn't return anything. Here, the observation tuple contains the current state `s`, the agent's action `a` at the current state, the reward `r` received for taking that action, and the resulting state `s_` that occurs after taking the action. This tuple is saved to the agent's `Memory`, which acts as a simple queue.

- The **replay** method is where the actual learning occurs. Up to this point, we haven't trained the agent's neural network, only applied it to determine actions given states. In order to train it, we implement **Experience Replay**, which, in short, allows the agent to learn not only from recent observations but also from earlier ones. During experience replay, we randomly sample a set of observations from the agent's `Memory`. These observations are then assembled into inputs and targets that can be passed through the neural network to train it.

  Simply put: after we make a complete observation, we have generated a training example. The input for the example is the state, and the output is the expected discounted reward for each action.
  In this way, training works the same way it would in any classical deep learning task (except our data is generated on the fly instead of being collected prior to training).

### Algorithm

The algorithm, implemented below, simply calls these methods in sequence for some given number of episodes. At the beginning of an episode, we reset the environment and pass its return value, the initial state, to the agent's act method. This returns an action, which is then passed to the environment's step method. This returns the next state, the reward, and the boolean indicating whether the episode is over. We then save the observation tuple to the agent's memory via the agent's observe method, and run a round of training by calling the agent's replay method. We can then render the environment. If the episode is over, we break out of this loop; otherwise we continue, with the next state passed to the agent as the (now) current state.

When the environment is set to a 5x5 grid, the maximum reward is 8. You'll find that training converges to a rolling average reward of over 6 in about 10,000 episodes. If all the code above is implemented, running the following cell unchanged should be sufficient. In Google Colab, this takes about 15 minutes to run to completion.


```python
#-------------------- MAIN ----------------------------
print('Setting up environment')
env = Environment(5)

state_dim = 4
action_dim = 4  # left, right, up, down
print('Setting up agent')
MAX_EPSILON = 0.5   # initial rate at which the agent chooses a random action
MIN_EPSILON = 0.05  # minimum rate at which the agent chooses a random action
LAMBDA = 0.005      # speed of decay for epsilon
num_episodes = 2000 # number of games we want the agent to play

VERBOSE = False
agent = Agent(state_dim, action_dim, lamb=LAMBDA,
              max_epsilon=MAX_EPSILON, min_epsilon=MIN_EPSILON)
env.reset()
episode_rewards = []
epsilons = []
t0 = time.time()
frames = []

print('Running simulation')
for episode in range(num_episodes):
    s = env.reset()  # Initial state
    if episode % 1000 == 0:
        fig = env.render()
        frames.append(fig)
    R = 0
    while True:
        a = agent.act(s, verbose=VERBOSE)

        s_, r, done = env.step(a)

        if done:  # terminal state
            s_ = None

        agent.observe((s, a, r, s_))
        agent.replay()

        s = s_
        R += r

        if episode % 1000 == 0:
            fig = env.render()
            frames.append(fig)

        if VERBOSE:
            print("Action:", a)
            print("Reward:", r)

        if done:
            break

    epsilons.append(agent.epsilon)
    episode_rewards.append(R)

    if episode % 100 == 0:
        print('Episode {}'.format(episode))
        print('Time Elapsed: {0:.2f}s'.format(time.time() - t0))
        print('Epsilon {}'.format(epsilons[-1]))
        print('Last Episode Reward: {}'.format(R))
        # mean over the last 100 episodes
        print('Episode Reward Rolling Mean: {}'.format(np.mean(episode_rewards[-100:])))
        print('-'*10)

agent.brain.model.save("brain.h5")
```


```python
plt.figure(figsize=(8, 6), dpi=80)
plt.title("Epsilon")
plt.xlabel("Episode")
plt.ylabel("Epsilon value")
plt.plot(epsilons)
```


```python
smoothing = 50
plt.figure(figsize=(21, 8), dpi=80)
plt.title("Episode Reward")
plt.xlabel("Episode")
plt.ylabel("{} MA Reward".format(smoothing))
episode_ma = np.convolve(episode_rewards[:2000],
                         np.ones((smoothing,))/smoothing, mode='valid')
plt.plot(episode_ma)
```


```python
"""
To create the animation, you need to install ffmpeg.
These lines will install it in Google Colab.
If you're running this notebook locally, you'll need to install
ffmpeg on your computer:
https://github.com/adaptlearning/adapt_authoring/wiki/Installing-FFmpeg

Note: these lines will only work in Google Colab; do not run them locally.
"""

!apt install ffmpeg
!which ffmpeg
plt.rcParams['animation.ffmpeg_path'] = u'/usr/bin/ffmpeg'
```

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    ffmpeg is already the newest version (7:3.4.4-0ubuntu0.18.04.1).
    0 upgraded, 0 newly installed, 0 to remove and 8 not upgraded.
    /usr/bin/ffmpeg



```python
"""
This cell compiles the frames that were saved during training
into an animation. This requires ffmpeg to be installed.

If the main cell wasn't modified, frames will have been saved every
1,000 episodes. In the animation, you should see the agent start off
performing poorly and gradually improve toward optimal behavior.
"""

fig, ax = plt.subplots()
plt.axis('off')
l = ax.imshow(frames[0])

def animate(i):
    l.set_data(frames[i])

Writer = animation.writers['ffmpeg']
writer = Writer(fps=12, metadata=dict(artist='Me'))
ani = animation.FuncAnimation(fig, animate, frames=len(frames))

ani.save('animation.mp4', writer=writer, dpi=220)
time.sleep(5)  # let it process (only necessary in Colab)
```


```python
"""
If you're using Google Colab, you'll need to use the Google Colab
download function to download both the model you trained and the animation
you created.

If you're not using Google Colab, those files are saved in the directory
where this notebook is located.
"""

# Download the trained model
files.download("brain.h5")

# Download the animation
files.download('animation.mp4')
```
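As an optional follow-up (not part of the original assignment), the network saved to `brain.h5` can be reloaded later and run with exploration turned off. The sketch below assumes the `Brain` wraps a standard Keras model saved in HDF5 format (as the `.save("brain.h5")` call suggests) and that the `Environment` class from this notebook is available; the helper `greedy_action` is hypothetical.

```python
# Hypothetical evaluation sketch: reload the saved network and act greedily.
# Assumes brain.h5 is a standard Keras model and Environment is defined as above.
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("brain.h5")
env = Environment(5)

def greedy_action(state):
    """Pick the action with the highest predicted Q-value (no exploration)."""
    q_values = model.predict(np.asarray(state).reshape(1, -1), verbose=0)[0]
    return int(np.argmax(q_values))

# Run one evaluation episode with the trained policy.
s = env.reset()
done = False
total_reward = 0
while not done:
    s, r, done = env.step(greedy_action(s))
    total_reward += r
print("Evaluation episode reward:", total_reward)
```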