path (stringlengths 8-204) | content_id (stringlengths 40-40) | detected_licenses (list) | license_type (stringclasses 2 values) | repo_name (stringlengths 8-100) | repo_url (stringlengths 27-119) | star_events_count (int64 0-6.26k) | fork_events_count (int64 0-3.52k) | gha_license_id (stringclasses 10 values) | gha_event_created_at (timestamp[ns]) | gha_updated_at (timestamp[ns]) | gha_language (stringclasses 12 values) | language (stringclasses 1 value) | is_generated (bool 1 class) | is_vendor (bool 1 class) | conversion_extension (stringclasses 6 values) | size (int64 172-10.2M) | script (stringlengths 367-7.46M) | script_size (int64 367-7.46M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
/projects/capstone/Capston_proposal_v1.ipynb | 30957b39546220dfe3974a83774b83e98ff827b2 | [] | no_license | notilas/MLproject | https://github.com/notilas/MLproject | 0 | 0 | null | 2020-03-13T21:58:10 | 2020-03-13T21:57:26 | Jupyter Notebook | Jupyter Notebook | false | false | .py | 693,616 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
num = int(input("Enter a number: "))
if num > 1:
for i in range(2,num):
if (num % i) == 0:
print(num,"is not a prime number")
break
else:
print(num,"is a prime number")
else:
print(num,"is not a prime number")
# -
# n on everything from the theoretical—what an estimate should be striving to reflect—to the practical—how do various estimates compare in terms of accuracy and coverage.
#
# On May 24, 2017, Kaggle opened a one-million-dollar competition on predicting *Zestimate* residual errors. Zillow estimates home sale values with its *Zestimate* model, which was launched about 11 years ago. The *Zestimate* was created to give Zillow consumers information about the estimated prices of homes on sale and about housing market trends.
#
# Participants are expected to develop an algorithm that predicts the future sale prices of homes. The contest consists of two rounds: the qualifying round, which began on May 24, 2017, and the private round for the top 100 qualifying teams, which opens on February 1, 2018. In the qualifying round, participants build a model to improve on the *Zestimate* residual error. In the final round, they must build a home valuation algorithm from the ground up, using external data sources to engineer new features that give their model an edge over the competition. Zillow Prize, with its one-million-dollar grand prize, poses this data science problem as a challenge open to participants anywhere in the world.
#
# The Zillow Zestimate home valuation was designed to reflect the market value of a home based on comparable sales where those sales are full-value, non-distressed, arms-length transactions of real estate.
#
# A full-value sale means that the sale price reflects the entirety of the value being conveyed from seller to buyer and no other side-benefit is considered. The Zestimate home valuation was also designed to be independent of any opinion from either the seller or buyer. Neither party is a neutral, unbiased observer in the transactions.
#
# In the case of the Zestimate modeling framework, which contains numerous submodels, each estimating a home’s value via different valuation approaches and data inputs, our goal of independence means that the listing price is not a factor in any of the valuation submodels. However, using the listing price as a hint when selecting from among all available submodel estimates for a given home is acceptable, but only when there is a substantial difference between the listing price and the submodel estimate that the system would have selected without reference to the listing price.
#
# ### Domain Background
# The *Zestimate* is a home-value prediction system built from roughly 7.5 million statistical and machine learning models that analyze hundreds of data points on each property. By continually improving the median margin of error (from 14% at the onset to 5% today), Zillow has become established as one of the largest marketplaces for real estate information in the U.S. and a leading example of impactful machine learning.
#
# When looking at estimate accuracy before a listing appeared (Table 1), the Zestimate achieved a lower error rate than the Redfin estimate. The median absolute percent error for the Zestimate was 7.8%, compared to 9.1% for the Redfin estimate, with 89% of Zestimates within 20% of the final sale price versus only 80% of Redfin estimates. Sixty-two percent of Zestimates were within 10% of the final sale price versus only 53% of Redfin estimates [zillow.com].
#
#
# <img src="Table1-a364bf.png" />
#
#
# Yu-Han Chen et al. presented their insights on the feature set in their work:
# http://blog.nycdatascience.com/student-works/zillow-zestimate-kaggle-competition/
# They visualized the data on geo-maps, showed that Zillow does a better job of predicting the actual sale price for newer homes, and shared exploratory analysis results on the training data based on PCA and a correlation matrix. They applied various regression models such as linear regression, tree-based modeling, bagging, random forests, and gradient boosting machines (GBM).
#
#
#
# <In this section, provide brief details on the background information of the domain from which the project is proposed. Historical information relevant to the project should be included. It should be clear how or why a problem in the domain can or should be solved. Related academic research should be appropriately cited in this section, including why that research is relevant. Additionally, a discussion of your personal motivation for investigating a particular problem in the domain is encouraged but not required.>
#
# ### Problem Statement
# The goal is to predict the log-error between Zillow *Zestimate* and the actual sale price, given all the features of a home. The log error is defined as
#
# logerror=log(Zestimate)−log(SalePrice)
#
# The Kaggle competition provides real estate data from three counties in and around Los Angeles, CA. Each observation has 56 features; no additional features from outside data sources are allowed in the analysis. The transactions are recorded in the file train.csv. In this competition, participants are expected to predict the logerror for the months in Fall 2017.
#
# The data files are
# 1. Training set with the actual logerror = log(Zestimate) - log(SalePrice) and feature information for 90,725 properties
# 2. Prediction set with only the feature information for 2,985,217 properties.
#
# Submissions were scored based on the MAE (mean absolute error) across all predictions.
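# As a quick illustration of the target and the metric (not part of the provided data), the log error and its MAE can be computed in a few lines of NumPy on made-up placeholder values:
# +
# Minimal sketch of the competition metric on made-up values (all numbers are placeholders).
import numpy as np

zestimate = np.array([510000.0, 305000.0, 820000.0])
sale_price = np.array([500000.0, 315000.0, 800000.0])
actual_logerror = np.log(zestimate) - np.log(sale_price)
predicted_logerror = np.zeros_like(actual_logerror)          # naive baseline: predict 0 everywhere
mae = np.mean(np.abs(predicted_logerror - actual_logerror))  # MAE across all predictions
print(actual_logerror, mae)
# -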
#
# <In this section, clearly describe the problem that is to be solved. The problem described should be well defined and should have at least one relevant potential solution. Additionally, describe the problem thoroughly such that it is clear that the problem is quantifiable (the problem can be expressed in mathematical or logical terms) , measurable (the problem can be measured by some metric and clearly observed), and replicable (the problem can be reproduced and occurs more than once).>
# ### Datasets and Inputs
#
#
#
#
# <In this section, the dataset(s) and/or input(s) being considered for the project should be thoroughly described, such as how they relate to the problem and why they should be used. Information such as how the dataset or input is (was) obtained, and the characteristics of the dataset or input, should be included with relevant references and citations as necessary It should be clear how the dataset(s) or input(s) will be used in the project and whether their use is appropriate given the context of the problem.>
# There are a total of 58 features describing each property. The data types are a mix of continuous numeric and discrete numeric values.
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
#color = sns.color_palette()
import matplotlib, os, sys
SMALL_SIZE = 12
MEDIUM_SIZE = 14
BIGGER_SIZE = 16
matplotlib.rc('font', size=MEDIUM_SIZE)
matplotlib.rc('axes', titlesize=MEDIUM_SIZE)
# %matplotlib inline
# +
# Import data set
if sys.platform == 'win32':
    Data_path = "d:/Work"
elif sys.platform.startswith('linux'):  # Python 3 reports 'linux' rather than 'linux2'
    Data_path = '.'
elif sys.platform == 'darwin':
    Data_path = "~/tmp"
else:
    Data_path = '.'  # fall back to the current directory on other platforms
    print(os.name)
prop_df = pd.read_csv(Data_path+'/zillow/properties_2016.csv',low_memory=False)
train_df = pd.read_csv(Data_path+'/zillow/train_2016_v2.csv',low_memory=False)
# -
# Below is the sample data. The property ID is indicated by parcelid.
prop_df.iloc[:5,:3]
# #### List of data features and type
print (prop_df.dtypes)
# #### Distribution of each feature
#
# The figure below shows the distribution of values of the numerical data features. Most features are severely skewed rather than symmetric or normally distributed.
# +
continuous = ['basementsqft', 'finishedfloor1squarefeet', 'calculatedfinishedsquarefeet',
'finishedsquarefeet12', 'finishedsquarefeet13', 'finishedsquarefeet15',
'finishedsquarefeet50', 'finishedsquarefeet6', 'garagetotalsqft', 'latitude',
'longitude', 'lotsizesquarefeet', 'poolsizesum', 'yardbuildingsqft17',
'yardbuildingsqft26', 'yearbuilt', 'structuretaxvaluedollarcnt', 'taxvaluedollarcnt',
'landtaxvaluedollarcnt', 'taxamount']
discrete = ['bathroomcnt', 'bedroomcnt', 'calculatedbathnbr', 'fireplacecnt', 'fullbathcnt',
'garagecarcnt', 'poolcnt', 'roomcnt', 'threequarterbathnbr', 'unitcnt',
'numberofstories', 'assessmentyear', 'taxdelinquencyyear']
k=0
fig = plt.figure(figsize=(20, 70))
for col in continuous:
k+=1
ax=fig.add_subplot(10,3,k) #,figsize=(20,100)
values = prop_df[col].dropna()
lower = np.percentile(values, 1)
upper = np.percentile(values, 99)
sns.distplot(values[(values>lower) & (values<upper)]) #, ax = plt.subplot(121));
ax.set_title(col, fontsize=16)
# -
# #### Distribution of Errors
#
# The prediction target is the log error, which has a symmetric distribution with a mean value close to zero. Therefore, normalization is not needed for the target value.
plt.figure(figsize=(10,5))
sns.distplot(train_df.logerror.values, bins=500)
plt.xlabel('Logerror', fontsize=16)
plt.xlim([-1,1])
#plt.grid()
#plt.show()
# ### Benchmark Model and Evaluation Metrics
#
# Linear regression is the best reference model I can think of. In the Kaggle competition, many competitors run various models on this data set, so I can also benchmark my model against their evaluation scores.
# <In this section, provide the details for a benchmark model or result that relates to the domain, problem statement, and intended solution. Ideally, the benchmark model or result contextualizes existing methods or known information in the domain and problem given, which could then be objectively compared to the solution. Describe how the benchmark model or result is measurable (can be measured by some metric and clearly observed) with thorough detail.>
#
# Submit the predictions to Kaggle and get them evaluated on the MAE (mean absolute error) score between the predicted log error and the actual log error.
#
# - MAE(logerror)
# - logerror=log(Zestimate)−log(SalePrice)
#
#
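# A minimal sketch of such a linear-regression baseline is given below; it merges the transaction and property tables loaded above, and the feature subset is purely illustrative.
# +
# Hedged baseline sketch (assumes train_df and prop_df from the data-loading cell above; feature list is illustrative only).
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

bench_df = pd.merge(train_df, prop_df, on='parcelid', how='left')
baseline_features = ['calculatedfinishedsquarefeet', 'taxvaluedollarcnt', 'yearbuilt']
Xb = bench_df[baseline_features].fillna(0)
yb = bench_df['logerror']
Xb_tr, Xb_te, yb_tr, yb_te = train_test_split(Xb, yb, test_size=0.2, random_state=0)
baseline = LinearRegression().fit(Xb_tr, yb_tr)
print('Baseline MAE:', mean_absolute_error(yb_te, baseline.predict(Xb_te)))
# -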
# <Evaluation Metrics: In this section, propose at least one evaluation metric that can be used to quantify the performance of both the benchmark model and the solution model. The evaluation metric(s) you propose should be appropriate given the context of the data, the problem statement, and the intended solution. Describe how the evaluation metric(s) are derived and provide an example of their mathematical representations (if applicable). Complex evaluation metrics should be clearly defined and quantifiable (can be expressed in mathematical or logical terms).>
#
# ### Project Design
#
# The first step is trimming features by importance to reduce model complexity. I plan to aggregate the most important features from each of the best initial model runs and choose a small subset of features to test.
# I will also consider adding new features derived from the most important ones in the initial models, for example a metric representing the ratio of the home’s structure tax to its land tax.
#
# The second step is removing outliers. Several features in the data set have significant outliers, more than three standard deviations (STD) from the mean, and these outliers might affect model performance.
#
# The third step is applying XGBoost and neural networks to build prediction models, and optimizing them with cross-validation and grid-search parameter optimization.
#
# As the last step, I will compare the scores of the submitted predictions of each model, then consider applying an ensemble method to the best models I have built. A brief sketch of the feature-engineering and outlier-trimming steps follows.
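# A rough sketch of the feature-engineering and outlier-trimming steps is shown below (it assumes prop_df and train_df from the data-loading cell above; the tax_ratio column name is my own).
# +
# Sketch of steps 1-2: an engineered tax-ratio feature and 3-sigma trimming of the target.
prop_fe = prop_df.copy()
prop_fe['tax_ratio'] = prop_fe['structuretaxvaluedollarcnt'] / prop_fe['landtaxvaluedollarcnt']

err = train_df['logerror']
keep = (err - err.mean()).abs() <= 3 * err.std()   # keep rows within 3 standard deviations
train_trimmed = train_df[keep]
print(len(train_df), '->', len(train_trimmed), 'rows after 3-sigma trimming')
# -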
#
#
#
#
#
# #### Identify data types and handle NaN values
#
# First, the data type of each feature should be identified and handled properly to improve the models during training. I have treated the following three data types differently:
# - Binary (boolean) type features
# - Continuous numeric type features
# - Discrete type features
#
# #### Distribution of NaN (not available) values
#
# The figure below shows how many data samples have NaN values in each feature. A significant portion of the features have very large numbers of NaN values that need to be taken care of. In handling NaN values, I am considering the following three options (sketched in the code cell below):
# - Option 1. Replace NaN values with 0.
# - Option 2. Replace NaN values with a large negative value such as -999.
# - Option 3. Use a weighted distribution of observed values in an attempt to maintain the original distribution.
#
# <In this section, clearly describe a solution to the problem. The solution should be applicable to the project domain and appropriate for the dataset(s) or input(s) given. Additionally, describe the solution thoroughly such that it is clear that the solution is quantifiable (the solution can be expressed in mathematical or logical terms) , measurable (the solution can be measured by some metric and clearly observed), and replicable (the solution can be reproduced and occurs more than once).>
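# The three options above can be sketched on a single example column as follows ('garagecarcnt' is only an illustrative choice):
# +
# Sketch of the three NaN-handling options on one example column.
col_na = prop_df['garagecarcnt']

opt1 = col_na.fillna(0)        # Option 1: replace NaN with 0
opt2 = col_na.fillna(-999)     # Option 2: replace NaN with a large negative sentinel value
observed = col_na.dropna()     # Option 3: draw from the observed values to keep the original distribution
opt3 = col_na.copy()
opt3[opt3.isnull()] = np.random.choice(observed.values, size=int(opt3.isnull().sum()))
# -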
# +
missing_df = prop_df.isnull().sum(axis=0).reset_index()
missing_df.columns = ['column_name', 'missing_count']
missing_df = missing_df.loc[missing_df['missing_count']>0]
missing_df = missing_df.sort_values(by='missing_count')
ind = np.arange(missing_df.shape[0])
fig, ax = plt.subplots(figsize=(10,15))
rects = ax.barh(ind, missing_df.missing_count.values, color='blue')
ax.set_yticks(ind)
ax.set_yticklabels(missing_df.column_name.values, rotation='horizontal')
ax.set_xlabel("Count of Nan values")
ax.set_title("Number of Nan values in each feature")
ax.set_title(col, fontsize=16)
#plt.grid()
plt.show()
# -
# #### Feature Selection
# Drop features with extreme missingness, duplication, and zero variance. Variables with over 90% missingness and no feasible way to determine the correct value were dropped. If variables captured the same information, such as FIPS (Federal Information Processing Standard code) and Zip Code, we only kept one. Finally, variables with the same value across all observations were dropped as they would have had no impact on our model.
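# A rough sketch of these drop rules is shown below (the >90% threshold follows the text; the zip column name 'regionidzip' is an assumption about this data set).
# +
# Sketch of the feature-dropping rules described above (assumes prop_df from the cells above).
na_fraction = prop_df.isnull().mean()
high_missing = na_fraction[na_fraction > 0.90].index.tolist()   # over 90% missing

nunique = prop_df.nunique(dropna=True)
zero_variance = nunique[nunique <= 1].index.tolist()            # same value across all observations

duplicated_info = ['regionidzip']                               # keep 'fips', drop the zip-code duplicate (assumed column name)
cols_to_drop = [c for c in set(high_missing + zero_variance + duplicated_info) if c in prop_df.columns]
prop_reduced = prop_df.drop(columns=cols_to_drop)
print(prop_df.shape, '->', prop_reduced.shape)
# -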
# * Feature use without modification:
# ['fips', 'landtaxvaluedollarcnt', 'lotsizesquarefeet', 'structuretaxvaluedollarcnt', 'taxamount', 'taxvaluedollarcnt', 'buildingqualitytypeid']
#
# * Numeric features filling null with 0:
# ['basementsqft', 'fireplacecnt', 'poolcnt', 'poolsizesum', 'roomcnt', 'fullbathcnt', 'threequarterbathnbr', 'bathroomcnt', 'calculatedbathnbr', 'calculatedfinishedsquarefeet', 'taxdelinquencyyear']
#
# * Area related numeric features filling null with 0:
# ['finishedfloor1squarefeet', 'finishedsquarefeet12', 'finishedsquarefeet13', 'finishedsquarefeet15', 'finishedsquarefeet50', 'finishedsquarefeet6', 'yardbuildingsqft17', 'yardbuildingsqft26']
#
# * Numeric features filling null with NaN values:
# ['bedroomcnt', 'yearbuilt', 'garagecarcnt', 'garagetotalsqft', 'numberofstories', 'unitcnt']
#
# * Boolean features:
# ['fireplaceflag', 'hashottuborspa', 'pooltypeid10', 'pooltypeid2', 'pooltypeid7']
#
train_df2 = pd.merge(train_df, prop_df, on='parcelid', how='left')
train_df2['transaction_month'] = train_df2['transactiondate'].apply(lambda x: float(x[5:7]))
# +
labels = []
values = []
for col in prop_df.columns:
if prop_df[col].dtype==float:
labels.append(col)
tt=train_df2[['logerror',col]].dropna()
corr=np.corrcoef(tt[col], tt.logerror)
if len(corr)>0:
values.append(corr[0,1])
corr_df = pd.DataFrame({'col_labels':labels, 'corr_values':values})
corr_df = corr_df.sort_values(by='corr_values')
ind = np.arange(len(labels))
width = 0.9
fig, ax = plt.subplots(figsize=(10,30))
rects = ax.barh(ind, np.array(corr_df.corr_values.values), color='y')
ax.set_yticks(ind)
ax.set_yticklabels(corr_df.col_labels.values, rotation='horizontal')
ax.set_xlabel("Correlation coefficient")
ax.set_title("Correlation coefficient of the variables")
#autolabel(rects)
plt.show()
# -
#
# The figure below shows which features have more impact on the target value.
#
# +
from sklearn import ensemble
train_y = train_df2['logerror'].values
cat_cols = ["hashottuborspa", "propertycountylandusecode", "propertyzoningdesc", "fireplaceflag", "taxdelinquencyflag"]
train_df_tr = train_df2.drop(['parcelid', 'logerror', 'transactiondate', 'transaction_month']+cat_cols, axis=1)
train_df_tr=train_df_tr.fillna(-1)
feat_names = train_df_tr.columns.values
model = ensemble.ExtraTreesRegressor(n_estimators=25, max_depth=30, max_features=0.3, n_jobs=-1, random_state=0)
model.fit(train_df_tr, train_y)
## plot the importances ##
importances = model.feature_importances_
std = np.std([tree.feature_importances_ for tree in model.estimators_], axis=0)
indices = np.argsort(importances)[::-1][:60]
plt.figure(figsize=(12,9))
plt.title("Feature importances")
plt.bar(range(len(indices)), importances[indices], color="r", yerr=std[indices], align="center")
plt.xticks(range(len(indices)), feat_names[indices], rotation='vertical')
plt.xlim([-1, len(indices)])
plt.show()
# -
# #### Apply XGBoost Modeling
#
# XGBoost (Extreme Gradient Boosting) is a boosting algorithm that converts weak learners into strong learners; it was developed by Tianqi Chen at the University of Washington. It is currently widely used in many areas that rely on supervised learning.
#
# Boosting is a sequential process that exploits multiple weak classifiers. It reduces prediction error by putting more weight on the misclassifications (errors) of the previous model and trying to reduce them in the next phase, then combining the weak classifiers into a strong classifier.
#
# XGBoost can be used to solve both regression and classification problems.
# For classification problems, XGBoost applies the boosting algorithm to trees: trees are grown one after another, each attempting to reduce the misclassification rate of the previous iterations. The next tree is built by giving higher weight to the points misclassified by the previous tree.
#
# For regression problems, XGBoost can also apply boosting to linear models: it first builds a generalized linear model and optimizes it using regularization (L1, L2) and gradient descent. The subsequent models are built on the residuals (actual - predicted) generated by previous iterations.
#
# XGBoost has many advantages in machine learning.
#
# * Parallel Computing: It is easy to apply parallel processing.
# * Regularization: This is the biggest advantage of XGBoost; standard GBM has no provision for regularization.
# * Cross Validation: XGBoost comes with an internal CV function.
# * Missing Values: XGBoost is designed to handle missing values internally.
# * Flexibility: XGBoost supports user-defined objective functions as well as user-defined evaluation metrics.
# * Availability: It is available for programming languages such as R, Python, Java, Julia, and Scala.
# * Save and Reload: XGBoost can save the data matrix and model and reload them later, which is really convenient when training with a large data set.
# * Tree Pruning: Unlike GBM, where tree pruning stops once a negative loss is encountered, XGBoost grows the tree up to max_depth and then prunes backward until the improvement in the loss function is below a threshold.
#
# ##### XGBoost model results
# I have trained the model using the initial parameters below; a minimal cross-validation sketch with these parameters follows the listing.
#
# 'eta': 0.03,
# 'max_depth': 7,
# 'subsample': 1,
# 'objective': 'reg:linear',
# 'eval_metric': 'mae',
# 'base_score': y_train_trim.mean(),
# 'min_child_weight':2, #default 1
# 'alpha':1.0,
# 'lambda':1.0
#
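# A minimal sketch of this cross-validation run is given below; it assumes the xgboost package is available and uses the train_df_tr / train_y arrays prepared above (train_y stands in for the trimmed target y_train_trim).
# +
# Sketch of the xgb.cv run with the parameters listed above.
import xgboost as xgb

xgb_params = {'eta': 0.03, 'max_depth': 7, 'subsample': 1,
              'objective': 'reg:linear', 'eval_metric': 'mae',
              'base_score': train_y.mean(),
              'min_child_weight': 2, 'alpha': 1.0, 'lambda': 1.0}
dtrain = xgb.DMatrix(train_df_tr, label=train_y)
cv_result = xgb.cv(xgb_params, dtrain, num_boost_round=1000, nfold=5,
                   early_stopping_rounds=50, verbose_eval=50)
print(cv_result.tail())
# -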
# Training beginning part:
#
# test-mae-mean|test-mae-std|train-mae-mean|train-mae-std
# ---|---|---|---
# 0.05328|0.00033|0.05327|8e-05
# 0.05326|0.00033|0.05323|9e-05
# 0.05323|0.00033|0.05319|9e-05
# 0.05321|0.00033|0.05316|9e-05
# 0.05319|0.00033|0.05313|9e-05
#
# Training ending part:
#
# test-mae-mean|test-mae-std|train-mae-mean|train-mae-std
# ---|---|---|---
# 0.05285|0.00029|0.05179|0.0001
# 0.05285|0.00029|0.05179|0.0001
# 0.05285|0.00029|0.05178|0.0001
# 0.05285|0.00029|0.05177|0.0001
# 0.05285|0.00029|0.05176|0.0001
#
#
# Training and test error with iteration:
#
# <img src="xgboost_plot1.png" />
# ### Future Plan
#
#
# #### Optimize XGBoost parameters
# The current XGBoost model is not yet optimized, hence it can be improved by tuning its parameters.
#
# * nrounds: It controls the maximum number of iterations. For classification, it is similar to the number of trees to grow.
# * eta: It controls the learning rate. After every round, it shrinks the feature weights to reach the best optimum. A lower eta leads to slower computation and must be supported by an increase in nrounds.
# * gamma: It controls regularization (or prevents overfitting). The higher the value, the higher the regularization. Regularization means penalizing large coefficients that don't improve the model's performance. The default of 0 means no regularization.
# * max_depth: It controls the depth of the tree. The larger the depth, the more complex the model, which means higher chances of overfitting.
# * min_child_weight: In regression, it refers to the minimum number of instances required in a child node. In classification, if the leaf node has a minimum sum of instance weight (calculated by the second-order partial derivative) lower than min_child_weight, the tree splitting stops.
# * subsample: It controls the number of samples (observations) supplied to a tree.
# * colsample_bytree: It controls the number of features (variables) supplied to a tree.
# * lambda: It controls L2 regularization (equivalent to Ridge regression) on weights. It is used to avoid overfitting.
# * alpha: It controls L1 regularization (equivalent to Lasso regression) on weights. In addition to shrinkage, enabling alpha also results in feature selection, hence it is more useful on high-dimensional data sets.
#
# To tune the gamma parameter, I started with 0 and checked the CV error rate. When the training error was much lower than the test error (a sign of overfitting), I increased the gamma value. With higher gamma values the difference between training error and test error decreased, which means the overfitting problem is reduced by regularization.
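# One way to carry out this tuning is a small scikit-learn grid search over an XGBRegressor, sketched below with an illustrative grid (not the final search space).
# +
# Hedged sketch of the planned parameter search; grid values are illustrative only.
from xgboost import XGBRegressor
from sklearn.model_selection import GridSearchCV

xgb_reg = XGBRegressor(objective='reg:linear', learning_rate=0.03, n_estimators=200)
search = GridSearchCV(xgb_reg,
                      param_grid={'gamma': [0, 0.1, 1.0], 'max_depth': [5, 7], 'min_child_weight': [1, 2]},
                      scoring='neg_mean_absolute_error', cv=3)
search.fit(train_df_tr, train_y)
print(search.best_params_, search.best_score_)
# -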
#
# #### Apply neural network model
#
# The next step is to apply a different type of model to predict the target value. Neural networks are well suited to identifying non-linear patterns, i.e., patterns where there isn't a direct, one-to-one relationship between the input and the output; instead, the network identifies patterns between combinations of inputs and a given output.
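# As a first pass, a small multi-layer perceptron regressor could be tried on the same feature matrix; the sketch below uses scikit-learn's MLPRegressor with an arbitrarily chosen architecture.
# +
# Minimal neural-network sketch (architecture and scaling choices are illustrative).
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=200, random_state=0))
mlp.fit(train_df_tr, train_y)
print('Train MAE:', np.mean(np.abs(mlp.predict(train_df_tr) - train_y)))
# -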
#
# #### Ensemble model
#
# Ensemble methods are techniques that combine multiple models to produce improved results. They usually produce more accurate solutions than a single model would, which has been the case in a number of machine learning competitions where the winning solutions used ensemble methods. In the popular Netflix competition, the winner used an ensemble method to implement a powerful collaborative filtering algorithm.
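# A simple way to combine the best models is a weighted average of their predictions, as sketched below; the two prediction arrays and the blend weights are placeholders.
# +
# Weighted-average ensemble sketch (xgb_pred and nn_pred are hypothetical prediction arrays).
xgb_pred = np.array([0.01, -0.02, 0.03])
nn_pred = np.array([0.02, -0.01, 0.01])
ensemble_pred = 0.7 * xgb_pred + 0.3 * nn_pred   # blend weights chosen arbitrarily for illustration
print(ensemble_pred)
# -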
| 24,026 |
/Lab7/Lab_7_Exercise_1.ipynb | 15ba35b9b3ffd88827d29331b45d7bdabee97fd9 | [] | no_license | austinAbraham/CE888 | https://github.com/austinAbraham/CE888 | 0 | 0 | null | 2021-05-05T15:30:56 | 2021-05-05T15:29:59 | null | Jupyter Notebook | false | false | .py | 83,660 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
sal_hike=pd.read_csv("Salary_Data.csv")
sal_hike.head()
sal_hike.shape
sal_hike.describe()
plt.boxplot(sal_hike.YearsExperience)
plt.boxplot(sal_hike.Salary)
sal_hike.corr()
plt.hist(sal_hike.Salary, bins=20)
plt.scatter(x=sal_hike.YearsExperience, y=sal_hike.Salary, color='blue')
plt.xlabel("YearsExperience")
plt.ylabel("Salary")
import statsmodels.formula.api as smf
model6=smf.ols("Salary~YearsExperience",data=sal_hike).fit()
model6.summary()
model7=smf.ols("Salary~np.log(YearsExperience)",data=sal_hike).fit()
model7.summary()
model6.params
model7.params
model6.conf_int(0.05) # 95% confidence interval
pred6 = model6.predict(sal_hike) # Predicted values of Salary using the model
plt.scatter(x=sal_hike.YearsExperience, y=sal_hike.Salary, color='blue')
plt.plot(sal_hike.YearsExperience, pred6,color='black')
plt.xlabel("YearsExperience")
plt.ylabel("Salary")
model7.conf_int(0.05) # 95% confidence interval
pred7 = model7.predict(sal_hike) # Predicted values of Salary using the model
plt.scatter(x=sal_hike.YearsExperience, y=sal_hike.Salary, color='blue')
plt.plot(sal_hike.YearsExperience, pred7,color='black')
plt.xlabel("YearsExperience")
plt.ylabel("Salary")
# A convnet takes as input tensors of shape (image_height, image_width,
# image_channels) (not including the batch dimension). In this case, we’ll configure
# the convnet to process inputs of size (28, 28, 1), which is the format of MNIST
# images. We’ll do this by passing the argument input_shape=(28, 28, 1) to the first
# layer.
# + [markdown] id="dM4JLEpwjymN"
# #### Instantiating a small convnet
# + id="p-OnpExGjymO" colab={"base_uri": "https://localhost:8080/"} outputId="65c29bb1-2019-431b-ce76-d0dd51cdfa9e"
from keras import layers, models  # needed for the model definition below

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.summary()
# + [markdown] id="7gcVG3xkjymR"
# #### Adding a classifier on top of the convnet
# + id="C2DfhDJYjymR" colab={"base_uri": "https://localhost:8080/"} outputId="2a921250-f060-4bcf-86d8-18307fa618d0"
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
# + [markdown] id="NOKVF4nKjymU"
# ### Training the convnet on MNIST images
# + id="oIcgUbbUjymV"
from keras.datasets import mnist
from keras.utils import to_categorical
# + [markdown] id="ZnJ2Pfs_jymX"
# #### Load Data
# + id="JpHGHE9MjymY"
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# + [markdown] id="4HoTLrfSjymd"
# #### compile and fit model
# + id="i23FDtC9jyme" colab={"base_uri": "https://localhost:8080/"} outputId="3388c3c1-3937-4eb3-811b-4d11704a79c4"
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=5, batch_size=64, validation_split=0.2)
# + [markdown] id="9zU8iI5ojymg"
# #### evaluate model
# + id="Z3VeaL1Njymh" colab={"base_uri": "https://localhost:8080/"} outputId="3c529cc5-446a-47ac-d562-c9561a10762d"
test_loss, test_acc = model.evaluate(test_images, test_labels)
test_acc
# + id="wXNZOY7Sjymj" colab={"base_uri": "https://localhost:8080/", "height": 545} outputId="27bb41db-368f-4c95-8895-9dd862e453a4"
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# + [markdown] id="dsQMc0Iojyml"
# ## Task 1
#
# Change the activation function and other parameters, such as the optimizer, to see the effect on the network and its performance. If possible, create a grid search.
# + id="SUoxjqvhLjiN" colab={"base_uri": "https://localhost:8080/"} outputId="ae22d80f-bdc0-44e2-8064-6ca4fc81d339"
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='sigmoid', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='sigmoid'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='sigmoid'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='sigmoid'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=5, batch_size=64, validation_split=0.2)
# + colab={"base_uri": "https://localhost:8080/", "height": 545} id="5OW4ougV7IHY" outputId="5121dbbc-b802-42d8-9f1e-7e4a0acb8c21"
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# + [markdown] id="iF2mF25gB_DD"
# Implementing grid search
# + colab={"base_uri": "https://localhost:8080/", "height": 458} id="PonYbkk35k3u" outputId="2dc0ca57-4a0e-456d-ab03-5021c7f6c878"
from sklearn.model_selection import GridSearchCV
from keras.wrappers.scikit_learn import KerasClassifier

# Wrap the convnet in a build function and a KerasClassifier so GridSearchCV can re-create it per setting
def create_model(optimizer='rmsprop', init='glorot_uniform'):
    m = models.Sequential()
    m.add(layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer=init, input_shape=(28, 28, 1)))
    m.add(layers.MaxPooling2D((2, 2)))
    m.add(layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer=init))
    m.add(layers.Flatten())
    m.add(layers.Dense(64, activation='relu', kernel_initializer=init))
    m.add(layers.Dense(10, activation='softmax'))
    m.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
    return m

# Arranging the parameter range (kept small so the search stays tractable)
optimizers = ['rmsprop', 'adam']
init = ['glorot_uniform', 'normal', 'uniform']
param_grid = dict(optimizer=optimizers, init=init)
keras_clf = KerasClassifier(build_fn=create_model, epochs=5, batch_size=64, verbose=0)
grid = GridSearchCV(estimator=keras_clf, param_grid=param_grid, cv=3)
grid.fit(train_images, train_labels)
print("Best score", grid.best_score_)
# Best parameter after tuning
print("Best parameter after tuning", grid.best_params_)
# + id="nd1UOf7C_2do"
| 7,086 |
/explore.ipynb | 5b9417c7a3a3a24e8c9bf9b1062ec2a9ec729165 | ["MIT"] | permissive | aedavids/lab3RotationProject | https://github.com/aedavids/lab3RotationProject | 1 | 1 | null | null | null | null | Jupyter Notebook | false | false | .py | 39,571 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/amermahyoub/AutoML/blob/main/AutoML.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="90qMZb528kpK"
# # !pip install streamlit -q
import streamlit as st
import base64
import pandas as pd
import numpy as np
import plotly.graph_objects as go
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.metrics import mean_squared_error, r2_score, accuracy_score
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_diabetes
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="OP4V21-oHhSj" outputId="18fb8996-6542-4c0f-99f3-fc21e2ef49a5"
import ipykernel
ipykernel.__version__
# + id="3fsuj8x3--dV"
# # !pip install streamlit -q
# # !wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
# # !unzip -qq ngrok-stable-linux-amd64.zip
# get_ipython().system_raw('./ngrok http 8501 &')
# # ! curl -s http://localhost:4040/api/tunnels | python3 -c \
# # "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
# # !streamlit hello
# + id="hPwUMoNI-Vtt"
# # !pip uninstall ipykernel
# # !pip install ipykernel==5.1.2
# # !pip install pydeck
# + id="bk8-5j3N_7H3"
# Page layout
st.set_page_config(page_title='The Machine Learning Hyperparameter Optimization App',
layout='wide', page_icon='random')
st.write("""
# The Machine Learning Hyperparameter Optimization App
**(Regression Edition)**
In this implementation, the *RandomForestRegressor()* function is used in this app to build a regression model using the **Random Forest** algorithm.
""")
# + colab={"base_uri": "https://localhost:8080/"} id="Rqsih-QtA5vY" outputId="e4f85ff6-6227-4013-ee85-99e94cc7649f"
# Sidebar - Collects user input features into dataframe
st.sidebar.header('Upload your CSV data')
uploaded_file = st.sidebar.file_uploader('Upload your input CSV file', type='csv')
st.sidebar.markdown("""
[Example CSV input file](https://raw.githubusercontent.com/dataprofessor/data/master/delaney_solubility_with_descriptors.csv)
""")
# + id="gwQWgBoLB2zt"
# Sidebar - Specify parameter settings
st.sidebar.header('Set Parameters')
split_size = st.sidebar.slider('Data split ratio (% for Training set)',
                               min_value=10, max_value=90, value=80, step=5)
# + id="gjFEugMKC5yg"
st.subheader('Learning Parameters')
# Range slider: the (min, max) tuple is indexed as [0]/[1] further below
parameter_n_estimator = st.sidebar.slider('Number of estimators (n_estimators)', 0, 500, (10, 50), 50)
parameter_n_estimator_step = st.sidebar.number_input('Step size for n_estimators', 10)
st.sidebar.write('---')
# + id="WDAWnGJrD29G"
parameter_max_features = st.sidebar.slider('Max features (max_features)', 1, 50, (1,3), 1)
st.sidebar.number_input('Step size for max_features', 1)
st.sidebar.write('---')
# + id="gQv9faBUElL3"
parameter_min_samples_split = st.sidebar.slider('Minimum number of samples required to split an internal node (min_samples_split)'
, 1, 10, 2, 1)
parameter_min_samples_leaf = st.sidebar.slider('Minimum number of samples required to be at a leaf node (min_samples_leaf)'
, 1, 10, 2, 1)
# + id="FHZDgXB3E6nn"
st.sidebar.subheader('General Parameters')
parameter_random_state = st.sidebar.slider('Seed number (random_state)', 0, 1000, 42, 1)
parameter_criterion = st.sidebar.select_slider('Performance measure (criterion)', options=['mse', 'mae'])
parameter_bootstrap = st.sidebar.select_slider('Bootstrap samples when building trees (bootstrap)', options=[True, False])
parameter_oob_score = st.sidebar.select_slider('Whether to use out-of-bag samples to estimate the R^2 on unseen data (oob_score)', options=[False, True])
parameter_n_jobs = st.sidebar.select_slider('Number of jobs to run in parallel (n_jobs)', options=[1, -1])
# + id="_-csJHCkHv0P" colab={"base_uri": "https://localhost:8080/"} outputId="0adf8425-81a8-41d9-cb1b-6cac37158327"
n_estimators_range = np.arange(parameter_n_estimator[0], parameter_n_estimator[1]+parameter_n_estimator_step, parameter_n_estimator_step)
max_features_range = np.arange(parameter_max_features[0], parameter_max_features[1]+1, 1)
param_grid = dict(max_features=max_features_range, n_estimators=n_estimators_range)
# + id="puEYxMHywgg6"
# Displays the dataset
st.subheader('Dataset')
# + id="0SvBuQFCwqZ8"
# Model building
def file_download(df):
csv = df.to_csv(index=False)
    b64 = base64.b64encode(csv.encode()).decode()  # strings <-> bytes conversions
href = f'<a href="data:file/csv;base64,{b64}" download="model_performance.csv">Download CSV File</a>'
return href
def build_model(df):
    x = df.iloc[:, :-1]  # Using all columns except the last one as X
    y = df.iloc[:, -1]   # Selecting the last column as Y
    st.markdown('A model is being built to predict the following **Y** variable:')
    st.info(y.name)
    # Data splitting (split_size is the percentage used for training)
    X_train, X_test, Y_train, Y_test = train_test_split(x, y, test_size=(100 - split_size) / 100)
    # Defining our Random Forest model (n_estimators and max_features are supplied via param_grid)
    rf = RandomForestRegressor(random_state=parameter_random_state,
                               criterion=parameter_criterion,
                               min_samples_split=parameter_min_samples_split,
                               min_samples_leaf=parameter_min_samples_leaf,
                               bootstrap=parameter_bootstrap,
                               oob_score=parameter_oob_score,
                               n_jobs=parameter_n_jobs)
    grid = GridSearchCV(estimator=rf, param_grid=param_grid, cv=5)
    grid.fit(X_train, Y_train)
    Y_pred_test = grid.predict(X_test)
st.write('Coefficient of determination ($R^2$):')
st.info( r2_score(Y_test, Y_pred_test) )
st.write('Error (MSE or MAE):')
st.info( mean_squared_error(Y_test, Y_pred_test) )
st.write("The best parameters are %s with a score of %0.2f"
% (grid.best_params_, grid.best_score_))
st.subheader('Model Parameters')
st.write(grid.get_params())
#-----Process grid data-----#
grid_results = pd.concat([pd.DataFrame(grid.cv_results_["params"]),pd.DataFrame(grid.cv_results_["mean_test_score"], columns=["R2"])],axis=1)
# Segment data into groups based on the 2 hyperparameters
grid_contour = grid_results.groupby(['max_features','n_estimators']).mean()
# Pivoting the data
grid_reset = grid_contour.reset_index()
grid_reset.columns = ['max_features', 'n_estimators', 'R2']
grid_pivot = grid_reset.pivot('max_features', 'n_estimators')
x = grid_pivot.columns.levels[1].values
y = grid_pivot.index.values
z = grid_pivot.values
#-----Plot-----#
layout = go.Layout(
xaxis=go.layout.XAxis(
title=go.layout.xaxis.Title(
text='n_estimators')
),
yaxis=go.layout.YAxis(
title=go.layout.yaxis.Title(
text='max_features')
) )
fig = go.Figure(data= [go.Surface(z=z, y=y, x=x)], layout=layout )
fig.update_layout(title='Hyperparameter tuning',
scene = dict(
xaxis_title='n_estimators',
yaxis_title='max_features',
zaxis_title='R2'),
autosize=False,
width=800, height=800,
margin=dict(l=65, r=50, b=65, t=90))
st.plotly_chart(fig)
#-----Save grid data-----#
x = pd.DataFrame(x)
y = pd.DataFrame(y)
z = pd.DataFrame(z)
df = pd.concat([x,y,z], axis=1)
    st.markdown(file_download(grid_results), unsafe_allow_html=True)
# + id="ghiP3TCdKAjm"
if uploaded_file is not None:
df = pd.read_csv(uploaded_file)
st.write(df)
build_model(df)
else:
st.info('Awaiting for CSV file to be uploaded.')
if st.button('Press to use Example Dataset'):
diabetes = load_diabetes()
X = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
Y = pd.Series(diabetes.target, name='response')
df = pd.concat( [X,Y], axis=1 )
st.markdown('The **Diabetes** dataset is used as the example.')
st.write(df.head(5))
build_model(df)
ata
missingFromClean = cleanData == 'NA'
percentMissingFromClean = np.sum(missingFromClean) / np.size(cleanData)
print()
print("percent missing from clean data:{}".format(percentMissingFromClean))
print("percent of genes dropped:{}".format( 1 - cleanData.shape[0]/ dataStr.shape[0]))
# -
| 8,852 |
/Notebooks/constroi_dataset.ipynb | 9f87cdc82fa51badb3908cfb711ddf4a644d6b8f | [] | no_license | Abello966/SISR_plankton | https://github.com/Abello966/SISR_plankton | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 37,659 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Parsing total results
# ### C-termini
# ## Two variants
# +
##Importing packages
import pandas as pd
import numpy as np
import math
# -
##Importing dataset
C_terminal_2_total=pd.read_csv('C_terminal_2_total.csv',sep=';')
VS_pharma_2
VS_pharma_2['Molecule']=VS_pharma_2['Molecule'].str.replace("2_var_", "")
VS_pharma_2['Molecule']=VS_pharma_2['Molecule'].str.replace(".mol2", "")
VS_pharma_2.Molecule=VS_pharma_2.Molecule.astype('int64')
VS_pharma_2
VS_pharma_2=index_2.merge(VS_pharma_2, on='Molecule', how='left').sort_values(by="Index")
VS_pharma_2=VS_pharma_2.drop_duplicates(subset=['Label'])
VS_pharma_2.Molecule=VS_pharma_2.Molecule.div(2).apply(np.ceil).astype('int64')
VS_pharma_2.head(20)
VS_pharma_2.head(20).to_csv('VS_2.csv')
# ## Three variants
# +
##Importing dataset
VS_pharma_3=pd.read_csv('scores_1.pha_db_3_var.mol2.csv',header=None,names=["Index", "Score","Aromatic","Hydrophobic","Donors","Aceptors", "Negatives","Molecule"])
index_3=pd.read_csv('indice_3.csv')
index_3['Molecule'] = list(range(1, 466))
index_3.Molecule=index_3.Molecule.astype('int64')
index_3
# -
VS_pharma_3
VS_pharma_3['Molecule']=VS_pharma_3['Molecule'].str.replace("3_var_", "")
VS_pharma_3['Molecule']=VS_pharma_3['Molecule'].str.replace(".mol2", "")
VS_pharma_3.Molecule=VS_pharma_3.Molecule.astype('int64')
VS_pharma_3
VS_pharma_3=index_3.merge(VS_pharma_3, on='Molecule', how='left').sort_values(by="Index")
VS_pharma_3=VS_pharma_3.drop_duplicates(subset=['Label'])
VS_pharma_3.Molecule=VS_pharma_3.Molecule.div(3).apply(np.ceil).astype('int64')
VS_pharma_3.head(20)
VS_pharma_3.head(20).to_csv('VS_3.csv')
# +
from PIL import Image
import numpy as np
from google.colab import files
uploaded = files.upload()
path=list(uploaded.keys())[0]
# Displaying the uploaded image
image = Image.open(path)
img_raw=np.array(image)
display(Image.fromarray(img_raw, 'RGB')) # Display the image
# + id="zSXChddvJd0O"
# Preprocessing the image as required by the model, loading it into Keras:
from keras.preprocessing.image import load_img, img_to_array  # imports needed by this cell
from keras.applications.vgg16 import preprocess_input         # assuming VGG16 preprocessing, per the text below
image = load_img(path, target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
# + [markdown] id="M7-gFKFB9z-L"
# ### Uploading, Displaying and Preprocessing the image from a URL
# + id="FFNS2AyQ976Y" outputId="265757b8-10b1-43e4-f4b7-7614b919fdc3" colab={"base_uri": "https://localhost:8080/", "height": 194}
# Read image from github
import requests
import io
url = "https://raw.githubusercontent.com/Rami-RK/"\
"Transfer_Learning_CV/main/tiger.jpg"
response = requests.get(url)
img= Image.open(io.BytesIO(response.content)).convert('RGB')
display(img)
# + id="VpdclnakITj1"
# Preprocessing the image as required by model, loading it into Keras :
img = img.resize((224, 224))
image= np.array(img)
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
# + [markdown] id="AzOVIxqJLNZi"
# ### Note: Upload the image to be predicted using either one of the methods given above.
# ### The prediction part given below remains the same for both methods.
# + [markdown] id="OkBla3lt8jZ9"
# ### Predicting the image with probabiltiy
# + id="pXmqG1N234L9" outputId="594422bf-b2be-433b-aaec-62ad85a5dc18" colab={"base_uri": "https://localhost:8080/"}
# predict the probability across all output classes
from keras.applications.vgg16 import VGG16, decode_predictions  # decode_predictions is used below
model = VGG16()  # assumption: the pretrained VGG16 network set up in the truncated earlier cells
yhat = model.predict(image)
# convert the probabilities to class labels
label = decode_predictions(yhat)
# retrieve the most likely result, e.g. highest probability
label = label[0][0]
# print the classification
print('%s (%.2f%%)' % (label[1], label[2]*100))
'1989', '1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000',
'2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012',
'2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020', '2021', '2022', '2023']].sum().reset_index()
BEAtoAgency.head()
# # Test randomly to see if the sums are the same in the budget dataframe and the Sankey dataframe to know the data was manipulated correctly
budget['1976'].sum()-BEAtoAgency['1976'].sum() # Expect 0
budget['1986'].sum()-BEAtoAgency['1986'].sum() # Expect 0
# # There are 351 BEA to Agency relationships: 224 unique Agencies match up with 3 BEAs, so we would expect a number between 224 and 3 x 224.
BEAtoAgency.describe()
# # Consolidate data into unique combinations of Agency_Code flowing into Subfunction_Code
AgencytoSubfunction = budget.groupby(['BEA_Code','Agency_Code', 'Subfunction_Code'])[['1976', 'TQ', '1977', '1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986', '1987', '1988',
'1989', '1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000',
'2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012',
'2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020', '2021', '2022', '2023']].sum().reset_index()
AgencytoSubfunction.head()
# # Add a column that records the type of target entity so it can be used as selection criteria in the dashboard
AgencytoSubfunction['Target_type'] = 'Subfunction'
AgencytoSubfunction.head()
# # Test randomly to see if the sums are the same in the budget dataframe and the Sankey dataframe to know the data was manipulated correctly
budget['1976'].sum()-AgencytoSubfunction['1976'].sum() # Expect 0
budget['TQ'].sum()-AgencytoSubfunction['TQ'].sum() # Expect 0
# # There are 583 Agency to Subfunction relationships, but when BEA is also carried over that increases to 770 - so 187 agencies have split relationships across BEA categories. These 187 will end up with orphaned links when a BEA category is eliminated in the Sankey diagram: they will still appear as a source, fed by the agency being examined rather than by the (eliminated) type of money. The 224 unique Agencies match up with 84 Subfunctions. I thought this number would be higher - but there were a lot of 0's.
AgencytoSubfunction.describe()
# # Create a new dataframe that consists of columns for sources (flow from), targets (flow to), and values (budget amounts).
# # First re-name columns in BEAtoAgency dataframe
BEAtoAgency.rename(columns = {'BEA_Code':'Source'}, inplace=True)
BEAtoAgency.rename(columns = {'Agency_Code':'Target'}, inplace=True)
BEAtoAgency.head()
# # Add a column preserving BEA_Code so that when filtering by BEA_Code in the graph, the BEA_Code is kept in a column for all connections. This column already exists with the Agency to Subfunction dataframe so the two dataframes can be combined.
new_column = pd.Series(BEAtoAgency['Source'])
BEAtoAgency = pd.concat([BEAtoAgency, new_column.rename('BEA_Code')], axis = 1)
BEAtoAgency = BEAtoAgency[['BEA_Code', 'Source', 'Target', '1976', 'TQ', '1977', '1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986', '1987', '1988',
'1989', '1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000',
'2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012',
'2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020', '2021', '2022', '2023']]
BEAtoAgency.head()
# # Add a column that records the type of target entity so it can be used as selection criteria in the dashboard
BEAtoAgency['Target_type'] = 'Agency'
BEAtoAgency.head()
# # Rename columns in AgencytoSubfuncton dataframe
AgencytoSubfunction.rename(columns = {'Agency_Code':'Source'}, inplace=True)
AgencytoSubfunction.rename(columns = {'Subfunction_Code':'Target'}, inplace=True)
AgencytoSubfunction.head()
# # Create the dataframe to use to generate the Sankey graph by combining the BEAtoAgency dataframe with the AgencytoSubfunction dataframe. Expect 1,121 records.
# +
Sankeydf = pd.DataFrame(columns=['BEA_Code', 'Source', 'Target', '1976', 'TQ', '1977', '1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986', '1987', '1988',
'1989', '1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000',
'2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012',
'2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020', '2021', '2022', '2023', 'Target_type'])
Sankeydf = Sankeydf.append(BEAtoAgency, ignore_index = True)
Sankeydf = Sankeydf.append(AgencytoSubfunction, ignore_index = True)
Sankeydf.describe(include=['object'])
# -
# # Test random years to see if the sums are the same in the budget dataframe and the Sankey dataframe to know the data was manipulated correctly.
# # Sankey dataframe total should be twice as much as budget dataframe because all numbers are in it twice - once for node 1 to node 2 and once for node 2 to node 3. Expect the sample selected below to all be 0.
#
#
2*budget['1976'].sum()-Sankeydf['1976'].sum()
2*budget['TQ'].sum()-Sankeydf['TQ'].sum()
2*budget['1977'].sum()-Sankeydf['1977'].sum()
2*budget['1978'].sum()-Sankeydf['1978'].sum()
2*budget['1979'].sum()-Sankeydf['1979'].sum()
2*budget['1986'].sum()-Sankeydf['1986'].sum()
2*budget['1996'].sum()-Sankeydf['1996'].sum()
2*budget['2006'].sum()-Sankeydf['2006'].sum()
2*budget['2016'].sum()-Sankeydf['2016'].sum()
#pd.set_option('display.max_rows', 1500)
Sankeydf.head()
Sankeydf.tail() # Verify that the index numbers are correct (0-1,120).
# # Need to add the label names for the Sankeydf, so create a separate dataframe with the unique information in it.
# +
BEALabelNames = pd.DataFrame(columns=['Code', 'Label'])
BEACode = budget.BEA_Code.unique()
BEACategory = budget.BEA_Category.unique()
BEALabelNames['Code'] = BEACode
BEALabelNames['Label'] = BEACategory
BEALabelNames.head()
# -
BEALabelNames.describe(include=['object'])
# +
AgencyLabelNames = pd.DataFrame(columns=['Code', 'Label'])
AgencyCode = budget.Agency_Code.unique()
AgencyName = budget.Agency_Name.unique()
AgencyLabelNames['Code'] = AgencyCode
AgencyLabelNames['Label'] = AgencyName
AgencyLabelNames.head()
# -
AgencyLabelNames.describe(include=['object'])
# +
SubfunctionLabelNames = pd.DataFrame(columns = ['Code', 'Label'])
SubfunctionCode = budget.Subfunction_Code.unique()
SubfunctionTitle = budget.Subfunction_Title.unique()
SubfunctionLabelNames['Code'] = SubfunctionCode
SubfunctionLabelNames['Label'] = SubfunctionTitle
SubfunctionLabelNames.head()
# -
SubfunctionLabelNames.describe(include=['object'])
# # Combine the unique BEA's, Agencies and Subfunctions Codes and Labels into one dataframe
Labelsdf = pd.DataFrame(columns=['Code', 'Label'])
Labelsdf = Labelsdf.append(BEALabelNames, ignore_index = True)
Labelsdf = Labelsdf.append(AgencyLabelNames, ignore_index = True)
Labelsdf = Labelsdf.append(SubfunctionLabelNames, ignore_index = True)
Labelsdf.describe(include=['object']) # Only 310 unique labels - so something in Agency is also in Subfunction - It's Infrastructure Initiative.
# From the documentation, it is both an Agency line item and a Subfunction Line item; it's OK to appear in both places.
Labelsdf.head()
Labelsdf.tail()
Labelsdf.dtypes
# # Convert the Code values from object to numeric
Labelsdf['Code'] = Labelsdf['Code'].apply(pd.to_numeric)
Labelsdf.dtypes
Labelsdf.describe()
len(Sankeydf[(Sankeydf.Source == 1)]) # To use as a check on the graph, across all years there are 118 records that flow from 1 - Mandatory
len(Sankeydf[(Sankeydf.Source == 1) & (Sankeydf['1976'].abs() > 0)]) # For Mandatory money in 1976, there are 41 outflows
# # Create a list to use for generating the Sankey diagram. Test creating the list.
my_indices = []
for item in Sankeydf.Source:
my_indices.append( (Labelsdf [Labelsdf.Code == item].index[0] ) )
#my_indices
# # Generate the list with a list comprehension so it will be faster. This does the same thing as the code above.
# +
# [ Labelsdf [Labelsdf.Code == val].index[0] for val in Sankeydf.Source ]
# -
# # Add node colors to be used in the graph.
# +
DarkColors = ['rgba(0,153,0,1)', 'rgba(71,143,209, 1)', 'rgba(242,116,32,1)', '#D3D3D3', '#F27420', '#ffff00', '#9932cc', '#ff0000', '#d2b48c', '#2f4f4f', '#483d8b', '#df633a', '#4d804d', '#00bfff', '#ff1493', '#00ced1', '#9400d3', '#008000' ]
NodeColor =[]
counter = 0
for item in Labelsdf.Label: # Iterate through Labels dataframe populating NodeColor with a color
NodeColor.append((DarkColors[counter]))
counter = counter + 1
if counter == 18:
counter = 0
Labelsdf['Node_Color'] = NodeColor
Labelsdf.head()
# -
Labelsdf.loc[Labelsdf.Node_Color == 'rgba(0,153,0,1)', 'Link_Color'] = 'rgba(179, 225, 179, 0.5)'
Labelsdf.loc[Labelsdf.Node_Color == 'rgba(71,143,209, 1)', 'Link_Color'] = 'rgba(219, 233, 246, 0.5)'
Labelsdf.loc[Labelsdf.Node_Color =='rgba(242,116,32,1)', 'Link_Color'] = 'rgba(253, 222, 206, 0.5)'
Labelsdf.loc[Labelsdf.Node_Color == '#D3D3D3', 'Link_Color'] = 'rgb(242, 242, 242)'
Labelsdf.loc[Labelsdf.Node_Color == '#F27420', 'Link_Color'] = 'rgb(251, 225, 208)'
Labelsdf.loc[Labelsdf.Node_Color == '#ffff00', 'Link_Color'] = 'rgb(255, 255, 129)'
Labelsdf.loc[Labelsdf.Node_Color == '#9932cc', 'Link_Color'] = 'rgb(235 ,214, 245)'
Labelsdf.loc[Labelsdf.Node_Color == '#ff0000', 'Link_Color'] = 'rgb(255, 179, 179)'
Labelsdf.loc[Labelsdf.Node_Color == '#d2b48c', 'Link_Color'] = 'rgb(234, 219, 200)'
Labelsdf.loc[Labelsdf.Node_Color == '#2f4f4f', 'Link_Color'] = 'rgb(207, 226, 226)'
Labelsdf.loc[Labelsdf.Node_Color == '#483d8b', 'Link_Color'] = 'rgb(206, 202, 232)'
Labelsdf.loc[Labelsdf.Node_Color == '#df633a', 'Link_Color'] = 'rgb(244, 203, 189)'
Labelsdf.loc[Labelsdf.Node_Color == '#4d804d', 'Link_Color'] = 'rgb(207, 226, 207)'
Labelsdf.loc[Labelsdf.Node_Color == '#00bfff', 'Link_Color'] = 'rgb(179, 236, 255)'
Labelsdf.loc[Labelsdf.Node_Color == '#ff1493', 'Link_Color'] = 'rgb(255, 179, 219)'
Labelsdf.loc[Labelsdf.Node_Color == '#00ced1', 'Link_Color'] = 'rgb(179, 254, 255)'
Labelsdf.loc[Labelsdf.Node_Color == '#9400d3', 'Link_Color'] = 'rgb(232, 179, 255)'
Labelsdf.loc[Labelsdf.Node_Color == '#008000', 'Link_Color'] = 'rgb(179, 255, 179)'
Labelsdf.head()
# # Add link colors to be used in the graph
# +
i = 0
while True:
key = Labelsdf.iloc[i][0]
LinkColor = Labelsdf.iloc[i][3]
# print(key)
# print(LinkColor)
Sankeydf.loc[Sankeydf.Source == key, 'Link_Color'] = LinkColor
i = i + 1
if(i>310):
break
Sankeydf.head()
# -
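# # The same link color assignment can be written without the explicit loop - a sketch that should give the same result (keeping the last row per Code to mirror the loop's last-assignment-wins behavior).
# +
code_to_link = Labelsdf.drop_duplicates('Code', keep='last').set_index('Code')['Link_Color']
Sankeydf['Link_Color'] = Sankeydf['Source'].map(code_to_link)
Sankeydf.head()
# -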
# # Create a list of the fiscal years (which run October 1 - September 30); they aren't stored as a column in the Sankey dataframe, but they are needed for the slider.
years = ['1977', '1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986', '1987', '1988',
'1989', '1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000',
'2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012',
'2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020', '2021', '2022', '2023']
Yearsdf = pd.DataFrame(years)
Yearsdf.columns=['Year']
Yearsdf.head()
# # Generate the Sankey diagram
# +
year = Yearsdf.Year[6]
data_trace = dict(
type='sankey',
domain = dict(
x = [0,1], # Sets the horizontal domain, [0,1] is the default
y = [0,1] # Sets the vertical domain, [0,1] is the default
),
orientation = "h", # h = horizontal orientation; could also be v = vertical
valueformat = "$0,f", # formats the numbers
node = dict(
pad = 10, # sets the padding in pixels between the nodes
thickness = 30, # sets the thickness of the nodes
line = dict( # sets the color and thickness in pixels of the line around each node box
color = "black",
width = 0.5
),
label = Labelsdf.Label, # assigns the labels to the nodes
color = Labelsdf.Node_Color # assigns color to the nodes
),
link = dict(
source = [ Labelsdf [Labelsdf.Code == val].index[0] for val in Sankeydf.Source ],
target = [ Labelsdf [Labelsdf.Code == val].index[0] for val in Sankeydf.Target ],
value = Sankeydf[year].dropna(axis=0, how='any'),
color = Sankeydf['Link_Color'],
)
)
layout = dict(
title = "Federal Budget<br>Fiscal Year October 1 - September 30, {}<br>Source: Fiscal Year 19 Federal Budget Authority <a href='https://www.govinfo.gov/app/details/BUDGET-2019-DB/context' >US Government Publishing Office</a> Published 12 Feb 2018".format(year),
# xaxis = {'title': "Dollars in 000's"}, There are no x and y axis labels so I'll need to note these in Markdown on the Dash application
# yaxis = {'title': "Level 1 - Budget Enforcement Act; Level 2 - Agency; Level 3 - Subfunction"},
height = 1900,
width = 2000,
font = dict(
size = 10
),
)
fig = dict(data=[data_trace], layout=layout)
iplot(fig, validate=False)
# -
# # Save the dataframes to CSV files so they can be moved to server and used with DASH app
Sankeydf.to_csv("Sankeydf.csv")
Labelsdf.to_csv("Labelsdf.csv")
Yearsdf.to_csv("years.csv")
# +
# Sankeydf.to_pickle("test_sankeydf.pickle")
# -
| 25,059 |
/climate_starter.ipynb
|
db81fc5ccd883db13b16367769cdb24a8c84ba91
|
[] |
no_license
|
jwu047/Homework_12
|
https://github.com/jwu047/Homework_12
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 638,530 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + hideCode=false hidePrompt=false slideshow={"slide_type": "skip"}
# This line will add a button to toggle visibility of code blocks,
# for use with the HTML export version
from IPython.core.display import HTML
HTML('''<button style="margin:0 auto; display: block;" onclick="jQuery('.code_cell .input_area').toggle();
jQuery('.prompt').toggle();">Toggle code</button>''')
# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "slide"}
# <img src="./Images/UoE_Horizontal_Logo_282_v1_160215.png" alt="drawing" width="600"/>
#
# # Week 10 - Clustering
# __Dr. David Elliott__
#
# 1. [Introduction](#intro)
#
# 2. [K-Means](#k)
#
# 3. [Picking K](#pick)
# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "subslide"}
# # 1. Introduction <a id='intro'></a>
#
# Most of this course has focused on supervised learning methods such as regression and classification.
#
# Here we look at a set of statistical tools intended for the setting in which we have a set of $p$ features $\mathbf{x}_1, \mathbf{x}_2,..., \mathbf{x}_p$ measured on $n$ observations, but no response $\mathbf{y}$ measured on those same $n$ observations. Rather than prediction, the goal is to discover interesting things about the measurements on $\mathbf{x}_1, \mathbf{x}_2,..., \mathbf{x}_p$.
#
# - Is there an informative way to visualize the data?
# - __Can we discover subgroups among the variables or among the observations?__
# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "notes"}
# __Notes__
# - _"In the supervised learning setting, we typically have access to a set of $p$ features $\mathbf{x}_1, \mathbf{x}_2,...,\mathbf{x}_p$, measured on $n$ observations, and a response $\mathbf{y}$ also measured on those same n observations. The goal is then to predict $\mathbf{y}$ using $\mathbf{x}_1, \mathbf{x}_2,...,\mathbf{x}_p$."_<sup>1</sup>
# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "subslide"}
# Unsupervised learning is often much more challenging than supervised learning.
#
# It tends to be more subjective, as it can be hard to assess the results obtained from unsupervised learning methods, and the goal of the analysis is less straightforward.
#
# Unsupervised learning is often performed as part of an exploratory data analysis.
# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "notes"}
# __Notes__
# - Due to being more subjective, we won't be using any "real" data for now while learning about it... we'll leave most of that for the applications notebook.
# - _"The reason for this difference is simple. If we fit a predictive model using a supervised learning technique, then it is possible to check our work by seeing how well our model predicts the response y on observations not used in fitting the model. However, in unsupervised learning, there is no way to check our work because we don’t know the true answer—the problem is unsupervised."_<sup>1</sup>
# + hideCode=false hidePrompt=false slideshow={"slide_type": "skip"}
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import matplotlib
import os
from matplotlib import animation, rc
from IPython.display import HTML
from sklearn.cluster import KMeans
from IPython.display import Image
image_dir = os.path.join(os.getcwd(),"Images")
data_dir = os.path.join(os.getcwd(),"..","Data")
matplotlib.rcParams['animation.embed_limit'] = 30000000.0
plt.rcParams['figure.dpi'] = 120
class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
# if you get the following warning then set to true, run the code, and restart the kernel:
### UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads.
### You can avoid it by setting the environment variable OMP_NUM_THREADS=2.
if True:
os.environ['OMP_NUM_THREADS'] = '2'
# Initial fig number
fig_num = 0
plt.rcParams['figure.dpi'] = 120
# golden ratio for figures ()
gr = 1.618
height_pix = 500
width_pix = height_pix*gr
height_inch = 4
width_inch = height_inch*gr
# Pdf conversion can't seem to handle the animations
PDF=True
# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "slide"}
# ## Clustering
#
# Clustering covers a broad class of methods for discovering unknown subgroups in data.
#
# Clustering the observations of a dataset means seeking to partition them into distinct groups so that observations within each group are similar, while observations in different groups are dissimilar.
#
# Humans can identify clusters easily. For example, how many clusters are in the following plot?
# + hideCode=false hidePrompt=false slideshow={"slide_type": "fragment"}
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
# set a random seed
np.random.seed(1)
# A list of centres of clusters
true_centres = [[1, 1], [5, 3], [-2, -4]]
points, clusters = make_blobs(
n_samples=[50, 100, 150], # number of samples in each cluster
centers=true_centres, # where are the centres?
random_state=1
)
labelled_points = pd.DataFrame({'cluster': clusters,
'x1': points[:, 0],
'x2': points[:, 1]})
def clusters_plt(title=None):
plt.scatter(labelled_points['x1'], labelled_points['x2'], cmap=plt.cm.tab10, alpha=0.7)
if title:
plt.title(title)
plt.show()
fig_num+=1
clusters_plt("Figure %d: Example Clusters"%fig_num)
# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "subslide"}
# ... if you guessed 3 you are correct, because I made the data have three centres with some noise around them.
#
# However, for an algorithm to cluster data we must define what it means for two or more observations to be similar or different, and this decision is typically a domain-specific consideration<sup>1</sup>.
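# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "notes"}
# A minimal sketch of one common choice, assuming Euclidean distance as the dissimilarity measure: the pairwise distances between the first few simulated points are exactly the quantities a method such as K-means tries to keep small within a cluster.

# + hideCode=false hidePrompt=false slideshow={"slide_type": "skip"}
# Pairwise Euclidean distances between the first 5 simulated points (illustrative sketch)
from sklearn.metrics import pairwise_distances
print(pairwise_distances(points[:5], metric='euclidean').round(2))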
# + hideCode=false hidePrompt=false slideshow={"slide_type": "fragment"}
plt.scatter(labelled_points['x1'], labelled_points['x2'], cmap=plt.cm.tab10, alpha=0.7)
centres_df = pd.DataFrame.from_records(true_centres, columns=['x', 'y'])
sns.scatterplot(x=centres_df['x'], y=centres_df['y'], alpha=0.9, marker='o', s=120, linewidths=8,
color='w')
sns.scatterplot(x=centres_df['x'], y=centres_df['y'], marker='x', s=60, linewidths=20, color='k')
fig_num+=1
plt.title("Figure %d: Example Cluster Centres"%fig_num)
plt.show()
# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "subslide"}
# In the real world we don't have the ground truth of what the categories are, so our goal is to group observations based on feature similarities.
#
# In this series of lectures we discuss three of the best-known clustering approaches: K-means clustering, hierarchical clustering, and density-based clustering<sup>1</sup>:
#
# - In __K-means clustering__, we seek to partition the observations into a pre-specified number of clusters.
#
# - In __Hierarchical clustering__, we use a tree-like visual representation of the observations (_dendrogram_) to view the clusterings obtained for each possible number of clusters.
#
# - In __density-based clustering__, we can identify arbitrary shapes by looking at the density of observations.
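# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "notes"}
# A minimal sketch of the first approach, assuming the simulated blob data from above and a pre-specified $K=3$: scikit-learn's `KMeans` recovers centres close to `true_centres`.

# + hideCode=false hidePrompt=false slideshow={"slide_type": "skip"}
# K-means with K=3 on the simulated points (illustrative sketch)
km_sketch = KMeans(n_clusters=3, random_state=1).fit(points)
print(km_sketch.cluster_centers_.round(2))  # estimated cluster centres
print(km_sketch.labels_[:10])               # cluster assignments of the first 10 points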
# + [markdown] hideCode=false hidePrompt=false slideshow={"slide_type": "notes"}
# __Notes__
#
# - Assigning data to clusters would be easy if we had the true...
# - ...center of the cluster (centeroids), as we would just assign each point to its closest centeroid.
# - ...labels, as we would just compute the mean of each label to find the centroids.
#
# - There is a fourth advanced type of clustering we do not address, __graph-based clustering__.
| 8,188 |
/recall_precision/.ipynb_checkpoints/recall_precision_example-checkpoint.ipynb
|
1ebf8afa33c5eafdbde9bde662d0e0df938ecb43
|
[
"MIT"
] |
permissive
|
nisheethjaiswal/Data-Analysis
|
https://github.com/nisheethjaiswal/Data-Analysis
| 1 | 1 |
MIT
| 2019-03-16T09:13:21 | 2019-03-16T04:18:02 | null |
Jupyter Notebook
| false | false |
.py
| 57,476 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="hVcWncA_iJR9" colab_type="code" colab={}
# !git clone 'https://github.com/Shenggan/BCCD_Dataset.git'
# + [markdown] id="H4e9h0HfVA30" colab_type="text"
# #**DATA PRE-PROCESSING STARTS**
# + [markdown] id="iccVsnt7VJk1" colab_type="text"
# # Extraction of data labels from .xml file to dataframe
# + id="Qj5wjt0-fRbS" colab_type="code" colab={}
import shutil
import os, sys, random
import xml.etree.ElementTree as ET
from glob import glob
import pandas as pd
from shutil import copyfile
import pandas as pd
from sklearn import preprocessing, model_selection
import matplotlib.pyplot as plt
# %matplotlib inline
from matplotlib import patches
import numpy as np
import os
# + id="4_8R_DIIiQEY" colab_type="code" colab={}
annotations = sorted(glob('/content/BCCD_Dataset/BCCD/Annotations/*.xml'))
df = []
cnt = 0
for file in annotations:
prev_filename = file.split('/')[-1].split('.')[0] + '.jpg'
filename = str(cnt) + '.jpg'
row = []
parsedXML = ET.parse(file)
for node in parsedXML.getroot().iter('object'):
blood_cells = node.find('name').text
xmin = int(node.find('bndbox/xmin').text)
xmax = int(node.find('bndbox/xmax').text)
ymin = int(node.find('bndbox/ymin').text)
ymax = int(node.find('bndbox/ymax').text)
row = [prev_filename, filename, blood_cells, xmin, xmax, ymin, ymax]
df.append(row)
cnt += 1
data = pd.DataFrame(df, columns=['prev_filename', 'filename', 'cell_type', 'xmin', 'xmax', 'ymin', 'ymax'])
data[['prev_filename','filename', 'cell_type', 'xmin', 'xmax', 'ymin', 'ymax']].to_csv('/content/blood_cell_detection.csv', index=False)
# + [markdown] id="_6cRsoE6VRAD" colab_type="text"
# # Processing data as per the YOLO_V5 format
# + [markdown] id="Onr1Vqm4x4qz" colab_type="text"
# **DATAFRAME STRUCTURE**
#
# - filename : contains the name of the image
# - cell_type: denotes the type of the cell
# - xmin: minimum x-coordinate of the bounding box (left edge)
# - xmax: maximum x-coordinate of the bounding box (right edge)
# - ymin: minimum y-coordinate of the bounding box
# - ymax: maximum y-coordinate of the bounding box
# - labels : Encoded cell-type **(Yolo - label input-1)**
# - width : width of that bbox
# - height : height of that bbox
# - x_center : bbox center (x-axis)
# - y_center : bbox center (y-axis)
# - x_center_norm : x_center normalized (0-1) **(Yolo - label input-2)**
# - y_center_norm : y_center normalized (0-1) **(Yolo - label input-3)**
# - width_norm : width normalized (0-1) **(Yolo - label input-4)**
# - height_norm : height normalized (0-1) **(Yolo - label input-5)**
# + id="2rybfBj3mwBV" colab_type="code" colab={}
img_width = 640
img_height = 480
def width(df):
return int(df.xmax - df.xmin)
def height(df):
return int(df.ymax - df.ymin)
def x_center(df):
return int(df.xmin + (df.width/2))
def y_center(df):
return int(df.ymin + (df.height/2))
def w_norm(df):
return df/img_width
def h_norm(df):
return df/img_height
df = pd.read_csv('/content/blood_cell_detection.csv')
le = preprocessing.LabelEncoder()
le.fit(df['cell_type'])
print(le.classes_)
labels = le.transform(df['cell_type'])
df['labels'] = labels
df['width'] = df.apply(width, axis=1)
df['height'] = df.apply(height, axis=1)
df['x_center'] = df.apply(x_center, axis=1)
df['y_center'] = df.apply(y_center, axis=1)
df['x_center_norm'] = df['x_center'].apply(w_norm)
df['width_norm'] = df['width'].apply(w_norm)
df['y_center_norm'] = df['y_center'].apply(h_norm)
df['height_norm'] = df['height'].apply(h_norm)
df.head(30)
# + id="pk4xn6BJ4B6q" colab_type="code" cellView="form" colab={}
#@title SAMPLE PLOT - shape (480, 640, 3)
fig = plt.figure()
import cv2
#add axes to the image
ax = fig.add_axes([0,0,1,1])
# read and plot the image
image = plt.imread('/content/BCCD_Dataset/BCCD/JPEGImages/BloodImage_00001.jpg')
plt.imshow(image)
# iterating over the image for different objects
for _,row in df[df.filename == "1.jpg"].iterrows():
xmin = row.xmin
xmax = row.xmax
ymin = row.ymin
ymax = row.ymax
width = xmax - xmin
height = ymax - ymin
# assign different color to different classes of objects
if row.cell_type == 'RBC':
edgecolor = 'r'
ax.annotate('RBC', xy=(xmax-40,ymin+20))
elif row.cell_type == 'WBC':
edgecolor = 'b'
ax.annotate('WBC', xy=(xmax-40,ymin+20))
elif row.cell_type == 'Platelets':
edgecolor = 'g'
ax.annotate('Platelets', xy=(xmax-40,ymin+20))
# add bounding boxes to the image
rect = patches.Rectangle((xmin,ymin), width, height, edgecolor = edgecolor, facecolor = 'none')
ax.add_patch(rect)
# + [markdown] id="CLnE5tWOVaKM" colab_type="text"
# # Splitting into training and validation datasets
# + id="gRrIQI7m8H5P" colab_type="code" colab={}
df_train, df_valid = model_selection.train_test_split(df, test_size=0.1, random_state=13, shuffle=True)
print(df_train.shape, df_valid.shape)
# + id="zO6iT6rQ-2f3" colab_type="code" colab={}
os.mkdir('/content/bcc/')
os.mkdir('/content/bcc/images/')
os.mkdir('/content/bcc/images/train/')
os.mkdir('/content/bcc/images/valid/')
os.mkdir('/content/bcc/labels/')
os.mkdir('/content/bcc/labels/train/')
os.mkdir('/content/bcc/labels/valid/')
# + [markdown] id="bmsqg2dYACVr" colab_type="text"
# # Data segregation and moving to it's corresponding folders
# - BCC
# - Images
# - Train (364 images [.jpg files])
# - Valid (270 images [.jpg files])
# - Labels
# - Train (364 labels [.txt files])
# - Valid (270 labels [.txt files])
#
# + [markdown] id="2IACNN0QxC4s" colab_type="text"
# **STRUCTURE OF .txt FILE**
#
# - One row per object
# - Each row is class x_center y_center width height format.
# - Box coordinates must be in normalized xywh format (from 0 - 1). If your boxes are in pixels, divide x_center and width by image width, and y_center and height by image height.
# - Class numbers are zero-indexed (start from 0).
#
# + [markdown] id="_VgUi-pQ0BZo" colab_type="text"
# <img src="https://github.com/bala-codes/Yolo-v5_Object_Detection_Blood_Cell_Count_and_Detection/blob/master/imgs/label_txt.PNG?raw=true" width="50%">
#
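# + [markdown]
# A worked example of one such row (a sketch using made-up box coordinates, not values taken from the dataset): for a 640x480 image and an RBC box from (xmin, ymin) = (100, 200) to (xmax, ymax) = (200, 300), the label row is computed as follows.

# +
# Sketch: convert one hypothetical bounding box into a YOLO label row
img_w, img_h = 640, 480
xmin, ymin, xmax, ymax = 100, 200, 200, 300
label_id = 1  # 'RBC' under the alphabetical label encoding used above (assumed)
x_c = ((xmin + xmax) / 2) / img_w  # normalized x center -> 0.234375
y_c = ((ymin + ymax) / 2) / img_h  # normalized y center -> 0.520833...
w = (xmax - xmin) / img_w          # normalized width    -> 0.15625
h = (ymax - ymin) / img_h          # normalized height   -> 0.208333...
print(label_id, x_c, y_c, w, h)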
# + id="Bo21M14uF25h" colab_type="code" colab={}
def segregate_data(df, img_path, label_path, train_img_path, train_label_path):
filenames = []
for filename in df.filename:
filenames.append(filename)
filenames = set(filenames)
for filename in filenames:
yolo_list = []
for _,row in df[df.filename == filename].iterrows():
yolo_list.append([row.labels, row.x_center_norm, row.y_center_norm, row.width_norm, row.height_norm])
yolo_list = np.array(yolo_list)
txt_filename = os.path.join(train_label_path,str(row.prev_filename.split('.')[0])+".txt")
# Save the .img & .txt files to the corresponding train and validation folders
np.savetxt(txt_filename, yolo_list, fmt=["%d", "%f", "%f", "%f", "%f"])
shutil.copyfile(os.path.join(img_path,row.prev_filename), os.path.join(train_img_path,row.prev_filename))
# + id="4uy4rLGYSBd7" colab_type="code" colab={}
# %%time
src_img_path = "/content/BCCD_Dataset/BCCD/JPEGImages/"
src_label_path = "/content/BCCD_Dataset/BCCD/Annotations/"
train_img_path = "/content/bcc/images/train"
train_label_path = "/content/bcc/labels/train"
valid_img_path = "/content/bcc/images/valid"
valid_label_path = "/content/bcc/labels/valid"
segregate_data(df_train, src_img_path, src_label_path, train_img_path, train_label_path)
segregate_data(df_valid, src_img_path, src_label_path, valid_img_path, valid_label_path)
# + id="LDv31-CdS_nt" colab_type="code" colab={}
try:
shutil.rmtree('/content/bcc/images/train/.ipynb_checkpoints')
except FileNotFoundError:
pass
try:
shutil.rmtree('/content/bcc/images/valid/.ipynb_checkpoints')
except FileNotFoundError:
pass
try:
shutil.rmtree('/content/bcc/labels/train/.ipynb_checkpoints')
except FileNotFoundError:
pass
try:
shutil.rmtree('/content/bcc/labels/valid/.ipynb_checkpoints')
except FileNotFoundError:
pass
print("No. of Training images", len(os.listdir('/content/bcc/images/train')))
print("No. of Training labels", len(os.listdir('/content/bcc/labels/train')))
print("No. of valid images", len(os.listdir('/content/bcc/images/valid')))
print("No. of valid labels", len(os.listdir('/content/bcc/labels/valid')))
# + [markdown] id="V2uPhrQCU6vT" colab_type="text"
# # **END OF DATA PRE-PROCESSING**
# + [markdown] id="gjHmEIfmWNms" colab_type="text"
# #**YOLO V5 STARTS**
# + id="05kiA297y2s3" colab_type="code" colab={}
# !mkdir -p '/content/drive/My Drive/Machine Learning Projects/YOLO/'
# !cp -r '/content/bcc' '/content/drive/My Drive/Machine Learning Projects/YOLO/'
# + [markdown] id="lcA59GtHeCrd" colab_type="text"
# # Cloning from the yolo v5 repo.
# More can be found at here : [yolo](https://github.com/ultralytics/yolov5)
# + id="YnhcNiuTGAK7" colab_type="code" colab={}
# !git clone 'https://github.com/ultralytics/yolov5.git'
# + id="A4i0BpZIbyTz" colab_type="code" colab={}
# !pip install -qr '/content/yolov5/requirements.txt' # install dependencies
# + [markdown] id="KUsMKPtGeUcv" colab_type="text"
# # WE SHOULD CREATE A .yaml FILE AND THEN PLACE IT INSIDE THE yolov5 FOLDER
# + [markdown] id="bqymagYif3Us" colab_type="text"
# #**Contents of YAML file**
#
# train: /content/bcc/images/train
# val: /content/bcc/images/valid
#
# nc: 3
#
# names: ['Platelets', 'RBC', 'WBC']
#
# + [markdown] id="JNUfz-i21pVU" colab_type="text"
# <img src="https://github.com/bala-codes/Yolo-v5_Object_Detection_Blood_Cell_Count_and_Detection/blob/master/imgs/bcc_yaml.PNG?raw=true" width="50%">
#
#
# + id="IgIO3balXB-B" colab_type="code" colab={}
# !echo -e 'train: /content/bcc/images/train\nval: /content/bcc/images/valid\n\nnc: 3\nnames: ['Platelets', 'RBC', 'WBC']' >> bcc.yaml
# !cat 'bcc.yaml'
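# + [markdown]
# An equivalent way to create the same file from plain Python (a sketch assuming the same Colab paths as above).

# +
# Sketch: write bcc.yaml from Python instead of echo
yaml_text = (
    "train: /content/bcc/images/train\n"
    "val: /content/bcc/images/valid\n"
    "\n"
    "nc: 3\n"
    "names: ['Platelets', 'RBC', 'WBC']\n"
)
with open('/content/bcc.yaml', 'w') as f:
    f.write(yaml_text)
print(open('/content/bcc.yaml').read())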
# + id="31-z05sIcMcv" colab_type="code" colab={}
shutil.copyfile('/content/bcc.yaml', '/content/yolov5/bcc.yaml')
# + [markdown] id="7gUcPKfsDlEQ" colab_type="text"
# #**Also edit the number of classes (nc) in the ./models/*.yaml file**
#
# Choose the yolo model of your choice, here I chose yolov5s.yaml (yolo - small)
#
# + id="0cXuPZjRhg3o" colab_type="code" colab={}
# !sed -i 's/nc: 80/nc: 3/g' ./yolov5/models/yolov5s.yaml
# + [markdown] id="zL2G0EG1EJQs" colab_type="text"
# <img src="https://cdn-images-1.medium.com/max/600/1*hCE5VwKkqHlZW466umYTTA.png">
# + [markdown] id="g_4-F3I2gVIN" colab_type="text"
# # Training command
# + [markdown] id="127Pw1oS1zzY" colab_type="text"
# **Training Parameters**
#
# # # !python
# - <'location of train.py file'>
# - --img <'width of image'>
# - --batch <'batch size'>
# - --epochs <'no of epochs'>
# - --data <'location of the .yaml file'>
# - --cfg <'Which yolo configuration you want'>(yolov5s/yolov5m/yolov5l/yolov5x).yaml | (small, medium, large, xlarge)
# - --name <'Name of the best model after training'>
# + [markdown] id="Ztjc7_wS5z2J" colab_type="text"
# **METRICS FROM TRAINING PROCESS**
#
# **No. of classes, No. of images, No. of targets, Precision (P), Recall (R), mean Average Precision (mAP)**
# - Class | Images | Targets | P | R | mAP@.5 | mAP@.5:.95
# - all | 270 | 489 | 0.0899 | 0.827 | 0.0879 | 0.0551
# + id="k3Tc61Qzd4lY" colab_type="code" colab={}
# %%time
# !python yolov5/train.py --img 640 --batch 8 --epochs 100 --data bcc.yaml --cfg models/yolov5s.yaml --name BCCM
# + id="i7upZcFvhWhN" colab_type="code" colab={}
# Start tensorboard (optional)
# %load_ext tensorboard
# %tensorboard --logdir runs/
# + [markdown] id="DJYN1lb_uV-T" colab_type="text"
# #**INFERENCE**
# + id="FHnXEx4subZ0" colab_type="code" colab={}
#Optimizer stripped from runs/exp2_BCCM/weights/last_BCCM.pt, 14.8MB
#Optimizer stripped from runs/exp2_BCCM/weights/best_BCCM.pt, 14.8MB
# + [markdown] id="obKZFwYHvg6a" colab_type="text"
# # BATCH PREDICTION
# - Results saved to inference/output
#
# + [markdown] id="jtCUnoyr7h8y" colab_type="text"
# **Inference Parameters**
#
# # # !python
# - <'location of detect.py file'>
# - --source <'location of image/ folder to predict'>
# - --weight <'location of the saved best weights'>
# - --output <'location of output files after prediction'>
# + id="PoDLzE4xu_Bo" colab_type="code" colab={}
# !python yolov5/detect.py --source /content/bcc/images/valid/ --weights '/content/runs/exp0_BCCM/weights/best.pt' --output '/content/inference/output'
# + id="1UFERGRGwOEQ" colab_type="code" colab={}
disp_images = glob('/content/inference/output/*')
fig=plt.figure(figsize=(20, 28))
columns = 3
rows = 5
for i in range(1, columns*rows +1):
img = np.random.choice(disp_images)
img = plt.imread(img)
fig.add_subplot(rows, columns, i)
plt.imshow(img)
plt.show()
# + [markdown] id="wrXZ10ikvnDB" colab_type="text"
# # SINGLE IMAGE PREDICTIONS
#
# + id="RuIiMYKdvRi1" colab_type="code" colab={}
# output = !python yolov5/detect.py --source /content/bcc/images/valid/BloodImage_00000.jpg --weights '/content/runs/exp0_BCCM/weights/best_BCCM.pt'
print(output)
# + [markdown] id="-1ZarG561ak5" colab_type="text"
# # You need these files, if you wish to move the model to production
# + [markdown] id="AxI5iupx_jd1" colab_type="text"
# ## Files
# + id="NSAegF-48fHj" colab_type="code" colab={}
shutil.copyfile('/content/yolov5/detect.py', '/content/drive/My Drive/Machine Learning Projects/YOLO/SOURCE/detect.py')
shutil.copyfile('/content/yolov5/requirements.txt', '/content/drive/My Drive/Machine Learning Projects/YOLO/SOURCE/requirements.txt')
shutil.copyfile('/content/runs/exp2_BCCM/weights/best_BCCM.pt', '/content/drive/My Drive/Machine Learning Projects/YOLO/SOURCE/best_BCCM.pt')
# + [markdown] id="Kd-aarpL_lB1" colab_type="text"
# ## Folder
# + id="I1Gup2m3vv_M" colab_type="code" colab={}
# !cp -r '/content/yolov5/models' '/content/drive/My Drive/Machine Learning Projects/YOLO/SOURCE/'
# !cp -r '/content/yolov5/utils' '/content/drive/My Drive/Machine Learning Projects/YOLO/SOURCE/'
| 14,243 |
/program/3_3_digitized_CartPole.ipynb
|
e6a55382fc7ab50dbb81ee94e6310bc8c2d29121
|
[] |
no_license
|
junjunjunk/Deep-Reinforcement-Learning-Book
|
https://github.com/junjunjunk/Deep-Reinforcement-Learning-Book
| 0 | 0 | null | 2019-12-17T12:27:44 | 2019-11-23T18:43:39 | null |
Jupyter Notebook
| false | false |
.py
| 3,384 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 3.3 Discretizing the CartPole state
# Declare the packages to use
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import gym
# Set the constants
ENV = 'CartPole-v0'  # name of the task to run
NUM_DIZITIZED = 6  # number of bins used to discretize each state variable
# Run CartPole
env = gym.make(ENV)  # create the environment for the task
observation = env.reset()  # initialize the environment
# Compute the thresholds used for discretization
# np.linspace generates an evenly spaced sequence of numbers
def bins(clip_min, clip_max, num):
    '''Compute the thresholds for converting the observed continuous state into discrete values'''
return np.linspace(clip_min, clip_max, num + 1)[1:-1]
np.linspace(-2.4, 2.4, 6 + 1)
np.linspace(-2.4, 2.4, 6 + 1)[1:-1]
def digitize_state(observation):
    '''Convert the observed state into a discrete value'''
cart_pos, cart_v, pole_angle, pole_v = observation
digitized = [
np.digitize(cart_pos, bins=bins(-2.4, 2.4, NUM_DIZITIZED)),
np.digitize(cart_v, bins=bins(-3.0, 3.0, NUM_DIZITIZED)),
np.digitize(pole_angle, bins=bins(-0.5, 0.5, NUM_DIZITIZED)), # 0.5 radian = 29 angle
np.digitize(pole_v, bins=bins(-2.0, 2.0, NUM_DIZITIZED))]
    return sum([x * (NUM_DIZITIZED**i) for i, x in enumerate(digitized)])  # combine the 4 variables into a single state index in 0-1295
# The combination is computed as a base-NUM_DIZITIZED number
digitize_state(observation)
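# Worked example of the encoding (a quick sketch with the same NUM_DIZITIZED = 6):
# digitized values [1, 2, 3, 4] map to 1*6**0 + 2*6**1 + 3*6**2 + 4*6**3 = 1 + 12 + 108 + 864 = 985
sum([x * (NUM_DIZITIZED**i) for i, x in enumerate([1, 2, 3, 4])]) # -> 985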
| 1,395 |
/chapter7-2.ipynb
|
73c764948b573fa4be85811313ce8b462f65ea77
|
[
"MIT"
] |
permissive
|
jacobguang/python-ML-principles-and-practice
|
https://github.com/jacobguang/python-ML-principles-and-practice
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 332,189 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Modules to import for this chapter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pylab import *
import matplotlib.cm as cm
import warnings
warnings.filterwarnings(action = 'ignore')
# %matplotlib inline
plt.rcParams['font.sans-serif']=['SimHei'] # use the SimHei font so Chinese labels display correctly
plt.rcParams['axes.unicode_minus']=False
from mpl_toolkits.mplot3d import Axes3D
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.metrics import zero_one_loss,r2_score,mean_squared_error
import sklearn.neural_network as net
# +
N=800
X,Y=make_circles(n_samples=N,noise=0.2,factor=0.5,random_state=123)
unique_lables=set(Y)
X1,X2= np.meshgrid(np.linspace(X[:,0].min(),X[:,0].max(),50),np.linspace(X[:,1].min(),X[:,1].max(),50))
X0=np.hstack((X1.reshape(len(X1)*len(X2),1),X2.reshape(len(X1)*len(X2),1)))
fig,axes=plt.subplots(nrows=2,ncols=2,figsize=(15,12))
colors=plt.cm.Spectral(np.linspace(0,1,len(unique_lables)))
markers=['o','*']
for hn,H,L in [(1,0,0),(2,0,1),(4,1,0),(30,1,1)]:
NeuNet=net.MLPClassifier(hidden_layer_sizes=(hn,),random_state=123)
NeuNet.fit(X,Y)
Y0=NeuNet.predict(X0)
axes[H,L].scatter(X0[np.where(Y0==0),0],X0[np.where(Y0==0),1],c='mistyrose')
axes[H,L].scatter(X0[np.where(Y0==1),0],X0[np.where(Y0==1),1],c='lightgray')
axes[H,L].set_xlabel('X1')
axes[H,L].set_ylabel('X2')
    axes[H,L].set_title('MLP decision boundary (layers=%d, hidden nodes=%d, error rate=%.2f)'%(NeuNet.n_layers_,hn,1-NeuNet.score(X,Y)))
for k,col,m in zip(unique_lables,colors,markers):
axes[H,L].scatter(X[Y==k,0],X[Y==k,1],color=col,s=30,marker=m)
# -
# Note: using simulated data, this demonstrates intuitively that, through the spatial transformations carried out by multiple hidden nodes, a multilayer perceptron can handle nonlinear classification problems well.
# 1. MLPClassifier implements multilayer-perceptron classification. The parameter hidden_layer_sizes specifies the number of hidden layers and the number of nodes in each; for example, (100, 20) means two hidden layers containing 100 and 20 hidden nodes respectively.
# 2. By default MLPClassifier uses the ReLU activation function, a maximum of 200 iterations, and a three-layer network (one hidden layer with 100 hidden nodes). It optimizes with the Adam stochastic gradient algorithm; SGD can be specified instead.
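# A minimal sketch of point 1 above, using illustrative settings on the same simulated data: two hidden layers with 100 and 20 nodes.
# +
NeuNet2 = net.MLPClassifier(hidden_layer_sizes=(100, 20), random_state=123)
NeuNet2.fit(X, Y)
print('Layers = %d, training error = %.2f' % (NeuNet2.n_layers_, 1 - NeuNet2.score(X, Y)))
# -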
optimizer.setup(model)
model.predictor['fc'].W.update_rule.hyperparam.lr = alpha*10
model.predictor['fc'].b.update_rule.hyperparam.lr = alpha*10
model.to_gpu(0)
epoch_num = 15
validate_size = 89
batch_size = 30
# train, test = chainer.datasets.split_dataset_random(dataset, N-validate_size)
train, test = chainer.datasets.split_dataset_random(dataset, N-2*validate_size)
test, validate = chainer.datasets.split_dataset_random(test, validate_size)
train_iter = chainer.iterators.SerialIterator(train, batch_size)
test_iter = chainer.iterators.SerialIterator(test, batch_size, repeat=False, shuffle=False)
updater = training.StandardUpdater(train_iter, optimizer, device=gpu)
trainer = training.Trainer(updater, (epoch_num, 'epoch'), out='result')
trainer.extend(extensions.Evaluator(test_iter, model, device=gpu))
trainer.extend(extensions.LogReport())
trainer.extend(extensions.PrintReport(['epoch', 'main/loss', 'validation/main/loss', 'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
trainer.extend(extensions.PlotReport(['main/loss', 'validation/main/loss'], 'epoch', file_name='loss.png'))
trainer.extend(extensions.PlotReport(['main/accuracy', 'validation/main/accuracy'], 'epoch', file_name='accuracy.png'))
trainer.run()
ys_pre, ys, features = [], [], []
for path, label in tqdm(zip(paths, labels)):
img = Image.open(path)
img = L.model.vision.vgg.prepare(img)
img = img[np.newaxis, :]
img = cuda.to_gpu(img)
y_pre = model.predictor(img)
y_pre = y_pre.data.reshape(-1)
y_pre = np.argmax(y_pre)
feature = model.predictor(img, extract_feature=True)
feature = feature.data.reshape(-1)
feature = cuda.to_cpu(feature)
ys_pre.append(y_pre)
ys.append(label)
features.append(feature)
print(validate[1][0])
pilOUT = Image.fromarray(validate[1][0])
img = Image.open(paths[0])
print(img)
ys_pre, ys, features = [], [], []
for i in range(0,len(validate)-1):
img = validate[i][0]
img = L.model.vision.vgg.prepare(img)
img = img[np.newaxis, :]
img = cuda.to_gpu(img)
y_pre = model.predictor(img)
y_pre = y_pre.data.reshape(-1)
y_pre = np.argmax(y_pre)
feature = model.predictor(img, extract_feature=True)
feature = feature.data.reshape(-1)
feature = cuda.to_cpu(feature)
ys_pre.append(y_pre)
ys.append(validate[i][1])
features.append(feature)
# +
ys = np.array(ys, dtype=np.int32)
ys_pre = np.array(ys_pre, dtype=np.int32)
plt.imshow(confusion_matrix(ys, ys_pre), interpolation='nearest')
plt.show()
# -
def plot_confusion_matrix(cm, # classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
# tick_marks = np.arange(len(classes))
# plt.xticks(tick_marks, classes, rotation=45)
# plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(ys, ys_pre)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
# class_names =
plt.figure()
plot_confusion_matrix(cnf_matrix, # classes=class_names,
title='Confusion matrix, without normalization')
features = np.array(features, dtype=np.float32)
tsne_model = TSNE(n_components=2).fit_transform(features)
# +
canvas_size = (1500, 1500)
img_size = (50, 50)
canvas = Image.new('RGB', canvas_size)
val_max = np.array(tsne_model).max()
val_min = np.array(tsne_model).min()
# for i, path in enumerate(paths):
# pos_x = int(tsne_model[i][0]*(canvas_size[0]/img_size[0])/(val_max-val_min))*img_size[0]
# pos_y = int(tsne_model[i][1]*(canvas_size[1]/img_size[1])/(val_max-val_min))*img_size[1]
# pos = (int(pos_x+canvas_size[0]/2), int(pos_y+canvas_size[1]/2))
# target_img = Image.open(path)
# target_img = target_img.resize(img_size)
# canvas.paste(target_img, pos)
# target_img.close()
for i in range(0,len(validate)-1):
pos_x = int(tsne_model[i][0]*(canvas_size[0]/img_size[0])/(val_max-val_min))*img_size[0]
pos_y = int(tsne_model[i][1]*(canvas_size[1]/img_size[1])/(val_max-val_min))*img_size[1]
pos = (int(pos_x+canvas_size[0]/2), int(pos_y+canvas_size[1]/2))
target_img = validate[i][0]
# target_img = L.model.vision.vgg.prepare(target_img)
# target_img = target_img[np.newaxis, :]
target_img = target_img.resize(img_size)
canvas.paste(target_img, pos)
target_img.close()
print("i=", i)
plt.figure(figsize=(15,15))
plt.imshow(np.array(canvas))
plt.axis('off')
plt.show()
# +
def cos_sim_matrix(matrix):
    # Pairwise cosine similarity between rows: dot products divided by the outer product of row norms
    d = matrix @ matrix.T
    norm = (matrix * matrix).sum(axis=1, keepdims=True) ** .5  # L2 norm of each row
    return d / norm / norm.T
cos_sims = cos_sim_matrix(features)
# -
samples = np.random.randint(0, len(paths), 30)
for i in samples:
sim_idxs = np.argsort(cos_sims[i])[::-1]
sim_idxs = np.delete(sim_idxs, np.where(sim_idxs==i))
sim_num = 3
sim_idxs = sim_idxs[:sim_num]
fig, axs = plt.subplots(ncols=sim_num+1, figsize=(15, sim_num))
img = Image.open(paths[i])
axs[0].imshow(img)
axs[0].set_title('target\n'+str(labels[i]))
axs[0].axis('off')
for j in range(sim_num):
img = Image.open(paths[sim_idxs[j]])
axs[j+1].imshow(img)
axs[j+1].set_title(str(cos_sims[i, sim_idxs[j]])+'\n'+str(labels[sim_idxs[j]]))
axs[j+1].axis('off')
plt.show()
| 8,290 |
/201/create-devices-requests.ipynb
|
01092272b44b917d3de60a2ec960149b92c1964b
|
[] |
no_license
|
jrasnier/netbox-training
|
https://github.com/jrasnier/netbox-training
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 17,769 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
# # Headers
headers = {
'Authorization': 'Token c2832328bb7c5b7ce8fc17139382adadc676a257',
'Accept': 'application/json',
'Content-Type': 'application/json'
}
# # Generate Devices
device_names = ['access-1', 'access-2', 'access-3', 'access-4', 'access-5', 'access-6', 'access-7', 'access-8']
devices = []
for name in device_names:
device = {
'name': name,
'device_type':{
'model': 'EX4300-48T'
},
'site': {
'name': 'Branch #501'
},
'device_role': {
'name': 'Access Switch'
}
}
devices.append(device)
devices
# # Make request to create devices
response = requests.post('http://localhost:8000/api/dcim/devices/', headers=headers, json=devices)
response.json()
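# # Check the response status (a sketch: a 201 status code indicates the devices were created)
if response.status_code == 201:
    print('Created', len(response.json()), 'devices')
else:
    print('Request failed:', response.status_code, response.text)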
| 1,066 |
/task_1/task_1.ipynb
|
68f89b78c345af1a982096c0019315112cdf27c1
|
[] |
no_license
|
mahhets/Parsers_and_Scrappers
|
https://github.com/mahhets/Parsers_and_Scrappers
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,979 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
import json
# +
repos = requests.get("https://api.github.com/users/golang/repos")
a = [i['name'] for i in repos.json()]
# I'm not entirely sure what needs to be saved, so I'm saving the whole output.
# By the way, the output doesn't include all the repositories and I can't figure out why. Maybe some of them are only visible to authenticated users?
with open("task_1_repos.json", 'w') as f:
json.dump(repos.json(),f)
a
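# A likely reason not every repository appears (see the note above): the GitHub API paginates
# results, returning 30 items per page by default. A sketch that raises per_page and walks the pages:
all_names = []
page = 1
while True:
    r = requests.get("https://api.github.com/users/golang/repos",
                     params={"per_page": 100, "page": page})
    batch = r.json()
    if not isinstance(batch, list) or not batch:
        break
    all_names.extend(item['name'] for item in batch)
    page += 1
print(len(all_names))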
| 652 |
/Kaggle 15 minute EURUSd.ipynb
|
146be8ae031d1b6ecbfa00dd0c5ea4c17615fdbf
|
[] |
no_license
|
inetkenya/teste-keras
|
https://github.com/inetkenya/teste-keras
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 24,754 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="901455c8-d5fd-4dfb-86a6-44d15a3acc6f" _uuid="cc3143d11be64e4b26d7b3e264f271f1c4ccd904"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
# from subprocess import check_output
# print(check_output(["ls", "../input"]).decode("utf8"))
# + _cell_guid="52b27b36-fbc4-4e45-a220-c664ab6dfbc7" _uuid="3995058fb77ae366c7b3be525af2862b5007c60b"
# Load sample data
df = pd.read_csv('EURUSD_M12.csv',sep='\t',encoding='utf-16')#('../input/EURUSD_15m_BID_sample.csv')
# + _cell_guid="ec9e6dba-442c-488d-92d0-916e7b6703a2" _uuid="8f705cac308b99530a90f830e803854e7b26e595"
df.count()
# + _cell_guid="d381458f-2693-4d23-8d79-62fefa1e8ca6" _uuid="2c6600772d5ae8b592c9fef7f29dc93832589a7e"
df.index.min(), df.index.max()
# + _cell_guid="4aaaa570-2155-4c96-9abd-78c74ed89016" _uuid="aa4d293f227bb0a3edf0faa9e277fa9ee089a517"
# FULL DATA (takes too long)
# df = pd.read_csv('../input/EURUSD_15m_BID_01.01.2010-31.12.2016.csv')
# + _cell_guid="4a7283d8-564f-40c4-b636-3587fa6990bf" _uuid="305dfff7460dabfae7ed2e6ac3f5f5266615d46f"
# Rename bid OHLC columns
# df.rename(columns={'Time' : 'timestamp', 'Open' : 'open', 'Close' : 'close',
# 'High' : 'high', 'Low' : 'low', 'Close' : 'close', 'Volume' : 'volume'}, inplace=True)
df.rename(columns={'Time' : 'timestamp'}, inplace=True)
df['timestamp'] = pd.to_datetime(df['timestamp'], infer_datetime_format=True)
df.set_index('timestamp', inplace=True)
df = df.astype(float)
df.head()
# + _cell_guid="b1b4d74a-6137-418f-93a4-f5a2e74a359a" _uuid="cb5608ff930146ffd1c1364f9f973c867d471f7c"
# Add additional features
df['hour'] = df.index.hour
df['day'] = df.index.weekday
df['week'] = df.index.week
df['minute'] = df.index.minute
# df['momentum'] = df['volume'] * (df['open'] - df['close'])
df['avg_price'] = (df['low'] + df['high'])/2
df['range'] = df['high'] - df['low']
df['ohlc_price'] = (df['low'] + df['high'] + df['open'] + df['close'])/4
df['oc_diff'] = df['open'] - df['close']
# Cannot add ASK related features, which will limit the accuracy of the model
# + _cell_guid="d45ca935-7e02-462d-a046-8b587c7df6d8" _uuid="8f0cc48c0973272649732410963aa52a20c720eb"
# Add PCA as a feature instead of for reducing the dimensionality. This improves the accuracy a bit.
from sklearn.decomposition import PCA
dataset = df.copy().values.astype('float32')
pca_features = df.columns.tolist()
pca = PCA(n_components=1)
df['pca'] = pca.fit_transform(dataset)
# + _cell_guid="7eeb62b1-a682-484b-9ba6-5b024ae28f64" _uuid="c69167f965e53056ffa92b60870793061736bc24"
import matplotlib.colors as colors
import matplotlib.cm as cm
import pylab
plt.figure(figsize=(10,5))
norm = colors.Normalize(df['ohlc_price'].values.min(), df['ohlc_price'].values.max())
color = cm.viridis(norm(df['ohlc_price'].values))
plt.scatter(df['ohlc_price'].values, df['pca'].values, lw=0, c=color, cmap=pylab.cm.cool, alpha=0.3, s=1)
plt.title('ohlc_price vs pca')
plt.show()
# plt.figure(figsize=(10,5))
# norm = colors.Normalize(df['volume'].values.min(), df['volume'].values.max())
# color = cm.viridis(norm(df['volume'].values))
# plt.scatter(df['volume'].values, df['pca'].values, lw=0, c=color, cmap=pylab.cm.cool, alpha=0.3, s=1)
# plt.title('volume vs pca')
# plt.show()
plt.figure(figsize=(10,5))
norm = colors.Normalize(df['ohlc_price'].values.min(), df['ohlc_price'].values.max())
color = cm.viridis(norm(df['ohlc_price'].values))
plt.scatter(df['ohlc_price'].shift().values, df['pca'].values, lw=0, c=color, cmap=pylab.cm.cool, alpha=0.3, s=1)
plt.title('ohlc_price - 15min future vs pca')
plt.show()
# plt.figure(figsize=(10,5))
# norm = colors.Normalize(df['volume'].values.min(), df['volume'].values.max())
# color = cm.viridis(norm(df['volume'].values))
# plt.scatter(df['volume'].shift().values, df['pca'].values, lw=0, c=color, cmap=pylab.cm.cool, alpha=0.3, s=1)
# plt.title('volume - 15min future vs pca')
# plt.show()
# + [markdown] _cell_guid="2eb47cc3-400c-4a04-bc44-9e07023b02c9" _uuid="3e329d43a6e9a833b177a765743ec9249e73c3ca"
# As observed above, using PCA shows data separability that somewhat clusters the data into different price groups.
# + _cell_guid="b20fc15f-a4b8-40f4-8796-5e21d3426680" _uuid="af5d192a1bc1c6291102c544fb77eb7c47f53f73"
df.head()
# + _cell_guid="176318f8-cfd4-4830-9f63-beefad162b62" _uuid="f1688fe116bbf8663dbbf657cfe20369486d42c7"
def create_dataset(dataset, look_back=20):
dataX, dataY = [], []
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back)]
dataX.append(a)
dataY.append(dataset[i + look_back])
return np.array(dataX), np.array(dataY)
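# Quick illustration of the windowing on toy data (a sketch): with look_back=3, each X sample
# holds 3 consecutive rows and y is the row that follows them.
_toy = np.arange(10).reshape(-1, 1)
_tx, _ty = create_dataset(_toy, look_back=3)
print(_tx.shape, _ty.shape)          # (6, 3, 1) (6, 1)
print(_tx[0].ravel(), '->', _ty[0])  # [0 1 2] -> [3]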
# + [markdown] _cell_guid="c8b936e0-7ad5-4c8a-8d4e-4203d85c7032" _uuid="ba4f8e7e577eb30fa512736e7824ed559d1137d9"
# # Doing a bit of features analysis
# + _cell_guid="0e13ad25-0da4-49cc-8362-36252ae487ce" _uuid="e3308cb07121307ce717d48b98b16af5b0abf760"
colormap = plt.cm.inferno
plt.figure(figsize=(15,15))
plt.title('Pearson correlation of features', y=1.05, size=15)
sns.heatmap(df.corr(), linewidths=0.1, vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True)
plt.show()
plt.figure(figsize=(15,5))
corr = df.corr()
sns.heatmap(corr[corr.index == 'close'], linewidths=0.1, vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True);
# + _cell_guid="2efea531-9b62-4c16-9deb-741a64c9661a" _uuid="f9a4346fd0c044ad064ed3f9b9fd87f726324a6e"
from sklearn.ensemble import RandomForestRegressor
# Scale and create datasets
target_index = df.columns.tolist().index('close')
dataset = df.values.astype('float32')
# Scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# Use look_back=1 here so each sample is a single time step for the feature-importance analysis below
X, y = create_dataset(dataset, look_back=1)
y = y[:,target_index]
X = np.reshape(X, (X.shape[0], X.shape[2]))
# + _cell_guid="577a77e6-4cac-4d0f-8970-17b06cb731f7" _uuid="960c97c1020c8072d2346c5afeeb1d2723f4463b"
forest = RandomForestRegressor(n_estimators = 100)
forest = forest.fit(X, y)
# + _cell_guid="7f0cf340-e285-4241-955b-21f6b4119a41" _uuid="66b675557f9e0b68358219cc20a6d62a8cd198d0"
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
column_list = df.columns.tolist()
print("Feature ranking:")
for f in range(X.shape[1]-1):
print("%d. %s %d (%f)" % (f, column_list[indices[f]], indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure(figsize=(20,10))
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="salmon", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
# + [markdown] _cell_guid="b893fc2d-dbfb-4c4e-9bd4-b6577958482f" _uuid="1592f57021b81981bc5519eddd3566f9b7540ccc"
# # Exploration
# + _cell_guid="d25cb089-f944-4ce8-a415-ebef26e084a6" _uuid="97417479a9326253b1855daf84f4f6fef1b316d9"
ax = df.plot(x=df.index, y='close', c='red', figsize=(40,10))
index = [str(item) for item in df.index]
plt.fill_between(x=index, y1='low',y2='high', data=df, alpha=0.4)
plt.show()
p = df[:200].copy()
ax = p.plot(x=p.index, y='close', c='red', figsize=(40,10))
index = [str(item) for item in p.index]
plt.fill_between(x=index, y1='low', y2='high', data=p, alpha=0.4)
plt.title('zoomed, first 200')
plt.show()
# + _cell_guid="ff8d4234-7c71-4ade-a13a-c5c75bdc8c95" _uuid="f028f6ae47c668625c842e047a774054df9c0d46"
# Scale and create datasets
target_index = df.columns.tolist().index('close')
high_index = df.columns.tolist().index('high')
low_index = df.columns.tolist().index('low')
dataset = df.values.astype('float32')
# Scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# Create y_scaler to inverse it later
y_scaler = MinMaxScaler(feature_range=(0, 1))
t_y = df['close'].values.astype('float32')
t_y = np.reshape(t_y, (-1, 1))
y_scaler = y_scaler.fit(t_y)
# Set look_back to 20 which is 5 hours (15min*20)
X, y = create_dataset(dataset, look_back=20)
y = y[:,target_index]
# + _cell_guid="ab9d8796-dd89-4c3a-8425-e54860acf920" _uuid="31ff89b4610099a47252b518de87befa8346779f"
# Set training data size
# We have a large enough dataset, so divide it into 99% training / 1% test; 10% of the training data is later held out for development via validation_split
train_size = int(len(X) * 0.99)
trainX = X[:train_size]
trainY = y[:train_size]
testX = X[train_size:]
testY = y[train_size:]
# + _cell_guid="8d2615a0-32df-4998-9b83-f41d3e50ed62" _uuid="8bdd1ca1a5ad7c2e1e0fcecc7f0746ddb433933d"
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Input, LSTM, Dense
# create a small LSTM network
model = Sequential()
model.add(LSTM(20, input_shape=(X.shape[1], X.shape[2]), return_sequences=True))
model.add(LSTM(20, return_sequences=True))
model.add(LSTM(10, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(4, return_sequences=False))
model.add(Dense(4, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='relu'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae', 'mse'])
print(model.summary())
# + _cell_guid="3a2e81fa-f958-4ab7-9f9d-4677148a5529" _uuid="6345784d591a2d522311dc56c8ade6c55abe569c"
# Save the best weight during training.
from keras.callbacks import ModelCheckpoint
checkpoint = ModelCheckpoint("weights.best.hdf5", monitor='val_mean_squared_error', verbose=1, save_best_only=True, mode='min')
# Fit
callbacks_list = [checkpoint]
history = model.fit(trainX, trainY, epochs=200, batch_size=500, verbose=0, callbacks=callbacks_list, validation_split=0.1)
# + _cell_guid="3bd87749-6336-41f8-9e8d-28b2af668da7" _uuid="e77270be0899ae276da48515fae05303eb0d7f9b"
epoch = len(history.history['loss'])
for k in list(history.history.keys()):
if 'val' not in k:
plt.figure(figsize=(40,10))
plt.plot(history.history[k])
plt.plot(history.history['val_' + k])
plt.title(k)
plt.ylabel(k)
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# + _cell_guid="90839462-0395-4116-a13b-101fc1c2bc5f" _uuid="1c73b5d4ab0969215eb3e7c18ab27e0e2545d382"
min(history.history['val_mean_absolute_error'])
# + [markdown] _cell_guid="f2304754-a7cf-4c62-a5fe-666fec2397cf" _uuid="b8233417663b4590f6e4d378d43c3fe198bf287b"
# As seen from the above, the model seems to have converged nicely, but the mean absolute error on the development data remains at ~0.003X which means the model is unusable in practice. Ideally, we want to get ~0.0005. Let's go back to the best weight, and decay the learning rate while retraining the model
# + _cell_guid="5b028aca-1359-4653-9b17-fe49e31f2079" _uuid="b088bcb4e082a9c2791d414c9e16f67ff2ca959a"
# Baby the model a bit
# Load the weight that worked the best
model.load_weights("weights.best.hdf5")
# Train again with decaying learning rate
from keras.callbacks import LearningRateScheduler
import keras.backend as K
def scheduler(epoch):
if epoch%2==0 and epoch!=0:
lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, lr*.9)
print("lr changed to {}".format(lr*.9))
return K.get_value(model.optimizer.lr)
lr_decay = LearningRateScheduler(scheduler)
callbacks_list = [checkpoint, lr_decay]
history = model.fit(trainX, trainY, epochs=int(epoch/3), batch_size=500, verbose=0, callbacks=callbacks_list, validation_split=0.1)
# + _cell_guid="5f4b4826-71ce-4a78-8964-d3e6c8ae2cf7" _uuid="6ad515300117e7ffce8169db1afbdd296db4a04f"
epoch = len(history.history['loss'])
for k in list(history.history.keys()):
if 'val' not in k:
plt.figure(figsize=(40,10))
plt.plot(history.history[k])
plt.plot(history.history['val_' + k])
plt.title(k)
plt.ylabel(k)
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# + _cell_guid="fac2ed68-5e46-4068-87bf-743287e80082" _uuid="202a1e3bb06e25b77af5029ae347bcf502dce4c2"
min(history.history['val_mean_absolute_error'])
# + [markdown] _cell_guid="1c71e7b2-a5fa-4c96-983c-e8736f41e193" _uuid="3fbda13306965b59f682eb2d2f7604663f907e6b"
# The variance should have improved slightly. However, the mean absolute error is still not small enough, so the model is not yet usable in practice. This is mainly due to training only on the sample data and limiting training to a few hundred epochs.
# + [markdown] _cell_guid="862ba33b-6935-4777-b9e4-29a1fc3259b6" _uuid="70d5d8ec1b68813cf6a8cd4250da9caf2746d094"
# # Visually compare the delta between the prediction and actual (scaled values)
# + _cell_guid="759644d5-1a1b-4916-93c5-75e9c2f5b8a2" _uuid="c9bbaade0cc059ec3e63dfabf182fcd9c3b7c885"
from sklearn.metrics import mean_squared_error, mean_absolute_error
# Benchmark
model.load_weights("weights.best.hdf5")
pred = model.predict(testX)
predictions = pd.DataFrame()
predictions['predicted'] = pd.Series(np.reshape(pred, (pred.shape[0])))
predictions['actual'] = testY
predictions = predictions.astype(float)
predictions.plot(figsize=(20,10))
plt.show()
predictions['diff'] = predictions['predicted'] - predictions['actual']
plt.figure(figsize=(10,10))
sns.distplot(predictions['diff']);
plt.title('Distribution of differences between actual and prediction')
plt.show()
print("MSE : ", mean_squared_error(predictions['predicted'].values, predictions['actual'].values))
print("MAE : ", mean_absolute_error(predictions['predicted'].values, predictions['actual'].values))
predictions['diff'].describe()
# + [markdown] _cell_guid="48e036a6-f1f1-4d07-aae9-40ff7b1fc34a" _uuid="90fb470b95817dc394005f294787f5a809943e26"
# # Compare the unscaled values and see if the prediction falls within the Low and High
# + _cell_guid="c93f72cc-aff8-4e48-b022-87deab5a929e" _uuid="0cbcf799bfec016f80f0ffcfe7a66a19eedb1b7b"
pred = model.predict(testX)
pred = y_scaler.inverse_transform(pred)
close = y_scaler.inverse_transform(np.reshape(testY, (testY.shape[0], 1)))
predictions = pd.DataFrame()
predictions['predicted'] = pd.Series(np.reshape(pred, (pred.shape[0])))
predictions['close'] = pd.Series(np.reshape(close, (close.shape[0])))
p = df[-pred.shape[0]:].copy()
predictions.index = p.index
predictions = predictions.astype(float)
predictions = predictions.merge(p[['low', 'high']], right_index=True, left_index=True)
ax = predictions.plot(x=predictions.index, y='close', c='red', figsize=(40,10))
ax = predictions.plot(x=predictions.index, y='predicted', c='blue', figsize=(40,10), ax=ax)
index = [str(item) for item in predictions.index]
plt.fill_between(x=index, y1='low', y2='high', data=p, alpha=0.4)
plt.title('Prediction vs Actual (low and high as blue region)')
plt.show()
predictions['diff'] = predictions['predicted'] - predictions['close']
plt.figure(figsize=(10,10))
sns.distplot(predictions['diff']);
plt.title('Distribution of differences between actual and prediction ')
plt.show()
g = sns.jointplot("diff", "predicted", data=predictions, kind="kde", space=0)
plt.title('Distributtion of error and price')
plt.show()
# predictions['correct'] = (predictions['predicted'] <= predictions['high']) & (predictions['predicted'] >= predictions['low'])
# sns.factorplot(data=predictions, x='correct', kind='count')
print("MSE : ", mean_squared_error(predictions['predicted'].values, predictions['close'].values))
print("MAE : ", mean_absolute_error(predictions['predicted'].values, predictions['close'].values))
predictions['diff'].describe()
# + [markdown] _uuid="935ca5504302a194d0af986410681213ea3fe5bc"
# The above references an opinion and is for information purposes only. It is not intended to be investment advice. Seek a duly licensed professional for investment advice.
# -
score = model.evaluate(testX,testY)
print(score[0])
print(score[1]*100)
predictions
import plotly.plotly as py
import plotly.graph_objs as go
import plotly
plotly.tools.set_credentials_file(username='jafferwilson', api_key='EkP7ePXHyQaUZxPIX2Zv')
trace = go.Candlestick(x=predictions.index,
open=predictions.predicted,
high=predictions.high,
low=predictions.low,
close=predictions.close)
data = [trace]
py.iplot(data, filename='simple_candlestick')
| 17,388 |
/Lyft_Data_Challenge.ipynb
|
892b4dfe3871c6461ba7b864b33d8e05a29776eb
|
[] |
no_license
|
brandy-lei/Lyft_DC
|
https://github.com/brandy-lei/Lyft_DC
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 159,116 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import pandas as pd
import scipy as sp
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
sns.set(rc={'axes.facecolor':'white','figure.facecolor':'white'})
# These are the "Tableau 20" colors as RGB.
tableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120),
(44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150),
(148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148),
(227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199),
(188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]
# Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts.
for i in range(len(tableau20)):
r, g, b = tableau20[i]
tableau20[i] = (r / 255., g / 255., b / 255.)
# -
rides = pd.read_csv('rides.csv')
rides.head()
rides.describe()
rides.dtypes
print len(rides)
# +
#pickup = pd.DatetimeIndex(rides['pickup_datetime'])
# +
#dropoff = pd.DatetimeIndex(rides['dropoff_datetime'])
# +
#rides['pickup'] = pickup
# +
#rides['dropoff'] = dropoff
# +
#rides.dtypes
# +
#print min(rides['pickup']), max(rides['pickup'])
# -
plt.hist(rides['distance_miles'])
plt.show()
# #### Most of the rides are short (even below 1 mile). We keep the rides recorded as 0 miles: since distance is only reported at mile granularity, rides shorter than that can plausibly appear as 0.
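# A quick quantification of that claim — a sketch that only uses the `distance_miles` column already loaded:
# +
print 'share of 0-mile rides:', 1.*len(rides[rides['distance_miles'] == 0])/len(rides)
print 'share of rides under 1 mile:', 1.*len(rides[rides['distance_miles'] < 1])/len(rides)
# -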
# Some rides have negative duration time which is weird
print 1.*len(rides[rides['duration_secs']<0])/len(rides)
# +
# However it is only a tiny fraction of the data so we can remove these points
# -
# What is the minimum time in seconds if we exclude rides that have negative duration?
clean = rides[rides['duration_secs']>0]
print min(clean['duration_secs'])
# +
# 1s! That is way too low to be real.
# -
# #### In the following we will remove all rides that are less than 2 minutes
rides2 = rides[rides['duration_secs']>=120]
plt.hist(rides2['duration_secs'])
plt.show()
# +
# This confirms that people tend to take Lyft for short distances and quick rides
# +
#dates_pickup = pickup.date
# +
#dates_dropoff = dropoff.date
# +
# What is the dates range?
#print np.unique(dates_pickup)
# +
#rides['dates_pickup'] = dates_pickup
# +
#rides['dates_dropoff'] = dates_dropoff
# +
#rides.head()
# -
rides2 = rides[rides['duration_secs']>=120]
print len(rides2)
# +
# To get a sense of the hot spots we restrict the dataset to one day for plotting purposes
# -
one_day = rides2[0:200000]
#one_day = rides2[rides2['dates_pickup']==rides2['dates_pickup'][0]]
print len(one_day)
print np.median(one_day['start_lat']), np.median(one_day['start_lng'])
# +
#import folium
# +
# Create map of the different pickup locations
#map = folium.Map(location=[40.749954,-73.983711],
# zoom_start=5)
#for _, df in one_day.iterrows():
# map.circle_marker(
# location=[df['start_lat'], df['start_lng']],
# radius=20,
# )
#map.create_map(path='pickups.html')
# -
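# A sketch of the same map using the current folium API (the commented cell above targets an older one); it assumes folium is installed, samples 1,000 pickups for speed, and the output file name is illustrative
# +
import folium
pickup_map = folium.Map(location=[40.749954, -73.983711], zoom_start=11)
for _, row in one_day.sample(1000).iterrows():
    folium.CircleMarker(location=[row['start_lat'], row['start_lng']], radius=2).add_to(pickup_map)
pickup_map.save('pickups.html')
# -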
# #### The data are located in New York. Let's overplot some well-known destinations to get a sense of the hot spots
# Times Square
times_square = [40.75773, -73.985708]
# Financial District
financial_district = [40.707499, -74.011153]
# LaGuardia
laguardia = [40.77725, -73.872611]
# JFK
jfk = [40.639722, -73.778889]
# Central Park
central_park = [40.783333, -73.966667]
plt.plot(one_day['start_lng'], one_day['start_lat'], '.', color='k', alpha=0.8)
plt.plot(times_square[1], times_square[0], 'o', color=tableau20[0])
plt.plot(financial_district[1], financial_district[0], 'o', color=tableau20[1])
plt.plot(laguardia[1], laguardia[0], 'o', color=tableau20[2])
plt.plot(jfk[1], jfk[0], 'o', color=tableau20[3])
plt.plot(central_park[1], central_park[0], 'o', color=tableau20[4])
plt.xlim(-74.1, -73.6)
plt.ylim(40.55, 40.9)
plt.legend(['Rides', 'Times Square', 'Financial District', 'LaGuardia', 'JFK', 'Central Park'], ncol=1, frameon=False, fontsize=16)
plt.show()
plt.plot(one_day['end_lng'], one_day['end_lat'], '.', color='k', alpha=0.8)
plt.plot(times_square[1], times_square[0], 'o', color=tableau20[0])
plt.plot(financial_district[1], financial_district[0], 'o', color=tableau20[1])
plt.plot(laguardia[1], laguardia[0], 'o', color=tableau20[2])
plt.plot(jfk[1], jfk[0], 'o', color=tableau20[3])
plt.plot(central_park[1], central_park[0], 'o', color=tableau20[4])
plt.xlim(-74.1, -73.6)
plt.ylim(40.55, 40.9)
plt.legend(['Rides', 'Times Square', 'Financial District', 'LaGuardia', 'JFK', 'Central Park'], ncol=1, frameon=False, fontsize=16)
plt.show()
from sklearn.cluster import KMeans
# +
#k_means = KMeans(n_clusters=5)
# -
pickup_data = one_day[['start_lat','start_lng']]
pickup_data.head()
print min(pickup_data['start_lat']), max(pickup_data['start_lat']), min(pickup_data['start_lng']), max(pickup_data['start_lng'])
xpickup_data = np.array(pickup_data)
k_range = range(1, 10)
k_means_var = [KMeans(n_clusters=k).fit(xpickup_data) for k in k_range]
# Find the cluster center for each model
centroids = [X.cluster_centers_ for X in k_means_var]
from scipy.spatial.distance import cdist, pdist
# Calculate the Euclidean distance from each point to each cluster center
k_euclid = [cdist(xpickup_data, cent, 'euclidean') for cent in centroids]
dist = [np.min(ke, axis=1) for ke in k_euclid]
# Total within-cluster sum of squares
wcss = [sum(d**2) for d in dist]
# Total sum of squares
tss = sum(pdist(xpickup_data)**2)/xpickup_data.shape[0]
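# A quick elbow check (sketch) using the quantities computed above — the fraction of total variance explained for each candidate k
explained_var = [1 - w / tss for w in wcss]
plt.plot(list(k_range), explained_var, marker='o')
plt.xlabel('Number of clusters k')
plt.ylabel('Fraction of variance explained')
plt.show()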
# Fit the final clustering with k=5 (matching the five colors plotted below) before using its labels
k_means = KMeans(n_clusters=5).fit(xpickup_data)
pickup_data['labels'] = k_means.labels_
plt.plot(pickup_data['start_lat'][pickup_data['labels']==0], pickup_data['start_lng'][pickup_data['labels']==0], '.', color='b')
plt.plot(pickup_data['start_lat'][pickup_data['labels']==1], pickup_data['start_lng'][pickup_data['labels']==1], '.', color='r')
plt.plot(pickup_data['start_lat'][pickup_data['labels']==2], pickup_data['start_lng'][pickup_data['labels']==2], '.', color='g')
plt.plot(pickup_data['start_lat'][pickup_data['labels']==3], pickup_data['start_lng'][pickup_data['labels']==3], '.', color='m')
plt.plot(pickup_data['start_lat'][pickup_data['labels']==4], pickup_data['start_lng'][pickup_data['labels']==4], '.', color='y')
plt.ylim(-74.4, -73.6)
plt.xlim(40.6, 40.9)
plt.show()
plt.ylim(-0.1,0.9)
plt.xlim(0,1)
for i, txt in enumerate(words):
plt.annotate(txt,(x[i], y[i]))
# Add titles (main and on axis)
plt.xlabel("Word TF-IDF Score")
plt.ylabel("Sentence False Rate")
plt.title("Top Words in General COVID19 Tweets")
plt.show()
# + id="WevKBEGNWSWz" colab_type="code" colab={}
def tfidf(sentences, top_n):
# sentences = non_false
cvec = CountVectorizer(stop_words='english', min_df=3, max_df=0.5, ngram_range=(1,2))
sf = cvec.fit_transform(sentences)
transformer = TfidfTransformer()
transformed_weights = transformer.fit_transform(sf)
weights = np.asarray(transformed_weights.mean(axis=0)).ravel().tolist()
weights_df = pd.DataFrame({'term': cvec.get_feature_names(), 'weight': weights})
print(weights_df.sort_values(by='weight', ascending=False).head(20))
return weights_df.sort_values(by='weight', ascending=False).head(top_n)
# + id="g6I86jaemK14" colab_type="code" outputId="69e80df7-5714-4717-cd22-0cdfe2ab8e60" colab={"base_uri": "https://localhost:8080/", "height": 1000}
top_n = 23
top_n_words, top_n_tfidf = [], []
tfidf_text_load,tfidf_text_load_false,tfidf_text_load_non_false = tfidf(text_load, top_n), tfidf(false, top_n), tfidf(non_false, top_n)
for i in range(top_n):
top_n_words.append(tfidf_text_load.values[i][0])
top_n_tfidf.append(tfidf_text_load.values[i][1])
for i in range(top_n):
top_n_words.append(tfidf_text_load_false.values[i][0])
top_n_tfidf.append(tfidf_text_load_false.values[i][1])
for i in range(top_n):
top_n_words.append(tfidf_text_load_non_false.values[i][0])
top_n_tfidf.append(tfidf_text_load_non_false.values[i][1])
print(len(top_n_words))
print(top_n_tfidf)
# + id="9_DM0eC7AqRm" colab_type="code" colab={}
# x,y,z,words = [],[],[],[]
# x = [0.72,0.47,0.22,0.16,0.1, 0.1,0.09,0.09,.08,.08, .07,.07,.07,.06,.06]
# y = [0.306,0.1875,0.1063,0.5556,0.6031,0.4733,0.3174,0.9105,0.4782,0.2,0.4646,0.7037,0.375,0.3255,0.6]
# z = [49,80,47,9,189,150,126,123,46,20,637,189,8,86,15]
# words = ['hands' , 'wash', 'washing','soap','face','mask', 'hand','water', 'wear', 'touch','people','prevent','touching','stop','mouth']
for ind, word in enumerate(top_n_words):
if word == 'covid' or word == 'covid 19' or word == '19':
pass
else:
text_false, text_non_false = [],[]
for i in range(len(text_load)):
if word.lower() in text_load[i].lower():
if label_load_veracity[i] == 1:
text_false.append(text_load[i])
else:
text_non_false.append(text_load[i])
x.append(top_n_tfidf[ind])
words.append(word)
y.append(len(text_false)/(len(text_non_false)+len(text_false)))
z.append(len(text_non_false)+len(text_false))
# + id="BZDNEKk-tk_3" colab_type="code" outputId="a9252906-c393-43bc-d761-d344c257f338" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(len(words))
# + id="Vjtq8Zy-h376" colab_type="code" outputId="e80d9767-7105-4520-a15a-8cbf9cfea36b" colab={"base_uri": "https://localhost:8080/", "height": 299}
def plot_tfidf(x,y,z,words):
# Change color with c and alpha. I map the color to the X axis value.
# plt.scatter(x[:20], y[:20], s=z[:20]*np.ones(20)*6, c=y[:20], cmap="Greens", alpha=0.7, edgecolors="grey", linewidth=2)
# plt.scatter(x[20:40], y[20:40], s=z[20:40]*np.ones(20), c=y[20:40], cmap="Oranges", alpha=0.7, edgecolors="grey", linewidth=2)
# plt.scatter(x[40:60], y[40:60], s=z[40:60]*np.ones(20), c=y[40:60], cmap="Reds", alpha=0.7, edgecolors="grey", linewidth=2)
plt.scatter(x[60:80], y[60:80], s=z[60:80]*np.ones(20), c=y[60:80], cmap="Blues", alpha=0.6, edgecolors="grey", linewidth=2)
plt.xscale('log')
plt.ylim(-0.1,0.9)
# plt.xlim(-0.1,1.1)
# for i, txt in enumerate(words):
# # if i < 20 or 40<=i<60:
# if 60<=i<80:
# plt.annotate(txt, (x[i], y[i]))
for i, txt in enumerate(words):
if 60<=i<80:
plt.annotate(txt, (x[i], y[i]))
# kw = dict(prop="sizes", num=3, color=scatter.cmap(0.7), fmt="{x:.0f}",
# func=lambda s: s/7)
# legend2 = plt.legend(*scatter.legend_elements(**kw),
# loc="upper center", title="Word Count")
# Add titles (main and on axis)
plt.xlabel("Word TF-IDF Score")
plt.ylabel("Sentence False Rate")
plt.title("COVID19 Non-false Rumors")
plt.show()
plot_tfidf(x,y,z,words)
# + id="QSd0-lZZ0mrr" colab_type="code" outputId="d9f5731e-ecc7-42bc-9739-6c2edc42dc06" colab={"base_uri": "https://localhost:8080/", "height": 85}
i=0
for line in text_load:
for word in line.split():
if len(word)>20:
line = line.replace(word, 'UNK')
# word = 'UNK'
text_load[i] = line
i+=1
print(text_load[0])
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(text_load)
text_as_int = tokenizer.texts_to_sequences(text_load)
print(len(text_as_int))
print(text_as_int[0])
tokenizer.sequences_to_texts([text_as_int[0]])
# + id="Vmf032Pa1D0k" colab_type="code" outputId="bdd2c14d-74f1-4014-de88-02d26d739fec" colab={"base_uri": "https://localhost:8080/", "height": 316}
len_sen = [len(sublist) for sublist in text_as_int]
import matplotlib.pyplot as plt
_ = plt.hist(len_sen)
plt.show()
print(max(len_sen))
list_of_int = [item for sublist in text_as_int for item in sublist]
vocab = sorted(set(list_of_int))
print ('{} unique words'.format(len(vocab)))
# tokenizer.sequences_to_texts([[1]])
text_as_int = tf.keras.preprocessing.sequence.pad_sequences(text_as_int,
value=0,
padding='post',
maxlen=60)
print(text_as_int.shape)
# + id="t8kCGqtyro8D" colab_type="code" outputId="b3bcf55f-48c0-4076-d672-7a3ee18550b5" colab={"base_uri": "https://localhost:8080/", "height": 272}
from sklearn.manifold import TSNE
time_start = time.time()
tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_results = tsne.fit_transform(text_as_int)
print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))
# + id="gdcVDhEX19ZW" colab_type="code" outputId="814a88bc-24e7-4e69-a7d1-a4161e835235" colab={"base_uri": "https://localhost:8080/", "height": 34}
import pandas as pd
feat_cols = [ 'words'+str(i) for i in range(text_as_int.shape[1]) ]
df = pd.DataFrame(text_as_int,columns=feat_cols)
df['y'] = label_load_veracity
df['label_falseness'] = df['y'].apply(lambda i: str(i))
bins_falseness = np.arange(0,1.1,0.1)
df['tsne-2d-one'] = tsne_results[:,0]
df['tsne-2d-two'] = tsne_results[:,1]
print('Size of the dataframe: {}'.format(df.shape))
# + id="0SocQu6g0zL2" colab_type="code" outputId="17d91076-697b-443c-fb26-8727a5c924ed" colab={"base_uri": "https://localhost:8080/", "height": 296}
import seaborn as sns
# plt.figure(figsize=(16,10))
g = sns.scatterplot(
x="tsne-2d-one", y="tsne-2d-two",
hue="label_falseness",
palette=sns.color_palette("Reds", 3),
# hue = "label_falseness",
style="label_falseness",
# palette=sns.color_palette("Reds", 2),
data=df,
# legend="full",
s=50,
alpha=0.7
)
g.legend(loc='center left', bbox_to_anchor=(1, 0.5), ncol=1)
# + id="CjTl1HKkXNox" colab_type="code" outputId="d367e685-7716-40cb-c6bc-6669e7171fd0" colab={"base_uri": "https://localhost:8080/", "height": 419}
df=pd.read_csv('./drive/My Drive/Colab Notebooks/USC_Melady_Lab_hasDup.csv', sep=',')
df = df.dropna()
df = df.reset_index(drop=True)
df
# + id="hmXMVbxfM2g5" colab_type="code" outputId="f5729354-c105-4000-b82c-171e9805c2d5" colab={"base_uri": "https://localhost:8080/", "height": 419}
df_dupe = df[df.duplicated(subset=['content'],keep=False)]
df_dupe = df_dupe.reset_index(drop=True)
df_dupe
# + id="tfS_Huo1PHIU" colab_type="code" outputId="0de512e3-dcd5-4b36-9c70-aede4e690d39" colab={"base_uri": "https://localhost:8080/", "height": 34}
df_unreliable_dupe = df_dupe
# df_unreliable_dupe = df_dupe[df_dupe['label']=='clickbait']
text_unreliable_dupe = df_unreliable_dupe['content'].values
time_unreliable_dupe = df_unreliable_dupe['time'].values
time_stamp_unreliable_dupe = [int(time_unreliable_dupe[i][5:7])*100+int(time_unreliable_dupe[i][8:10]) for i in range(len(time_unreliable_dupe))]
time_unreliable_sorted_dupe = np.zeros(len(time_stamp_unreliable_dupe))
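# Map the MMDD-style stamp to a day index within the study window: March dates (3dd) become day dd, April dates (4dd) become dd + 31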
for i in range(len(time_stamp_unreliable_dupe)):
if time_stamp_unreliable_dupe[i] < 400:
time_unreliable_sorted_dupe[i] = time_stamp_unreliable_dupe[i]-300
else:
time_unreliable_sorted_dupe[i] = time_stamp_unreliable_dupe[i]-400 + 31
# print(time_unreliable_sorted_dupe)
print(len(time_unreliable_sorted_dupe))
# + id="Wnl-C7ie6RC-" colab_type="code" outputId="9881c0d0-7e9e-4ffd-ece1-374c802479bd" colab={"base_uri": "https://localhost:8080/", "height": 51}
df_unreliable = df_unreliable_dupe.drop_duplicates(subset='content')
text_unreliable = df_unreliable['content'].values
time_unreliable= df_unreliable['time'].values
time_stamp_unreliable = [int(time_unreliable[i][5:7])*100+int(time_unreliable[i][8:10]) for i in range(len(time_unreliable))]
time_unreliable_sorted = np.zeros(len(time_stamp_unreliable))
for i in range(len(time_stamp_unreliable)):
if time_stamp_unreliable[i] < 400:
time_unreliable_sorted[i] = time_stamp_unreliable[i]-300
else:
time_unreliable_sorted[i] = time_stamp_unreliable[i]-400 + 31
print(time_unreliable_sorted)
print(len(time_unreliable_sorted))
# + id="2gVmhZHXRAXT" colab_type="code" outputId="963cc0c3-4311-470f-9e99-74adad8c202c" colab={"base_uri": "https://localhost:8080/", "height": 401}
count_unreliable = np.zeros((len(text_unreliable),int(max(time_unreliable_sorted))))
for i, item in enumerate(text_unreliable):
# print(item)
indices = [j for j, x in enumerate(text_unreliable_dupe) if x == item]
# print(indices)
for index in indices:
# print(int(time_unreliable_sorted_dupe[index]))
count_unreliable[i,int(time_unreliable_sorted_dupe[index])-1] += 1
sum_unreliable = count_unreliable.sum(axis = 0)
mean_unreliable = count_unreliable.mean(axis = 0)
std_unreliable = count_unreliable.std(axis = 0)  # needed for the error-bar plot further below
print(mean_unreliable)
plt.plot(mean_unreliable)
# + id="9N47LGP8Q7aC" colab_type="code" outputId="5c7e1b14-85b2-4aef-91e5-5fba2eba4a83" colab={"base_uri": "https://localhost:8080/", "height": 323}
# !pip install powerlaw
# + id="JQrZqAMhd7_U" colab_type="code" outputId="199a07eb-cb2f-44b6-c38c-6c4b3617e0eb" colab={"base_uri": "https://localhost:8080/", "height": 136}
import powerlaw
data = mean_unreliable[:39]
fit = powerlaw.Fit(data)
print(fit.power_law.xmin,fit.power_law.alpha, fit.power_law.sigma)
print(fit.lognormal.xmin,fit.lognormal.mu, fit.lognormal.sigma)
theoretical_distribution = powerlaw.Power_Law(xmin=fit.power_law.xmin, parameters=[fit.power_law.alpha])
simulated_data = theoretical_distribution.generate_random(10000)
# simulated_data = fit.power_law.generate_random(len(data))
from scipy.stats import ks_2samp
_,p_powerlaw = ks_2samp(data,simulated_data)
print(p_powerlaw)
theoretical_distribution = powerlaw.Lognormal(xmin=fit.lognormal.xmin, parameters=[fit.lognormal.mu,fit.lognormal.sigma])
simulated_data = theoretical_distribution.generate_random(10000)
# simulated_data = fit.power_law.generate_random(len(data))
# from scipy.stats import ks_2samp
_,p_lognormal = ks_2samp(data,simulated_data)
print(p_lognormal)
# + id="0rWWQLqgTrCd" colab_type="code" outputId="761e9e22-1bd3-4bef-85fe-d85e01aa57d4" colab={"base_uri": "https://localhost:8080/", "height": 34}
# print(fit.distribution_compare('truncated_power_law','lognormal'))
print(fit.distribution_compare('lognormal','power_law'))
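# distribution_compare returns (R, p): R > 0 favours the first distribution named (lognormal here), and p is the significance of that preference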
# print(fit.distribution_compare('lognormal','exponential'))
# + id="AV2dVPpARgB9" colab_type="code" outputId="9bb3201a-12ea-4d7e-da25-4785b88dfad1" colab={"base_uri": "https://localhost:8080/", "height": 367}
fig1 = plt.figure(figsize=(4.5,3))
fig1 = fit.plot_cdf(label = 'Empirical Data')
fit.power_law.plot_cdf(ax=fig1, color='r', linestyle='--',label = 'Power-law fit: '+'\n' +r'$p_{KS}$'+'={0:.3f}, '.format(p_powerlaw)+r'$x_{min}$'+'={0:.3f}, '.format(fit.power_law.xmin)+'\n'
+r'$\alpha$'+'={0:.3f}, '.format(fit.power_law.alpha)+r'$\sigma$'+'={0:.3f}.'.format(fit.power_law.sigma))
fit.lognormal.plot_cdf(ax=fig1, color='g', linestyle='--', label = 'Lognormal fit: '+'\n' +r'$p_{KS}$'+'={0:.3f}, '.format(p_lognormal)+r'$x_{min}$'+'={0:.3f}, '.format(fit.lognormal.xmin)+'\n'
+r'$\mu$'+'={0:.3f}, '.format(fit.lognormal.mu)+r'$\sigma$'+'={0:.3f}.'.format(fit.lognormal.sigma))
fig1.set_ylabel("CDF of x")
fig1.set_xlabel("Mean Popularity x, Day 0-39")
fig1.set_title("Misinformation")
# fig1.set_xticks(np.arange(0.14,0.3,0.1),[0.1,0.3])
# fig1.set_xlim(0.14,0.26)
fig1.set_ylim(0,1)
fig1.set_yticks(np.arange(0,1,1),[0,1])
fig1.legend(loc = 'lower right')
# fig1.figure.savefig('unreliable_model.png', transparent=True,bbox_inches = "tight")
# + id="o0eYxMERYM9D" colab_type="code" outputId="a8ea6793-b922-4c30-cc02-8689ee5566d8" colab={"base_uri": "https://localhost:8080/", "height": 68}
from scipy.optimize import curve_fit
def func_powerlaw(x, m, c, c0):
return c0+ x**m * c
sol1,_ = curve_fit(func_powerlaw,np.arange(25), mean_unreliable[10:35],maxfev = 2000) #0-41
# sol2,_ = curve_fit(func_powerlaw,np.arange(25), mean_political[10:35],maxfev = 2000) #0-10, 11-36
# sol3,_ = curve_fit(func_powerlaw,np.arange(20), mean_bias[15:35],maxfev = 2000) #14/17-37
# sol4,_ = curve_fit(func_powerlaw,np.arange(25), mean_conspiracy[11:36],maxfev = 1500) # 0-13, 8/14-20, 22-34
# sol5,_ = curve_fit(func_powerlaw,np.arange(33), mean_clickbait[9:],maxfev = 1500) #0-18, 27-40
sol1
# + id="hKYWhIL0UZ0e" colab_type="code" outputId="78f65712-1ecb-4190-d402-48256bd5214f" colab={"base_uri": "https://localhost:8080/", "height": 282}
from scipy.stats import ks_2samp
# stats.kstest(mean_unreliable, 'expon')
plt.scatter(np.arange(42), mean_unreliable)
plt.plot(np.arange(42),func_powerlaw(np.arange(42),*sol1))
# + id="14bypLaPTy1u" colab_type="code" colab={}
length = 25
# plt.figure(figsize = (3,2))
# plt.scatter(np.arange(length), mean_unreliable[10:length+10]-sol1[2])
# plt.plot(np.arange(length),func_powerlaw(np.arange(length),*sol1)-sol1[2])
# plt.scatter(np.arange(length), mean_political[10:length+10]-sol2[2],color=u'#ff7f0e')
# plt.plot(np.arange(length),func_powerlaw(np.arange(length),*sol2)-sol2[2],color=u'#ff7f0e')
# length = 20
# plt.scatter(np.arange(length), mean_bias[15:35]-sol3[2],color= u'#2ca02c')
# plt.plot(np.arange(length),func_powerlaw(np.arange(length),*sol3)-sol3[2],color= u'#2ca02c')
# length = 25
plt.scatter(np.arange(length), mean_conspiracy[11:length+11]-sol4[2],color=u'#d62728')
plt.plot(np.arange(length),func_powerlaw(np.arange(length),*sol4)-sol4[2],color=u'#d62728')
# plt.scatter(np.arange(33), mean_clickbait[9:]-sol5[2],color=u'#9467bd')
# plt.plot(np.arange(33),func_powerlaw(np.arange(33),*sol5)-sol5[2],color=u'#9467bd')
plt.xlabel('Log Time')
plt.ylabel('Log Popularity')
plt.yscale('log')
plt.xscale('log')
plt.savefig('conspiracy.png', transparent=True,bbox_inches = "tight")
# + id="XEeCKxz5jp_f" colab_type="code" outputId="6315e141-6aa4-4311-85ff-b6834f206c38" colab={"base_uri": "https://localhost:8080/", "height": 445}
# length = 36
# import matplotlib.style
# import matplotlib as mpl
# mpl.style.use('seaborn-colorblind')
import matplotlib as mpl
mpl.rcParams.update(mpl.rcParamsDefault)
import matplotlib.pyplot as plt
f = plt.figure(figsize=(15,6))
ax = f.add_subplot(111)
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('none')
ax.spines['left'].set_color('none')
ax.spines['right'].set_color('none')
ax.tick_params(labelcolor='w', top=False, bottom=False, left=False, right=False)
ax1 = f.add_subplot(231)
ax2 = f.add_subplot(232)
ax3 = f.add_subplot(233)
ax4 = f.add_subplot(234)
ax5 = f.add_subplot(235)
ax6 = f.add_subplot(236)
f.tight_layout(h_pad=4.0)
ax1.plot(mean_unreliable[10:length+10],label='Historical',color=u'#1f77b4',marker='o')
ax2.plot(mean_political[10:length+10],label='Historical',color=u'#ff7f0e',marker='o')
ax3.plot(mean_bias[15:35],label='Historical',color= u'#2ca02c',marker='o')
ax4.plot(mean_conspiracy[11:length+11],label='Historical',color=u'#d62728',marker='o')
ax5.plot(mean_clickbait[9:],label='Historical',color=u'#9467bd',marker='o')
ax6.plot(mean_unreliable,label='Unreliable')
ax6.plot(mean_political,label='Political')
ax6.plot(mean_bias,label='Bias')
ax6.plot(mean_conspiracy,label='Conspiracy')
ax6.plot(mean_clickbait,label='Clickbait')
ax1.plot(np.arange(length),func_powerlaw(np.arange(length),*sol1),u'#1f77b4',linestyle = '--',label='Fitted')
ax2.plot(np.arange(length),func_powerlaw(np.arange(length),*sol2),u'#ff7f0e',linestyle = '--', label='Fitted')
ax3.plot(np.arange(20),func_powerlaw(np.arange(20),*sol3),u'#2ca02c',linestyle = '--',label='Fitted')
ax4.plot(np.arange(length),func_powerlaw(np.arange(length),*sol4),u'#d62728',linestyle = '--',label='Fitted')
ax5.plot(np.arange(33),func_powerlaw(np.arange(33),*sol5),u'#9467bd',linestyle = '--',label='Fitted')
ax1.legend()
# ax1.set_xscale('log')
# ax1.set_yscale('log')
ax2.legend()
ax3.legend()
ax4.legend()
ax5.legend()
ax6.legend(loc='upper left',ncol = 2)
ax1.set_title('Unreliable')
ax2.set_title('Political')
ax3.set_title('Bias')
ax4.set_title('Conspiracy')
ax5.set_title('Clickbait')
ax6.set_title('All')
ax1.set_xlabel('Day 10-35')
ax1.set_ylabel('Mean Popularity')
ax2.set_xlabel('Day 10-35')
# ax2.set_ylabel('Mean Popularity')
ax3.set_xlabel('Day 15-35')
ax4.set_ylabel('Mean Popularity')
ax4.set_xlabel('Day 10-35')
ax5.set_xlabel('Day 9-41')
ax6.set_xlabel('Day 0-41')
# ax.set_title('Misinformation Spreading Trends (Power Law Fitted)')
# + id="GZO9YjXp_Zcw" colab_type="code" outputId="115670c5-ef53-4c53-f630-32f5ff9a2920" colab={"base_uri": "https://localhost:8080/", "height": 265}
plt.plot(mean_unreliable)
plt.errorbar(np.arange(42), mean_unreliable, yerr=std_unreliable**2/100, fmt='o',
ecolor='lightgray', elinewidth=3, capsize=0);
# + id="5GInUsKoVxGW" colab_type="code" outputId="6bfdedc8-5f34-4a8f-e0ee-f65e8e51ddea" colab={"base_uri": "https://localhost:8080/", "height": 295}
import matplotlib.pyplot as plt
plt.plot(sum_unreliable,label='Unreliable') # plotting by columns
plt.plot(sum_political,label='Political')
plt.plot(sum_bias,label='Bias')
plt.plot(sum_conspiracy,label='Conspiracy')
# plt.yscale('log')
plt.legend()
plt.xlabel('Time by Day')
plt.ylabel('Total Popularity')
plt.title('Misinformation Spreading Trends (Total)')
plt.show()
# + id="0bzq2UsF7sh8" colab_type="code" outputId="2ccfdbad-3050-4fd6-b8b8-628949bdc58b" colab={"base_uri": "https://localhost:8080/", "height": 428}
top_n = 10
top_n_words, top_n_tfidf = [], []
tfidfs = tfidf(text_political, top_n)
for i in range(top_n):
top_n_words.append(tfidfs.values[i][0])
top_n_tfidf.append(tfidfs.values[i][1])
print(len(top_n_words))
print(top_n_tfidf)
# + id="y50roO7w950R" colab_type="code" outputId="b577a389-36c7-4010-f089-2952a9a591af" colab={"base_uri": "https://localhost:8080/", "height": 51}
time_unreliable_sorted = np.zeros(len(time_unreliable))
for i in range(len(time_unreliable)):
if time_unreliable[i] < 400:
time_unreliable_sorted[i] = time_unreliable[i]-300
else:
time_unreliable_sorted[i] = time_unreliable[i]-400 + 31
print(time_unreliable_sorted)
time_political_sorted = np.zeros(len(time_political))
for i in range(len(time_political)):
if time_political[i] < 400:
time_political_sorted[i] = time_political[i]-300
else:
time_political_sorted[i] = time_political[i]-400 + 31
time_political_sorted
# + id="N8WHFzXB9ZvB" colab_type="code" outputId="acd13994-ccee-4d0b-f39c-470ce3c740ec" colab={"base_uri": "https://localhost:8080/", "height": 119}
count_unreliable = np.zeros(int(max(time_unreliable_sorted)))
for i in range(len(text_unreliable)):
if 'coronavirus' in text_unreliable[i].lower():
count_unreliable[int(time_unreliable_sorted[i])-1] += 1
print(count_unreliable)
count_political = np.zeros(int(max(time_political_sorted)))
for i in range(len(text_political)):
if 'coronavirus' in text_political[i].lower():
count_political[int(time_political_sorted[i])-1] += 1
print(count_political)
| 26,864 |
/coursera/dna-sequencing/week1/homework_1.ipynb
|
0966e903a64ed2d30a517acea13fac8897ab75b3
|
[] |
no_license
|
tartakynov/sandbox
|
https://github.com/tartakynov/sandbox
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 11,200 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # naive_with_rc
# +
def readGenome(filename):
genome = ''
with open(filename, 'r') as f:
for line in f:
# ignore header line with genome information
if not line[0] == '>':
genome += line.rstrip()
return genome
def readFastq(filename):
sequences = []
qualities = []
with open(filename) as fh:
while True:
fh.readline() # skip name line
seq = fh.readline().rstrip() # read base sequence
fh.readline() # skip placeholder line
qual = fh.readline().rstrip() # base quality line
if len(seq) == 0:
break
sequences.append(seq)
qualities.append(qual)
return sequences, qualities
def reverseComplement(s):
complement = {'A': 'T', 'C': 'G', 'G': 'C', 'T': 'A', 'N': 'N'}
t = ''
for base in s:
t = complement[base] + t
return t
def naive(p, t):
occurrences = []
for i in range(len(t) - len(p) + 1): # loop over alignments
match = True
for j in range(len(p)): # loop over characters
if t[i+j] != p[j]: # compare characters
match = False
break
if match:
occurrences.append(i) # all chars matched; record
return occurrences
# +
def match(pattern, genome, pos):
for i in xrange(len(pattern)):
if pattern[i] != genome[pos + i]:
return False
return True
def naive_with_rc(pattern, genome):
occurrences = []
complementary = reverseComplement(pattern)
for pos in xrange(len(genome) - len(pattern) + 1):
if match(pattern, genome, pos) or match(complementary, genome, pos):
occurrences.append(pos)
return occurrences
# -
# ### Example 1
p = 'CCC'
ten_as = 'AAAAAAAAAA'
t = ten_as + 'CCC' + ten_as + 'GGG' + ten_as
occurrences = naive_with_rc(p, t)
print(occurrences) # should be [10, 23]
# ### Example 2
p = 'CGCG'
t = ten_as + 'CGCG' + ten_as + 'CGCG' + ten_as
occurrences = naive_with_rc(p, t)
print(occurrences) # should be [10, 24]
# ### Example 3
# Phi-X genome
# !wget http://d396qusza40orc.cloudfront.net/ads1/data/phix.fa
phix_genome = readGenome('phix.fa')
occurrences = naive_with_rc('ATTA', phix_genome)
print('offset of leftmost occurrence: %d' % min(occurrences)) # should be 62
print('# occurrences: %d' % len(occurrences)) # should be 60
# # lambda virus genome
# !wget https://d28rh4a8wq0iu5.cloudfront.net/ads1/data/lambda_virus.fa
lambdaGenome = readGenome('lambda_virus.fa')
print len(naive_with_rc('AGGT', lambdaGenome))
print len(naive_with_rc('TTAA', lambdaGenome))
print naive_with_rc('ACTAAGT', lambdaGenome)[0]
print naive_with_rc('AGTCGA', lambdaGenome)[0]
# +
def naive_2mm(p, t):
occurrences = []
for i in range(len(t) - len(p) + 1):
mm = 0
for j in range(len(p)):
if t[i+j] != p[j]:
mm += 1
if mm > 2:
break
if mm <= 2:
occurrences.append(i)
return occurrences
print naive_2mm('ACTTTA', 'ACTTACTTGATAAAGT') # should be [0, 4]
# -
print len(naive_2mm('TTCAAGCC', lambdaGenome))
print naive_2mm('AGGAGGTT', lambdaGenome)[0]
# # poor quality genome
# +
# !wget https://d28rh4a8wq0iu5.cloudfront.net/ads1/data/ERR037900_1.first1000.fastq
# -
reads, qualities = readFastq('ERR037900_1.first1000.fastq')
print naive('N', reads[1])
print naive('N', reads[99])
print naive('N', reads[199])
| 3,788 |
/Delhinew.ipynb
|
dd8d2f4839113900af76639b65528d8a4d3aec39
|
[] |
no_license
|
kamakshi443/data-analysis
|
https://github.com/kamakshi443/data-analysis
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 231,260 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
# %pylab inline
# -
df = pd.read_csv("crime_data.csv", parse_dates=['Time Occurred'])
df.columns
df['date_time'] = pd.to_datetime(df['Date Occurred'] + " " + df['Time Occurred'])
df['date_time'].dt.weekday_name.head(10)
# +
import seaborn as sns
def count_rows(rows):
return len(rows)
#df[['Crime Code Description', 'Victim Age']].groupby('Crime Code Description').count()
m = df.groupby(['Crime Code Description', 'Victim Age']).apply(count_rows).unstack()
# -
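# The grouped counts above are easiest to read as a heatmap — a sketch that only assumes the `m` table built in the previous cell
# +
plt.figure(figsize=(12, 8))
sns.heatmap(m, cmap='viridis')
plt.title('Crime counts by description and victim age')
plt.show()
# -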
Delhinew['lat'].count()
grouped_data = Delhinew.groupby('AREA')
# Summary statistics for all numeric columns by AREA
grouped_data.describe()
# Provide the mean for each numeric column by AREA
grouped_data.mean()
longlat_counts = Delhinew.groupby('long')['lat'].count()
print(longlat_counts)
# %matplotlib inline
import matplotlib.pyplot as plt
longlat_counts.plot(kind='bar');
longlat_counts = Delhinew.groupby('AREA')['lat'].count()
longlat_counts.plot(kind='bar');
longlat_counts = Delhinew.groupby('AREA')['long'].count()
longlat_counts.plot(kind='bar');
# plot a histogram
Delhinew['long'].hist(bins=10)
# shows presence of a lot of outliers/extreme values
Delhinew.boxplot(column='long', by = 'lat')
# plotting points as a scatter plot
x = Delhinew["long"]
y = Delhinew["lat"]
plt.scatter(x, y, label= "stars", color= "m",
marker= "*", s=30)
# x-axis label
plt.xlabel('Longitude')
# frequency label
plt.ylabel('Latitude')
# function to show the plot
plt.show()
| 1,814 |
/test_12_17_2.ipynb
|
94e88bb2822be8c72c7d694fd09abdb7118fdf3e
|
[] |
no_license
|
s10730609/python
|
https://github.com/s10730609/python
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 5,649 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#danaderp May6'19
#Prediction For Main Issues Data Set
# -
import csv
from tensorflow.keras.preprocessing import text
from nltk.corpus import gutenberg
from string import punctuation
from tensorflow.keras.preprocessing.sequence import skipgrams
import pandas as pd
import numpy as np
import re
import nltk
import matplotlib.pyplot as plt
pd.options.display.max_colwidth = 200
# %matplotlib inline
from nltk.stem.snowball import SnowballStemmer
englishStemmer=SnowballStemmer("english")
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers import Dot, Input, Dense, Reshape, LSTM, Conv2D, Flatten, MaxPooling1D, Dropout, MaxPooling2D
from tensorflow.keras.layers import Embedding, Multiply, Subtract
from tensorflow.keras.models import Sequential, Model
from tensorflow.python.keras.layers import Lambda
from tensorflow.keras.callbacks import CSVLogger, ModelCheckpoint
from tensorflow.keras.callbacks import EarlyStopping
# visualize model structure
#from IPython.display import SVG
#from keras.utils.vis_utils import model_to_dot
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.manifold import TSNE
from utils.read_data import Dynamic_Dataset,Processing_Dataset
from utils.vectorize_sentence import Embeddings
# ../data replaces 'datasets' as the directory used to access the data
path = "../data/augmented_dataset/"
process_unit = Processing_Dataset(path)
ground_truth = process_unit.get_ground_truth()
dataset = Dynamic_Dataset(ground_truth, path,False) # I'm not sure this needs to be False. RC
#As the data is stored in a zip file isZip = True
test, train = process_unit.get_test_and_training(ground_truth,isZip = True)
print(len(test))
print(len(train))
print(test[0])
print(train[0])
# +
#Train/Test split verification
#for elem in test:
# print(elem[0])
# -
#Added due to a lookup error in the next cell
#Adds nltk folder to the repository and is needed if the user doesn't have them already
import nltk
nltk.download('stopwords')
#Preprocesing Corpora
embeddings = Embeddings()
max_words = 5000 #<------- [Parameter]
pre_corpora_train = [doc for doc in train if len(doc[1])< max_words]
pre_corpora_test = [doc for doc in test if len(doc[1])< max_words]
print(len(pre_corpora_train))
print(len(pre_corpora_test))
embed_path = '../data/word_embeddings-embed_size_100-epochs_100.csv'
embeddings_dict = embeddings.get_embeddings_dict(embed_path)
# .decode("utf-8") takes the doc's which are saved as byte files and converts them into strings for tokenization
corpora_train = [embeddings.vectorize(doc[1].decode("utf-8"), embeddings_dict) for doc in pre_corpora_train]#vectorization Inputs
corpora_test = [embeddings.vectorize(doc[1].decode("utf-8"), embeddings_dict) for doc in pre_corpora_test]#vectorization
target_train = [[int(list(doc[0])[1]),int(list(doc[0])[3])] for doc in pre_corpora_train]#vectorization Output
target_test = [[int(list(doc[0])[1]),int(list(doc[0])[3])]for doc in pre_corpora_test]#vectorization Output
#target_train
max_len_sentences_train = max([len(doc) for doc in corpora_train]) #<------- [Parameter]
max_len_sentences_test = max([len(doc) for doc in corpora_test]) #<------- [Parameter]
max_len_sentences = max(max_len_sentences_train,max_len_sentences_test)
print("Max. Sentence # words:",max_len_sentences)
min_len_sentences_train = min([len(doc) for doc in corpora_train]) #<------- [Parameter]
min_len_sentences_test = min([len(doc) for doc in corpora_test]) #<------- [Parameter]
min_len_sentences = max(min_len_sentences_train,min_len_sentences_test)
print("Mix. Sentence # words:",min_len_sentences)
embed_size = np.size(corpora_train[0][0])
# +
#BaseLine Architecture <-------
embeddigs_cols = embed_size
input_sh = (max_len_sentences,embeddigs_cols,1)
#Selecting filters?
#https://stackoverflow.com/questions/48243360/how-to-determine-the-filter-parameter-in-the-keras-conv2d-function
#https://stats.stackexchange.com/questions/196646/what-is-the-significance-of-the-number-of-convolution-filters-in-a-convolutional
N_filters = 128 # <-------- [HyperParameter] Powers of 2 Numer of Features
K = 2 # <-------- [HyperParameter] Number of Classess
# -
input_sh
#baseline_model = Sequential()
gram_input = Input(shape = input_sh)
# 1st Convolutional Layer Convolutional Layer (7-gram)
conv_1_layer = Conv2D(filters=32, input_shape=input_sh, activation='relu',
kernel_size=(7,embeddigs_cols), padding='valid')(gram_input)
conv_1_layer.shape
# Max Pooling
max_1_pooling = MaxPooling2D(pool_size=((max_len_sentences-7+1),1), strides=None, padding='valid')(conv_1_layer)
max_1_pooling.shape
# Fully Connected layer
fully_connected_1_gram = Flatten()(max_1_pooling)
fully_connected_1_gram.shape
fully_connected_1_gram = Reshape((32, 1, 1))(fully_connected_1_gram)
fully_connected_1_gram.shape
# 2nd Convolutional Layer (5-gram)
conv_2_layer = Conv2D(filters=64, kernel_size=(5,1), activation='relu',
padding='valid')(fully_connected_1_gram)
conv_2_layer.shape
max_2_pooling = MaxPooling2D(pool_size=((32-5+1),1), strides=None, padding='valid')(conv_2_layer)
max_2_pooling.shape
# Fully Connected layer
fully_connected_2_gram = Flatten()(max_2_pooling)
fully_connected_2_gram.shape
fully_connected_2_gram = Reshape((64, 1, 1))(fully_connected_2_gram)
fully_connected_2_gram.shape
# 3rd Convolutional Layer (3-gram)
conv_3_layer = Conv2D(filters=128, kernel_size=(3,1), activation='relu',
padding='valid')(fully_connected_2_gram)
conv_3_layer.shape
# 4th Convolutional Layer (3-gram)
conv_4_layer = Conv2D(filters=128, kernel_size=(3,1), activation='relu',
padding='valid')(conv_3_layer)
conv_4_layer.shape
# 5th Convolutional Layer (3-gram)
conv_5_layer = Conv2D(filters=64, kernel_size=(3,1), activation='relu',
padding='valid')(conv_4_layer)
conv_5_layer.shape
# Max Pooling
max_5_pooling = MaxPooling2D(pool_size=(58,1), strides=None, padding='valid')(conv_5_layer)
max_5_pooling.shape
# Fully Connected layer
fully_connected = Flatten()(max_5_pooling)
fully_connected.shape
# 1st Fully Connected Layer
deep_dense_1_layer = Dense(32, activation='relu')(fully_connected)
deep_dense_1_layer = Dropout(0.2)(deep_dense_1_layer) # <-------- [HyperParameter]
deep_dense_1_layer.shape
# 2nd Fully Connected Layer
deep_dense_2_layer = Dense(32, activation='relu')(deep_dense_1_layer)
deep_dense_2_layer = Dropout(0.2)(deep_dense_2_layer) # <-------- [HyperParameter]
deep_dense_2_layer.shape
# 3rd Fully Connected Layer
deep_dense_3_layer = Dense(16, activation='relu')(deep_dense_2_layer)
deep_dense_3_layer = Dropout(0.2)(deep_dense_3_layer) # <-------- [HyperParameter]
deep_dense_3_layer.shape
predictions = Dense(K, activation='softmax')(deep_dense_3_layer)
#Criticality Model
criticality_network = Model(inputs=[gram_input],outputs=[predictions])
print(criticality_network.summary())
#Seting up the Model
criticality_network.compile(optimizer='adam',loss='binary_crossentropy',
metrics=['accuracy'])
#Data set organization
from tempfile import mkdtemp
import os.path as path
#Memoization
file_corpora_train_x = path.join(mkdtemp(), 'alex-res-adapted-003_temp_corpora_train_x.dat') #Update per experiment
file_corpora_test_x = path.join(mkdtemp(), 'alex-res-adapted-003_temp_corpora_test_x.dat')
#Shaping
shape_train_x = (len(corpora_train),max_len_sentences,embeddigs_cols,1)
shape_test_x = (len(corpora_test),max_len_sentences,embeddigs_cols,1)
#Data sets
corpora_train_x = np.memmap(
filename = file_corpora_train_x,
dtype='float32',
mode='w+',
shape = shape_train_x)
corpora_test_x = np.memmap( #Test Corpora (for future evaluation)
filename = file_corpora_test_x,
dtype='float32',
mode='w+',
shape = shape_test_x)
target_train_y = np.array(target_train) #Train Target
target_test_y = np.array(target_test) #Test Target (for future evaluation)
corpora_train_x.shape
target_train_y.shape
corpora_test_x.shape
target_test_y.shape
#Reshaping Train Inputs
for doc in range(len(corpora_train)):
#print(corpora_train[doc].shape[1])
for words_rows in range(corpora_train[doc].shape[0]):
embed_flatten = np.array(corpora_train[doc][words_rows]).flatten() #<--- Capture doc and word
for embedding_cols in range(embed_flatten.shape[0]):
corpora_train_x[doc,words_rows,embedding_cols,0] = embed_flatten[embedding_cols]
#Reshaping Test Inputs (for future evaluation)
for doc in range(len(corpora_test)):
for words_rows in range(corpora_test[doc].shape[0]):
embed_flatten = np.array(corpora_test[doc][words_rows]).flatten() #<--- Capture doc and word
for embedding_cols in range(embed_flatten.shape[0]):
corpora_test_x[doc,words_rows,embedding_cols,0] = embed_flatten[embedding_cols]
#CheckPoints
#csv_logger = CSVLogger(system+'_training.log')
# filepath changed from: "alex-adapted-res-003/best_model.hdf5" for testing
# The folder alex-adapted-res-003 doesn't exist yet in the repository. RC created 08_test in the root folder
# manually
filepath = "../08_test/best_model.hdf5"
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=100)
mc = ModelCheckpoint(filepath, monitor='val_accuracy', mode='max', verbose=1, save_best_only=True)
callbacks_list = [es,mc]
#Model Fitting
history = criticality_network.fit(
x = corpora_train_x,
y = target_train_y,
#batch_size=64,
epochs=2000, #5 <------ Hyperparameter
validation_split = 0.2,
callbacks=callbacks_list
)
# filepath changed from: 'alex-adapted-res-003/history_training.csv' for testing
#Saving Training History
df_history = pd.DataFrame.from_dict(history.history)
df_history.to_csv('../08_test/history_training.csv', encoding='utf-8',index=False)
criticality_network.save(filepath)
df_history.head()
# filepath changed from: 'alex-adapted-res-003/corpora_test_x.npy' &
# 'alex-adapted-res-003/corpora_test_x./target_test_y.npy' for testing
#Saving Test Data
np.save('../08_test/corpora_test_x.npy',corpora_test_x)
np.save('../08_test/target_test_y.npy',target_test_y)
# +
#Evaluation
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs2 = range(len(acc))
plt.plot(epochs2, acc, 'b', label='Training')
plt.plot(epochs2, val_acc, 'r', label='Validation')
plt.title('Training and validation accuracy')
plt.ylabel('acc')
plt.xlabel('epoch')
plt.legend()
plt.figure()
plt.plot(epochs2, loss, 'b', label='Training')
plt.plot(epochs2, val_loss, 'r', label='Validation')
plt.title('Training and validation loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend()
plt.show()
# -
from sklearn.metrics import average_precision_score,precision_recall_curve
#funcsigs replaces the (deprecated?) sklearn signature
from funcsigs import signature
#from sklearn.utils.fixes import signature
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from tensorflow.keras.models import load_model
# filepath changed from: 'alex-adapted-res-003/best_model.hdf5' for testing
path = '../08_test/best_model.hdf5'
criticality_network_load = load_model(path) #<----- The Model
score = criticality_network_load.evaluate(corpora_test_x, target_test_y, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
history_predict = criticality_network_load.predict(x=corpora_test_x)
history_predict
inferred_data = pd.DataFrame(history_predict,columns=list('AB'))
target_data = pd.DataFrame(target_test_y,columns=list('LN'))
data = target_data.join(inferred_data)
y_true = list(data['L'])
y_score= list(data['A'])
average_precision = average_precision_score(y_true, y_score)
print('Average precision-recall score: {0:0.2f}'.format(average_precision))
#ROC Curve (all our samples are balanced)
auc = roc_auc_score(y_true, y_score)
print('AUC: %.3f' % auc)
| 12,331 |
/EDGAR_NLP_sentiment_analysis/code/exp3/IBM_AWS_Micro_api.ipynb
|
9bf4e6bb39662493986b89dcc11bb0996d0cb93e
|
[] |
no_license
|
jailukanna/DataScience-MachineLearning
|
https://github.com/jailukanna/DataScience-MachineLearning
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 899,044 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # IBM - Natural Language Understanding
# Install the required IBM software to communicate with the API
# !pip install --upgrade "watson-developer-cloud>=2.4.1"
# Import related libraries
import json
from watson_developer_cloud import NaturalLanguageUnderstandingV1
from watson_developer_cloud.natural_language_understanding_v1 import Features, SentimentOptions
# +
natural_language_understanding = NaturalLanguageUnderstandingV1(
version='2018-03-16',
iam_apikey='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
url='https://gateway.watsonplatform.net/natural-language-understanding/api'
)
# -
# Extract just the paragraphs from our call earning dataset
import json
import csv
import pandas as pd
df = pd.read_csv("call_transcript.csv")
df.head(5)
df.shape
IBM_sentiment = []
for x in df['paragraph']:
response = natural_language_understanding.analyze(
text= x,
features=Features(sentiment=SentimentOptions())).get_result()
IBM_sentiment.append(response)
type(IBM_sentiment)
# +
sentiment_dumps = json.dumps(IBM_sentiment, indent =3)
write_json= open("IBM_Api_sentiments.json","w")
write_json.write(sentiment_dumps)
write_json.close()
# -
# # Microsoft - Text Analytics
import requests
from pprint import pprint
subscription_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
assert subscription_key
text_analytics_base_url = "https://eastus.api.cognitive.microsoft.com/text/analytics/v2.0/"
sentiment_api_url = text_analytics_base_url + "sentiment"
print(sentiment_api_url)
df = pd.read_csv("call_transcript.csv")
df = df[['paragraph']]
df['language'] = 'en'
df['id'] = df.index
df = df.rename(columns={'paragraph': 'text'})
df = df[['id','language','text']]
para = df.to_dict(orient='records')
documents = {'documents':para}
headers = {"Ocp-Apim-Subscription-Key": subscription_key}
response = requests.post(sentiment_api_url, headers=headers, json=documents)
sentiments = response.json()
pprint(sentiments)
import json
with open('Azure_Api_sentiments.json', 'w') as fp:
json.dump(sentiments, fp)
# # AWS - Amazon Comprehend
# !pip install awscli
# !pip install boto3
import boto3
import json
df.tail(5)
comprehend = boto3.client(service_name='comprehend', region_name ='us-east-1',
aws_access_key_id="xxxxxxxxxxxxxxxxxxxxx", aws_secret_access_key="xxxxxxxxxxxxxxxxxxxxxxxxx")
AWS_sentiment = []
for x in df['paragraph']:
response = json.dumps(comprehend.detect_sentiment(Text=x, LanguageCode='en'), sort_keys=True, indent=4)
print(x)
print(response)
AWS_sentiment.append(response)
len(AWS_sentiment)
# +
sentiment_dumps = json.dumps(AWS_sentiment, indent =4)
write_json= open("AWS_Api_sentiments.json","w")
write_json.write(sentiment_dumps)
write_json.close()
| 3,023 |
/Perceptron.ipynb
|
0b7156a3b44b847a57c5311a8e24fadffd6fc6da
|
[] |
no_license
|
pandabytes/kaggle
|
https://github.com/pandabytes/kaggle
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 71,194 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Numpy
a = [1,2,3]
b = [2,3,4]
[f*s for f, s in zip(a,b)]
import numpy
a = numpy.array([1,2,3])
b = numpy.array([2,3,4])
a * b
import numpy as np
na = np.array([['name', 'gender', 'age'], ['frank', 'M', 29], ['mary', 'F', 23], ['tom', 'M', 35], ['ted', 'M', 33], ['jean', 'F', 21], ['lisa', 'F', 20]])
na
import pandas as pd
df = pd.DataFrame([['frank', 'M', 29], ['mary', 'F', 23], ['tom', 'M', 35], ['ted', 'M', 33], ['jean', 'F', 21], ['lisa', 'F', 20]])
df
df.columns = ['name', 'gender', 'age']
df
# ## Series
import pandas as pd
phone = pd.Series([21000,18900,18000])
phone
phone = pd.Series([21000,18900,18000], index = ['Iphone 7', 'Oppo', 'Moto'])
phone
phone[1]
phone[1:3]
phone['Oppo']
# ## DataFrame
import pandas as pd
df = pd.DataFrame([['frank', 'M', 29], ['mary', 'F', 23], ['tom', 'M', 35], ['ted', 'M', 33], ['jean', 'F', 21], ['lisa', 'F', 20]])
df
df.columns = ['name', 'gender', 'age']
df
df.head(3)
df.tail(3)
df.info()
df.describe()
df.ix[1:2,['name', 'age']]
# ## Data Filtering
df.head()
df.ix[1]
df.ix[1:4]
df['name']
df[['name','age']]
df.ix[ 1:2 , ['name', 'age'] ]
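# `.ix` is deprecated in later pandas releases; a label-based sketch of the same selection with `.loc`:
df.loc[1:2, ['name', 'age']]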
# ?df.ix
df['gender']
df.gender
df['gender'] == 'M'
df[df['gender'] == 'M']
df['age'] > 30
df[df['age'] > 30]
df[(df['gender'] == 'M') & (df['age'] > 30)]
df[(df['gender'] == 'M') | (df['age'] > 30)]
df['employee'] = True
df.head()
del df['employee']
df.head()
df['employee'] = True
df.head()
df = df.drop('employee', axis = 1)
df.head()
df
df.loc[5]
df.loc[6] = {'age':20, 'gender': 'F', 'name':'qoo'}
df = df.drop(6)
df
df = df.append(pd.DataFrame([{'age':20, 'gender': 'F', 'name':'qoo'}]), ignore_index=True)
df
df = df.drop(6)
df
df['userid'] = range(101,107)
df
df.set_index('userid', inplace=True)
df
df.iloc[1]
df.iloc[[1,3,5]]
df.ix[[101,103,105]]
df.loc[[101,103,105]]
x_tr = svd.fit_transform(x_train)
x_tr, _ = ml.transforms.rescale(x_tr)
y_tr = y_train
# Validation
x_val = svd.fit_transform(x_validation)
x_val, _ = ml.transforms.rescale(x_val)
y_val = y_validation
perceptron = Perceptron(fit_intercept = True, n_iter = 1000, alpha = 25)
perceptron = perceptron.fit(x_tr, y_tr)
yhat_train = perceptron.predict(x_tr)
yhat_validation = perceptron.predict(x_val)
# +
roc_tr = metrics.roc_curve(y_tr, yhat_train)
roc_va = metrics.roc_curve(y_val, yhat_validation)
plt.plot(roc_va[0], roc_va[1], 'r', roc_tr[0], roc_tr[1], 'g')
plt.plot([0,1], [0,1])
plt.show()
print(metrics.auc(roc_tr[0], roc_tr[1]))
print(metrics.auc(roc_va[0], roc_va[1]))
print(x_tr.shape)
# -
# <h1>Kitchen Sink</h1>
# +
# from numpy import atleast_2d as twod
# x = ml.transforms.rescale(x_train[0:100])
# y = y_train[0:100]
# k = 2
# a, b = ml.transforms.fkitchensink(x, k, "linear")
# # print(twod(a).dot(b))
# # perceptron = linearClassify(x, y, stopIter = 5000);
# -
# <h1>SVD</h1>
# +
# from mltools.linearC import linearClassify
# # Pick k features among the training data
# a, b = ml.transforms.fsvd(x_train, 2)
# x_tr = a
# x_tr, _ = ml.rescale(x_tr)
# y_tr = y_train
# # Pick k featurese from validation data
# c, d = ml.transforms.fsvd(x_validation, 2)
# x_val = c
# x_val, _ = ml.rescale(x_val)
# y_val = y_train
# # Train the perceptron model here
# perceptron = linearClassify(x_tr, y_tr, stopIter = 5000);
# +
# # Plot the training data
# ml.plotClassify2D(perceptron, x, y)
# plt.show()
# +
# # Plot the validation data
# ml.plotClassify2D(perceptron, x_val, y_val)
# plt.show()
# +
# errTrain = perceptron.err(x_tr, y_tr)
# errValidation = perceptron.err(x_val, y_val)
# aucTrain = perceptron.auc(x_tr, y_tr)
# aucValidation = perceptron.auc(x_val, y_val)
# print(errTrain, errValidation)
# print(aucTrain, aucValidation)
# +
# from mltools.linearC import linearClassify
# err_train = []
# err_validation = []
# auc_train = []
# auc_validation = []
# k = [i for i in range(1, 15)]
# for i in k:
# # Pick k features among the training data
# a, b = ml.transforms.fsvd(x_train, i)
# x_tr = a
# x_tr, _ = ml.rescale(x_tr)
# y_tr = y_train
# # Pick k featurese from validation data
# c, d = ml.transforms.fsvd(x_validation, i)
# x_val = c
# x_val, _ = ml.rescale(x_val)
# y_val = y_train
# # Train the perceptron model here
# perceptron = linearClassify(x_tr, y_tr, stopIter = 5000);
# errTrain = perceptron.err(x_tr, y_tr)
# errValidation = perceptron.err(x_val, y_val)
# aucTrain = perceptron.auc(x_tr, y_tr)
# aucValidation = perceptron.auc(x_val, y_val)
# # Store data
# err_train.append(errTrain)
# err_validation.append(errValidation)
# auc_train.append(aucTrain)
# auc_validation.append(aucValidation)
# +
# # Plot Error graph
# plt.title("Error Rate Plot")
# plt.plot(k, err_train, c = 'r', marker = 'o', label = "Training Error")
# plt.plot(k, err_validation, c = 'b', marker = 'o', label = "Validation Error")
# plt.legend(bbox_to_anchor = (0,1), loc = 2, borderaxespad = 1)
# plt.xlim([0, 15])
# plt.xlabel("K Features")
# plt.ylabel("Error Rate")
# plt.show()
# +
# # Plot AUC graph
# plt.title("AUC Plot")
# plt.plot(k, auc_train, c = 'r', marker = 'o', label = "AUC Training")
# plt.plot(k, auc_validation, c = 'b', marker = 'o', label = "AUC Validation")
# plt.legend(bbox_to_anchor = (0,1), loc = 2, borderaxespad = 1)
# plt.xlim([0, 15])
# plt.xlabel("K Features")
# plt.ylabel("AUC values")
# plt.show()
# +
# q, w = ml.transforms.fsvd(x_train[0:2], 2)
# print("Original:")
# print(x_train[0:2])
# print()
# print("SVD:")
# print(q)
| 5,805 |
/notebooks/alphas/Alpha WFO benchmarking.ipynb
|
84df8f07b37fc395e7f0fa25ce27eb753404191d
|
[] |
no_license
|
trendmanagement/Tmqr-framework-2
|
https://github.com/trendmanagement/Tmqr-framework-2
| 2 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 5,513 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# %pylab inline
# %load_ext Cython
# %load_ext line_profiler
# %load_ext memory_profiler
# +
from tmqrfeed.manager import DataManager
from tmqrfeed.quotes.quote_contfut import QuoteContFut
from tmqrfeed.costs import Costs
from datetime import datetime
import pandas as pd
from tmqrstrategy import StrategyBase
from tmqrstrategy.optimizers import OptimizerBase, OptimizerGenetic
def CrossUp(a, b):
"""
A crosses up B
"""
return (a.shift(1) < b.shift(1)) & (a > b)
def CrossDown(a, b):
"""
A crosses down B
"""
return (a.shift(1) > b.shift(1)) & (a < b)
class AlphaGeneric(StrategyBase):
def __init__(self, datamanager: DataManager, **kwargs):
super().__init__(datamanager, **kwargs)
self.temp = datetime.now() # type: pd.DataFrame
def setup(self):
self.dm.session_set('US.ES')
self.dm.series_primary_set(QuoteContFut, 'US.ES',
timeframe='D')
self.dm.costs_set('US', Costs())
def calculate(self, *args):
direction = 1
period_slow, period_fast = args
# Defining EXO price
px = self.dm.quotes()['c']
#
#
# Indicator calculation
#
#
slow_ma = px.rolling(period_slow).mean()
fast_ma = px.rolling(period_fast).mean()
# Enry/exit rules
entry_rule = CrossDown(fast_ma, slow_ma)
exit_rule = (CrossUp(fast_ma, slow_ma))
return self.exposure(entry_rule, exit_rule, direction)
def calculate_position(self, date: datetime, exposure_record: pd.DataFrame):
primary_quotes_position = self.dm.position()
# get net exposure for all members
exposure = exposure_record['exposure'].sum()
# Just replicate primary quotes position
self.position.add_net_position(date, primary_quotes_position.get_net_position(date), qty=exposure)
# -
import logging
from tmqr.logs import log
logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.INFO)
# +
dm = DataManager()
ALPHA_CONTEXT = {
'name': 'AlphaWFOBenchmarking',
'wfo_params': {
'window_type': 'rolling', # Rolling window for IIS values: rolling or expanding
'period': 'M', # Period of rolling window 'M' - monthly or 'W' - weekly
'oos_periods': 2, # Number of months is OOS period
'iis_periods': 12, # Number of months in IIS rolling window (only applicable for 'window_type' == 'rolling')
},
'wfo_optimizer_class': OptimizerBase,
'wfo_optimizer_class_kwargs': {
'nbest_count': 3,
'nbest_fitness_method': 'max'
},
'wfo_opt_params': [
('period_slow', [10, 30, 40, 50, 70, 90, 110]),
('period_fast', [1, 3, 10, 15, 20, 30])
],
'wfo_members_count': 1,
'wfo_costs_per_contract': 0.0,
'wfo_scoring_type': 'netprofit'
}
alpha = AlphaGeneric(dm, **ALPHA_CONTEXT)
#alpha.run()
# -
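# The `wfo_params` above describe the walk-forward scheme: a 12-month rolling in-sample (IIS) window re-optimized every 2 out-of-sample (OOS) months. The helper below is only an illustrative sketch of how such month windows could be enumerated with pandas; it is not part of the tmqr framework, which handles the actual splitting when `alpha.run()` is called.
# +
# Illustrative only: enumerate rolling IIS/OOS month windows like those described in wfo_params.
def walk_forward_windows(start, end, iis_periods=12, oos_periods=2):
    month_ends = pd.date_range(start, end, freq='M')
    windows = []
    for i in range(iis_periods, len(month_ends) - oos_periods + 1, oos_periods):
        iis = (month_ends[i - iis_periods], month_ends[i - 1])       # in-sample range
        oos = (month_ends[i - 1], month_ends[i + oos_periods - 1])   # out-of-sample range
        windows.append((iis, oos))
    return windows

for iis, oos in walk_forward_windows('2011-01-01', '2012-12-31')[:3]:
    print('IIS:', iis[0].date(), '->', iis[1].date(), '| OOS:', oos[0].date(), '->', oos[1].date())
# -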
# #%lprun -f alpha.run alpha.run()
alpha.run()
equity = alpha.position.get_pnl_series()
equity.equity_decision.plot()
| 3,377 |
/Students/cocrod/Project.ipynb
|
191e9b7d37211780822f0151e82cf2ba1e4a9f10
|
[] |
no_license
|
bsipocz/A302_2019_Homework
|
https://github.com/bsipocz/A302_2019_Homework
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,532 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# -BRIEF DESCRIPTION OF THE WORK PLANNED-
#
#
# What is the science objective (i.e. why are you doing this project):
# The science objective is to understand how differences among various cosmic ray models affect galactic evolution.
#
# What is the goal of the project (e.g. to visualize ZTF light curves):
# The goal of the project is to create visualizations of the data across time.
#
# What will the program do (e.g. read in a series of ZTF light curves and plot them as an interactive plot; fit a Lomb-Scargle periodogram and plot the sinusoid with the best fit period on top of the data; run this as a jupyter widget):
# The program will read in the simulation data and create plots of it across time.
#
# What are your data sets and deliverables:
# The data sets contain information on different galactic seed models evolved through time.
| 1,069 |
/XOR_tensorflow.ipynb
|
ad7401eadb51b1e1025ff94323216c464a997999
|
[] |
no_license
|
leonardoaraujosantos/LearnTensorflow
|
https://github.com/leonardoaraujosantos/LearnTensorflow
| 5 | 2 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 8,832 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # XOR Problem with tensorflow
# Original tutorial: https://aimatters.wordpress.com/2016/01/16/solving-xor-with-a-neural-network-in-tensorflow/
# Create a 3 layer neural network (Input, Hidden, Output) to solve the XOR problem
# Complete source code tutorial here: https://github.com/StephenOman/TensorFlowExamples/blob/master/xor%20nn/xor_nn.py
# Import tensor flow and time libraries
import tensorflow as tf
import time
XOR_X = [[0,0],[0,1],[1,0],[1,1]]
XOR_Y = [[0],[1],[1],[0]]
# ## Prepare variables to receive input data
# Next step is to set up placeholders to hold the input data. TensorFlow will automatically fill them with the data when we run the network.
x_ = tf.placeholder(tf.float32, shape=[4,2], name="x-input")
y_ = tf.placeholder(tf.float32, shape=[4,1], name="y-input")
# ## Create parameters
Theta1 = tf.Variable(tf.random_uniform([2,2], -1, 1), name="Theta1")
Theta2 = tf.Variable(tf.random_uniform([2,1], -1, 1), name="Theta2")
Bias1 = tf.Variable(tf.zeros([2]), name="Bias1")
Bias2 = tf.Variable(tf.zeros([1]), name="Bias2")
# ## Create the model
#
# +
# First layer (with activations)
A2 = tf.sigmoid(tf.matmul(x_, Theta1) + Bias1)
# Second layer (Also with activations)
Hypothesis = tf.sigmoid(tf.matmul(A2, Theta2) + Bias2)
# -
# ## Define a cost function
# In this case we're using a cross-entropy cost function.
cost = tf.reduce_mean(( (y_ * tf.log(Hypothesis)) +
((1 - y_) * tf.log(1.0 - Hypothesis)) ) * -1)
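# For reference, the statement above implements the average binary cross-entropy over the four training examples:
#
# $$J = -\frac{1}{m}\sum_{i=1}^{m}\left[y_i \log(h_i) + (1 - y_i)\log(1 - h_i)\right]$$
#
# where $h_i$ is the network output (`Hypothesis`), $y_i$ is the target for example $i$, and $m = 4$.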
log_loss = tf.scalar_summary(cost.op.name, cost)
summary_op = tf.merge_all_summaries()
# ## Train
# Here we're going to use gradient descent. This statement says that we're going to use `GradientDescentOptimizer` as our training algorithm, that the learning rate (alpha from before) is 0.01, and that we want to minimize the cost function above.
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
init = tf.initialize_all_variables()
sess = tf.Session()
# Output model to tensorboard
writer = tf.train.SummaryWriter("./logs/xor_logs", sess.graph)
# Old version gives a warning
#writer = tf.train.SummaryWriter("./logs/xor_logs", sess.graph_def)
sess.run(init)
t_start = time.clock()
for i in range(100000):
# Run the training step here
sess.run(train_step, feed_dict={x_: XOR_X, y_: XOR_Y})
# Display some information
if i % 50000 == 0:
print('Epoch ', i)
print('Hypothesis ', sess.run(Hypothesis, feed_dict={x_: XOR_X, y_: XOR_Y}))
print('Theta1 ', sess.run(Theta1))
print('Bias1 ', sess.run(Bias1))
print('Theta2 ', sess.run(Theta2))
print('Bias2 ', sess.run(Bias2))
print('cost ', sess.run(cost, feed_dict={x_: XOR_X, y_: XOR_Y}))
#summary_str = sess.run(summary_op, feed_dict=feed_dict)
#summary_writer.add_summary(summary_str, step)
t_end = time.clock()
print('Elapsed time ', t_end - t_start)
| 3,208 |
/sandbox/old_notebooks/.ipynb_checkpoints/stackoverflow-checkpoint.ipynb
|
af77086aef03539f2a1d024a5a3f7ae19d046f8d
|
[
"MIT"
] |
permissive
|
samgoodgame/neural_net_skills
|
https://github.com/samgoodgame/neural_net_skills
| 4 | 2 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,291,670 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Use Stackexchange API to pull additional text about skills.
#
# Python API (PyPI, third-party) docs here: https://github.com/lucjon/Py-StackExchange
#
import json
from pprint import pprint
from bs4 import BeautifulSoup
from bs4.element import Comment
# Establish credentials. Read from file in home directory
data = json.load(open('/Users/goodgame/.stackoverflow'))
key = data['key']
from stackapi import StackAPI
SITE = StackAPI('stackoverflow', key=key)
ans = SITE.fetch('answers', min=1000, sort='votes', filter='withbody')
pprint(ans)
# +
def tag_visible(element):
if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
return False
if isinstance(element, Comment):
return False
return True
def make_text(text):
soup = BeautifulSoup(text, "html5lib")
texts = soup.findAll(text=True)
visible_texts = filter(tag_visible, texts)
return u" ".join(t.strip() for t in visible_texts)
# +
# %%time
responses = []
for item in ans['items']:
responses.append(make_text(item['body']))
# -
print(responses[4])
# ## TODO
#
# This works. Now I just need to figure out:
# 1. What specific query do I want? The one above looks at 500 responses that have a minimum of 1000 votes. What I want is a ton of responses that are high quality, spanning all kinds of tech.
# 2. How should I save the results? I'm thinking one response per document, then I'll preprocess them the exact same way I processed the JDs (see the sketch after this list).
# 3. Should I parse out markdown?
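# A minimal sketch for item 2 above: write each cleaned answer to its own text file. The `data/stackoverflow_responses` directory is just a placeholder name.
# +
import os

out_dir = 'data/stackoverflow_responses'   # hypothetical output location
os.makedirs(out_dir, exist_ok=True)

# one response per document, so later preprocessing can mirror the JD pipeline
for i, response in enumerate(responses):
    with open(os.path.join(out_dir, 'response_{:05d}.txt'.format(i)), 'w') as f:
        f.write(response)
# -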
| 1,788 |
/numerics/ill_conditioned.ipynb
|
3bd4911c8c30939691d1c6aa25a346e2a3b6bb64
|
[] |
no_license
|
itpplasma/writeups
|
https://github.com/itpplasma/writeups
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,791 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #SciPy provides many functions from mathematics and statistics.
# #With the scipy.stats package, for example, statistical tests can be performed.
# #Here, for instance, the two-sample t-test, which uses the means of two samples to check whether the
# #means of the two populations are equal or different.
#Import the stats package from the SciPy library
from scipy import stats
#Create two numeric arrays
x = [12, 10, 11, 13, 14, 10, 13, 13, 22]
y = [1, 4, 2, 3, 5, 2, 1, 0, 0, 1, 2]
#Run a two-sample t-test to check whether the samples have different means
#(arithmetic means)
stats.ttest_ind(x,y)
# #How should the result be interpreted?
# #And how can x have a mean of 9.28... when every individual value is above 10??
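# The 9.28... in the output is not a mean: `stats.ttest_ind` returns the t statistic and the p-value. The sample means can be checked directly:
import numpy as np

t_stat, p_value = stats.ttest_ind(x, y)
print(t_stat, p_value)           # t statistic is roughly 9.28; the p-value is far below 0.05,
                                 # so the two population means differ significantly
print(np.mean(x), np.mean(y))    # the actual sample means: about 13.1 and 1.9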
: x,y={sol}')
interact(doplot, b1=(16.9, 17.1, 0.01));
| 1,144 |
/Solutions.ipynb
|
2c6f39f7a6b196d673dd956264a649a94b8dca68
|
[] |
no_license
|
nicolerichter1989/lab-working-with-api
|
https://github.com/nicolerichter1989/lab-working-with-api
| 0 | 0 | null | 2021-10-09T14:02:28 | 2021-10-09T13:08:37 | null |
Jupyter Notebook
| false | false |
.py
| 14,957 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab | Working with APIs
# **Instructions**
# Following the class example, create a function that returns the price, names of origin and arrival airports and the name of the company. Do it for all the flights between two dates that cost the same.
# +
# example from class
import requests
url = "https://skyscanner-skyscanner-flight-search-v1.p.rapidapi.com/apiservices/browsequotes/v1.0/US/USD/en-US/SFO-sky/JFK-sky/2021-12-24"
querystring = {"inboundpartialdate":"2021-12-24"}
headers = {
'x-rapidapi-host': "skyscanner-skyscanner-flight-search-v1.p.rapidapi.com",
'x-rapidapi-key': "ea05eaab96mshbe0ea9155d41f91p1cf69bjsna0c3487cc020"
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
# -
print(response.status_code)
import json
from pandas import json_normalize
import pandas as pd
json_data= response.json()
json_data
json_df = pd.json_normalize(json_data, max_level=1)
quotes = pd.json_normalize(json_data, 'Quotes', ['QuoteId'], record_prefix = '_', errors = 'ignore')
quotes
carriers = pd.json_normalize(json_data, 'Carriers', ['CarrierId'], record_prefix = '_', errors = 'ignore')
carriers
places = pd.json_normalize(json_data, 'Places', ['CityName'], record_prefix = '_', errors = 'ignore')
places
currencies = pd.json_normalize(json_data, 'Currencies', ['Symbol'], record_prefix = '_', errors = 'ignore')
currencies
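# A minimal sketch of the requested helper, assuming the usual field names of the browse-quotes response seen above (`MinPrice`, `OutboundLeg`, `OriginId`, `DestinationId`, `CarrierIds`); the exact keys may need adjusting.
# +
def flight_info(data):
    # map ids to human-readable names
    places = {p['PlaceId']: p['Name'] for p in data['Places']}
    carriers = {c['CarrierId']: c['Name'] for c in data['Carriers']}
    flights = []
    for q in data['Quotes']:
        leg = q['OutboundLeg']
        flights.append({
            'price': q['MinPrice'],
            'origin': places.get(leg['OriginId']),
            'destination': places.get(leg['DestinationId']),
            'company': [carriers.get(cid) for cid in leg['CarrierIds']],
        })
    return flights

flight_info(json_data)
# -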
| 1,674 |
/Python Code/Logistic Reg/Logistic Regression/Arousal.ipynb
|
202cb43da3f8d434c23ff70263b532b3c99628c5
|
[] |
no_license
|
Didanny/EECE-499-Emotion-Recognition
|
https://github.com/Didanny/EECE-499-Emotion-Recognition
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 24,731 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.feature_selection import RFE, RFECV
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import StratifiedKFold, cross_val_score, cross_validate
import pandas as pd
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn import datasets, linear_model
from sklearn.linear_model import LogisticRegressionCV
#load_path = 'D:\EECE499\Features\\'
load_path = '..\..\..\..\\'
Features = pd.read_excel(load_path + 'Features.xlsx')
def sort_list(list1, list2):
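    """Return list1 reordered according to the ascending order of the corresponding values in list2."""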
zipped_pairs = zip(list2, list1)
z = [x for _, x in sorted(zipped_pairs)]
return z
X = Features.iloc[:, :-5].values
y = Features.iloc[:, -5].values
standard_scaler = StandardScaler()
X_scaled = standard_scaler.fit_transform(X)
classifier = LogisticRegression(random_state = 42, solver='lbfgs', max_iter=1000, multi_class='ovr')
result = cross_validate(classifier, X_scaled, y, cv=10, verbose=1, \
scoring=['accuracy', 'neg_mean_absolute_error', 'neg_mean_squared_error'])
print(result['train_accuracy'].mean(), result['test_accuracy'].mean())
print(-result['train_neg_mean_absolute_error'].mean(), -result['test_neg_mean_absolute_error'].mean())
print(np.sqrt(-result['train_neg_mean_squared_error'].mean()), np.sqrt(-result['test_neg_mean_squared_error'].mean()))
classifier.fit(X, y)
y_pred = classifier.predict(X)
confusion_mtrx = confusion_matrix(y, y_pred)
confusion_mtrx
accuracy = accuracy_score(y, y_pred)
mae = mean_absolute_error(y, y_pred)
rmse = np.sqrt(mean_squared_error(y, y_pred))
print(accuracy, mae, rmse)
# +
y_new = sort_list(y_pred, y)
plt.title('Arousal Predictions')
plt.plot(y_new, color='green')
plt.plot(sorted(y), color='red')
plt.legend(['predicted', 'actual'])
plt.savefig('arousal.eps', format='eps', dpi=1000)
plt.savefig('arousal.png', format='png', dpi=1000)
plt.show()
# -
| 2,515 |
/.ipynb_checkpoints/Save_Vorticity_Terms-checkpoint.ipynb
|
977a776e91b2e8d0a9554a5dc133e4a9bece079e
|
[] |
no_license
|
hmkhatri/MOM6_Momentum_Budget
|
https://github.com/hmkhatri/MOM6_Momentum_Budget
| 3 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 28,299 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Final Assignment
#Question 1
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
class polynomial(object):
def __init__(self, c, t, v='x'):
self.coeff = c
self.tri=t
self.v = v
def __repr__(self):
coeff = self.coeff
v = self.v
tri=self.tri
s = ''
D = len(coeff)
first = True
for i in range(D):
pw = D-i-1
pre = '+' if coeff[i]>0 else ''
if first:
if pre=='+':
pre = ''
first = False
if pw == 0:
vname = ''
tname = ''
elif pw == 1:
vname = v
tx='sin' if tri[i]=='s' else 'cos'
tname =tx+ '(' + vname + ')'+' '
else:
vname = str(pw)+v
tx='sin' if tri[i]=='s' else 'cos'
tname =tx+ '(' + vname + ')'+' '
if coeff[i] != 0:
s += pre+str(coeff[i])+ ' '+ tname
return s
#in order for this code to work, the condition len(c) == len(t) + 1 must be satisfied
p = polynomial([0.2, -2, -2, 7],['s', 'c', 'c'], 'z')
print(p)
# +
#Final Assignment
#Question 2
class myobj(object):
def __init__(self, name):
self.name=name
self.dep=[]
def add_dependency(self, dependency):
self.dep.append(dependency)
    def built(self):
        pass  # method left unimplemented
#I could not complete this task, but I want to know the answer. Could you please share it with us?
# \begin{equation}
# \int_{-H}^{\eta} \mathbf{u}\,dz - \frac{1}{\rho_o}\int_{-H}^{\eta} \nabla p\,dz + \beta \overline{V} + f\overline{\dfrac{Q_m}{\rho_o}} - f\partial_t\eta \tag{3}
# \end{equation}
# +
import xarray as xr
import numpy as np
from xgcm import Grid
import filter
from dask.diagnostics import ProgressBar
# %matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import glob, os
from dask.distributed import Client
from dask.distributed import LocalCluster
cluster = LocalCluster()
client = Client(cluster)
client
# +
# Read Data
# for 1/8 deg data
#path1 = "/archive/Hemant.Khatri/MOM_Budget/OM4p125/"
#filelist = glob.glob(path1 + "OM4p125*.nc")
#save_file = "OM4p125_Vorticity_Budget.nc"
#path_grid = "/archive/Raphael.Dussin/FMS2019.01.03_devgfdl_20210308/CM4_piControl_c192_OM4p125_v3/" + \
#"gfdl.ncrc4-intel18-prod-openmp/pp/ocean_monthly/"
#ds_grid = xr.open_dataset(path_grid + "ocean_monthly.static.nc")
#ds_grid = ds_grid.isel(xq = slice(0,2880), yq=slice(0,2240))
# for 1/4 deg data
path1 = "/archive/Hemant.Khatri/MOM_Budget/OM4p25/"
filelist = glob.glob(path1 + "OM4p25*.nc")
save_file = "OM4p25_Vorticity_Budget.nc"
path_grid = "/archive/Raphael.Dussin/FMS2019.01.03_devgfdl_20210308/CM4_piControl_c192_OM4p25/" + \
"gfdl.ncrc4-intel18-prod-openmp/pp/ocean_monthly/"
ds_grid = xr.open_dataset(path_grid + "ocean_monthly.static.nc")
ds_grid = ds_grid.isel(xq = slice(1,1441), yq=slice(1,1081))
#-------------------------------------------------------------
filelist.sort()
ds = []
for i in range(0,len(filelist)):
d = xr.open_dataset(filelist[i])
ds.append(d)
ds = xr.concat(ds, dim='tim')
ds = ds.chunk({'tim': 1})
print(ds)
# -
print(ds_grid)
# +
# Create grid and interpolate depth, beta
OMEGA = 7.2921e-5
RAD_EARTH = 6.378e6
grid = Grid(ds, coords={'X': {'center': 'xh', 'right': 'xq'},
'Y': {'center': 'yh', 'right': 'yq'} }, periodic=[ ])
depth_u = grid.interp(ds['deptho'].isel(tim=0) * ds['areacello'].isel(tim=0), 'X', boundary='fill') / ds['areacello_cu'].isel(tim=0)
depth_v = grid.interp(ds['deptho'].isel(tim=0) * ds['areacello'].isel(tim=0), 'Y', boundary='fill') / ds['areacello_cv'].isel(tim=0)
depth_q = grid.interp(depth_u * ds['areacello_cu'].isel(tim=0), 'Y', boundary='fill') / ds['areacello_bu'].isel(tim=0)
colh_u = grid.interp(ds['col_height'] * ds['areacello'].isel(tim=0), 'X', boundary='fill') / ds['areacello_cu'].isel(tim=0)
colh_v = grid.interp(ds['col_height'] * ds['areacello'].isel(tim=0), 'Y', boundary='fill') / ds['areacello_cv'].isel(tim=0)
beta_v = 2*OMEGA*np.cos(ds.geolat_v.isel(tim=0) * np.pi /180.)/RAD_EARTH
beta_q = 2*OMEGA*np.cos(ds.geolat_c.isel(tim=0) * np.pi /180.)/RAD_EARTH
# +
# compute terms in vorticity budget
rho_0 = 1035.
BPT = xr.Dataset()
vmo_bv = (ds['vmo'] / (rho_0 * ds['dxCv'].isel(tim=0)))
vmo_bv = beta_q * grid.interp(vmo_bv, 'X', boundary='fill')
BPT['vmo_bv'] = vmo_bv
umo = (ds['umo'] / (rho_0 * ds['dyCu'].isel(tim=0)))
umo = grid.interp(umo, 'Y', boundary='fill')
BPT['umo'] = umo
BPT_1 = (( - grid.diff((ds['intz_PFu_2d'] + ds['intz_u_BT_accel_2d']) * ds.dxCu.isel(tim=0), 'Y', boundary='fill')
+ grid.diff((ds['intz_PFv_2d'] + ds['intz_v_BT_accel_2d']) * ds.dyCv.isel(tim=0), 'X', boundary='fill'))
/ ds.areacello_bu.isel(tim=0))
BPT['BPT'] = BPT_1
BPT['depth'] = (depth_q.load())
Mass_Surf = (grid.interp(grid.interp(ds['wfo'] * ds['areacello'].isel(tim=0), 'X', boundary='fill'), 'Y', boundary='fill')
* ds['Coriolis'].isel(tim=0) / (rho_0)) / ds['areacello_bu'].isel(tim=0)
BPT['Qm'] = Mass_Surf
dhdt = (grid.interp(grid.interp(ds['zos'] * ds['areacello'].isel(tim=0), 'X', boundary='fill'), 'Y', boundary='fill')
* ds['Coriolis'].isel(tim=0)) / ds['areacello_bu'].isel(tim=0)
BPT['fdhdt'] = dhdt
div_u = (grid.diff(ds['umo'] / (rho_0), 'X', boundary='fill') +
grid.diff(ds['vmo'] / (rho_0), 'Y', boundary='fill') ) / ds['areacello'].isel(tim=0)
div_u = - (grid.interp(grid.interp(div_u * ds['areacello'].isel(tim=0), 'X', boundary='fill'), 'Y', boundary='fill')
* ds['Coriolis'].isel(tim=0)) / ds['areacello_bu'].isel(tim=0)
BPT['div_u'] = div_u
Curl_dudt = ( - grid.diff(ds['hf_dudt_2d'] * colh_u * ds['dxCu'].isel(tim=0), 'Y', boundary='fill')
+ grid.diff(ds['hf_dvdt_2d'] * colh_v * ds['dyCv'].isel(tim=0), 'X', boundary='fill') ) / ds.areacello_bu.isel(tim=0)
BPT['Curl_dudt'] = Curl_dudt
Curl_taus = ( - grid.diff((ds['taux'])* ds.dxCu.isel(tim=0), 'Y', boundary='fill')
+ grid.diff((ds['tauy'])* ds.dyCv.isel(tim=0), 'X', boundary='fill') )/ ds.areacello_bu.isel(tim=0)
Curl_taus = Curl_taus / (rho_0 )
BPT['Curl_taus'] = Curl_taus
Curl_taub = ( - grid.diff((-ds['taux_bot'])* ds.dxCu.isel(tim=0), 'Y', boundary='fill')
+ grid.diff(-ds['tauy_bot'] * ds.dyCv.isel(tim=0), 'X', boundary='fill') )/ ds.areacello_bu.isel(tim=0)
Curl_taub = Curl_taub / (rho_0 )
BPT['Curl_taub'] = Curl_taub
Curl_Hrv2 = ( - grid.diff((ds['intz_rvxv_2d'] + ds['intz_gKEu_2d']) * ds.dxCu.isel(tim=0), 'Y', boundary='fill')
+ grid.diff((ds['intz_rvxu_2d'] + ds['intz_gKEv_2d']) * ds.dyCv.isel(tim=0), 'X', boundary='fill') )/ ds.areacello_bu.isel(tim=0)
BPT['Curl_NL'] = Curl_Hrv2
Curl_Hdiff2 = ( - grid.diff(ds['intz_diffu_2d'] * ds.dxCu.isel(tim=0), 'Y', boundary='fill')
+ grid.diff(ds['intz_diffv_2d'] * ds.dyCv.isel(tim=0), 'X', boundary='fill') )/ ds.areacello_bu.isel(tim=0)
BPT['Curl_Hdiff'] = Curl_Hdiff2
Curl_Cor2 = ( - grid.diff((ds['intz_CAu_2d'] - ds['intz_gKEu_2d'] - ds['intz_rvxv_2d'])* ds.dxCu.isel(tim=0), 'Y', boundary='fill')
+ grid.diff((ds['intz_CAv_2d'] - ds['intz_gKEv_2d'] - ds['intz_rvxu_2d'])* ds.dyCv.isel(tim=0), 'X', boundary='fill'))/ ds.areacello_bu.isel(tim=0)
BPT['Curl_Cor'] = Curl_Cor2
tmpx = (ds['hf_dudt_2d'] * colh_u - ds['intz_CAu_2d']-ds['intz_PFu_2d']-ds['intz_diffu_2d']-
ds['intz_u_BT_accel_2d'] - ds['taux']/rho_0 + ds['taux_bot']/rho_0)
tmpy = (ds['hf_dvdt_2d'] * colh_v - ds['intz_CAv_2d']-ds['intz_PFv_2d']-ds['intz_diffv_2d']-
ds['intz_v_BT_accel_2d'] - ds['tauy'] /rho_0 + ds['tauy_bot']/rho_0)
Curl_remap = ( - grid.diff(tmpx * ds.dxCu.isel(tim=0), 'Y', boundary='fill')
+ grid.diff(tmpy * ds.dyCv.isel(tim=0), 'X', boundary='fill') )/ ds.areacello_bu.isel(tim=0)
BPT['Curl_remap'] = Curl_remap
# +
times = np.linspace(2.5, len(filelist) * 5 - 2.5, len(filelist))
ds_save = xr.Dataset()
ds_save['ssh'] = ds['col_height'] - ds['deptho']
ds_save['ssh'].attrs['units'] = "m"
ds_save['ssh'].attrs['standard_name'] = "sea surface height above geoid"
ds_save['beta_V'] = BPT['vmo_bv']
ds_save['beta_V'].attrs['units'] = "m/s^2"
ds_save['beta_V'].attrs['standard_name'] = "Meridional Coriolis gradient x depth-integrated meridional velocity"
ds_save['BPT'] = BPT['BPT'] + BPT['Curl_Cor'] + BPT['vmo_bv'] + BPT['Qm'] - BPT['fdhdt']
ds_save['BPT'].attrs['units'] = "m/s^2"
ds_save['BPT'].attrs['standard_name'] = "Bottom Pressure Torque"
ds_save['Curl_Adv'] = (BPT['Curl_NL'] + BPT['Curl_remap'])
ds_save['Curl_Adv'].attrs['units'] = "m/s^2"
ds_save['Curl_Adv'].attrs['standard_name'] = "Curl of depth-integrated nonlinear advection term"
ds_save['Curl_taus'] = BPT['Curl_taus']
ds_save['Curl_taus'].attrs['units'] = "m/s^2"
ds_save['Curl_taus'].attrs['standard_name'] = "Curl of Surface Wind Stress / rho_0"
ds_save['Curl_taub'] = BPT['Curl_taub']
ds_save['Curl_taub'].attrs['units'] = "m/s^2"
ds_save['Curl_taub'].attrs['standard_name'] = "- Curl of bottom boundary stress / rho_0"
ds_save['Curl_diff'] = BPT['Curl_Hdiff']
ds_save['Curl_diff'].attrs['units'] = "m/s^2"
ds_save['Curl_diff'].attrs['standard_name'] = "Curl of depth-integrated horizontal diffusion"
ds_save['Mass_flux'] = (- BPT['Qm'])
ds_save['Mass_flux'].attrs['units'] = "m/s^2"
ds_save['Mass_flux'].attrs['standard_name'] = " - Coriolis x Surface mass flux / rho_0"
ds_save['eta_dt'] = BPT['fdhdt']
ds_save['eta_dt'].attrs['units'] = "m/s^2"
ds_save['eta_dt'].attrs['standard_name'] = " Coriolis x d(eta)/dt"
ds_save['Curl_dudt'] = (-BPT['Curl_dudt'])
ds_save['Curl_dudt'].attrs['units'] = "m/s^2"
ds_save['Curl_dudt'].attrs['standard_name'] = " - Curl of depth-integrated du/dt"
ds_save = xr.merge([ds_save, ds_grid])
ds_save = ds_save.rename({'tim': 'time'})
ds_save.coords['time'] = times
ds_save.time.attrs['units'] = "years since 0001"
ds_save = ds_save.transpose('time','yq','yh','xq','xh')
# -
print(ds_save)
path1 = "/work/Hemant.Khatri/"
# %time ds_save.load().to_netcdf(path1 + save_file)
ds.close()
ds_save.close()
client.close()
cluster.close()
| 10,518 |
/4_alpha_research_factor_modeling/2-risk_factor_models/factor_model_portfolio_return_solution.ipynb
|
977fa6ccfefb6a7d724c6b32ea09e9a7a174fc98
|
[] |
no_license
|
rwuebker/quant_trading_primer
|
https://github.com/rwuebker/quant_trading_primer
| 0 | 0 | null | 2022-06-21T22:33:09 | 2019-08-27T04:32:04 |
HTML
|
Jupyter Notebook
| false | false |
.py
| 362,215 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from pydataset import data
from matplotlib import pyplot as plt
# %matplotlib inline
timedata = pd.read_csv("time_usage.csv")
timedata
#group by the year
x=timedata.groupby("Year")
means = []
#loop through all of the years and get the mean of all activities in that year
for i in range(2007,2018,1):
means.append(x.get_group(i)["Estimate"].mean())
plt.plot(range(2007,2018),means,'b--.',ms=10)
plt.title("Average Time Spent on All Activities")
plt.ylabel("Hours")
plt.xlabel("Year")
plt.show()
print("1.It stays about the same except there's a dip in 2014")
x = timedata.groupby("Activity")
wtv=x.get_group("Watching TV")
exer=x.get_group("Participating in Sports Exercise and Rec Activity")
#merge the 2 dataframes
df = pd.merge(wtv,exer,on='Year')
#rename the columns
df=df.rename(index=str,columns={"Estimate_x":"TV","Estimate_y":"Exercise"})
#plot the info
df.plot(y=["TV","Exercise"])
plt.title("Average Time Spent on Exercise vs Watching TV")
plt.ylabel("Hours")
plt.xlabel("Year")
plt.legend(loc='best')
plt.show()
print("2.People spend a lot more time watching TV than they do exercising!")
x=timedata.groupby("Year")
x=x.get_group(2017.0)
print('1.Time was most spent in Personal Care')
x.plot(kind='bar',y='Estimate',x='Activity')
plt.title("Time Spent in Activities in 2017")
plt.ylabel("Time (hours)")
plt.xlabel("Activity Name")
x=timedata.groupby("Activity")
wtv=x.get_group('Watching TV')
wtv.plot(kind='line',x='Year',y='Estimate')
plt.title("Time Spent Watching TV")
plt.ylabel("Time (hours)")
plt.show()
print('2. Watching TV increased the most between 2007 and 2008 while volunteering increased the most between 2016 and 2017')
v=x.get_group('Volunteering')
v.plot(kind='line',x='Year',y='Estimate')
plt.title("Time Spent Volunteering")
plt.ylabel("Time (hours)")
plt.show()
ohio = pd.read_csv("Ohio_1999.csv")
ohio
print('1.The highest paid Race, Sex combo is the Asian Male. The lowest paid is the Eskimo Woman')
ohio.pivot_table(values='Yearly Salary',index='Race',columns='Sex')
print("2.The African-American Woman works the least. This is shown by the number of hours they usually work and possible by the number of data points on black women")
ohio.pivot_table(values='Usual Hours Worked',index='Race',columns='Sex')
print('3.The Native American Male works the most per week.')
# Employment data from Ohio in 1999
edu = pd.cut(ohio['Educational Attainment'],[-1,39,42,46])
ohio.pivot_table(columns=edu,aggfunc='count')
print("1.The most common degree was no degree")
age = pd.qcut(ohio['Age'],4)
ohio.pivot_table(columns=edu,index=age,aggfunc='count')
print('2. Ages from 15-32 contains the most number of workers with no degree')
print('Ages from 15-32 contains the most number of workers with a degree that is less than a bachelors')
print('Ages from 32-40 contain the most number of workers with a bachelors degree or higher')
ohio.pivot_table(values='Yearly Salary',columns=edu,index=age)
print('3.The workers that are between 49-85 and have a degree higher than a bachelors are making the most on average')
# Iris Data
iris = data("iris")
iris
iris.pivot_table(columns='Species')
print('1.Setosa is easy to distinguish because the petal length is so much smaller than the others')
print('setosa',1.462*0.246,5.006*3.428)
print('versicolor',1.326*4.26,5.936*2.770)
print('virginica',5.552*2.026,6.588*2.974)
print('2.Petal size is smaller than sepal size in all cases')
factor_return_l = [factor_return_1, factor_return_2]
# ## Factor exposures
#
# Factor exposures refer to how "exposed" a stock is to each factor. We'll get into this more later. For now, just think of this as one number for each stock, for each of the factors.
from sklearn.linear_model import LinearRegression
"""
For now, just assume that we're calculating a number for each
stock, for each factor, which represents how "exposed" each stock is
to each factor.
We'll discuss how factor exposure is calculated later in the lessons.
"""
def get_factor_exposures(factor_return_l, asset_return):
lr = LinearRegression()
X = np.array(factor_return_l).T
y = np.array(asset_return.values)
lr.fit(X,y)
return lr.coef_
# +
factor_exposure_l = []
for i in range(len(asset_return_df.columns)):
factor_exposure_l.append(
get_factor_exposures(factor_return_l,
asset_return_df[asset_return_df.columns[i]]
))
factor_exposure_a = np.array(factor_exposure_l)
# -
print(f"factor_exposures for asset 1 {factor_exposure_a[0]}")
print(f"factor_exposures for asset 2 {factor_exposure_a[1]}")
# ## Quiz 1 Portfolio's factor exposures
#
# Let's make up some portfolio weights for now; in a later lesson, we'll look at how portfolio optimization combines alpha factors and a risk factor model to choose asset weights.
#
# $\beta_{p,k} = \sum_{i=1}^{N}(x_i \times \beta_{i,k})$
weight_1 = 0.60 #let's give AAPL a portfolio weight
weight_2 = 0.40 #give MSFT a portfolio weight
weight_a = np.array([weight_1, weight_2])
# For the sake of understanding, try saving each of the values
# into a separate variable to perform the multiplications and additions
# Check that your calculations for portfolio factor exposure match
# the output of this dot product:
# ```
# weight_a.dot(factor_exposure_a)
# ```
# TODO: calculate portfolio's exposure to factor 1
factor_exposure_1_1 = factor_exposure_a[0,0]
factor_exposure_2_1 = factor_exposure_a[1,0]
factor_exposure_p_1 = weight_1 * factor_exposure_1_1 + \
weight_2 * factor_exposure_2_1
factor_exposure_p_1
# TODO: calculate portfolio's exposure to factor 2
factor_exposure_1_2 = factor_exposure_a[0,1]
factor_exposure_2_2 = factor_exposure_a[1,1]
factor_exposure_p_2 = weight_1 * factor_exposure_1_2 + \
weight_2 * factor_exposure_2_2
factor_exposure_p_2
# ## Quiz 2 Calculate portfolio return
#
# For clarity, try storing the pieces into their own
# named variables and writing out the multiplications and addition.
#
# You can check if your answer matches this output:
# ```
# asset_return_df.values.dot(weight_a)
# ```
# +
# TODO calculate the portfolio return
asset_return_1 = asset_return_df.values[:,0]
asset_return_2 = asset_return_df.values[:,1]
portfolio_return = (weight_a[0] * asset_return_1) + \
(weight_a[1] * asset_return_2)
portfolio_return = pd.Series(portfolio_return,index=asset_return_df.index).rename('portfolio_return')
portfolio_return.head(2)
# -
# ## Quiz 3 Contribution of Factors
#
# The sum of the products of factor exposure times factor return is the contribution of the factors. It's also called the "common return." Calculate the common return of the portfolio, given the two factor exposures and the two factor returns.
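# In symbols, at each date $t$ the common return is $r_{common,t} = \sum_{k=1}^{K} \beta_{p,k} \times f_{k,t}$, which the cell below computes with $K=2$ factors.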
# +
# TODO Calculate the contribution of the two factors to the return of this example asset
common_return = (factor_exposure_p_1 * factor_return_1) + (factor_exposure_p_2 * factor_return_2)
common_return = common_return.rename('common_return')
common_return.head(2)
# -
# ## Quiz 4 Specific Return
# The specific return is the part of the portfolio return that isn't explained by the factors. So it's the actual return minus the common return.
# Calculate the specific return of the stock.
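# In symbols: $r_{p,t} = \sum_{k=1}^{K} \beta_{p,k} \times f_{k,t} + s_{p,t}$, so the specific return is simply $s_{p,t} = r_{p,t} - r_{common,t}$.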
# TODO: calculate the specific return of this asset
specific_return = portfolio_return - common_return
specific_return = specific_return.rename('specific_return')
# ## Visualize the common return and specific return
#
return_components = pd.concat([common_return,specific_return],axis=1)
return_components.head(2)
return_components.plot(title="asset return = common return + specific return");
pd.DataFrame(portfolio_return).plot(color='purple');
| 8,012 |
/Cardiovascular_Disease_Prediction.ipynb
|
ef7e4253d837ae4de1ef932b784f2dd9b5f9f286
|
[] |
no_license
|
jamessonlps/Projeto2_CDados
|
https://github.com/jamessonlps/Projeto2_CDados
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,740,554 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-3-public/blob/main/Course%201%20-%20Custom%20Models%2C%20Layers%20and%20Loss%20Functions/Week%204%20-%20Models/C1_W4_Lab_2_resnet-example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="L04u0O5altCk"
# # Ungraded Lab: Implementing ResNet
#
# In this lab, you will continue exploring Model subclassing by building a more complex architecture.
#
# [Residual Networks](https://arxiv.org/abs/1512.03385) make use of skip connections to make deep models easier to train.
# - There are branches as well as many repeating blocks of layers in this type of network.
# - You can define a model class to help organize this more complex code, and to make it easier to re-use your code when building the model.
# - As before, you will inherit from the [Model class](https://keras.io/api/models/model/) so that you can make use of the other built-in methods that Keras provides.
# + [markdown] id="gJfOJhgcltCo"
# ## Imports
# + id="CmI9MQA6Z72_"
try:
# # %tensorflow_version only exists in Colab.
# %tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.layers import Layer
# + [markdown] id="MisE6enyltCq"
# ## Implement Model subclasses
#
# As shown in the lectures, you will first implement the Identity Block which contains the skip connections (i.e. the `add()` operation below). This will also inherit the Model class and implement the `__init__()` and `call()` methods.
# + id="-FIkYUttchv5"
class IdentityBlock(tf.keras.Model):
def __init__(self, filters, kernel_size):
super(IdentityBlock, self).__init__(name='')
self.conv1 = tf.keras.layers.Conv2D(filters, kernel_size, padding='same')
self.bn1 = tf.keras.layers.BatchNormalization()
self.conv2 = tf.keras.layers.Conv2D(filters, kernel_size, padding='same')
self.bn2 = tf.keras.layers.BatchNormalization()
self.act = tf.keras.layers.Activation('relu')
self.add = tf.keras.layers.Add()
def call(self, input_tensor):
x = self.conv1(input_tensor)
x = self.bn1(x)
x = self.act(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.add([x, input_tensor])
x = self.act(x)
return x
# + [markdown] id="3QeSHFk2ltCr"
# From there, you can build the rest of the ResNet model.
# - You will call your `IdentityBlock` class two times below and that takes care of inserting those blocks of layers into this network.
# + id="YnMkmeecxw28"
class ResNet(tf.keras.Model):
def __init__(self, num_classes):
super(ResNet, self).__init__()
self.conv = tf.keras.layers.Conv2D(64, 7, padding='same')
self.bn = tf.keras.layers.BatchNormalization()
self.act = tf.keras.layers.Activation('relu')
self.max_pool = tf.keras.layers.MaxPool2D((3, 3))
# Use the Identity blocks that you just defined
self.id1a = IdentityBlock(64, 3)
self.id1b = IdentityBlock(64, 3)
self.global_pool = tf.keras.layers.GlobalAveragePooling2D()
self.classifier = tf.keras.layers.Dense(num_classes, activation='softmax')
def call(self, inputs):
x = self.conv(inputs)
x = self.bn(x)
x = self.act(x)
x = self.max_pool(x)
# insert the identity blocks in the middle of the network
x = self.id1a(x)
x = self.id1b(x)
x = self.global_pool(x)
return self.classifier(x)
# + [markdown] id="X-B7E6MoltCs"
# ## Training the Model
#
# As mentioned before, inheriting the Model class allows you to make use of the other APIs that Keras provides, such as:
# - training
# - serialization
# - evaluation
#
# You can instantiate a Resnet object and train it as usual like below:
#
# **Note**: If you have issues with training in the Coursera lab environment, you can also run this in Colab using the "open in colab" badge link.
# + id="6dMHKPz_dIc8"
# utility function to normalize the images and return (image, label) pairs.
def preprocess(features):
return tf.cast(features['image'], tf.float32) / 255., features['label']
# create a ResNet instance with 10 output units for MNIST
resnet = ResNet(10)
resnet.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# load and preprocess the dataset
dataset = tfds.load('mnist', split=tfds.Split.TRAIN)
dataset = dataset.map(preprocess).batch(32)
# train the model
resnet.fit(dataset, epochs=1)
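# As mentioned above, subclassing `Model` also gives you evaluation and serialization for free. A minimal sketch, re-using the training `dataset` and a hypothetical weights path purely for illustration:
# +
# evaluate on a few batches and save the learned weights
loss, acc = resnet.evaluate(dataset.take(100))
print('loss:', loss, 'accuracy:', acc)

resnet.save_weights('./resnet_mnist_weights')   # hypothetical output path
# -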
# worked with in this notebook</h3>
#
# The Kaggle platform provides a huge variety of *datasets* for *data science* and *machine learning* competitions. For this work, the ['Cardiovascular Disease Dataset'](https://www.kaggle.com/sulianova/cardiovascular-disease-dataset) will be used, which contains data (described in the next section) collected at the time of examination from a large number of patients.
#
# From these data, three predictive models will be built to answer the following question: `given a set of information about a patient, will they have cardiovascular disease or not?`
#
# These models will be built using the [scikit-learn](https://scikit-learn.org/stable/) library, which contains a vast set of *machine learning* tools that will help make predictions to answer the question above. In addition, the models will be compared with each other in terms of the accuracy obtained by each one, and these results will be briefly discussed.
#
# Before starting, it is necessary to import all the Python libraries and modules needed for the project.
# +
# Manipulação de dados
import pandas as pd
import numpy as np
# Visualização gráfica dos dados
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# Configurações de Exibição dos Gráficos:
sns.set_theme()
sns.set_context("notebook", font_scale=1.2, rc={"lines.linewidth": 2.5})
# Modelos preditivos
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
# Módulos complementares para os modelos preditivos
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn import metrics
from sklearn.metrics import plot_confusion_matrix
# Remove warnings
import warnings
warnings.filterwarnings('ignore')
# -
# <div id="tratamento-dados"></div>
#
# ______
# <h2> Tratamento da Base de Dados </h2>
#
# A base dados contém algumas informações coletadas de vários pacientes no momento do exame. A seguir, estão descritas as informações originais contidas no [*dataset*](https://www.kaggle.com/sulianova/cardiovascular-disease-dataset):
#
# - `idade`: idade do paciente (em dias) $\rightarrow$ (será convertida para `anos`)
# - `altura`: altura do paciente (em centímetros) $\rightarrow$ (será convertida para `metros`)
# - `peso`: peso ou massa corporal do paciente(em `kg`)
# - `genero`: gênero, em que
# - 1: masculino
# - 2: feminino
# - `pa_sist`: Pressão arterial sistólica - recurso de exame (em `mmHg`)
# - `pa_diast`: Pressão arterial diastólica - recurso de exame (em `mmHg`)
# - `colesterol`: nível de colesterol, em que:
# - 1: normal
# - 2: acima do normal
# - 3: muito acima normal
# - `glicose`: Glicose - recurso de exame, em que
# - 1: normal
# - 2: acima do normal
# - 3: muito acima do normal
# - `fumante`: classificação do paciente quanto ao hábito de fumar:
# - 0: não fumante
# - 1: fumante
# - `alcool`: classificação do paciente quanto ao hábito de Ingestão alcoólica de forma recorrente:
# - 0: não consome
# - 1: consome
# - `a_fisica`: classificação do paciente quanto à prática de atividade física
# - 0: não praticante
# - 1: praticante
# - `cardio`: Presença ou ausência de doença cardiovascular
# - 0: ausente
# - 1: presente
#
# As variáveis acima encontram-se originalmente em inglês no *dataset*, porém, serão traduzidas pra o Português neste trabalho.
#
# A variável *target* será a `cardio`, pois é ela quem classifica o paciente como portador ou não de doença cardiovascular. Logo, ela é a variável dependente (pois seu valor dependerá do valor de todas as outras, denominadas independentes).
#
# Além disso, é necessário fazer ainda uma série de manipulações no *dataset* antes de iniciar os trabalhos efetivamente. Dessa forma, algumas mudanças necessárias serão:
#
# - traduzir os *labels*
# - converter unidades da `idade` de dias para anos
# - converter unidades da `altura` de centímetros para metros
# - atribuir as categorias às variáveis categóricas (na descrição acima, note que alguns dados vêm classificados como 0 ou 1, por exemplo)
# - verificar e filtrar *outliers* (dados que destoam completamente do padrão observados ou que são inconsistentes com a realidade)
# +
# Leitura da base de dados:
dados = pd.read_csv('./data/cardio_data.csv', delimiter=';')
# Tradução dos labels para Português:
dados.columns = ['id', 'idade', 'genero', 'altura', 'peso', 'pa_sist', 'pa_diast',
'colesterol', 'glicose', 'fumante', 'alcool', 'a_fisica', 'cardio']
# -
dados.head(2)
# Contando elementos vazios em cada coluna do dataframe
dados.isnull().sum()
# Não há linhas com dados faltantes
# +
# Converte idade de dias para anos, arredondado para o inteiro inferior
dados['idade'] = dados['idade']//365
# Converte altura de centímetros para metros
dados['altura'] = dados['altura']/100
# -
# Converte colunas necessárias para categóricas
dados['genero'] = dados['genero'].astype('category')
dados['colesterol'] = dados['colesterol'].astype('category')
dados['glicose'] = dados['glicose'].astype('category')
dados['fumante'] = dados['fumante'].astype('category')
dados['alcool'] = dados['alcool'].astype('category')
dados['a_fisica'] = dados['a_fisica'].astype('category')
dados['cardio'] = dados['cardio'].astype('category')
# Cria cópia do dataframe. Esta cópia será mantida para conservar os valores numéricos
# que categorizam as variáveis categóricas
dados_num = dados.copy()
# Atribuindo as categorias para cada variável categórica
dados['colesterol'].cat.categories = ['normal', 'acima do normal', 'muito acima do normal']
dados['glicose'].cat.categories = ['normal', 'acima do normal', 'muito acima do normal']
dados['fumante'].cat.categories = ['não fumante', 'fumante']
dados['alcool'].cat.categories = ['não consome', 'consome']
dados['a_fisica'].cat.categories = ['não praticante', 'praticante']
dados['genero'].cat.categories = ['masculino', 'feminino']
dados['cardio'].cat.categories = ['ausente', 'presente'] # variável target
# Atribuindo ordenação às variáveis categóricas ordinais
dados['colesterol'] = dados['colesterol'].cat.as_ordered()
dados['glicose'] = dados['glicose'].cat.as_ordered()
dados.head()
# <div id="outliers"></div>
#
# ### Remoção de Outliers
#
# A última etapa a ser feita na limpeza do dataset corresponde à remoção de alguns outliers.
#
# Ao fazer uma rápida análise descritiva das variáveis numéricas, pode-se notar alguns dados que são inconsistentes com a realidade ou fogem totalmente do padrão observado na massa de dados. Vale lembrar, ainda, que a remoção dos outliers não afetará a base de dados de maneira significativa, haja vista a grande grande quantidade de dados (70000 linhas).
# #### Pressão Arterial (Sistólica e Diastólica)
#
# Algumas anomalias nesses dados podem ser mais facilmente detectadas ao plotar em um gráfico os pontos correspondentes às pressões sistólica e diastólica. Veja:
# Plota scatterplot comparando as duas pressões
plt.figure(figsize=(12, 8))
sns.scatterplot(data=dados, x="pa_sist", y="pa_diast",
hue="cardio", alpha=0.5,
palette=['#5683B3', '#E00705'])
plt.xlabel('Pressão Sistólica (mmHg)')
plt.ylabel('Pressão Diastólica (mmHg)')
plt.title('Análise das pressões');
# The plot above shows some pressure values that stand apart from the rest. To better understand these irregularities, follow the steps below.
# +
# Média, valor mínimo e máximo de Pressão Sistólica
media_p_sist = dados['pa_sist'].mean()
desv_pad_p_sist = dados['pa_sist'].std()
minimo_p_sist = min(dados['pa_sist'])
maximo_p_sist = max(dados['pa_sist'])
# Exibe resultados
print('PRESSÃO SISTÓLICA')
print('-----------------')
print(f'Média (em mmHg): {media_p_sist:.2f}')
print(f'Desvio Padrão (em mmHg): {desv_pad_p_sist:.2f}')
print(f'Mínimo (em mmHg): {minimo_p_sist:.2f}')
print(f'Máximo (em mmHg): {maximo_p_sist:.2f}')
# -
# From the output above, note that the standard deviation is larger than the mean, and that the minimum shows negative values of systolic blood pressure were recorded, which makes no physical sense. At the same time, there are values completely outside reality, as shown by the maximum pressure (16020 mmHg - roughly *21 atmospheres of pressure!!*).
#
# The same happens with the diastolic pressure data:
# +
# Média, valor mínimo e máximo de Pressão Sistólica
media_p_diast = dados['pa_diast'].mean()
desv_pad_p_diast = dados['pa_diast'].std()
minimo_p_diast = min(dados['pa_diast'])
maximo_p_diast = max(dados['pa_diast'])
# Exibe resultados
print('PRESSÃO DIASTÓLICA')
print('-----------------')
print(f'Média (em mmHg): {media_p_diast:.2f}')
print(f'Desvio Padrão (em mmHg): {desv_pad_p_diast:.2f}')
print(f'Mínimo (em mmHg): {minimo_p_diast:.2f}')
print(f'Máximo (em mmHg): {maximo_p_diast:.2f}')
# -
# It is worth emphasizing, again, that these data were collected during medical procedures. Since the collection conditions are unknown and the procedure did not take proper care with the quality of the records, these <i> outliers </i> are believed to be data-entry errors made when the information was recorded.
# #### Body Mass (Weight) and Height
# In the body mass (weight) and height data there are also values that fall outside the observed pattern. For example, there are records of adults with a body mass of 10 kg or 20 kg, as well as adults 55 cm tall. For comparison, [the shortest adult human in the world](http://revistagalileu.globo.com/Revista/Common/0,,EMI296481-17770,00-CONHECA+O+MENOR+HOMEM+DO+MUNDO.html) ever recorded was 54.6 cm tall and weighed 17 kg, an extremely rare case on the planet; such records will therefore be treated as outliers and removed from the dataset, since they completely deviate from the behavior of the population.
#
# Thus, applying the removal of all these outliers:
# +
# Filtros do dataset
def filtra_dataset(df):
cond_1 = (df['pa_sist'] > 24) # limite mínimo da pa_sist
cond_2 = (df['pa_sist'] < 240) # limite máximo da pa_sist
cond_3 = (df['pa_diast'] > 30) # limite mínimo da pa_diast
cond_4 = (df['pa_diast'] < 250)# limite máximo da pa_diast
cond_5 = (df['altura'] > 0.7) # limite mínimo da altura
cond_6 = (df['peso'] > 30) # limite mínimo do peso
# Unindo em um único filtro
condicoes = cond_1 & cond_2 & cond_3 & cond_4 & cond_5 & cond_6
# Atualiza dataset
df = df.loc[condicoes]
return df
# Atualizando
dados = filtra_dataset(dados)
dados_num = filtra_dataset(dados_num)
# -
# remove coluna 'id' (não tem significado para essa análise)
dados = dados.drop(['id'], axis=1)
dados_num = dados_num.drop(['id'], axis=1)
# Tamanho atualizado do dataset
dados.shape
# <div id="analise-exploratoria"></div>
#
# ___
# <h2 class="analise_exploratoria"> Análise Exploratória dos dados </h2>
#
# A análise exploratória é uma etapa significativa em Ciência dos Dados. Esta etapa, junto com a etapa anterior de tratamento da base de dados, são cruciais para se ter uma noção de como se comporta cada variável envolvida na análise frente à variável *target*. Nesse processo, por meio da construção de recursos gráficos e estatísticos, pode-se extrair informações que permitam avaliar o grau importância e dependência de cada uma das variáveis. Essa etapa de análise exploratória normalmente é feita junto com a etapa anterior, com tratamento dos dados, identificação de outliers etc. Contudo, aqui separou-se em duas etapas por questão de praticidade, em que esta etapa foca em verificar como os dados já organizados e "limpos" se comportam em relação à variável *target*.
#
# Antes de iniciar esse estudo propriamente dito, convém ressaltar duas diferenças básicas:
#
# - `Variáveis numéricas ou quantitativas`: são aquelas cujo valor é um número, isto é, são ordinais (exemplos: área, tempo, idade, quantidade etc.)
# - `Variáveis categóricas ou qualitativas`: são aquelas que denotam uma classificação ou categoria. Podem ser representadas por números, mas, nesse caso, cada número representa uma dessas categorias (exemplos: pequeno/médio/alto, sim/não, masculo/feminino etc.)
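# A quick check of this split in the dataframe (the quantitative columns keep a numeric dtype, while the qualitative ones were converted to `category` earlier):
# +
# quantitative vs. qualitative columns, read straight from the dtypes
numeric_cols = dados.select_dtypes(include='number').columns.tolist()
categorical_cols = dados.select_dtypes(include='category').columns.tolist()
print('Quantitative:', numeric_cols)
print('Qualitative:', categorical_cols)
# -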
# <div id="quantitativas"></div>
#
# <h3>Análise das Variáveis Quantitativas </h3>
# Função que recebe uma lista de colunas e o dataframe corresponde e devolve
# um novo dataframe com uma descrição estatística correspondente à cada coluna
def descreve_variavel(df, lista_colunas):
df_descrito = pd.DataFrame(df[lista_colunas]).describe()
df_descrito.index = ['Quantidade total',
'Média',
'Desvio Padrão',
'Valor mínimo',
'Quartil (25%)',
'Quartil (50%)',
'Quartil (75%)',
'Valor máximo']
return df_descrito
# Descrição estatística das variáveis numéricas
descreve_variavel(dados, ['idade', 'peso', 'altura', 'pa_sist', 'pa_diast']).round(3)
# The table above provides the main descriptive statistics (mean, standard deviation, quartiles, maximum and minimum values) for each of the numerical variables. Below, see how each of these variables is distributed.
# +
plt.figure(figsize=(20,8))
plt.subplot(131)
sns.histplot(dados['idade'], stat='density', bins=15, color='#E00705', alpha=0.7)
plt.ylabel('Densidade', fontsize=14)
plt.title('Distribuição de idade', fontsize=14)
plt.xlabel('Idade', fontsize=14)
plt.xlim(30, 70)
plt.subplot(132)
sns.histplot(dados['altura'], stat='density', bins=50, color='#E00705', alpha=0.7)
plt.ylabel('Densidade')
plt.title('Distribuição de Altura')
plt.xlabel('Altura')
plt.xlim (1.25, 2.00)
plt.subplot(133)
sns.histplot(dados['peso'], stat='density', bins=15, color='#E00705', alpha=0.7)
plt.ylabel('Densidade')
plt.title('Distribuição de Peso')
plt.xlabel('Peso')
plt.xlim(25,200)
plt.show()
# -
# The histograms above show how the variables `idade`, `altura` and `peso` are distributed in the dataset.
#
# - For age, the data are concentrated in the 35 to 65 year range, with an even higher concentration between 50 and 60 years.
# - For height, the concentration is mainly in the 1.5 to 1.8 meter range.
# - For weight, the density is highest around 75 kg. The density is also higher above this peak region than below it, that is, the distribution is right-skewed.
# Separando os dados entre Cardíacos e Saudáveis para Análise de Dados
cardiacos = dados.loc[dados.cardio == 'presente',:]
saudaveis = dados.loc[dados.cardio == 'ausente',:]
# +
# Faixas de exibição
faixa_idade = np.arange(35, 70, 5)
faixa_peso = np.arange(30, 150, 5)
plt.figure(figsize=(18,6))
# Plota hitograma de idade em função da categoria cardio
plt.subplot(121)
sns.histplot(dados, x='idade', hue='cardio', hue_order=['presente', 'ausente'], multiple='layer',
stat='density', bins=faixa_idade, element='step', palette=['#5683B3', '#E00705'])
plt.ylabel('Densidade')
plt.xlabel('Idade')
plt.title('Densidade da incidência de doenças \ncardiovasculares por idade')
plt.fontsize=14
# Plota histograma de peso em função da categoria cardio
plt.subplot(122)
sns.histplot(dados, x='peso', hue='cardio', hue_order=['presente', 'ausente'], multiple='layer',
bins=faixa_peso, stat='density', element='step', palette=['#5683B3', '#E00705'])
plt.ylabel('Densidade')
plt.xlabel('Peso')
plt.xlim((40, 130))
plt.title('Densidade da incidência de doenças \ncardiovasculares por peso')
plt.show()
# -
# In the age histogram on the left, for age ranges below 55 years people without cardiovascular disease predominate, while above 55 years the presence of cardiovascular disease predominates. This is an indicator of a relationship between `idade` and the existence of cardiovascular disease.
#
# In the weight histogram on the right, there is a higher density of cardiac patients in weight ranges above 75 kg. This also indicates a relationship between weight and the existence of cardiovascular disease, with higher weight values pointing to a greater chance of being a cardiac patient.
#
# These results confirm, to some extent, the remarks made at the beginning of the document, namely the higher occurrence of cardiovascular problems at more advanced ages and in heavier people.
#
# <h4>Analysis of the pressures</h4>
# +
# Função que plota histograma
def plota_histograma(dataframe, x, faixa, cor):
sns.histplot(dataframe, x=x, bins=faixa, stat='density', color=cor)
plt.xlabel('Pressão Arterial (mmHg)')
plt.ylabel('Densidade')
return None
# Faixa de valores no histograma
pa_bins = np.arange(40, 200, 10)
# Gera figura para plotagem
plt.figure(figsize=(15,13))
# Pressão sistólica para cardíacos
plt.subplot(2,2,1)
plota_histograma(cardiacos, 'pa_sist', pa_bins, 'maroon')
plt.title('Pressão Arterial Sistólica \n (Cardíacos)')
# Pressão sistólica para saudáveis
plt.subplot(2,2,2)
plota_histograma(saudaveis, 'pa_sist', pa_bins, 'navy')
plt.title('Pressão Arterial Sistólica \n (Saudáveis)')
# Pressão diastólica para cardíacos
plt.subplot(2,2,3)
plota_histograma(cardiacos, 'pa_diast', pa_bins, 'maroon')
plt.title('Pressão Arterial Diastólica \n (Cardíacos)')
# Pressão diastólica para saudáveis
plt.subplot(2,2,4)
plota_histograma(saudaveis, 'pa_diast', pa_bins, 'navy')
plt.title('Pressão Arterial Diastólica \n (Saudáveis)')
# Exibe gráficos
plt.tight_layout(pad=3.0)
plt.show()
# -
# The two histograms in the top row above show the density of systolic pressure for cardiac and healthy patients. For cardiac patients, there is a higher density at pressures above 120 mmHg than for healthy ones, which points to a relationship between high pressure values and the existence of cardiovascular disease.
#
# The two lower histograms compare the diastolic pressure. The distributions are similar, but the density at pressures above 90 mmHg is slightly higher for cardiac patients than for healthy ones. Even so, the difference is too small to establish a relationship based on the plots alone.
# <div id="qualitativas"></div>
#
# ### Analysis of the Qualitative Variables
#
# Below, you can see how the data are distributed across each qualitative variable.
# Dicionário com as propriedades dos gráficos
prop = {}
prop['variaveis'] = ['genero', 'fumante', 'alcool', 'colesterol', 'a_fisica','cardio']
prop['titulos'] = ['Distribuição da População', 'Distribuição de Cardíacos', 'Distribuição de Saudáveis']
prop['titulo_variavel'] = ['Gênero', 'Fumantes', 'Alcóolatras', 'Nível de Colesterol', 'Prática de Atividade Física', 'Target: cardio']
prop['labels'] = [['Masculino', 'Feminino'], ['Não-fumante', 'Fumante'], ['Não-consome', 'Consome'],
['Normal', 'Acima do normal', 'Muito acima do normal'], ['Não-praticante', 'Praticante'], ['Ausente', 'Presente']]
# Função que plota os gráficos de pizza
def grafico_pizza(n_linhas, n_colunas, dic):
# configura tamanho da imagem
plt.figure(figsize=(40,120))
# Plota o gráfico inicial: Distribuição da Target
plt.subplot(n_linhas, n_colunas, 2)
dados[dic['variaveis'][5]].value_counts(sort=False).plot.pie(autopct='%1.1f%%', explode= [0.05, 0],
textprops={'fontsize': 34},
startangle = 90, labels=dic['labels'][5] ,
colors=['#1E9E58', '#EBA72D', '#44EB90'])
titulo = 'Distribuição de Pacientes \n Cardíacos ou Saudáveis'
plt.title(titulo, fontsize=34)
plt.legend(loc=8, title = dic['titulo_variavel'][5], title_fontsize=32,
bbox_to_anchor=(0.25, -0.25, 0.5, 0.5), fontsize=32)
# Configuração para Plotar Gráficos automaticamente a partir da linha 2
i=4
linha = 2
coluna = 1
explode=[0.05,0]
while linha <= n_linhas:
plt.subplot(n_linhas, n_colunas, i)
if dic['variaveis'][linha-2] == 'colesterol':
explode = [0.05, 0.05, 0]
else:
explode = [0.05, 0]
if i % 3 == 1: # Primeira coluna
coluna = 1
dados[dic['variaveis'][linha-2]].value_counts(sort=False).plot.pie(autopct='%1.1f%%', explode= explode,
textprops={'fontsize': 34},
startangle = 90, labels=dic['labels'][linha-2] ,
colors=['#1E9E58', '#EBA72D', '#44EB90'])
titulo = dic['titulos'][coluna-1] + '\n por ' + dic['titulo_variavel'][linha-2]
plt.title(titulo, fontsize=34)
plt.legend(loc=8, title = dic['titulo_variavel'][linha-2], title_fontsize=28,
bbox_to_anchor=(0.25, -0.25, 0.5, 0.5), fontsize=28)
i += 1
elif i % 3 ==2: # Segunda coluna
coluna = 2
cardiacos[dic['variaveis'][linha-2]].value_counts(sort=False).plot.pie(autopct='%1.1f%%', explode= explode,
textprops={'fontsize': 34},
startangle = 90, labels=dic['labels'][linha-2] ,
colors=['#E64C49', '#AD0603', '#E69797'])
titulo = dic['titulos'][coluna-1] + '\n por ' + dic['titulo_variavel'][linha-2]
plt.title(titulo, fontsize=34)
plt.legend(loc=8, title = dic['titulo_variavel'][linha-2], title_fontsize=28,
bbox_to_anchor=(0.25, -0.25, 0.5, 0.5), fontsize=28)
i += 1
elif i % 3 ==0: # Terceira coluna
coluna = 3
saudaveis[dic['variaveis'][linha-2]].value_counts(sort=False).plot.pie(autopct='%1.1f%%', explode= explode,
textprops={'fontsize': 34},
startangle = 90, labels=dic['labels'][linha-2],
colors=['#6EA8E6', '#1F64AD', '#B6CFEA'])
titulo = dic['titulos'][coluna-1] + '\n por ' + dic['titulo_variavel'][linha-2]
plt.title(titulo, fontsize=34)
plt.legend(loc=8, title = dic['titulo_variavel'][linha-2], title_fontsize=28,
bbox_to_anchor=(0.25, -0.25, 0.5, 0.5), fontsize=28)
i += 1
linha +=1
plt.show()
# Plotando os gráficos
grafico_pizza (6, 3, prop)
# ### Discussão:
#
# Percebe-se que, no geral, os dados estão igualmente distribuídos entre _cardíacos_ e _saudáveis_ , não havendo grandes diferenças entre eles, exceto pelo _nível de colesterol_ , o que já pode se mostrar como um possível indicador para alertar possíveis cardíacos. Ao todo, percebe-se que a maioria da população estudada pratica atividade física e tem nível de colesterol normal.
# <div id="modelos-preditivos"></div>
#
# ___
# # Modelos Preditivos
# <div id="arvore-decisao"></div>
#
# ## 1. Árvore de Decisão (Decision Tree Classifier)
#
# ### O que é uma arvore de decisão?
#
# Você já deve ter visto alguns fluxogramas ou esquemas gráficos que ligam uma série de decisões que são tomadas sequencialmente, sempre dependendo da decisão tomada anteriormente, como o do exemplo abaixo:
#
#
# <img src="./image/fluxograma_pp.png" width="400" alt="fluxograma">
#
# Extraída de: [UFES](https://etica.ufes.br/fluxograma)
#
# As Árvores de Decisão são muito parecidas com o esquema acima. Elas funcionam como um mapa de todas os possíveis resultados e probabilidades de acordo com as *features* que se apresentam relacionadas ao problema. Em *machine learning*, as árvores de decisão consistem em métodos de aprendizagem supervisionado, podendo ser utilizadas em tarefas de classificação (quando a variável *target* é categórica) ou regressão (quando a variável *target* é numérica).
#
# Para compreender melhor a árvore de decisão, veja alguns conceitos:
#
# - **Nó raiz (ou nó pai)**: É o nó de maior importância (o de maior Ganho, em termos matemáticos $-$ conceito que será visto em breve), a partir do qual os dados passam a ser subdivididos em outros sub-nós.
# - **Nó de decisão**: São as subamostras que se dividem em mais subamostras ou sub-nós (não são o término de uma ramificação).
# - **Nó filho (ou folha)**: São as subamostras em que não ocorrem mais subdivisões.
#
# A figura a seguir deixa mais claro os conceitos acima.
#
# <img src="./image/decision-tree-classification-algorithm.png" width="400" alt="arvore de decisao">
#
#
# Fonte: [JavaTpoint](https://www.javatpoint.com/machine-learning-decision-tree-classification-algorithm)
#
#
# O nó verde representa o nó pai, os nós azuis representam os nós de decisão e os nós em cor de rosa representam os nós filhos (ou folhas)
#
# ### Como construir uma árvore de decisão?
#
# Como qualquer modelo de *machine learning*, é necessário entender um pouco do processo matemático que o embasa. A árvore de decisão construída neste projeto será utilizada para classificação. Para esse caso, há 2 critérios de divisão mais conhecidos: *Entropia* e *Índice de Gini*.
#
# **A) Entropia**
#
# A Entropia é mais conhecida por medir o "grau de desordem" de um sistema. Nas árvores de decisão, a entropia mede a falta de *homogeneidade* de uma amostra, isto é, mede a impureza dos dados relacionados à sua classificação. Em outras palavras, a entropia faz o controle de como a Árvore decide a divisão dos dados.
#
# O cálculo da entropia ($S$) pode ser feito conforme a equação:
#
# $$Entropia (S) = - \sum_{i=1}^n p_i \cdot \log_{2}p_i$$
#
# em que:
# - $S$ é um conjunto de dados com $n$ classes diferentes
# - $p_i$ é a fração de dados de $S$ que pertencem à classe $i$
#
# A partir dos cálculos de Entropia, calcula-se o Ganho de informação que, em outras palavras, pode ser entendido como o grau de importância de um dado atributo. Assim, o ganho de um dado atributo $A$ de um conjunto de dados $S$ é dado por:
#
# $$Ganho(S, A) = Entropia(S) - \sum_{v\in Valores(A)}\frac{\vert S_v \vert}{\vert S \vert}Entropia(S_v)$$
#
# em que
# - $Valores(A)$ representa os possíveis valores de $A$
# - $v$ é um elemento de $Valores(A)$
# - $S_v$ é o subconjunto de $S$ quando $x=A$
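# Apenas para ilustrar numericamente as fórmulas acima, segue um esboço mínimo (didático, com um atributo e rótulos hipotéticos, sem relação com o dataset deste projeto) do cálculo da entropia e do ganho de informação:
# +
import numpy as np
def entropia(rotulos):
    # Fração p_i de cada classe presente no conjunto de rótulos
    _, contagens = np.unique(rotulos, return_counts=True)
    p = contagens / contagens.sum()
    return -np.sum(p * np.log2(p))
def ganho_informacao(rotulos, atributo):
    # Entropia(S) menos a soma ponderada das entropias dos subconjuntos S_v
    ganho = entropia(rotulos)
    for v in np.unique(atributo):
        mascara = (atributo == v)
        ganho -= mascara.mean() * entropia(rotulos[mascara])
    return ganho
# Exemplo hipotético: target binária e um atributo com dois valores possíveis
y_exemplo = np.array([1, 1, 0, 0, 1, 0, 1, 1])
x_exemplo = np.array(['a', 'a', 'b', 'b', 'a', 'b', 'a', 'b'])
print(f'Entropia(S) = {entropia(y_exemplo):.3f}')
print(f'Ganho(S, x) = {ganho_informacao(y_exemplo, x_exemplo):.3f}')
# -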
#
# **B) Índice de Gini**
#
# O Índice de Gini é uma expressão matemática que calcula a impureza de um nó (grau de heterogeneidade dos dados) e é determinado por:
#
# $$Gini = 1 - \sum_{i=1}^n p_i^2$$
#
# em que:
# - $p_i$ é a fração de cada classe em cada nó
# - $n$ é o número de classes
#
# > *Quando, nas árvores de classificação com partições binárias, se utiliza o critério de Gini tende-se a isolar num ramo os registros que representam a classe mais freqüente. Quando se utiliza a entropia, balanceia-se o número de registros em cada ramo.* (SILVA, Luiza Maria Oliveira da, p.44, 2005. Disponível [aqui](https://www.maxwell.vrac.puc-rio.br/colecao.php?strSecao=resultado&nrSeq=7587@1&msg=28#))
#
# Neste projeto, a construção da Árvore de Decisão será feita com o critério da Entropia, tendo em vista o balanço de registros que esse método gera para cada ramo da árvore.
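# De forma análoga, o Índice de Gini pode ser calculado com poucas linhas, permitindo comparar os dois critérios em um mesmo nó (valores meramente ilustrativos, reaproveitando a função `entropia` esboçada acima):
# +
def indice_gini(rotulos):
    # 1 menos a soma dos quadrados das frações de cada classe no nó
    _, contagens = np.unique(rotulos, return_counts=True)
    p = contagens / contagens.sum()
    return 1 - np.sum(p ** 2)
y_no = np.array([1, 1, 1, 0, 0, 0, 0, 0])
print(f'Gini = {indice_gini(y_no):.3f} | Entropia = {entropia(y_no):.3f}')
# -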
#
# ### Vantagens das Árvores de Decisão
#
# - **Fácil compreensão**: sua representação gráfica ajuda a tornar o entendimento mais intuitivo, ainda que a pessoa não tenha habilidades técnicas na área.
# - **Abrangência aos tipos de variáveis**: esse modelo é capaz de trabalhar tanto com variáveis numéricas como com categóricas.
# - **Fácil manipulação com dados**: além de não exigir grandes manutenções na base de dados (consegue, até certo ponto, não ser influenciada por *outliers* ou valores faltantes), também ajuda na identificação das variáveis mais relevantes e significativas na predição da variável *target*.
#
# ### Desvantagens das Árvores de Decisão
#
# - ***Overfitting***: é uma das dificuldades que se deve ter em mente ao trabalhar com Árvores de Decisão. O modelo pode se ajustar demais com facilidade aos dados de treino, tornando-o inviável como um classificador ao validar com outros conjuntos de dados.
#
# <div id="implementacao-dt"></div>
#
# ### Implementação do modelo
#
# A aplicação da árvore de decisão será feita por meio da biblioteca scikit-learn [(vide documentação)](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) (a importação encontra-se nas primeiras células deste documento). Para implementar, cria-se o modelo com *DecisionTreeClassifier()* e aplica-se o método *fit()*, que recebe os dados de treino e ajusta o modelo para classificar quaisquer *datasets* no mesmo formato.
#
# No objeto *DecisionTreeClassifier()*, há alguns argumentos importantes:
#
# - **criterion**: é o critério com o qual a árvore será construída (*Gini* ou Entropia).
# - **max_depth**: é a profundidade máxima da árvore. Esse argumento limita o tamanho da árvore e deve ser utilizado com cautela, pois uma profundidade grande demais pode levar a *overfitting*, e uma pequena demais pode tirar do modelo a capacidade de generalizar para outros conjuntos de dados.
#
#
#
# ### Preparando a base de dados para implementação
#
# Para a implementação de fato do modelo, é necessário dividir a base de dados em dois grupos:
#
# - **Dados de treino**: esses dados serão aqueles utilizados para montar de fato o modelo, isto é, são os dados que ensinam ao computador como se comporta cada variável em relação a variável *target* e o quanto e como cada uma influenciará uma possível classificação.
# - **Dados de teste**: esses dados serão aqueles utilizados para testar a precisão do modelo, isto é, são dados não iguais aos de treino para verificar o quão preciso está o modelo nas suas classificações.
#
# A divisão desses dados será feita com a função *train_test_split()* do módulo *model_selection*, da biblioteca *scikit-learn* (a importação encontra-se nas primeiras células deste documento). Essa função recebe os dados de treino (sem a coluna da variável *target*) e os dados *target*, ou seja, recebe os dados do "eixo x" e do "eixo y", além da fração com que deve ser feita a divisão (em geral, utiliza-se de 20 a 30% dos dados para teste). Aqui, serão utilizados 20% dos dados para teste.
#
# As células abaixo mostram as etapas de preparação dos dados e implementação da árvore de decisão.
# +
target = 'cardio'
# Series com a variável target
dados_target = dados_num[target]
# Remove target
dados_num = dados_num.drop([target], axis=1)
# Separando os dados de treino e teste
dados_treino, dados_teste, target_treino, target_teste = train_test_split(dados_num,
dados_target,
test_size=0.2,
random_state=0)
# +
# Cria árvore de decisão
arvore_decisao = DecisionTreeClassifier(criterion='entropy')
# Treina o modelo
modelo_dt = arvore_decisao.fit(dados_treino, target_treino)
# Calcula acurácia do modelo
acuracia_dt = arvore_decisao.score(dados_teste, target_teste) * 100
print(f'Precisão do modelo (Decision Tree): {acuracia_dt:.4f} %')
# -
# Profundidade da árvore gerada
print(f'Profundidade da árvore: {modelo_dt.tree_.max_depth}')
# <div id="matriz-dt"></div>
#
# ### Verificando resultado com Matriz de Confusão
#
# A matriz de confusão é uma maneira de se avaliar o desempenho do modelo. Ela compara, por meio de uma tabela, os dados verdadeiros com os valores previstos para cada um. Assim, ela mostra 4 possibilidades:
#
# - `Verdadeiro positivo`: é quando uma condição é positiva e a previsão também é positiva. No caso deste modelo, é quando o valor verdadeiro é 1 e o previsto também é 1. Ele é exibido no canto superior esquerdo.
# - `Verdadeiro negativo`: é quando uma condição é negativa e a previsão também é negativa. No caso deste modelo, é quando o valor verdadeiro é 0 e o previsto também é 0. Ele é exibido no canto inferior direito.
# - `Falso positivo` (erro tipo I): é quando o modelo prevê a condição positiva (1) quando o valor verdadeiro é negativo (0)
# - `Falso negativo` (erro tipo II): é quando o modelo prevê a condição negativa (0) quando o valor verdadeiro é positivo (1)
#
# A biblioteca do scikit-learn já contém um módulo que plota essa matriz (a importação encontra-se nas primeiras células deste notebook). Veja a seguir a sua implementação:
#
# +
# Figura para plotagem do gráfico
fig, ax = plt.subplots(figsize=(7, 6))
# Cria e plota matriz de confusão
plot_confusion_matrix(modelo_dt, dados_teste, target_teste,
normalize='true', display_labels=['Ausente', 'Presente'],
cmap=plt.cm.Blues, ax=ax, values_format='.2%')
plt.xlabel("Valor Previsto", fontsize=14)
plt.ylabel("Valor Verdadeiro")
plt.title("Matriz de confusão\nNormalizada por categoria")
plt.grid(False)
# -
# A matriz de confusão acima traz os resultados <u> normalizados por cada categoria</u>, isto é:
#
# - em relação aos dados cujo valor verdadeiro é *ausente*, 65.46% destes foram classificados como ausente (verdadeiro negativo), enquanto os outros 34.54% foram classificados pelo modelo como presente (falso positivo).
# - em relação aos dados cujo valor verdadeiro é *presente*, cerca de 62.73% foram classificados como *presente* (verdadeiro positivo), enquanto os outros 37.27% foram classificados como ausente (falso negativo).
#
# +
# Figura para plotagem do gráfico
fig, ax = plt.subplots(figsize=(7, 6))
# Cria e plota matriz de confusão
plot_confusion_matrix(modelo_dt, dados_teste, target_teste,
normalize='all', display_labels=['Ausente', 'Presente'],
cmap=plt.cm.Blues, ax=ax, values_format='.2%')
plt.xlabel("Valor Previsto")
plt.ylabel("Valor Verdadeiro")
plt.title("Matriz de confusão\nNormalizada pelo total")
plt.grid(False)
# -
# A matriz de confusão acima traz os resultados <u>normalizados pelo total</u>, isto é, mostra a porcentagem de coincidências entre os valores verdadeiros e a classificação efetuada pelo modelo. A soma entre os verdadeiros positivos e os verdadeiros negativos resulta na acurácia de aproximadamente 64%.
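# A partir da mesma matriz, sem normalização, também é possível extrair métricas complementares à acurácia. O esboço abaixo (ilustrativo, usando o modelo e os dados de teste já definidos acima) calcula sensibilidade, especificidade e precisão com a função `confusion_matrix` do scikit-learn:
# +
from sklearn.metrics import confusion_matrix
# ravel() devolve a matriz na ordem: verdadeiros negativos, falsos positivos, falsos negativos, verdadeiros positivos
vn, fp, fn, vp = confusion_matrix(target_teste, modelo_dt.predict(dados_teste)).ravel()
sensibilidade = vp / (vp + fn)   # proporção de cardíacos corretamente identificados
especificidade = vn / (vn + fp)  # proporção de saudáveis corretamente identificados
precisao = vp / (vp + fp)        # proporção de acertos entre os classificados como cardíacos
print(f'Sensibilidade: {sensibilidade:.2%}')
print(f'Especificidade: {especificidade:.2%}')
print(f'Precisão: {precisao:.2%}')
# -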
# <div id="cross-dt"></div>
#
# ### Validação Cruzada (*Cross-Validation*)
#
# Uma técnica interessante para se avaliar o quão bem um modelo está generalizado, isto é, como ele desempenha frente a dados desconhecidos, é a validação cruzada. Diferentemente do processo feito anteriormente, no qual se fez uma divisão dos dados e então se verificou uma única vez a acurácia do modelo, esta técnica permite utilizar vários subconjuntos de dados a partir do próprio *dataset*.
#
# A vantagem de se utilizar essa técnica é que se pode diminuir as variações de resultados em execuções diferentes do modelo. Uma única divisão dos dados, ainda que aleatória, pode enviesar o modelo fazendo-o parecer bem sucedido quando os dados de teste e treino são parecidos e, quando confrontado com dados desconhecidos, ficar aquém do esperado. Um ponto negativo da técnica é que, quanto maior o conjunto de dados, mais poder computacional será necessário.
#
# #### Como funciona
#
# Dentre as modalidades de validação cruzada existentes, será utilizada a *K-Fold*. Esse método pega o *dataset* e, a partir de um valor *k* definido pelo usuário, divide em dados de treino e teste, repetindo esse processo *k* vezes de forma a obter diferentes conjuntos de treino e teste em cada "rodada". A imagem abaixo esclarece mais esse processo:
#
# <img src="image/kfold.jpg" alt="K-Fold" width="600">
#
# [Fonte](https://gusrabbit.com/code/cross_validate/)
#
# Na imagem acima, o conjunto de dados é embaralhado *k* vezes, e em cada embaralhamento toma-se um subconjunto para validação. Por padrão, utiliza-se $k = 5$ para evitar que os dados de teste sejam muito pequenos e tenham pouco potencial de avaliação. Porém, para maiores conjuntos de dados, é possível tomar valores maiores de *k*. Para o *dataset* deste projeto, será utilizado $k = 10$ (como há mais de 68745 linhas, cada subconjunto de validação terá em torno de 6874 linhas).
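# Para deixar o mecanismo mais concreto, o esboço abaixo (meramente ilustrativo) reproduz manualmente a divisão em subconjuntos com a classe `KFold` do scikit-learn, procedimento semelhante ao que `cross_val_score` faz internamente (para classificadores, a divisão padrão é a versão estratificada):
# +
from sklearn.model_selection import KFold
kf = KFold(n_splits=10, shuffle=True, random_state=0)
acuracias_manuais = []
for indices_treino, indices_val in kf.split(dados_num):
    X_tr, X_val = dados_num.iloc[indices_treino], dados_num.iloc[indices_val]
    y_tr, y_val = dados_target.iloc[indices_treino], dados_target.iloc[indices_val]
    arvore_cv = DecisionTreeClassifier(criterion='entropy', random_state=0)
    acuracias_manuais.append(arvore_cv.fit(X_tr, y_tr).score(X_val, y_val))
print(f'Acurácia média (K-Fold manual): {np.mean(acuracias_manuais) * 100:.4f} %')
# -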
#
# #### Implementação
#
# Para implementar, é preciso apenas fazer a importação do módulo necessário (encontra-se nas primeiras células deste arquivo) e passar como argumentos o modelo, os dados das variáveis independentes e da *target*, o valor de *k* e o tipo de avaliação que se quer obter (aqui, será utilizado *'accuracy'* para obter a acurácia).
#
# Com os resultados, é possível ter um melhor entendimento do desempenho do modelo calculando a média e o desvio padrão dos resultados.
# +
# Calcula resultados
resultados_dt = cross_val_score(modelo_dt, dados_num, dados_target, cv=10, scoring='accuracy')
# Calcula média e desvio padrão (em porcentagem)
media_dt = np.mean(resultados_dt) * 100
desv_pad_dt = np.std(resultados_dt, ddof=1) * 100
# Exibe resultados
print('Cross Validation aplicado ao modelo de Árvore de Decisão: \n')
print(f'Média: {media_dt:.4f} %')
print(f'Desvio Padrão: {desv_pad_dt:.4f} %')
# -
# <div id="discussao-dt"></div>
#
# ### Discussão
# Tratando-se de um modelo voltado para classificar possíveis portadores de doenças cardiovasculares a partir de algumas de suas características, o ideal seria ampliar ainda mais sua acurácia, seja buscando novas features sobre os indivíduos, testando outras técnicas de machine learning, buscando mais aperfeiçoamentos para as técnicas aqui empregadas ou, de forma externa, buscando uma fonte de dados de melhor qualidade.
#
# Além disso, com a aplicação da técnica de cross-validation, percebe-se que a média se encontra muito próxima do valor obtido em uma única aplicação, com um desvio padrão menor que 1%.
#
# Uma das maneiras de tentar melhorar o modelo da árvore de decisão e diminuir as chances de overfitting é limitar o seu crescimento. Quanto maior a quantidade de variáveis envolvidas, maior fica o seu tamanho e também maiores as chances de o modelo se ajustar demais aos dados. Assim, é possível tentar generalizar mais o modelo limitando seu crescimento, utilizando em sua construção as variáveis que mais influenciam na target e excluindo as demais.
#
# Como foi visto, a árvore anterior tinha profundidade de 47. O quanto é possível melhorar sua performance reduzindo sua profundidade máxima? Veja uma implementação que gera dados de treino a partir da mesma base, treina o modelo e calcula sua acurácia para profundidades máximas de 2 até 20.
# +
def cria_arvore(X_dados, Y_dados, profundidade):
# Separando os dados de treino e teste
X_treino, X_teste, Y_treino, Y_teste = train_test_split(X_dados,
Y_dados,
test_size=0.2,
random_state=0)
# Cria árvore de decisão
    nova_arvore = DecisionTreeClassifier(criterion='entropy', random_state=0, max_depth=profundidade)
# Treina o modelo
novo_modelo = nova_arvore.fit(X_treino, Y_treino)
# Calcula acurácia do modelo
acc = nova_arvore.score(X_teste, Y_teste) * 100
print(f'Profundidade (max_depth) = {profundidade:2.0f} | Acurácia: {acc:.2f} %')
# Mostra resultados de 2 a 20
for n in range(2, 21):
cria_arvore(dados_num, dados_target, n)
# -
# Há 2 pontos interessantes a serem notados com o resultado acima:
#
# - Profundidades menores geraram melhores resultados nos testes.
# - A partir de uma certa profundidade (em torno de 9), o desempenho do modelo começou a cair.
#
# Como foi visto, o processo de limitar o tamanho da árvore de decisão pode ser uma medida plausível na tentativa de melhorar a performance. Contudo, há outras formas de se melhorar o modelo da Árvore de Decisão, mas mostrar cada uma aqui foge ao escopo do trabalho. Ao invés disso, pode-se utilizar outras técnicas de predição. Veja, a seguir, a aplicação de um outro modelo que envolve não apenas uma, mas várias árvores de decisão: a **Floresta Aleatória**.
#
# <div id="floresta-aleatoria"></div>
#
# -------------
# ## 2. Floresta Aleatória (Random Forest Classifier)
# ### O que é?
#
# **Random Forest (Floresta Aleatória)** é um tipo de algoritmo supervisionado de *machine learning*, baseado em *ensemble learning* (aprendizado em conjunto $-$ um tipo de aprendizado em que se juntam diferentes tipos de algoritmos, ou o mesmo algoritmo múltiplas vezes, para formar um modelo de predição mais poderoso).
# O algoritmo do random forest combina múltiplos algoritmos do mesmo tipo, sendo composto por várias **Decision Trees (Árvores de Decisão)**, as quais formam uma *floresta de árvores*, ou seja, a *Random Forest*.
#
# Ele utiliza o *'Bagging'* (que será detalhado à frente) e o *'Feature Randomness'* (aleatoriedade de atributos) para criar uma **Floresta Aleatória de Árvores não correlacionadas**.
#
# O princípio que move esse algoritmo pode ser resumido como a `sabedoria das massas`, ou seja: <u> "Um grande número de modelos não correlacionados (árvores de decisão) operando em conjunto superará a performance de qualquer modelo individualmente". </u>
#
# Esse método, além de ser empregado para a predição de doenças, também é utilizado por sistemas inteligentes para identificar objetos, como nos carros automáticos ou no Kinect (do Xbox).
#
# <img src='image/usos da random forest.png' height=600px width = 800px >
#
# [Fonte: Simplilearn](https://www.youtube.com/watch?v=eM4uJ6XGnSM)
#
# ### Como funciona?
#
# O Algoritmo da Random Forest pode ser resumido nestes seguintes passos:
#
# 1. Escolha N dados aleatoriamente, com repetição, do <i>dataset</i>
# 2. Construa uma *Árvore de Decisão* com base nesses N valores
# 3. Escolha o número *n* de Árvores que você deseja em um sua floresta e repita os passos 1 e 2 *n* vezes
# 4. Nos casos de:
# 1. uma **Regressão:**<br>
# O resultado final será a média do valor previsto em cada Árvore da Floresta.
#
# 2. uma **Classificação:**<br>
# Cada árvore individual votará em um resultado. O resultado final será aquele que tiver a maior quantidade de votos e, portanto, será a indicação do modelo.
#
# <img src='image/Random Forest - exemplo.jpeg'>
#
# No exemplo da imagem acima, pode-se observar uma floresta composta por 9 Árvores de Decisão independentes, aleatoriamente formadas e não correlacionadas. Do resultado, 6 previram o desfecho "1" e 3 previram o desfecho "0". Logo, o resultado final da Floresta será "1".
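# Os passos acima podem ser traduzidos em um esboço bem simplificado (didático, sem as otimizações do scikit-learn), que combina amostras bootstrap, um subconjunto aleatório de atributos por divisão e o voto majoritário das árvores. Assume-se que `X` e `y` sejam objetos pandas, como os dados de treino já usados neste notebook:
# +
def floresta_simplificada(X, y, X_novo, n_arvores=25, random_state=0):
    rng = np.random.RandomState(random_state)
    votos = []
    for _ in range(n_arvores):
        # Passo 1: amostra bootstrap (N linhas sorteadas com reposição)
        indices = rng.choice(len(X), size=len(X), replace=True)
        # Passo 2: árvore treinada na amostra, avaliando apenas sqrt(n_atributos) em cada divisão
        arvore = DecisionTreeClassifier(max_features='sqrt', random_state=rng.randint(10**6))
        arvore.fit(X.iloc[indices], y.iloc[indices])
        votos.append(arvore.predict(X_novo))
    # Passo 4B: para cada observação nova, vence a classe mais votada entre as árvores
    votos = np.array(votos)
    return np.apply_along_axis(lambda coluna: np.bincount(coluna).argmax(), 0, votos)
# Exemplo de uso (mantido como comentário para não alongar o tempo de execução):
# previsoes = floresta_simplificada(dados_treino, target_treino, dados_teste)
# -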
#
# Como todo algoritmo, este modelo também possui seus pontos fortes e fracos, que serão evidenciados a seguir:
# ### Vantagens do Random Forest:
# 1. O algoritmo não é enviesado, uma vez que há múltiplas árvores, em que cada uma é treinada com um subconjunto dos dados. Basicamente, o algoritmo se apoia no *"poder da multidão"* para reduzir vieses individuais.
# 2. O algoritmo é muito estável. Mesmo que uma nova linha de dado seja incrementada ao *dataset*, o algoritmo não é fortemente impactado, uma vez que o novo dado afetará uma árvore, mas dificilmente afetará todas elas e, consequentemente, à floresta.
# 3. O algoritmo funciona bem tanto para variáveis numéricas quanto categóricas.
# 4. O algoritmo também funciona quando há dados faltantes.
#
# ### Desvantagens do Random Forest:
# 1. A maior desvantagem do algoritmo se dá em sua complexidade. Ele pode demandar altos recursos computacionais, a depender do número de árvores de decisão desejados ou até mesmo do tamanho da base de dados a ser trabalhada.
# 2. É uma técnica que demanda muito mais tempo, se comparada a outros algoritmos.
#
# ### Construção das Árvores
#
# Para que o modelo seja eficiente, é necessário que as Árvores sejam não-correlacionadas ou que tenham pouca correlação entre si. Isso garante maior precisão nos resultados, pois o resultado final é proveniente de modelos distintos, evitando vieses e o *overfitting*. Assim, as árvores protegem umas às outras de seus erros individuais (contanto que o erro não se repita constantemente, na mesma direção). Enquanto uma árvore pode estar errada, as outras estarão corretas. Dessa forma, como um grupo, as árvores de decisão conseguem caminhar para a direção correta.
#
# Para garantir que as árvores se diversifiquem, são utilizados dois métodos:
#
# #### Bagging (Bootstrap Aggregation):
#
# As **Árvores de Decisão** são muito sensíveis aos dados em que são treinadas (pequenas mudanças podem resultar em árvores estruturalmente distintas). A *Random Forest* tira vantagem desse fator, simplesmente permitindo que as árvores sejam montadas de acordo com combinações aleatórias e repetidas do conjunto de dados do *dataset*, resultando, assim, em diferentes Árvores.
#
# Por exemplo:
#
# Se, num *dataset* de tamanho N temos os dados [1,2,3,4,5,6], cada Árvore de Decisão será alimentada com uma amostra de mesmo tamanho N, arranjada aleatoriamente com repetição. Assim, para construir uma árvore, podemos fornecer [1,2,2,3,5,5] para treinar a primeira; [2,2,4,5,6,6] para a segunda, e assim sucessivamente... Dessa forma, cada uma terá seus nós escolhidos de maneira a maximizar a distinção entre termos em cada sequência. Por fim, formar-se-á uma floresta com árvores de decisão distintas.
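# O sorteio com reposição descrito acima pode ser reproduzido em poucas linhas com o NumPy (exemplo meramente ilustrativo):
# +
rng_bootstrap = np.random.RandomState(42)
dados_originais = np.array([1, 2, 3, 4, 5, 6])
# Três amostras bootstrap: mesmo tamanho N do conjunto original, sorteadas com reposição
amostras_bootstrap = [rng_bootstrap.choice(dados_originais, size=len(dados_originais), replace=True) for _ in range(3)]
print(amostras_bootstrap)
# -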
#
#
# #### Feature Randomness:
#
# Numa Árvore de Decisão comum, no momento de dividir um nó, consideram-se todos os atributos possíveis e escolhe-se aquele que retorna a melhor distinção entre os Nós da Esquerda e os da Direita. Em contrapartida, no Random Forest, cada Árvore seleciona os atributos aleatoriamente de um subconjunto de atributos. Isso gera uma maior variabilidade entre as árvores do modelo, garantindo uma menor correlação entre elas.
#
# <img src='image/Feature Randomness.jpeg'>
#
# Na figura acima, pode-se notar a distinção entre o funcionamento do modelo clássico da Árvore de Decisão e o do Random Forest. No primeiro (em azul), a Árvore pode escolher, dentre todos os atributos, aquele que gera maior distinção entre os elementos do nó: o atributo 1. No segundo, a Árvore só pode escolher o atributo de maior distinção dentro de um subconjunto arranjado aleatoriamente. Para a Árvore 1 do Random Forest, ela pode escolher entre os Atributos 2 e 3. Como o 2 é aquele que gera maior distinção, ela ficará com este. Para a Árvore 2 do Random Forest, ela pode escolher entre os Atributos 1 e 3. Como o 1º é aquele que gera maior distinção (o mesmo utilizado pelo modelo de Árvore de Decisão convencional $-$ em azul), esse será o adotado para construir a árvore.
#
# Por fim, no algoritmo da Random Forest, as árvores não são treinadas somente com arranjos diferentes de dados, mas também utilizam diferentes atributos para as tomadas de decisão. Isso gera uma floresta de árvores não correlacionadas que protegem umas às outras de seus eventuais erros.
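# No scikit-learn, essa aleatoriedade de atributos é controlada pelo parâmetro `max_features` da `RandomForestClassifier`: a cada divisão, apenas um subconjunto aleatório das variáveis é avaliado. O esboço abaixo apenas ilustra duas configurações possíveis (hiperparâmetros ilustrativos, sem treinamento):
# +
# Considera todas as variáveis em cada divisão (árvores tendem a ficar mais correlacionadas)
rf_todos_atributos = RandomForestClassifier(n_estimators=100, max_features=None, random_state=0)
# Considera apenas sqrt(n_variaveis) em cada divisão (opção usual para classificação)
rf_sqrt_atributos = RandomForestClassifier(n_estimators=100, max_features='sqrt', random_state=0)
# -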
# <div id="implementacao-rd"></div>
#
# ### Construindo o modelo
# +
# Preparando as configurações para o Random Forest:
# especificando Variáveis Dependentes (Y) e Independentes (X)
nomes_variaveis = ['idade', 'genero', 'altura', 'peso', 'pa_sist', 'pa_diast',
'colesterol', 'glicose', 'fumante', 'alcool', 'a_fisica']
# X = dados[nomes_variaveis]
X = dados_num
Y = dados_target
# Dividindo a base de dados para Treinamento (80%) e Teste (20%):
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size=0.2)
# +
# Criando um Classificador:
# Mantém apenas os hiperparâmetros não-padrão; os demais argumentos originais usavam os valores default
# (e min_impurity_split foi removido em versões mais recentes do scikit-learn)
classificador = RandomForestClassifier(n_estimators=200, criterion='gini', n_jobs=2)
# Treinando o modelo com a base de dados
modelo_rf = classificador.fit(X_train, Y_train)
Y_previsto = classificador.predict(X_test)
# -
# Checando a precisão:
acuracia_rf = metrics.accuracy_score(Y_test, Y_previsto)*100
print(f'Precisão do Modelo (Random Forest): {acuracia_rf:.4f} %')
# +
# Encontrando o grau de importância para cada variável independente:
importancia_variaveis = pd.Series(classificador.feature_importances_, index=nomes_variaveis).sort_values(ascending=False)
importancia_variaveis
# -
# Visualizando graficamente
plt.figure(figsize=(8,6))
# sns.barplot(x=importancia_variaveis, y=importancia_variaveis.index, palette='Greens_r')
sns.barplot(x=importancia_variaveis, y=importancia_variaveis.index) # sem paletta de cores
plt.xlabel('Pontuação de Importância da Variável')
plt.ylabel('Variável')
plt.title("Grau de Importância das Variávies")
plt.show()
# Neste gráfico, pode-se confirmar a hipótese inicial em relação ao colesterol: dentre as variáveis qualitativas, o colesterol é a de maior influência na previsão de doenças cardiovasculares.
#
# Obs.: Apesar de alguns atributos parecerem pouco importantes, como *'alcool'* e *'fumante'*, ao removê-los do algoritmo, o resultado obtido tem um leve decréscimo na precisão, indicando que essas variáveis exercem certa influência no modelo.
#
# Obs.2: Como descrito neste Notebook, a Random Forest gera amostras aleatórias para criar as Árvores de Decisão. Logo, cada vez que o código é executado, uma floresta diferente é gerada e, portanto, os resultados de acurácia são diferentes, mas têm um valor muito próximo.
# <div id="matriz-rd"></div>
#
# ### Verificando o resultado com a Matriz de Confusão:
# +
# Figura para plotagem do gráfico
fig, ax = plt.subplots(figsize=(7, 6))
# Cria e plota matriz de confusão
plot_confusion_matrix(classificador, X_test, Y_test,
normalize='true', display_labels=['Ausente', 'Presente'],
cmap=plt.cm.Greens, ax=ax, values_format='.2%')
plt.xlabel("Valor Previsto")
plt.ylabel("Valor Verdadeiro")
plt.title("Matriz de confusão\nNormalizada por categoria")
plt.grid(False)
# -
# A matriz de confusão acima traz os resultados <u> normalizados por cada categoria</u>, isto é:
#
# - em relação aos dados cujo valor verdadeiro é *ausente*, 71.24% destes foram classificados como ausente (verdadeiro negativo), enquanto os outros 28.76% foram classificados pelo modelo como presente (falso positivo).
# - em relação aos dados cujo valor verdadeiro é *presente*, cerca de 69.23% foram classificados como *presente* (verdadeiro positivo), enquanto os outros 30.77% foram classificados como ausente (falso negativo).
#
# <div id="cross-rd"></div>
#
# ### Resultados com Validação Cruzada
# +
# Calcula resultados
resultados_rf = cross_val_score(modelo_rf, dados_num, dados_target, cv=10, scoring='accuracy')
# Calcula média e desvio padrão (em porcentagem)
media_rf = np.mean(resultados_rf) * 100
desv_pad_rf = np.std(resultados_rf, ddof=1) * 100
# Exibe resultados
print('-'*59)
print('Cross Validation aplicado ao modelo da Floresta Aleatória:')
print('-'*59, '\n')
print(f'Média: {media_rf:.4f} %')
print(f'Desvio Padrão: {desv_pad_rf:.4f} %')
# -
# <div id="discussao-rd"></div>
#
# ## Discussão:
#
# Percebe-se, portanto, que o modelo preditivo para Doenças Cardiovasculares utilizando o Algoritmo da Random Forest para Classificação tem uma acurácia razoável, oscilando em torno de 70%. Tal resultado pode ser aprimorado conforme se ajustem as especificações do modelo $-$ como o número de árvores, por exemplo $-$ ou, de forma externa, caso a coleta de dados siga maior rigor de qualidade, conforme descrito anteriormente.
#
# Apesar disso, como era de se esperar, a Random Forest obteve um resultado melhor que a Decision Tree, uma vez que ela é capaz de sanar alguns problemas da Árvore de Decisão, como o *overfitting* e o enviesamento, já que utiliza diversas Árvores ao mesmo tempo.
# <div id="regressao"></div>
#
# -------------
#
# ## 3. Regressão Logística (Logistic Regression)
#
# ### Intuição e conceitos
#
# A regressão logística é um tipo de regressão na qual se obtêm probabilidades de um determinado evento ocorrer, tornando-a o melhor tipo de regressão quando a variável *target* é categórica, isto é, quando assume apenas valores binários (0 ou 1, Verdadeiro/Falso e, no caso deste projeto, Ausente/Presente), e por isso será utilizada aqui.
#
# E como usar uma regressão em classificação? Para compreender melhor essa ideia, observe a imagem abaixo:
#
# <img src="./image/sigmoid.png" alt="sigmoid" width=450>
#
# Fonte: [medium](https://medium.com/@ODSC/logistic-regression-with-python-ede39f8573c7)
#
# No eixo Y, correspondente à variável *target*, encontram-se plotados os possíveis valores que ela pode assumir (0 ou 1) em função de X (variável independente). Uma **regressão linear** traça a reta que melhor se ajusta a uma nuvem de pontos com distribuição qualquer no plano (note que, no caso de Y ser binário, não se forma uma nuvem de pontos, mas duas faixas horizontais de pontos). Já a **regressão logística** começa buscando uma relação linear dos pontos e então aplica uma não linearidade com o formato de uma ***sigmoide***, cuja representação está no gráfico à direita da imagem anterior.
#
# **Função Sigmoide**
#
# $$f(y) = \frac{1}{1+e^{-y}}$$
#
# É a função acima que é responsável por "curvar" a reta ajustada aos pontos de dados. Ela retorna a probabilidade de classificação de um item (isso quer dizer que os valores variam de 0 a 1). Normalmente, a classificação é feita da seguinte forma:
#
# - se $f(y) < 0.5$, então o item é classificado como $0$.
# - caso contrário, é classificado como $1$.
#
# Como foi dito, essa regressão é feita a partir do caso linear. O valor de $y$ na função sigmoide provém da equação da reta obtida pelo ajuste linear dos dados, isto é:
#
# $$y = b_0 + b_1 x_1 + b_2 x_2 + {...} + b_n x_n$$
#
# em que $b_0$, $b_1$, $b_2$, ..., $b_n$ são coeficientes estimados a partir do conjunto de dados pela técnica da máxima verossimilhança, cujo objetivo é encontrar os parâmetros que deem a maior probabilidade possível de a amostra ter sido observada. Então, chega-se à equação que modela essa regressão (em que $p = f(y)$ é a probabilidade prevista):
#
# $$\ln \displaystyle \left(\frac{p}{1 - p}\right) = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_n x_n$$
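# Apenas para ilustrar numericamente as equações acima, o esboço a seguir implementa a sigmoide e a regra de decisão com limiar de 0.5, usando coeficientes e observações hipotéticos (sem relação com o modelo treinado adiante):
# +
def sigmoide(y):
    return 1.0 / (1.0 + np.exp(-y))
def classifica(X, b0, b):
    # y = b0 + b1*x1 + ... + bn*xn para cada linha de X
    y_linear = b0 + X @ b
    probabilidade = sigmoide(y_linear)
    return (probabilidade >= 0.5).astype(int), probabilidade
X_hipotetico = np.array([[1.0, 2.0], [-1.0, 0.5]])
classes_previstas, probs = classifica(X_hipotetico, b0=-0.3, b=np.array([0.8, -0.4]))
print(classes_previstas, probs.round(3))
# -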
#
# **Vantagens da Regressão Logística**
#
# - Os resultados do modelo já são fornecidos em probabilidade, isto é, pode-se aproveitar tanto o quesito de classificação quanto a probabilidade de ocorrência daquela classificação.
# - Em comparação à Árvore de Decisão, é um modelo mais robusto a *overfitting*.
# - Pode-se definir um limite de decisão em termos de probabilidade.
#
# <div id="implementacao-lr"></div>
#
# ### Implementação
#
# O modelo de regressão logística será implementado com o método [*LogisticRegression()*](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html), do módulo de modelos lineares do *scikit-learn*. As etapas de construção seguem o mesmo padrão dos modelos anteriores: separação da base de dados em treino e teste, construção do modelo e cálculo da acurácia com os dados de teste.
# +
# Separando os dados de treino e teste
dados_treino_r, dados_teste_r, target_treino_r, target_teste_r = train_test_split(dados_num,
dados_target,
test_size=0.2,
random_state=0)
# Cria e treina modelo
modelo_lr = LogisticRegression(random_state=0).fit(dados_treino_r, target_treino_r)
# Testa modelo
acuracia_lr = modelo_lr.score(dados_teste_r, target_teste_r)*100
print(f'Acurácia do modelo (Regressão Logística): {acuracia_lr:.4f} %')
# -
# <div id="matriz-lr"></div>
#
# ### Matriz de Confusão
#
# Da mesma forma que nos outros modelos, será apresentado o resultado em matriz de confusão, para verificar a performance do modelo em cada subgrupo e em relação ao total.
# +
# Figura para plotagem do gráfico
fig, ax = plt.subplots(figsize=(7, 6))
# Cria e plota matriz de confusão
plot_confusion_matrix(modelo_lr, dados_teste_r, target_teste_r,
normalize='true', display_labels=['Ausente', 'Presente'],
cmap=plt.cm.Reds, ax=ax, values_format='.2%')
plt.xlabel("Valor Previsto", fontsize=14)
plt.ylabel("Valor Verdadeiro")
plt.title("Matriz de confusão\nNormalizada por categoria")
plt.grid(False)
# Figura para plotagem do gráfico
fig, ax = plt.subplots(figsize=(7, 6))
# Cria e plota matriz de confusão
plot_confusion_matrix(modelo_lr, dados_teste_r, target_teste_r,
normalize='all', display_labels=['Ausente', 'Presente'],
cmap=plt.cm.Reds, ax=ax, values_format='.2%')
plt.xlabel("Valor Previsto")
plt.ylabel("Valor Verdadeiro")
plt.title("Matriz de confusão\nNormalizada pelo total")
plt.grid(False)
# -
# <div id="cross-lr"></div>
#
# ### Validação Cruzada
# +
# Calcula resultados
resultados_lr = cross_val_score(modelo_lr, dados_num, dados_target, cv=10, scoring='accuracy')
# Calcula média e desvio padrão (em porcentagem)
media_lr = np.mean(resultados_lr) * 100
desv_pad_lr = np.std(resultados_lr, ddof=1) * 100
# Exibe resultados
print('-'*59)
print('Cross Validation aplicado ao modelo de Regressão Logística:')
print('-'*59, '\n')
print(f'Média: {media_lr:.4f} %')
print(f'Desvio Padrão: {desv_pad_lr:.4f} %')
# -
# <div id="conclusao"></div>
#
# ___
# # Conclusão
#
# ## Avaliando *performance* dos modelos
# Exibe resultados obtidos por cada modelo
print('==='*9)
print('COMPARAÇÃO ENTRE OS MODELOS')
print('==='*9, '\n')
print(f'Decision Tree | Precisão: {acuracia_dt:.4f} % ')
print(f'Random Forest | Precisão: {acuracia_rf:.4f} % ')
print(f'Logistic Regression | Precisão: {acuracia_lr:.4f} % ')
# Com base nos resultados obtidos, conclui-se que a Regressão Logística obteve o melhor resultado em termos de Acurácia, destoando pouco da Random Forest (cerca de 1,4%). Como era de se esperar, a Random Forest performou melhor que a Decision Tree, haja vista a sua capacidade de atenuar possível *overfitting*.
#
# Porém, esses valores não são fixos, pois, a cada vez que se executa o código, a base de treino/teste é novamente dividida. Assim, é de se esperar uma pequena flutuação na análise da precisão.
# Exibe resultados obtidos por cada modelo na validação cruzada
print('==='*16)
print('COMPARAÇÃO ENTRE OS MODELOS NA VALIDAÇÃO CRUZADA')
print('==='*16, '\n')
print(f'Decision Tree | Média: {media_dt:.4f} % | Desvio Padrão: {desv_pad_dt:.4f} %')
print(f'Random Forest | Média: {media_rf:.4f} % | Desvio Padrão: {desv_pad_rf:.4f} % ')
print(f'Logistic Regression | Média: {media_lr:.4f} % | Desvio Padrão: {desv_pad_lr:.4f} % ')
# Por meio da *Cross Validation*, pode-se perceber que a média não destoa dos valores calculados anteriormente.
#
# De fato, os modelos apresentaram um baixo percentual como Desvio Padrão, sendo mais evidenciado o da Random Forest, o que se justifica pela forma como ela é construída: gerando diversas árvores diferentes a depender de como a base de dados é dividida.
# +
# Plota curvas ROC para cada modelo
plt.figure(figsize=(10, 8))
# Árvore de decisão
probabilidades_dt = modelo_dt.predict_proba(dados_teste)
predicoes_dt = probabilidades_dt[:,1]
fpr, tpr, threshold = metrics.roc_curve(target_teste, predicoes_dt)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, c='orange', label = 'Árvore de Decisão | AUC = %0.2f' % roc_auc)
# Floresta Aleatória
probabilidades_rd = modelo_rf.predict_proba(dados_teste)
predicoes_rd = probabilidades_rd[:,1]
fpr, tpr, threshold = metrics.roc_curve(target_teste, predicoes_rd)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, c='green', label = 'Floresta Aleatória | AUC = %0.2f' % roc_auc)
# Regressão Logística
probabilidades_lr = modelo_lr.predict_proba(dados_teste)
predicoes_lr = probabilidades_lr[:,1]
fpr, tpr, threshold = metrics.roc_curve(target_teste, predicoes_lr)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, c='red', label = 'Regressão Logística | AUC = %0.2f' % roc_auc)
# Configura gráfico
plt.legend(loc = 'lower right')
plt.title('Receiver Operating Characteristic - Curvas ROC')
plt.plot([0, 1], [0, 1],'b--')
plt.xlim([-0.01, 1.01])
plt.ylim([-0.01, 1.01])
plt.ylabel('Sensibilidade (Taxa de Verdadeiros Positivos)')
plt.xlabel('1 - Especificidade (Taxa de Falsos Positivos)')
plt.show()
# -
# Com base no gráfico anterior, pode-se perceber que a Random Forest é o método que obtém a maior área sob a curva (AUC), ou seja, o melhor compromisso entre sensibilidade e taxa de falsos positivos (1 - especificidade).
#
# - Sensibilidade é a proporção de casos positivos que foram identificados corretamente
# - Especificidade é a proporção de casos negativos identificados corretamente.
#
# Obs.: vide matriz de confusão normalizada.
# Por fim, pode-se considerar que os resultados obtidos foram satisfatórios para essa base de dados. A partir de informações um tanto quanto genéricas (hábitos de consumo: fumar / ingerir bebida alcoólica, idade, gênero, altura) e de pouca especificidade médica (com exceção da pressão arterial e do nível de colesterol - a qual foi tratada como variável categórica e não quantitativa), pôde-se obter uma precisão em torno de 70%.
#
# Porém, dada a seriedade do assunto $-$ que pode desencadear tomadas de decisão capazes de pôr vidas em risco, afetar a qualidade de vida de uma população ou até mesmo comprometer a saúde financeira de um hospital, plano de saúde ou seguradora $-$, essa precisão ainda é insatisfatória.
#
# A fim de obter melhores resultados, é recomendável que se tenha maiores cuidados durante a coleta dos dados, tomando as devidas cautelas com o registro nas unidades corretas, dentro de uma faixa de valores válidos e de maneira precisa. Além disso, pode-se experimentar a coleta de uma maior quantidade de variáveis quantitativas, haja vista o maior grau de importância recebida por elas (na Random Forest, peso, altura e pressão arterial sistólica foram as variáveis de maior relevância. Além disso, algumas variáveis como 'colesterol', que poderiam ser quantitativas, foram tratadas como categóricas.)
#
# <div id="referencias"></div>
#
# <h2> Referências </h2>
#
# - [Dataset: Cardiovascular Disease Dataset](https://www.kaggle.com/sulianova/cardiovascular-disease-dataset)
#
# - [Entendendo a Leitura de Pressão Arterial](https://www.heart.org/en/health-topics/high-blood-pressure/understanding-blood-pressure-readings)
#
# - [Doenças Cardiovasculares](https://www.paho.org/pt/topicos/doencas-cardiovasculares)
# - [Doenças Cardiovasculares - Drauzio Varella](https://drauziovarella.uol.com.br/doencas-e-sintomas/endocardite/#:~:text=Endocardite%20%C3%A9%20uma%20doen%C3%A7a%20que,cora%C3%A7%C3%A3o%20e%20as%20v%C3%A1lvulas%20card%C3%ADacas.)
# - [Árvores de Decisão (PUC-rio)](https://www.maxwell.vrac.puc-rio.br/7587/7587_4.PDF)
# - [Árvore de Decisão (prof. Luiz Alberto)](http://professorluizalberto.com.br/site/images/2020-1/Python%20%C3%81rvore%20de%20Decis%C3%A3o.pdf)
# - [Guia sobre Random Forest (Towards Data Science)](https://towardsdatascience.com/understanding-random-forest-58381e0602d2)
# - [Guia sobre Random Forest (Stack Abuse)](https://stackabuse.com/random-forest-algorithm-with-python-and-scikit-learn/)
# - [Regressão Logística](https://edisciplinas.usp.br/pluginfile.php/3769787/mod_resource/content/1/09_RegressaoLogistica.pdf)
# - [Análise Exploratória](http://leg.ufpr.br/~fernandomayer/aulas/ce001e-2016-2/02_Analise_Exploratoria_de_Dados.html)
# - [Doenças cardíacas 1](https://saude.abril.com.br/medicina/o-que-e-arritmia-cardiaca-causas-sintomas-e-tratamentos/)
# - [Doenças cardíacas 2](https://eurofarma.com.br/artigos/doencas-cardiacas-congenitas#:~:text=A%20cardiopatia%20cong%C3%AAnita%20%C3%A9%20qualquer,ser%20descoberto%20anos%20mais%20tarde.)
# ### [Voltar para o início](#inicio)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Implementing OVA logistic regression for the CIFAR-10 dataset
# In this assignment, you will implement a one-vs-all logistic regression classifier, and apply it to a version of the CIFAR-10 object recognition dataset.
# +
import random
import numpy as np
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
# %matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
# %load_ext autoreload
# %autoreload 2
# -
# ## Load the CIFAR-10 dataset
# Open up a terminal window and navigate to the **datasets** folder inside the **hw3** folder. Run the
# **get\_datasets.sh** script. On my Mac, I just type in **./get\_datasets.sh** at the shell prompt.
# A new folder called **cifar\_10\_batches\_py** will be created and it will contain $50000$ labeled
# images for training and $10000$ labeled images for testing. The function further partitions the $50000$ training
# images into a train set and a validation set for selection of hyperparameters. We have provided a function to
# read this data in **utils.py**. Each image is a $32 \times 32$ array of RGB triples. It is preprocessed by
# subtracting the mean image from all images. We flatten each image into a 1-dimensional array of size
# 3072 (i.e., $32\times 32 \times 3$). Then a 1 is appended to the front of that vector to handle
# the intercept term. So the training set is a numpy matrix of size $49000\times 3073$,
# the validation set is a matrix of size $1000\times 3073$ and the set-aside test set
# is of size $10000\times 3073$.
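# As a rough illustration of the preprocessing described above (the actual implementation lives in **utils.py** and may differ in its details, e.g. it also carves out the validation split), the steps could look like this sketch:
# +
import numpy as np
def preprocess_cifar_sketch(X_train_raw, X_test_raw):
    # Flatten each 32x32x3 image into a 3072-dimensional row vector
    X_train = X_train_raw.reshape(X_train_raw.shape[0], -1).astype(np.float64)
    X_test = X_test_raw.reshape(X_test_raw.shape[0], -1).astype(np.float64)
    # Subtract the mean image computed on the training set
    mean_image = X_train.mean(axis=0)
    X_train -= mean_image
    X_test -= mean_image
    # Prepend a column of ones so the first parameter acts as the intercept (3072 -> 3073)
    X_train = np.hstack([np.ones((X_train.shape[0], 1)), X_train])
    X_test = np.hstack([np.ones((X_test.shape[0], 1)), X_test])
    return X_train, X_test
# -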
# +
import utils
# Get the CIFAR-10 data broken up into train, validation and test sets
X_train, y_train, X_val, y_val, X_test, y_test = utils.get_CIFAR10_data()
# -
# ## Implementing a one_vs_all classifier for CIFAR-10
# In this part of the exercise, you will implement a one-vs-all classifier by training multiple regularized binary logistic regression classifiers, one for each of the ten classes in our dataset. You should now complete the code in **one\_vs\_all.py** to train one classifier for each class. In particular, your code should return all the classifier parameters in a matrix $\Theta \in \Re^{(d+1) \times K}$, where each column of $\Theta$ corresponds to the learned logistic regression parameters for a class. You can do this with a for-loop from $0$ to $K − 1$, training each classifier independently.
# When training the classifier for class $k \in \{0, . . . , K − 1\}$, you should build a new label for each example $x$ as follows: label $x$ as 1 if $x$ belongs to class $k$ and zero otherwise. You can use sklearn's logistic regression function to learn each classifier.
#
# This function will take about an hour to run!
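# As a rough sketch of what the per-class training loop inside **one\_vs\_all.py** could look like (an illustrative assumption, not the provided starter code, whose interface may differ), using sklearn's LogisticRegression:
# +
import numpy as np
from sklearn import linear_model
def train_ova_sketch(X, y, num_classes, reg):
    # theta holds one column of parameters per class; X already includes the intercept column
    theta = np.zeros((X.shape[1], num_classes))
    for k in range(num_classes):
        y_binary = (y == k).astype(int)  # 1 for examples of class k, 0 otherwise
        clf = linear_model.LogisticRegression(C=1.0 / reg, penalty='l2',
                                              fit_intercept=False, solver='lbfgs')
        clf.fit(X, y_binary)
        theta[:, k] = clf.coef_.ravel()
    return theta
# -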
# +
from one_vs_all import one_vs_allLogisticRegressor
ova_logreg = one_vs_allLogisticRegressor(np.arange(10))
# train
reg = 1e5
ova_logreg.train(X_train,y_train,reg)
# predict on test set
y_test_pred = ova_logreg.predict(X_test)
from sklearn.metrics import confusion_matrix
test_accuracy = np.mean(y_test == y_test_pred)
print('one_vs_all on raw pixels final test set accuracy: %f' % (test_accuracy, ))
print(confusion_matrix(y_test, y_test_pred))
# -
# ## Visualizing the learned one-vs-all classifier
# +
# Visualize the learned weights for each class
theta = ova_logreg.theta[1:,:].T # strip out the bias term
theta = theta.reshape(10, 32, 32, 3)
theta_min, theta_max = np.min(theta), np.max(theta)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
thetaimg = 255.0 * (theta[i].squeeze() - theta_min) / (theta_max - theta_min)
plt.imshow(thetaimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
plt.show()
# -
# ## Comparing your functions with sklearn's
# +
from sklearn.multiclass import OneVsRestClassifier
from sklearn import linear_model
# train on train set with reg
sklearn_ova = OneVsRestClassifier(linear_model.LogisticRegression(C=1.0/reg,penalty='l2',
fit_intercept=False,solver='lbfgs'))
sklearn_ova.fit(X_train, y_train)
# predict on test set
y_test_pred_sk = sklearn_ova.predict(X_test)
sk_test_accuracy = np.mean(y_test == y_test_pred_sk)
print('one_vs_all on raw pixels final test set accuracy (sklearn): %f' % (sk_test_accuracy, ))
print(confusion_matrix(y_test, y_test_pred_sk))
# -
# ## Visualizing the sklearn OVA classifier
# +
# Visualize the learned weights for each class
theta = sklearn_ova.coef_[:,1:].T # strip out the bias term
theta = theta.reshape(10, 32, 32, 3)
theta_min, theta_max = np.min(theta), np.max(theta)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
thetaimg = 255.0 * (theta[i].squeeze() - theta_min) / (theta_max - theta_min)
plt.imshow(thetaimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
plt.show()
# last_pymnt_d Last month payment was received
# We will be dropping the records with null values in these columns.
loan = loan[~(pd.isnull(loan.title) | pd.isnull(loan.revol_util) | pd.isnull(loan.last_pymnt_d))]
loan.shape
#Checking the rest of the columns on loan data frame for null values
loan_null = loan_null_vals(loan)
len(loan_null)
# Data cleaning has been done; we further drill down into the data and understand the significance of each of the leftover columns in our analysis. Here we are checking the columns which do not add value to our analysis. Such columns ideally have either 1 or n unique values. We will identify such columns and determine the significance of each column.
((loan.nunique() == 1) | (loan.nunique() == loan.shape[0])).sort_values(ascending=False)
# We can see that 9 columns match our filter conditions.<br>
#
# id
# acc_now_delinq
# application_type
# policy_code
# member_id
# delinq_amnt
# url
# pymnt_plan
# initial_list_status
#
# From the Data Dictionary , we can understand the significane of each column.
#
# <b>id</b>:<br>
# A unique LC assigned ID for the loan listing. Ideally a random number which doesn't contribute to the risk analysis of a loan applicant.
#
# <b>acc_now_delinq</b>:<br>
# The number of accounts on which the borrower is now delinquent. All the rows have the same value - 0.
#
# <b>application_type</b>:<br>
# Indicates whether the loan is an individual application or a joint application with two co-borrowers.
# All the rows have same value - 'Individual'
#
# <b>policy_code</b>:<br>
# 1. publicly available policy_code=1
# 2. new products not publicly available policy_code=2
#
# All rows have same value - 1.
#
#
# <b>member_id</b>:<br>
# A unique LC assigned Id for the borrower member.
# Ideally a random number which doesn't contribute in the risk analysis of a loan
#
# <b>url</b>:<br>
# URL for the LC page with listing data. Doesn't contribute to the risk assessment.
#
#
# <b>pymnt_plan</b>:<br>
# Indicates if a payment plan has been put in place for the loan. All rows have the same value - 'n'.
#
#
# <b>initial_list_status</b>:<br>
# The initial listing status of the loan. Possible values are – W, F.
# All rows have same value - f.
#
# So we will dropping all these columns.
loan = loan.drop([
"id",
"acc_now_delinq",
"application_type",
"policy_code",
"member_id",
"delinq_amnt",
"url",
"pymnt_plan",
"initial_list_status"
],axis=1)
# Function to print basic details of specific column
# We will use this column in individual column level analysis
def find_col_details(col):
print("Variable is",col)
print("---------------")
print("Value Counts: ")
print(loan[col].value_counts())
print("---------------")
print(loan[col].describe())
print("---------------")
#Inspecting Individual Columns and checking if we can drop them
find_col_details("zip_code")
# In the zip_code column, only the first 3 digits of the zip code are visible. Ideally we would use zip_code to determine the location, which in turn could be used in our risk assessment to determine whether location plays any role in a loan being defaulted. But
# we cannot use this zip_code as it has only 3 digits, so we cannot determine the location accurately from this column. To determine the location we have another column, <b>addr_state</b>, which I think would be sufficient for our analysis, so we add zip_code to the list of columns which must be dropped.
columns_tobe_dropped = set()
columns_tobe_dropped.add("zip_code")
# After performing the analysis on columns which deal with Amount in the loan data frame and using the definitions in Data Dictionary , we have the following analysis.<br>
#
# <b>funded_amnt</b>:<br>
# The total amount committed to that loan at that point in time.
#
# <b>loan_amnt</b>:<br>
# The listed amount of the loan applied for by the borrower. If at some point in time, the credit department reduces the loan amount, then it will be reflected in this value.
#
# <b>funded_amnt_inv</b>:<br>
# The total amount committed by investors for that loan at that point in time.
#
# We have both funded_amnt_inv and loan_amnt, which give us the picture of how much the borrower has applied for and how much has been committed by the investors. So, funded_amnt can be dropped.
columns_tobe_dropped.update(["funded_amnt"])
find_col_details("title")
find_col_details("purpose")
# <b>title</b>:<br>
# The loan title provided by the borrower
#
# <b>purpose</b>:<br>
# A category provided by the borrower for the loan request.
#
# After studying the columns and their respective data above, it's determined that purpose is the main category and title is a sub-category which has the detailed reason for the loan. We do not require two columns which convey the same meaning in our analysis, as it doesn't add any value. So, we will be dropping the title column.
columns_tobe_dropped.add("title")
#Analyzing the Recovery fields
find_col_details("recoveries")
find_col_details("collection_recovery_fee")
# <b>recoveries</b>:<br>
# post charge off gross recovery
#
# Most of the values are 0. Recovery comes into the picture post charge off. So, this variable doesn't have any effect during the loan issuance cycle, where risk assessment comes into the picture. So, we can drop this column.
#
# <b>collection_recovery_fee</b>:<br>
# post charge off collection fee
# Most of the values are 0. Similar to the recoveries variable, this variable comes into the picture post charge off. So, we can drop it.
columns_tobe_dropped.update(["recoveries","collection_recovery_fee"])
find_col_details("total_rec_late_fee")
# <b>total_rec_late_fee</b>:<br>
# Late fees received to date
#
# Ideally, late fees would not be a driving factor for loan default. As per industry standards, the late fee would be minimal when compared with the installment amount. Also, late fees can be a factor which helps determine the probability of default in the loan payment cycle, but their impact would be minimal in the loan issuance cycle. So, we will be dropping this column.
columns_tobe_dropped.add("total_rec_late_fee")
#Dropping the Columns
loan = loan.drop(list(columns_tobe_dropped),axis=1)
#Analyzing date Columns and Converting them to date data type for further use
print("Data types before conversion\n",loan[['issue_d','earliest_cr_line','last_pymnt_d','last_credit_pull_d']].dtypes)
loan[['issue_d','earliest_cr_line','last_pymnt_d','last_credit_pull_d']]=loan[[
'issue_d',
'earliest_cr_line',
'last_pymnt_d',
'last_credit_pull_d'
]].apply(lambda x:pd.to_datetime(x, format='%b-%y'))
print("Data types after conversion\n",loan[['issue_d','earliest_cr_line','last_pymnt_d','last_credit_pull_d']].dtypes)
#Deriving metrics from issue date column for year and month level analysis
loan['issue_month'] = loan['issue_d'].dt.month
loan['issue_year'] = loan['issue_d'].dt.year
#Analyzing int and float columns
loan.dtypes
#Analyzing int_rate and revol_util variables as they look to be percentage columns from the data.
find_col_details("int_rate")
find_col_details("revol_util")
# Sanitizing these columns for our further analysis
loan['int_rate'] = loan['int_rate'].str.strip('%').astype('float')
loan['revol_util'] = loan['revol_util'].str.strip('%').astype('float')
#Analyzing term variable
find_col_details("term")
# Converting the variable to appropriate date type for our analysis.
loan.term = loan.term.apply(lambda x: x.split()[0]).astype("int64")
loan.term.value_counts()
loan.loan_status.value_counts()
# We are mainly interested in Fully Paid and Charged Off data. Loans with loan_status Current do not contribute to our analysis. So, we filter them out.
loan = loan[~(loan.loan_status == "Current")]
#Deriving numeric values from loan_status
loan['default'] = loan.loan_status.apply(lambda x:1 if x == 'Charged Off' else 0)
loan.default.value_counts()
loan.emp_length = loan.emp_length.str.replace("+","")
loan.emp_length = loan.emp_length.str.replace("<","")
loan["emp_exp"] = loan.emp_length.apply(lambda x: x.split()[0]).astype("int64")
loan.emp_exp.value_counts()
#Copying to a new data frame for our analysis
df_master_loan = loan
df_master_loan.shape
# ### Data Analysis:
# As the assignment requires a lot of plots, we define some common custom plot functions which can be reused in the analysis of different variables.
def mybar(col,title="",xlabel="",ylabel=""):
plt.figure(figsize=(15, 5))
ax = sns.barplot(data = df_master_loan,x=col,y='default')
plt.title(title,fontsize=28).set_position([.5, 1.05])
ax.set_ylabel(ylabel,fontsize=20)
ax.set_xlabel(xlabel,fontsize=20)
plt.grid(color='gray', linestyle='dashed')
ax.yaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '{}'.format(round(x*100,2)) + '%'))
return ax
def mypie(data,title="",sizex=20,sizey=10,titlefontsize=22):
plt.figure(figsize=[10,5])
ax2 = data.default.value_counts().plot.pie(
autopct='%1.0f%%',
pctdistance=0.5,
labeldistance=0.7,
figsize=(sizex, sizey),
fontsize=22,
shadow=True
)
plt.title(title,fontsize=titlefontsize)
labels = [r'0-Not Default', r'1-Default']
ax2.legend(labels)
return ax2
def mydist(dataframe, col,title="",xlabel="",ylabel=""):
plt.figure(figsize=(15,5))
ax = sns.distplot(dataframe[col])
ax.set_ylabel(ylabel,fontsize=20)
ax.set_xlabel(xlabel,fontsize=20)
plt.grid(color='gray', linestyle='dashed')
plt.title(title,fontsize=28).set_position([.5, 1.05])
ax.yaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '{}'.format(round(x*100,2)) + '%'))
return ax
def myboxplot(col,title,xlabel,ylabel):
plt.figure(figsize=(15, 5))
ax = sns.boxplot(x=col, y=df_master_loan['loan_status'], data=df_master_loan)
plt.title(title,fontsize=28).set_position([.5, 1.05])
ax.set_ylabel(ylabel,fontsize=20)
ax.set_xlabel(xlabel,fontsize=20)
plt.grid(color='gray', linestyle='dashed')
return ax
def mycountplot(col,title="",xlabel="",ylabel=""):
plt.figure(figsize=(15,5))
ax=sns.countplot(df_master_loan[col])
ax.set_ylabel(ylabel,fontsize=20)
ax.set_xlabel(xlabel,fontsize=20)
plt.grid(color='gray', linestyle='dashed')
plt.title(title,fontsize=28).set_position([.5, 1.05])
return ax
def mybarstacked(data,col,hue,title="",xlabel="",ylabel=""):
plt.figure(figsize=(15,5))
ax = sns.barplot(x=col, y='default', hue=hue, data=data)
plt.title(title,fontsize=28).set_position([.5, 1.05])
ax.set_ylabel(ylabel,fontsize=20)
ax.set_xlabel(xlabel,fontsize=20)
plt.grid(color='gray', linestyle='dashed')
ax.yaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '{}'.format(round(x*100,2)) + '%'))
plt.show()
# default rate
'{:.2%}'.format(df_master_loan.default.sum()/df_master_loan.shape[0])
# Analyzing variable - <b>annual_inc</b>
df_master_loan.annual_inc.describe()
#Plotting Annucal Income with default rate
plt.figure(figsize=(15, 5))
ax = sns.boxplot(x=df_master_loan['annual_inc'], data=df_master_loan)
ax.grid(True)
ax.xaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '${}'.format(x/1000000) + 'M'))
plt.show()
# From the describe output and the box plot
#
# 1. Median is 60,000.00.
# 2. Most of the people have income less than $1 million.
#
# Clearly, there are some outliers. We are going to get rid of them and keep only the values within -3 sigma to +3 sigma of the mean for our analysis. Then we create derived metrics which help in a clear grading of the income.
df_master_loan = df_master_loan[
np.abs(df_master_loan.annual_inc-df_master_loan.annual_inc.mean()) <= (3*df_master_loan.annual_inc.std())]
def income_grade(n):
if n <= 40000:
return 'Low'
elif n > 40000 and n <=100000:
return 'Medium'
elif n > 100000 and n <=150000:
return 'High'
else:
return 'Very High'
df_master_loan['income_grade'] = df_master_loan.annual_inc.apply(lambda x: income_grade(x))
mybar("income_grade",
"Default Rate vs Income grade",
"Income grade",
"Default Rate"
)
plt.show()
# <b>Observation</b>:<br>
# It is clearly observed that as the income grade increases, the default rate decreases. The company must be extra cautious while granting loans to low-income individuals, as there is a good probability of default. Let's find the probability of default among low-income individuals.
mypie(df_master_loan[df_master_loan['income_grade'] == "Low"],"Loan default % in Low Annual Income Segment\n")
plt.show()
# In the Low income range there is a <b>17%</b> chance that an applicant will default on their loan. So <b>Annual Income</b> is a strong indicator of default.
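# As a numeric cross-check of the chart above, the same default rates can be computed directly with a groupby; a small sketch that only reuses columns already created in this notebook:
income_grade_default_rate = (
    df_master_loan.groupby('income_grade')['default']
    .mean()
    .sort_values(ascending=False)
)
print(income_grade_default_rate.apply('{:.2%}'.format))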
# Analyzing Variable - <b>emp_length</b>
# We have created a derived metric for emp_length with the column name emp_exp
df_master_loan.emp_exp.describe()
mybar("emp_exp","Employee Experience vs Default Rate","Experience in Years","Default Rate")
plt.show()
# We are unable to derive clear insights from the above plot, so we will go with binning.
def emp_exp(n):
if n <= 2:
return '0-2' #0-25%
elif n > 2 and n <=4: #25-50%
return '2-4'
elif n > 4 and n <=9:#50-75%
return '4-9'
else:
return '9+'#>75%
df_master_loan['emp_length_bin'] = df_master_loan['emp_exp'].apply(lambda x: emp_exp(x))
df_master_loan['emp_length_bin'].value_counts()
mybar("emp_length_bin","Employee Experience vs Default Rate","Experience in Years","Default Rate")
plt.show()
# <b>Observation</b>:<br>
# Applicants with 9+ years experience are more likely to default than others.
mypie(df_master_loan[df_master_loan['emp_length_bin'] == '9+'],"Loan default % in 9+ Employee Length Segment\n")
plt.show()
# It's observed that 15% of all applicants in the 9+ years employee length category are likely to default on a loan.
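# As an aside, quartile-style bins like the ones above can also be produced automatically with pd.qcut instead of a hand-written function; a minimal sketch, not used in the rest of the analysis, assuming pandas is available as pd as elsewhere in this notebook:
emp_exp_quartile_bins = pd.qcut(df_master_loan['emp_exp'], q=4, duplicates='drop')
print(emp_exp_quartile_bins.value_counts().sort_index())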
# Analyzing Variable - <b>int_rate</b>
df_master_loan.int_rate.describe()
#Distribution of interest rate
mydist(df_master_loan,"int_rate","Interest Rate vs Default rate","Interest Rate","Default rate")
plt.show()
# +
# distributing int_rate into relevant bins
def int_rate(n):
if n <= 9:#25%
return 'Low'
elif n > 9 and n <=15:#25 -75%
return 'Medium'
else:
return 'High'#>75%
df_master_loan['int_rate_bin'] = df_master_loan['int_rate'].apply(lambda x: int_rate(x))
# -
mybar("int_rate_bin","Interest Rate vs Default rate","Interest Rate Range","Default rate")
plt.show()
# <b>Observation</b>:<br>
# It's observed that as the interest rate increases, the default rate increases.<br>
# Applicants with high interest rates (i.e. >15% in our case) have high rates of default.
mypie(df_master_loan[df_master_loan['int_rate_bin'] == 'High'],"Loan default % in High Interest Rate Segment\n")
plt.show()
# 25% of applicants in the high interest range are likely to default on the loan. So, <b>Interest Rate</b> is a strong indicator of default.
# Analyzing Variable - <b>term</b>
df_master_loan.term.describe()
mybar("term","Loan Term vs Default rate","Loan Term in Months","Default rate")
plt.show()
# <b>Observation</b>:<br>
# Applicants with the longer loan term of 60 months have a higher default rate.
mypie(df_master_loan[df_master_loan['term'] == 60],"Loan default % in 60 months term Segment\n")
plt.show()
# Among all the applicants with a 60-month term, 25% are likely to default on the loan. So, <b>Loan Term</b> is a strong indicator of default.
# Analyzing variable - <b>Sub-Grade</b>
mycountplot("sub_grade","Sub Grade vs Number of Loans","Sub Grade","Number of Loans")
plt.show()
ax = mybar("sub_grade","Sub Grade vs Default rate","Sub Grade","Default rate")
plt.show()
plt.figure(figsize=(15, 5))
ax = df_master_loan.groupby("sub_grade").default.mean().nlargest().plot.bar()
plt.title("Top 5 Sub-Grades interms of High Default Rate",fontsize=28).set_position([.5, 1.05])
ax.set_ylabel("Average Loan Default Rate",fontsize=20)
ax.set_xlabel("Sub Grade",fontsize=20)
plt.grid(color='gray', linestyle='dashed')
ax.yaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '{}'.format(round(x*100,2)) + '%'))
plt.show()
# Assumption: Grading starts from A1 and it's the best sub grade.<br>
# <b>Observation</b>:<br>
# As the grading goes down there is evidence of an increase in default rates, especially in the lowest grades.
# Lending Club must be extra cautious while lending money to applicants from grades F5, G3, G5, G2 and F4 as they have a high chance of default.<br>
# Default Rate in Grade F5 = 47%<br>
# Default Rate in Grade G3 = 45%<br>
# Default Rate in Grade G5 = 38%<br>
# Default Rate in Grade G2 = 36%<br>
# Default Rate in Grade F4 = 36%<br>
#
# So, <b>Sub Grade</b> is a Strong Indicator of default.
# Analyzing Varaible - <b>home_ownership</b>
mycountplot("home_ownership","Home Ownership Type vs No of Applicants","Home Ownership Type","No of Applicants")
plt.show()
# We observe that Rent and Mortgage types are very high.
mybar("home_ownership","Home Ownership Type vs Default Rate","Home Ownership Type","Default Rate")
plt.show()
# <b>Observation</b>:<br>
# Applicants with Home Ownership Type "Other" have 19% chance of defaulting on a loan.
#
# Note:
# As found in the previous step, home owners who fall into the category "Other" are very few.
#
# Applicants with Home Ownership Type "Rent" have 15% chance of defaulting on a loan.<br>
# Applicants with Home Ownership Type "Own" have 14% chance of defaulting on a loan.<br>
# Applicants with Home Ownership Type "Mortgage" have 13% chance of defaulting on a loan.<br>
# Analyzing Varaible - <b>purpose</b>
mycountplot("purpose","Loan Purpose vs No of Loans","Loan Purpose","No of Loans")
plt.xticks(rotation=45)
plt.show()
# Top 5 loan purposes are :<br>
# debt_consolidation <br>
# credit_card <br>
# other <br>
# home_improvement <br>
# major_purchase <br>
mybar("purpose","Loan Purpose vs Default Rate","Loan Purpose","Default Rate")
plt.xticks(rotation=45)
plt.show()
# <b>Observation</b>:<br>
# Loans with the loan purpose small_business have a high chance of default.
#
# Note: the number of small_business loans is low when compared with the other loan types.
#
# Default Rates of Top 5 Loan Types are <br>
# debt_consolidation - 15%<br>
# credit_card - 10%<br>
# other - 16%<br>
# home_improvement - 11%<br>
# major_purchase - 10%<br>
# Analyzing variable - <b>dti</b>
df_master_loan.dti.describe()
mydist(df_master_loan,"dti","Debt to Income Ratio vs Default Rate","DTI","Default Rate")
plt.show()
# We see that dti ranges from 0 to 30, with the average dti at 13.45
myboxplot(
"dti",
"DTI vs Loan Status",
"DTI",
"Loan Status"
)
plt.show()
# We can clearly see that the fully paid loans have lower dti than charged off loans
# +
# debt to income ratio
def dti_range(n):
if n <= 10:
return 'Low'
elif n > 10 and n <=20:
return 'Medium'
else:
return 'High'
df_master_loan['dti_range'] = df_master_loan['dti'].apply(lambda x: dti_range(x))
# -
mybar("dti_range",
"DTI vs Default Rate",
"DTI Range",
"Default Rate"
)
plt.show()
# __Observation:__
# <br> We can clearly see that the fully paid loans have lower dti than charged off loans.
# <br>We can clearly see that as the DTI increases the default rate increases.
# <br> About __16%__ of the applicants with high DTI default.
# So, <b>DTI</b> is a strong indicator of default.
# Analysing - __Verification Status__
#Verification_status
df_master_loan.verification_status.describe()
df_master_loan.verification_status.value_counts()
# +
# Converting verification_status into numerical for analysis
def verification_status(status):
if status == 'not verified':
return 1
elif status == 'verified':
return 2
else:
return 3
df_master_loan['verification_status_num'] = df_master_loan['verification_status'].apply(lambda x: verification_status(x.lower()))
# -
mycountplot(
"verification_status",
"Verification Status vs Number of Loans",
"Verification Status",
"Number of Loans"
)
plt.show()
# We can see that the number of not-verified loans is significantly higher than the verified and source-verified loans
mybar("verification_status",
"Verification Status vs Default Rate",
"Verification Status",
"Default Rate"
)
plt.show()
# __Observation:__
# <br>We can see that although the not-verified loans are more numerous, loans with a verified status are still being defaulted on. So even the verified cases are prone to default.
# Lending Club needs to review the verification process and add more checks to it.
# <br> About __17%__ of the loans with verification status "Verified" end up in default.
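# The same picture can be read off a normalized crosstab; a quick sketch that only reuses columns already present in this notebook (assuming pandas as pd, as elsewhere):
verification_default_share = pd.crosstab(
    df_master_loan['verification_status'],
    df_master_loan['default'],
    normalize='index'
)
print(verification_default_share.round(3))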
# Analysing - __Grade__
# grade
df_master_loan.grade.describe()
df_master_loan.grade.value_counts()
# We can see that majority of the loans are provided to A, B and C grade applicants
df_master_loan.sort_values(by=["grade"], inplace=True)
mycountplot(
"grade",
"Grade vs Number of Loans Issued",
"Grade",
"Number of Loans Issued"
)
plt.show()
mybar("grade",
"Grade vs Default Rate",
"Grade",
"Default Rate"
)
plt.show()
mypie(df_master_loan[df_master_loan['grade'] == 'G'],"Loan default % in G grade\n")
plt.show()
# __Observations:__
# <br>We can clearly see that as we go from grade A to G, the default rate increases dramatically.
# <br>From __grade E__ onwards, __more than 25% of the loans__ have defaulted
# <br>For __grade G__, the default rate is as high as __33%__<br>
# So, <b>Grade</b> is a strong indicator of default.
# Analysing - __funded_amnt_inv__
#funded_amnt_inv
df_master_loan.funded_amnt_inv.describe()
#sns.distplot(df_master_loan['funded_amnt_inv'])
ax = mydist(df_master_loan,"funded_amnt_inv",
"Funded Amount by Investors vs Average Default Rate",
"Funded Amount by Investors",
"Default Rate")
ax.xaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '${}'.format(x/1000) + 'K'))
plt.show()
myboxplot(
"funded_amnt_inv",
"Funded Amount by Investors vs Loan Status",
"Funded Amount by Investors",
"Loan Status"
)
plt.show()
# We can see that the range of funded amount by investors is anywhere from 0 to 35000 and the average being at 10375.
# <br>Also, we can analyse the funded amount by investors better by understanding where the maximum funding is happening and what the charged-off count is.<br>
# Note: We are not removing the outliers as they are significant in number. Instead, we categorize them with the binning logic below for analysis.
# +
# distributing funded_amount_inv into relevant bins
def funded_amnt_inv(amt):
if amt < 5000:
return 'Low'
elif amt >=5000 and amt < 15000:
return 'Medium'
elif amt >= 15000 and amt < 25000:
return 'High'
else:
return 'Very High'
df_master_loan['funded_amnt_inv_range'] = df_master_loan['funded_amnt_inv'].apply(lambda x: funded_amnt_inv(x))
# -
df_master_loan.funded_amnt_inv_range.value_counts()
mybar("funded_amnt_inv_range",
"Funded Amount by Investors vs Default Rate",
"Funded Amount by Investors",
"Default Rate"
)
plt.show()
mypie(df_master_loan[df_master_loan['funded_amnt_inv_range'] == 'Very High'],
"Loan default % in Very High Funded Amount by investors Segment\n")
plt.show()
# __Observations:__
# <br> We can also see that the majority of the funded amount by investors is from 5000 to 15000, and there are a few higher-value loans as well.
# <br>We can see that as the funding amount by investors increases, the charged-off loans also increase.
# <br> For funding amount in very high range, the default rate is __19%__
# <br>So, investors need to be very careful with lending high and very high amounts.<br>
# So, <b>Funded Amount Inv</b> is a strong indicator of default.
# Analyzing - __installment__
#installment
df_master_loan.installment.describe()
mydist(df_master_loan,"installment","Installment vs Default Rate","Installment","Default Rate")
plt.show()
# We can see that the installment ranges from \\$16 to \\$1305 while the average installment amount is \\$322
myboxplot(
"installment",
"Installment vs Loan Status",
"Installment",
"Loan Status"
)
plt.show()
# +
# distributing installment into relevant bins
def installment(amt):
if amt <= 200:
return 'Low'
elif amt > 200 and amt<=400:
return 'Medium'
elif amt > 400 and amt <=600:
return 'High'
else:
return 'Very High'
df_master_loan['installment_range'] = df_master_loan['installment'].apply(lambda x: installment(x))
# -
mybar("installment_range",
"Installment vs Default Rate",
"Installment",
"Default Rate"
)
plt.show()
# __Observations:__
# <br>We can clearly see that as the installment amount increases, the default rate also increases.
# <br> For the high and very high levels of installment, the default rate is __>15%__
# <br>Hence, investors need to make sure they lend at the right installment amount.
# So, __Installment__ is a strong Indicator of default.
# +
#issue_d
# from issue_d, we have already extracted the issue month and the year
# -
# issue_month
df_master_loan.groupby('issue_month').issue_month.count()
# number of loans issues per month
mycountplot(
"issue_month",
"Issue Month vs Number of Loans Issued",
"Issue Month",
"Number of Loans Issued"
)
plt.show()
myboxplot(
"issue_month",
"Issue Month vs Loan Status",
"Issue Month",
"Loan Status"
)
plt.show()
mybar("issue_month",
"Issue Month vs Default Rate",
"Issue Month",
"Default Rate"
)
plt.show()
# __Observations:__
# <br>We can see that the number of loans issued tends to increase towards the end of the year, in months 9 to 12.
# <br> Also, the default rate is high in the 9-12 month period. For these months, the default rates have been __>14%__.
# This is most likely due to the holiday season in these months.
# <b>issue_month</b> is a strong indicator of default, and investors must be cautious when lending during the 9-12 month period.
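# A quick numeric check of the month effect; a small sketch comparing each issue month against the overall default rate, using only columns created earlier in this notebook:
overall_default_rate = df_master_loan['default'].mean()
monthly_default_rate = df_master_loan.groupby('issue_month')['default'].mean()
print(monthly_default_rate[monthly_default_rate > overall_default_rate].apply('{:.2%}'.format))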
df_master_loan.groupby('issue_year').issue_year.count()
mycountplot(
"issue_year",
"Issue Year vs Number of Loans Issued",
"Issue Year",
"Number of Loans Issued"
)
plt.show()
# We have seen significant rise in the number of loans issued from 2007 to 2011
myboxplot(
"issue_year",
"Issue Year vs Loan Status",
"Issue Year",
"Loan Status"
)
plt.show()
mybar("issue_year",
"Issue Year vs Default Rate",
"Issue Year",
"Default Rate"
)
plt.show()
# __Observations:__
# <br> We can ignore the 2007 numbers as there are very few loans issued.
# <br>We can see that the number of charged-off cases was high in 2008 but fell in 2009 and 2010. It started to rise again in 2011.
# <br> We don't see a significant pattern here for our analysis of loan default.
# Let's see how different variables in combination behave with respect to the default rate.
# Filtering the dataframe to the top 5 loan purposes in terms of loan count, so that we can perform a segmented analysis.
top5_loan_purposes = df_master_loan.purpose.value_counts().nlargest().index.to_list()
top5_loan_purposes
df_master_loan_top5purp = df_master_loan[df_master_loan['purpose'].isin(top5_loan_purposes)]
df_master_loan_top5purp['purpose'].value_counts()
mybarstacked(df_master_loan_top5purp,"term","purpose","Term and Top 5 Loan Purposes vs Default Rate",
"Term",
"Default Rate")
# <b>Observation</b>:<br>
# For the 60-month loan term, the "Other" purpose category has a default rate of 29%.
# Filtering the data to the top 5 sub grades in terms of default rate for analysis.
top5_sub_grades = df_master_loan.groupby("sub_grade").default.mean().nlargest().index.to_list()
df_master_loan_top5subgrade = df_master_loan[df_master_loan['sub_grade'].isin(top5_sub_grades)]
mybarstacked(df_master_loan_top5subgrade,"term","sub_grade","Term and Sub Grade vs Default Rate","Term","Default Rate")
# <b>Observation</b>:<br>
#
# For the 36-month loan term we can see that the following sub grades have a high chance of default.<br>
# G3 - 100% <br>
# G5 - 86% <br>
# Note:
# The number of applicants who match the above criteria is very low in the current data. We need more data to gain more confidence in this observation.
mybarstacked(df_master_loan,"term","emp_length_bin","Term and Employee Length Range vs Default Rate","Term","Default Rate")
# <b>Observation</b>:<br>
#
# For the 60-month loan term, employee length doesn't seem to have an effect, as the default rate is approximately similar across all employee lengths.
mybarstacked(df_master_loan,"term","int_rate_bin","Term and Interest Rate vs Default Rate","Term","Default Rate")
# <b>Observation</b>:<br>
# Across all loan terms, increasing interest rates lead to an increasing rate of default.<br>
# Loan applicants with high interest rates and a 60-month loan term have a 30% chance of defaulting on a loan.
mybarstacked(df_master_loan,"term","income_grade","Term and Income Grade vs Default Rate","Term","Default Rate")
# <b>Observation</b>:<br>
# Across all loan terms, as annual income increases the default rate goes down.<br>
# Applicants with a 60-month loan term in the Low income grade (i.e. < $40,000) have a high default rate of 31%.
mybarstacked(df_master_loan,"int_rate_bin","income_grade","Interest Rate Range and Income Grade Range vs Default Rate",
"Interest Rate Range","Default Rate")
# <b>Observation</b>:<br>
# Applicants in the Low income grade (i.e. < $40,000) have high default rates across the Low, Medium and High interest rate ranges.
#
# Low-income applicants with an interest rate in the High range (i.e. >15%) have a default rate of 29%.
mybarstacked(df_master_loan,"sub_grade","int_rate_bin")
# Assumption: Grading starts with A1 considered as the best Sub Grade<br>
# <b>Observation</b>:<br>
# As the grading goes down, the interest rate goes up, which leads to an increase in the default rate.
# Analysing - __dti__
mybarstacked(df_master_loan_top5purp, "dti_range", "purpose",
"DTI and Purpose vs Default Rate",
"DTI",
"Default Rate"
)
plt.show()
# <b>Observation</b>:<br>
# Irrespective of the DTI range, the "other" and "debt_consolidation" purpose categories have high rates of default.
# As the DTI increases, the default rates in these two categories increase.
# Analysing - __grade__
mybarstacked(df_master_loan_top5purp, "grade", "purpose",
"Grade and Purpose vs Default Rate",
"Grade",
"Default Rate"
)
plt.show()
# Assumption: Grading starts with A considered as the best Grade<br>
# <b>Observation</b>:<br>
# Across all purpose categories default rates are on the rise as the grade is decreasing.
# Analysing - __funded_amnt_inv_range__
mybarstacked(df_master_loan_top5purp, "funded_amnt_inv_range", "purpose",
"Funding Amount by Investors and Purpose vs Default Rate",
"Funding Amount by Investors",
"Default Rate"
)
plt.show()
# <b>Observation</b>:<br>
# Irrespective of the funded-amount-by-investors range, the "other" and "debt_consolidation" loan purpose categories have consistently high default rates.
# Analyzing - __installment__
mybarstacked(df_master_loan_top5purp, "installment_range","purpose",
"Installment and Purpose vs Default Rate",
"Installment",
"Default Rate"
)
plt.show()
# <b>Observation</b>:<br>
# In the Very High installment range, the "other" and "debt_consolidation" loan purpose categories have a high default rate.
# Analysing - __issue_month__
mybarstacked(df_master_loan_top5purp, "issue_month", "purpose",
"Issue Month and Purpose vs Default Rate",
"Issue Month",
"Default Rate"
)
plt.show()
# <b>Observation</b>:<br>
# Across all loan purposes the default rate rises in the last 3 months of the year, especially for the "other" category.
#Heat Map provides the correlation between determined variables.
driver_variables = ["int_rate",
"term",
"dti",
"funded_amnt_inv",
#grade, - Non Numeric
#sub_grade, - Non Numeric
"annual_inc",
"issue_month",
"installment",
"default"
#default is not a driver variable. It is added for analysis.
]
mask_ut=np.triu(np.ones(df_master_loan[driver_variables].corr().shape)).astype(bool)  # mask for the upper triangle
plt.figure(figsize=(15,10))
sns.heatmap(df_master_loan[driver_variables].corr(),cmap="coolwarm",annot=True,mask=mask_ut,fmt='.1%')
plt.xticks(rotation=45)
plt.show()
# <b>Observations from Heat Map</b>:<br>
# 1. term and interest rate have a positive correlation of __44.1%__.
# 2. term and funded amount by investors have a positive correlation of __34.1%__.
# 3. interest rate and funded amount by investors have a positive correlation of __29.3%__.
# 4. interest rate and installment have a positive correlation of __27.4%__.
# 5. annual income and funded amount by investors have a positive correlation of __38.7%__.
# 6. annual income and installment have a positive correlation of __40.2%__.
# 7. default rate and interest rate have a positive correlation of __21.3%__.
# 8. default rate and term have a positive correlation of __18.0%__.
# 9. funded amount by investors and installment have a positive correlation of __92.2%__.
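# The strongest pairs can also be pulled out of the correlation matrix programmatically rather than read off the heat map; a small sketch reusing the same driver_variables list:
corr_matrix = df_master_loan[driver_variables].corr()
upper_triangle = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))
corr_pairs = upper_triangle.unstack().dropna()
top_corr_pairs = corr_pairs.reindex(corr_pairs.abs().sort_values(ascending=False).index).head(10)
print(top_corr_pairs.apply('{:.1%}'.format))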
#let's visualize the driver variables in a pair plot
sns.pairplot(df_master_loan[driver_variables],diag_kind="hist",hue="default",corner=True,height = 2,palette="husl")
plt.show()
| 40,185 |
/keras/SWWAE.ipynb
|
f060d043013ef98ee6d3f396c0851d768bd93bd4
|
[] |
no_license
|
nickyongzhang/Learning
|
https://github.com/nickyongzhang/Learning
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 46,161 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Description
# Trains a stacked what-where autoencoder built on residual blocks on the
# MNIST dataset. It exemplifies two influential methods that have been developed
# in the past few years.
#
# The first is the idea of properly 'unpooling.' During any max pool, the
# exact location (the 'where') of the maximal value in a pooled receptive field
# is lost, however it can be very useful in the overall reconstruction of an
# input image. Therefore, if the 'where' is handed from the encoder
# to the corresponding decoder layer, features being decoded can be 'placed' in
# the right location, allowing for reconstructions of much higher fidelity.
#
# ## References
#
# - Visualizing and Understanding Convolutional Networks
# Matthew D Zeiler, Rob Fergus
# https://arxiv.org/abs/1311.2901v3
# - Stacked What-Where Auto-encoders
# Junbo Zhao, Michael Mathieu, Ross Goroshin, Yann LeCun
# https://arxiv.org/abs/1506.02351v8
#
# The second idea exploited here is that of residual learning. Residual blocks
# ease the training process by allowing skip connections that give the network
# the ability to be as linear (or non-linear) as the data sees fit. This allows
# for much deeper networks to be easily trained. The residual element seems to
# be advantageous in the context of this example as it allows a nice symmetry
# between the encoder and decoder. Normally, in the decoder, the final
# projection to the space where the image is reconstructed is linear, however
# this does not have to be the case for a residual block as the degree to which
# its output is linear or non-linear is determined by the data it is fed.
# However, in order to cap the reconstruction in this example, a hard softmax is
# applied as a bias because we know the MNIST digits are mapped to [0, 1].
#
# ## References
# - Deep Residual Learning for Image Recognition
# Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
# https://arxiv.org/abs/1512.03385v1
# - Identity Mappings in Deep Residual Networks
# Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
# https://arxiv.org/abs/1603.05027v3
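# Before building the model, the cell below is a tiny NumPy-only illustration of the 'where' idea (added for clarity; it is independent of the Keras code that follows). For a single 2x2 pooling window, the 'where' mask is 1 at the position that held the maximum and 0 elsewhere, so multiplying by it during decoding places the value back where it came from.
import numpy as np

window = np.array([[1., 3.],
                   [4., 2.]])                    # one 2x2 pooling window
pooled = window.max()                            # max pooling keeps only the 'what' (the value 4.0)
where = (window == pooled).astype('float32')     # the 'where' mask marks the argmax position
unpooled = where * pooled                        # unpooling: the value returns to its original location
print(where)
print(unpooled)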
# +
from __future__ import print_function
import numpy as np
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Activation
from keras.layers import UpSampling2D, Conv2D, MaxPooling2D
from keras.layers import Input, BatchNormalization, ELU
import matplotlib.pyplot as plt
import keras.backend as K
from keras import layers
def convresblock(x, nfeats=8, ksize=3, nskipped=2, elu=True):
"""The proposed residual block from [4].
Running with elu=True will use ELU nonlinearity and running with
elu=False will use BatchNorm + RELU nonlinearity. While ELU's are fast
due to the fact they do not suffer from BatchNorm overhead, they may
overfit because they do not offer the stochastic element of the batch
formation process of BatchNorm, which acts as a good regularizer.
# Arguments
x: 4D tensor, the tensor to feed through the block
nfeats: Integer, number of feature maps for conv layers.
ksize: Integer, width and height of conv kernels in first convolution.
nskipped: Integer, number of conv layers for the residual function.
elu: Boolean, whether to use ELU or BN+RELU.
# Input shape
4D tensor with shape:
`(batch, channels, rows, cols)`
# Output shape
4D tensor with shape:
`(batch, filters, rows, cols)`
"""
y0 = Conv2D(nfeats, ksize, padding='same')(x)
y = y0
for i in range(nskipped):
if elu:
y = ELU()(y)
else:
y = BatchNormalization(axis=1)(y)
y = Activation('relu')(y)
y = Conv2D(nfeats, 1, padding='same')(y)
return layers.add([y0, y])
def getwhere(x):
''' Calculate the 'where' mask that contains switches indicating which
index contained the max value when MaxPool2D was applied. Using the
gradient of the sum is a nice trick to keep everything high level.'''
y_prepool, y_postpool = x
return K.gradients(K.sum(y_postpool), y_prepool)
if K.backend() == 'tensorflow':
raise RuntimeError('This example can only run with the '
'Theano backend for the time being, '
'because it requires taking the gradient '
'of a gradient, which isn\'t '
'supported for all TensorFlow ops.')
# +
# This example assume 'channels_first' data format.
K.set_image_data_format('channels_first')
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# The size of the kernel used for the MaxPooling2D
pool_size = 2
# The total number of feature maps at each layer
nfeats = [8, 16, 32, 64, 128]
# The sizes of the pooling kernel at each layer
pool_sizes = np.array([1, 1, 1, 1, 1]) * pool_size
# The convolution kernel size
ksize = 3
# Number of epochs to train for
epochs = 5
# Batch size during training
batch_size = 128
if pool_size == 2:
# if using a 5 layer net of pool_size = 2
x_train = np.pad(x_train, [[0, 0], [0, 0], [2, 2], [2, 2]],
mode='constant')
x_test = np.pad(x_test, [[0, 0], [0, 0], [2, 2], [2, 2]], mode='constant')
nlayers = 5
elif pool_size == 3:
# if using a 3 layer net of pool_size = 3
x_train = x_train[:, :, :-1, :-1]
x_test = x_test[:, :, :-1, :-1]
nlayers = 3
else:
import sys
sys.exit('Script supports pool_size of 2 and 3.')
# Shape of input to train on (note that model is fully convolutional however)
input_shape = x_train.shape[1:]
# The final list of the size of axis=1 for all layers, including input
nfeats_all = [input_shape[0]] + nfeats
# First build the encoder, all the while keeping track of the 'where' masks
img_input = Input(shape=input_shape)
# We push the 'where' masks to the following list
wheres = [None] * nlayers
y = img_input
for i in range(nlayers):
y_prepool = convresblock(y, nfeats=nfeats_all[i + 1], ksize=ksize)
y = MaxPooling2D(pool_size=(pool_sizes[i], pool_sizes[i]))(y_prepool)
wheres[i] = layers.Lambda(
getwhere, output_shape=lambda x: x[0])([y_prepool, y])
# Now build the decoder, and use the stored 'where' masks to place the features
for i in range(nlayers):
ind = nlayers - 1 - i
y = UpSampling2D(size=(pool_sizes[ind], pool_sizes[ind]))(y)
y = layers.multiply([y, wheres[ind]])
y = convresblock(y, nfeats=nfeats_all[ind], ksize=ksize)
# Use hard_sigmoid to clip the range of the reconstruction
y = Activation('hard_sigmoid')(y)
# Define the model and its mean squared error loss, and compile it with Adam
model = Model(img_input, y)
model.compile('adam', 'mse')
# Fit the model
model.fit(x_train, x_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, x_test))
# Plot
x_recon = model.predict(x_test[:25])
x_plot = np.concatenate((x_test[:25], x_recon), axis=1)
x_plot = x_plot.reshape((5, 10, input_shape[-2], input_shape[-1]))
x_plot = np.vstack([np.hstack(x) for x in x_plot])
plt.figure()
plt.axis('off')
plt.title('Test Samples: Originals/Reconstructions')
plt.imshow(x_plot, interpolation='none', cmap='gray')
plt.savefig('reconstructions.png')
# -
Ytrain_pred = model.predict(Xtrain) #Use the previously trained model to make predictions on the same training samples
Yest = model.predict(Xtest) #Use the previously trained model to make predictions on the test samples
#Evaluate the model's predictions with the test data
EficienciaTrain[j] = np.mean(Ytrain_pred.ravel() == Ytrain.ravel())
EficienciaVal[j] = np.mean(Yest.ravel() == Ytest.ravel())
j += 1
print('Eficiencia durante el entrenamiento = ' + str(np.mean(EficienciaTrain)) + '+-' + str(np.std(EficienciaTrain)))
print('Eficiencia durante la validación = ' + str(np.mean(EficienciaVal)) + '+-' + str(np.std(EficienciaVal)))
# -
# Once the code is complete, run the experiments needed to fill in the following table:
import pandas as pd
import qgrid
randn = np.random.randn
df_types = pd.DataFrame({
'Numero de arboles' : pd.Series([5,5,5,5,5,5,10,10,10,10,10,10,20,20,20,20,20,20,50,50,50,50,50,50,100,100,100,100,100,100]), 'Variables analizadas por nodo' : pd.Series([5,10,15,20,25,30,5,10,15,20,25,30,5,10,15,20,25,30,5,10,15,20,25,30,5,10,15,20,25,30])})
df_types["Eficiencia en validacion"] = ""
df_types["Intervalo de confianza"] = ""
df_types.set_index(['Numero de arboles','Variables analizadas por nodo'], inplace=True)
#df_types.sort_index(inplace=True)
df_types["Eficiencia en validacion"][0]=0.8778
df_types["Intervalo de confianza"][0] = 0.0143
qgrid_widget = qgrid.show_grid(df_types, show_toolbar=False)
qgrid_widget
# Run the following instruction to keep the results of the tests saved in the notebook.
qgrid_widget.get_changed_df()
# Answer:
#
# 3.1 Run an additional test using all of the variables for the selection of the best threshold at each node. According to the results, is it better to use a bagging of trees or Random Forest? Explain your answer.
#
# A: Using all of the variables (the model has 39 features), the results obtained are shown below:
#
# | Trees | Efficiency +- confidence interval |
# | --- | --- |
# | 5 | 0.8679209469037824 +- 0.03494183927878393 |
# | 10 | 0.919491471803523 +- 0.01928076973100282 |
# | 20 | 0.9527499515900375 +- 0.023919225780921167 |
# | 50 | 0.9610610320863378 +- 0.023849163448816137 |
# | 100 | 0.9652287291316153 +- 0.012808847013543736 |
#
# Therefore, it can be observed that the validation efficiency is generally higher when evaluating with all of the model's variables, and the confidence interval does not vary much, so it is better to use bagging.
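# For reference, the "bagging of trees" setting discussed above corresponds to letting every split consider all of the features; with scikit-learn this is simply a RandomForestClassifier with max_features=None. A minimal, illustrative sketch of the two configurations (the variable names are placeholders, not part of the exercise code):
from sklearn.ensemble import RandomForestClassifier

# Random Forest proper: each split draws a random subset of the features
rf_model = RandomForestClassifier(n_estimators=100, max_features=20)

# Bagging of decision trees: every split may consider all of the features
bagged_trees_model = RandomForestClassifier(n_estimators=100, max_features=None)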
# ## Exercise 4
#
# Use the time package (the time.clock() instruction) to measure the effect of the number of trees and of the number of variables analysed per node on the time it takes to train the Random Forest model. Build a plot of time vs. number of trees, keeping the number of variables constant at 20, and a plot of time vs. number of variables, keeping the number of trees constant at 30.
def train_random_forest(num_tree, max_var):
    #Validate the model
Folds = 4
random.seed(19680801)
EficienciaTrain = np.zeros(Folds)
EficienciaVal = np.zeros(Folds)
skf = StratifiedKFold(n_splits=Folds)
j = 0
tiempos = []
for train, test in skf.split(X, Y):
Xtrain = X[train,:]
Ytrain = Y[train]
Xtest = X[test,:]
Ytest = Y[test]
        #Normalise the data
media = np.mean(Xtrain)
desvia = np.std(Xtrain)
Xtrain = sc.stats.stats.zscore(Xtrain)
Xtest = (Xtest - np.matlib.repmat(media, Xtest.shape[0], 1))/np.matlib.repmat(desvia, Xtest.shape[0], 1)
        #Call the function to create and train the model using the training data
model = RandomForestClassifier(n_estimators = num_tree, max_features = max_var)
model.fit(Xtrain,Ytrain)
        #cvar = model.n_features_ #number of variables in the model
        #Validation
        Ytrain_pred = model.predict(Xtrain) #Use the previously trained model to make predictions on the same training samples
        Yest = model.predict(Xtest) #Use the previously trained model to make predictions on the test samples
        #Evaluate the model's predictions with the test data
EficienciaTrain[j] = np.mean(Ytrain_pred.ravel() == Ytrain.ravel())
EficienciaVal[j] = np.mean(Yest.ravel() == Ytest.ravel())
j += 1
print('Eficiencia durante el entrenamiento = ' + str(np.mean(EficienciaTrain)) + '+-' + str(np.std(EficienciaTrain)))
print('Eficiencia durante la validación = ' + str(np.mean(EficienciaVal)) + '+-' + str(np.std(EficienciaVal)))
# +
import time
import matplotlib.pyplot as plt
num_trees = [5,10,20,50,100]
var_per_nodes = [5,10,15,20,25,30]
times1 = np.zeros(len(var_per_nodes))
times2 = np.zeros(len(num_trees))
j=0
for num in num_trees:
time_init = time.perf_counter()
train_random_forest(num,20)
time_end = time.perf_counter()
times2[j] = time_end - time_init
j+=1
j=0
for var in var_per_nodes:
time_init = time.perf_counter()
train_random_forest(30,var)
time_end = time.perf_counter()
times1[j] = time_end - time_init
j+=1
plt.subplot(211)
plt.plot(num_trees,times2,'o-')
plt.title('Número de variables = 20')
plt.xlabel('Número de arboles')
plt.ylabel('Tiempo')
plt.show()
plt.subplot(212)
plt.plot(var_per_nodes,times1,'o-')
plt.title('Número de arboles = 30')
plt.xlabel('Número de variables')
plt.ylabel('Tiempo')
plt.show()
# -
| 13,529 |
/방울토마토/1920/하늘.ipynb
|
8a0d36a56696360e3e52e2825ba59672387f8c05
|
[] |
no_license
|
meucham11/nong_intern
|
https://github.com/meucham11/nong_intern
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 2,422,977 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # muziek angular
# ## imports
from IPython.display import display
from ipywidgets import widgets
import requests
from pymongo import MongoClient
# +
nameInput = widgets.Text(
placeholder='Geef naam in',
description='naam:'
)
nameBtn=widgets.Button(
description='zoek',
button_style='warning', # 'success', 'info', 'warning', 'danger' or ''
tooltip='get country',
icon='check'
)
countries=[]
response=[]
borders=[]
def search_country(sender):
response=requests.get('https://restcountries.eu/rest/v2/')
for land in response.json():
if land['translations']['nl']==nameInput.value:
print(land['translations']['nl'])
if len(countries)>0:
countries.pop()
countries.append(land)
for border in land['borders']:
borders.append(border)
print border
nameBtn.on_click(search_country)
display(nameInput)
display(nameBtn)
# +
mongo_client = MongoClient('mongodb://127.0.0.1:27017')
db = mongo_client.person
btnInsert=widgets.Button(
description='plaats in mongo',
button_style='info', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me',
icon='check'
)
def opslaan_land(sender):
json ={'name':countries[0]['name'],
'demonym':countries[0]['demonym'],
'currency':countries[0]['currencies'][0]['code'],
'talen':len(countries[0]['languages']),'borders':borders}
response = db.landen.find()
gevonden =0
for item in response:
print(item['name'])
if item['name']==json['name']:
print('persoon al aanwezig')
gevonden = 1
if gevonden ==0:
db.person.insert_one(json)
print(json)
btnInsert.on_click(opslaan_land)
btnInsert
# -
_data = pickle.load(fr)
# # Add the week number (주차) to the growth data
# Add week number
def add_weeknum(sang_df):
sang_df['diff']=sang_df['WeekNum'].shift(1)
sang_df['diff2']=sang_df['WeekNum']-sang_df['diff']
num=-1
test_week=[]
for i in range(len(sang_df)):
if sang_df['diff2'].loc[i]!=0:
num+=1
test_week.append(num)
else:
test_week.append(num)
del sang_df['diff']
del sang_df['diff2']
return test_week
sang_data1['주차']=add_weeknum(sang_data1)
sang_data2['주차']=add_weeknum(sang_data2)
sang_data1=sang_data1[['Date','Sample','주차','생장길이']]
sang_data2=sang_data2[['Date','Sample','주차','생장길이']]
# # Remove growth outliers (optional; this step can be skipped)
# +
q1 = sang_data1['생장길이'].quantile(0.25)
q3 = sang_data1['생장길이'].quantile(0.75)
iqt = 1.5 * (q3 - q1)
# Remove outliers
sang_data_delout1=sang_data1[(sang_data1['생장길이'] < (q3 + iqt)) & (sang_data1['생장길이'] > (q1 - iqt))]
# +
q1 = sang_data2['생장길이'].quantile(0.25)
q3 = sang_data2['생장길이'].quantile(0.75)
iqt = 1.5 * (q3 - q1)
# Remove outliers
sang_data_delout2=sang_data2[(sang_data2['생장길이'] < (q3 + iqt)) & (sang_data2['생장길이'] > (q1 - iqt))]
# -
# # Split the environment data into 12-hour units
# Cut into 12-hour windows and add the Date
def env_add_Date(env_df,생육조사기간):
df=pd.DataFrame()
for i in range(len(생육조사기간)-1):
start_i = 생육조사기간[i]+timedelta(hours=12)
end_i = 생육조사기간[i+1]+timedelta(hours=12)
df2 = env_df[(env_df['date']>=start_i) & (env_df['date']<end_i)]
df2['Date']=생육조사기간[i+1]
df=pd.concat([df,df2])
return df
생육조사기간=sorted(list(set(list(sang_data1['Date']))))
env_data1=env_add_Date(env_data,생육조사기간)
생육조사기간=sorted(list(set(list(sang_data2['Date']))))
env_data2=env_add_Date(env_data,생육조사기간)
# # Compute the difference between the environment date and the measurement date
env_data1['n_date']=env_data1['date'].apply(lambda x : x.strftime('%Y-%m-%d'))
env_data2['n_date']=env_data2['date'].apply(lambda x : x.strftime('%Y-%m-%d'))
def n_date(d):
myDatetimeStr = d
myDatetime = datetime.strptime(myDatetimeStr, '%Y-%m-%d')
return myDatetime
env_data1['n_date']=env_data1['n_date'].apply(n_date)
env_data2['n_date']=env_data2['n_date'].apply(n_date)
env_data1['diff']=env_data1['Date']-env_data1['n_date']
env_data2['diff']=env_data2['Date']-env_data2['n_date']
env_data1.head(10)
env_data2.head(10)
# # Handling the solar radiation (일사량) values
def cumsum_to_value(df):
    #Convert the cumulative solar radiation (누적일사량) into per-interval values
df['lag_누적일사량']=df['누적일사량'].shift(1)
df['일사량2']=df['누적일사량']-df['lag_누적일사량']
    #Replace negative values with 0 and NA with 0
df['일사량2_치환']=df['일사량2'].apply(lambda x : 0 if x<0 else x)
df['일사량']=df['일사량2_치환'].fillna(0)
del df['lag_누적일사량']
del df['일사량2']
del df['일사량2_치환']
return df
base_col=['date','Date']
cumsum_list=['누적일사량']
cumsum_col=base_col+cumsum_list
cumsum_df1=cumsum_to_value(env_data1[cumsum_col])
env_data1['일사량']=cumsum_df1['일사량']
del env_data1['누적일사량']
cumsum_df2=cumsum_to_value(env_data2[cumsum_col])
env_data2['일사량']=cumsum_df2['일사량']
del env_data2['누적일사량']
# # Create week numbers for the environment data
# Match the growth Date with the environment Date
def match_test_weeknum(sang_data,env_data):
dic=dict(zip(sang_data['Date'],sang_data['주차']))
result=env_data.replace({"Date":dic})["Date"]
return result
env_data1['주차']=match_test_weeknum(sang_data1,env_data1)
env_data2['주차']=match_test_weeknum(sang_data2,env_data2)
env_data2
# # Pivot the environment data
def remove_outlier_pivot(df,col,aggfunc):
cut_df=pd.DataFrame()
for i in range(len(df['주차'].unique())):
k=df[df['주차']==i+1]
q1 = k[col].quantile(0.25)
q3 = k[col].quantile(0.75)
iqt = 1.5 * (q3 - q1)
        # Remove outliers
k=k[(k[col] < (q3 + iqt)) & (k[col] > (q1 - iqt))]
cut_df=pd.concat([cut_df,k])
result = cut_df.pivot_table(index=['주차'],
values=col,
aggfunc=aggfunc).reset_index(drop=False)
return result
# Set which variables to aggregate by mean and which to aggregate by sum
base_col=['date','Date','주차','diff']
avg_list=['내부온도','내부습도','CO2']
sum_list=['일사량']
cols=avg_list+sum_list
aggfunc=['mean']*len(avg_list)+['sum']*len(sum_list)
i=0
for col,agg in zip(cols,aggfunc):
df=env_data1[[col,'n_date','주차']]
    # Remove outliers and pivot
my_pivot = remove_outlier_pivot(df,col,agg)
if i==0:
result1 = my_pivot
i+=1
continue
result1 = pd.merge(result1,my_pivot,how='inner',on='주차')
i=0
for col,agg in zip(cols,aggfunc):
df=env_data2[[col,'주차']]
    # Remove outliers and pivot
my_pivot = remove_outlier_pivot(df,col,agg)
if i==0:
result2 = my_pivot
i+=1
continue
result2 = pd.merge(result2,my_pivot,how='inner',on='주차')
result1.head(5)
result2.head(5)
# # Create lag variables (3 weeks)
p=result1
lag_result1=result1
for i in range(3):
raw_col=p.columns[1:]
later_df=p.iloc[:,1:].shift(periods=i+1)
later_col=['_'+str(i+1)+'주전_'+j for j in list(raw_col)]
later_df.columns=later_col
lag_result1=pd.concat([lag_result1,later_df],axis=1)
p=result2
lag_result2=result2
for i in range(3):
raw_col=p.columns[1:]
later_df=p.iloc[:,1:].shift(periods=i+1)
later_col=['_'+str(i+1)+'주전_'+j for j in list(raw_col)]
later_df.columns=later_col
lag_result2=pd.concat([lag_result2,later_df],axis=1)
lag_result1.head(5)
lag_result2.head(5)
# # Nutrient solution (양액) data
yang_data = yang_data[['date','Irrigation (dripper) [ml]','Number of irrigation starts based on radiation sum','Total number of irrigation starts']]
yang_data['n_date']=yang_data['date'].apply(lambda x : x.strftime('%Y-%m-%d'))
def n_date(d):
myDatetimeStr = d
myDatetime = datetime.strptime(myDatetimeStr, '%Y-%m-%d')
return myDatetime
yang_data['date']=yang_data['n_date'].apply(n_date)
del yang_data['n_date']
g_yang_data=yang_data.groupby(['date'])['Number of irrigation starts based on radiation sum','Total number of irrigation starts'].agg('max').reset_index()
g_yang_data.head(5)
def biger(df):
if df['Number of irrigation starts based on radiation sum']>=df['Total number of irrigation starts']:
return df['Number of irrigation starts based on radiation sum']
else:
return df['Total number of irrigation starts']
g_yang_data['big']=g_yang_data.apply(biger,axis=1)
g_yang_data['tot_yang']=g_yang_data['big']*70.9
del g_yang_data['Number of irrigation starts based on radiation sum']
del g_yang_data['Total number of irrigation starts']
del g_yang_data['big']
g_yang_data.head(5)
# Match the growth Date with the environment Date
def match_test_weeknum(sang_data,g_yang_data):
dic=dict(zip(sang_data['Date'],sang_data['주차']))
result=g_yang_data.replace({"date":dic})["date"]
return result
# Add a week-number column to the nutrient solution data
g_yang_data['주차1']=match_test_weeknum(sang_data1,g_yang_data)
g_yang_data['주차2']=match_test_weeknum(sang_data2,g_yang_data)
d1=sang_data1['Date'].max().strftime('%Y-%m-%d')
d2=sang_data2['Date'].max().strftime('%Y-%m-%d')
start_idx1=g_yang_data[g_yang_data['주차1']==0].index
start_idx2=g_yang_data[g_yang_data['주차2']==0].index
end_idx1=g_yang_data[g_yang_data['date']==datetime.strptime(d1,'%Y-%m-%d')].index
end_idx2=g_yang_data[g_yang_data['date']==datetime.strptime(d2,'%Y-%m-%d')].index
yang_data1=g_yang_data.iloc[start_idx1[0]:end_idx1[0],:]
yang_data2=g_yang_data.iloc[start_idx2[0]:end_idx2[0],:]
del yang_data1['주차2']
del yang_data2['주차1']
yang_data1=yang_data1[['date', '주차1', 'tot_yang']]
yang_data2=yang_data2[['date', '주차2', 'tot_yang']]
yang_data1.head(5)
yang_data2.head(5)
# # Add week numbers
for i in range(len(yang_data1)):
if type(yang_data1['주차1'].iloc[i])==int:
k=yang_data1['주차1'].iloc[i]
else:
yang_data1['주차1'].iloc[i]=k
yang_data1=yang_data1.reset_index(drop=True)
yang_data1['주차1']=yang_data1['주차1']+1
for i in range(len(yang_data2)):
if type(yang_data2['주차2'].iloc[i])==int:
k=yang_data2['주차2'].iloc[i]
else:
yang_data2['주차2'].iloc[i]=k
yang_data2=yang_data2.reset_index(drop=True)
yang_data2['주차2']=yang_data2['주차2']+1
yang_data1=yang_data1.pivot_table(index='주차1',values='tot_yang',aggfunc='sum').reset_index()
yang_data2=yang_data2.pivot_table(index='주차2',values='tot_yang',aggfunc='sum').reset_index()
# # Lag variables
p=yang_data1
yang_lag_result1=yang_data1
for i in range(3):
raw_col=p.columns[1:]
later_df=p.iloc[:,1:].shift(periods=i+1)
later_col=['_'+str(i+1)+'일주_'+j for j in list(raw_col)]
later_df.columns=later_col
yang_lag_result1=pd.concat([yang_lag_result1,later_df],axis=1)
yang_lag_result1.head(5)
p=yang_data2
yang_lag_result2=yang_data2
for i in range(3):
raw_col=p.columns[1:]
later_df=p.iloc[:,1:].shift(periods=i+1)
later_col=['_'+str(i+1)+'일주_'+j for j in list(raw_col)]
later_df.columns=later_col
yang_lag_result2=pd.concat([yang_lag_result2,later_df],axis=1)
lag_env_yang1=pd.merge(lag_result1,yang_lag_result1,how='inner',left_on='주차',right_on='주차1')
del lag_env_yang1['주차1']
lag_env_yang1.head(5)
lag_env_yang2=pd.merge(lag_result2,yang_lag_result2,how='inner',left_on='주차',right_on='주차2')
del lag_env_yang2['주차2']
lag_env_yang2.head(5)
# # Merge
# +
merge_dataset1 = pd.merge(sang_data1,lag_env_yang1,how='left',on='주차')
merge_dataset_delout1 = pd.merge(sang_data_delout1,lag_env_yang1,how='left',on='주차')
merge_dataset2 = pd.merge(sang_data2,lag_env_yang2,how='left',on='주차')
merge_dataset_delout2 = pd.merge(sang_data_delout2,lag_env_yang2,how='left',on='주차')
# +
dataset1=merge_dataset1.dropna()
dataset_delout1=merge_dataset_delout1.dropna()
dataset2=merge_dataset2.dropna()
dataset_delout2=merge_dataset_delout2.dropna()
# -
plt.figure(figsize=(25,25))
g = sns.heatmap(dataset1.corr(),annot=True, fmt = ".2f", cmap = "coolwarm")
# # XGBOOST
# +
from sklearn.datasets import load_boston
import xgboost
from sklearn.model_selection import train_test_split
from sklearn.metrics import explained_variance_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
# -
def xgb(df,Y,parameters,col):
test_size=0.25
xgb_model = xgboost.XGBRegressor(n_estimators=100, gamma=0,
colsample_bytree=1, max_depth=7)
xgb_regressor = GridSearchCV(xgb_model,parameters,scoring='neg_mean_squared_error',cv=10)
x_train=df.iloc[:,col]
y_train=df[Y]
xgb_regressor.fit(x_train,y_train)
print(xgb_regressor.best_params_)
eta=xgb_regressor.best_params_['eta']
learning_rate=xgb_regressor.best_params_['learning_rate']
subsample = xgb_regressor.best_params_['subsample']
# Kfold
X_train, X_test, y_train, y_test = train_test_split(x_train, y_train, test_size=test_size)
xgb_model = xgboost.XGBRegressor(n_estimators=100, gamma=0,colsample_bytree=1, max_depth=7,
learning_rate=learning_rate,
subsample=subsample,
eta=eta
)
print(len(X_train), len(X_test))
xgb_model.fit(X_train,y_train)
predictions = xgb_model.predict(X_test)
RMSE = mean_squared_error(y_test, predictions)**0.5
print(RMSE)
xgboost.plot_importance(xgb_model)
plt.figure(figsize=[18,10])
sns.lineplot(x=range(len(predictions)),y=predictions,label="pred")
sns.lineplot(x=range(len(y_test)),y=y_test,label="Test")
plt.show()
plt.figure(figsize=[24,12])
sns.lineplot(x=range(len(y_test)),y=y_test-predictions,label="Test")
plt.title("차이")
plt.show()
print("--------------------------------------------------------------------------------------------")
print("--------------------------------------------------------------------------------------------")
print("--------------------------------------------------------------------------------------------")
result_df = pd.DataFrame({'y_test':y_test,
'pred':predictions,
'차이':abs(y_test-predictions)})
result_df=result_df.sort_values('차이',ascending=False).head(6)
return result_df
# # Growth length (생장길이) + environment + nutrient solution, 3 weeks
model_dataset1=merge_dataset1[['생장길이']+list(merge_dataset1.columns[list(range(4,len(merge_dataset1.columns)))])].dropna()
model_dataset1.head(5)
# +
parameters = {'eta':[0.001,0.005,0.01],
'learning_rate':[0.13,0.15,0.17],
'subsample':[0.85,0.86,0.9]}
col=list(range(1,len(model_dataset1.columns)))
result = xgb(model_dataset1,'생장길이',parameters=parameters,col=col)
merge_dataset1.iloc[result.index,:]
# -
xgb(dataset,y,plot='no')
xgb(dataset,y,plot='no')
xgb(dataset,y,plot='no')
xgb(dataset,y,plot='no')
xgb(dataset,y,plot='no')
y='생장길이'
dataset_delout['주차']=pd.to_numeric(dataset_delout['주차'])
xgb(dataset_delout,y,plot='no')
xgb(dataset_delout,y,plot='no')
xgb(dataset_delout,y,plot='no')
xgb(dataset_delout,y,plot='no')
xgb(dataset_delout,y,plot='no')
xgb(dataset_delout,y,plot='no')
# ----
# ----
# ----
# ----
# ----
# ----
# ----
# ----
# ----
# # Applying the nutrient solution data
# ### Environment 10, nutrient solution 10
# +
y='생장길이'
dataset_delout1['주차']=pd.to_numeric(dataset_delout1['주차'])
hydict={'eta':[0.01,0.02,0.03,0.05],
'learning_rate':[0.2,0.3,0.4,0.5],
'subsample':[0.80,0.9,0.95]
}
# -
xgb(dataset_delout1,y,hydict,plot='ok')
xgb(dataset_delout1,y,hydict,plot='ok')
xgb(dataset_delout1,y,hydict,plot='ok')
xgb(dataset_delout1,y,hydict,plot='ok')
hydict={'eta':[0.008,0.01,0.012],
'learning_rate':[0.35,0.4,0.45],
'subsample':[0.85,0.9,0.92]
}
xgb(dataset_delout1,y,hydict,plot='ok')
xgb(dataset_delout1,y,hydict,plot='ok')
xgb(dataset_delout1,y,hydict,plot='ok')
total_model_set=pd.concat([dataset_delout1,dataset_delout2])
# +
y='생장길이'
total_model_set['주차']=pd.to_numeric(total_model_set['주차'])
hydict={'eta':[0.01,0.02,0.03,0.05],
'learning_rate':[0.2,0.3,0.4,0.5],
'subsample':[0.80,0.9,0.95]
}
# -
xgb(total_model_set,y,hydict,plot='ok')
xgb(total_model_set,y,hydict,plot='ok')
xgb(total_model_set,y,hydict,plot='ok')
n_anova_data=fruit_data1.pivot_table(index = ['샘플','화방'],
values='수확번호',
aggfunc='max').reset_index()
# Filter on 화방 while it is still numeric, then convert the grouping columns to strings for the ANOVA
n_anova_data=n_anova_data[n_anova_data['화방']<=18]
n_anova_data['샘플']=n_anova_data['샘플'].astype('str')
n_anova_data['화방']=n_anova_data['화방'].astype('str')
sns.boxplot(data=n_anova_data,x='샘플',y='수확번호')
sns.boxplot(data=n_anova_data,x='화방',y='수확번호',sort=False)
import statsmodels.api as sm
from statsmodels.formula.api import ols
import scipy.stats as stats
# +
lm = ols('수확번호 ~ C(샘플) + C(화방) + C(샘플):C(화방)', data=n_anova_data).fit()
anova_table = sm.stats.anova_lm(lm, typ=2)
anova_table
# -
| 16,890 |
/NLP-Indeed-Scaper.ipynb
|
584a9e70fddff3de9a79360a98665a5296d8cd22
|
[
"Apache-2.0"
] |
permissive
|
johnmconner/resume-job-posting-nlp-project
|
https://github.com/johnmconner/resume-job-posting-nlp-project
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 11,883 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import requests as rq
from bs4 import BeautifulSoup as bs
from bs4 import SoupStrainer as ss
import time
from random import randint
import pandas as pd
import re
BaseURL = 'https://www.indeed.com'
def SearchForJobs():
params = {'q':'Systems Administrator','l':'Manassas VA','fromage':'last','Sort':'Date'}
request = rq.get('https://www.indeed.com/jobs', params=params)
return request
def MakeRequest(URL):
request = rq.get(URL)
return request
def GetPageURLs(HTML):
Links = HTML.find_all('a', 'jobtitle')
URL_List = []
for link in Links:
URL_List.append(BaseURL + link['href'])
return URL_List
def ExtractData(URLList):
RawDF = pd.DataFrame()
HTMLList = []
for URL in URLList:
bullets_list = []
time.sleep(randint(1,2))
request = MakeRequest(URL)
titlestrainer = ss('div', attrs={'class':re.compile('jobsearch-JobInfoHeader-title-container.*')})
soupstrainer = ss('div', attrs={'id':'jobDescriptionText'})
titlesoup = bs(request.text, 'html.parser', parse_only=titlestrainer)
soup = bs(request.text, 'html.parser', parse_only=soupstrainer)
if titlesoup.h1.text.strip():
title = titlesoup.h1.text.strip()
else:
continue
bullets = soup.find_all('li')
#print(URL)
#print(title)
for bullet in bullets:
bullets_list.append(bullet.text.strip())
joined_bullet_list = ','.join(bullets_list)
DFImport = {'title':title,'Bullets':joined_bullet_list, 'URL':URL}
RawDF = RawDF.append(DFImport, ignore_index=True)
return RawDF
SearchResultsHTML = SearchForJobs()
URLList = GetPageURLs(bs(SearchResultsHTML.text))
df = ExtractData(URLList)
# -
df.to_csv(r'C:\Users\John\Desktop\python\liveProject_resume\resume-job-posting-nlp-project\csv.csv')
df
| 2,147 |
/community/en/r1/deepdream.ipynb
|
160e0cc0d513e7e56ca992d99147f4d3d662978e
|
[
"LicenseRef-scancode-generic-cla",
"Apache-2.0"
] |
permissive
|
AFAgarap/examples
|
https://github.com/AFAgarap/examples
| 2 | 2 |
Apache-2.0
| 2020-06-08T11:20:28 | 2020-04-07T10:53:55 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 31,349 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="B905bc8x4-I6"
# #### Licensed under the Apache License, Version 2.0 (the "License");
# + cellView="form" colab={} colab_type="code" id="YcK9kdXi4q8l"
# @title Copyright 2019 The TensorFlow Authors.
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="xu2SVpFJjmJr"
# # DeepDreaming with TensorFlow
# + [markdown] colab_type="text" id="AE4X-5Z6qJlv"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/community/en/r1/deepdream.ipynb">
# <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
# Run in Google Colab</a>
# </td>
# <td>
#     <a target="_blank" href="https://github.com/tensorflow/examples/blob/master/community/en/r1/deepdream.ipynb">
# <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
# View source on GitHub</a>
# </td>
# </table>
#
# + [markdown] colab_type="text" id="rx1FbclXm8c7"
# > For a TensorFlow 2.0 compatible
# version see [TensorFlow.org](https://tensorflow.org/en/beta/tutorials/generative/deepdream.ipynb)
# + [markdown] colab_type="toc" id="hupz2hrZjdnC"
# >[Loading and displaying the model graph](#loading)
#
# >[Naive feature visualization](#naive)
#
# >[Multiscale image generation](#multiscale)
#
# >[Laplacian Pyramid Gradient Normalization](#laplacian)
#
# >[Playing with feature visualizations](#playing)
#
# >[DeepDream](#deepdream)
#
#
# + [markdown] colab_type="text" id="-PLC9SvcQgkG"
# This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:
#
# - visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see [GoogLeNet](https://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](https://storage.googleapis.com/deepdream/visualz/vgg16/index.html) galleries)
# - embed TensorBoard graph visualizations into Jupyter notebooks
# - produce high-resolution images with tiled computation ([example](https://storage.googleapis.com/deepdream/pilatus_flowers.jpg))
# - use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost
# - generate DeepDream-like images with TensorFlow (DogSlugs included)
#
#
# The network under examination is the [GoogLeNet architecture](http://arxiv.org/abs/1409.4842), trained to classify images into one of 1000 categories of the [ImageNet](http://image-net.org/) dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow to make these visualizations both efficient to generate and even beautiful. Impatient readers can start with exploring the full galleries of images generated by the method described here for [GoogLeNet](https://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](https://storage.googleapis.com/deepdream/visualz/vgg16/index.html) architectures.
# + cellView="both" colab={} colab_type="code" id="jtD9nb-2QgkY"
# boilerplate code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
from io import BytesIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
# + [markdown] colab_type="text" id="ILvNKvMvc2n5"
# <a id='loading'></a>
# ## Loading and displaying the model graph
#
# The pretrained network can be downloaded [here](https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip). Unpack the `tensorflow_inception_graph.pb` file from the archive and set its path to `model_fn` variable. Alternatively you can uncomment and run the following cell to download the network:
# + colab={} colab_type="code" id="9ozsAvJdn2G7"
# !wget -nc https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip -n inception5h.zip
# + cellView="both" colab={} colab_type="code" id="1kJuJRLiQgkg"
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
# + [markdown] colab_type="text" id="eJZVMSmiQgkp"
# To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of a particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
# + cellView="both" colab={} colab_type="code" id="LrucdvgyQgks"
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = tf.compat.as_bytes("<stripped %d bytes>"%size)
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
# + [markdown] colab_type="text" id="Nv2JqNLBhy1j"
# <a id='naive'></a>
# ## Naive feature visualization
# + [markdown] colab_type="text" id="6LXaGEJkQgk4"
# Let's start with a naive way of visualizing these. Image-space gradient ascent!
# + cellView="both" colab={} colab_type="code" id="ZxC_XGGXQgk7"
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
# + [markdown] colab_type="text" id="ZroBKE5YiDsb"
# <a id="multiscale"></a>
# ## Multiscale image generation
#
# Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
#
# With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
# + cellView="both" colab={} colab_type="code" id="2iwWSOgsQglG"
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
# + cellView="both" colab={} colab_type="code" id="GRCJdG8gQglN"
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
# + [markdown] colab_type="text" id="mDSZMtVYQglV"
# <a id="laplacian"></a>
# ## Laplacian Pyramid Gradient Normalization
#
# This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the [Laplacian pyramid](https://en.wikipedia.org/wiki/Pyramid_%28image_processing%29#Laplacian_pyramid) decomposition. We call the resulting technique _Laplacian Pyramid Gradient Normalization_.
# + cellView="both" colab={} colab_type="code" id="Do3WpFSUQglX"
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
# + cellView="both" colab={} colab_type="code" id="zj8Ms-WqQgla"
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end = ' ')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
# + [markdown] colab_type="text" id="YzXJUF2lQgln"
# <a id="playing"></a>
# ## Playing with feature visualizations
#
# We got a nice smooth image using only 10 iterations per octave. When running on a GPU, this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate a wide diversity of patterns.
# + cellView="both" colab={} colab_type="code" id="a6jfiWqZQglq"
render_lapnorm(T(layer)[:,:,:,65])
# + [markdown] colab_type="text" id="ka6RyOMEnrB5"
# Lower layers produce features of lower complexity.
# + cellView="both" colab={} colab_type="code" id="KYOtrJxMnlws"
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
# + [markdown] colab_type="text" id="wuP8a4FlQglx"
# There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
# + cellView="both" colab={} colab_type="code" id="ozN-nH2yQgl0"
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
# + [markdown] colab_type="text" id="lcPe-ZMv0dYR"
# <a id="deepdream"></a>
# ## DeepDream
#
# Now let's reproduce the [DeepDream algorithm](https://github.com/google/deepdream/blob/master/dream.ipynb) with TensorFlow.
#
# + cellView="both" colab={} colab_type="code" id="qM2U_96hyUwN"
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.',end = ' ')
clear_output()
showarray(img/255.0)
# + [markdown] colab_type="text" id="EuvInTo8n2Hk"
# Let's load some image and populate it with DogSlugs (in case you've missed them).
# + colab={} colab_type="code" id="exYchiCWmR4r"
img_path = tf.keras.utils.get_file("pilatus800.jpg","https://storage.googleapis.com/download.tensorflow.org/example_images/pilatus800.jpg")
# + cellView="both" colab={} colab_type="code" id="M9_vOh_2Qgl-"
img0 = PIL.Image.open(img_path)
img0 = np.float32(img0)
showarray(img0/255.0)
# + cellView="both" colab={} colab_type="code" id="k0oggbGEeC3U"
render_deepdream(tf.square(T('mixed4c')), img0)
# + [markdown] colab_type="text" id="IJzvhEFxpB7E"
# Note that results can differ from the [Caffe](https://github.com/BVLC/caffe)'s implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
#
# Using an arbitrary optimization objective still works:
# + cellView="both" colab={} colab_type="code" id="4GexZuwJdDmu"
render_deepdream(T(layer)[:,:,:,139], img0)
# + [markdown] colab_type="text" id="mYsY6_Ngpfwl"
# Don't hesitate to use higher resolution inputs (also increase the number of octaves)! Here is an [example](https://storage.googleapis.com/deepdream/pilatus_flowers.jpg) of running the flower dream over the bigger image.
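#
# As a hedged sketch (using the `resize`, `render_deepdream`, `T` and `img0` objects defined above), a higher-resolution run simply upscales the input and raises `octave_n`; the exact scale factor and octave count below are illustrative choices, not values from the original notebook:

# +
# Upscale the base image ~2x and dream with more octaves. This is slower and uses
# more memory, but the tiled gradient computation keeps it tractable.
img_big = resize(img0, np.int32(np.float32(img0.shape[:2]) * 2.0))
render_deepdream(tf.square(T('mixed4c')), img_big, octave_n=6)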
# + [markdown] colab_type="text" id="mENNVQd3eD-h"
# We hope that the visualization tricks described here may be helpful for analyzing representations learned by neural networks or find their use in various artistic applications.
i = 0
t1 = train.loc[train['SeriousDlqin2yrs'] != 0]
t0 = train.loc[train['SeriousDlqin2yrs'] == 0]
sns.set_style('whitegrid')
fig, ax = plt.subplots(2, 4, figsize=(16, 6))
for feature in Integer:
    i += 1
    plt.subplot(2, 4, i)
sns.kdeplot(t1[feature], bw=0.5, label="SeriousDlqin2yrs = 1")
sns.kdeplot(t0[feature], bw=0.5, label="SeriousDlqin2yrs = 0")
plt.ylabel('Density plot', fontsize=10)
plt.xlabel(feature, fontsize=10)
locs, labels = plt.xticks()
plt.tick_params(labelsize=10)
plt.show()
# + [markdown] id="0DRHlG5Iy_7O" colab_type="text"
# ### Brief Observations
#
# - age: Same trend as seen from Age_Map in Binary EDA. The majority of the data set is between ages 41 and 63
#
# - NumberOfTime30-59DaysPastDueNotWorse: Very High Kurtosis & right-skewed
#
# - NumberOfOpenCreditLinesAndLoans: High Kurtosis & Right-skewed
#
# - NumberOfTimes90DaysLate: Very High Kurtosis & Very Right-skewed
#
# - NumberRealEstateLoansOrLines: High Kurtosis & Right-skewed
#
# - NumberOfTime60-89DaysPastDueNotWorse: Very High Kurtosis & Very Right-skewed
#
# - NumberOfDependents: High Kurtosis & Right-skewed
#
# - CombinedDefault: Gaussian distribution shape
#
# - Reasonable data set, as those who experience financial distress (SeriousDlqin2yrs=1) show left-skewed distributions (median > mean).
#
# - In other words, the population that experiences financial distress has a greater proportion of defaults
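#
# A quick way to quantify these shape observations (a sketch, assuming the `train` dataframe and the `Integer` feature list defined earlier are still in scope):

# +
# Skewness and excess kurtosis per integer feature (Fisher definition: normal ~ 0)
shape_stats = pd.DataFrame({
    'skewness': train[Integer].skew(),
    'kurtosis': train[Integer].kurtosis(),
}).sort_values('kurtosis', ascending=False)
print(shape_stats)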
# + id="FthM2v7wy_7O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="927dabca-98f2-4a23-a7e8-0145206756e1"
sns.set_style('whitegrid')
for i in Integer:
sns.lmplot(y=i, x='Unnamed: 0', data=train, fit_reg=False, hue='SeriousDlqin2yrs', legend=True,size=5, aspect=1)
plt.show()
# + [markdown] id="zqVrbkTJy_7P" colab_type="text"
# ### Brief Observations
#
# - age: Same pattern seen. Concentrated on ages 41 to 63
#
# - NumberOfTime30-59DaysPastDueNotWorse: Interesting disparity! In other words, we have extreme frequencies.
#
# - NumberOfOpenCreditLinesAndLoans: Evidently, those who have had financial distress (SeriousDlqin2yrs=1) have fewer loans, given their poor credit history
#
# - NumberOfTimes90DaysLate, NumberRealEstateLoansOrLines: Same pattern as 'NumberOfTime30-59DaysPastDueNotWorse'
#
# - NumberOfTime60-89DaysPastDueNotWorse: Same pattern as 'NumberOfTime30-59DaysPastDueNotWorse'
#
# - NumberOfDependents': Interestingly, those who have had financial distress (SeriousDlqin2yrs=1) tend to have fewer dependents than those who have not
# + id="q3EnGEjny_7R" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 392} outputId="508b4642-c266-4e50-b05e-95d787e13c3b"
i = 0
t1 = train.loc[train['SeriousDlqin2yrs'] != 0]
t0 = train.loc[train['SeriousDlqin2yrs'] == 0]
sns.set_style('whitegrid')
fig, ax = plt.subplots(2, 2, figsize=(8, 6))
for feature in Real:
i += 1
plt.subplot(2, 2, i)
sns.kdeplot(t1[feature], bw=0.5, label="SeriousDlqin2yrs = 1")
sns.kdeplot(t0[feature], bw=0.5, label="SeriousDlqin2yrs = 0")
plt.ylabel('Density plot', fontsize=10)
plt.xlabel(feature, fontsize=10)
locs, labels = plt.xticks()
plt.tick_params(labelsize=10)
plt.show()
# + id="v_UDQbspy_7S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="bd456c16-c103-4e54-98c9-5ed65c8ad1e7"
sns.set_style("whitegrid")
for col in Real:
sns.lmplot(y=col, x="Unnamed: 0", data=train, fit_reg=False, hue='SeriousDlqin2yrs', legend=True,
size=5, aspect=1)
plt.show()
# + [markdown] id="BqN6EUz3y_7U" colab_type="text"
# ### Brief Observations
#
# - DebtRatio': Contrastingly, those who have had financial distress (SeriousDlqin2yrs=1) possess a lower DebtRatio
#
# - MonthlyIncome': Interesting! Data-set actually has a huge income disparity for those who have had financial distress (SeriousDlqin2yrs=1).
#
# - Evidently, from our Preliminary Overview it has a StandardDeviation of 3.650860e+04.
#
#     - The disparity roughly spans from below 33,000 up to a cap around 100,000, whereas the non-distressed group (SeriousDlqin2yrs=0) is more evenly spread.
#
# - NetWorth': Reasonable sense, since those who have had financial distress (SeriousDlqin2yrs=1) have lower net worth
#
# - MonthlyDebtPayments': Similar to the pattern found in 'DebtRatio', those who have had financial distress (SeriousDlqin2yrs=1) are paying out less on existing debts
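#
# A quick check of the income disparity claim (a sketch, assuming `train` still contains 'MonthlyIncome' at this point, i.e. before the later column drops):

# +
# Summary statistics of MonthlyIncome within each target class
train.groupby('SeriousDlqin2yrs')['MonthlyIncome'].describe()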
# + [markdown] id="yG0lfno0y_7V" colab_type="text"
# ### Bivariate Analysis
# + id="Z1bihag-y_7W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="312c69a7-d925-477d-b191-717b585bf86f"
BiVariate_1 = ['CombinedLoan',
'NumberOfTime30-59DaysPastDueNotWorse', 'NumberOfDependents', 'CombinedDefault',
'DebtRatio', 'MonthlyIncome', 'NetWorth']
sns.set_style("whitegrid")
for col in BiVariate_1:
sns.lmplot(y=col, x='age', data=train, fit_reg=False, hue='Age_Map', legend=True,
size=5, aspect=1)
plt.show()
# + [markdown] id="0UrriJEEy_7Y" colab_type="text"
# ### Observation:
#
# - CombinedLoan': For both categories (those who made loans and those who did not), 'Retired' dominates by more than 2x
#
# - NumberOfTime30-59DaysPastDueNotWorse':
#
#     - At the high extreme (exceeding the deadline over 95 times), observations are split evenly between the 'Working' and 'Senior' categories, but rarely 'Retired'.
#
#     - At the low extreme (exceeding the deadline fewer than 15 times), observations are again split evenly between 'Working' and 'Senior', but this time dominated by 'Retired'.
#
# - NumberOfDependents': The 'Working' and 'Senior' categories tend to have a higher number of dependents. Evidently, mortality means that as we grow older we see more deaths.
#
# - CombinedDefault': Same pattern as 'CombinedLoan'
#
# - DebtRatio': Evidently, as 'age' increases 'DebtRatio' increases. But begins falling upon retirement at age 63
#
# - MonthlyIncome': Similar to before, in the uni-variate EDA we spotted a disparity in 'MonthlyIncome'. But this time we can clearly see a Gaussian shape appearing, which is also similar to the 'DebtRatio' pattern
#
# - Evidently, as 'age' increases 'MonthlyIncome' increases. But begins falling upon retirement at age 63
#
# - NetWorth': This emphasizes the pattern. Evidently, as 'age' increases 'NetWorth' increases. But begins falling upon retirement at age 63.
#
#     - HOWEVER, the positive gradient also highlights a flaw in our 'NetWorth' derivation, since we essentially assume constant income growth and ignore the time value of money (i.e., from my placement year in a pensions consultancy firm: we would need annuities, which account for both interest rates and mortality).
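#
# A hedged illustration of that critique (not part of the original derivation): a present-value style net worth that discounts future income with an annuity factor. The `discount_rate` and the retirement age of 63 are assumptions for illustration only.

# +
# Present value of an annual income stream paid until an assumed retirement age of 63,
# discounted at an assumed flat rate (annuity-immediate factor: (1 - (1+r)^-n) / r).
discount_rate = 0.03
years_to_retirement = (63 - train['age']).clip(lower=0)
annuity_factor = (1 - (1 + discount_rate) ** (-years_to_retirement)) / discount_rate
networth_pv = train['MonthlyIncome'] * 12 * annuity_factor
networth_pv.describe()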
# + id="1CkD-q6cy_7Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="4a6fe57f-b251-4f21-e022-7aa5c0a04338"
BiVariate_2 = ['CombinedLoan',
'NumberOfTime30-59DaysPastDueNotWorse', 'NumberOfDependents', 'CombinedDefault',
'DebtRatio', 'MonthlyIncome', 'NetWorth']
sns.set_style("whitegrid")
for col in BiVariate_1:
sns.lmplot(y=col, x='MonthlyIncome', data=train, fit_reg=False, hue='Income_Map', legend=True,
size=5, aspect=1)
plt.show()
# + [markdown] id="fSxaUm2dy_7b" colab_type="text"
# ### Observations:
#
# - CombinedLoan': Clearly, those who have made & did not make Loans are dominated by the higher tier income earners
#
# - NumberOfTime30-59DaysPastDueNotWorse': From a relative perspective, those who exceed the 30-59Days deadline are dominated by higher tier income earners
#
# - NumberOfDependents': The higher the 'MonthlyIncome', the lower the Dependents
#
# - CombinedDefault': Same pattern as 'CombinedLoan'
#
# - DebtRatio': The higher the 'MonthlyIncome', the higher the 'DebtRatio'
#
# - NetWorth': Obviously, higher 'MonthlyIncome' equates to higher 'NetWorth'
# + id="BgcdDJjOy_7d" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 369} outputId="50efb25a-4b8b-4bfc-ce61-4c100185613a"
sns.set_style("whitegrid")
sns.lmplot(y='DebtRatio', x='MonthlyIncome', data=train, fit_reg=False, hue='SeriousDlqin2yrs', legend=True,
size=5, aspect=1)
plt.show()
# + [markdown] id="9r4dHRjey_7f" colab_type="text"
# - Those who have had financial distress (SeriousDlqin2yrs=1) clearly have lower 'DebtRatio' & 'MonthlyIncome'
# + id="2XbceKPty_7f" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 369} outputId="3a7d9c4c-8f6d-4cbf-b75e-844cca8d3002"
sns.lmplot(y='NumberOfTime30-59DaysPastDueNotWorse', x='MonthlyIncome', data=train, fit_reg=False,
hue='SeriousDlqin2yrs', legend=True, size=5, aspect=1)
plt.show()
# + [markdown] id="b3qTnEEYy_7n" colab_type="text"
# - Contrastingly, often those who have had financial distress (SeriousDlqin2yrs=1) have exceeded the 30-59Days deadline only a few times
# + id="N9nISIqUy_7o" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 369} outputId="5e309742-0285-4678-ed37-24f625ef8df0"
sns.lmplot(y='NumberOfOpenCreditLinesAndLoans', x='MonthlyIncome', data=train, fit_reg=False,
hue='SeriousDlqin2yrs', legend=True, size=5, aspect=1)
plt.show()
# + [markdown] id="eQuqScA9y_7q" colab_type="text"
# - Similarly, those who have had financial distress (SeriousDlqin2yrs=1) actually open fewer loans than those without financial distress (SeriousDlqin2yrs=0)
# + id="3EpwErQiy_7r" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 369} outputId="8067ac36-73b6-454d-ba80-2bee31fb1e7c"
sns.lmplot(y='NumberRealEstateLoansOrLines', x='MonthlyIncome', data=train, fit_reg=False,
hue='SeriousDlqin2yrs', legend=True, size=5, aspect=1)
plt.show()
# + id="0l0BFufIy_7s" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 369} outputId="791d2369-8a40-433e-ce65-516232e16704"
sns.lmplot(y='DebtRatio', x='NumberRealEstateLoansOrLines', data=train, fit_reg=False, hue='SeriousDlqin2yrs',
legend=True, size=5, aspect=1)
plt.show()
# + [markdown] id="7G3QAy7Xy_7w" colab_type="text"
# - Similarly, those who have had financial distress (SeriousDlqin2yrs=1) actually have a lower 'DebtRatio' than those without financial distress (SeriousDlqin2yrs=0)
# + id="yMPPc2dSy_7x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 369} outputId="fa3b0360-67ae-4c5d-9de4-5f2bca0e1663"
sns.lmplot(y='NumberOfOpenCreditLinesAndLoans', x='NumberRealEstateLoansOrLines', data=train, fit_reg=False,
hue='SeriousDlqin2yrs', legend=True, size=5, aspect=1)
plt.show()
# + [markdown] id="aL6abz6Ky_7y" colab_type="text"
# - A realistic scenario here:
# as RealEstateLoans increases, borrowers' tendency to open other lines and loans (credit loans) decreases.
# + [markdown] id="NqR-2q4Fy_7y" colab_type="text"
# ### Quick Summary over EDA:
#
# - Delinquent acts (making excessive loans, defaulting, or exceeding deadlines) are often made by the 'Retired' group
#
# - Realistic data set: income plateaus and then falls with advancing age
#
# - Delinquent acts are also often made by higher-tier income earners
#
# - Credit balloon: higher-tier income earners exhibit the higher extremes of making loans and experiencing defaults
#
# - Higher-tier income earners tend to have fewer dependents. A self-centred data set?
#
# - Contrasting relationships: when experiencing financial distress (SeriousDlqin2yrs=1), borrowers actually have an "apt" financial profile (lower 'DebtRatio', fewer exceeded deadlines, loans and defaults) but only a low 'MonthlyIncome'
#
# - Low 'MonthlyIncome' is the main driver of financial distress, while 'DebtRatio', exceeded deadlines, loans made and defaults have a less significant effect.
#
# + id="BwhT5aISy_7z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 805} outputId="f3d0297f-c9f8-4cc4-faba-9c9f305f4cff"
cor = train.corr()
plt.figure(figsize=(17, 10))
sns.heatmap(cor, annot=True, cmap='YlGnBu')
plt.show()
# + [markdown] id="yTVSY1Jly_70" colab_type="text"
# ## Conclusions from the Heatmap:
#
# - Now with the added support of the HeatMap correlations to our EDA analysis, we can further justify and decide on which features to discard or keep.
#
# - Keep CombinedDefault since it outperforms in correlation
#
# - Keep 'NumberOfTime30-59DaysPastDueNotWorse' since it has the highest target-variable correlation amongst the original 3, while still bearing low multi-collinearity with CombinedDefault
#
# - Drop NumberOfTime60-89DaysPastDueNotWorse & NumberOfTimes90DaysLate
#
# - Keep NetWorth since it outperforms in correlation
#
# - Drop MonthlyIncome; Since NetWorth has higher correlation. NetWorth as a proxy for 'MonthlyIncome' given the formula.
#
# - Keep CombinedLoans. Used as proxy for original features
#
# - Drop NumberOfOpenCreditLinesAndLoans & NumberRealEstateLoansOrLines to avoid multi-collinearity.
#
# - Drop MonthlyDebtPayments since its correlation is still lower than 'DebtRatio'
#
# - Drop Age_Map & is_Retired & is_Senior & is_Working, since original 'age' outperforms
#
# - Drop Income_Map & is_LowY & is_MidY & is_HighY, since original 'MonthlyIncome'' outperforms
# + id="Ebg7gWA2y_70" colab_type="code" colab={}
train_Drop = train
ColumnsToDrop = ['Unnamed: 0', 'NumberOfTime60-89DaysPastDueNotWorse', 'NumberOfTimes90DaysLate',
'MonthlyIncome',
'NumberOfOpenCreditLinesAndLoans', 'NumberRealEstateLoansOrLines',
'MonthlyDebtPayments',
'Age_Map', 'is_Retired', 'is_Senior', 'is_Working',
'Income_Map', 'is_LowY', 'is_MidY', 'is_HighY']
train.drop(columns=ColumnsToDrop, inplace=True)
# + id="ufba0X9jy_73" colab_type="code" colab={}
test['CD'] = (test['NumberOfTime30-59DaysPastDueNotWorse']
+ test['NumberOfTimes90DaysLate']
+ test['NumberOfTime60-89DaysPastDueNotWorse'])
test['CombinedDefault'] = 1
test.loc[(test['CD'] == 0), 'CombinedDefault'] = 0
del test['CD']
# + id="a17bCVoAy_74" colab_type="code" colab={}
test['NetWorth'] = test['MonthlyIncome'] * test['age'] / NetWorthDivisor
test['CL'] = (test['NumberOfOpenCreditLinesAndLoans']
+ test['NumberRealEstateLoansOrLines'])
test['CombinedLoan'] = 1
test.loc[test['CL'] >= LoanLinesBuffer, 'CombinedLoan'] = 1
test.loc[test['CL'] < LoanLinesBuffer, 'CombinedLoan'] = 0
del test['CL']
# + id="1MjpNikty_74" colab_type="code" colab={}
to_drop = ['Unnamed: 0', 'NumberOfTime60-89DaysPastDueNotWorse', 'NumberOfTimes90DaysLate',
'MonthlyIncome',
'NumberOfOpenCreditLinesAndLoans', 'NumberRealEstateLoansOrLines']
test.drop(columns=to_drop, inplace=True)
# + id="W-FrBGIjOdVA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1d8f28b2-23b6-404f-b3ce-9f7ffd441ac1"
train.shape
# + id="3RRWM3VAPDl2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="90924ece-f986-42b9-849c-8b008fec904a"
test.shape
# + id="9y90kqUj2AbW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="8870317f-b251-4a7f-f133-c2519f52755f"
train.columns
# + id="Axm0_3pxWety" colab_type="code" colab={}
X = train.drop('SeriousDlqin2yrs',axis=1)
y = pd.DataFrame(train.pop('SeriousDlqin2yrs'))
# + id="LVT4J0xp28br" colab_type="code" colab={}
X_test = test.drop('SeriousDlqin2yrs',axis=1)
y_test = pd.DataFrame(test.pop('SeriousDlqin2yrs'))
# + id="VQocP0fV2I-E" colab_type="code" colab={}
from scipy.stats import zscore
# + id="zNFJO4oJ2O61" colab_type="code" colab={}
scaled_train= X.apply(zscore)
scaled_test = X_test.apply(zscore)
# + id="GyDG77DLFUo9" colab_type="code" colab={}
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
# + id="a5EJuIYrHPTN" colab_type="code" colab={}
# fit on the scaled training features and the target series defined above, then score the test set
pred = knn.fit(scaled_train, y.values.ravel()).predict_proba(scaled_test)
pred = pred[:,1]
# + id="lmCs7M8cKjdU" colab_type="code" colab={}
result = pd.read_csv('/content/drive/My Drive/Colab Notebooks/Credit Scoring/cs-test.csv', na_values=-1)
# + id="PsE5elFRK3Gr" colab_type="code" colab={}
result = result.drop(["RevolvingUtilizationOfUnsecuredLines",
"age",
"NumberOfTime30-59DaysPastDueNotWorse",
"DebtRatio",
"MonthlyIncome",
"NumberOfOpenCreditLinesAndLoans",
"NumberOfTimes90DaysLate",
"NumberRealEstateLoansOrLines",
"NumberOfTime60-89DaysPastDueNotWorse",
"NumberOfDependents"], axis=1)
# + id="CA6Shf_2J7kz" colab_type="code" colab={}
result.SeriousDlqin2yrs = pred
result = result.rename(columns={'Unnamed: 0': 'Id',
'SeriousDlqin2yrs': 'Probability'})
# + id="ty_gdLwsHgcl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="11e830c0-2b3a-4f40-943e-8013410faeea"
result.head()
# + id="O9q6wpbwLkiP" colab_type="code" colab={}
from google.colab import files
result.to_csv('Credit_score_knn.csv',index=False)
files.download('Credit_score_knn.csv')
# + [markdown] id="UPZsD1wuNbgx" colab_type="text"
# So the KNN base model gives us an accuracy of 68.739%. Let's apply the random forest algorithm.
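#
# (The 68.739% figure is presumably the score returned for the Kaggle submission above. As a hedged local sanity check, a cross-validated AUC on the training set could be computed like this.)

# +
from sklearn.model_selection import cross_val_score

# 3-fold cross-validated ROC AUC for the KNN baseline on the scaled training features
knn_cv_auc = cross_val_score(KNeighborsClassifier(), scaled_train, y.values.ravel(),
                             cv=3, scoring='roc_auc')
print('KNN CV AUC: %.3f +/- %.3f' % (knn_cv_auc.mean(), knn_cv_auc.std()))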
# + id="jvnppikYOfQx" colab_type="code" colab={}
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
# + id="wd_caIbPMwIa" colab_type="code" colab={}
rf = RandomForestClassifier(n_estimators=50, max_features='sqrt')
rf = rf.fit(X,y)
# + id="cir0iQfTOPnr" colab_type="code" colab={}
features = pd.DataFrame()
features['feature'] = X.columns
features['importance'] = rf.feature_importances_
features.sort_values(by=['importance'], ascending=True, inplace=True)
features.set_index('feature', inplace=True)
# + id="3ZsAyZDlPOql" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 502} outputId="7d982e16-a7f0-47c4-c368-43abbd208f49"
features.plot(kind='barh', figsize=(12, 8))
# + id="i9BOmBh0PUoZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 134} outputId="97691a6e-00c5-4384-a271-5974347ca392"
parameters = {'n_estimators': 100, 'random_state' : 123}
model2 = RandomForestClassifier(**parameters)
model2.fit(X, y)
# + id="KRBI7bn6Pk6p" colab_type="code" colab={}
# predict on the unscaled test features, matching how the model was fit on X above
rf_pred = model2.predict_proba(X_test)
rf_pred = rf_pred[:,1]
# + id="Pl9sMfvNRDCs" colab_type="code" colab={}
result.SeriousDlqin2yrs = rf_pred
result = result.rename(columns={'Unnamed: 0': 'Id',
'SeriousDlqin2yrs': 'Probability'})
# + id="TbH28mI3RSeD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="25b494a6-160b-4cd9-df98-2603bc41c710"
result.head()
# + id="Mx98YmWCRWNW" colab_type="code" colab={}
from google.colab import files
result.to_csv('Credit_score_rf.csv',index=False)
files.download('Credit_score_rf.csv')
# + [markdown] id="xIC4U63cR1bs" colab_type="text"
# Hence, random forest has improved the accuracy score to 73%.
# + id="-JpKyDz-RdS9" colab_type="code" colab={}
| 39,445 |
/examples/lyrics_based_semantic_networks/lyrics_based_semantic_networks.ipynb
|
d43ea0ab8994117d82497a5a1cb4abe8f7fa353b
|
[] |
no_license
|
omermadmon/Text2Net
|
https://github.com/omermadmon/Text2Net
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 880,816 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="D16ZrWJMQXhn"
# # Psychological and Cognitive Networks Project
# ## Lyrics-Based Semantic Networks
#
# ### Authors:
# * Omer Madmon
# * Ariel Kreisberg Nitzav
# + [markdown] id="8eWLXX6aQ9xl"
# ## Background
# + [markdown] id="mjT8NOGUQnfy"
# In this project, we examine the differences and similarities between three popular music genres: Pop, Hip-Hop and Rock. The three differ in multiple aspects, among them their lyrical content: based on the work of Sarjoun Doumit et al., we seek to examine lyrics of songs which belong to the three genres mentioned using Network Analysis, and draw conclusions regarding their typical structure, their lyrical richness and the ideas expressed in them.
# + [markdown] id="dKlYxO0kR8sV"
# ## Data
# + [markdown] id="T2DYVqCwQveM"
# The data was acquired from "Songs Lyrics From 6 Musical Genres" dataset, which is available in [Kaggle](https://www.kaggle.com/neisse/scrapped-lyrics-from-6-genres).
# + [markdown] id="ePdVthFeRXEr"
# ## Pre-Processing
# + [markdown] id="f5e3gZXAUcK1"
# We first perform some pre-processing operations and create an artists-lyrics unified dataframe for each genre.
# Preprocessing can be viewed in data_preprocessing.py.
# + pycharm={"name": "#%%\n"}
import pandas as pd
df_dict = dict()
for genre in ['Rock', 'Pop', 'Hip Hop']:
genre_url = genre.replace(' ', '%20')
url = f'https://github.com/omermadmon/Text2Net/blob/master/examples/lyrics_based_semantic_networks/data/{genre_url}_data.csv?raw=true'
df_dict[genre] = pd.read_csv(url, index_col=0)
# + colab={"base_uri": "https://localhost:8080/", "height": 423} id="FFgLHK0jT89n" outputId="a8ea75d0-0c2a-49bf-fc44-2315b510ff66"
df_dict['Rock']
# + colab={"base_uri": "https://localhost:8080/", "height": 423} id="9M7CaFj8UAik" outputId="cc3e12af-e104-4164-ed6a-98c8cc569de6"
df_dict['Pop']
# + colab={"base_uri": "https://localhost:8080/", "height": 423} id="llt2jFLhUDKk" outputId="cfdbc49e-0f39-43ee-920c-1f541f36c495"
df_dict['Hip Hop']
# + [markdown] id="H47jQAzvVz4l"
# ## Network Estimation
# + [markdown] id="A8iBP2hJYcfO"
# For each genre, we will sample 50 songs and construct a network from this sample:
# + id="j28Do_U2Vw4G"
from Text2Net import Text2Net
from Utils import visualize
import random
graph = {}
sample_size = 50
for genre in ['Rock', 'Hip Hop', 'Pop']:
lyrics = '\n'.join([df_dict[genre]['Lyric'][i] for i in random.sample(range(0, len(df_dict[genre]['Lyric'])), sample_size)])
graph[genre] = Text2Net(lyrics).transform(n_nodes=100, weight_function='jaccard')
# + colab={"base_uri": "https://localhost:8080/", "height": 591} id="D4dnEj1wY9yE" outputId="429c04b8-6cf8-4093-f4ff-c931a6e3b9cb"
visualize(graph['Rock'], 'Rock', nodes_factor=5, edges_factor=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 591} id="aChdP1O0ZDkl" outputId="53dec2ba-676c-445d-fddd-7f34dd33151c"
visualize(graph['Pop'], 'Pop', nodes_factor=5, edges_factor=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 591} id="l0ydfqSWZHBT" outputId="526eb145-58d2-45dd-97a3-8df3008c4c14"
visualize(graph['Hip Hop'], 'Hip Hop', nodes_factor=5, edges_factor=1)
# + [markdown] id="t_UmZRLjb8O2"
# The Hip Hop network is the most connected, as expected.
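#
# A hedged numeric check of that claim (using the `graph` dict built above):

# +
import networkx as nx

# Edge density and average degree per genre network
for genre_name, G in graph.items():
    avg_deg = sum(dict(G.degree()).values()) / G.number_of_nodes()
    print('%s: density=%.3f, average degree=%.2f' % (genre_name, nx.density(G), avg_deg))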
# + [markdown] id="SdNQvws9ZOPi"
# ## Bootstrap
# + [markdown] id="FKdlHCjZZdVm"
# We will create 50 bootstrap samples, each of size 200.
# The bootstrap sampling implementation is available in bootstrap_sampling.py.
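#
# For reference, a minimal sketch of what each bootstrap iteration might do (the actual implementation lives in bootstrap_sampling.py; the measure names below simply mirror the keys loaded from the pickle in the next cell):

# +
import networkx as nx

def bootstrap_measures(lyric_series, n_samples=50, sample_size=200):
    """Sketch of the bootstrap loop: sample songs, build a network, record measures."""
    measures = {'ASPL': [], 'AVG_DEG': [], 'CC': []}
    for _ in range(n_samples):
        idx = random.sample(range(len(lyric_series)), sample_size)
        lyrics = '\n'.join(lyric_series.iloc[i] for i in idx)
        G = Text2Net(lyrics).transform(n_nodes=100, weight_function='jaccard')
        # assumes the sampled network is connected, as in the examples above
        measures['ASPL'].append(nx.average_shortest_path_length(G))
        measures['AVG_DEG'].append(sum(dict(G.degree()).values()) / G.number_of_nodes())
        measures['CC'].append(nx.average_clustering(G))
    return measures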
# + id="DjSnv2jmZYs8"
import pickle
with open('data/measures_dict.pickle', 'rb') as handle:
measures_dict = pickle.load(handle)
# + [markdown] id="tVjal8tebMil"
# ## Measures Descriptive Statistics & Statistical Testing
# + colab={"base_uri": "https://localhost:8080/", "height": 809} id="SrcFwVy0cOnj" outputId="119638d4-7317-41d4-ef9a-a9dd880628d6"
import matplotlib.pyplot as plt
genres = ['Rock', 'Hip Hop', 'Pop']
measures = ['ASPL', 'AVG_DEG', 'CC']
for measure in measures:
df = pd.DataFrame()
for genre, d in measures_dict.items():
df[genre] = pd.Series(d[measure])
boxplot = df.boxplot()
boxplot.set_title(f'{measure} Distributions Per Genre')
plt.show()
# + id="_SUaZK86cS08"
from itertools import combinations, product
from scipy.stats import ttest_ind
from collections import defaultdict
g_couples = list(combinations(genres, r=2))
hypothesis = list(product(g_couples, measures))
pv_dict = {}
for genres, measure in hypothesis:
T, pv = ttest_ind(measures_dict[genres[0]][measure], measures_dict[genres[1]][measure], equal_var=False)
pv_dict[(genres, measure)] = {'T': T, 'P-value': pv, 'Bonf. Adj. P-value': len(hypothesis)*pv,
'Rejected': len(hypothesis)*pv < 0.05}
pvalues_dataframe_rows = {k: defaultdict(int) for k in g_couples}
for key, value in pv_dict.items():
((genre1, genre2), measure) = (key[0][0], key[0][1]), key[1]
pv = value['Bonf. Adj. P-value']
pvalues_dataframe_rows[(genre1, genre2)][measure] = pv
results = pd.DataFrame(pvalues_dataframe_rows.values(),
index=[f'{genres[0]} - {genres[1]}' for genres in pvalues_dataframe_rows.keys()])
results
# + [markdown] id="cLGos6xFl72j"
# As the p-values indicate, all hypotheses are rejected for every reasonable simultaneous confidence level.
# + [markdown] id="0BzHEpIxdy6o"
# ## Topics Representation In Genres
# + [markdown] id="bwCsnSEFpNW2"
# Topic-genre score is calculated according to the formula:
#
# $score(G, Topic) := \sum_{word\, \in\, Topic\, \cap\, G.nodes} PageRank_{G}(word)$
#
# Where $G$ is the genre's network and $Topic$ is the list of words representing the topic.
# + id="aS5Leuiud7IX"
import networkx as nx
topics = {
'romantic': ['honey', 'goodbye', 'woman', 'love', 'heart', 'care', 'feel', 'friend', 'girl', 'feeling', 'baby',
'relationship', 'soul', 'boy', 'heartbreak', 'break'],
'profanity': ['sex', 'sexy', 'bitch', 'pussy', 'ass', 'nigga', 'shit', 'dirty', 'hoe', 'hell', 'fuck', 'suck', 'damn'],
'selfishness': ['i', 'im', 'me', 'mine', 'myself', 'self', 'am', 'ill', 'imma'],
'partying': ['hands', 'hair', 'jump', 'feet', 'club', 'night', 'party', 'dance', 'dancing', 'boom', 'tonight', 'summer', 'play']
}
topic_scores = { topic : defaultdict(list) for topic in topics.keys() }
sample_size = 200
num_samples = 20
for genre in ['Rock', 'Hip Hop', 'Pop']:
i = 0
while i < num_samples:
lyrics = '\n'.join([df_dict[genre]['Lyric'][i] for i in random.sample(range(0, len(df_dict[genre]['Lyric'])), sample_size)])
G = Text2Net(lyrics).transform(n_nodes=100, weight_function='jaccard')
pr = nx.pagerank(G, alpha = 0.9)
for topic, topic_words in topics.items():
topic_score = sum([pr[word] for word in set(topic_words).intersection(set(pr.keys()))])
topic_scores[topic][genre].append(topic_score)
i+=1
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="pqdYzmBnnXAB" outputId="5e01e656-10b3-4060-90c7-517bb393ca5d"
for topic in topic_scores.keys():
boxplot = pd.DataFrame(topic_scores[topic]).boxplot()
boxplot.set_title(f'{topic.capitalize()} Score Distributions Per Genre')
plt.show()
| 7,480 |
/examples/Python/Basic/mesh.ipynb
|
f30d4aab02f808f467fd0dfc7c6f88986614fe9b
|
[
"MIT"
] |
permissive
|
REXJJ/Open3D
|
https://github.com/REXJJ/Open3D
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 25,984 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# name: python385jvsc74a57bd0b74de16614daa0846ddfaf5c205b09a61d1fef6a850ce9d5a0bef4ebf69fbd07
# ---
import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv("results.csv")
df
df.plot(
x="Iterations",
y=["MWVR", "RWVR", "K100WVR"],
label=["Prior states", "Random", "Keep training 100x"],
title="Model performance with training",
ylabel="Wins out of 100 games v.s. random",
xlabel="Training iterations",
)
plt.savefig("modelwins.png")
df.plot(
x="Iterations",
y=["MTM", "RTM", "K100TM"],
label=["Prior states", "Random", "Keep training 100x"],
title="Model average game time with training",
ylabel="Average game time in seconds",
xlabel="Training iterations",
)
plt.savefig("modeltime.png")
| 1,005 |
/Final Project - Online Business Starter Kit.ipynb
|
dd1ca5c6677933bfb97d38a6d59aeaaeaca99613
|
[] |
no_license
|
mikavelasco/mika-velasco
|
https://github.com/mikavelasco/mika-velasco
| 0 | 2 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 10,431 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Online Business Starter Kit
#
# Welcome to your online business starter kit! Here, we will help you build up your online business from scratch. From inventory making, to a selling system with a digital receipt, we shall guide you throughout this process!
# ## 1) Fill up your first inventory
#
# To create your inventory, provide a name for your inventory by filling the blanks guided by the # comments.
#
# Then, run the cell below (Shift + Return/Enter).
# #### "INVENTORY MAKER" CELL:
# +
inventory = {}
# ^ change inventory to the name of your inventory
command = input("Add product to your inventory? (Y/N): ")
while command == "Y":
product_code = str(input("Enter product code: "))
product_name = str(input("Enter product name: "))
product_price = float(input("Enter product price: "))
product_quantity = int(input("Enter product quantity: "))
def enter_product(code, name, price, quantity):
details = {}
details["name"] = name
details["price"] = price
details["quantity"] = quantity
entry = {code:details}
inventory.update(entry)
# ^ change inventory to the name of your inventory
enter_product(product_code, product_name, product_price, product_quantity)
command = input("Add another product? Y - Yes and N - End inventory edit: ")
print("------------------------------------------")
print("Here if your current inventory:")
print("Name of inventory: "+"inventory")
# ^ change inventory to the name of your inventory in between the quotation marks
print(inventory)
# -
# #### In case you have new products you weren't able to put in your inventory before, you may use the code below!
#
# Please read all the instructions in hashtags before you run the cell!
# #### "ADD TO AN EXISTING INVENTORY" CELL:
# +
# if you want to add more to your inventory,
# insert the name of the inventory you want to add to down below
# guided by the # comment
# then, you may run this cell
# # copy this code down below to a new cell and execute,
# if you want to add more items to your inventory in the future
command = input("Add product to your inventory? (Y/N): ")
while command == "Y":
product_code = str(input("Enter product code: "))
product_name = str(input("Enter product name: "))
product_price = float(input("Enter product price: "))
product_quantity = int(input("Enter product quantity: "))
def enter_product(code, name, price, quantity):
details = {}
details["name"] = name
details["price"] = price
details["quantity"] = quantity
entry = {code:details}
inventory.update(entry)
# ^ change inventory to the name of the inventory you want to add to
enter_product(product_code, product_name, product_price, product_quantity)
command = input("Add another product? Y - Yes and N - End inventory edit: ")
print("------------------------------------------")
print("Here if your current inventory:")
print("Name of inventory: "+"inventory")
# ^ change inventory to the name of your inventory in between the quotation marks
print(inventory)
# -
# #### 1.a) Q: "What if I want to create more inventories in the future?"
# #### A: Just copy the contents or duplicate the "INVENTORY MAKER" CELL and follow the instructions for making an inventory all over again!
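# For convenience, the same logic can also be wrapped in a reusable function (a sketch that is not part of the original kit), so that each new inventory is just one call away:

# +
def make_inventory():
    """Interactively build and return a new inventory dictionary."""
    new_inventory = {}
    command = input("Add product to your inventory? (Y/N): ")
    while command == "Y":
        code = str(input("Enter product code: "))
        name = str(input("Enter product name: "))
        price = float(input("Enter product price: "))
        quantity = int(input("Enter product quantity: "))
        new_inventory[code] = {"name": name, "price": price, "quantity": quantity}
        command = input("Add another product? Y - Yes and N - End inventory edit: ")
    return new_inventory

# Example: create a second inventory for another shop branch
# branch_two_inventory = make_inventory()
# -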
# ## 2) Execute your ordering system
#
# To execute your ordering system, you must add the name of your chosen inventory in certain parts of this code (follow # instructions).
#
# Then, run the cell!
#
# **Note that this ordering system will only apply to one inventory. Copy the contents of the cell below/duplicate the cell below if you want to create an ordering system from another inventory**
# +
def add_to_cart(cart, ordered_item):
add_to_cart = cart.append(ordered_item)
return add_to_cart
def generate_receipt(cart):
print("-----------------------------------------------------")
print(" Official Receipt ")
print()
total_payment = 0
print("Product","\t","Quantity","\t","Subtotal")
for item in cart:
print(inventory[item["code"]]["name"],"\t","\t",item["qty"],"\t","\t","P",item["subtotal"])
# ^ change inventory to the inventory you want to get item from
total_payment += item["subtotal"]
print()
print("Your total payment is: ","P",total_payment)
print("-----------------------------------------------------")
cart = []
command = "Y"
while(True):
command = input("Would you like to order? (Y/N): ")
if(command.upper()=="N"):
break
else:
add = "Y"
while(add=="Y"):
code = input("Enter product code: ")
qty = int(input("Enter quantity: "))
ordered_item = dict()
ordered_item["code"] = code
ordered_item["qty"] = qty
ordered_item["subtotal"] = int(qty) * inventory[code]["price"]
# ^ change inventory to the inventory you want to get item from
add_to_cart(cart,ordered_item)
add = input("Add more items? (Y/N): ").upper()
        total_payment = generate_receipt(cart)
cart = []
print("Please proceed to the next window for your payment and shipping.")
# -
# ## 3) Execute the payment and shipping system
# +
print("-----------------------------------------------------")
print(" Payment and Shipping ")
print("Here is your total payment: ", total_payment)
print()
print(" Shipping Method ")
print("Type pick-up or shipping")
shipping_method = (input("Please enter your preferred method for receiving your items: ")
if (shipping_address=="pick up"):
break
else:
shipping_address = str(input("Enter your shipping address: "))
print()
print(" Payment Method ")
print("Type cash on delivery, cash deposit or card")
payment_method = (input("Please enter your preferred method of payment: ")
if (payment_method=="cash on delivery"):
given_payment = float(input("Please enter the amount you are paying: "))
def check_for_change(given_payment):
if given_payment == total_payment:
print("Exact amount was given.")
elif given_payment < total_payment:
print("Insufficient payment.")
return payment_method
else:
check_for_change = total_payment-given_payment
print("Your change is: ",check_for_change)
elif (payment_method=="cash deposit"):
deposit_account = str(input("Through which bank? (BDO/BPI): "))
print("Please provide us with your deposit slip via this email: ")
break
else:
card_type = str(input("Type of card (ex. Mastercard, etc.): "))
card_name = str(input("Please enter the name on your credit card: "))
card_number = int(input("Please enter your credit card number: "))
card_expiration = int(input("Please enter your credit card expiration date (MM/YY): "))
card_cvv = int(input("Please enter the 3-digit CVV: "))
print("Thank you for shopping with us! Have a great day.")
| 7,596 |
/Paired T-test Systolic Blood Pressure.ipynb
|
6263f5f66a404395d8170be44ea33089e5e5ae54
|
[
"MIT"
] |
permissive
|
jbonfardeci/data-sci-notebooks
|
https://github.com/jbonfardeci/data-sci-notebooks
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 39,157 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: 'Python 3.5.2 64-bit (''root'': conda)'
# language: python
# name: python35264bitrootconda11b641845f59432291708858f802b5d3
# ---
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
# %matplotlib inline
df = pd.read_csv('blood_pressure.csv')
df['Diff'] = df['After'] - df['Before']
compare = df[['Before', 'After', 'Diff']]
compare.head(10)
compare.describe()
# Assumption Check: Outliers
df[['Before', 'After']].plot(kind='box')
plt.show()
# Assumption Check: Normal Distribution
df['Diff'].plot('hist', title='SBP Difference')
plt.show()
# Check for normally distributed data
# with Q-Q plot.
stats.probplot(df['Diff'], plot=plt)
plt.title('SBP Difference Q-Q Plot')
plt.show()
# Shapiro-Wilk test for normality
# (W-test value, p-value)
stats.shapiro(df['Diff'])
# The findings are statistically significant!
# One can reject the null hypothesis in support of the alternative.
stats.ttest_rel(df['Before'], df['After'])
pvalue = 0.0011
# two-sided test: reject the null hypothesis since the p-value is below alpha = 0.05
print(pvalue < 0.05)
# ## Interpretation of the Results
# A paired sample t-test was used to analyze the blood pressure before and after the intervention to test if the intervention had a significant affect on the blood pressure. The blood pressure before the intervention was higher (156.45 ± 11.39 units) compared to the blood pressure post intervention (151.36 ± 14.18 units); there was a statistically significant decrease in blood pressure (t(119)=3.34, p= 0.0011) of 5.09 units.
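# A hedged follow-up (not in the original write-up): a 95% confidence interval for the mean paired difference, using the same scipy.stats tools as above.
diff = df['Diff']
ci_low, ci_high = stats.t.interval(0.95, len(diff) - 1, loc=diff.mean(), scale=stats.sem(diff))
print('Mean difference: %.2f, 95%% CI: (%.2f, %.2f)' % (diff.mean(), ci_low, ci_high))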
import re
import time
# PySpark session, SQL functions and window utilities used throughout this notebook
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import udf, col, split, avg, stddev
from pyspark.sql.functions import max as fmax, min as fmin
from pyspark.sql.types import IntegerType
import datetime
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from statsmodels.stats.proportion import proportions_ztest
# %matplotlib inline
# -
# ! spark-shell --version
#set seaborn style
sns.set(style="whitegrid")
# create a Spark session
spark = SparkSession.builder.appName('Sparkify_local').getOrCreate()
# # Load and Clean Dataset
# In this workspace, the mini-dataset file is `mini_sparkify_event_data.json`. Load and clean the dataset, checking for invalid or missing data - for example, records without userids or sessionids.
df = spark.read.json('mini_sparkify_event_data.json')
df.show(3)
df.limit(10).toPandas()
df.printSchema()
df.describe('userId').show()
# - There appear to be blank user IDs
df.describe('sessionId').show()
df.count()
for col_ in df.columns:
print(col_,df.filter(col(col_).isNull()).count())
df.columns
df.dtypes
df.dropDuplicates().count()
df.filter(col("userId") == "" ).count()
## Clean empty user id's
df = df.filter(df.userId!="")
df.count()
# +
#observation:no duplicates
# +
#stats:
# -
stats_df = df.describe().toPandas().set_index("summary")
all_stats =spark.createDataFrame(stats_df.reset_index()).cache()
stats_df
df.filter( col("userId") == "" ).count()
# +
# Create a user defined function for formating the timestamp
get_time = udf(lambda x: datetime.datetime.fromtimestamp(x / 1000.0).strftime("%Y-%m-%d %H:%M:%S"))
#Apply the udf on the ts column
df_cleaned = df.withColumn("time", get_time(df.ts))
# -
df_cleaned.limit(5).toPandas()
df_cleaned.createOrReplaceTempView("Sparkify_local_cleaned")
spark.sql('''
SELECT DISTINCT(auth)
FROM Sparkify_local_cleaned
''').show()
spark.sql('''
SELECT auth,COUNT(DISTINCT userId) AS user_counts
FROM Sparkify_local_cleaned
GROUP BY auth
ORDER BY user_counts DESC
''').show()
gender_count = spark.sql('''
SELECT gender,COUNT(DISTINCT userId) AS user_counts
FROM Sparkify_local_cleaned
GROUP BY gender
ORDER BY user_counts DESC
''')
gender_count.show()
# +
sns.barplot(x='gender',y='user_counts',data=gender_count.toPandas());
# -
spark.sql('''
SELECT count(*)
FROM Sparkify_local_cleaned
where userid != ''
''').show()
spark.sql('''
SELECT count(*)
FROM Sparkify_local_cleaned
where userid = ''
''').show()
df2 = spark.sql('''
SELECT *
FROM Sparkify_local_cleaned
where userid != ''
''')
df2.count()
df2.createOrReplaceTempView("Sparkify_local_cleaned2")
spark.sql('''
SELECT COUNT(DISTINCT(itemInSession)) AS item_counts
FROM Sparkify_local_cleaned2
''').show()
length_data = spark.sql('''
SELECT length
FROM Sparkify_local_cleaned2
''')
sns.distplot(length_data.toPandas().dropna());
spark.sql('''
SELECT level,COUNT(DISTINCT userId) AS user_counts
FROM Sparkify_local_cleaned2
GROUP BY level
ORDER BY user_counts DESC
''').show()
location_count = spark.sql('''
SELECT location,COUNT(DISTINCT userId) AS user_counts
FROM Sparkify_local_cleaned2
GROUP BY location
ORDER BY user_counts DESC
''').toPandas()
# +
#split city and state
location_count = location_count.join(location_count['location'].str.split(',',expand=True).rename(columns={0:'city',1:'state'})).drop('location',axis=1)
# -
location_count.groupby('city')['user_counts'].sum().sort_values(ascending=False).plot(kind='bar',figsize=(17,5));
location_count.groupby('state')['user_counts'].sum().sort_values(ascending=False).plot(kind='bar',figsize=(17,5));
df.select("page").dropDuplicates().show()
spark.sql('''
SELECT page,COUNT(userId) AS user_counts
FROM Sparkify_local_cleaned2
GROUP BY page
ORDER BY user_counts DESC
''').toPandas()
userAgent_count = spark.sql('''
SELECT userAgent,COUNT(DISTINCT userId) AS user_counts
FROM Sparkify_local_cleaned2
GROUP BY userAgent
ORDER BY user_counts DESC
''').toPandas()
# +
def get_browser(x):
if 'Firefox' in x:
return 'Firefox'
elif 'Safari' in x:
if 'Chrome' in x:
return 'Chrome'
else:
return 'Safari'
elif 'Trident' in x:
return 'IE'
else:
return np.NaN
# -
userAgent_count['browser'] = userAgent_count['userAgent'].apply(get_browser)
platform_dict = {'compatible': 'Windows', 'iPad': 'iPad', 'iPhone': 'iPhone',
'Macintosh': 'Mac', 'Windows NT 5.1': 'Windows','Windows NT 6.0': 'Windows', 'Windows NT 6.1': 'Windows',
'Windows NT 6.2': 'Windows', 'Windows NT 6.3': 'Windows', 'X11': 'Linux'}
userAgent_count['platform'] = userAgent_count['userAgent'].str.extract(r'\(([^\)]*)\)')[0].str.split(';').str[0].map(platform_dict)
# +
userAgent_count.groupby('browser')['user_counts'].sum().sort_values().plot(kind='bar');
# -
userAgent_count.groupby('platform')['user_counts'].sum().sort_values().plot(kind='bar');
time_data = spark.sql('''
SELECT time,userId
FROM Sparkify_local_cleaned2
''').toPandas()
time_data['time'] = pd.to_datetime(time_data['time'])
weekday_dict = {0:'Mon.',1:'Tues.',2:'Wed.',3:'Thur.',4:'Fri.',5:'Sat.',6:'Sun.'}
time_data['weekday'] = time_data['time'].dt.weekday.map(weekday_dict)
time_data['day'] = time_data['time'].dt.day
time_data['hour'] = time_data['time'].dt.hour
time_data.groupby('weekday')['userId'].count().loc[list(weekday_dict.values())].plot(kind='bar',color='#318ce7');
time_data.groupby('day')['userId'].count().plot(kind='bar',color='#318ce7');
time_data.groupby('hour')['userId'].count().plot(kind='bar',color='#318ce7');
# # Exploratory Data Analysis
# When you're working with the full dataset, perform EDA by loading a small subset of the data and doing basic manipulations within Spark. In this workspace, you are already provided a small subset of data you can explore.
#
# ### Define Churn
#
# Once you've done some preliminary analysis, create a column `Churn` to use as the label for your model. I suggest using the `Cancellation Confirmation` events to define your churn, which happen for both paid and free users. As a bonus task, you can also look into the `Downgrade` events.
#
# ### Explore Data
# Once you've defined churn, perform some exploratory data analysis to observe the behavior for users who stayed vs users who churned. You can start by exploring aggregates on these two groups of users, observing how much of a specific action they experienced per a certain time unit or number of songs played.
#define the flag event udf to transform event to 0 or 1
flag_event = udf(lambda x : 1 if x=='Cancellation Confirmation' else 0, IntegerType())
#define the current churn or not state
df_cleaned_cancel = df_cleaned.withColumn('Churn_state',flag_event('page'))
df_cleaned_cancel.select("page").dropDuplicates().show()
# +
df_cleaned_cancel.filter(df_cleaned_cancel.page=="Cancellation Confirmation").select("userId").dropDuplicates().show(10)
# -
df_cleaned_cancel.select(["userId", "page", "time", "level", "song", "sessionId"]).where(df_cleaned_cancel.userId == "125").sort("time").show(50)
# +
#define the current churn or not state
df_cleaned_cancel = df_cleaned.withColumn('Churn_state',flag_event('page'))
# -
#mark the user who have churned event
userwindow = Window.partitionBy('userId').rangeBetween(Window.unboundedPreceding,Window.unboundedFollowing)
df_cleaned_cancel = df_cleaned_cancel.withColumn('Churn',fmax('Churn_state').over(userwindow))
df_cleaned_cancel.limit(2).toPandas()
df_cleaned_cancel.dropDuplicates(['userId']).select('Churn').groupby('Churn').count().collect()
df_cleaned_cancel.dropDuplicates(["userId"]).groupby(["Churn", "auth"]).count().sort("Churn").show()
df_cleaned_cancel.dropDuplicates(["userId", "gender"]).groupby(["Churn", "gender"]).count().sort("Churn").show()
proportions_ztest([32,20],[121,104],alternative='two-sided')
df_cleaned_cancel.select(["Churn", "length"]).groupby(["Churn"]).agg(avg('length').alias('mean_length'),
stddev('length').alias('stdev_length'),
fmax('length').alias('max_length'),
fmin('length').alias('min_length')).show()
df_cleaned_cancel.dropDuplicates(["userId"]).groupby(["Churn", "level"]).count().sort("Churn").show()
# Two-proportion z-test comparing churn rates between free and paid users (counts taken from the table above)
proportions_ztest([8,44],[48,177],alternative='two-sided')
def normalize_data(groupby_data):
groupby_series = groupby_data.set_index(list(groupby_data.columns[:2]))
temp = groupby_series.unstack('Churn').fillna(0)
df = pd.DataFrame(((temp - temp.min()) / (temp.max() - temp.min())).stack()).reset_index()
df = df.rename(columns={df.columns[-1]:'result'})
return df
split_city_state = split(df_cleaned_cancel['location'], ',')
df_cleaned_cancel = df_cleaned_cancel.withColumn('city',split_city_state.getItem(0))
df_cleaned_cancel = df_cleaned_cancel.withColumn('state',split_city_state.getItem(1))
city_data = df_cleaned_cancel.dropDuplicates(["userId"]).groupby(["city","Churn"]).count().sort("city").toPandas()
city_data = normalize_data(city_data)
fig, ax = plt.subplots(figsize=(10,15))
sns.barplot( x="result",y="city", hue="Churn", data=city_data,ax=ax);
state_data = df_cleaned_cancel.dropDuplicates(["userId"]).groupby(["state","Churn"]).count().sort("state").toPandas()
state_data = normalize_data(state_data)
fig, ax = plt.subplots(figsize=(10,15))
sns.barplot( x="result",y="state", hue="Churn", data=state_data,ax=ax);
df_cleaned_cancel.select(["Churn", "page"]).groupby(["Churn", "page"]).count().sort("page").show()
page_data = df_cleaned_cancel.select(["page","Churn"]).groupby([ "page","Churn"]).count().sort("page").toPandas()
page_data = normalize_data(page_data)
fig, ax = plt.subplots(figsize=(8,8))
sns.barplot( x="result",y="page", hue="Churn", data=page_data,ax=ax);
browser = udf(lambda x : get_browser(x))
#get browsers
df_cleaned_cancel = df_cleaned_cancel.withColumn('browser',browser(df_cleaned_cancel.userAgent))
get_platform = udf(lambda x: platform_dict[re.findall(r'\(([^\)]*)\)',x)[0].split(';')[0]])
#get platform
df_cleaned_cancel = df_cleaned_cancel.withColumn('platform',get_platform(df_cleaned_cancel.userAgent))
browser_data = df_cleaned_cancel.select(["browser","Churn"]).groupby([ "browser","Churn"]).count().sort("browser").toPandas()
browser_data = normalize_data(browser_data)
fig, ax = plt.subplots(figsize=(8,8))
sns.barplot( x="result",y="browser", hue="Churn", data=browser_data,ax=ax);
platform_data = df_cleaned_cancel.select(["platform","Churn"]).groupby([ "platform","Churn"]).count().sort("platform").toPandas()
platform_data = normalize_data(platform_data)
fig, ax = plt.subplots(figsize=(8,8))
sns.barplot( x="result",y="platform", hue="Churn", data=platform_data,ax=ax);
get_hour = udf(lambda x: datetime.datetime.fromtimestamp(x / 1000.0).hour)
get_day = udf(lambda x: datetime.datetime.fromtimestamp(x / 1000.0).day)
get_weekday = udf(lambda x: datetime.datetime.fromtimestamp(x / 1000.0).strftime('%w'))
df_cleaned_cancel = df_cleaned_cancel.withColumn('hour', get_hour(df_cleaned_cancel.ts))
df_cleaned_cancel = df_cleaned_cancel.withColumn('day', get_day(df_cleaned_cancel.ts))
df_cleaned_cancel = df_cleaned_cancel.withColumn('dayofweek', get_weekday(df_cleaned_cancel.ts))
hour_data = df_cleaned_cancel.select(["Churn", "hour"]).groupby(["Churn", "hour"]).count().sort("hour").toPandas()
day_data = df_cleaned_cancel.select(["Churn", "day"]).groupby(["Churn", "day"]).count().sort("day").toPandas()
dayofweek_data = df_cleaned_cancel.select(["Churn", "dayofweek"]).groupby(["Churn", "dayofweek"]).count().sort("dayofweek").toPandas()
hour_data = normalize_data(hour_data)
fig, ax = plt.subplots(figsize=(10,5))
sns.barplot( x="hour",y="result", hue="Churn", data=hour_data,ax=ax,order=list(map(lambda x: str(x),range(24))));
day_data = normalize_data(day_data)
fig, ax = plt.subplots(figsize=(10,5))
sns.barplot( x="day",y="result", hue="Churn", data=day_data,ax=ax,order=list(map(lambda x: str(x),range(1,32))));
dayofweek_data = normalize_data(dayofweek_data)
fig, ax = plt.subplots(figsize=(10,5))
sns.barplot( x="dayofweek",y="result", hue="Churn", data=dayofweek_data,ax=ax);
# # Feature Engineering
# Once you've familiarized yourself with the data, build out the features you find promising to train your model on. To work with the full dataset, you can follow the following steps.
# - Write a script to extract the necessary features from the smaller subset of data
# - Ensure that your script is scalable, using the best practices discussed in Lesson 3
# - Try your script on the full data set, debugging your script if necessary
#
# If you are working in the classroom workspace, you can just extract features based on the small subset of data contained here. Be sure to transfer over this work to the larger dataset when you work on your Spark cluster.
df_cleaned_cancel.printSchema()
# On the basis of the above EDA, we can create features as follows:
#
# - Categorical Features (need label encoding)
# - gender
# - level
# - browser
# - platform
#
# - Numerical Features
# - mean, max, min, and std of song length per user
# - counts of selected page events (NextSong, Thumbs Up, Thumbs Down, Add to Playlist, Add Friend, Roll Advert)
# - number of unique songs and total songs played per user
# - number of unique artists per user
# - percentage of operations after the 15th of the month
# - percentage of operations on workdays
def label_encoding(col_name):
'''
transform categorical items to number
'''
temp = df_cleaned_cancel.select([col_name]).dropDuplicates().toPandas()
label_dict = {val:str(idx) for idx,val in enumerate(temp[col_name].tolist())}
result = df_cleaned_cancel.dropDuplicates(['userId']).select(['userId',col_name]).replace(label_dict,subset=col_name)
return result
def get_categorical_features():
'''
join all categorical features together
'''
feature_gender = label_encoding('gender')
feature_level = label_encoding('level')
feature_browser = label_encoding('browser')
feature_platform = label_encoding('platform')
result = feature_gender.join(feature_level,on='userId',how='inner').\
join(feature_browser,on='userId',how='inner').\
join(feature_platform,on='userId',how='inner')
return result
categorical_feature = get_categorical_features()
categorical_feature.show(2)
dfx = df_cleaned_cancel.select(["userId","page"]).groupby(["userId","page"]).count()
dfx.show(20)
df_cleaned_cancel.select('userId').show(20)
# +
page_count = df_cleaned_cancel.select(["userId","page"]).groupby(["userId","page"]).count()
# create the pivot table
# -
temp1 = page_count.groupby('userId').pivot('page').agg(first('count')).fillna(0)
temp1.show(3)
temp1 = temp1.select(['userId','NextSong','Thumbs Up', 'Thumbs Down', 'Add to Playlist', 'Add Friend', 'Roll Advert'])
# column names used to sum up for total
temp1.show()
cols = temp1.columns[1:]
cols
temp1.printSchema()
from pyspark.sql.functions import col, trim, lower
feature_page = temp1.withColumn('total', sum([col(c) for c in cols if c != 'userId']))
feature_page.show(5)
def get_numerical_features():
'''
    join all numerical features together (StandardScaler is applied later)
'''
#length
feature_length = df_cleaned_cancel.select(["userId", "length"]).groupby(["userId"]).agg(avg('length').alias('mean_length'),
stddev('length').alias('stdev_length'),
fmax('length').alias('max_length'),
fmin('length').alias('min_length'))
#page, reference url:https://stackoverflow.com/questions/56051438/pivot-table-in-pyspark
page_count = df_cleaned_cancel.select(["userId","page"]).groupby(["userId","page"]).count()
# create the pivot table
temp1 = page_count.groupby('userId').pivot('page').agg(first('count')).fillna(0)
# filter columns
temp1 = temp1.select(['userId','NextSong','Thumbs Up', 'Thumbs Down', 'Add to Playlist', 'Add Friend', 'Roll Advert'])
# column names used to sum up for total
cols = temp1.columns[1:]
# calculate the total
feature_page = temp1.withColumn('total', sum([col(c) for c in cols]))
#unique songs number
feature_nunique_song = df_cleaned_cancel.filter(df_cleaned_cancel.page=='NextSong').select(["userId","song"]).\
dropDuplicates(["userId","song"]).groupby(["userId"]).count()
feature_nunique_song = feature_nunique_song.selectExpr("userId as userId","count as nunique_song")
#total songs number
feature_ntotal_song = df_cleaned_cancel.filter(df_cleaned_cancel.page=='NextSong').select(["userId","song"]).\
groupby(["userId"]).count()
#source:https://exceptionshub.com/how-to-change-dataframe-column-names-in-pyspark.html
feature_ntotal_song = feature_ntotal_song.selectExpr("userId as userId","count as ntotal_song")
    #unique artists per user
feature_nunique_artist = df_cleaned_cancel.filter(df_cleaned_cancel.page=='NextSong').select(["userId","artist"]).\
dropDuplicates(["userId","artist"]).groupby(["userId"]).count()
feature_nunique_artist = feature_nunique_artist.selectExpr("userId as userId","count as nunique_artist")
    #percentage of operations
day_count = df_cleaned_cancel.filter(df_cleaned_cancel.day>=15).select(['userId']).groupby(["userId"]).count()
day_count = day_count.selectExpr("userId as userId","count as day_count")
total_count = df_cleaned_cancel.select(['userId']).groupby(["userId"]).count()
total_count = total_count.selectExpr("userId as userId","count as total_count")
dayofweek_count = df_cleaned_cancel.filter(df_cleaned_cancel.dayofweek<5).select(['userId']).groupby(["userId"]).count()
dayofweek_count = dayofweek_count.selectExpr("userId as userId","count as dayofweek_count")
feature_percentage_month = (total_count.alias("total").join(day_count.alias("day"), ["userId"]).\
select(col("userId"), (col("day.day_count") / col("total.total_count")).alias("month_percentage")))
feature_percentage_week = (total_count.alias("total").join(dayofweek_count.alias("day"), ["userId"]).\
select(col("userId"), (col("day.dayofweek_count") / col("total.total_count")).alias("week_percentage")))
#merge together
result = feature_length.join(feature_page,on='userId',how='inner').\
join(feature_nunique_song,on='userId',how='inner').\
join(feature_ntotal_song,on='userId',how='inner').\
join(feature_nunique_artist,on='userId',how='inner').\
join(feature_percentage_month,on='userId',how='inner').\
join(feature_percentage_week,on='userId',how='inner')
return result
label = df_cleaned_cancel.select(['userId','Churn']).dropDuplicates()
def get_data_for_train():
'''
merge features and label together
'''
categorical_feature = get_categorical_features()
numerical_feature = get_numerical_features()
label = df_cleaned_cancel.select(['userId','Churn']).dropDuplicates()
result = categorical_feature.join(numerical_feature,on='userId',how='inner').join(label,on='userId',how='inner')
#correct datatype
for col_name in result.columns[1:5]:
result = result.withColumn(col_name, result[col_name].cast(IntegerType()))
for col_name in result.columns[5:-1]:
result = result.withColumn(col_name, result[col_name].cast(FloatType()))
#fill NaN
result = result.na.fill(0)
return result
final_data = get_data_for_train()
final_data.write.save('final_data_new.json',format='json',header=True)
final_data = spark.read.json('final_data_new.json')
final_data_columns = final_data.columns
final_data_columns.remove('Churn')
final_data_columns.remove('userId')
categorical_features = ['gender','level','browser','platform']
numerical_features = [col_name for col_name in final_data_columns if col_name not in categorical_features]
check_df = final_data.toPandas()
check_df.shape
# +
#apply StandardScaler to the numerical features
vector = VectorAssembler(inputCols=numerical_features, outputCol='numerical_features')
temp = vector.transform(final_data)
scaler = StandardScaler(withMean=True, withStd=True, inputCol='numerical_features', outputCol='features_scaled')
scaler_fit = scaler.fit(temp)
result_scaled = scaler_fit.transform(temp)
#add categorical features to feature vector
vector = VectorAssembler(inputCols=categorical_features+['features_scaled'], outputCol='all_features')
result_scaled = vector.transform(result_scaled)
final_result = result_scaled.select(result_scaled.Churn.alias('label'), result_scaled.all_features.alias('features'))
# -
final_result.columns
final_result.persist()
# # Modeling
# Split the full dataset into train, test, and validation sets. Test out several of the machine learning methods you learned. Evaluate the accuracy of the various models, tuning parameters as necessary. Determine your winning model based on test accuracy and report results on the validation set. Since the churned users are a fairly small subset, I suggest using F1 score as the metric to optimize.
def undersample(df):
'''
Implement undersample on dataset, return a balanced dataset.
'''
    # size of the minority class (label == 1, i.e. churned users)
minoritySize = df.where(df.label == '1').count()
# two classes with the same size
df_minority = df.where(df.label == '1')
df_majority = df.where(df.label == '0').sample(1.0, seed=7).limit(minoritySize)
# concatenate them together
result = df_minority.union(df_majority)
#shuffle data
result = result.orderBy(rand())
return result
balanced_data = undersample(final_result)
check_balanced_df = balanced_data.toPandas()
#check out
balanced_data.groupby(balanced_data.label).count().show()
train, test = balanced_data.randomSplit([0.7, 0.3], seed=7)
test.count()
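# A minimal sketch (not part of the original workflow): the cell above performs a simple
# 70/30 train/test split. If you also want a held-out validation set, as mentioned in the
# section introduction, `randomSplit` accepts three weights; the 60/20/20 proportions and
# the variable names below are just assumptions.
# +
train_b, validation_b, test_b = balanced_data.randomSplit([0.6, 0.2, 0.2], seed=7)
validation_b.count()
# -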
# Initialize four models
clf_LR = LogisticRegression(maxIter=50)
clf_DT = DecisionTreeClassifier(seed=7)
clf_RF = RandomForestClassifier(seed=7)
clf_SVM = LinearSVC()
evaluator= MulticlassClassificationEvaluator(predictionCol="prediction")
# +
# collect results on the learners
all_results = {}
for clf in [clf_LR, clf_DT, clf_RF, clf_SVM]:
model_results = {}
# get the classifier name
clf_name = clf.__class__.__name__
# fit the dataset
print(f'{clf_name} is training...')
start = time.time()
model = clf.fit(train)
end = time.time()
model_results['train_time'] = round(end-start,6)
# predict
print(f'{clf_name} is predicting...')
start = time.time()
pred_test = model.transform(test)
end = time.time()
model_results['pred_time'] = round(end-start,6)
#metrics
print(f'{clf_name} is evaluating...')
model_results['f1_test'] = evaluator.evaluate(pred_test.select('label','prediction'),{evaluator.metricName: 'f1'})
print('Test F1-score: ',model_results['f1_test'])
all_results[clf_name] = model_results
all_results_df = pd.DataFrame(all_results)
all_results_df.to_csv('baseline.csv')
# -
all_results_df
# - Though LinearSVC took more training time, it achieved the highest F1 score (0.702). LogisticRegression had a moderate training time and F1 score, and tuning might push it higher. So I'll tune LinearSVC and LogisticRegression in the next section.
paramGrid = ParamGridBuilder().\
addGrid(clf_SVM.maxIter, [10, 100, 1000]).\
addGrid(clf_SVM.regParam, [0.01,0.1,10.0,100.0]).\
build()
crossval = CrossValidator(estimator=clf_SVM,
estimatorParamMaps=paramGrid,
evaluator=MulticlassClassificationEvaluator(metricName="f1"),
numFolds=3)
start = time.time()
cvModel_SVM = crossval.fit(train)
end = time.time()
print(f'Model tuning is done, spent {end-start}s.')
cvModel_SVM.avgMetrics
# +
pred = cvModel_SVM.transform(test)
print('Accuracy: {}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "accuracy"})))
print('F-1 Score:{}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "f1"})))
# -
cvModel_SVM.save('svm_model')
# ### Logistic Regression
# +
paramGrid = ParamGridBuilder().\
addGrid(clf_LR.elasticNetParam,[0.1, 0.5, 1]).\
addGrid(clf_LR.regParam,[0.01, 0.05, 0.1]).\
build()
crossval = CrossValidator(estimator=clf_LR,
estimatorParamMaps=paramGrid,
evaluator=MulticlassClassificationEvaluator(metricName="f1"),
numFolds=3)
# -
start = time.time()
cvModel_LR = crossval.fit(train)
end = time.time()
print(f'Model tuning is done, spent {end-start}s.')
cvModel_LR.avgMetrics
# +
pred = cvModel_LR.transform(test)
print('Accuracy: {}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "accuracy"})))
print('F-1 Score:{}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "f1"})))
# +
cvModel_LR.save('lr_model')
# -
# ### Stacking Predictions
def melt_predictions(train_datasets=True):
'''
melt predictions together.
'''
if train_datasets:
dataset = train
else:
dataset = test
svm_pred = cvModel_SVM.transform(dataset)
lr_pred = cvModel_LR.transform(dataset)
lr_frame = lr_pred.select(lr_pred.label,lr_pred.features,lr_pred.prediction.alias('lr_prediction'))
svm_frame = svm_pred.select(svm_pred.features,svm_pred.prediction.alias('svm_prediction'))
melt_data = lr_frame.join(svm_frame,on='features')
#VectorAssembler
vector = VectorAssembler(inputCols=['lr_prediction','svm_prediction'], outputCol='combine_features')
temp = vector.transform(melt_data)
stack_data = temp.select(temp.label, temp.combine_features.alias('features'))
return stack_data
stack_data_train = melt_predictions()
stack_train,stack_test = stack_data_train.randomSplit([0.6,0.4],seed=7)
# +
paramGrid = ParamGridBuilder().\
addGrid(clf_LR.elasticNetParam,[0.1, 0.5, 1]).\
addGrid(clf_LR.regParam,[0.01, 0.05, 0.1]).\
build()
stack_crossval = CrossValidator(estimator=clf_LR,
estimatorParamMaps=paramGrid,
evaluator=MulticlassClassificationEvaluator(metricName="f1"),
numFolds=3)
# -
start = time.time()
cvModel_stack = stack_crossval.fit(stack_train)
end = time.time()
print(f'Model tuning is done, spent {end-start}s.')
# +
#validation
pred = cvModel_stack.transform(stack_test)
print('Accuracy: {}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "accuracy"})))
print('F-1 Score:{}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "f1"})))
# -
stack_data_test = melt_predictions(train_datasets=False)
# +
pred = cvModel_stack.transform(stack_data_test)
print('Accuracy: {}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "accuracy"})))
print('F-1 Score:{}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "f1"})))
# -
# ### Decision Tree Classifier
# - When using decision trees for classification, you build an ML-based classifier from the training data: feed in the input features and in response get whether that user is likely to churn or not. This is your output classification label.
# - When you're constructing your decision tree model, you can choose one of two ways in which to measure the impurity of the node.
# - The objective of a classification decision tree that we build up is to minimize the Gini/Entropy impurity at each node
# - Gini impurity measures how mixed the class labels are within a node; it is computed on the training data, not on predictions (see the sketch below).
# - An important hyperparameter for decision trees is the maximum depth of the tree that is constructed. Shallower trees result in fewer decision variables.
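# To make the impurity idea concrete, here is a small self-contained sketch (not from the
# original notebook) of the Gini impurity formula G = 1 - sum(p_i^2) for a single node:
# +
def gini_impurity(class_counts):
    '''
    Gini impurity of a node given the label counts it contains.
    '''
    total = sum(class_counts)
    proportions = [count / total for count in class_counts]
    return 1.0 - sum(p ** 2 for p in proportions)

# Example: a node holding 30 churned and 70 retained users.
print(gini_impurity([30, 70]))  # 0.42
# -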
# #### Decision Tree Drawbacks
# - Decision trees are highly prone to overfitting, meaning that the model performs extremely well on training data but does not perform well on test data or in the real world.
# - Small changes in the data cause big changes in the model. Models that are very sensitive to the training data are said to be high-variance models.
# +
dt = DecisionTreeClassifier()
paramGrid = ParamGridBuilder() \
.addGrid(dt.impurity,['entropy', 'gini']) \
.addGrid(dt.maxDepth,[2, 3, 4]) \
.build()
crossval_dt = CrossValidator(estimator=dt,
estimatorParamMaps=paramGrid,
evaluator=MulticlassClassificationEvaluator(),
numFolds=2)
cvModel_dt = crossval_dt.fit(train)
# -
cvModel_dt.save('cvModel_dt1.model')
cvModel_dt.avgMetrics
# +
pred = cvModel_dt.transform(test)
print('Accuracy: {}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "accuracy"})))
print('F-1 Score:{}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "f1"})))
# -
# +
gbt = GBTClassifier()
paramGrid = ParamGridBuilder() \
.addGrid(gbt.maxIter,[3, 10, 20]) \
.addGrid(gbt.maxDepth,[2, 4, 6]) \
.build()
crossval_gbt = CrossValidator(estimator=gbt,
estimatorParamMaps=paramGrid,
evaluator=MulticlassClassificationEvaluator(),
numFolds=3)
cvModel_gbt = crossval_gbt.fit(train)
# -
cvModel_gbt.save('cvModel_gbt.model')
cvModel_gbt.avgMetrics
# +
pred = cvModel_gbt.transform(test)
print('Accuracy: {}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "accuracy"})))
print('F-1 Score:{}'.format(evaluator.evaluate(pred.select('label','prediction'), {evaluator.metricName: "f1"})))
# -
# ### Conclusion
# # Final Steps
# Clean up your code, adding comments and renaming variables to make the code easier to read and maintain. Refer to the Spark Project Overview page and Data Scientist Capstone Project Rubric to make sure you are including all components of the capstone project and meet all expectations. Remember, this includes thorough documentation in a README file in a Github repository, as well as a web app or blog post.
| 32,889 |
/Lesson7.ipynb
|
325874739b82658286846a7434c653ccf702960f
|
[] |
no_license
|
RoyJia/python-codeacademy-sg
|
https://github.com/RoyJia/python-codeacademy-sg
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,368 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="DtywtGFtF2nR"
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import RMSprop
model = Sequential()
# Single hidden layer
model.add(Dense(5, activation='tanh', input_shape=(11,)))
model.add(Dense(5, activation='tanh', input_shape=(5,)))
# Final layer. No activation means "linear"
model.add(Dense(7))
model.save('Chass_Keras.h5') # Saves the model in HDF5 format (selected by the .h5 extension)
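# A minimal sketch (not part of the lesson): reload the file saved above and inspect it.
# Loading works even though the model was never compiled; Keras will simply warn that no
# training configuration was found.
# +
reloaded = keras.models.load_model('Chass_Keras.h5')
reloaded.summary()
# -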
# To open this notebook in JupyterLab, change the URL path from **/nb/tree** to **/nb/lab**
#
# Select **Lesson7.ipynb**
| 914 |
/week 5/задание 2/solution.ipynb
|
0517d5640e199aaef711b29cd593e6e7ba6e77bc
|
[] |
no_license
|
DRomanova-A/startML
|
https://github.com/DRomanova-A/startML
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 98,572 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from typing import List, Tuple
import sys
sys.path.append("..")
import os
def print_answer(num: int, text: str) -> None:
if not os.path.exists("answers"):
os.makedirs("answers")
with open(os.path.join("answers", f"a{num}.txt"), "w") as f:
f.write(text)
print(text)
# %matplotlib inline
# -
# 1. Load the data set from the file gbm-data.csv with pandas and convert it to a numpy array (the dataframe's `values` attribute).
#
# The first column of the data file records whether or not a reaction occurred. All remaining columns (d1 - d1776) contain various characteristics of the molecule, such as size, shape, etc. Split the data into training and test sets using the train_test_split function with the parameters test_size = 0.8 and random_state = 241
df = pd.read_csv("gbm-data.csv")
df.head()
# +
X = df.loc[:, "D1":"D1776"].values
y = df["Activity"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.8, random_state=241)
# -
# 2. Train a GradientBoostingClassifier
#
# with the parameters n_estimators=250, verbose=True, random_state=241, and for each learning_rate value from the list [1, 0.5, 0.3, 0.2, 0.1] do the following:
#
# - Transform the obtained predictions with the sigmoid function 1 / (1 + e^{−y_pred}), where y_pred is the predicted value.
def sigmoid(y_pred: np.array) -> np.array:
return 1.0 / (1.0 + np.exp(-y_pred))
# - Use the staged_decision_function method to predict the quality on the training and test sets at each iteration.
def log_loss_results(model, X: np.array, y: np.array) -> List[float]:
return [log_loss(y, sigmoid(y_pred)) for y_pred in model.staged_decision_function(X)]
# - Compute and plot the log-loss values (which can be calculated with sklearn.metrics.log_loss) on the training and test sets, and find the minimum value of the metric and the iteration number at which it is reached.
# +
def plot_loss(learning_rate: float, test_loss: List[float], train_loss: List[float]) -> None:
    plt.figure()
    plt.title(f"learning_rate = {learning_rate}")
plt.plot(test_loss, "r", linewidth=2)
plt.plot(train_loss, "g", linewidth=2)
plt.legend(["test", "train"])
plt.show()
min_loss_results = {}
for lr in [1, 0.5, 0.3, 0.2, 0.1]:
print(f"Learning rate: {lr}")
model = GradientBoostingClassifier(learning_rate=lr, n_estimators=250, verbose=True, random_state=241)
model.fit(X_train, y_train)
train_loss = log_loss_results(model, X_train, y_train)
test_loss = log_loss_results(model, X_test, y_test)
plot_loss(lr, test_loss, train_loss)
min_loss_value = min(test_loss)
min_loss_index = test_loss.index(min_loss_value) + 1
min_loss_results[lr] = min_loss_value, min_loss_index
print(f"Min loss {min_loss_value:.2f} at n_estimators={min_loss_index}\n")
# -
# 3. How can the quality curve on the test set be characterized, starting from some iteration: overfitting or underfitting?
# Answer with one of the words overfitting or underfitting.
print_answer(1, "overfitting")
# 4. Give the minimum log-loss value on the test set and the iteration number at which it is reached, for learning_rate = 0.2
min_loss_value, min_loss_index = min_loss_results[0.2]
print_answer(2, f"{min_loss_value:.2f} {min_loss_index}")
# 5. On the same data, train a RandomForestClassifier
#
# with the number of trees equal to the number of iterations at which the gradient boosting from the previous step reached its best quality, with random_state=241 and the remaining parameters at their defaults. What log-loss does this random forest achieve on the test set? (Remember that the predictions must be obtained with predict_proba. In this case you do not need to apply the sigmoid to the class-probability estimate.)
# +
model = RandomForestClassifier(n_estimators=min_loss_index, random_state=241)
model.fit(X_train, y_train)
y_pred = model.predict_proba(X_test)[:, 1]
test_loss = log_loss(y_test, y_pred)
print_answer(3, f"{test_loss:.2f}")
# -
| 4,544 |
/notebooks/06.- Ecuaciones y sistemas.ipynb
|
594b7c01739fa1dcddd16b43a25b66c50c8c0be5
|
[
"MIT"
] |
permissive
|
disoftw/python-for-maths
|
https://github.com/disoftw/python-for-maths
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 13,768 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #<div class="alert alert-success">Equations and systems</div>
from sympy import *
init_printing()
# Sympy can work with symbolic variables, but not by default. If we want to use a letter as a variable, we must "inform" Sympy of it with the **symbols** function.
x=symbols('x')
# By default, Sympy assumes that **every equation is set equal to zero**, so if an equation is not, we must move everything to one side. The fundamental command for solving equations exactly is **solve**. If all the numbers are integers or rationals, the solution is given exactly whenever possible. **If any number is a decimal, the solution is returned in decimal form**.
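# As a quick illustration (not one of the exercises below), compare exact and decimal input:
solve(2*x - 3, x)    # exact input gives an exact answer: [3/2]
solve(2.0*x - 3, x)  # a decimal coefficient gives a decimal answer: [1.50000000000000]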
# ###<div class="alert alert-warning">Solve the following first-degree equations:</div>
#
#
# * $3x+6=0$
#
#
# * $4x-8=6$
#
#
# * $5y-7=0$
#
#
# * $\frac{5x}{7}+\frac{3}{7}=0$
y = symbols('y')
solve(Rational(5,7)*x + Rational(3,7), x)
# Sympy can also solve quadratic equations, with both real and complex solutions. If the root is not exact, the result is expressed with radicals. To obtain a decimal result we can write one of the numbers in decimal format.
# ###<div class="alert alert-warning">Solve the following quadratic equations:</div>
#
#
# * $x^2-5x+6=0$
#
#
# * $x^2-7x+9=0$
#
#
# * $x^2-4x+5=0$
solve(x**2-7.0*x+9,x)
# To solve systems, the first step is to declare all the necessary letters with the **symbols** function. Then we must **write the equations inside square brackets, separated by commas**. As always, the equations must be set equal to zero.
# ###<div class="alert alert-warning">Solve the systems, both exactly and approximately:</div>
#
#
# * $\begin{cases}
# 3x+7y=9\\
# -5x+9y=5
# \end{cases}
# $
#
#
# * $\begin{cases}
# x^2-46x=8\\
# -6x+7y=-3
# \end{cases}
# $
solve([3*x+7*y-9,-5*x+9*y-5])
solve([x**2 - 46.0*x - 8, -6*x + 7*y+3])
# To solve inequalities we also use **solve**, writing the inequality directly. We must interpret the result of the inequality ourselves.
# ###<div class="alert alert-warning">Solve the inequalities:</div>
#
#
# * $4x-5>9$
#
#
# * $x^2-5x+6 \geq 0$
#
#
# * $\displaystyle\frac{x^2-5x+6}{x^2-6x+8} >0$
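# For instance (shown only to illustrate how to read the output), the first inequality can be solved like this;
# SymPy returns a condition equivalent to x > 7/2:
solve(4*x - 5 > 9)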
solve(x**2 - 5*x +6 >= 0)
# ###<div class="alert alert-warning">Solve the general quadratic equation.</div>
solve((x**2-5*x+6)/(x**2-6*x+8)>=0)
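# The cell above repeats the third inequality; a possible sketch for the last exercise is to
# introduce symbols a, b, c for the general coefficients, which recovers the quadratic formula:
a, b, c = symbols('a b c')
solve(a*x**2 + b*x + c, x)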
| 2,648 |
/.ipynb_checkpoints/Address-checkpoint.ipynb
|
3fb04b9c38ee67efa2f88447925a3a4812b88184
|
[] |
no_license
|
Patcm10/Proyecto-Final-Renta2
|
https://github.com/Patcm10/Proyecto-Final-Renta2
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 32,916 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Task 7: xrd.ipynb
#
# Loudeche Maxime.
#
# This notebook downloads the .CIF file of the material "mp-1023936": WSe2. It also determines the first 3 peaks of the diffractogram of this crystal and gives the associated hkl indices. In this part, we look at the diffractogram peaks produced by an incident Cu Kα X-ray beam of wavelength λ = 1.54060 Angstrom on the material WSe2.
#
# From the diffraction pattern obtained, it is possible, with some caveats, to determine the positions of all the atoms in the structure.
from pymatgen.ext.matproj import MPRester
from pymatgen.io.cif import CifWriter
from pymatgen.analysis.diffraction.xrd import XRDCalculator
from matplotlib import pyplot as plt
import numpy as np
with MPRester("1UZlSnaTONTXfpKB") as m:
structure = m.get_structure_by_material_id("mp-1023936")
w = CifWriter(structure)
w.write_file("mp-1023936_struture.cif")
# As requested, we focus only on the first 3 peaks:
# +
# Plot parameters
plt.title("Diffractogram centered on the first 3 peaks for incident Cu K\u03B1 X-rays on WSe\u2082")
plt.xlabel('2\u03B8[\u00b0]')
plt.xticks(np.arange(5, 16, 1))
plt.ylabel('Intensity [CPS]')
plt.yticks(np.arange(0, 101, 10))
# The points to plot
angle = [5.042, 10.094, 15.165]
amplitude = [100, 13.698, 2.008]
origine = [0, 0, 0]
# The plot:
plt.vlines(angle, origine, amplitude, color='red')
plt.text(angle[0], amplitude[0], '5.042° ; 100CPS')
plt.text(angle[1], amplitude[1], '10.094° ; 13.698CPS')
plt.text(angle[2], amplitude[2], '15.165° ; 2.008CPS')
plt.show()
# -
# The graph above is obtained simply by copying the values from Materials Project. We will now obtain more precise values using an XRDCalculator:
# +
# Create an XRDCalculator with the wavelength requested in the assignment
XRD = XRDCalculator(1.54060)
# Compute the diffraction pattern
DIF = XRD.get_pattern(structure)
# Print the first 3 peaks and the (hkl) indices obtained from the XRDCalculator:
print('First peak: ')
print('2\u03B8[\u00b0] =', (DIF.x)[0])
print('I [CPS] =', (DIF.y)[0])
print('Indices:' , DIF.hkls[0], '\n')
print('Second peak: ')
print('2\u03B8[\u00b0] =', (DIF.x)[1])
print('I [CPS] =', (DIF.y)[1])
print('Indices:' , DIF.hkls[1], '\n')
print('Third peak: ')
print('2\u03B8[\u00b0] =', (DIF.x)[2])
print('I [CPS] =', (DIF.y)[2])
print('Indices:' , DIF.hkls[2])
# -
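# As a complement (not required by the assignment), Bragg's law n·λ = 2·d·sin(θ) converts each
# 2θ position into an interplanar spacing d. A small sketch for the first peak, assuming
# first-order diffraction (n = 1):
# +
wavelength = 1.54060                 # Cu K-alpha wavelength in angstroms
two_theta = (DIF.x)[0]               # 2-theta position of the first peak, in degrees
theta_rad = np.radians(two_theta / 2)
d_spacing = wavelength / (2 * np.sin(theta_rad))
print('Interplanar spacing d of the first peak:', d_spacing, 'angstrom')
# -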
# Finally, for completeness, let us display the full diffractogram computed by the XRDCalculator.
# Plot of the full diffractogram:
XRD.show_plot(structure, annotate_peaks = False)
| 3,016 |
/notebooks/iceflow/0_introduction.ipynb
|
e9dc52c2a9fb0edc96afd864faa0c0bed891731c
|
[
"MIT"
] |
permissive
|
schandler88/NSIDC-Data-Tutorials
|
https://github.com/schandler88/NSIDC-Data-Tutorials
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 43,376 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import warnings
warnings.simplefilter("ignore")
import pandas as pd
import numpy as np
dataset = pd.read_csv('garments_worker_productivity.csv')
dataset
###Understanding the dataset
dataset.shape
dataset.head()
##slicing the dataset
dataset=dataset.drop(['idle_time','idle_men','no_of_style_change','date','quarter','day','team','department',], axis=1)
dataset
#Segregate and reshape the dataset
x = dataset.iloc[:,0]
x
x.shape
x=dataset.iloc[:,0].values.reshape(-1,1)
x.shape
y=dataset.iloc[:,-1].values.reshape(-1,1)
y.shape
y
import matplotlib.pyplot as plt
# %matplotlib inline
#Scatter plot
plt.scatter(x,y)
plt.show()
#divide the dataset into training and testing set
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test=train_test_split(x,y,test_size=0.2, random_state=0)
x.shape
x_train.shape
x_test.shape
y_train.shape
y_test.shape
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(x_train,y_train)
y_pred = lm.predict(x_test)
y_pred
plt.scatter(x,y,color = 'blue')
plt.plot(x_test,y_pred,color = 'red')
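# A quick check of fit quality on the held-out data (not in the original cells); r2_score
# compares the predictions above against the true test targets.
from sklearn.metrics import r2_score
print('R^2 on the test set:', r2_score(y_test, y_pred))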
# This notebook will help also the less experienced Python user to learn how to use *IceFlow* to access altimetry data. Most of the "heavy lifting" is done with our *IceFlow* client code so you don't necessarily need to know a lot about these libraries. If you feel like learning more about geoscience and Python, you can find great tutorials by CU Boulder's Earth Lab here: [Data Exploration and Analysis Lessons](https://www.earthdatascience.org/tags/data-exploration-and-analysis/) or by the Data Carpentry project: [Introduction to Geospatial Concepts](https://datacarpentry.org/organization-geospatial/)
#
#
# The main Python packages/libraries that will be used in this notebook are:
#
# * [*requests*](https://requests.readthedocs.io/en/master/):
# HTTP library for Python, used to make requests
# * [*geopandas*](https://geopandas.org/):
# Library to simplify working with geospatial data in Python (using pandas)
# * [*geojson*](https://github.com/jazzband/geojson):
# Functions for encoding and decoding GeoJSON formatted data in Python
# * [*h5py*](https://github.com/h5py/h5py):
# Pythonic wrapper around the [HDF5 library](https://en.wikipedia.org/wiki/Hierarchical_Data_Format)
# * [*matplotlib*](https://matplotlib.org/):
# Comprehensive library for creating static, animated, and interactive visualizations in Python
# * [*vaex*](https://github.com/vaexio/vaex):
# High performance Python library for lazy Out-of-Core dataframes (similar to *pandas*), to visualize and explore big tabular data sets
# * [*iPyLeaflet*](https://github.com/jupyter-widgets/ipyleaflet):
# Jupyter/Leaflet bridge enabling interactive maps in the Jupyter notebook
# * [*icepyx*](https://icepyx.readthedocs.io/en/latest/):
# Library for ICESat-2 data users
#
#
# ## **1.2 Learning Goals**
#
# After completing this notebook and the companion [visualization and analysis notebook](./3_dataviz.ipynb) you will:
# * Understand the basics about the data sets (pre-IceBridge, IceBridge, ICESat/GLAS and ICESat-2) served by *IceFlow*;
# * Be able to access these data sets using the *IceFlow* user interface widget and the API;
# * Be able to read and analyze the data using *IceFlow*.
#
# > If you want to know what an API is, take a look at this video: ["What is an API?"](https://www.youtube.com/watch?v=s7wmiS2mSXY)
#
# This notebook contains three ***IceFlow* use cases**:
# 1. Accessing data with the *IceFlow* widget (Section 6.1)
# 2. Accessing data using the *IceFlow* API (Section 6.2)
# 3. Reading and plotting data (Section 6.3)
#
# **Note:** Some data orders can take quite some time. Read more on estimated data download times in Section 6.1. If you run this notebook with the Binder, we recommend loading only very small data orders as the Binder will time out after ~10 minutes. For larger data orders run this notebook locally.
#
# **Important:** The three use cases can be run independently as the data needed in Section 6.3 is already preloaded.
# If you feel comfortable using code you don't need to use the map widget (Section 6.1) but you can directly jump to Section 6.2 (*IceFlow* API) or take a look at the [API usage notebook](./2_api.ipynb).
#
#
# # **2. Why IceFlow**
#
# ### The Short Answer is **Data Harmonization**
#
# In 2003, NASA launched the Ice, Cloud and Land Elevation Satellite mission with the Geoscience Laser Altimeter System (ICESat/GLAS) instrument onboard. Over the following six years, ICESat/GLAS collected valuable ice thickness data in the Polar Regions. Unfortunately, the ICESat/GLAS mission ended in 2009 before a follow-up mission could be launched. An airborne campaign called Operation IceBridge was funded to fill the gap and continue ice thickness measurements. Between 2009 and 2019, Operation IceBridge flew numerous campaigns over Greenland, the Antarctic ice sheets, and sea ice in the Arctic and Southern Ocean. In September 2018, ICESat-2 was launched to continue NASA's collection of ice, cloud and land elevation data.
#
# The wealth of data from these three missions, as well as from the pre-IceBridge airborne altimetry missions, presents an opportunity to study the evolution of ice thickness over several decades. However, combining data from these missions presents several challenges:
# * Data from the Airborne Topographic Mapper (ATM) flown during the IceBridge campaigns is stored in four different file formats. ICESat/GLAS and ICESat-2 data are also in different file formats. Therefore, the data needs to be harmonized, that means placed into similar formats before comparisons can be made.
# * The coordinate reference systems used to locate measurements have changed over the years, as the Earth's surface is not static and changes shape. To account for these changes, terrestrial reference frames that relate latitude and longitude to points on the Earth are updated on a regular basis. Since the launch of ICESat/GLAS, the International Terrestrial Reference Frame [(ITRF)](https://www.iers.org/IERS/EN/DataProducts/ITRF/itrf.html) has been updated three times. The geolocation of a point measured at the beginning of the record and the end of the record is not the same even though the latitude and longitude is the same. These changes in geolocation need to be reconciled to allow meaningful comparisons within the long-term data record.
#
# The *IceFlow* library grants easy access across the missions, harmonizing the data from the different file formats as well as corrects for changes in coordinate reference systems. A more detailed overview of these corrections can be found in [Applying Coordinate Transformations to Facilitate Data Comparison](corrections.ipynb).
#
# # 3. Mission Overview
#
# ## **3.1 Pre-IceBridge**
#
# The Airborne Topographic Mapper (ATM) is a conically-scanning laser altimeter that measures the surface topography of a swath of terrain directly beneath the path of an aircraft. ATM surveys can be used to detect surface changes. Differences of laser swaths surveyed over the same area but a few years apart can be used to estimate elevation changes between the first and second survey. Comparing the surveys conducted in 1993-4 and 1998-9 resulted in the first comprehensive assessment of the mass balance change of the Greenland ice sheet ([Krabill et al., 1999](https://science.sciencemag.org/content/283/5407/1522), [2000](https://science.sciencemag.org/content/289/5478/428)). ATM surveys can also be used to calibrate/validate satellite altimeter measurements (e.g. [Martin et al., 2005](https://atm.wff.nasa.gov/files/browser/ATM_Calibration_Procedures_and_Accuracy_Assessment_2012.pdf)). The ATM was deployed on a wide variety of platforms, including the NASA P3, a Chilean Navy P3, a US Navy P3, the NASA DC8, the NCAR C-130, and a half-dozen Twin Otters, to collect high-quality topographic data. For a complete list of the ATM deployments visit [https://atm.wff.nasa.gov/deployments/](https://atm.wff.nasa.gov/deployments/)
#
# ## **3.2 ICESat/GLAS**
#
# ICESat/GLAS was the benchmark Earth Observing System mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat/GLAS mission provided multi-year elevation data for ice sheet mass balance as well as stratospheric cloud property information over polar areas. This mission also provided topographic and vegetation data from around the globe beyond the polar-specific ice height information over the Greenland and Antarctic ice sheets. Launched on 12 January 2003, after seven years in orbit and 18 laser-operation campaigns, the ICESat/GLAS science mission ended due to the failure of its primary instrument in 2009.
#
#
# ## **3.3 IceBridge**
#
# With its survey flights from 2009 to 2019, Operation IceBridge was the largest airborne survey of the Earth's polar ice. It has yielded an unprecedented three-dimensional view of the Arctic and Antarctic ice sheets, ice shelves and sea ice. The flights provide a yearly, multi-instrument look at the behavior of the rapidly changing features of the polar ice.
# Data collected during Operation IceBridge helps scientists bridge the gap in polar satellite observations between ICESat/GLAS (2003-2009) and ICESat-2 (2018-present). Although the IceBridge data are not continuous, the mission became critical for extending the ice altimetry time series in the Arctic and Antarctic after ICESat/GLAS stopped collecting data in 2009.
#
# IceBridge flights were generally conducted in March-May over Greenland and in October-November over Antarctica.
#
# ## **3.4 ICESat-2**
#
# The ICESat-2 mission was designed to provide elevation data needed to determine ice sheet mass balance as well as vegetation canopy information. It provides topographic measurements of cities, lakes and reservoirs, oceans and land surfaces around the globe. The sole instrument on ICESat-2 is the Advanced Topographic Laser Altimeter System (ATLAS), a space-based Lidar. It was designed and built at Goddard Space Flight Center, with the laser generation and detection systems provided by Fibertek. ATLAS measures the travel time of laser photons from the satellite to Earth and back; travel times from multiple laser pulses are used to determine elevation data.
#
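# To give a feel for the numbers (a back-of-the-envelope sketch, not from the mission
# documentation), the one-way range follows from the round-trip photon travel time as
# range = c * t / 2; the travel time used below is a made-up illustrative value.
# +
c = 299792458.0              # speed of light in m/s
round_trip_time = 3.3e-3     # hypothetical round-trip travel time in seconds
print(c * round_trip_time / 2, 'm')  # roughly 495 km, on the order of the ICESat-2 orbit altitude
# -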
# **Note:** Data from ICESat-2 is not accessed with the *IceFlow* library but we provide access to it within this notebook using the [*icepyx*](https://github.com/icesat2py/icepyx) library.
#
# <p align="center">
# <img style="align: center;" width="80%" src='./img/iceflow-coverage.jpg'/>
# <br>
# <b><center>Fig 2. IceFlow mission coverages</center></b>
# </p>
#
#
# # 4. Data Sets and their Spatial and Temporal Coverage
#
# NSIDC provides a list of all available data sets within each mission including further mission information and documentation for each data set:
# * [ICESat/GLAS data sets at NSIDC](https://nsidc.org/data/icesat/data.html)
# * [Pre-IceBridge and IceBridge data sets at NSIDC](https://nsidc.org/data/icebridge/data_summaries.html)
# * [ICESat-2 data sets at NSIDC](https://nsidc.org/data/icesat-2/data-sets)
#
# The *IceFlow* library provides web services to order a spatial and temporal subset of the Lidar point cloud data. The following table describes the temporal and spatial coverage of all available data sets within *IceFlow* as well as their sensors and platforms used to acquire the data.
#
# **Important:** The *IceFlow* library unifies ATM into one category, so when you request pre-IceBridge and IceBridge data you can have a more continuous coverage.
#
#
# ---
#
#
# |Data Set| Spatial Coverage | Temporal Coverage| Mission | Sensors | IceFlow Name|
# |--------|------------------|------------------|------------|---------|-------------|
# |[BLATM L1B](https://nsidc.org/data/BLATM1B)| South: N:-53, S: -90, E:180, W:-180 <br> North: N:90, S: 60, E:180, W:-180 | 23 Jun. 1993 - 30 Oct. 2008 | Pre-IceBridge | ATM | **ATM1B**
# |[ILATM L1B V1](https://nsidc.org/data/ILATM1B/versions/1) | South: N:-53, S: -90, E:180, W:-180 <br> North: N:90, S: 60, E:180, W:-180 | 31 Mar. 2009 - 8 Nov. 2012 <br> (updated 2013) | IceBridge | ATM | **ATM1B**
# |[ILATM L1B V2](https://nsidc.org/data/ILATM1B/versions/2)| South: N:-53, S: -90, E:180, W:-180 <br> North: N:90, S: 60, E:180, W:-180 | 20 Mar. 2013 - 16 May 2019 <br> (updated 2020)| IceBridge|ATM|**ATM1B**
# |[ILVIS2](https://nsidc.org/data/ILVIS2)| North: N:90, S: 60, E:180, W:-180|25 Aug. 2017 - 20 Sept. 2017|IceBridge | ALTIMETERS, LASERS, LVIS |**ILVIS2**
# |[GLAH06](https://nsidc.org/data/GLAH06/)| Global: N:86, S: -86, E:180, W:-180|20 Feb. 2003 - 11 Oct. 2009|ICESat/GLAS | ALTIMETERS, CD, GLAS, GPS, <br> GPS Receiver, LA, PC|**GLAH06**
#
#
# ---
#
# **Note**: If you have questions about the data sets please refer to the user guides or contact NSIDC user services at [email protected]
# # 5. NASA's Earthdata Credentials
#
# To access data using the *IceFlow* library it is necessary to log into [Earthdata Login](https://urs.earthdata.nasa.gov/). To do this, enter your NASA Earthdata credentials in the next step after executing the following code cell.
#
# **Note**: If you don't have NASA Earthdata credentials you have to register first at the link above. You don't need to be a NASA employee to register with NASA Earthdata!
#
# Importing IceFlow client library
from iceflow.ui import IceFlowUI
# Instantiateing the client
client = IceFlowUI()
# You need to use your NASA Earthdata Credentials and verify that they work.
# Please click on set credentials and then see if authentication is successful by executing the next cell.
client.display_credentials()
# This cell will verify if your credentials are valid.
# This may take a little while, if it fails for some reason try again.
# NOTE: Wednesday mornings are usually downtime for NSIDC services and you might experience difficulties accessing data.
authorized = client.authenticate()
if authorized is None:
print('Earthdata Login not successful')
else:
print('Earthdata Login successful!')
# **Note:** If the output of the previous cell is "You are logged into NASA Earthdata!", then you are ready to proceed with any of the three following *IceFlow* use cases (6.1, 6.2, 6.3).
# # 6. IceFlow Use Cases
# ## 6.1 Accessing Data with the IceFlow Access Widget
# The *IceFlow* access widget is a user interface tool to visualize flightpaths from IceBridge, draw a region of interest, set spatio-temporal parameters and place data orders to the *IceFlow* API without writing code.
# The output of the operations performed in the widget can be seen in the log window (right-most icon at the bottom of your browser.)
# <img src='./img/log-icons.png'> or by selecting it on the View menu "Show log console"
#
# **Note:** Currently the access widget is stateless, if you change any parameter you will have to redraw your bounding box or polygon. This is a temporary artifact.
# Let's start with the user interface. We'll explain what this does next.
# You have two displaying options for the user interface:
# 'vertical' displays a sidecar widget, 'horizontal' renders the widget in this notebook.
# Note that depending on your screen size and resolution the 'vertical' display option may not work correctly.
# This is a current bug in the jupyter-widget that can not be solved within the scope of IceFlow.
client.display_map('horizontal', extra_layers=True)
# ### IceFlow User Interface (UI) Components
# This user interface uses [*ipyleaflet*](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a) which allows us to draw
# polygons or bounding boxes to delimit our area of interest. We can also edit and delete these geometries using the widget controls in the map.
# <br>
# <br>**The following list describes all user interface data selection options:**
#
# - **Hemisphere**: Choose which map projection you are going to use, you can pick global, north or south.
#
# - **Data sets**: Choose one or more data sets from the selection. For more than one data set use "CTRL+Space" or "CTRL+Click" on Windows and Linux or "command+click" on a Mac. Note: ATM1B includes the 3 different ATM products (BLATM L1B, ILATM L1B v1, ILATM L1B V2), see the above table for more details.
#
# - **ITRF (optional)**: Choose an International Terrestrial Reference Frame, for more details see [ITRF](corrections.ipynb).
#
# - **Epoch (optional)**: Populate this field with the epoch value in which you want the coordinate reference systems to be based. This can only be applied if a ITRF is selected. (e.g. if you use 2010.1 and ITRF 2014 then all the points will be shifted to match the best ground representation as if they were in January 2010. This is compensating for plate tectonic motion.)
#
# - **ICESat-2**: If you additionally want to place a data order for ICESat-2 data (using icepyx) utilizing the current parameters you need to select the short name code of the desired data set i.e. ATL06.
#
# - **Date Range**: This slider control allows you to select a start and end date of interest.
#
# - **Inside Map options**: In the map part of the widget, you can zoom in and out, draw a polygon or bounding boxes and edit them to select an area of interest. You can also turn on and off the layers that show IceBridge flights and Ice Velocities.
#
# **The following list describes all user interface buttons:**
#
# - The **Get Raw Granule Count** button will query [NASA's CMR](https://earthdata.nasa.gov/eosdis/science-system-description/eosdis-components/cmr) to get a granule count for the current parameters; you need to have a geometry and one or more data sets selected. The result of the query gets displayed in the log window. **Important:** Check the selected raw granule count before placing an order. As a rule of thumb, you can expect a wait time of approximately 10 minutes for each gigabyte of data selected. Remember to run this notebook locally (not with the Binder) if you have large data orders, as the Binder will time out after approximately 10 minutes.
#
# - The **Print Current Parameter** button displays the selected start and end time, bounding box and data set(s) in the log window.
#
# - The **Place Data Order** button will submit an *IceFlow* order using the current user interface parameters, this is an **asynchronous** process, you will have to wait until the order is completed before you can work with the data or place a new order but this does not block you from exploring the rest of the notebook while waiting for the order to complete.
#
# - The **Check Order Status** button will output the status of the order in the log window.
#
# - The **Download Data Order** button will download the data from an order that has been completed.
#
# **Notes**:
# * If you use the bounding box geometry in a polar projection, you'll notice a distortion due to the nature of polar coordinates; if you prefer, you can use the global Mercator map to draw a bounding box without apparent distortion. The better option is to draw a polygon or enter your exact desired coordinates. [How to do that will be covered later in this tutorial.]
# * The calculated download size of these granules is an upper bound since *IceFlow* allows us to subset the data.
#
# ### NASA Common Metadata Repository (CMR)
#
# NASA's Common Metadata Repository (CMR) is a metadata system that catalogs all data and service metadata records for NASA's Earth Observing System Data and Information System (EOSDIS) and will be the authoritative management system for all EOSDIS metadata. These metadata records are registered, modified, discovered, and accessed through programmatic interfaces leveraging standard protocols and APIs.
#
# In short: NASA's CMR is a database for Earth-related data sets, in this case **all the data served by *IceFlow* is also indexed by CMR**
#
# One **important** thing to notice here is that CMR has the location of the original data granules and they can be in multiple data formats and projections. The *IceFlow* tool simplifies the handling of these different data formats and projections. As CMR does not subset the data, the data size calculation for CMR granules is an upper bound.
# Use the widget state to build spatio-temporal parameters
# Note: This gives you the same information as clicking the "Print Current Parameter" button.
# but it will show up right in this notebook under this code instead of inside the log window.
params = client.build_parameters()
params
# You can query CMR to get an idea of data set coverages, granule numbers and approximate download size
# for the current set of selections.
# Note: This gives you the same information as clicking the "Get Raw Granule Count" button.
# but it will show up right in this notebook under this code instead of inside the log window.
# The granules and total size is an upper bound since CMR has full granules
# and IceFlow will subset them to only cover the area selected.
granules = client.query_cmr(params=params)
# ## 6.2 Accessing Data with the IceFlow API
#
# A second option to access *IceFlow* data is programmatically using the API (without the user interface widget).
#
# In this example we are ordering data from the [Thwaites Glacier](https://en.wikipedia.org/wiki/Thwaites_Glacier) in Antarctica.
#
# Commented out is another example ordering data for the [Jakobshavn](https://en.wikipedia.org/wiki/Jakobshavn_Glacier) glacier in Greenland.
#
#
# ### Specifying Parameters
# The following cell will output the CMR query for the specified parameter set in ```my_params```. *IceFlow* will subset and harmonize the data for you. As a rule of thumb, you can expect a wait time of approximately 10 minutes for each gigabyte of data selected. Remember to run this notebook locally (not with the Binder) if you have large data orders, as the Binder will time out after approximately 10 minutes.
# +
# Specify the parameters of interest
# This example covers 10 years of data over Thwaites glacier.
# It consists of Pre-IceBridge, ICESat/GLAS, IceBridge data all in one place!
# IceFlow will harmonize all these data sets for you!
my_params ={
'datasets': ['GLAH06', 'ATM1B'],
'ITRF': '2014',
'epoch': '2014.1',
'start': '1993-01-01',
'end': '2020-01-01',
'bbox': '-103.125559,-75.180563,-102.677327,-74.798063'
}
# This is a second example from the Jakobshavn glacier:
# my_params ={
# 'datasets': ['ATM1B', 'GLAH06', 'ILVIS2'],
# 'start': '2008-01-01',
# 'end': '2018-12-31',
# 'bbox': '-50.2734,68.9110,-47.9882,69.4112'
# }
# returns a json dictionary, the request parameters and the order's response.
granules_metadata = client.query_cmr(params=my_params)
# -
# ### Place a Data Order
#
# After you place an order in *IceFlow* (next cell) a few things will happen: first you will receive a set of emails telling you that NSIDC DAAC received your data orders and that your orders will be processed. The number of emails depends on how many data sets are selected in ```my_params```; you will receive one email per selected data set (in the Thwaites example above, GLAH06 and ATM1B). As the ICESat-2 data (for example ATL06) is ordered only indirectly through *IceFlow*, you will not receive an email for it at this point. See more details for that in the section "Place ICESat-2 data orders using *IceFlow*". <br>
# After some wait time, dependent on the order size, you will receive another set of emails from NSIDC DAAC letting you know that your data orders have been processed and are ready for you to download. Do not proceed with the step "Download the data" before all your orders are complete. You can check the status of your data orders in the next section "Check order status".
#
# **Important note:** If you use this notebook with the Binder you will have to make sure that it does not time out while waiting for your data order. As some of the data orders are large, we recommend running the notebook locally and not in the Binder.
orders = client.place_data_orders(params=my_params)
orders
# we can also access the last orders using the last orders property.
client.last_orders
# ### Check Order Status
#
# The following cell will show you the status of your data order. You can proceed in the notebook once all orders are "COMPLETE". If you proceed earlier only the completed data orders will be downloaded.
for order in orders:
status = client.order_status(order)
print(order['dataset'], order['id'], status['status'])
# ### Download Data
# Once all data orders are "COMPLETE", you can proceed downloading the data:
for order in orders:
status = client.order_status(order)
if status['status'] == 'COMPLETE':
client.download_order(order)
# ### Place **ICESat-2** Data Orders Using IceFlow
#
# **Note:** This is an additional example of how to use *IceFlow* for ICESat-2 data orders. It points out the difference between ICESat-2 and the other data sets when ordering them via *IceFlow*.
#
# *IceFlow* does not order ICESat-2 data directly but via ***icepyx***. The data ordered this way will be downloaded synchronously. The current *IceFlow* common dictionary does not work for ICESat-2.
#
# ICESat-2 data orders can be very big so before placing an order it is important to check the estimated download size querying CMR.
# ICESat and ICESat-2
my_params ={
'datasets': ['GLAH06', 'ATL06'],
'start': '2003-01-01',
'end': '2019-01-01',
'bbox': '-107.4515,-75.3695,-105.3794,-74.4563'
}
# This will query CMR for unsubsetted granules using the data set's most recent version.
granules_metadata = client.query_cmr(params=my_params)
# ICESat-2 + ICESat/GLAS
orders = client.place_data_orders(params=my_params)
# Downloading the data, ICESat-2 will be downloaded right away.
for order in orders:
status = client.order_status(order)
if status['status'] == 'COMPLETE':
client.download_order(order)
# ## 6.3 Reading and Plotting Data with IceFlow
#
# **Note:** This section can be run without waiting for the previous data orders to be completed. The data used in this example is already preloaded. <br>
# <br>
# Remote sensing data can be overwhelmingly big. Reading a single large file is not trivial, and when we have a whole collection of them the task can become an intractable barrier.
# The main constraint, if you don't have a supercomputer, is memory. The average granule size is in the tens of megabytes for ICESat-2 and can reach gigabytes for *IceFlow* granules, depending on the selected area. This is where libraries like *vaex* and others come into play.
#
# These libraries read our files using a battery of optimizations like lazy loading, memory mapping and parallelism. Let's now explore different ways of reading these HDF5 files using the following libraries:
#
# * *h5py* + *geopandas*
# * *xarray*
#
# Other libraries that you can use to read and work with these files, especially if they are big and need out of core computations are:
#
# * *vaex*
# * *dask* (Note: This notebook does not currently include a full example using this library; a minimal out-of-core sketch is given just below this list.)
#
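# As an illustration of the out-of-core option mentioned above, here is a minimal sketch (an addition, not part of the original tutorial) that wraps one on-disk dataset of an *IceFlow* granule in a chunked *dask* array, so only the pieces you actually touch are read into memory. It assumes *dask* is installed and that the granule stores its point-cloud variables as top-level HDF5 datasets.
import h5py
import dask.array as da

f_lazy = h5py.File('data/twaties-test-GLAH06-2000-2010.h5', 'r')
# pick the first top-level dataset just for illustration
name = [k for k in f_lazy.keys() if isinstance(f_lazy[k], h5py.Dataset)][0]
arr = da.from_array(f_lazy[name], chunks=1_000_000)   # lazy, chunked view of the on-disk data
print(name, arr.shape, arr.chunks)                    # nothing has been read into memory yet
print(arr[:10].compute())                             # only this small slice is actually loaded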
# ### IceFlow HDF5 File Content
#
# Depending on what data set you requested, the available variables will be different. It is important to note that while *IceFlow* will harmonize the data by using a common frame of reference and by transforming the native formats into HDF5, it will not change the names of the variables from the original data sets. As a result, your variables of interest will be named differently in data sets from different missions. Samples of data set variables, depending on the chosen data set, are shown in the output of the cell below.
#
# +
import h5py
glas_file = 'data/twaties-test-GLAH06-2000-2010.h5'
ib_file = 'data/atm1b_data_2020-11-15T20-05.hdf5'
is2_file = 'data/processed_ATL06_20181015100401_02560110_003_01.h5'
print('\nICESat/GLAS dictionary:')
glas_df = h5py.File(glas_file, 'r')
display(glas_df.keys())
print('\nIceBridge dictionary:')
ib_df = h5py.File(ib_file, 'r')
display(ib_df.keys())
print('\nICESat-2 dictionary:')
is2_df = h5py.File(is2_file, 'r')
display(is2_df.keys())
# -
# ### Unifying Parameter Names
#
# **The following *IceFlow* code provides a common way of unifying the 4 main parameters in the point cloud data (non-gridded) for ICESat/GLAS and IceBridge data.**
#
# These main 4 parameters are:
# * longitude
# * latitude
# * elevation
# * time
#
# ```python
# from iceflow.processing import IceFlowProcessing as ifp
# import pandas as pd
# import geopandas as gpd
#
# ib_gdf = ifp.to_geopandas('data/atm1b_granule_2009.h5')
# glas_gdf = ifp.to_geopandas('data/glah06_granule_2006.h5')
#
# stacked_df = gpd.GeoDataFrame(pd.concat([ib_gdf, glas_gdf], ignore_index=True))
#
# ```
# The above code will open the file, grab the 4 main variables named above and return a *geopandas* dataframe. In this dataframe the name of these parameters will be the same and we could stack them with data from other data sets i.e., GLAH06 from ICESat/GLAS.
#
# **NOTE**: The *geopandas* method is good for medium-sized granules, i.e., no larger than 1 or 2 GB, mainly because *geopandas* is not an out-of-core library: if you run out of memory, the dataframe will crash your notebook.
#
# ### Reading and Cleaning Data
#
# The following code will walk you through reading and cleaning the data step by step. <br>
# First you have to load some libraries:
# +
# %matplotlib widget
# importing some libraries to work with the data
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import pandas as pd
import geopandas as gpd
# IceflowProcessing is a wrapper to work with HDF5 files
from iceflow.processing import IceFlowProcessing as ifp
# Pre-IceBridge ATM granule data
preib_gdf = ifp.to_geopandas('data/atm1b_data_2020-11-15T20-05.hdf5')
# ICESat granule data
glas_gdf = ifp.to_geopandas('data/twaties-test-GLAH06-2000-2010.h5')
# -
# Then we display the Pre-IceBridge dataframe:
# first, let's see what's in the harmonized dataframe and its shape.
display(preib_gdf.head(), preib_gdf.shape)
# and the ICESat/GLAS dataframe:
# we do the same for the ICESat/GLAS dataframe
display(glas_gdf.head(), glas_gdf.shape)
# Next you can look at the histogram of the elevation parameter:
# +
# Let's see the data distribution via a histogram
title = 'Elevation Distribution'
x_title = 'Elevation (Meters)'
y_title = 'Sample Count'
fig, axes = plt.subplots()
preib_gdf.hist('elevation', ax=axes)
plt.title(title, ha='center', fontsize='large')
fig.text(0.5, 0.02, x_title, ha='center')
fig.text(0.0, 0.5, y_title, va='center', rotation='vertical')
# -
# Noticing some unphysical negative values as well as large outliers, we remove them for plotting purposes and replot the histogram.
# As we can see, there are plenty of outliers; if we just want a clean view we can simply discard them.
# A more principled approach would be a 2-sigma cut (a sketch is given below).
preib_gdf = preib_gdf[(preib_gdf['elevation'] > -10) & (preib_gdf['elevation'] < 400)]
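# A 2-sigma version of the cut above (an illustrative alternative added here; it is not used in the rest of the notebook): keep only points within two standard deviations of the mean elevation.
elev_mean, elev_std = preib_gdf['elevation'].mean(), preib_gdf['elevation'].std()
preib_gdf_2sigma = preib_gdf[(preib_gdf['elevation'] - elev_mean).abs() < 2 * elev_std]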
# Now the histogram for our "corrected" dataframe.
fig, axes = plt.subplots()
preib_gdf.hist('elevation', ax=axes)
plt.title(title, ha='center', fontsize='large')
fig.text(0.5, 0.02, x_title, ha='center')
fig.text(0.0, 0.5, y_title, va='center', rotation='vertical')
# In a similar way this is done for the ICESat/GLAS data in the next couple of code blocks.
# we do the same for our ICESat/GLAS dataframe
fig, axes = plt.subplots()
glas_gdf.hist('elevation', ax=axes)
plt.title(title, ha='center', fontsize='large')
fig.text(0.5, 0.02, x_title, ha='center')
fig.text(0.0, 0.5, y_title, va='center', rotation='vertical')
glas_gdf = glas_gdf[glas_gdf['elevation'] < 400]
#print the normalized dataframe histogram.
fig, axes = plt.subplots()
glas_gdf.hist('elevation', ax=axes)
plt.title(title, ha='center', fontsize='large')
fig.text(0.5, 0.02, x_title, ha='center')
fig.text(0.0, 0.5, y_title, va='center', rotation='vertical')
# ### Plotting Data
#
# You could plot the two data sets separately. But, and that is the **beauty of the *IceFlow* library**, you can just as easily stack the two data sets and plot them overlapping in one figure:
glas_gdf_3031 = glas_gdf.to_crs('EPSG:3031')
plt.figure(figsize=(12,8), dpi= 120)
ax = plt.axes(projection=ccrs.SouthPolarStereo(central_longitude=0))
ax.coastlines(resolution='50m', color='black', linewidth=1)
ax.set_extent([-180, 180, -65, -90], ccrs.PlateCarree())
glas_gdf_3031.plot(column='elevation',
ax=ax,
markersize=1,
cmap='inferno',
legend=True,
legend_kwds={'label':'GLAH06 elevation (meters)'})
# We overlay our ATM1B dataframe on the plot above; you can zoom in to see where the two data sets overlap.
# Notice the difference in point density between the two data sets.
preib_gdf_3031 = preib_gdf.to_crs('EPSG:3031')
preib_gdf_3031.plot(ax=ax,
column='elevation',
markersize=1,
cmap='viridis',
legend=True,
legend_kwds={'label':'ATM1B elevation (meters)'})
plt.tight_layout()
# ### Plotting Multiple Years of Point Cloud Data
# +
# We group our dataframe by year
glas_by_year = glas_gdf.groupby([(glas_gdf.index.year)])
for key, group in glas_by_year:
group.plot(column='elevation',
markersize=0.5,
label=key,
legend=True,
legend_kwds={'label':f'GLAH06 {key} elevation (meters)'})
# -
# We can also stack 2 or more geopandas dataframes to have a unified dataframe for analysis.
stacked_df = gpd.GeoDataFrame(pd.concat( [preib_gdf, glas_gdf]))
display(stacked_df.head(), stacked_df.shape)
# # 7. Conclusions and Future Work
#
#
# # 8. References
# 1. [Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment](https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20120008479.pdf)
# 2. [Open Source Tools for Point Cloud Processing, Storage, Subsetting, and Visualization](https://sea.ucar.edu/sites/default/files/kbeam_seaconf18.pdf)
#
# # 9. Related Tools
#
# * [OpenAltimetry](https://openaltimetry.org/): Advanced discovery, processing, and visualization services for ICESat and ICESat-2 altimeter data
# * [ITS_LIVE](https://its-live.jpl.nasa.gov/): A NASA MEaSUREs project to provide automated, low latency, global glacier flow and elevation change data sets.
| 34,793 |
/notebooks/运用特征初探机器学习算法.ipynb
|
a54d3883a255e35ba1b93e51f7e094e70f47bec3
|
[] |
no_license
|
ShawnXiha/Two-Sigma-Connect-Rental-Listing-Inquiries
|
https://github.com/ShawnXiha/Two-Sigma-Connect-Rental-Listing-Inquiries
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 17,707 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas
import matplotlib
import seaborn as sn
iris=sn.load_dataset('iris')
iris.head()
iris.species.unique()
sn.set_style('dark')
sn.kdeplot(iris.loc[(iris['species']=='setosa'),
                    'sepal_length'], shade=True, color='b', label='setosa')
sn.kdeplot(iris.loc[(iris['species']=='virginica'),
                    'sepal_length'], shade=True, color='y', label='virginica')
from matplotlib import pyplot as plt
x=iris.petal_length
plt.hist(x,bins=20,color="yellow")
plt.title("petal_length")
plt.show()
# +
iris=sn.load_dataset('iris')
from matplotlib import pyplot as plt
x=iris.species
y=iris.petal_length
plt.bar(x,y)
plt.show()
# -
from matplotlib import pyplot as plt
plt.scatter(iris.sepal_length,iris.sepal_width)
sn.set()
plt.show()
sn.relplot(data=iris,x='sepal_length',y='sepal_width')
from matplotlib import pyplot as plt
import seaborn as sn
plt.scatter(iris.sepal_length,iris.petal_length)
sn.set_style('darkgrid')
plt.scatter(iris.sepal_length,iris.petal_length)
sn.set_style('whitegrid')
plt.show()
sn.relplot(data=iris,x="sepal_length",y='petal_length')
sn.kdeplot(iris.loc[(iris['species']=='setosa'),
           'sepal_length'], color='b', label='setosa')
x=iris.species
y=iris.petal_length
fig,ax=plt.subplots()
ax.plot(y)
# +
sn.set_style('darkgrid')
from matplotlib import pyplot as plt
iris=sn.load_dataset('iris')
sn.boxplot(x=iris['species'],y=iris['sepal_length'])
plt.show()
# -
# Cross-validation fragment truncated from earlier cells of this notebook: the imports,
# the cv_scores list and the "kf = model_selection." prefix are restored here.
# train_X, train_y, test_X and the runXGB helper are defined in the (omitted) cells above.
import pandas as pd
from sklearn import model_selection
from sklearn.metrics import log_loss

cv_scores = []
kf = model_selection.KFold(n_splits=5, shuffle=True, random_state=2016)
for dev_index, val_index in kf.split(range(train_X.shape[0])):
    dev_X, val_X = train_X[dev_index,:], train_X[val_index,:]
    dev_y, val_y = train_y[dev_index], train_y[val_index]
    preds, model = runXGB(dev_X, dev_y, val_X, val_y)
    cv_scores.append(log_loss(val_y, preds))
    print(cv_scores)
    break
preds, model = runXGB(train_X, train_y, test_X, num_rounds=170)
out_df = pd.DataFrame(preds)
out_df.columns = ["high", "medium", "low"]
out_df["listing_id"] = pd.read_pickle("../input/test.json.pkl").index.values
out_df.to_csv("../output/xgb_starter2.csv", index=False)
| 2,545 |
/machine-learning-tutorial/06_Decision_Tree/Exercise - Decision Tree.ipynb
|
f69511a6af398b4c8845445d8c197dfca5e61946
|
[] |
no_license
|
nandujawale/jupyter-notebooks
|
https://github.com/nandujawale/jupyter-notebooks
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 18,370 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
learning_rate = 0.01
training_epochs = 10
batch_size = 256
display_step = 1
x = tf.placeholder(tf.float32,[None,784])
y = tf.placeholder(tf.float32,[None,10])
n_hidden_1=128
n_hidden_2=64
n_hidden_3=12
weights = {
'encoder_h1': tf.Variable(tf.random_normal([784, n_hidden_1])),
'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'encoder_h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3])),
'decoder_h1': tf.Variable(tf.random_normal([n_hidden_3, n_hidden_2])),
'decoder_h2': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])),
'decoder_h3': tf.Variable(tf.random_normal([n_hidden_1, 784])),
}
biases = {
'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])),
'encoder_b3': tf.Variable(tf.random_normal([n_hidden_3])),
'decoder_b1': tf.Variable(tf.random_normal([n_hidden_2])),
'decoder_b2': tf.Variable(tf.random_normal([n_hidden_1])),
'decoder_b3': tf.Variable(tf.random_normal([784])),
}
def autoencoder(input):
layer_1=tf.nn.sigmoid(tf.add(tf.matmul(input, weights['encoder_h1']),biases['encoder_b1']))
layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']),biases['encoder_b2']))
layer_3 = tf.nn.sigmoid(tf.add(tf.matmul(layer_2, weights['encoder_h3']),biases['encoder_b3']))
return layer_3
def decoder(x):
layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']),biases['decoder_b1']))
layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']),biases['decoder_b2']))
layer_3 = tf.nn.sigmoid(tf.add(tf.matmul(layer_2, weights['decoder_h3']),biases['decoder_b3']))
return layer_3
autoencoder=autoencoder(x)
decoder=decoder(autoencoder)
prediction=decoder
loss=tf.reduce_mean(tf.pow(x-prediction, 2))
train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
total_batch = int(mnist.train.num_examples/batch_size)
for epoch in range(training_epochs):
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
_, c = sess.run([train_step, loss], feed_dict={x: batch_x})
if (epoch+1) % display_step == 0:
print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c))
print("Optimization Finished!")
# -
| 2,986 |
/Assignment_7.ipynb
|
61b9cb1070fc4bf6a9f771d6f54f0d78df46dcd6
|
[] |
no_license
|
zubaer005/CMSC6950_Assignments
|
https://github.com/zubaer005/CMSC6950_Assignments
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 251,086 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment 7 - Numpy and Matplotlib
# ### Due Thursday, June 3, 2021
#
# Your assignment should be handed in as an iPython/Jupyter notebook checked into your private GitHub repository `CMSC6950_Assignments` with the filename `Assignment_7.ipynb`.
#
# ## 1 Plotting and analyzing ARGO float data
#
# #### 1.1 Import numpy
#
import numpy as np
#
# #### 1.2 Use the shell command `wget` to download an example ARGO float profile from the North Atlantic.
# The data file's url is http://www.ldeo.columbia.edu/~rpa/argo_float_4901412.npz
# (you can run bash commands inside a Jupyter cell if they start with a `!`)
# !wget -c http://www.ldeo.columbia.edu/~rpa/argo_float_4901412.npz
# #### 1.3 Load the data file
data = np.load('argo_float_4901412.npz')
# #### 1.4 Extract the temperature, pressure and salinity arrays to arrays T, S, P and mask out invalid data (the nan values from missing points).
data = np.load('argo_float_4901412.npz')
list(data)
T = data['T']
S = data['S']
P = data['P']
T = np.ma.masked_array(T, mask=np.isnan(T))
S = np.ma.masked_array(S, mask=np.isnan(S))
P = np.ma.masked_array(P, mask=np.isnan(P))
# #### 1.5 Extract the date, lat, lon, and levels arrays.
# These arrays (named in the assignment prompt) are used by the plotting cells further down.
date = data['date']
lat = data['lat']
lon = data['lon']
levels = data['levels']
# #### 1.5 Note the shapes of T, S and P compared to these arrays. How do they line up?
T.shape
S.shape
P.shape
# #### 1.6 Load the necessary package for plotting using pyplot from matplotlib.
import matplotlib.pyplot as plt
# #### 1.7 Make a 1 x 3 array of plots for each column of data in T, S and P.
# The vertical scale should be the `levels` data. Flip the vertical axis direction so that levels increase downward on the plot. Each plot should have a line for each column of data. It will look messy. Make sure you label the axes and put a title on each subplot.
a = np.array([T,S,P])
# #### 1.8 Compute the mean and standard deviation of each of T, S and P at each depth in `levels`.
# +
fig, axs = plt.subplots(1, 3, figsize=(10, 6))
for i in range(75):
axs[0].plot(T[:, i], levels)
axs[1].plot(S[:, i], levels)
axs[2].plot(P[:, i], levels)
for i in range(3):
axs[i].invert_yaxis()
axs[0].set_xlabel('Temperature')
axs[1].set_xlabel('Salinity')
axs[2].set_xlabel('Pressure')
# -
Tmean = T.mean(axis=1)
Tstd = T.std(axis=1)
Smean = S.mean(axis=1)
Sstd = S.std(axis=1)
Pmean = P.mean(axis=1)
Pstd = P.std(axis=1)
# +
fig, axs = plt.subplots(1, 3, figsize=(10, 6))
axs[0].errorbar(Tmean, levels, xerr=Tstd)
axs[1].errorbar(Smean, levels, xerr=Sstd)
axs[2].errorbar(Pmean, levels, xerr=Pstd)
for i in range(3):
axs[i].invert_yaxis()
axs[0].set_xlabel('Temperature')
axs[1].set_xlabel('Salinity')
axs[2].set_xlabel('Pressure')
axs[0].set_ylabel('Depth Level')
# -
Tmean = T.mean(axis=0)
Tstd = T.std(axis=0)
Smean = S.mean(axis=0)
Sstd = S.std(axis=0)
Pmean = P.mean(axis=0)
Pstd = P.std(axis=0)
T_ma = np.ma.masked_invalid(data['T'])
S_ma = np.ma.masked_invalid(data['S'])
P_ma = np.ma.masked_invalid(data['P'])
P_ma.mean()
T_std= T_ma.std()
S_std= S_ma.std()
P_std= P_ma.std()
T = np.ma.masked_invalid(data['T'])
S = np.ma.masked_invalid(data['S'])
P = np.ma.masked_invalid(data['P'])
P.max()
# #### 1.9 Now make a similar plot, but show only the mean T, S and P at each depth. Show error bars on each plot using the standard deviations.
# Again, make sure you label the axes and put a title on each subplot.
plt.scatter(S, T, c=P)
plt.grid()
plt.colorbar()
# #### 1.10 Compute the mean and standard deviation of each of T, S and P for each time in `date`.
T.std();
T
# #### 1.11 Plot the mean T, S and P for each entry in *time*, now on a *3 x 1* subplot grid with time on the horizontal axis. Show error bars on each plot using the standard deviations.
# +
fig, axs = plt.subplots(3, 1, figsize=(10, 6))
axs[0].errorbar(date, Tmean, yerr=Tstd)
axs[1].errorbar(date, Smean, yerr=Sstd)
axs[2].errorbar(date, Pmean, yerr=Pstd)
axs[0].set_ylabel('Temperature')
axs[1].set_ylabel('Salinity')
axs[2].set_ylabel('Pressure')
axs[2].set_xlabel('Date')
# -
# #### 1.12 Create a scatter plot of the positions of the ARGO float data. Color the positions by the date. Add a grid overlay.
# Don't forget to label the axes!
# +
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter,DayLocator
from matplotlib import cm
data = np.arange(24.)+700000.
x = np.random.rand(24)
y = np.random.rand(24)
fig = plt.figure()
ax = fig.add_subplot(111)
smap = ax.scatter(x,y,s=10,c=data,edgecolors='none',marker='o',cmap=cm.jet)
ax.set_xlabel('')
cb = fig.colorbar(smap,orientation='horizontal',shrink=0.7,
ticks=DayLocator(interval=5),
format=DateFormatter('%b %d'))
# -
# ## 2 Matrix multiplication
# #### 2.1 Create a function called myMatrixMultiply that takes input matrices X and Y and computes their matrix product.
#
# *Matrix Multiplication.* In this exercise you will create two square matrices $A$ and $B$ with dimensions $n \times n$. You will then use [matrix multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication) to compute their product with the results being stored in matrix $C$. Thus, you will be computing the matrix equation $C = AB$. Note that matrix multiplication is different from element by element array multiplication. See the [wikipedia page](https://en.wikipedia.org/wiki/Matrix_multiplication) if you are unsure what matrix multiplication is.
#
# Use three nested `for` loops to *explicitly* perform the matrix multiplication. The inner most loop calculates element `C[i,j]` which is equal to the sum of `A[i,k]*B[k,j]` over all values of index `k` from `0` to `n-1`. The two outer loops iterate over `i` and `j`.
def myMatrixMultiply(A, B):
    n, _ = A.shape
    C = np.zeros((n, n))          # start from zeros so the inner loop can accumulate
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i,j] += A[i,k]*B[k,j]
    return C
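# Quick illustrative check (an addition, not part of the original assignment): the
# triple-loop product should agree with NumPy's dot() on a small random example.
A_small = np.random.rand(4, 4)
B_small = np.random.rand(4, 4)
print(np.allclose(myMatrixMultiply(A_small, B_small), np.dot(A_small, B_small)))  # expect True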
# #### 2.2 Create ones() square matrices for A and B with n = 100. Use the `%timeit` function to compute the matrix product AB using your function `myMatrixMultiply`.
n = 100
A = np.ones((n,n))
B = np.ones((n,n))
# %%timeit
C = myMatrixMultiply(A, B)
# #### 2.3 Now let's see how much faster Numpy's built in matrix multiplication routine is.
# In Numpy, matrix multiplication is done using the `dot()` function. Use the `%timeit` function to compute the matrix product AB for n = 100 using `dot()` and time it using the `%timeit` function.
#
# How much faster is using NumPy's `dot()` compared your `myMatrixMultiply` function?
# %%timeit
C = np.dot(A, B)
# Now time how long the NumPy `dot()` version takes for n = 1000
n = 1000
A = np.ones((n,n))
B = np.ones((n,n))
# And, finally, measure NumPy's `dot()` for n = 10000 (be patient, and definitely don't try this with `myMatrixMultiply` !)
# Your results should demonstrate to you that the run time for matrix-matrix multiplication scales as a power law of `n`.
#
# Assuming that the run time of `myMatrixMultiply` is proportional to the cube of `n`, approximately how long would you expect the run time to be for n=10000 in the original Python version?
#
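# Back-of-the-envelope estimate (illustrative, not a measured result): if the pure-Python
# triple loop scales as n**3, then going from n=100 to n=10000 multiplies the runtime by
# (10000/100)**3 = 1e6.  The value below is an assumed example; substitute your own
# %timeit measurement for n=100.
t_100_seconds = 8.0
print('estimated n=10000 runtime: about %.0f hours' % (t_100_seconds * (10000/100)**3 / 3600))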
# %%timeit
C = np.dot(A, B)
# NumPy allows you to do computations that would be way too slow with only Python statements.
| 7,428 |
/Test_NN.ipynb
|
563b72a4b69ed374a5959dfda982a936aa8070fa
|
[] |
no_license
|
kgao1997/ML_Projects
|
https://github.com/kgao1997/ML_Projects
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 7,445 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/YangTaeSung/CAU-MachineLearning/blob/master/assignment02.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="qp7Yxg6axN2E" colab_type="text"
# # Assignment02
# + id="mlKumgoTuKJS" colab_type="code" colab={}
import matplotlib.pyplot as plt
import numpy as np
import random
# + [markdown] id="ALO8uBnexTZi" colab_type="text"
# ## 1.Input data
# + id="GUGQd7XO824B" colab_type="code" colab={}
noiseNum = 10
x = np.linspace(0, 5, noiseNum)
y = 2 * x
# + [markdown] id="ZAmTp06tigkI" colab_type="text"
# - Number of noise points: 10
# - Defines the linear function
#
# + id="4OvkUKtSjCvw" colab_type="code" colab={}
noiseY = np.zeros(noiseNum)   # allocate the noisy-y array before filling it
for i in range(0,noiseNum):
    noise = random.uniform(-2,2)
    noiseY[i] = y[i] + noise
# + [markdown] id="J8vhwedHjDmi" colab_type="text"
# * Add a random deviation (uniform between -2 and 2, as in the code above) to each y value to produce noiseY
# + id="DxNfailWx4Iy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="26c9026c-a90e-43cd-bf47-b896d9b03d74"
plt.plot(x, y, "BLUE")
plt.scatter(x,noiseY,c='black', s=50)
plt.title('Linear function')
plt.xlabel('X AXIS')
plt.ylabel('Y AXIS')
plt.show()
# + [markdown] id="eV19HM6cyKFR" colab_type="text"
# ### [1] a straight line that is the graph of a linear function (in blue color)
# ### [2] a set of points that have random perturbations with respect to the straight line (in black color)
# + id="Qnb7HNAQI5Hc" colab_type="code" colab={}
thetaZero = 0.0
thetaOne = 0.0
cycle = 3000
learningLate = 0.03
# + [markdown] id="cCuPfPwdJJXY" colab_type="text"
# * Initialize the theta values (the starting point for learning)
# * Set the number of optimization cycles and the learning rate
# + id="ZHHIAaVmJeP-" colab_type="code" colab={}
iForGraph = []
jFor3D = []
thetaOneFor3D = []
thetaZeroFor3D = []
# + [markdown] id="l7LBZfREzTQ6" colab_type="text"
# * Define variables for storing the values needed for plotting
# + id="Okm4QgveKrO7" colab_type="code" outputId="e54b7e1b-b217-4546-a3ee-1d3dd28ea127" colab={"base_uri": "https://localhost:8080/", "height": 547}
for i in range(cycle):
h = thetaZero + thetaOne * x
j = np.sum((h - noiseY) ** 2) / (2 * noiseNum)
thetaZero = thetaZero - learningLate / noiseNum * np.sum((thetaOne * x - noiseY + thetaZero))
thetaOne = thetaOne - learningLate / noiseNum * np.sum((thetaOne * x - noiseY + thetaZero) * x)
iForGraph.append(i)
jFor3D.append(j)
thetaOneFor3D.append(thetaOne)
thetaZeroFor3D.append(thetaZero)
if i % 100 == 0:
print('cycle : {:10d} cost: {:10f} thetaZero: {:10f} thetaOne: {:10f}'.format(i, j,thetaZero, thetaOne))
finalY = thetaOne * x + thetaZero
# + [markdown] id="Tlh3YYWzz8pJ" colab_type="text"
# * Gradient descent algorithm
# * Use list.append to store each value at every optimization step
# * Print progress every 100 cycles
# * Define the optimal linear function finalY
#
#
#
# + [markdown] id="gXUoNc6F1HdY" colab_type="text"
# ## Output results
# + id="-XEt7SH_1MU_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="df1757cf-391e-4b53-899e-c97dfa1433ac"
# Plotting the output results
plt.plot(x, y, "BLUE")
plt.plot(x, finalY, "RED")
plt.scatter(x,noiseY,c='black', s=50)
plt.title('Linear function')
plt.xlabel('X AXIS')
plt.ylabel('Y AXIS')
plt.show()
# + [markdown] id="dgAZBOl51V-j" colab_type="text"
# ### [1] the set of points that have random perturbations with respect to the straight line (in black color)
# ### [2] a straight line that is the graph of a solution obtained by linear regression (in red color)
# + [markdown] id="UGyJeeJN181p" colab_type="text"
# ## Plotting the energy values
# + id="3xBiccan15U8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="570374fb-7cfd-423b-e05e-52152b1e71a5"
# Plotting the energy values
plt.plot(iForGraph, jFor3D, "BLUE")
plt.title('Ploting the energy values')
plt.xlabel('Optimization step')
plt.ylabel('Objective funtion')
plt.show()
# + [markdown] id="yJjhOrF92CwO" colab_type="text"
# ### [1] the value of the objective function at every optimization step by the gradient descent algorithm (in blue color)
# ### [2] the optimization should be performed until convergence
# + [markdown] id="7X9xs0452Iol" colab_type="text"
# ## Plotting the model parameters
# + id="usrYK0Xg2N79" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="405e5b56-c874-4705-fe58-e536c1defbca"
# Plotting the model parameters
plt.plot(iForGraph, thetaZeroFor3D, "RED")
plt.plot(iForGraph, thetaOneFor3D, "BLUE")
plt.title('Ploting the model parameters')
plt.xlabel('Optimization step')
plt.ylabel('Parameter')
plt.show()
# + [markdown] id="e9keH10c2OsN" colab_type="text"
# ### [1] the value of the model parameters at every optimization step (red color : theta0 , blue color : theta1)
# ### [2] the optimization should be performed until convergence
# + [markdown] id="zrnacYyK2aXR" colab_type="text"
# # ## + Additional practice (3D graph representation)
# + id="3CN158Ou83cj" colab_type="code" outputId="0ee4d7e8-2a1e-4e04-ef2f-ba4de98b7866" colab={"base_uri": "https://localhost:8080/", "height": 479}
# Plot theta0, theta1 and J together in 3D
fig1 = plt.figure()
ax = fig1.gca(projection='3d')
ax.scatter(thetaZeroFor3D,thetaOneFor3D,jFor3D)
plt.show()
# Plot theta0, theta1 and J separately in 3D
fig2 = plt.figure()
ax = fig2.gca(projection='3d')
ax.scatter(thetaZeroFor3D,0,0, c='red')
ax.scatter(0,thetaOneFor3D, c='blue')
ax.scatter(0,0,jFor3D)
plt.show()
1_c.real, fft2_sinus1_c.imag)
# visualisation
affichage_14([real_sinus1[0:5,0:10],imag_sinus1[0:5,0:10]],['partie réelle','partie imaginaire'])
affichage_14([module_affichage(module_sinus1),phase_sinus1],['module DFT centrée','Phase DFT centrée'])
# -
#
# ### Question 2:
#
# We will focus on the magnitude (module) of the centered DFT. **Note:** Always use ```module_affichage(image_module)``` to display magnitude images on screen with the ```affichage_14()``` function.
#
# 1. Write a function ```module_fft_c()``` that returns the magnitude of the centered DFT of an image.
# 2. Note the location of the non-zero coefficients in the magnitude image of the centered DFT of sinus1.png. Where are they located? Extract these coefficients and plot them with the ```affichage_14()``` function. **Note:** You can extract the coefficients by taking the corresponding row of the magnitude image, or simply by taking the non-zero coefficients of the image with ```coeffs = image[image>0]```
# 3. Repeat these operations on the images sinus2.png and sinus3.png: display the magnitude images and plot the non-zero coefficients
# +
def module_fft_c(image):
image_fft = np.fft.fft2(image)
image_fft = np.fft.fftshift(image_fft)
module = np.sqrt(image_fft.real**2 + image_fft.imag**2)
return module
module_sinus2 = module_fft_c(image_sinus2)
module_sinus3 = module_fft_c(image_sinus3)
affichage_14( [module_affichage(module_sinus2),module_affichage(module_sinus3)], ['module sinus2','module sinus3'])
affichage_14( [module_sinus2[module_sinus2 > 0], module_sinus3[module_sinus3 > 0]], ['profil module sinus2','profil module sinus3'])
# -
# ## Exercise 2: Some DFTs of simple images
#
# 1. **DFT of a diagonal sinusoid:** Display the image `sinrot.png` together with the magnitude of its centered DFT and comment.
#
# 2. **DFT of a Gaussian:** Display the image `gaussienne.png` next to the magnitude of its centered DFT.
# Then pick the middle row or column of both images and compare their profiles. Comment.
# +
# Code to complete
module_fft_sinrot = module_fft_c(image_sinrot)
affichage_14( [image_sinrot, module_affichage(module_fft_sinrot)], ['image sinrot', 'module DFT centrée sinrot'])
module_fft_gaussienne = module_fft_c(image_gaussienne)
affichage_14( [image_gaussienne, module_affichage(module_fft_gaussienne)], ['image gaussienne', 'module DFT centrée gaussienne'])
n = int(module_fft_gaussienne.shape[0]/2)
fft_sinrot_n = module_fft_sinrot[n]
fft_gaussienne_n = module_fft_gaussienne[n]
affichage_14( [fft_sinrot_n, fft_gaussienne_n], ['spectre horizontale sinrot', 'spectre horizontale gaussienne'])
# -
# ### Comments:
# - In the DFT of the sinusoid we see three points of high intensity, which correspond to the frequency peaks of the sinusoid
# - whereas the DFT of the Gaussian also looks like a Gaussian
# ## Exercise 3: Properties of the DFT
#
# This exercise highlights some properties of the DFT:
#
# 1. Observe and interpret the magnitude of the centered DFT of the image rectangle.png. Plot the grey-level profile along row 65.
# 2. Observe and interpret the magnitude of the centered DFT of the image rotate.png. This image is the previous one rotated by 30 degrees.
# 3. Compute the sum of the two previous images using the ```somme_images()``` function. Observe and interpret the magnitude of the centered DFT of the resulting image.
#
# +
# code to complete
def somme_images(img1,img2):
# convert to float
img1 = np.array(img1, dtype='float32')
img2 = np.array(img2, dtype='float32')
# rescale :
img1 /= np.sum(img1)
img2 /= np.sum(img2)
return img1 + img2
module_fft_rectangle = module_fft_c(image_rectangle)
affichage_14( [image_rectangle, module_affichage(module_fft_rectangle),image_rectangle[64]], ['image rectangle', 'module DFT centrée rectangle','spectre ligne 65 DFT rectangle'])
module_fft_rotate = module_fft_c(image_rotate)
affichage_14( [image_rotate, module_affichage(module_fft_rotate)], ['image rotate', 'module DFT centrée rotate'])
image_somme = somme_images(image_rectangle, np.pad(image_rotate, (image_rectangle.shape[0] - image_rotate.shape[0])//2))
module_fft_somme = module_fft_c(image_somme)
affichage_14( [image_somme, module_affichage(module_fft_somme)], ['image somme', 'module DFT centrée somme'])
# -
# ### **Comments:**
# - the DFT of the square corresponds to a cardinal sine (sinc) in both directions
# - the DFT of the rotated image corresponds to a rotation of the DFT of the image (up to aliasing, which introduces some noise)
# - and finally, the DFT of the sum of two images corresponds to the sum of the two DFTs
# ## Exercise 4: Understanding spectra
#
# Display the images ```texture1.png```, ```texture2.png```, ```h.png``` and their DFTs. Comment on the DFTs.
affichage_14( [image_texture1, image_texture2, image_h], ['image texture1', 'image texture2','image h'])
module_fft_texture1 = module_fft_c(image_texture1)
module_fft_texture2 = module_fft_c(image_texture2)
module_fft_h = module_fft_c(image_h)
affichage_14( [module_affichage(module_fft_texture1), module_affichage(module_fft_texture2), module_affichage(module_fft_h)], ['module fft texture1', 'module fft texture2','module fft h'])
# ### **Comments:**
# - images containing repeating patterns tend to have a DFT with strong peaks at the frequencies of the pattern
# - simple images (with no repeating pattern) have a very intense peak at the center.
# ## Exercise 5: Filtering in the frequency domain
#
#
# 1. Observe the image ```pulse.png```. Which (two-dimensional) signal does it correspond to? Observe the spectrum of its centered DFT. Interpret the result.
# 2. Observe the image ```passe_bas.png```. This image is in fact the frequency response of an ideal low-pass filter ```PB```. How can you tell? Plot this profile and that of the previous DFT (same column).
# 3. Let ```A = image_pulse```. Compute $DFT(A) \times PB$ and interpret the result by displaying the magnitude of this image.
# 4. Compute the inverse DFT (function ```ifft2()```) of the image $DFT(A) \times PB$ to obtain the reconstructed image ```A′```. Display and interpret the magnitude of this image as well as the profile curve from one of its rows. Comment.
# 5. Carry out steps 1, 2 and 3 with the following filters: ```passe_haut.png``` and ```passe_bande.png```, respectively the frequency responses of a high-pass filter ```PH``` and a band-pass filter ```PB```.
# +
# code to complete
module_fft_pulse = module_fft_c(image_pulse)
affichage_14( [image_pulse, module_affichage(module_fft_pulse), module_fft_pulse[module_fft_pulse>0]], ['image pulse', 'image module DFT pulse', 'spectre pulse'])
# -
# passe bas
module_fft_passebas = module_fft_c(image_passe_bas)
affichage_14([image_passe_bas, module_affichage(module_fft_passebas), module_fft_passebas[module_fft_passebas > 0] ],['profil image passe bas','spectre pulse','image passe bas'])
fft_pulse = np.fft.fftshift((np.fft.fft2(image_pulse)))
fft_pb = image_passe_bas
a_prime = fft_pulse * fft_pb
im = np.fft.ifft2(a_prime)
affichage_14([abs(im), module_fft_c(a_prime), abs(a_prime[a_prime > 0])],['module pulse x passe bas', 'DFT pulse x PB', 'profil DFT pulse x PB'])
module_fft_passehaut = module_fft_c(image_passe_haut)
affichage_14([image_passe_haut, module_affichage(module_fft_passehaut), module_fft_passehaut[module_fft_passehaut > 0]],['profil image passe haut','spectre pulse','image passe haut'])
fft_pulse = np.fft.fftshift((np.fft.fft2(image_pulse)))
fft_pb = image_passe_haut
a_prime = fft_pulse * fft_pb
im = np.fft.ifft2(a_prime)
affichage_14([abs(im), module_fft_c(a_prime), a_prime[a_prime > 0]],['module pulse x passe haut', 'DFT pulse x PH', 'profil DFT pulse x PH'])
# +
module_fft_bande = module_fft_c(image_passe_bande)
affichage_14([image_passe_bande, module_fft_bande, module_fft_bande[module_fft_bande > 0]],['profil image passe bande','spectre pulse','image passe bande'])
fft_pulse = np.fft.fftshift((np.fft.fft2(image_pulse)))
fft_pb = image_passe_bande
a_prime = fft_pulse * fft_pb
im = np.fft.ifft2(a_prime)
affichage_14([abs(im), module_affichage(module_fft_c(a_prime)), a_prime[a_prime > 0]],['module pulse x passe bande', 'DFT pulse x PB', 'profil DFT pulse x PB'])
# -
# ### **Comments:**
# - the low-pass filter removes the high frequencies and keeps only the low frequencies
# - the high-pass filter emphasizes the high frequencies, notably the edges
| 14,404 |
/C05_Data_Science_with_Python/C05T02_Prepare_and_Explore_the_Data/.ipynb_checkpoints/C05T02_Prepare_and_Explore_the_Data-checkpoint.ipynb
|
802f90eb426fe89a4b620a67984a4869f6d02fee
|
[] |
no_license
|
jlnerd/UTAustin_Data_Analytics
|
https://github.com/jlnerd/UTAustin_Data_Analytics
| 0 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 2,749,909 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3.9.12 ('smartscope-0.61')
# language: python
# name: python3
# ---
# # Patterns for atlas acquisition within a given radius
import numpy as np
import matplotlib.pyplot as plt
from typing import List
# %matplotlib inline
# Here we set up the parameters and a mask of the area for acquisition.
full_radius_in_um = 990
atlas_radius_in_um = 600
atlas_imsize_x = 5700
atlas_imsize_y = 4096
pixel_size_in_angst = 650
pixel_size_um = 250/10_000
overlap = 0.05
imsize_x_um = atlas_imsize_x * pixel_size_um
imsize_y_um = atlas_imsize_y * pixel_size_um
imsize = np.array([imsize_x_um,imsize_y_um])
# Divide the stage into tiles of different sizes and generate the mask. It's a padded array of zeros with 1s where we would like to acquire
# +
def generate_tile_mask(radius:float,imsize_x_um:float,imsize_y_um:float, tile_overlap_fraction:float) -> np.ndarray:
padded_max_axis = max([int(radius*2//imsize_x_um*(1-tile_overlap_fraction)), int(radius*2//imsize_y_um*(1-tile_overlap_fraction))]) + 2
lattice_mask = np.zeros((padded_max_axis,padded_max_axis))
center = np.array(lattice_mask.shape)//2
lattice_stage = lattice_mask.copy()
for x,xval in enumerate(lattice_mask):
for y,yval in enumerate(xval):
coord = (np.array([x,y])-center) * imsize
dist = np.sqrt(np.sum(np.power(coord,2)))
if dist > atlas_radius_in_um:
continue
lattice_mask[x,y] = 1
return lattice_mask
lattice_mask = generate_tile_mask(radius=full_radius_in_um,imsize_x_um=imsize_x_um,imsize_y_um=imsize_y_um,tile_overlap_fraction=overlap)
center = np.array(lattice_mask.shape)//2
print(lattice_mask)
# -
# ## Spiral Pattern
# This section generates a spiral pattern from the center of the stage outwards to cover the mask.
# +
def make_spiral_pattern_in_mask(mask:np.ndarray) -> List:
start = np.array(mask.shape) // 2
ind = 1
movements = []
while True:
if ind+1 >= max(mask.shape):
break
for i in [np.array([1,0]),np.array([0,1])]:
axis = np.where(i == 1)[0]
temp_ind = ind
if ind % 2 == 0:
temp_ind *=-1
end=start+(i*temp_ind)
if start[0] == end[0]:
order= np.sort(np.array([start[1],end[1]]))
mov_slice = (start[0],slice(order[0],order[1],None))
else:
order= np.sort(np.array([start[0],end[0]]))
mov_slice = (slice(order[0],order[1],None), start[1])
if (mask[mov_slice] != 0).any():
indexes = np.where(mask[mov_slice] == 1)[0]
new_start, new_end = start.copy(), end.copy()
new_start[axis] = start[axis]+indexes[0] if temp_ind > 0 else start[axis]-indexes[0]
new_end[axis] = new_start[axis] + len(indexes) -1 if temp_ind > 0 else new_start[axis] - len(indexes) +1
movements.append([new_start,new_end])
start=end
ind += 1
return movements
spiral_movments=make_spiral_pattern_in_mask(lattice_mask)
# -
# Then we plot the different stage movements back onto the map of the stage.
# +
from matplotlib.patches import Circle, Rectangle
def plot_movements_on_stage(movements,imsize,overlap):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
limits = Circle([0,0],atlas_radius_in_um,fill=False)
ax.add_patch(limits)
ax.set_ylim(-full_radius_in_um,full_radius_in_um)
ax.set_xlim(-full_radius_in_um,full_radius_in_um)
for i, mov in enumerate(movements):
a = (np.array(mov)-center) * imsize*(1-overlap)
ax.text(a[0,0],a[0,1], i+1)
ax.plot(a[:,0],a[:,1], marker='o')
plot_movements_on_stage(movements=spiral_movments,imsize=imsize,overlap=overlap)
# -
# ## Serpent Pattern
# This one is simpler but reuses some pieces of the spiral logic.
def make_serpent_pattern_in_mask(mask:np.ndarray) -> List:
movement_direction = 1
movements =[]
for ind,line in enumerate(mask):
if (line != 0).any():
indexes = np.where(line == 1)[0]
start = [ind,indexes[0] if movement_direction == 1 else indexes[-1]]
end = [ind, indexes[-1] if movement_direction == 1 else indexes[0]]
movements.append([start,end])
movement_direction *= -1
return movements
serpent_movements = make_serpent_pattern_in_mask(lattice_mask)
plot_movements_on_stage(serpent_movements,imsize,overlap)
# <font color='red'> There isn't much to see in most of the plots (no really strong, obvious correlation), but a few of the plots seem to show some interesting trends. Below we pull out those plots of interest.
len(headers_dict['labels']+headers_dict['features'])
for x_idx, y_idx in [[23, 1],
[7,6],
[8,6],
[10,11],
[-7,-2]]:
x_header = (headers_dict['labels']+headers_dict['features'])[x_idx]
y_header = (headers_dict['labels']+headers_dict['features'])[y_idx]
plt.plot(df[x_header], df[y_header], 'o')
plt.xlabel(x_header)
plt.ylabel(y_header)
plt.show()
# <font color = 'red'> In the scatter plots above, we can see some interesting correlations in the data; in particular, the features related to "PAY" show some strong linear correlations.
# <font color = 'red'> Below, we plot some box plots for the data to inspect the stats
n_cols = 3
fig, ax_list = plt.subplots(1, n_cols)
p=0
for header in headers_dict['labels']+headers_dict['features']:
if p >= n_cols:
fig.tight_layout(rect=(0,0,2.5,1))
plt.show()
fig, ax_list = plt.subplots(1, n_cols)
p=0
ax_list[p].boxplot(df[header])
ax_list[p].set_xlabel(header)
p+=1
fig.tight_layout(rect=(0,0,2.5,1))
plt.show()
# ## Correlations
# +
fig, ax = plt.subplots(1,1)
cax = ax.matshow(df.corr(),vmin=-1,vmax=1)
ax.set_xticks([i for i in range(len(df.columns))])
ax.set_xticklabels(df.columns,rotation='vertical')
ax.set_yticks([i for i in range(len(df.columns))])
ax.set_yticklabels(df.columns)
fig.colorbar(cax)
fig.tight_layout(rect=(0,0,2.5,2.5))
plt.show()
# -
# <font color='red'> the correlation matrix shown above highlights some interesting relationships between the features. Specifically, we can see that this matrix confirms our previous statement about the "PAY_#" features being strongly correlated. We can also see that the "BILL_AMT#" features are strongly correlated and generally have a more uniform correlation coefficient among each other, compared to the "PAY_#" features.
#
# To gain further insight into the correlations between the label of interest and the features, let's slice out the "default payment next month" correlations and look at those in the box plot below
# +
df_corr_for_label = df.corr()['default payment next month'].drop(index='default payment next month').sort_values()
fig, ax = plt.subplots(1,1)
ax.bar(df_corr_for_label.index, df_corr_for_label)
ax.set_xticklabels(df_corr_for_label.index, rotation = 'vertical')
ax.set_ylabel('default payment next month\ncorrelation')
plt.show()
# -
# <font color='red'> Here we can see that the "PAY_#" features have the strongest correlations with defaults, with a positive correlation coeff.
# ## Covariances
# <font color='red'> Because our features aren't normalized/scaled, the covariance values can be drastically different in scale. To mitigate this, we can transform the data using a standard scaler on each column, where this scaler standardizes columns by removing the mean and scaling to unit variance.
import sklearn, sklearn.preprocessing
# +
scalar = sklearn.preprocessing.StandardScaler()
scalar.fit(df)
df_scaled = pd.DataFrame(scalar.transform(df),columns = df.columns)
# -
# <font color='red'> Now we're ready to look at the covariance.
# +
fig, ax = plt.subplots(1,1)
cax = ax.matshow(df_scaled.cov())
ax.set_xticks([i for i in range(len(df.columns))])
ax.set_xticklabels(df.columns,rotation='vertical')
ax.set_yticks([i for i in range(len(df.columns))])
ax.set_yticklabels(df.columns)
fig.colorbar(cax)
fig.tight_layout(rect=(0,0,2.5,2.5))
plt.show()
# +
df_cov_for_label = df_scaled.cov()['default payment next month'].drop(index='default payment next month').sort_values()
fig, ax = plt.subplots(1,1)
ax.bar(df_cov_for_label.index, df_cov_for_label)
ax.set_xticklabels(df_cov_for_label.index, rotation = 'vertical')
ax.set_ylabel('default payment next month\ncovariance')
plt.show()
# -
# <font color='red'> Overall, the trends look pretty similar to those observed in the correlation coefficients.
| 8,915 |
/collision/Physical_fusion_collision.ipynb
|
8520042d7a2a9e5e22bb62eb88461b123e4db2e5
|
[
"MIT"
] |
permissive
|
PhysicsNAS/PhysicsNAS
|
https://github.com/PhysicsNAS/PhysicsNAS
| 12 | 3 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 6,796 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Softmax Classification
#
# Logistic regression, SVMs and similar models need multiple classifiers to handle multi-class problems, whereas softmax handles multi-class classification in a different way. Below we introduce the softmax classifier by comparing it with logistic regression.
# ## Model: a combination of several linear models
#
# Whereas logistic regression applies a logistic function $g(z)=\dfrac{1}{1+e^{-z}}$ on top of a single linear model $\theta^TX$ to perform binary classification, softmax combines $n$ regression models (where $n$ equals the number of classes) to perform multi-class classification. The model is:
#
# > $P(i)=\dfrac{\exp(\theta_i^Tx)}{\sum_{k=1}^K \exp(\theta_k^Tx)}$
#
# where $P(i)$ is the probability of belonging to class $i$. Below is a deep neural network whose last layer is a softmax layer:
#
# 
#
# The three yellow neurons above compute the 3 linear functions.
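# A minimal NumPy sketch of the model above (an added illustration; the array shapes are chosen just for the example): each class $i$ gets its own parameter vector $\theta_i$, and the scores $\theta_i^T x$ are normalized with the softmax formula.
import numpy as np

def softmax_probs(theta, x):
    z = theta.T @ x          # K linear scores theta_i^T x
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()       # P(i) = exp(z_i) / sum_k exp(z_k)

theta = np.random.randn(4, 3)   # 4 features, K = 3 classes (assumed sizes)
x = np.random.randn(4)
print(softmax_probs(theta, x))  # three probabilities summing to 1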
# ## Cost function
| 695 |
/LSTM_Reuter.ipynb
|
99b479f59c485d687ad7706c9f429e5f82933350
|
[] |
no_license
|
SilverGoeun/AimforReasoning
|
https://github.com/SilverGoeun/AimforReasoning
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 30,738 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Online Retails Purchase
# ### Introduction:
#
#
#
# ### Step 1. Import the necessary libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# ### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/Visualization/Online_Retail/Online_Retail.csv).
# ### Step 3. Assign it to a variable called online_rt
online_rt = pd.read_excel('C:/Github/pandas_exercises/07_Visualization/Online_Retail/Online_Retail.xlsx')
online_rt.info()
online_rt.head()
# ### Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
online_rt[online_rt.Country != 'United Kingdom'].groupby('Country')['Quantity'].sum().sort_values(ascending=False).head(10).plot.bar()
plt.ylabel('Quantity')
# ### Step 5. Exclude negative Quatity entries
online_rt = online_rt[online_rt.Quantity >= 0]
# ### Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries
top3_countries = online_rt[online_rt.Country != 'United Kingdom'].groupby('Country')['Quantity'].sum().sort_values(ascending=False).head(3).index
online_rt['totalprice'] = online_rt['Quantity'] * online_rt['UnitPrice']
df = online_rt[online_rt.Country.isin(top3_countries)].groupby('CustomerID')[['Quantity', 'totalprice']].sum()
plt.scatter(df.Quantity, df.totalprice/df.Quantity)
# ### BONUS: Create your own question and answer it.
# +
category = np.max (Y_train) +1
print (category, '카테고리')
print (len (X_train), '학습용 뉴스 기사')
print (len (X_test), '테스트용 뉴스 기사')
print (X_train[0])
# + id="DlBhe5W_Qoqp" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594032750330, "user_tz": -540, "elapsed": 1981, "user": {"displayName": "\uace0\uc740", "photoUrl": "", "userId": "01085054918018336986"}}
from keras.preprocessing import sequence
from keras.utils import np_utils  # np_utils is used below; its import was presumably lost with the truncated cells above
X_train = sequence.pad_sequences (X_train, maxlen=100)
X_test = sequence.pad_sequences (X_test, maxlen=100)
Y_train = np_utils.to_categorical(Y_train)
Y_test = np_utils.to_categorical(Y_test)
# + id="lwE9ryQ8TRjm" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594032750890, "user_tz": -540, "elapsed": 2521, "user": {"displayName": "\uace0\uc740", "photoUrl": "", "userId": "01085054918018336986"}}
model = Sequential()
model.add(Embedding (1000, 100))
model.add(LSTM(100, activation='tanh'))
model.add(Dense(46, activation= 'softmax'))
# + id="fgIs-XBATrEs" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594032750892, "user_tz": -540, "elapsed": 2508, "user": {"displayName": "\uace0\uc740", "photoUrl": "", "userId": "01085054918018336986"}}
model.compile (loss= 'categorical_crossentropy',
optimizer= 'adam',
metrics = ['accuracy'])
# + id="BsgQpSmsT8sC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 896} executionInfo={"status": "ok", "timestamp": 1594033238358, "user_tz": -540, "elapsed": 489943, "user": {"displayName": "\uace0\uc740", "photoUrl": "", "userId": "01085054918018336986"}} outputId="4f3b4967-7510-434f-edc4-972b9c39d4b2"
history = model.fit(X_train, Y_train, batch_size=100, epochs=20, validation_data= (X_test, Y_test))
print ("\n Test Accuracy: %.4f" % (model.evaluate(X_test, Y_test)[1]))
# + id="CCF61lJ6VCTt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 279} executionInfo={"status": "ok", "timestamp": 1594033274452, "user_tz": -540, "elapsed": 668, "user": {"displayName": "\uace0\uc740", "photoUrl": "", "userId": "01085054918018336986"}} outputId="00874c02-542a-4b6c-8ebc-8ca5b600289a"
y_vloss = history.history['val_loss']
y_loss = history.history['loss']
x_len = np.arange(len(y_loss))
plt.plot(x_len, y_vloss, marker='.', c="red", label='Testset_loss')
plt.plot(x_len, y_loss, marker='.', c="blue", label='Trainset_loss')
plt.legend(loc='upper right')
plt.grid()
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
# + id="v7un0lO7yXrt" colab_type="code" colab={}
| 4,463 |
/.ipynb_checkpoints/Assign_7-checkpoint.ipynb
|
b6fe24cf9430bc4e1878802b77f26d83ebe5e354
|
[] |
no_license
|
RphlFrmnt/PHYS512
|
https://github.com/RphlFrmnt/PHYS512
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 209,185 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Header
import numpy as np
import matplotlib.pyplot as plt
import spinmob as s
from scipy import sparse
import os
from mpl_toolkits import mplot3d
# %pylab inline
pylab.rcParams['figure.figsize'] = (10, 10)
# check directory
os.chdir("D:\Raphael\Dropbox\Mcgill\JupyterNotebook\PHYS512")
os.getcwd()
# # Problem 1
# Earlier in the course, I claimed that the leapfrog scheme preserves energy. Show that this is true as long as the CFL condition is satisfied. Recall that the leapfrog scheme is:
# $$ \frac{f(t + dt, x) − f(t − dt, x)}{2dt} = -v \frac{f(t, x + dx) − f(t, x − dx)}{2dx}$$
# In particular, we use calculate the spatial derivative at a point halfway between the times at which we evaluate the solution. You may assume the solution will look something like $f(x, t) = ξ^t \exp(ikx)$ where in general ξ will be complex and a function of k.
# Using $f(x, t) = ξ^t \exp(ikx)$:
# $$ \frac{f(t + dt, x) − f(t − dt, x)}{2dt} = -v \frac{f(t, x + dx) − f(t, x − dx)}{2dx}$$
# $$ \frac{ξ(k)^{t + dt} \exp(ikx) − ξ(k)^{t - dt} \exp(ikx)}{2dt} = -v \frac{ξ(k)^t \exp(ik(x + dx)) − ξ(k)^t \exp(ik(x - dx))}{2dx}$$
# $$ \frac{ξ(k)^t ξ(k)^{dt} \exp(ikx) − ξ(k)^t ξ(k)^{-dt} \exp(ikx)}{2dt} = -v \frac{ξ(k)^t \exp(ikx)\exp(ikdx) − ξ(k)^t \exp(ikx)\exp(-ikdx)}{2dx}$$
# $$ \frac{ξ(k)^t \exp(ikx) (ξ(k)^{dt} − ξ(k)^{-dt})}{2dt} = -v \frac{ξ(k)^t \exp(ikx)(\exp(ikdx) − \exp(-ikdx))}{2dx}$$
# $$ \frac{ξ(k)^{dt} − ξ(k)^{-dt}}{dt} = -v \frac{\exp(ikdx) − \exp(-ikdx)}{dx}$$
# $$ \frac{vdt}{dx} = -\frac{ξ(k)^{dt} − ξ(k)^{-dt}}{\exp(ikdx) − \exp(-ikdx)}$$
# $$ \frac{vdt}{dx} = -\frac{ξ(k)^{dt} − ξ(k)^{-dt}}{2i\sin(kdx)}$$
# $$ \frac{vdt}{dx} = \frac{i(ξ(k)^{dt} − ξ(k)^{-dt})}{2\sin(kdx)}$$
#
# Now write $ξ(k)^{dt} = re^{i\phi}$ (so $ξ(k)^{-dt} = r^{-1}e^{-i\phi}$), where $r = |ξ(k)^{dt}|$. The amplitude of the solution, and hence the energy, is preserved exactly when $r = 1$. Setting $r = 1$ gives $ξ(k)^{dt} - ξ(k)^{-dt} = 2i\sin\phi$, and the relation above becomes:
#
# $$ \frac{vdt}{dx} = \frac{i(2i\sin\phi)}{2\sin(kdx)} = -\frac{\sin\phi}{\sin(kdx)}$$
# $$ \sin\phi = -\frac{vdt}{dx}\sin(kdx)$$
#
# A real phase $\phi$, i.e. a unit-modulus $ξ(k)^{dt}$, exists for every wavenumber $k$ exactly when $\left|\frac{vdt}{dx}\sin(kdx)\right| \leq 1$ for all $k$, which is guaranteed by the CFL condition:
# $$ \frac{vdt}{dx} \leq 1$$
# If instead $\frac{vdt}{dx} > 1$, then for some $k$ no unit-modulus solution exists, $r$ must differ from 1, and the amplitude (and hence the energy) grows. So the leapfrog scheme preserves energy as long as the CFL condition is satisfied.
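# As a quick numerical cross-check of the argument above (an added illustration, not part of the original hand-in): the leapfrog relation can be rearranged into the quadratic $\left(ξ^{dt}\right)^2 + 2i\,\alpha\sin(kdx)\,ξ^{dt} - 1 = 0$ with $\alpha = vdt/dx$, and its roots should have modulus 1 for every $k$ whenever $\alpha \leq 1$.
import numpy as np

def max_growth(alpha, k_dx=np.linspace(0.01, np.pi, 500)):
    s = alpha * np.sin(k_dx)
    # roots of xi**2 + 2j*s*xi - 1 = 0
    roots = np.stack([-1j*s + np.sqrt(1 - s**2 + 0j),
                      -1j*s - np.sqrt(1 - s**2 + 0j)])
    return np.abs(roots).max()

print(max_growth(0.9))   # ~1.0 -> amplitude (energy) preserved
print(max_growth(1.5))   # > 1  -> unstable, amplitude grows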
# # Problem 2
# Let’s use conjugate gradient to solve Laplace’s equation with boundary conditions using Green’s functions. With a little cleverness, we can actually do this without needing to set an exterior boundary. For
# speed, let’s do this in two dimensions.
#
# **Part a**: First, we’ll figure out what V (r) looks like from a point charge. Remember that a 2D point charge looks like a 3D line charge and so will have a log behavior rather than 1/r. We also need to be careful about the singularity at 0. While this can actually be written down properly with some effort, a much easier trick is to note that away from the origin, each
# point is the average of its neighbors. By knowing the potential at (1,0) has to be the average of its neighbors, we can work out the potential at the origin. We’ll ignore $\epsilon_0$ and set ρ to be the potential minus the average of neighbors. If you rescale your potential so that ρ[0, 0] = 1 and V [0, 0] = 1 (recall that we can add an arbitrary offset to a potential without affecting the physics of the situation), what is the potential V [1, 0] and V [2, 0]? To sanity check your answer, the potential V [5, 0] should be around -1.16.
# First let's define a general potential function:
def V(x,y,k=1):
r = np.sqrt(x**2 + y**2)
return k*np.log(r)
# +
# Creating a matrix of element around (0,0):
size = 10
V_m = np.zeros(shape=(2*size+1,2*size+1,3))
for i in range(2*size+1):
for j in range(2*size+1):
V_m[i-size,j-size] = [i-size,j-size,V(i-size,j-size)]
#Averaging to obtain V_m(0,0)
V_m[0,0][2] = 4*V_m[1,0][2] - (V_m[1,1][2] + V_m[1,-1][2] + V_m[2,0][2])
print(" => V(0,0) before scaling:")
print("V(0,0) = ",V_m[0,0][2])
#Obtaining Rescaling factor R s.t. V_m(0,0)=1
R = 1/V_m[0,0][2]
print(" => Scaling factor:")
print("R = ",R)
#Rescaling V_m according to R
for i in range(2*size+1):
for j in range(2*size+1):
V_m[i,j][2] = V_m[i,j][2] * R
print(" => V(0,0) after scaling:")
print("V(0,0) = ",V_m[0,0][2])
#Sanity Check
print(" --- ")
print(" => Sanity check: V(5,0) = -1.16 ")
print("V(5,0) = ", V_m[5,0][2])
print(" --- ")
print(" => Plotting for visual check")
#plot in a 3D plot
V_m_plot = np.zeros(shape = ((2*size+1)**2,3)) #Make V_m convinient for plotting.
n = 0
for i in range(2*size+1):
for j in range(2*size+1):
V_m_plot[n] = V_m[i-size,j-size]
n = n+1
x= np.transpose(V_m_plot)[0]
y= np.transpose(V_m_plot)[1]
z= np.transpose(V_m_plot)[2]
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot3D(x,y,z,"r.")
ax.view_init(20, 60)
# -
# **Part b**: Now that we know what the potential from a point charge is, we can calculate the potential everywhere in space from an arbitrary charge distribution by convolving the charge with our hard-won but eternal Green’s function. We can write this as V = G ~ ρ, where G is the Green’s function. Even though we don’t usually think of it that way, this is indeed a matrix
# equation and so can be solved using conjugate-gradient (or any other tool you care to use). In this case, though, we start with the potential on some surfaces, and want to find the charge distribution on those same surfaces. Write a conjugate-gradient solver that solves for ρ on a mask given V on that mask. Use your solver to find the charge on a square box held at a potential of 1. Plot the charge density along one side of the box.
# +
# define square box:
def sq_box(side=5,size=10):
assert side<size
box = np.zeros(shape=(2*size+1,2*size+1,3))
for i in range(2*size+1):
for j in range(2*size+1):
if np.abs(i-size) <= side and np.abs(j-size) <= side:
box[i,j] = [i-size,j-size,1]
else:
box[i,j] = [i-size,j-size,0]
return box
box = sq_box()
#plot in a 3D plot to check box
box_plot = np.zeros(shape = ((2*size+1)**2,3)) # Flatten box into (x, y, value) rows, convenient for plotting.
n = 0
for i in range(2*size+1):
for j in range(2*size+1):
box_plot[n] = box[i-size,j-size]
n = n+1
x= np.transpose(box_plot)[0]
y= np.transpose(box_plot)[1]
z= np.transpose(box_plot)[2]
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot3D(x,y,z,"r.")
ax.view_init(20, 60)
# -
def simple_cg(x,b,A,n=20):
    # Basic conjugate-gradient solver for A x = b (A symmetric positive-definite),
    # starting from the initial guess x.
    r = b-np.dot(A,x)
    p = r.copy()
    rTr = np.dot(r,r)
    for it in range(n):
        print("iteration",it,"with residual",rTr)
        Ap = np.dot(A,p)
        pAp = np.dot(p,Ap)
        alpha = rTr/pAp
        x = x + alpha*p
        r_new = r-alpha*Ap
        rTr_new = np.dot(r_new,r_new)
        beta = rTr_new/rTr
        p = r_new+beta*p
        r = r_new
        rTr = rTr_new
    return x
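# As a quick check of the solver (not part of the assignment), the cell below applies it to a small random symmetric positive-definite system and compares against `np.linalg.solve`. The matrix size, diagonal shift and random seed are arbitrary illustrative choices.
# +
np.random.seed(0)
M = np.random.randn(20, 20)
A_test = np.dot(M, M.T) + 20*np.eye(20)   # symmetric positive-definite by construction
b_test = np.random.randn(20)
x_cg = simple_cg(np.zeros(20), b_test, A_test, n=20)
print("max error vs np.linalg.solve:", np.max(np.abs(x_cg - np.linalg.solve(A_test, b_test))))
# -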
# Green's Function is V(r) for a point charge
G = np.zeros(shape=(2*size+1,2*size+1))
for i in range(2*size+1):
for j in range(2*size+1):
G[i-size,j-size] = V_m[i-size,j-size][2]
Box = np.zeros(shape=(2*size+1,2*size+1))
for i in range(2*size+1):
for j in range(2*size+1):
Box[i-size,j-size] = sq_box()[i-size,j-size][2]
GT = np.transpose(G)
A = np.dot(GT,G)
b = np.dot(GT,Box)
p = b*0
#p = simple_cg(p,b,A,n=10)
# **Part c**: Now that you have the charge, show the potential everywhere in space. How close to constant is the potential in the interior of the box? Now plot the x− and y−components of the field just outside the box. Do they agree with what you expect? As a reminder, the boundary conditions are that the field is perpendicular to any equipotential, and that standard lore says that fields are stronger near points.
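# A minimal sketch of how part (c) could be read off once a potential grid is in hand (the array name `V_everywhere` is a hypothetical placeholder for the potential obtained by convolving the solved charge with the Green's function): the field is $E = -\nabla V$, which `np.gradient` approximates with centred differences.
# +
def field_components(V_everywhere, spacing=1.0):
    # np.gradient differentiates along axis 0 first, then axis 1
    dV_d0, dV_d1 = np.gradient(V_everywhere, spacing)
    # (Ex, Ey) assuming axis 1 is x and axis 0 is y -- swap if your grid convention differs
    return -dV_d1, -dV_d0
# Hypothetical usage, e.g. sampling the field one cell outside the box (half-width 5):
# Ex, Ey = field_components(V_everywhere)
# plt.plot(Ex[:, size + 6]); plt.plot(Ey[:, size + 6])
# -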
# **Final Comments**: The easiest way to solve this is to find the potential everywhere in space and then read it out along the mask. For relatively small masks, though, there’s no guarantee that this is the fastest way - a brute force summation may win in some cases. Also note that we got away with not having to specify an outer boundary condition. As long as you pulled a large enough region that the potential from your boundaries doesn’t wrap around onto itself (if you’re using an FFT) then there’s no
# edge where boundaries are specified. This is generally a good thing, since our usual state of affairs is not to be sitting near the center of a grounded, conducting box. Finally, you may have noticed that I played a bit fast and loose with zero points. In 3D, it’s perfectly sensible to set the potential at infinity to zero, but we can’t do that in 2D since ln(x) unhappily diverges at both large and small x. In some sense then “hold a box at fixed potential” doesn’t even make sense (fixed relative to what?). However, by setting the potential of our Green’s function to 1 at the center, we have implicitly set a zero point.
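# A possible implementation of the FFT route mentioned above (a sketch under the assumption that SciPy is available, not necessarily the intended method): build the point-charge kernel on a grid large enough to cover every offset, patch and rescale its centre exactly as in part (a), and let `scipy.signal.fftconvolve` do the zero-padded convolution so the periodic images never wrap back onto the physical region. `rho_box` below is a hypothetical name for the charge array that the conjugate-gradient solve would return.
# +
from scipy.signal import fftconvolve

def greens_kernel(half):
    # Rescaled point-charge potential on a (2*half+1)^2 grid with the origin at the centre
    ii, jj = np.meshgrid(np.arange(-half, half+1), np.arange(-half, half+1), indexing="ij")
    r = np.sqrt(ii**2 + jj**2)
    with np.errstate(divide="ignore"):
        kern = np.log(r)
    kern[half, half] = -2*np.log(2.0)     # V(0,0) from the averaging trick in part (a)
    return kern/kern[half, half]          # rescaled so the centre value is 1

def potential_from_charge(rho_map, half=2*size):
    # half=2*size covers every offset between two points of the (2*size+1)^2 grid
    return fftconvolve(rho_map, greens_kernel(half), mode="same")

# Hypothetical usage: V_everywhere = potential_from_charge(rho_box)
# -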
| 9,103 |
/notebooks/2-functions.ipynb
|
b3aedfda2baca1bc7c45fe7cc879f4484ae85e45
|
[] |
no_license
|
m-newhauser/data-science-in-a-day
|
https://github.com/m-newhauser/data-science-in-a-day
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 7,720 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/m-newhauser/data-science-in-a-day/blob/master/2-functions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ZkV3xrCOS4r6" colab_type="text"
# ### Custom functions
# + id="AN52O1-2K_Xy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="c6b88215-2f4c-41ba-8754-77d98db98636"
# Let's add a single digit three times
1 + 1 + 1
# + id="6xld1MOuLCzX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="e75468cc-525b-4c79-925b-368bee1afafd"
# Let's repeat this for numbers 2-5
3 + 3 + 3
# + id="_hUPYk7ILehD" colab_type="code" colab={}
# This takes too long!
# + id="rFNaHQ-2Lppm" colab_type="code" colab={}
# Let's create a function
def add_digit_three_times(digit):
result = digit + digit + digit
return result
# + id="_wGMVQmNL7Jn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="47d522f8-80a5-4c0b-8f80-cdce692d0760"
# Run the function
add_digit_three_times(digit=7)
# + [markdown] id="08ujhEJYqJGr" colab_type="text"
# ### Pre-existing functions
# + id="hJPVzrKIprw4" colab_type="code" colab={}
# Let's use a function from a package called Statistics
# + id="1CPy9ynmp3OE" colab_type="code" colab={}
# Import the statistics module
import statistics
# + id="yhZJbKnWqsH7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="c680b3a0-34e6-40be-91cf-ff769d230e50"
(1+2+3+4+5)/5
# + id="ry4cCQCZqStx" colab_type="code" colab={}
# Make a list of numbers
numbers = [1, 2, 3, 4, 5]
# + id="cNhUwmDDqto5" colab_type="code" colab={}
# Calculate the mean of numbers and save as an object
mean = statistics.mean(numbers)
# + id="1ffdH4SdqTD9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="54df7ab4-bda8-4793-d9b7-fdf5b7655a93"
# Print the mean
print(mean)
# + id="ZB37NNVAvqvj" colab_type="code" colab={}
# Let's do another example
# + id="KQVBRJFoq5DM" colab_type="code" colab={}
# Import the NumPy package
import numpy as np
# + id="qPGozLCluRy6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="c5036e0f-f047-4001-c8ae-b40fb226ddbc"
# Use NumPy's rounding function
np.round(5.2398, 2)
| 2,674 |
/homework/Day_056_kmean_HW.ipynb
|
ff5fbaff22fe50a774a989af254335c26b833adb
|
[] |
no_license
|
kuanpofeng/3rd-ML100Days
|
https://github.com/kuanpofeng/3rd-ML100Days
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 596,173 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: finlab
# language: python
# name: finlab
# ---
# # K-Mean observation: using silhouette analysis
# # [Assignment goal]
# - Following the example, generate data from 5 random Gaussian clusters and use silhouette analysis to compare K-means clustering for different values of K
# # [Key points]
# - Use the silhouette-analysis charts together with the actual cluster scatter plots to observe how the K-means clustering results change as K varies (In[3], Out[3])
# # Assignment
# * Simulate data drawn from 5 Gaussian clusters and use it to observe the K-means and silhouette-analysis results
# +
# 載入套件
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn import datasets
from sklearn.metrics import silhouette_samples, silhouette_score
np.random.seed(5)
# %matplotlib inline
# +
# 生成 5 群資料
X, y = make_blobs(n_samples=500,
n_features=2,
centers=5,
cluster_std=1,
center_box=(-10.0, 10.0),
shuffle=True,
random_state=123)
# 設定需要計算的 K 值集合
range_n_clusters = [2, 3, 4, 5, 6, 7, 8]
# +
# 計算並繪製輪廓分析的結果
# 因下列為迴圈寫法, 無法再分拆為更小執行區塊, 請見諒
for n_clusters in range_n_clusters:
# 設定小圖排版為 1 row 2 columns
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(18, 7)
# 左圖為輪廓分析(Silhouette analysis), 雖然輪廓係數範圍在(-1,1)區間, 但範例中都為正值, 因此我們把顯示範圍定在(-0.1,1)之間
ax1.set_xlim([-0.1, 1])
# (n_clusters+1)*10 這部分是用來在不同輪廓圖間塞入空白, 讓圖形看起來更清楚
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
# 宣告 KMean 分群器, 對 X 訓練並預測
clusterer = KMeans(n_clusters=n_clusters, random_state=10)
cluster_labels = clusterer.fit_predict(X)
# 計算所有點的 silhouette_score 平均
silhouette_avg = silhouette_score(X, cluster_labels)
print("For n_clusters =", n_clusters,
"The average silhouette_score is :", silhouette_avg)
# 計算所有樣本的 The silhouette_score
sample_silhouette_values = silhouette_samples(X, cluster_labels)
y_lower = 10
for i in range(n_clusters):
# 收集集群 i 樣本的輪廓分數,並對它們進行排序
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.nipy_spectral(float(i) / n_clusters)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# 在每個集群中間標上 i 的數值
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# 計算下一個 y_lower 的位置
y_lower = y_upper + 10
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# 將 silhouette_score 平均所在位置, 畫上一條垂直線
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # 清空 y 軸的格線
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
# 右圖我們用來畫上每個樣本點的分群狀態, 從另一個角度觀察分群是否洽當
colors = cm.nipy_spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(X[:, 0], X[:, 1], marker='.', s=30, lw=0, alpha=0.7,
c=colors, edgecolor='k')
# 在右圖每一群的中心處, 畫上一個圓圈並標註對應的編號
centers = clusterer.cluster_centers_
ax2.scatter(centers[:, 0], centers[:, 1], marker='o',
c="white", alpha=1, s=200, edgecolor='k')
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1,
s=50, edgecolor='k')
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
plt.show()
# + jupyter={"outputs_hidden": true}
# The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
#
# We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
#
# > **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
#
# Below, you have these tasks:
# 1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
# 2. Implement the forward pass in the `train` method.
# 3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
# 4. Implement the forward pass in the `run` method.
#
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1 +np.exp(-x))# Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
# def sigmoid(x):
# return 1.0/(1 +np.exp(-x)) # Replace 0 with your sigmoid calculation here
# self.activation_function = sigmoid
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs =np.dot(self.weights_input_to_hidden,inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
output_errors = targets-final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error - Replace these values with your calculations.
hidden_errors =output_errors*self.weights_hidden_to_output # errors propagated to the hidden layer
hidden_grad = hidden_outputs *(1-hidden_outputs) # hidden layer gradients
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr*output_errors*hidden_outputs.T # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr* np.dot(hidden_grad*hidden_errors.T, inputs.T)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(self.weights_input_to_hidden,inputs )# signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs =np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer
final_outputs =final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
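# Before training, a quick shape check (an illustrative addition, not part of the original project): build a throw-away network with arbitrary layer sizes and confirm that `run` returns one value per record.
# +
toy_net = NeuralNetwork(3, 2, 1, 0.1)       # made-up sizes, just for the shape check
print(toy_net.run([0.5, -0.2, 0.1]).shape)  # expect (1, 1): one output node, one record
# -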
# ## Training the network
#
# Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
#
# You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
#
# ### Choose the number of epochs
# This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
#
# ### Choose the learning rate
# This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
#
# ### Choose the number of hidden nodes
# The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
# +
import sys
### Set the hyperparameters here ###
epochs = 1200
learning_rate = 0.05
hidden_nodes = 30
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
# -
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
# ## Check out your predictions
#
# Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
# +
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
# -
# ## OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).
#
# Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
#
# > **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
#
# #### Your answer below
# ## Unit tests
#
# Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
# +
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'data/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
# -
| 16,292 |
/NMF vs SHIT.ipynb
|
92d11251dcc53148944428f22e272ea3b9483ccd
|
[] |
no_license
|
ismaelbonneau/movie_recommender
|
https://github.com/ismaelbonneau/movie_recommender
| 0 | 2 | null | 2019-06-02T10:10:51 | 2019-05-30T06:35:39 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 235,879 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Hent ut informasjon fra saken: Flertall av kommuner vil liberalisere scooterkjøring
#
# Les saken på [NRK.no om scooterkjøring til hytter](https://www.nrk.no/sorlandet/stort-flertall-av-kommunene-vil-liberalisere-scooter-reglene-1.15124277).
# ### Hva er saken?
# *Hva handler saken om? Gi en kort oppsummering i neste celle ↓.*
#
# #### Svar
#
# ### Hvem har skrevet saken?
# *Finn ut hvem som har skrevet saken. Kan vi stole på at det som står her er riktig? Vurder kilden etter [TONE-strategien](https://ndla.no/nb/subject:14/topic:1:185701/resource:1:169741?filters=urn:filter:94dfe81f-9e11-45fc-ab5a-fba63784d48e). Skriv svar i neste celle ↓.*
#
# #### Svar
#
# ### Hvilke tall finner du i saken?
# *Hvilke tall kan du hente ut fra saken? Gi tall til variablene i python-cellen under.*
#
grenseverdi_meter_til_hytte =
antall_kommuner_som_har_sendt_svar =
antall_kommuner_som_ønsker_regelendring =
antall_kommuner_som_ikke_ønsker_regelendring =
# +
###################################################
# ↓ LØSNINGSFORSLAG ↓ #
# SKJUL DENNE CELLEN FØR PUBLISERING TIL ELEVER #
###################################################
grenseverdi_meter_til_hytte = 2500
antall_kommuner_som_har_sendt_svar = 50
antall_kommuner_som_ønsker_regelendring = 36
antall_kommuner_som_ikke_ønsker_regelendring = 12
# -
# ### Prosent for og imot
# *Hvor mange prosent av kommunene ønsker regelendring? Hvor mange ønsker ingen regelendring? Skriv pythonkode som regner dette ut for deg. Bruk variablene fra oppgaven over. (Eksempel ligger litt lenger nede på siden)*
###################################################
# ↓ ditt svar ↓ #
###################################################
# +
###################################################
# ↓ LØSNINGSFORSLAG ↓ #
# SKJUL DENNE CELLEN FØR PUBLISERING TIL ELEVER #
###################################################
pst_ønsker_endring = antall_kommuner_som_ønsker_regelendring \
/ antall_kommuner_som_har_sendt_svar * 100
pst_ønsker_ikke_endring = antall_kommuner_som_ikke_ønsker_regelendring \
/ antall_kommuner_som_har_sendt_svar * 100
print(f"Det er {pst_ønsker_endring:.2f} % av kommunene som ønsker regelendring,\
og {pst_ønsker_ikke_endring:.2f} % av kommunene som ikke ønsker regelendring.")
# +
###################################################
# Eksempel på hvordan regne ut prosenter i python #
###################################################
# 11 av 14 elever i 1P bruker en Mac som datamaskin på skolen.
# Hvor mange prosent tilsvarer dette? Og hvor mange prosent bruker Windows?
###################################################
# ↓ LØSNINGSFORSLAG ↓ #
###################################################
totalt_antall_elever = 14
antall_elever_med_mac = 11
# for å finne prosenten kan vi ta(delen av tallet)/(hele tallet)
# og multiplisere med 100 %
prosent_mac = (antall_elever_med_mac / totalt_antall_elever) * 100
prosent_windows = 100 - prosent_mac
print(f"{prosent_mac:.2f} % av elevene i 1P har mac. Det er {prosent_windows:.2f} % som bruker Windows.")
# -
# ### Eksempler: Sektordiagrammer i python
#
# Du kan lage sektordiagrammer i python ved å bruke pakken `matplotlib.pyplot`. Nedenfor har jeg laget et eksempel som viser hvordan du kan lage et sektordiagram som viser fordelingen av høyrehendte, venstrehendte og dem som like gjerne bruker begge hender. Jeg har brukt tall fra [Wikipedia](https://no.wikipedia.org/wiki/Ambidekstri) som tyder på at omtrent 90 % er høyrehendte, 9 % er venstrehendte og 1 % kan bruke begge hender.
#
# #### Forklaring av koden under ↓
#
# ```python
# import matplotlib.pyplot as plt
# merkelapper = ["Høyrehendt" , "Venstrehendt", "Ambidekster/kapphendt"]
# andeler = [90, 9, 1]
# plt.pie(andeler, labels=merkelapper)
# ```
# I første linje importerer vi pakken `matplotlib.pyplot` og gir den kallenavnet `plt` slik at det skal bli enklere å skrive navnet på den senere.
#
# Jeg oppretter deretter listen `merkelapper` som inneholder informasjon om hva de ulike sektorene i diagrammet representerer.
#
# Deretter lager jeg en liste `andeler` som inneholder tallene for hvor stor andel som henholdsvis høyrehendt, venstrehendt og ambidekster. Pass på at det første elementet i lista med andeler tilsvarer det første elementet i lista med merkelapper. Siden jeg har høyrehendt først i lista mi over merkelapper er det viktig at `andeler`-lista begynner 90 som er andelen høyrehendte.
#
# `plt.pie()` lager et sektordiagram av den listen du gir til funksjonen. I mitt tilfelle har jeg gitt listen `andeler`. I tillegg kan jeg gi en liste med merkelapper som hører til andelene ved å bruke `labels=ListeMedAndeler`, i mitt tilfelle `labels=merkelapper`.
#
# Hvis jeg i tillegg ønsker at sektordiagrammet skal vise hvor mange prosent de ulike sektorene tilsvarer så kan jeg legge til tilvalget `autopct="%.1f%%"`. Se cellen litt lenger ned.
#
# Kjør koden i de to `python`-cellene under og se hvordan sektordiagrammene ser ut.
import matplotlib.pyplot as plt
merkelapper = ["Høyrehendt" , "Venstrehendt", "Ambidekster/kapphendt"]
andeler = [90, 9, 1]
plt.pie(andeler, labels=merkelapper)
plt.pie(andeler, labels=merkelapper, autopct="%.1f%%")
# ### Lag sektordiagram over for og imot
# *Lag et sektordiagram som viser fordelingen av kommuner for og imot regelendringer for scooterkjøring til hytta. Ta gjerne med andelen som ikke har svart direkte på spørsmålet også. Skriv koden din i neste celle.*
# +
###################################################
# ↓ ditt svar ↓ #
###################################################
# +
###################################################
# ↓ LØSNINGSFORSLAG ↓ #
# SKJUL DENNE CELLEN FØR PUBLISERING TIL ELEVER #
###################################################
import matplotlib.pyplot as plt
merkelapper = ["Ønsker regelendring", "Ønsker ikke regelendring", "Ingen svar på spørsmålet"]
andeler = [pst_ønsker_endring, pst_ønsker_ikke_endring, 100-pst_ønsker_endring-pst_ønsker_ikke_endring]
plt.pie(andeler, labels=merkelapper, autopct="%.1f%%")
# -
# ### Hva er en høring?
# Nyhetssaken du leste handler om en høring. Hva er en høring? Vi har 356 kommuner i Norge, hvorfor har ikke alle disse svart på høringen? Svar i neste celle.
# #### Svar
#
# ### Regjeringen opphevet 2,5 km-regelen i oktober 2020
#
# Regjeringen [vedtok å oppheve 2,5 km-regelen](https://www.regjeringen.no/no/aktuelt/opphever-25-km-grensen-for-snoskuter/id2766966/) i september, med [virkning fra 1. oktober](https://lovdata.no/dokument/LTI/forskrift/2020-09-28-1893). Dette betyr at mange nye hytteeiere fikk tilgang til å bruke scooter til vare- og materialtransport til hyttene sine.
# # Hvilke hytter fikk lov å bruke scooter fra 2020?
# Jeg har sett på et [mindre hytteområde nordøst for Takvatnet](https://www.norgeskart.no/#!?project=norgeskart&layers=1002&zoom=9&lat=7675666.50&lon=668440.74&sok=Trolldalen&markerLat=7675666.496412521&markerLon=668440.7404211427&panel=searchOptionsPanel&drawing=ehghAngBLmAVnofmN_bR). Se kartet under. Jeg har markert bygninger med blå trekant og målt avstanden fra mange av bygningene til nærmeste brøytede vei. Deretter har jeg lagt inn data om hyttene i en csv-fil kalt `hytter-ved-andorvatnet.csv`. Hyttene er ekte, men alle data om hyttene er oppdiktet. Du skal bruke python til å finne ut hvilke hytter som fikk lov til å bruke scooter fra oktober 2020.
#
# Akkurat nå ser du kun på 18 utvalgte hytter i Indre Troms, og du skal prøve å løse oppgavene med dette begrensede datasettet. Det som er fint med programmering er at vi kan bruke den samme løsningen på mye større datasett – jeg har derfor sett på alle 1002 bygninger som er definert som fritidsboliger i Målselv. Hvis du lurer på hvor mange av disse som faktisk ligger under 2500 m fra vei, så kan du bla helt til bunnen av denne siden.
#
# 
###################################################
# Innlasting av data om hytter i python #
###################################################
import numpy as np
import pandas as pd
df = pd.read_csv("hytter-ved-andorvatnet.csv", sep=";", index_col="hytte_nr")
# Vi har nå lastet inn informasjonen om hyttene fra filen `hytter-ved-andorvatnet.csv` og lagret informasjonen som en [pandas dataframe](https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html). En dataframe er egentlig bare en stor tabell, og pandas-pakken inneholder mange nyttige funksjoner for å manipulere og analysere dataene.
#
# Vi kalte dataframen vår for `df`. Vi kan nå kjøre funksjoner på dataene i `df` ved å skrive `df.FUNKSJONSNAVN`. F.eks finnes det en funksjon som heter `head()`. Prøv `df.head()` og se hva som skjer i cellen under.
# +
###################################################
# ↓ ditt svar ↓ #
###################################################
# +
###################################################
# ↓ LØSNINGSFORSLAG ↓ #
# SKJUL DENNE CELLEN FØR PUBLISERING TIL ELEVER #
###################################################
df.head()
# -
# Vi kan også få pandas/python til å gi oss litt statistikk om de ulike kolonnene i dataframen vår. Bruk funksjonen `describe()` **på dataframen** for å få ut statistikk. Hvor langt er det i gjennomsnitt fra hyttene til veien? Hvor mange prosent av hyttene har strøm? (tallet 1 betyr at den har strøm, tallet 0 betyr at den ikke har strøm).
# +
###################################################
# ↓ ditt svar ↓ #
###################################################
# +
###################################################
# ↓ LØSNINGSFORSLAG ↓ #
# SKJUL DENNE CELLEN FØR PUBLISERING TIL ELEVER #
###################################################
df.describe()
# -
# ## Oppgave: hvor mange hytter har lov til å bruke scooter?
# Du skal nå bruke python til å telle hvor mange hytter som ligger mer enn 2500 m fra vei. Ekstra utfordring: Klarer du i tillegg å lage en liste som inneholder hvilke hytter som faktisk ligger mer enn 2500 unna vei?
#
# Nedenfor har jeg laget to eksempler. Det første eksempelet lager en tilfeldig liste med 200 terningkast og teller hvor mange seksere det er blant de 200 kastene. Kjør den gjerne flere ganger, da ser du at antallet seksere foranderer seg for hver kjøring (det er tilfeldig). Legg merke til hvordan jeg lager "teller"-variabel som inneholder antallet seksere jeg har telt fram til nå.
#
# Det andre eksempelet viser hvordan man kan kjøre en `for`-løkke over en dataframe. En for-løkke kan egentlig kun kjøres over lister eller andre tellbare objekter, siden dataframen er en tabell så "gjør vi den om" til en liste ved å bruke `df.index` istedenfor bare `df`.
#
# Husk at vi kan bruke `if` setninger til å sette betingelser og sammenligne størrelser ved å bruke bruke disse operatorene:
#
# | Eksempel | Betydning |
# |------------|-----------------------------|
# | x == y | x er lik y |
# | x != y | x er ikke lik y |
# | x > y | x er større enn y |
# | x < y | x er mindre enn y |
# | x >= y | x er større enn eller lik y |
# | x <= y | x er mindre enn eller lik y |
# | x is y | x er lik y |
# | x is not y | x er ikke lik y |
# +
###################################################
# Telling ved hjelp av for-løkker over lister #
###################################################
# Vi simulerer 200 terningkast og lager en liste med 200 elementer hvor hvert
# element er et tall mellom 1 og 6 som tilsvarer terningkastet. Vi bruker
# funksjonen random.randint fra pakken numpy (np) for å simulere terningkastene
# Legg merke til at vi skriver 1,7 for å indikere at vi skal ha terningkast fra
# og 1, til (men ikke inkludert) 7.
liste_med_terningkast = np.random.randint(1,7, size=200)
# I dette eksempelet skal vi telle antall sekser i lista ved å bruke en for-
# løkke.
# Vi begynner med å lage en variabel antall_seksere = 0 som holder tellinga
# på hvor mange seksere vi har talt fram til nå.
# For hvert element i lista skal vi sjekke om det er en sekser. Hvis
# det er en sekser så skal vi legge til 1 til antall_seksere-variabelen
antall_seksere = 0
for terning in liste_med_terningkast:
if terning == 6:
antall_seksere = antall_seksere + 1
print(f"Det er {antall_seksere} seksere blant de {len(liste_med_terningkast)} terningene.")
# +
###################################################
# Eksempel: For-løkke over dataframe #
###################################################
# Denne for-løkka går gjennom hvert element i df.index. df.index er
# som et stikkordregister som holder orden på alle radene i dataframen
# vår. Prøv gjerne kommandoen print(df.index) så skjønner du kanskje
# bedre hva den inneholder. :)
#
# I for-løkka blir variabelen hytte_nr tilordnet verdiene i df.index,
# altså er hytte_nr 0 den første gangen i løkka, deretter er den 1,
# osv, osv, helt fram til og med 17.
for hytte_nr in df.index:
# den neste linja skriver ut indexen og avstanden til vei for
# det nåværende hytte_nr. Legg merke til at vi bruker df[kolonne][rad],
# altså henter vi ut verdien i dataframen fra kolonne "avstand_m" og
# raden som tilsvarer hytte_nr.
print(f"{hytte_nr:2} {df['avstand_m'][hytte_nr]:4}")
# Legg gjerne merke til 2-tallet i {hytte_nr:2} betyr at vi ønsker å
# bruke 2 tegn på å skrive ut hyttenummeret. Dette gjør at tallene
# 1 til 9 tar like stor plass på skjermen som 10-18, og at kolonnene
# dermed blir rette nedover.
# Prøv gjerne å bytte ut printen over med den på neste linje og sjekk
# forskjellen print(f"{hytte_nr} {df['avstand_m'][hytte_nr]}")
# +
###################################################
# ↓ ditt svar ↓ #
###################################################
# +
###################################################
# ↓ LØSNINGSFORSLAG ↓ #
# SKJUL DENNE CELLEN FØR PUBLISERING TIL ELEVER #
###################################################
liste_med_hytte_nr_over_2500m = []
antall_hytter_over_2500m_til_vei = 0
for hytte_nr in df.index:
if df["avstand_m"][hytte_nr] > grenseverdi_meter_til_hytte:
antall_hytter_over_2500m_til_vei += 1
liste_med_hytte_nr_over_2500m.append(hytte_nr)
print(antall_hytter_over_2500m_til_vei)
print(liste_med_hytte_nr_over_2500m)
# +
###################################################
# ↓ ALT. LØSNINGSFORSLAG ↓ #
# SKJUL DENNE CELLEN FØR PUBLISERING TIL ELEVER #
###################################################
liste_med_hytte_id_over_2500m = []
antall_hytter_over_2500m_til_vei = 0
for indeks, rad in df.iterrows():
if rad["avstand_m"] > grenseverdi_meter_til_hytte:
antall_hytter_over_2500m_til_vei += 1
liste_med_hytte_id_over_2500m.append(indeks)
print(f"Hyttene {liste_med_hytte_id_over_2500m} (til sammen \
{antall_hytter_over_2500m_til_vei} stk) har over 2500 m til vei ")
# -
# ***
#
# ***
# # Utvidelse med ekte data
#
# *Denne siste delen er en demonstrasjon av hva som er mulig å gjøre med programmering og åpne data. Det er **ikke** meningen at elevene skal være i stand til å gjøre det samme.*
#
# ## Målsetning
# Jeg ønsker å finne ut hvor mange hytter som *faktisk* ligger mindre enn 2500 m fra vei. I tillegg ønsker jeg å få vist disse hyttene på et kart, slik at jeg får et bedre bilde av hvilke hytter som faktisk blir berørt av forskriftsendringen.
#
# ## Forberedelser og nedlasting av data
# Jeg har hentet ut data om bygninger og veier fra [Geonorge](https://geonorge.no). Jeg brukte et bakgrunnskart (toporaster 4) og lastet ned to datasett: `elveg` (veier) og `matrikkelen bygningspunkt` som har informasjon om alle bygninger som punkter. Jeg lastet kun ned data for Målselv kommune. Dataene ble behandlet i programmet [QGis](https://www.qgis.org/) som er et gratis kartprogram.
#
# Tjenester fra geonorge:
# * [Toporaster 4 bakgrunnskart](https://kartkatalog.geonorge.no/metadata/toporaster-4-wms/430b65ec-8543-4387-bf45-dbb5ce4bf4c8). Legg dette til under WMS i QGIS. Naviger deg fram til et passende grunnlagskart med referanse.
# * [Elveg](https://kartkatalog.geonorge.no/metadata/elveg/ed1e6798-b3cf-48be-aee1-c0d3531da01a). Last ned SOSI-fil og konverter til shapefile med [`sosicon`](https://sosicon.espenandersen.no/). Dra inn i QGIS.
# * [Matrikkelen bygningspunkt](https://kartkatalog.geonorge.no/metadata/matrikkelen-bygningspunkt/24d7e9d1-87f6-45a0-b38e-3447f8d7f9a1). Last ned som GML og dra inn i QGIS.
#
# I QGIS omdøpte jeg matrikkel til `bygning` og Elveg til `veg`. Deretter filtrerte jeg bygning etter spørringen `"bygningstype" == 161` som kun gir meg bygningene som er definert som fritidsboliger og hytter. For å ha så få punkter som mulig lastet inn i QGis eksporterte jeg disse fritidsboligene og hyttene ved å klikke høyre museknapp på `bygning`-laget og velge Eksporter → Lagre som. Jeg opprettet et nytt lag med alle hyttene som jeg kalte `fritidsboliger` og fjernet laget `bygning` Deretter la jeg til et nytt virtuelt lag med spørringen:
#
# ```sql
# select p.gml_id, ST_ShortestLine (l.geometry, p.geometry) as geometry ,
# st_length (ST_ShortestLine (l.geometry, p.geometry)) as dist_m,
# ST_X(p.geometry) as x,
# ST_Y(p.geometry) as y
# from fritidsboliger as p, veg as l
# group by p.gml_id
# having min ( st_length(shortestline (p.geometry, l.geometry)))
# ```
#
# Det resulterende virtuelle laget har linjer fra hver hytte til nærmeste vei og hver for hver linje er avstanden blitt beregnet og lagret i feltet `dist_m`. Dette laget ble lagret som csv-fil `alle-hytter-maalselv.csv`. Kolonneoverskriftene er
#
# * `gml_id` er id-en til bygningen
# * `dist_m` er distanse i meter fra bygg til vei
# * `x` er x-koordinat i UTM33 (EPSG:25833)
# * `y` er y-koordinat i UTM33 (EPSG:25833)
#
# Nå er jeg klar til behandle dataene i Python. Jeg ønsker å
#
# Svakheter: jeg lastet kun ned data fra Målselv. Det er godt mulig at nærmeste vei ikke ligger i Målselv, men i en nabokommune.
# +
import numpy as np
import pandas as pd
# Jeg laster inn dataene om hyttene fra csv-fila
df_full = pd.read_csv("alle-hytter-maalselv.csv", sep=";", decimal=",")
# Jeg lager to teller-variabler. En for hyttene som ligger mindre enn 2500 m
# fra vei, og en for dem som ligger under 100 m fra vei. Det er svært mange
# hytter i Målselv som ligger nærme vei (f.eks i Fjellandsbyen). Det er
# lite sannsynlig at disse vil begynne å kjøre scooter med varer til hytta
antall_hytter_maalselv_under_2500 = 0
antall_hytter_maalselv_under_100m_fra_vei = 0
# For-løkka itererer over hele dataframen ved å bruke metoden iterrows() på
# dataframen. For hver eneste rad så sammenligner vi innholdet i feltet
# "dist_m" med grenseverdiene vi har satt og øker teller-variablene våre.
for indeks, rad in df_full.iterrows():
if rad["dist_m"] < grenseverdi_meter_til_hytte:
antall_hytter_maalselv_under_2500 += 1
if rad["dist_m"] < 100:
antall_hytter_maalselv_under_100m_fra_vei += 1
print(f"Det er {antall_hytter_maalselv_under_2500} hytter som ligger mindre\
enn {grenseverdi_meter_til_hytte} m fra vei, av dem ligger\
{antall_hytter_maalselv_under_100m_fra_vei} mindre enn 100 m vei.")
# +
###################################################
# ↓ ALT. LØSNINGSFORSLAG ↓ #
###################################################
# Dette løsningsforslaget er mer effektivt, siden det bruker .apply-metoden
# og .count-metoden på dataframen, istedenfor å bruke en for-løkke.
#
# For-løkker er som oftest veldig ineffektive, men det er ganske enkelt å
# forstå hva de gjør. Denne koden er en del vanskeligere å forstå.
# Jeg antar at hytter som ligger mindre enn 100 m fra vei ikke kommer til å
# bruke scooter, og at hytter som ligger mer enn 2500 m fra vei allerede
# bruker scooter. Jeg vil fargelegge de hyttene som nå kanskje får mulighet
# til å kjøre scooter til hytta med rød farge.
minstegrense_scooter = 100
# Jeg legger til en ny kolonne i dataframen: ny_scooter. Hver rad i kolonnen
# settes til "r" hvis hytta ligger mellom low_lim og high_lim fra veg. Hvis
# hytta ikke ligger mellom low og high_lim blir feltet satt til "b". "r" og
# "b" brukes til å angi fargen på hytta i plottet. "r" er rød, og "b" er blå.
#
# For å få til dette bruker jeg .apply-metoden og en lambda-funksjon. .apply
# utfører en funksjon på hver eneste rad i dataframen. Funksjonen min i dette
# tilfellet er lambda-funksjonen som sammenligner avstandene og setter fargen
df_full["ny_scooter"] = df_full["dist_m"].apply(lambda x: "r" if \
(x > minstegrense_scooter and x < \
grenseverdi_meter_til_hytte) else "b")
# Jeg teller antallet rader i dataframen hvor ny_scooter er satt til "r"
antall_nye_scooterhytter = df_full[df_full["ny_scooter"] == "r"].count()[0]
print(f"Det er {antall_nye_scooterhytter} hytter som ligger mellom\
{minstegrense_scooter} m og {grenseverdi_meter_til_hytte} m fra veg")
# +
###################################################
# PLOTTING AV HYTTER PÅ KART #
###################################################
# Koordinatene til hyttene er i kartkoordinatsystemet UTM33. Dette "må"
# konverteres til lengde- og breddegrader for å plotte på kart. Pakken
# pyproj gir oss muligheten til å konvertere mellom koordinatsystemer
from pyproj import Proj
import matplotlib.pyplot as plt
# UTM33 har kode epsg:25833
myprojection = Proj("epsg:25833")
# Jeg lager to nye kolonner som inneholder lengdegrad og breddegrad.
# Disse beregnes ved å konvertere x- og y-koordinater ved hjelp av
# pyproj.
df_full["lon"], df_full["lat"] = myprojection(df_full["x"].values,\
df_full["y"].values, inverse=True)
# Bestemmer bounding boxen, de ytre grensene for kartutsnittet vårt
# Den går fra minimumsverdiene til maksimumsverdiene for lengde-
# og breddegrad
BBox = (df_full["lon"].min(), df_full["lon"].max(),\
df_full["lat"].min(), df_full["lat"].max())
# Henter inn kart som bakgrunnsbilde
målselv_kartbilde = plt.imread('map.png')
# Opprettet plottet mitt
fig, ax = plt.subplots(figsize=(20,24))
# Plotter alle hyttene med. zorder = 1 gjør at hyttene vil ligge
# foran kartet. Alpha er gjennomsiktigheten til hyttepunktene.
# c er fargen på hvert punkt, denne setter vi etter kolonnen for
# farge som vi opprettet tidligere. s er størrelsen på prikken
ax.scatter(df_full["lon"], df_full["lat"], zorder=1, alpha= 0.4,\
c=df_full["ny_scooter"], s=14)
ax.set_title('Fritidsboliger i Målselv')
ax.imshow(målselv_kartbilde, zorder=0, extent = BBox, aspect="auto")
ax.set_xlim(BBox[0],BBox[1])
ax.set_ylim(BBox[2],BBox[3])
#fig.savefig("resultat.png", dpi=500)
# +
###################################################
# DYNAMISK VISNING AV HYTTER PÅ KART #
###################################################
import folium
# Oppretter et kart med sentrum i gjennomsnittsverdiene av lengde- og breddegr
map_osm = folium.Map(location=[df_full["lat"].mean(), df_full["lon"].mean()], \
tiles="https://opencache.statkart.no/gatekeeper/gk/\
gk.open_gmaps?layers=topo4&zoom={z}&x={x}&y={y}", \
attr="<a href='http://www.kartverket.no/'>Kartverket</a>")
# For hver rad i df_full så setter jeg ut en sirkel med rød farge dersom
# raden angir at det skal være rød farge. Ellers setter jeg ut sirkel med
# blå farge. Du kan trykke på sirkelen for å få opp tekst som angir
# avstanden til nærmeste vei.
folium_hytter_nye = df_full[df_full["ny_scooter"] == "r"].apply(\
lambda x: folium.CircleMarker(location=[x["lat"], x["lon"]],\
fill=True, fill_color="red", color="red", \
popup=str("Avstand til veg: " + str(np.round(x["dist_m"],0)) + " m")\
).add_to(map_osm), axis=1)
folium_hytter_gamle = df_full[df_full["ny_scooter"] == "b"].apply(\
lambda x: folium.CircleMarker(location=[x["lat"], x["lon"]],\
fill=True, fill_color="blue", color="blue", \
popup=str("Avstand til veg: " + str(np.round(x["dist_m"],0)) + " m")\
).add_to(map_osm), axis=1)
# Jeg legger til en fil med linjer som jeg har eksportert. Dette er linjene
# fra hver fritidsbolig til nærmeste vei. På denne måten blir det enklere å
# forstå hvorfor noen hytter blir farget røde, og andre blå.
folium.GeoJson("veger-geojson-converted.geojson", \
name="geojson").add_to(map_osm)
# Viser kartet
map_osm
| 25,267 |
/Cap2/DSA-Python-Cap02-Exercicios.ipynb
|
824f180c652f2b4217246ce40735308c348b1ddc
|
[] |
no_license
|
godoycaique/FundamentosPython
|
https://github.com/godoycaique/FundamentosPython
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 5,730 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 2</font>
#
# ## Download: http://github.com/dsacademybr
# ## Exercícios Cap02
# Exercício 1 - Imprima na tela os números de 1 a 10. Use uma lista para armazenar os números.
list1a10=[1,2,3,4,5,6,7,8,9,10]
print (list1a10)
# Exercício 2 - Crie uma lista de 5 objetos e imprima na tela
listaobjetos = [True, "Caique", "Godoy", "Python", 1.85]
print(listaobjetos)
# Exercício 3 - Crie duas strings e concatene as duas em uma terceira string
str1 = "Caique"
str2 = "Godoy"
print(str1 + " " + str2)
# Exercício 4 - Crie uma tupla com os seguintes elementos: 1, 2, 2, 3, 4, 4, 4, 5 e depois utilize a função count do
# objeto tupla para verificar quantas vezes o número 4 aparece na tupla
tupla1 = (1,2,2,3,4,4,4,5)
tupla1.count(4)
# Exercício 5 - Crie um dicionário vazio e imprima na tela
dic1 = {}
print(dic1)
# Exercício 6 - Crie um dicionário com 3 chaves e 3 valores e imprima na tela
dic2 = {"key1":"Caique", "key2":"Godoy", "key3":"Python"}
print (dic2)
# Exercício 7 - Adicione mais um elemento ao dicionário criado no exercício anterior e imprima na tela
dic2['key4'] = "R"
print(dic2)
# Exercício 8 - Crie um dicionário com 3 chaves e 3 valores. Um dos valores deve ser uma lista de 2 elementos numéricos.
# Imprima o dicionário na tela.
dic3={'k1':'Caique', 'k2':5, 'k3':[2, 5]}
print(dic3)
# Exercício 9 - Crie uma lista de 4 elementos. O primeiro elemento deve ser uma string,
# o segundo uma tupla de 2 elementos, o terceiro um dcionário com 2 chaves e 2 valores e
# o quarto elemento um valor do tipo float.
# Imprima a lista na tela.
list3=["Caique", ("Segunda", "Terça"), {'key1':'Python', 'key2':'R'}, 5.98]
print(list3)
# Exercício 10 - Considere a string abaixo. Imprima na tela apenas os caracteres da posição 1 a 18.
frase = 'Cientista de Dados é o profissional mais sexy do século XXI'
print(frase[0:18])
# # Fim
# ### Obrigado - Data Science Academy - <a href="http://facebook.com/dsacademybr">facebook.com/dsacademybr</a>
InferenceSession(
os.path.join(os.path.join(deployment_folder, onnx_export_folder), onnx_model_name))
input_name = onnx_session.get_inputs()[0].name
output_name = onnx_session.get_outputs()[0].name
print('Expected input shape: ', onnx_session.get_inputs()[0].shape)
# ### Prepare test data
# **Load the GloVe word vectors**
# +
word_vectors_dir = './word_vectors'
dictonary = np.load(os.path.join(word_vectors_dir, 'wordsList.npy'))
dictonary = dictonary.tolist()
dictonary = [word.decode('UTF-8') for word in dictonary]
print('Loaded the dictonary! Dictonary size: ', len(dictonary))
word_vectors = np.load(os.path.join(word_vectors_dir, 'wordVectors.npy'))
print ('Loaded the word vectors! Shape of the word vectors: ', word_vectors.shape)
# -
# **Create the word contractions map**
contractions_url = ('https://quickstartsws9073123377.blob.core.windows.net/'
'azureml-blobstore-0d1c4218-a5f9-418b-bf55-902b65277b85/glove50d/contractions.xlsx')
contractions_df = pd.read_excel(contractions_url)
contractions = dict(zip(contractions_df.original, contractions_df.expanded))
# **Setup the helper functions to process the test data**
# +
import re
import string
def remove_special_characters(token):
pattern = re.compile('[{}]'.format(re.escape(string.punctuation)))
filtered_token = pattern.sub('', token)
return filtered_token
def convert_to_indices(corpus, dictonary, c_map, unk_word_index = 399999):
sequences = []
for i in range(len(corpus)):
tokens = corpus[i].split()
sequence = []
for word in tokens:
word = word.lower()
if word in c_map:
resolved_words = c_map[word].split()
for resolved_word in resolved_words:
try:
word_index = dictonary.index(resolved_word)
sequence.append(word_index)
except ValueError:
sequence.append(unk_word_index) #Vector for unkown words
else:
try:
clean_word = remove_special_characters(word)
if len(clean_word) > 0:
word_index = dictonary.index(clean_word)
sequence.append(word_index)
except ValueError:
sequence.append(unk_word_index) #Vector for unkown words
sequences.append(sequence)
return sequences
# -
# **Preprocess the test data**
# +
from keras.preprocessing.sequence import pad_sequences
maxSeqLength = 125
test_claim = ['I crashed my car into a pole.']
test_claim_indices = convert_to_indices(test_claim, dictonary, contractions)
test_data = pad_sequences(test_claim_indices, maxlen=maxSeqLength, padding='pre', truncating='post')
# convert the data type to float
test_data_float = np.reshape(test_data.astype(np.float32), (1,maxSeqLength))
# -
# ### Make Inferences
#
# Make inferences using both the ONNX and the Keras Model on the test data
# +
# Run an ONNX session to classify the sample.
print('ONNX prediction: ', onnx_session.run([output_name], {input_name : test_data_float}))
# Use Keras to make predictions on the same sample
print('Keras prediction: ', keras_model.predict(test_data_float))
# -
# ## Compare Inference Performance: ONNX vs Keras
#
# Evaluate the performance of ONNX and Keras by running the same sample 1,000 times. Run the next three cells and compare the performance in your environment.
# Next we will compare the performance of ONNX vs Keras
import timeit
n = 1000
start_time = timeit.default_timer()
for i in range(n):
keras_model.predict(test_data_float)
keras_elapsed = timeit.default_timer() - start_time
print('Keras performance: ', keras_elapsed)
start_time = timeit.default_timer()
for i in range(n):
onnx_session.run([output_name], {input_name : test_data_float})
onnx_elapsed = timeit.default_timer() - start_time
print('ONNX performance: ', onnx_elapsed)
print('ONNX is about {} times faster than Keras'.format(round(keras_elapsed/onnx_elapsed)))
# # Deploy ONNX model to Azure Container Instance (ACI)
# ## Create and connect to an Azure Machine Learning Workspace
#
# Review the workspace config file saved in the previous notebook.
# !cat .azureml/config.json
# **Create the `Workspace` from the saved config file**
# +
import azureml.core
print(azureml.core.VERSION)
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print(ws)
# -
# ## Register the model with Azure Machine Learning
#
# In the following, you register the model with Azure Machine Learning (which saves a copy in the cloud).
# +
#Register the model and vectorizer
from azureml.core.model import Model
registered_model_name = 'claim_classifier_onnx'
onnx_model_path = os.path.join(os.path.join(deployment_folder, onnx_export_folder), onnx_model_name)
registered_model = Model.register(model_path = onnx_model_path, # this points to a local file
model_name = registered_model_name, # this is the name the model is registered with
description = "Claims classification model.",
workspace = ws)
print(registered_model.name, registered_model.description, registered_model.version)
# -
# ## Create the scoring web service
#
# When deploying models for scoring with Azure Machine Learning services, you need to define the code for a simple web service that will load your model and use it for scoring. By convention this service has two methods: `init`, which loads the model, and `run`, which scores data using the loaded model.
#
# This scoring service code will later be deployed inside of a specially prepared Docker container.
# **Save the scoring web service Python file**
#
# Note that the scoring web service needs the registered model: the ONNX model to make inferences.
# +
# %%writefile scoring_service.py
import string
import re
import os
import numpy as np
import pandas as pd
import urllib.request
import json
import keras
from keras.preprocessing.sequence import pad_sequences
import tensorflow as tf
from azureml.core.model import Model
import onnxruntime
def init():
global onnx_session
global dictonary
global contractions
try:
words_list_url = ('https://quickstartsws9073123377.blob.core.windows.net/'
'azureml-blobstore-0d1c4218-a5f9-418b-bf55-902b65277b85/glove50d/wordsList.npy')
word_vectors_dir = './word_vectors'
os.makedirs(word_vectors_dir, exist_ok=True)
urllib.request.urlretrieve(words_list_url, os.path.join(word_vectors_dir, 'wordsList.npy'))
dictonary = np.load(os.path.join(word_vectors_dir, 'wordsList.npy'))
dictonary = dictonary.tolist()
dictonary = [word.decode('UTF-8') for word in dictonary]
print('Loaded the dictonary! Dictonary size: ', len(dictonary))
contractions_url = ('https://quickstartsws9073123377.blob.core.windows.net/'
'azureml-blobstore-0d1c4218-a5f9-418b-bf55-902b65277b85/glove50d/contractions.xlsx')
contractions_df = pd.read_excel(contractions_url)
contractions = dict(zip(contractions_df.original, contractions_df.expanded))
print('Loaded contractions!')
# Retrieve the path to the model file using the model name
onnx_model_name = 'claim_classifier_onnx'
onnx_model_path = Model.get_model_path(onnx_model_name)
print('onnx_model_path: ', onnx_model_path)
onnx_session = onnxruntime.InferenceSession(onnx_model_path)
print('Onnx Inference Session Created!')
except Exception as e:
print(e)
def remove_special_characters(token):
pattern = re.compile('[{}]'.format(re.escape(string.punctuation)))
filtered_token = pattern.sub('', token)
return filtered_token
def convert_to_indices(corpus, dictonary, c_map, unk_word_index = 399999):
sequences = []
for i in range(len(corpus)):
tokens = corpus[i].split()
sequence = []
for word in tokens:
word = word.lower()
if word in c_map:
resolved_words = c_map[word].split()
for resolved_word in resolved_words:
try:
word_index = dictonary.index(resolved_word)
sequence.append(word_index)
except ValueError:
sequence.append(unk_word_index) #Vector for unkown words
else:
try:
clean_word = remove_special_characters(word)
if len(clean_word) > 0:
word_index = dictonary.index(clean_word)
sequence.append(word_index)
except ValueError:
sequence.append(unk_word_index) #Vector for unkown words
sequences.append(sequence)
return sequences
def run(raw_data):
try:
print("Received input: ", raw_data)
maxSeqLength = 125
print('Processing input...')
input_data_raw = np.array(json.loads(raw_data))
input_data_indices = convert_to_indices(input_data_raw, dictonary, contractions)
input_data_padded = pad_sequences(input_data_indices, maxlen=maxSeqLength, padding='pre', truncating='post')
# convert the data type to float
input_data = np.reshape(input_data_padded.astype(np.float32), (1,maxSeqLength))
print('Done processing input.')
# Run an ONNX session to classify the input.
result = onnx_session.run(None, {onnx_session.get_inputs()[0].name: input_data})[0].argmax(axis=1).item()
# return just the classification index (0 or 1)
return result
except Exception as e:
print(e)
error = str(e)
return error
# -
# ## Package Model and deploy to ACI
#
# Your scoring service can have its dependencies installed by using a Conda environment file. Items listed in this file will be conda- or pip-installed within the Docker container that is created, and will thus be available to your scoring web service logic.
#
# The recommended deployment pattern is to create a deployment configuration object with the `deploy_configuration` method and then use it with the deploy method of the [Model](https://docs.microsoft.com/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py) class as performed below. In this case, we use the `AciWebservice`'s `deploy_configuration` and specify the CPU cores and memory size.
#
# You will see output similar to the following when your web service is ready: `Succeeded - ACI service creation operation finished, operation "Succeeded"`
#
# Run the following cell. This may take between 5-10 minutes to complete.
# +
# create a Conda dependencies environment file
print("Creating conda dependencies file locally...")
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice, Webservice
conda_packages = ['numpy==1.16.4', 'xlrd==1.2.0', 'pandas==0.25.1', 'scikit-learn==0.21.3']
pip_packages = ['azureml-defaults', 'azureml-sdk', 'tensorflow==1.13.1', 'keras==2.3.1', 'onnxruntime==1.0.0']
environment = Environment('my-environment')
environment.python.conda_dependencies = CondaDependencies.create(conda_packages=conda_packages, pip_packages=pip_packages)
execution_script = 'scoring_service.py'
service_name = "claimclassservice"
inference_config = InferenceConfig(entry_script=execution_script, environment=environment)
aci_config = AciWebservice.deploy_configuration(
cpu_cores=1,
memory_gb=1,
tags = {'name': 'Claim Classification'},
description = "Classifies a claim as home or auto.")
service = Model.deploy(workspace=ws,
name=service_name,
models=[registered_model],
inference_config=inference_config,
deployment_config=aci_config)
# wait for the deployment to finish
service.wait_for_deployment(show_output=True)
# -
# ## Test Deployment
# ### Make direct calls on the service object
# +
import json
test_claims = ['I crashed my car into a pole.',
'The flood ruined my house.',
'I lost control of my car and fell in the river.']
for i in range(len(test_claims)):
result = service.run(json.dumps([test_claims[i]]))
print('Predicted label for test claim #{} is {}'.format(i+1, result))
# -
# ### Make HTTP calls to test the deployed Web Service
#
# In order to call the service from a REST client, you need to acquire the scoring URI. Take note of the printed scoring URI; you will need it in the last notebook.
#
# The default settings used in deploying this service result in a service that does not require authentication, so the scoring URI is the only value you need to call this service.
# +
import requests
url = service.scoring_uri
print('ACI Service: Claim Classification scoring URI is: {}'.format(url))
headers = {'Content-Type':'application/json'}
for i in range(len(test_claims)):
response = requests.post(url, json.dumps([test_claims[i]]), headers=headers)
print('Predicted label for test claim #{} is {}'.format(i+1, response.text))
| 15,692 |
/TP2_2LAM-Enonce.ipynb
|
16fa09cae6a219577bdf2ef88764a6edf140896f
|
[] |
no_license
|
nourelhoudasaid/TPPROBA
|
https://github.com/nourelhoudasaid/TPPROBA
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 19,002 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import prior_envs
from ipypb import track as tqdm
import torch
import astar
import diffplan
import matplotlib.pyplot as plt
import numpy as np
import itertools
import numba
import random
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
eps = torch.finfo().eps
# -
env = prior_envs.f2d
diffplan.plot_graph(env, labels=[str(x) for x in env.states])
# +
s = 1
g = 17
option_set = [9]
# %time D = diffplan.compute_distance_matrix(env)
BFS = diffplan.compute_bfs_matrix(env, D)
states = env.states
actions = np.array([
[
env.step(s, a)[0]
for a in env.actions
]
for s in env.states
], dtype=np.int)
def make_options(g):
options = np.array([
[o for o in option_set+[g]]
for s in env.states
], dtype=np.int)
return option_set+[g], options
def find_probe(s, g, probe, step_cost, debug=False):
option_list, options = make_options(g)
op = 0
plan, plan_options = search(s, options, g, step_cost=step_cost)[:2]
if debug: print(plan, plan_options)
for s in plan:
op += 1
if s == probe:
return op, True
# If probe was an option, we should now reject since it isn't in our plan.
if probe in options:
return op, False
if debug: print('Planning for each option')
for s, o in zip(plan, plan_options):
if debug: print(s, o, option_list[o], plan)
plan, _ = search(s, actions, option_list[o])[:2]
for s in plan:
op += 1
if s == probe:
return op, True
return op, False
def search(start, T, goal, *, iters=len(states), step_cost=1):
if np.size(step_cost) == 1:
step_cost = np.full((len(states), len(states)), step_cost)
assert np.any(T==goal), 'Bare minimum to make sure things are sorta configured'
V = np.zeros(len(states))
for _ in range(iters):
prev = np.copy(V)
for s in states:
if s == goal:
continue
ns = T[s]
Q = V[ns] - step_cost[s, ns]
V[s] = np.max(Q)
if np.linalg.norm(prev - V) < 1e-3:
break
# Now make path
path = [start]
actions = []
while path[-1] != goal:
s = path[-1]
ns = T[s]
Q = V[ns] - step_cost[s, ns]
a = np.argmax(Q)
actions.append(a)
path.append(ns[a])
if len(path) > len(states):
break
return path, actions, V
assert search(0, actions, 1)[0] == [0, 1]
assert search(0, actions, 2)[0] == [0, 1, 2]
assert search(0, actions, 6)[0] == [0, 3, 6]
assert search(0, actions, 10)[0] == [0, 1, 2, 9, 10]
# -
step_cost = D + BFS
path, _, V = search(1, make_options(11)[1], 11, step_cost=step_cost.numpy())
assert path == [1, 9, 11]
assert find_probe(1, 17, 9, step_cost=step_cost.numpy()) == (2, True)
assert find_probe(1, 10, 2, step_cost=step_cost.numpy(), debug=True) == (5, True)
# # Running experiments
nb = [2, 8, 10, 16]
experiments = [
((0, 1, 2), (10, 11, 12), 9, [2, 10], True),
((6, 7, 8), (10, 11, 12), 9, [8, 10], True),
((0, 1, 2), (16, 17, 18), 9, [2, 16], True),
((6, 7, 8), (16, 17, 18), 9, [8, 16], True),
((0, 1, 2), (6, 7, 8), 9, nb, False),
((10, 11, 12), (16, 17, 18), 9, nb, False),
]
# +
import itertools
import pandas as pd
sc = (D+BFS).numpy()
data = []
for source, dest, bottle, nonbottles, nonloc in tqdm(experiments):
for s, d in itertools.product(source, dest):
cost, affirm = find_probe(s, d, bottle, step_cost=sc)
data.append(dict(cost=cost, affirm=affirm, nonloc=nonloc, bottle=True))
for nonbottle in nonbottles:
cost, affirm = find_probe(s, d, nonbottle, step_cost=sc)
data.append(dict(cost=cost, affirm=affirm, nonloc=nonloc, bottle=False))
# -
df = pd.DataFrame(data)
# affirm == nonloc
summary = df.groupby(['nonloc', 'bottle']).cost.mean().reset_index()
summary
# +
import seaborn as sns
df['Affirm'] = df['nonloc']
df['Execution Cost'] = df['cost']
df['Bottleneck'] = df['bottle']
#sns.lineplot(data=summary, x='affirm', order=[True, False], y='cost', hue='bottle', markers=True)
ax = sns.catplot(
x='Affirm', y='Execution Cost', hue='Bottleneck', kind="point", data=df,
order=[True, False],
palette=[
tuple(x/255 for x in (148, 192, 216, 255)),
tuple(x/255 for x in (165, 97, 143, 255)),
]
)
f = plt.gcf()
#plt.xticks([0, 1])
#plt.xlim(1.1, -0.1)
ax.set(
xticklabels=['Affirm', 'Reject'],
xlabel='',
title='Model',
)
plt.ylim(1, None)
f.set_size_inches(2.5,2)
f.set_dpi(80)
plt.savefig('figures/model_of_solway2e.pdf', bbox_inches='tight')
# -
# +
lowest = float('inf')
for l1_penalty in np.logspace(1, 7, num=13):
    model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set = None, l2_penalty=0.,
                                              l1_penalty = l1_penalty, verbose = False)
    rss = RSS(model.predict(validation), validation['price'])
    if rss < lowest:
        lowest = rss
        print(l1_penalty)
    print(str(l1_penalty) + " with RSS " + str(rss))
# -
# *** QUIZ QUESTION. *** What was the best value for the `l1_penalty`?
10.0
# ***QUIZ QUESTION***
# Also, using this value of L1 penalty, how many nonzero weights do you have?
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set = None, l2_penalty=0.,
l1_penalty = 10.0, verbose = False)
model.get("coefficients").print_rows(num_rows=18, num_columns=3)
# # Limit the number of nonzero weights
#
# What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in it.
# In this section, you are going to implement a simple, two-phase procedure to achieve this goal:
# 1. Explore a large range of `l1_penalty` values to find a narrow region of `l1_penalty` values where models are likely to have the desired number of non-zero weights.
# 2. Further explore the narrow region you found to find a good value for `l1_penalty` that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for `l1_penalty`.
max_nonzeros = 7
# ## Exploring the larger range of values to find a narrow range with the desired sparsity
#
# Let's define a wide range of possible `l1_penalty_values`:
l1_penalty_values = np.logspace(8, 10, num=20)
# Now, implement a loop that search through this space of possible `l1_penalty` values:
#
# * For `l1_penalty` in `np.logspace(8, 10, num=20)`:
# * Fit a regression model with a given `l1_penalty` on TRAIN data. Specify `l1_penalty=l1_penalty` and `l2_penalty=0.` in the parameter list. When you call `linear_regression.create()` make sure you set `validation_set = None`
# * Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
# * *Hint: `model['coefficients']['value']` gives you an SArray with the parameters you learned. If you call the method `.nnz()` on it, you will find the number of non-zero parameters!*
number_of_nonzeros = []
for l1_penalty in np.logspace(8, 10, num=20):
model = graphlab.linear_regression.create(training, target='price', features=all_features, validation_set= None,
verbose = False, l1_penalty = l1_penalty,
l2_penalty = 0)
    print(l1_penalty)
    print(model['coefficients']['value'].nnz())
    number_of_nonzeros.append(model['coefficients']['value'].nnz())
# Out of this large range, we want to find the two ends of our desired narrow range of `l1_penalty`. At one end, we will have `l1_penalty` values that have too few non-zeros, and at the other end, we will have an `l1_penalty` that has too many non-zeros.
#
# More formally, find:
# * The largest `l1_penalty` that has more non-zeros than `max_nonzeros` (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
# * Store this value in the variable `l1_penalty_min` (we will use it later)
# * The smallest `l1_penalty` that has fewer non-zeros than `max_nonzeros` (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
# * Store this value in the variable `l1_penalty_max` (we will use it later)
#
#
# *Hint: there are many ways to do this, e.g.:*
# * Programmatically within the loop above
# * Creating a list with the number of non-zeros for each value of `l1_penalty` and inspecting it to find the appropriate boundaries.
l1_penalty_min = 2976351441.63
l1_penalty_max = 3792690190.73
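# These boundaries can also be located programmatically. Below is a minimal sketch of that approach;
# it assumes the nonzero counts from the loop above were appended to the `number_of_nonzeros` list,
# in the same order as the penalties from `np.logspace(8, 10, num=20)`. Separate `_check` names are
# used here so the values assigned above (which are used later) are not overwritten.

# +
# Pair each candidate penalty with its nonzero count, then pick the boundaries from those pairs.
penalty_values = np.logspace(8, 10, num=20)
pairs = list(zip(penalty_values, number_of_nonzeros))

# largest penalty that still has MORE nonzeros than max_nonzeros
l1_penalty_min_check = max(p for p, nnz in pairs if nnz > max_nonzeros)
# smallest penalty that has FEWER nonzeros than max_nonzeros
l1_penalty_max_check = min(p for p, nnz in pairs if nnz < max_nonzeros)
print(l1_penalty_min_check, l1_penalty_max_check)
# -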
# ***QUIZ QUESTION.*** What values did you find for `l1_penalty_min` and `l1_penalty_max`, respectively?
# ## Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
#
# We will now explore the narrow region of `l1_penalty` values we found:
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
# * For `l1_penalty` in `np.linspace(l1_penalty_min,l1_penalty_max,20)`:
# * Fit a regression model with a given `l1_penalty` on TRAIN data. Specify `l1_penalty=l1_penalty` and `l2_penalty=0.` in the parameter list. When you call `linear_regression.create()` make sure you set `validation_set = None`
# * Measure the RSS of the learned model on the VALIDATION set
#
# Find the model that has the lowest RSS on the VALIDATION set and has sparsity *equal* to `max_nonzeros`.
for l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set = None, verbose = False,
l1_penalty = l1_penalty, l2_penalty=0.)
    print(str(l1_penalty) + " -----" + str(model['coefficients']['value'].nnz()))
    print(RSS(model.predict(validation), validation['price']))
# ***QUIZ QUESTIONS***
# 1. What value of `l1_penalty` in our narrow range has the lowest RSS on the VALIDATION set and has sparsity *equal* to `max_nonzeros`?
# 2. What features in this model have non-zero coefficients?
winning_l1_penalty = 3448968612.16
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set = None, verbose = False,
l1_penalty = winning_l1_penalty, l2_penalty=0.)
model.get("coefficients").print_rows(num_rows=18, num_columns=3)
| 10,994 |
/2_HUBBLE/hubble.ipynb
|
78b0292430d0d4a192a2ff5734809b7b1e651249
|
[] |
no_license
|
matt-ngo/Astronomical-Source-Detection
|
https://github.com/matt-ngo/Astronomical-Source-Detection
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 6,394 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Nyabokelean/Predicting_Electricity_consumption/blob/master/Energy_In_Uganda.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Br-38PiKUn92" colab_type="text"
# ## Business Understanding
# + [markdown] id="I0q_jxX6UsYL" colab_type="text"
# ## Data Understanding
# + id="QCCavICLFB9T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 214} outputId="6390af94-7b35-4518-c4df-4c0f264a09e5"
# !pip install bar_chart_race
# + id="KsX6Vo78Ssq6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 214} outputId="30836dcf-18ae-4e29-ad18-11e680a386dc"
# !pip install bar_chart_race
# + id="HuRyPb0RcMt0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="c965fa63-52c1-4e56-d9ca-65cdb347c2c4"
# Import necessary libraries.
#
import pandas as pd
import numpy as np
import seaborn as sns
import plotly as pty
import matplotlib.pyplot as plt
# %matplotlib inline
from datetime import datetime
import plotly.express as px
import plotly.graph_objects as go
import plotly.figure_factory as ff
from IPython.display import HTML
import calendar
from plotly.subplots import make_subplots
from plotly import graph_objects
# + [markdown] id="9a7MhWTOcs4a" colab_type="text"
# ## Loading the Dataset.
# + id="2s1j7OoycRmO" colab_type="code" colab={}
# Load the dataset.
#
energy = pd.read_csv("Copy of train_6BJx641.csv")
# + id="GScXsdW5cfWi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 384} outputId="cd3c65ea-3d6c-4184-a169-e63614cd6147"
# Preview the top 5 rows.
#
energy.head()
# + id="FgTKDgxGdIZk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 384} outputId="9fc4cb03-74a3-497c-c0ee-2100cbc3e655"
# Preview bottom 5 rows.
#
energy.tail()
# + [markdown] id="Pbha-8w2V2KR" colab_type="text"
# > Our dataframe has about 26495 rows.
# > Let's check the size of the dataframe to confirm this.
# + id="058DQIcfdOrm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="cc84d30e-98d7-4ccb-e4e3-cd119b2a1ab1"
# Check the size of the dataframe.
#
energy.shape
# + [markdown] id="oruMbpHBWP8-" colab_type="text"
# > The dataframe has an actual size of 26496 Rows and 8 Columns.
# + [markdown] id="8x37JfqXWfW5" colab_type="text"
# ## Data Cleaning
# + id="6gPUlrSJePv-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 176} outputId="d9d38851-8591-4aeb-e467-ba5b9be24760"
# Check for null values.
#
energy.isna().sum()
# + [markdown] id="x9PhrTlqX2MG" colab_type="text"
# >>
# The dataset does not have any null values.
# + [markdown] id="XIu5Dig4Xxm5" colab_type="text"
# **Duplicated Values**
# + id="hWjIu_apeaX8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3b59fb0e-0cba-44d9-c25f-6f4f490e5520"
# Check for Duplicates values.
#
energy.duplicated().sum()
# + [markdown] id="3HVx3whPYFXi" colab_type="text"
# >>
# We also don't have any duplicated rows in our dataset.
# + [markdown] id="uetulawkYPku" colab_type="text"
# >>
# We can now split the 'datetime' column into two columns.
# >>
# These columns are
# >>
# * Date
# * Time
#
# >>
# We split this column because we need to analyze how electricity consumption varies on an hourly basis.
#
# + [markdown] id="y5IS3apiZJj2" colab_type="text"
# **Datetime**
# + id="WPbyAxMIfc_k" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="359f778d-08c4-4de5-d0be-02cc26d4f702"
# Split the datetime column into two columns and add them to the dataframe.
# These two columns are: date and time.
#
d = pd.to_datetime(energy['datetime'], infer_datetime_format=True)
energy['date'] = d.dt.date
energy['time'] = d.dt.time
print (energy)
# + id="Aea28kHIGkHl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 384} outputId="188002e8-f5eb-49ba-c97f-39efd40cd513"
energy['time_hour'] = pd.to_datetime(energy['datetime']).dt.hour
energy.head()
# + id="bIyTxB4j-2dx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 307} outputId="c5fd2f21-7727-4d68-8585-3be59943bc36"
energy.describe()
# + [markdown] id="m0ce_pWc-DVL" colab_type="text"
# > We observe that during the seventh month of the year we have a spike in electricity consumption.
# >>
# During the month of January we see a slightly lower record of electricity consumption.
# + [markdown] id="A6cSoZ0Ab4p8" colab_type="text"
# **Drop datetime column**
# + id="iOe7DMVsdckG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="dd8ae7f7-9ce1-4f1a-a26b-1a6b67c1f0dc"
# Check for the shape of the new dataframe.
#
energy.shape
# + [markdown] id="hQHpyhJqr9Fg" colab_type="text"
# **Export clean dataframe**
# + id="yCKW5_DHr8nE" colab_type="code" colab={}
energy.to_csv("Clean_Train_Dataset.csv")
# + id="AN1hzjAOsXW3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 401} outputId="51f87075-5d06-4603-a672-4fcbe24d3996"
# Load The clean Dataset.
#
data = pd.read_csv("Clean_Train_Dataset.csv")
data.head()
# + id="aNB9kTPDbqAp" colab_type="code" colab={}
data.drop(['Unnamed: 0'], axis=1, inplace=True)
# + [markdown] id="CC9AUoqkg8S_" colab_type="text"
# **Numerical Dataframe**
# + id="EmLHFTqYgyza" colab_type="code" colab={}
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
numeric = data.select_dtypes(include=numerics)
# + [markdown] id="MhdnXDxwhA-g" colab_type="text"
# **Categorical Dateframe**
# + id="wGpbtYyYhFIl" colab_type="code" colab={}
categorical = ['object']
category = data.select_dtypes(include=categorical)
# + [markdown] id="ytbeFrucZpb4" colab_type="text"
# **Checking fo outliers**
# + id="3ZifdWX6ZumL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 318} outputId="904ba33b-8980-49f4-b8d2-0559443fcac3"
plt.style.use('bmh')
# plotting outliers for the numeric columns
#
_t, cols = pd.DataFrame.boxplot(numeric, return_type='both')
outliers = [flier.get_ydata() for flier in cols['fliers']]
out_list = [i.tolist() for i in outliers]
print(f" Ourtliers:\n {out_list}")
# + [markdown] id="VukmThM7b4zx" colab_type="text"
# > We have outliers in all the variables, with the electricity consumption variable having the largest number of outliers.
# + id="IfXJX01MdirK" colab_type="code" colab={}
col = data.columns.tolist()
# + id="5GZbWSqWhIee" colab_type="code" colab={}
def boxplots_by_class(df, list_of_X, y='electricity_consumption'):
plt.rcParams['figure.figsize']=(10,5)
f, ax = plt.subplots(1,len(list_of_X))
for i in range(len(list_of_X)):
sns.boxplot(y, y=list_of_X[i], data=df, ax=ax[i], palette='coolwarm')
f.tight_layout()
# + id="1zGz5Rz0fczY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 672} outputId="211d7b46-230c-4fe3-c863-9dac95817177"
boxplots_by_class(df=numeric, list_of_X=col[:5])
# + id="N-2zqXUnr6ER" colab_type="code" colab={}
# Function for counting the number of outliers in our data columns and checking the percentage for each
# ----
#
def detect_outlier(data):
outliers=[]
threshold=3
mean_1 = np.mean(data)
std_1 =np.std(data)
for y in data:
z_score= (y - mean_1)/std_1
if np.abs(z_score) > threshold:
outliers.append(y)
return outliers
# + id="xDUKF-Jhrw7x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 141} outputId="7969b1e0-b571-470d-a8aa-e1c89a3c636b"
# Counting the number of outliers in our data columns and checking the percentage for each column using the z-score
#
#
for col in numeric:
rows, columns = numeric.shape
percent_coefficient = float(100 / rows)
outliers = detect_outlier(numeric[col])
outliers_count = len(outliers)
outliers_percentage = outliers_count * percent_coefficient
print(f"{col} has {outliers_count} outliers in total, which is {outliers_percentage:.2}% of data")
# + id="5K8ry-WHsEkI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="ff41e45b-df0d-41ac-cd7e-4edebf5eb2be"
# Getting ouliers from our dataframe using a z-test
#
from scipy import stats
z = np.abs(stats.zscore(numeric))
print(z)
# Dropping and Confirming that our outliers have been dropped from the dataset.
#
df_o = numeric[(z < 3).all(axis=1)]
print(f"Previous dataframe size : {numeric.shape[0]}")
print(f"New dataframe size: {df_o.shape[0]}")
# + [markdown] id="I4fv1ChC_fNK" colab_type="text"
# # Exploratory Data Analysis.
#
#
#
# + [markdown] id="FKCiZ3IjdwgJ" colab_type="text"
# ### Visualization
#
# + [markdown] id="cPYYlaWodzNj" colab_type="text"
# #### Correlation
# + id="DtjilHPydtBr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 504} outputId="b82aea31-f3dc-466c-d8d3-9366574a36f4"
plt.rcParams['figure.figsize']=(12,8)
sns.set(font_scale=1.2)
sns.heatmap(df_o.corr(), cmap='coolwarm');
# + [markdown] id="rx83Gz3DfMO1" colab_type="text"
# **Temperature VS Electricity Consumption compared to Var1 VS Electricity Consumption**
# + [markdown] id="zcGd_i-Je0ZT" colab_type="text"
# **Histograms by class**
# + id="cz2VCp0EezxO" colab_type="code" colab={}
# Draw Histogram plots of Numeric columns.
#
def hist_by_class(df, list_of_X, y='var2'):
sns.set(rc={'figure.figsize':(15.7,10.27)})
for i in range(len(list_of_X)):
        # g = sns.FacetGrid(df, col=y, hue=y)
        g = sns.FacetGrid(df, hue=y)
g.map(sns.distplot, list_of_X[i], hist=True, rug=False).add_legend()
# + id="hdo3X6T5fvDN" colab_type="code" colab={}
hist_by_class(df_o, list_of_X=col[:3])
# + id="7YBUlScRBJ9u" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="71e48dea-d7c9-462c-d69a-aa83781d6382"
fig = px.line(data, x='date', y='electricity_consumption', color='var2')
fig.update_xaxes(rangeslider_visible=True)
fig.show()
# + id="iyS0vtva_4Ek" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="1b9bf64e-b448-4a9a-b233-eaf9d36e3fdb"
fig = px.line(data, x='date', y='temperature', color='var2')
fig.update_xaxes(rangeslider_visible=True)
fig.show()
# + id="0x_4ZpDNlaIX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="6c04c4b2-977c-4598-ca9a-b697de54d025"
fig = px.line(data, x='date', y='pressure', color='var2')
fig.update_xaxes(rangeslider_visible=True)
fig.show()
# + id="t_xwWYfkmzFk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="6ba4b065-7ce4-4248-aef4-f28e00a1c858"
fig = px.line(data, x='date', y='windspeed', color='var2')
fig.update_xaxes(rangeslider_visible=True)
fig.show()
# + id="azbvJ9d8nCZS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="d1fcd859-a4e7-4ba8-dd4a-79c9e77809db"
# Drawing frequency tables for the categorical variables
#
for col in category:
print(data[col].value_counts())
print("\n")
# + id="4BQJr1j3tx5Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 438} outputId="200ab33b-0d3e-49bc-aa6b-2c875097067c"
# Plotting sublots for our categorical variables.
#
fig, (ax) = plt.subplots(figsize=(10, 6))
fig.suptitle('Frequency Distributions')
sns.barplot(data['var2'].value_counts().keys(), data['var2'].value_counts(), ax=ax)
plt.ylabel(col)
plt.xlabel('Count', fontsize=16)
plt.show()
# + id="LJXqHPwAuE9B" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="c9a9bebd-a7a6-45a7-fe49-88cad6c7a985"
fig = px.line(data, x='date', y='electricity_consumption', range_x=['2017-01-01','2017-07-31'], color='var2')
fig.update_xaxes(rangeslider_visible=True)
fig.show()
# + id="QBmJHkT3vfCL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="147acdf8-0f19-4d84-eb7e-6569268e6e8b"
fig = px.line(data, x='date', y='electricity_consumption', title='Time Series with Range Slider and Selectors')
fig.update_xaxes(
rangeslider_visible=True,
rangeselector=dict(
buttons=list([
dict(count=1, label="1m", step="month", stepmode="backward"),
dict(count=6, label="6m", step="month", stepmode="backward"),
dict(count=1, label="YTD", step="year", stepmode="todate"),
dict(count=1, label="1y", step="year", stepmode="backward"),
dict(step="all")
])
)
)
fig.show()
# + id="YxD0VGr90PnK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="7f712b55-f8ff-408d-99df-855fad3cc722"
fig = go.Figure(go.Scatter(
x = data['datetime'],
y = data['electricity_consumption']
))
fig.update_xaxes(
rangeslider_visible=True,
tickformatstops = [
dict(dtickrange=[None, 1000], value="%H:%M:%S.%L ms"),
dict(dtickrange=[1000, 60000], value="%H:%M:%S s"),
dict(dtickrange=[60000, 3600000], value="%H:%M m"),
dict(dtickrange=[3600000, 86400000], value="%H:%M h"),
dict(dtickrange=[86400000, 604800000], value="%e. %b d"),
dict(dtickrange=[604800000, "M1"], value="%e. %b w"),
dict(dtickrange=["M1", "M12"], value="%b '%y M"),
dict(dtickrange=["M12", None], value="%Y Y")
]
)
fig.show()
# + id="kuP0e7l-Bvpp" colab_type="code" colab={}
| 13,835 |
/Bakh_1037/Plot_maker.ipynb
|
b39e4b9796241b34064e9e83592733fbe4badd66
|
[] |
no_license
|
subake/CompMath-Project
|
https://github.com/subake/CompMath-Project
| 0 | 2 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 257,947 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ZWlRCBM70oj0"
# Predicting the percentage score of a student based on the number of study hours using Linear Regression
#
# + [markdown] id="b4VWVrRL0cgr"
# AUTHOR: V A S Kiranmayee
# + id="Xsn7cocsz1bJ"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# + colab={"base_uri": "https://localhost:8080/", "height": 195} id="mygaCgQ8z7xp" outputId="eb20514e-58c5-4326-d698-0a2fa67e017a"
link="https://raw.githubusercontent.com/AdiPersonalWorks/Random/master/student_scores%20-%20student_scores.csv"
data=pd.read_csv(link)
data.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 284} id="UBnzqLByz-kw" outputId="ce509274-a31a-4397-841a-655e267f807e"
data.describe()
# + colab={"base_uri": "https://localhost:8080/"} id="Vb75si9p0Cf1" outputId="d9819967-554c-4219-d0a9-cdb6a6f0b98e"
data.shape
# + [markdown] id="L0KlhpuK0Mey"
# ### **Visualization using matplotlib**
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="fSnaz5Mb0GVK" outputId="7a42b9cf-4473-4090-abe7-dd82ffa80d73"
plt.scatter(data.Hours,data.Scores,c='b')
plt.xlabel('Hours studied')
plt.ylabel('Percentage scored')
plt.title('Relation Between Hours & Scores')
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="IgUsBdjW016W" outputId="a988ef5e-f43a-4333-8214-404eeab960a5"
plt.hist(data.Scores, bins=[15,35,50,80,100], rwidth=0.86)
plt.xlabel('Scores')
plt.ylabel('Y axis')
plt.title('Score Ranges')
# + [markdown] id="5yjoB6Uu1Djh"
# From the above histogram we can say that there are many students who scored more than 50%, but there are also many students with a very low percentage between 15% and 35%, and only a few who scored above 35% and below 50%.
# + [markdown] id="BljnMI5h1KWx"
# **Preparing the data for training it**
# + id="6EcVOEnl04lb"
x=data.drop(columns='Scores')
y=data.drop(columns='Hours')
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y,
test_size=0.2, random_state=0)
# + [markdown] id="ZCOqG7oC1Ws9"
# Training data using linear regression
# + id="lQNOl85m1IsJ"
from sklearn.linear_model import LinearRegression
# + colab={"base_uri": "https://localhost:8080/"} id="BSJ3u4wn1cGH" outputId="c3b29da3-c48d-41d9-e66a-ecb136e0fe2b"
reg = LinearRegression()
reg.fit(X_train, y_train)
# + [markdown] id="lUECmbBM1p7h"
# Plotting the regressor line on scatter plot
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="KqRog8SL1krL" outputId="c7115e0f-edd6-4908-9532-ccdf44b6a748"
line = reg.coef_*x+reg.intercept_
# Plotting for the test data
plt.scatter(x, y, c='r')
plt.plot(x, line);
plt.show()
# + [markdown] id="Nzi1Ff9Q1yMR"
# Predicting the scores using model
# + colab={"base_uri": "https://localhost:8080/"} id="qvT4pICE1oy2" outputId="f009110c-a68e-4388-bfbf-c3686c910a85"
reg.predict([[2.5]])
# + [markdown] id="3YJs1MLm12sK"
# Predicting the score of a student who studied for 9.25 hours
#
# + colab={"base_uri": "https://localhost:8080/"} id="1Ajb_ORk1xWL" outputId="3e8e96c9-64d9-410f-e472-03a573c6d24b"
reg.predict([[9.25]])
# + [markdown] id="jBe5STw_1_a-"
# comparing the actual VS predicted
# + colab={"base_uri": "https://localhost:8080/"} id="Mrgp3S_117XV" outputId="92cecd2f-c6b0-4054-c201-d9afeb8e9baa"
y_predicted=reg.predict(X_test)
y_predicted
# + colab={"base_uri": "https://localhost:8080/", "height": 195} id="Dhid_VgI2DEG" outputId="3853f3f3-705c-4ee9-9166-433cc03fafcc"
y_test
# + [markdown] id="D1BFQKTN2DXm"
# **Accuracy & Mean absolute Error of the model**
#
# + colab={"base_uri": "https://localhost:8080/"} id="k6gzHX7A2GdN" outputId="efdb14b6-2c4e-40b7-9bcd-31386e1fa31c"
reg.score(X_test,y_test)
# + colab={"base_uri": "https://localhost:8080/"} id="D5f3uTjN2KYj" outputId="33a7dd60-9fc0-41dc-e96f-4dc93927cd31"
from sklearn import metrics
print('Mean Absolute Error is:',
metrics.mean_absolute_error(y_test, y_predicted))
# + [markdown] id="blKK6Pow2fux"
# From this it can be concluded that our model is giving 94.5% accuracy with a mean absolute error of 4.183859899002975.
#
# Thank you.
| 4,423 |
/notebooks/bogdanbaraban/t-test-relation-of-lunch-and-test-results.ipynb
|
61ddac853c4e4491a80fbdc709279f0b048ee81f
|
[] |
no_license
|
Sayem-Mohammad-Imtiaz/kaggle-notebooks
|
https://github.com/Sayem-Mohammad-Imtiaz/kaggle-notebooks
| 5 | 6 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,361 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from math import sqrt
from numpy import mean
from scipy.stats import sem
from scipy.stats import t
data=pd.read_csv('../input/StudentsPerformance.csv')
data['overall_score'] = data[['math score', 'reading score', 'writing score']].mean(axis=1)
print(data.describe())
data.head()
print(data.groupby(['lunch'])['overall_score'].mean())
print('====='*10)
print(data.groupby(['lunch'])['overall_score'].count())
# +
from scipy.stats import ttest_ind
free_lunch_mean = data[(data["lunch"] == 'free/reduced')]
standard_lunch_mean = data[(data["lunch"] == 'standard')]
print('t=%.3f, p=%.3f' %(ttest_ind(standard_lunch_mean['overall_score'],
free_lunch_mean['overall_score'])))
print('With free Lunch:', data[(data["lunch"] == 'free/reduced')].overall_score.mean())
print('With standard Lunch:', data[(data["lunch"] == 'standard')].overall_score.mean())
# -
# function for calculating the t-test for two samples
def ttest(data1, data2, alpha):
# calculate means
mean1, mean2 = mean(data1), mean(data2)
# calculate standard errors
se1, se2 = sem(data1), sem(data2)
# standard error on the difference between the samples
sed = sqrt(se1**2 + se2**2)
# calculate the t statistic
t_stat = (mean1 - mean2) / sed
# degrees of freedom
df = len(data1) + len(data2) - 2
# calculate the critical value
cv = t.ppf(1.0 - alpha, df) # at what point is 1-alpha percentile
# calculate the p-value
p = (1.0 - t.cdf(abs(t_stat), df)) * 2 # cdf - Cumulative Distribution Function
# return everything
return t_stat, df, cv, p
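# As a sketch of the math the `ttest` function above implements: the (unpooled) two-sample t statistic is
#
# $$ t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{SE_1^2 + SE_2^2}}, \qquad SE_i = \frac{s_i}{\sqrt{n_i}}, $$
#
# with degrees of freedom taken here as $df = n_1 + n_2 - 2$, critical value $t_{1-\alpha,\,df}$
# from the t distribution, and two-sided p-value $p = 2\,\big(1 - F_t(|t|,\,df)\big)$,
# where $F_t$ is the CDF of the t distribution with $df$ degrees of freedom.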
# +
# generate two independent samples
data1 = data[data['lunch']=='standard']['overall_score']
data2 = data[data['lunch']=='free/reduced']['overall_score']
# calculate the t test
alpha = 0.05
t_stat, df, cv, p = ttest(data1, data2, alpha)
print('t=%.3f, degrees of freedom=%d, cv=%.3f, p=%.3f' %(t_stat, df, cv, p))
# interpret via critical value
if abs(t_stat) <= cv:
print('Accept null hypothesis that the means are equal.')
else:
print('Reject the null hypothesis that the means are equal.')
# interpret via p-value
if p > alpha:
print('Accept null hypothesis that the means are equal.')
else:
print('Reject the null hypothesis that the means are equal.')
# -
| 2,578 |
/final_project/.ipynb_checkpoints/FinalProject_A13709073-checkpoint.ipynb
|
0b34e6f88114a48adc47433ce8ab323f606ce76b
|
[] |
no_license
|
COGS108/individual_sp20
|
https://github.com/COGS108/individual_sp20
| 1 | 188 | null | 2020-06-23T21:57:40 | 2020-06-16T18:48:45 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 306,319 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # COGS 108 - Final Project
# # Overview
# This project, in an effort to improve health inspection services, looks through two datasets of inspections and violations and locates the common errors/violations within restaurants, to see whether the most prevalent finding is what causes the drop in health inspection score.
#
# What was found was that, despite food surface cleanliness being the most common violation within inspections, it was not the deciding factor for deducting a restaurant's score. It was instead facility upkeep that heavily impacted restaurant scores, despite not being the most frequently found violation.
# # Name & PID
#
# - Name: Emmanuel Ejanda
# - PID: A13709073
# # Research Question
# What are the common errors and compliance failures, found when looking through an inspection dataset, that cause restaurants to drop in health letter grade or fail a health inspection?
#
# ## Background and Prior Work
# -Health code violations within restaurants are recorded and documented within this checklist; in summary, this reference includes such violations and information regarding their instances (Reference 1)
#
# -A summary for this reference: information regarding the city of San Diego's health inspection codes and regulations within the state of California (Reference 2)
#
# -A summary for this reference: information regarding the county of Los Angeles, with included high-risk and low-risk specifications, within the state of California (Reference 3)
#
# -Overall, an analysis of the grade system for health inspection relies on the restaurant's compliance with lists that detail which practices should and should not be seen within the environment. After going through the list and tallying up what was and was not followed, a letter grade is issued - this is the factor that will be the focus of this research project.
#
# References (include links):
# - 1)
# https://www.webstaurantstore.com/article/16/health-inspection-checklist.html
#
# - 2) https://www.sandiegocounty.gov/content/dam/sdc/deh/fhd/food/pdf/publications_foodselfinspection.pdf
#
# - 3)
# http://www.publichealth.lacounty.gov/EH/docs/RefGuideFoodInspectionReport.pdf
# # Hypothesis
#
# Hypothesis: I believe that the most common/ most prevalent mistakes and errors found within health inspections directly cause a decrease in food regulation compliance letter grades.
#
# Why?: This hypothesis was developed by wondering what causes a restaurant to lose ratings in health inspections. In an effort to find out why, I looked to the health inspection regulations and checklists and began thinking of how these items could affect the overall grade for the restaurant. Knowing the answer to this could give an opportunity to spread information to other restaurants and improve health and safety practices while also improving restaurant health inspection processes.
# # Dataset(s)
#
# - Dataset Name: inspections.csv
# - Link to the dataset: https://canvas.ucsd.edu/courses/12630/files?preview=1639871
# - Number of observations: 18466 Observations
# Description: Information regarding the inspections made for various restaurants and includes information such as the restaurant name, date of inspection, city, state, etc.
#
# - Dataset Name: violations.csv
# - Link to the dataset: https://canvas.ucsd.edu/courses/12630/files?preview=1639871
# - Number of observations: 189802 Observations
# Description:
#
# Dataset Combination plan:
# Find restaurants with a score less than 90 then match their hsisid's with the violations dataset.
# # Setup
# +
## Import necessary items for project
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Seaborn
import seaborn as sns
sns.set()
sns.set_context('talk')
import warnings
warnings.filterwarnings('ignore')
# -
#Data Reading
IN_df = pd.read_csv('data/inspections.csv')
VI_df = pd.read_csv('data/violations.csv')
# # Data Cleaning
## Cut Uneccessary Information/ Compromising Information from Datasets
IN_df = IN_df.drop(['date','name','address1','address2','city','postalcode','phonenumber','restaurantopendate','days_from_open_date','facilitytype','x','y','geocodestatus','zip','type','description','inspectedby','inspection_num','inspector_id','previous_inspection_date','days_since_previous_inspection','previous_inspection_by_same_inspector','num_critical_previous','num_non_critical_previous','num_critical_mean_previous','num_non_critical_mean_previous','avg_neighbor_num_critical','avg_neighbor_num_non_critical','top_match','second_match','state'], axis=1)
VI_df = VI_df.drop(['X.objectid','inspectdate','statecode','inspectedby','observationtype','count','cdcriskfactor','cdcdataitem','violationtype','pointvalue'],axis = 1)
# +
## Examine Inspection with Scores less than 90 (B or lower)
IN = IN_df[IN_df['score'] <90]
## Merge Datasets based on hsisid's
MR = pd.merge(IN, VI_df, on = 'hsisid')
## Store unique ID's Of Inspections (Duplicate ID's Indicate multiple Violations on one ID)
ID = MR['hsisid'].unique()
# -
## Shorten Descriptions
def Short_Desc(string):
string = string.lower()
string = string.strip()
if 'toxic' in string:
output = 'Toxic'
elif 'garbage' in string:
output = 'Garbage Disposed'
elif 'knowledge' in string:
output = 'Knowledge'
elif 'thermometers' in string:
output = 'Thermometers'
elif 'toilet' in string:
output = 'Restrooms'
elif 'physical' in string:
output = 'Facilities'
elif 'ventilation' in string:
output = 'Light/Vents'
elif 'hot hold' in string:
output = 'Hot Holding Temp'
elif 'cold hold' in string:
output = 'Cold Holding Temp'
elif 'handwashing' in string:
output = 'Wash Sinks'
elif 'date mark' in string:
output = 'Date Managing'
elif 'time as a' in string:
output = 'Health Records'
elif 'wiping' in string:
output = 'Wipe Cloths'
elif 'hands clean' in string:
output = 'Wash Hands'
elif 'contamination' in string:
output = 'Food Contamination'
elif 'insects' in string:
output = 'Insects/Rodents'
elif 'single-use' in string:
output = 'Consumables stored'
elif 'food-contact' in string:
output = 'Food Surface Clean'
elif 'utensils' in string:
output = 'Utensils/Equipment Stored'
elif 'pic' in string:
output = 'Certification'
elif 'food separated' in string:
output = 'Food protected'
elif 'equipment, food' in string:
output = 'Approved Items'
elif 'non-food contact' in string:
output = 'NonFood Surface Clean'
elif 'exclusion' in string:
output = 'Report/Restrict/Exclude'
elif 'bare hand' in string:
output = 'No bare hand contact'
    elif 'personal clean' in string:
output = 'Personal Cleanliness'
elif 'warewashing' in string:
output = 'Warewashing Facility'
elif 'unadulterated' in string:
output = 'Food is Safe'
elif 'compliance' in string:
output = 'Compliance'
elif 'proper cooling methods' in string:
output = 'Proper cooling Methods'
elif 'food properly' in string:
output = 'Food Labeled'
elif 'plumbing' in string:
output = 'Plumbing'
elif 'proper eating' in string:
output = 'Proper Eat/Drink/Taste'
elif 'labeled' in string:
output = 'Food Labeled'
elif 'proper cooling time' in string:
output = 'Cooling Time/Temp'
elif 'proper cooking time' in string:
output = 'Cooking Time/Temp'
elif 'in-use' in string:
output = 'Used Utensils Stored'
elif 'variance' in string:
output = 'Specialized Processing'
elif 'approved thawing' in string:
output = 'Thawing Method Used'
elif 'proper reheating' in string:
output = 'Proper reheating'
elif 'food obtained' in string:
output = 'Food Obtained Approved Source'
elif 'hot & cold' in string:
output = 'Hot and Cold Water'
elif 'consumer advisory' in string:
output = 'Consumer advisory Raw/Uncooked'
elif 'sewage' in string:
output = 'Sewage disposed properly'
elif 'shellstock' in string:
output = 'Records/Shellstock/ParasiteDestruction'
elif 'food received' in string:
output = 'Food Received Proper Temp'
elif 'washing fruits' in string:
output = 'Washed Fruits and Veggies'
elif 'pasteurized' in string:
output = 'Pasteurized Eggs'
elif 'disposition' in string:
output = 'Disposition/ Unsafe Food'
elif 'food additives' in string:
output = 'Food Additives'
elif 'plant food' in string:
output = 'Plant food Properly Cooked'
else:
output = string
return output
## Apply Method Short_Desc to shortdesc of MR
MR['shortdesc'] = MR['shortdesc'].apply(Short_Desc)
## This Method is to Clean Commentary In the Later Data Analysis Section
def Short_Comment(string):
string = string.lower()
string = string.strip()
if 'clean' in string:
output = 'Cleaning'
elif 'damage' in string:
output = 'Damage'
elif 'repair' in string:
output = 'Damage'
else:
output = 'Facility Maintenance'
return output
# # Data Analysis & Results
## Check Total Count of Violations within Data
MR['shortdesc'].value_counts()
## Plot Total Counts for Violations to provide visual feedback
plt.figure(figsize=(17,17))
desc = sns.countplot(y='shortdesc',data = MR)
desc.set_ylabel('Description of Violation',fontsize = 30)
desc.set_xlabel('Counts',fontsize = 30)
desc.tick_params(labelsize=20)
# +
## Take the top 3 Violations and look back on the scores of said violations
MR2 = MR.loc[ (MR['shortdesc'] == 'Food Surface Clean')|(MR['shortdesc'] == 'Facilities') |(MR['shortdesc'] == 'Date Managing')]
## Remove Duplicate hsisid's to look at individual scores
MR3 = MR2.drop_duplicates(subset='score')
Scores = MR3['score']
ID = MR3['hsisid']
Violation = MR3['shortdesc']
Comments = MR3['comments']
## Create Data Frame With Collected Information
COM = pd.concat([ID,Scores,Violation,Comments],axis= 1)
COM = COM.reset_index(drop=True)
COM = COM.rename(columns = {'hsisid': 'Restaurant ID','score' : 'Score','shortdesc' :'Violation','comments':'Comments'})
COM
# -
## Review Summarized Comments from the Restaurants that scored below 90
COM['Comments'] = COM['Comments'].apply(Short_Comment)
COM
# # Ethics & Privacy
# Certain privacy issues that must be dealt with are the names/locations of restaurants, should they appear in the dataset. Removal of these pieces of information has been done, as they are not relevant to the question we are trying to answer. This information could also potentially lead to unforeseen consequences should we examine one such restaurant with less than adequate health results and this information were given to consumers who frequent or would have frequented such a place.
#
# Care has been taken not to include any means of identification for restaurants other than the given hsisid, which is used to match data with other datasets and pull information from there. Should there be any mention of errors within an ID of a restaurant, no physical identifiers are available to look up said location.
#
# The removal of locations and restaurant identification, however, doesn't completely remove the presence of bias that had affected the information gathered from the restaurant beforehand. A popular chain, for instance, despite not being named in this data analysis, can carry innate bias that may have been difficult, if not impossible, to remove from the analyzed data.
# # Conclusion & Discussion
# From the final dataframe examined, we can see that the area that brought a restaurant down below a 90 in health score (below an A rating) was Facility Maintenance. This includes damage control, restaurant cleaning, and general maintenance. This is surprising because, while there was a much larger value count for food surface cleanliness among health inspections, those restaurants' health scores did not drop below a 90 in comparison to these restaurants, which range between the 70's and 80's in health score. These scores represent C's and B's respectively, and as such, are grounds for improvement in comparison to other restaurants. What we can take away from this finding is that, even though food surface cleanliness was the most prevalent health code violation among restaurants, it wasn't the deciding factor for bringing down a restaurant's score. Instead, we see that facility upkeep had brought these restaurants to a score below an A.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
color='darkgray',**fontDict['data_labels']) #color="white" if cm[i, j] > thresh else "black"
plt.tight_layout()
plt.ylabel('True label',**fontDict['ylabel'])
plt.xlabel('Predicted label',**fontDict['xlabel'])
if print_raw_matrix:
print_title = 'Raw Confusion Matrix Counts:'
print('\n',print_title)
print(conf_matrix)
fig = plt.gcf()
return fig
def evaluate_model(y_true, y_pred,history=None):
from sklearn import metrics
if y_true.ndim>1:
y_true = y_true.argmax(axis=1)
if y_pred.ndim>1:
y_pred = y_pred.argmax(axis=1)
if history is not None:
plot_keras_history(history)
num_dashes=20
print('\n')
print('---'*num_dashes)
print('\tCLASSIFICATION REPORT:')
print('---'*num_dashes)
print(metrics.classification_report(y_true,y_pred))
fig = plot_confusion_matrix((y_true,y_pred))
plt.show()
class Timer():
def __init__(self, start=True,time_fmt='%m/%d/%y - %T'):
import tzlocal
import datetime as dt
self.tz = tzlocal.get_localzone()
self.fmt= time_fmt
self._created = dt.datetime.now(tz=self.tz)
if start:
self.start()
def get_time(self):
import datetime as dt
return dt.datetime.now(tz=self.tz)
def start(self,verbose=True):
self._laps_completed = 0
self.start = self.get_time()
if verbose:
print(f'[i] Timer started at {self.start.strftime(self.fmt)}')
def stop(self, verbose=True):
self._laps_completed += 1
self.end = self.get_time()
self.elapsed = self.end - self.start
if verbose:
print(f'[i] Timer stopped at {self.end.strftime(self.fmt)}')
print(f' - Total Time: {self.elapsed}')
from sklearn.metrics import make_scorer
def my_custom_scorer(y_true,y_pred,verbose=True):#,scoring='accuracy',verbose=True):
"""My custom score function to use with sklearn's GridSearchCV
Maximizes the average accuracy per class using a normalized confusion matrix"""
import sklearn.metrics as metrics
from sklearn.metrics import confusion_matrix
import numpy as np
## reduce dimensions of y_train and y_test
if y_true.ndim>1:
y_true = y_true.argmax(axis=1)
if y_pred.ndim>1:
y_pred = y_pred.argmax(axis=1)
evaluate_model(y_true,y_pred)
print('\n\n')
return metrics.accuracy_score(y_true,y_pred)
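# A minimal usage sketch (not from the original notebook): wrapping my_custom_scorer with
# sklearn's make_scorer so it can be passed to GridSearchCV. The estimator and parameter grid
# below are placeholder assumptions purely for illustration.
def demo_custom_scorer_usage():
    from sklearn.model_selection import GridSearchCV
    from sklearn.linear_model import LogisticRegression
    from sklearn.datasets import make_classification
    # small synthetic binary-classification problem
    X_demo, y_demo = make_classification(n_samples=200, n_features=10, random_state=0)
    demo_scorer = make_scorer(my_custom_scorer)
    # GridSearchCV calls the custom scorer on each CV fold's predictions
    grid = GridSearchCV(LogisticRegression(max_iter=1000), {'C': [0.1, 1.0]},
                        scoring=demo_scorer, cv=3)
    grid.fit(X_demo, y_demo)
    print(grid.best_params_)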
def get_secret_password(file='/Users/jamesirving/.secret/gmail.json'):
with open(file) as file:
import json
gmail = json.loads(file.read())
# email_notification()
print(gmail.keys())
return gmail
def email_notification(password_obj=None,subject='GridSearch Finished',
msg='The GridSearch is now complete.'):
"""Sends email notification from gmail account using previously encrypyted password object (an instance
of EncrypytedPassword).
Args:
password_obj (dict): Login info dict with keys: username,password.
subject (str):Text for subject line.
msg (str): Text for body of email.
Returns:
Prints `Email sent!` if email successful.
"""
if password_obj is None:
gmail = get_secret_password()
else:
assert ('username' in password_obj)&('password' in password_obj)
gmail = password_obj
if isinstance(msg,str)==False:
msg=str(msg)
# import required packages
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email import encoders
## WRITE EMAIL
message = MIMEMultipart()
message['Subject'] =subject
message['To'] = gmail['username']
message['From'] = gmail['username']
message.attach(MIMEText(msg,'plain'))
text_message = message.as_string()
# Send email request
try:
with smtplib.SMTP_SSL('smtp.gmail.com',465) as server:
server.login(gmail['username'],gmail['password'])
server.sendmail(gmail['username'],gmail['username'], text_message)#text_message)
server.close()
print(f"Email sent to {gmail['username']}!")
except Exception as e:
print(e)
print('Something went wrong')
def prepare_gridsearch_report(grid_search,X_test,y_test,
save_path = 'results/emails/'):
"""Creates a text report with grid search results
and saves it to disk. Text is returned and can be attached as
the `msg` param for email_notification'"""
## Make folders for saving email contents
import os,sys
import sklearn.metrics as metrics
os.makedirs(save_path,exist_ok=True)
    ## Get time for report
import datetime as dt
import tzlocal as tz
now = dt.datetime.now(tz.get_localzone())
time = now.strftime("%m/%d/%Y - %I:%M %p")
## filepaths for fig and report
fig_fpath = save_path+'confusion_matrix.png'
msg_text_path = save_path+'msg.txt'
## GET BEST PARAMS AND MODEL
best_params = str(grid_search.best_params_)
best_model = grid_search.best_estimator_#(grid.best_params_)
# Get predictions
y_hat_test = best_model.predict(X_test)
## Get Classification report
report = metrics.classification_report(y_test.argmax(axis=1),y_hat_test)
## Get text confusion matrix
cm = np.round(metrics.confusion_matrix(y_test.argmax(axis=1),y_hat_test,normalize='true'),2)
cm_str = str(cm)
## Combine text for report
    msg_text = [f'Grid Search Results from {time}:\n']
    msg_text.append('The best params were:\n\t')
msg_text.append(best_params)
msg_text.append('\n\n')
msg_text.append('Classification Report:\n')
msg_text.append(report)
msg_text.append('\n\n')
msg_text.append('Confusion Matrix (normalized to true labels):\n')
msg_text.append(cm_str)
## Save the text to file
with open(msg_text_path,'w+') as f:
f.writelines(msg_text)
print(f"Message saved as {msg_text_path}")
## Load the (fixed) text from file
with open(msg_text_path,'r') as f:
txt = f.read()
## Plot and save confusion matrix
fig = plot_confusion_matrix((y_test,y_hat_test))
try:
fig.savefig(fig_fpath, dpi=300, facecolor='w', edgecolor='w', orientation='portrait',
papertype=None, format=None, transparent=False, bbox_inches=None, pad_inches=0.1, frameon=None, metadata=None)
print(f"Figure saved as {fig_fpath}")
except Exception as e:
print(f"[!] ERROR saving figure:\n\t{e}")
return txt#,fig
# + [markdown] id="M9kJLR7pgd5I" colab_type="text"
# ## Using Colab Pro
# + id="8XX6Sd5dpuOQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 306} outputId="0bf0e5a1-09f2-4ab7-b044-c1c63d5099a0"
#https://colab.research.google.com/notebooks/pro.ipynb
# gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime → "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
# + id="CJCNXnmPqBC6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="cec553e2-19a5-433b-d636-53b0d9b4c320"
from psutil import virtual_memory
ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb))
if ram_gb < 20:
print('To enable a high-RAM runtime, select the Runtime → "Change runtime type"')
print('menu, and then select High-RAM in the Runtime shape dropdown. Then, ')
print('re-execute this cell.')
else:
print('You are using a high-RAM runtime!')
# + [markdown] id="8qbS0lSrghaO" colab_type="text"
# ## Installs & Imports
# + id="KHc94KpTlbRj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="641999b3-b3a8-470c-a68c-b2c88afdf1f3"
# !pip install pillow
# !pip install opencv-contrib-python
# !pip install -U fsds_100719
from fsds_100719.imports import *
# + id="2yML_wHNlbRn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 151} outputId="79f2fbf3-c173-4c51-9d68-fac065ed6226"
from PIL import Image
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import cv2
# + id="ykHe6fpjlbRq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c7a5e7c1-97d4-4144-e677-eeaba80e936c"
# ## dataset
base_folder = r'dogs-vs-cats-sorted/'#My Drive/Datasets/dogs-vs-cats-sorted/'
os.listdir(base_folder)
# + id="3vsPkzBelbRt" colab_type="code" colab={}
# ## DOG VS CAT
base_folder = r'dogs-vs-cats-sorted/'
train_base_dir = base_folder+'training_set/'
test_base_dir =base_folder+'test_set/'
train_dogs = train_base_dir+'dogs/'
train_cats = train_base_dir+'cats/'
test_dogs = test_base_dir+'dogs/'
test_cats = test_base_dir+'cats/'
# + [markdown] id="JY6Lqh7thQ6m" colab_type="text"
# ## Image manipulation with opencv
# + id="2gdysZbGlbRy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="2576dc8c-ccd9-47c5-a18b-8caccf1ee934"
import cv2,glob,os
dog_filenames = glob.glob(train_dogs+'*.jpg')
cat_filenames = glob.glob(train_cats+'*.jpg')
img_filenames = [*dog_filenames,*cat_filenames]
dog_testnames = glob.glob(test_dogs+'*.jpg')
cat_testnames = glob.glob(test_cats+'*.jpg')
print(len(img_filenames))
img_filenames[:10]
# + id="FliRjzJilbR1" colab_type="code" colab={}
def load_image_cv2(filename, RGB=True):
"""Loads image using cv2 and converts to either matplotlib-RGB (default)
or grayscale."""
import cv2
IMG = cv2.imread(filename)
if RGB: cmap = cv2.COLOR_BGR2RGB
else: cmap=cv2.COLOR_BGR2GRAY
return cv2.cvtColor(IMG,cmap)
# + id="Pw5vCtFUlbR4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="d9b0cbca-10d4-48e9-9f86-75bae32b0243"
## Load in and display image.
IMG = load_image_cv2(img_filenames[0],RGB=False)
## Even if you import as grayscale, must tell plt to use gray cmap
fig,ax= plt.subplots(ncols=2,figsize=(12,5))
ax[0].imshow(IMG)
ax[1].imshow(IMG,cmap='gray')
## Remove axes labels https://stackoverflow.com/a/2176591
[(a.get_xaxis().set_visible(False), a.get_yaxis().set_visible(False)) for a in ax]
print(IMG.shape)
# + id="XF01W_lZlbR7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="1207d05b-eb53-4f12-f8c3-ca3fe517f486"
## Using seaborn color palette with imshow
from matplotlib.colors import ListedColormap
cmap = ListedColormap(sns.color_palette('RdBu',n_colors=25))
plt.imshow(IMG,cmap=cmap)
# + id="GJXd8BUllbR-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="21755830-051f-466b-a2c4-fa3fdb185e7b"
plt.imshow(IMG,cmap='Reds')
# + id="YvhsfVPilbSA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 252} outputId="dd58672f-9d09-47fe-d615-eefbfa5242cf"
## RESIZING IMAGES
print(IMG.shape)
small = cv2.resize(IMG,(100,50))
plt.imshow(small,cmap=cmap)
# + id="Ibpi5ZNZlbSD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="94e58f1e-ba61-4c59-deb4-2d227199bbf4"
## Resizing Using Ratios
w_ratio = 0.5
h_ratio = 0.5
## Must pass cv2.resize(IMG, (0,0), IMG, w_ratio, h_ratio) to resize by ratios
new_img = cv2.resize(IMG, (0,0), IMG, w_ratio,h_ratio)
plt.imshow(new_img,cmap=cmap)
print(new_img.shape)
# + id="0StvIfnRlbSF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="d8da6dc8-6619-4810-f57f-30ebd2a97c3a"
new_img = cv2.flip(new_img,0)
plt.imshow(new_img,cmap=cmap)
# + id="8U5WVuj3lbSH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="875b3c7d-8042-4ea6-d289-94a8f84e6470"
*a,_=test_base_dir.split('/')
save_dir = '/'.join(a)+'/'  # re-append the trailing slash so filenames join correctly
save_dir
# + id="gbyz6QgllbSK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="193cf53b-1530-4fc3-8209-8b3151f52716"
cv2.imwrite(save_dir+'example_save.jpg',new_img)
# + id="JFbGkkbOlbSN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="7990723e-3a78-4fe1-feb1-c0ed5678cef9"
plt.imshow(cv2.imread(save_dir+'example_save.jpg',cv2.COLOR_BGR2RGB),cmap=cmap)
plt.gcf().patch.set_visible(False)
# + id="Wapm7Z1plbSQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="44cdf97e-7e09-4e90-923f-2fae71167225"
print(base_folder)
os.listdir(base_folder)
# + [markdown] id="P1-7hb_lhVdU" colab_type="text"
# # Using CNNs
# + [markdown] id="L5CNHxnagDjo" colab_type="text"
# ## Preparing Images Using .flow instead of flow_from_directory
# - https://discuss.analyticsvidhya.com/t/keras-image-preprocessing-using-flow-and-not-flow-from-directory/69460/2
#
# + id="hMrChDEtsGZx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="c6e743e5-e9a7-4c21-865a-3c144a6ed66c"
from PIL import Image
from keras.preprocessing import image
from imageio import imread
from skimage.transform import resize
import cv2
from tqdm import tqdm
# defining a function to read images
def read_img(img_path,target_size=(150, 150, 3)):
img = image.load_img(img_path, target_size=target_size)
img = image.img_to_array(img)
return img
# reading the images
train_img = []
train_label = []
# dog=1
for img_path in tqdm(dog_filenames):
train_img.append(read_img(img_path))
train_label.append(1)
for img_path in tqdm(cat_filenames):
train_img.append(read_img(img_path))
train_label.append(0)
print('\n',pd.Series(train_label).value_counts())
# reading the images
dog_testnames = glob.glob(test_dogs+'*.jpg')
cat_testnames = glob.glob(test_cats+'*.jpg')
test_img = []
test_label = []
for img_path in tqdm(dog_testnames):
test_img.append(read_img(img_path))
test_label.append(1)
for img_path in tqdm(cat_testnames):
test_img.append(read_img(img_path))
test_label.append(0)
print('\n',pd.Series(test_label).value_counts())
# + id="PylTCOJLFh4j" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
X = np.array(train_img, np.float32)
y = np.array(train_label)
X_test = np.array(test_img, np.float32)
y_test = np.array(test_label)
X_train,X_val,y_train,y_val = train_test_split(X,y,test_size=0.1)
# + id="EJ32Dq1HlbSS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="85a8540f-4e24-4c70-96a4-b00ee5feb414"
def train_test_val_datagens(BATCH_SIZE = 32):
## Create training and test data
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
val_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow(X_train,y=y_train,batch_size=BATCH_SIZE)
test_set = test_datagen.flow(X_test,y=y_test,batch_size=BATCH_SIZE)
val_set = val_datagen.flow(X_val,y=y_val,batch_size=BATCH_SIZE)
return training_set,test_set,val_set
training_set,test_set,val_set = train_test_val_datagens(BATCH_SIZE=32)
# help(train_datagen.flow)
# training_set = train_datagen.flow_from_directory(base_folder+'training_set/',
# target_size = (64, 64),
# batch_size = 32,
# class_mode = 'binary')
# test_set = test_datagen.flow_from_directory(base_folder+'test_set/',
# target_size = (64, 64),
# batch_size = 32,
# class_mode = 'binary')
shapes = ["Batchsize", "img_width","img_height","img_dim"]
SHAPES = dict(zip(shapes, training_set[0][0].shape))
print(SHAPES)
print(training_set[0][0].shape)
print('\nLabels for batch')
print(training_set[0][1])
# + id="_i56KGpclbSU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="b2632aaf-f290-4f69-cf63-6139d14d33be"
training_set[0][1]
# + id="aQWzX5XWyGhZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5f5d8304-0b24-4851-cfd5-eebfa09c412c"
print(SHAPES)
# + id="5ZbaFM3xlbSa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="f6771821-f3e5-4fec-f627-fae2879a5cc6"
# Part 1 - Building the CNN
clock = fs.jmi.Clock()
clock.tic('')
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
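## NOTE: SHAPES['Batchsize'] (32, taken from the generator batch shape) is being reused below as the number of Conv2D filters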
classifier.add(Conv2D(SHAPES['Batchsize'], (3, 3),
input_shape = (SHAPES['img_width'],
SHAPES['img_height'],
SHAPES['img_dim']),
activation = 'relu'))
classifier.add(Conv2D(SHAPES['Batchsize'], (3, 3),
input_shape = (SHAPES['img_width'],
SHAPES['img_height'],
SHAPES['img_dim']),
activation = 'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Adding a second convolutional layer
classifier.add(Conv2D(SHAPES['Batchsize'], (3, 3),
activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units = SHAPES['Batchsize'], activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam',
loss = 'binary_crossentropy',
metrics = ['accuracy'])
display(classifier.summary())
# Part 2 - Fitting the CNN to the images
classifier.fit_generator(training_set,
steps_per_epoch = 1000,
epochs = 2,
validation_data = test_set,
validation_steps = 250,workers=-1)
clock.toc('')
# + [markdown] id="-cdSMxwih-AI" colab_type="text"
# ## Getting Predictions for a Single Image?
# + id="3xqfuOkilbSc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 303} outputId="36c27d09-5f11-4230-dae8-27358cb540bb"
import numpy as np
from keras.preprocessing import image
test_image_ = image.load_img(base_folder+'single_prediction/cat_or_dog_1.jpg', target_size = (150, 150))
test_image = image.img_to_array(test_image_)
## Must expand to get a pred for only 1
test_image = np.expand_dims(test_image, axis = 0)
result = classifier.predict(test_image)
# training_set.class_indices
## the sigmoid output is a probability, so threshold at 0.5 instead of comparing to 1
if result[0][0] > 0.5:
    prediction = 'dog'
else:
    prediction = 'cat'
print(prediction)
plt.imshow(test_image_,cmap='gray')
# + id="le7jSVcIixbD" colab_type="code" colab={}
# y_hat_test = classifier.predict_classes(X_test).flatten()
# pd.Series(y_hat_test).value_counts()
# + [markdown] id="aBz76CL3j84K" colab_type="text"
# ## Evaluate Model
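# *Note:* the `evaluate_model` helper used below is assumed to be defined earlier in the notebook (or provided via `fsds_100719`). In case it is not, the next cell is a minimal, hypothetical stand-in with the same call signature; its body is an assumption, not the original helper.
# +
## Hypothetical fallback: only define evaluate_model if it doesn't already exist
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, confusion_matrix
if 'evaluate_model' not in globals():
    def evaluate_model(y_true, y_pred, history=None):
        """Minimal stand-in: print a classification report, show a confusion
        matrix, and (optionally) plot the keras training history."""
        print(classification_report(y_true, y_pred))
        fig, ax = plt.subplots()
        ax.imshow(confusion_matrix(y_true, y_pred), cmap='Blues')
        ax.set(xlabel='Predicted', ylabel='True', title='Confusion Matrix')
        if history is not None:
            pd.DataFrame(history.history).plot(title='Training History')
        plt.show()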
# + id="wEJKy_R9iHqW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 829} outputId="6508bfc5-4791-43ab-af0e-623fec0bcbae"
y_hat_val = classifier.predict_classes(X_val).flatten()
print(pd.Series(y_hat_val))
evaluate_model(y_val,y_hat_val)
# + id="uyphhqdq4lpv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 829} outputId="5aae7bd5-9033-4223-df3d-fa19d1286715"
# Part 1 - Building the CNN
clock = fs.jmi.Clock()
clock.tic('')
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense,Dropout
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Conv2D(SHAPES['Batchsize'], (3, 3),
input_shape = (SHAPES['img_width'],
SHAPES['img_height'],
SHAPES['img_dim']),
activation = 'relu'))
classifier.add(Conv2D(SHAPES['Batchsize'], (3, 3),
input_shape = (SHAPES['img_width'],
SHAPES['img_height'],
SHAPES['img_dim']),
activation = 'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Dropout(0.2))
# Adding a second convolutional layer
classifier.add(Conv2D(SHAPES['Batchsize'], (3, 3),
activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Dropout(0.2))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units = SHAPES['Batchsize'], activation = 'relu'))
classifier.add(Dropout(0.2))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam',
loss = 'binary_crossentropy',
metrics = ['accuracy'])
display(classifier.summary())
# Part 2 - Fitting the CNN to the images
classifier.fit_generator(training_set,
steps_per_epoch = 1000,
epochs = 2,
validation_data = test_set,
validation_steps = 250,workers=-1)
clock.toc('')
# + [markdown] id="vtZ-fSjUkHF0" colab_type="text"
# ## Make functions
# + id="UM7sFcOukGlq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 932} outputId="72195fca-b315-4f4d-ea87-b111e25a540a"
def build_model(SHAPES,filter_size=(3,3), pool_size=(2,2),dropout=True):
vars_ = locals()
print(f'[i] MODEL BUILT USING:\n\t{vars_}')
# Part 1 - Building the CNN
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
    from keras.layers import Dense, Dropout
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Conv2D(SHAPES['Batchsize'], filter_size,
input_shape = (SHAPES['img_width'], SHAPES['img_height'], SHAPES['img_dim']),
activation = 'relu'))
classifier.add(Conv2D(SHAPES['Batchsize'], filter_size,
input_shape = (SHAPES['img_width'], SHAPES['img_height'], SHAPES['img_dim']), activation = 'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = pool_size))
if dropout:
classifier.add(Dropout(0.2))
# Adding a second convolutional layer
classifier.add(Conv2D(SHAPES['Batchsize'], filter_size, activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = pool_size))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units = SHAPES['Batchsize'], activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy',
metrics = ['accuracy'])
display(classifier.summary())
return classifier
# Part 2 - Fitting the CNN to the images
def train_model(classifier, training_set, test_set,
                params=dict(steps_per_epoch = 2000,
                            epochs = 3, validation_steps = 500,
                            workers=-1)):
    vars_ = locals()
    print(f'[i] Training model using\n\t{vars_}\n')
    clock = fs.jmi.Clock()
    clock.tic('')
    ## validation_steps and workers are already in params, so pass them only via **params
    history_ = classifier.fit_generator(training_set,
                                        validation_data = test_set,
                                        **params)
    clock.toc('')
    return history_
model_=build_model(SHAPES)
history = train_model(model_,training_set,test_set)
y_hat_val = model_.predict_classes(X_val).flatten()
evaluate_model(y_val,y_hat_val,history=history)
# + [markdown] id="FtnmGCYraMWd" colab_type="text"
# ## Saving and Loading Models/Weights
# + id="w-w_gmXrrOFX" colab_type="code" colab={}
## To save to Gdrive, must first chdir to My Drive (so there's no spaces in fpath)
curdir = os.path.abspath(os.curdir)
gdrive_folder =r'/gdrive/My Drive/'
model_subfolder = 'Datasets/Models/cat_vs_dog/'
try:
os.chdir(gdrive_folder)
os.makedirs(model_subfolder,exist_ok=True)
except Exception as e:
print(f'ERROR: {e}')
os.listdir(model_subfolder)
# + id="AJmB5E5XVakv" colab_type="code" colab={}
def save_model(model,model_subfolder = 'Datasets/Models/cat_vs_dog/',
base_modelname = 'CNN_cat_dog_02142020', as_json=True,
return_fpaths=True,verbose=True):
# https://jovianlin.io/saving-loading-keras-models/
try:
weight_fpath = model_subfolder+base_modelname+'_weights.h5'
model.save_weights(weight_fpath, overwrite=True)
if as_json:
model_fpath = model_subfolder+base_modelname+'_model.json'
# Save the model architecture
with open(model_fpath, 'w') as f:
f.write(model.to_json())
else:
model_fpath = model_subfolder+base_modelname+'_model.h5'
model.save(model_fpath)
if verbose:
print(f"[io] Model architecture saved as {model_fpath}")
print(f"[io] Model weights saved as {weight_fpath}")
else:
print(f"[io] Successfully saved model.")
except Exception as e:
import warnings
warnings.warn(f"ERROR SAVING: {e}")
if return_fpaths:
return model_fpath, weight_fpath
model_fpath,weight_fpath = save_model(model_)
# + id="ehR3dQfiYX13" colab_type="code" colab={}
def load_model(model_fpath,weight_fpath=None,as_json=True):
from keras.models import model_from_json
if (as_json == True) & (weight_fpath is None):
        raise Exception('If using as_json=True, must provide weight_fpath')
# Model reconstruction from JSON file
with open(model_fpath, 'r',encoding="utf8") as f:
model2 = model_from_json(f.read())
# Load weights into the new model
model2.load_weights(weight_fpath)
display(model2.summary())
return model2
model_loaded = load_model(model_fpath,weight_fpath)
# + id="_koIz3CKZ57v" colab_type="code" colab={}
y_hat_val = model_loaded.predict_classes(X_val).flatten()
evaluate_model(y_val,y_hat_val)
# + [markdown] id="dszu4vFoS2Mp" colab_type="text"
# ## Transfer Learning
#
# https://www.kaggle.com/risingdeveloper/transfer-learning-in-keras-on-dogs-vs-cats
# + id="B9xAQK1SlbSe" colab_type="code" colab={}
from keras.applications import InceptionResNetV2
conv_base = InceptionResNetV2(weights='imagenet', include_top=False, input_shape=(150,150,3))
conv_base.summary()
# + id="kKag2f0OlbSh" colab_type="code" colab={}
# + id="TKkdHFLxlbSj" colab_type="code" colab={}
from keras import layers
from keras import models
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(32, activation='relu'))#256
model.add(layers.Dense(1, activation='sigmoid')) #Sigmoid function at the end because we have just two classes
# model.summary()
# + id="ave6CZillbSm" colab_type="code" colab={}
print('Number of trainable weights before freezing the conv base:', len(model.trainable_weights))
conv_base.trainable = False
print('Number of trainable weights after freezing the conv base:', len(model.trainable_weights))
# + id="5a_puKjUlbSo" colab_type="code" colab={}
from keras import optimizers
model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=2e-5), metrics=['accuracy'])
# + id="piJDSJIilbSq" colab_type="code" colab={}
len(training_set)*32
# + id="6lBRMvb47liA" colab_type="code" colab={}
# + id="r6DHGjLplbSt" colab_type="code" colab={}
history = model.fit_generator(training_set,
steps_per_epoch = 2000,
epochs = 2, validation_data = test_set,
validation_steps = 500,workers=-1)
y_hat_val = model.predict_classes(X_val).flatten()
evaluate_model(y_val,y_hat_val)
# + id="BWhbeB2k3Who" colab_type="code" colab={}
pd.Series(y_hat_val).value_counts()
# + id="LL9Tz-nq2GZD" colab_type="code" colab={}
save_model(model)
# + [markdown] id="q9n-JX7ZoHSS" colab_type="text"
# ## Lime
#
# - https://github.com/expectopatronum/code-snippets-blog/blob/master/python/201808_catdog_classifier_lime/analyse-cat-dog-classifier.ipynb
# + id="d9TQLN4voJMX" colab_type="code" colab={}
# # # !pip install lime
# import lime
# from lime import lime_image
# from lime import lime_base
# from lime.wrappers.scikit_image import SegmentationAlgorithm
# from skimage.segmentation import mark_boundaries
# explainer = lime_image.LimeImageExplainer()
# + id="36ca0zaYoWsw" colab_type="code" colab={}
# def explain_single_sample(dataset, idx):
# img_data = dataset[idx][0]
# data = img_data.reshape(IMG_SIZE,IMG_SIZE,3)
# model_out = model.predict([data])[0]
# label = 0
# label_name = "cat"
# if model_out[1] > 0.5:
# label = 1
# label_name = "dog"
# explanation = explainer.explain_instance(data, model.predict, top_labels=2, hide_color=None, num_samples=1000)
# temp, mask = explanation.get_image_and_mask(label, positive_only=True, num_features=5, hide_rest=True)
# fig, ax = plt.subplots(1,3)
# ax[0].imshow(img_data)
# #plt.subplot(1, 2, 1)
# ax[1].imshow(mark_boundaries(temp, mask))
# #plt.show()
# #plt.subplot(1, 2, 2)
# temp, mask = explanation.get_image_and_mask(label, positive_only=False, num_features=20, hide_rest=False)
# ax[2].imshow(mark_boundaries(temp, mask))
# plt.show()
# print("label: ", dataset[idx][1], "prediction:", label_name)
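# + [markdown]
# Below is a minimal, hypothetical LIME sketch for the binary `classifier` trained above (an illustration, not the notebook's original code). It assumes a 150x150x3 image from `X_val`; because the model outputs a single sigmoid probability, a small wrapper converts it into the two-column probability array that `lime_image` expects.
#
# ```python
# from lime import lime_image
# from skimage.segmentation import mark_boundaries
# def predict_proba_two_cols(images):
#     """Return [P(cat), P(dog)] for each image, as lime_image expects."""
#     p_dog = classifier.predict(images).flatten()
#     return np.column_stack([1 - p_dog, p_dog])
# explainer = lime_image.LimeImageExplainer()
# sample = X_val[0] / 255.              # scale the image the same way the generators did
# explanation = explainer.explain_instance(sample, predict_proba_two_cols,
#                                          top_labels=2, hide_color=0,
#                                          num_samples=500)
# temp, mask = explanation.get_image_and_mask(explanation.top_labels[0],
#                                             positive_only=True,
#                                             num_features=5, hide_rest=True)
# plt.imshow(mark_boundaries(temp, mask))
# ```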
# + id="YQMdYErdlbSz" colab_type="code" colab={}
# + id="Z8ZNM1XglbS1" colab_type="code" colab={}
# + id="jKbFWJq9lbS4" colab_type="code" colab={}
# + id="TCADvLJ3lbS6" colab_type="code" colab={}
# + [markdown] colab_type="text" id="srbjybjqQ5m9"
# ### Using os, shutil to create directories and copy files
# - from [Convolutional Neural Networks - Codealong](https://github.com/jirvingphd/dsc-04-43-03-convolutional-neural-networks-code-along-online-ds-ft-021119)
#
# - **first define the folders that currently contain the images get their filenames**
#
# ```python
# import os, shutil
#
# # Define directories to be created:
# data_santa_dir = 'data/santa/'
# data_not_santa_dir = 'data/not_santa/'
# new_dir = 'split/'
#
# # Store the list of all the relevant training target images
# imgs_santa = [file for file in os.listdir(data_santa_dir) if file.endswith('.jpg')]
# print('There are',len(imgs_santa), 'santa images')
#
# # Store the list of all non-target images
# imgs_not_santa = [file for file in os.listdir(data_not_santa_dir) if file.endswith('.jpg')]
# print('There are', len(imgs_not_santa), 'images without santa')
#
# ```
#
# - **Now create new directories for training, testing, and validation images.**
#
# ```python
# # Create the main folder for all of the new sub-folders
# os.mkdir(new_dir)
#
# # Create valid pathnames inside of the new_dir for training images
# train_folder = os.path.join(new_dir, 'train')
# train_santa = os.path.join(train_folder, 'santa')
# train_not_santa = os.path.join(train_folder, 'not_santa')
#
# # Create valid pathnames inside of the new_dir for testing images
# test_folder = os.path.join(new_dir, 'test')
# test_santa = os.path.join(test_folder, 'santa')
# test_not_santa = os.path.join(test_folder, 'not_santa')
#
# # Create valid pathnames inside of the new_dir for validation images
# val_folder = os.path.join(new_dir, 'validation')
# val_santa = os.path.join(val_folder, 'santa')
# val_not_santa = os.path.join(val_folder, 'not_santa')
#
#
# # Now create all of the folders defined above
# os.mkdir(test_folder)
# os.mkdir(test_santa)
# os.mkdir(test_not_santa)
#
# os.mkdir(train_folder)
# os.mkdir(train_santa)
# os.mkdir(train_not_santa)
#
# os.mkdir(val_folder)
# os.mkdir(val_santa)
# os.mkdir(val_not_santa)
#
# ```
#
# - **Now that we have the folders, copy the desired # of images to the correct dataset folders**
#
# ```python
# # The user decided to put 271 images in the training set, 100 in the validation set, and 90 in the test set
# # train santa
# imgs = imgs_santa[:271]
# for img in imgs:
# origin = os.path.join(data_santa_dir, img)
# destination = os.path.join(train_santa, img)
# shutil.copyfile(origin, destination)
#
# # validation santa
# imgs = imgs_santa[271:371]
# for img in imgs:
# origin = os.path.join(data_santa_dir, img)
# destination = os.path.join(val_santa, img)
# shutil.copyfile(origin, destination)
#
# # test santa
# imgs = imgs_santa[371:]
# for img in imgs:
# origin = os.path.join(data_santa_dir, img)
# destination = os.path.join(test_santa, img)
# shutil.copyfile(origin, destination)
#
# ## REPEATED FOR THE NON-SANTA IMAGES - NOT SHOWN
# ```
#
# - Now that we have images in separate directories, we can use Keras's ImageDataGenerator `.flow_from_directory()` method.
#
#
# ```python
#
# # get all the data in the directory split/test (180 images), and reshape them
# test_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(
# test_folder,
# target_size=(64, 64), batch_size = 180)
# # ...do the same for train and val (not shown)
#
#
# # create the data sets
# train_images, train_labels = next(train_generator)
#
# # Make sure things worked
# print ("Number of training samples: " + str(m_train))
# print ("train_images shape: " + str(train_images.shape))
#
#
# # Reshape the training images to have one row for each image
# train_img = train_images.reshape(train_images.shape[0], -1)
# print(train_img.shape)
#
#
# # Reshape the labels to match the data
# train_y = np.reshape(train_labels[:,0], (542,1))
#
# ```
# + [markdown] colab_type="text" id="jUgyK0HRQBxT"
# ## Building CNN From Scratch Lab
# - https://github.com/learn-co-students/dsc-04-43-04-building-a-cnn-from-scratch-online-ds-ft-021119/tree/solution
# - CNN's are great for image processing
# ### Image Data
#
# ```python
# import os #for listdir()
# from keras.preprocessing.image import ImageDataGenerator
#
#
# train_dir = 'chest_xray_downsampled/train'
# validation_dir = 'chest_xray_downsampled/val/'
# test_dir = 'chest_xray_downsampled/test/'
#
# # All images will be rescaled by 1./255
# train_datagen = ImageDataGenerator(rescale=1./255)
# test_datagen = ImageDataGenerator(rescale=1./255)
#
# # Train_generator example
# train_generator = train_datagen.flow_from_directory(
# # This is the target directory
# train_dir,
# # All images will be resized to 150x150
# target_size=(150, 150),
# batch_size=20,
# # Since we use binary_crossentropy loss, we need binary labels
# class_mode='binary')
#
# ```
# - **Images are stored in ImageDataGenerators**
#     - Generally rescale pixel intensity values by 1./255
#     - Load in files with `ImageDataGenerator.flow_from_directory()`, specifying:
# - directory
# - the target_size (the size to convert all images to)
# - batch_size
# - class_mode
#             - Note: the choice of loss function determines the class_mode.
# - If using 'binary_crossentropy' for binary classification, use class_mode='binary'
#
# - ImageDataGenerators are also used for augmenting data.
#
# ```python
#
# train_datagen = ImageDataGenerator(
# rotation_range=40,
# width_shift_range=0.2,
# height_shift_range=0.2,
# shear_range=0.2,
# zoom_range=0.2,
# horizontal_flip=True,
# fill_mode='nearest')
#
# ```
#
# ### Setting Up Initial Network
# ```python
# from keras import models, layers, optimizers
# # or if want to do exact layers:
# from keras.models import Sequential
# from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
# from keras.optimizers import RMSprop
#
# # Initialize sequential model
# model = Sequential()
# ```
# **1A) A CNN should start with a Conv2D layer**
#
# ```python
# layers.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
# ```
#
# - Conv2D layers parameters (to change):
#     - filters: the number of convolution filters (feature maps) to learn (e.g. filters=32)
#     - kernel_size: size (in pixels) of each filter (e.g. kernel_size=(3,3))
#     - activation: activation function to use (e.g. 'relu')
#
# **1B) MaxPooling2D layers following Conv2D layers**
# ```python
# layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format=None)
# ```
#
# - MaxPooling2D parameters:
# - pool_size: factor by which to downscale.
#         - e.g. pool_size=(2,2) will halve the information in the vertical and horizontal directions
#
# **1C,optional) Add a Dropout layer to avoid overfitting:** [Udemy course suggestion]
# ```python
# layers.Dropout(rate, noise_shape=None, seed=None)
# ```
# - Dropout parameters:
# - rate = 0.25 (used by udemy course)
#
# **2) Repeat: Continue layering combinations of Conv2D / MaxPooling2D layers** (Dropout too?):
# - Later layers will need larger # of filters to detect more abstract patterns.
#
# ```python
# model = models.Sequential()
# model.add(layers.Conv2D(32, (3, 3), activation='relu',
# input_shape=(150, 150, 3)))
# model.add(layers.MaxPooling2D((2, 2)))
# model.add(layers.Conv2D(64, (3, 3), activation='relu'))
# model.add(layers.MaxPooling2D((2, 2)))
# model.add(layers.Conv2D(128, (3, 3), activation='relu'))
# model.add(layers.MaxPooling2D((2, 2)))
# model.add(layers.Conv2D(128, (3, 3), activation='relu'))
# model.add(layers.MaxPooling2D((2, 2)))
# ```
#
# **3A) Flatten Data Before Passing on to Dense Layers for Classification/Learning**
# ```python
# layers.Flatten(data_format =None)
# ```
# **3B) Add Dense layers at the end of the convolutional base for learning:**
# ```python
# layers.Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
# ```
# - Will only need to worry about basic parameters:
# - Units:
# - Larger #, used for the actual learning.
# - Activation
# - User choice, 'relu' is always good.
#
# **3C) Add final Dense layer to determine output classification**
# - Add a final small Dense layer (depending on number of classes?)
# - For binary classification:
# - units: 1
# - activation: 'sigmoid'
#
# ``` python
# model.add(Flatten())
# model.add(Dense(512, activation='relu'))
# model.add(Dense(1, activation='sigmoid'))
# ```
#
# **4) Compile the model, selecting loss function, optimizer, and metric**
#
# - Loss Function:
# - For binary classifications, use 'binary_crossentropy'
# - Optimizer:
#     - Use RMSprop
#     - Specify learning rate: ```lr = 1e-4```
# - Metrics:
# - Use 'acc' for accuracy
# ```python
# keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=None, decay=0.0)
# ```
#
#
# ```python
# # Compile Model
# model.compile(loss='binary_crossentropy',
#               optimizer=optimizers.RMSprop(lr=1e-4),
# metrics=['acc'])
#
# #Set the model to train;
# import datetime
# start=datetime.datetime.now()
# history = model.fit_generator(
# train_generator,
# steps_per_epoch=100,
# epochs=30,
# validation_data=validation_generator,
# validation_steps=50)
#
# end=datetime.datetime.now()
# ```
# + [markdown] colab_type="text" id="8rz5t0lMYCfd"
# # Use Pretrained CNNs
#
# ## Pretrained Networks Overview
# - Pretrained networks have already been trained on large pools of data, and have their weights frozen.
# - They enable deep learning on fairly small image datasets
# - a 'small' dataset is less than tens of thousands or hundreds of thousands of images
#
# - Pretrained networks can be used in whole or only specific parts, depending on your need/data.
# - The shallower the layer of neurons, the more generic its features are.
#         - Therefore even if your data is very different, you can still use the lower layers for basic feature extraction.
#     - The deeper the layer, the more abstract its features are.
#         - so you may want to unfreeze the deeper/higher order classification layers and re-train the network on your images.
# <br><br>
# ### Where to find the pre-trained networks
# - **Pretrained Networks are available in [Keras.applications](https://keras.io/applications/)**
# - This list of pretained models are for image classification.
# - DenseNet
# - InceptionResNetV2
# - InceptionV3
# - MobileNet
# - NASNet
# - ResNet50
# - VGG16
# - *VGG19* - used in labs
# - Xception
#
# - You can import these networks and use it as a function with 2 arguments:
# 1. `weights`
# - Determines which data source's training data weights to use.
#         - ex: `weights='imagenet'`
#     2. `include_top`
#         - determines whether or not to include the fully-connected layer at the top of the network
# ```python
# from keras.applications import MobileNet
# conv_base = MobileNet(weights='imagenet', include_top=True)
# ```
#
# ### How to use pretrained networks for feature extraction or for fine-tuning
#
# **You'll learn about two ways to use pre-trained networks:**
# - **Feature extraction**: here, you use the representations learned by a previous network to extract interesting features from new samples.
#     - Method 1) Run your data through the convolutional base layers to detect the basic features, save the output, and then train a new dense classifier from scratch on that output.
#         - (+) It is fast
#         - (-) but can't use data augmentation.
#         - Note: If your images are very different from the pretraining datasets, you may want to only use _part_ of the convolutional base but a _new_ densely connected classifier
#     - Method 2) Extend the conv_base by adding dense layers on top, running everything together.
#         - (+) allows for data augmentation
#         - (-) extremely time-consuming and requires GPU
#
# - **Fine-tuning**: when finetuning, you'll "unfreeze" a few top layers from the convolutional base of the model and train them again together with the densely connected classifier layers of the model.
# - Note that you are changing the parts of the convolutional layers here that were used to detect the more abstract features.
#     - By doing this, you can make your model more relevant for the classification problem at hand (see the short sketch below).
#
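# As a quick illustration (a hedged sketch, not from the original lab; it assumes a Keras `applications` base such as VGG16), freezing everything except the last few layers of the convolutional base looks like:
#
# ```python
# from keras.applications import VGG16
# conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
# conv_base.trainable = True
# for layer in conv_base.layers[:-4]:   # keep all but the last 4 layers frozen
#     layer.trainable = False
# ```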
# ## Additional Resources
#
# * http://cs231n.stanford.edu/syllabus.html
# * https://www.dlology.com/blog/gentle-guide-on-how-yolo-object-localization-works-with-keras/
# * https://www.dlology.com/blog/gentle-guide-on-how-yolo-object-localization-works-with-keras-part-2/
#
# + colab_type="code" id="VHwZgQ9mQAu3" colab={}
from keras.applications import MobileNet
conv_base = MobileNet(weights='imagenet',
include_top = True)
conv_base.summary()
# + [markdown] colab_type="text" id="OStc-MxyrNMY"
# # Using Pretrained Networks - Codealong
# ## Theory/Tips
#
# ## Code
#
# ### Feature Extraction Method 1:
#
# ```python
# from keras.applications import VGG19
# cnn_base = VGG19(weights='imagenet',
# include_top=False,
# input_shape=(64, 64, 3))
# cnn_base.summary()
#
# # ---
#
# datagen = ImageDataGenerator(rescale=1./255)
# batch_size = 10
#
# def extract_features(directory, sample_amount):
#     features = np.zeros(shape=(sample_amount, 2, 2, 512))
#     labels = np.zeros(shape=(sample_amount))
#     generator = datagen.flow_from_directory(
#         directory, target_size=(64, 64),
#         batch_size=batch_size,
#         class_mode='binary')
#     i = 0
#     for inputs_batch, labels_batch in generator:
#         features_batch = cnn_base.predict(inputs_batch)
#         features[i * batch_size : (i + 1) * batch_size] = features_batch
#         labels[i * batch_size : (i + 1) * batch_size] = labels_batch
#         i = i + 1
#         if i * batch_size >= sample_amount:
#             break
#     return features, labels
#
# # ---
#
# # you should be able to divide sample_amount by batch_size!!
# train_features, train_labels = extract_features(train_folder, 540)
# validation_features, validation_labels = extract_features(val_folder, 200)
# test_features, test_labels = extract_features(test_folder, 180)
#
# train_features = np.reshape(train_features, (540, 2 * 2 * 512))
# validation_features = np.reshape(validation_features, (200, 2 * 2 * 512))
# test_features = np.reshape(test_features, (180, 2 * 2 * 512))
#
# # ---
#
# from keras import models
# from keras import layers
# from keras import optimizers
#
# model = models.Sequential()
# model.add(layers.Dense(256, activation='relu', input_dim=2 * 2 * 512))
# model.add(layers.Dense(1, activation='sigmoid'))
#
# model.compile(optimizer=optimizers.RMSprop(lr=1e-4),
# loss='binary_crossentropy',
# metrics=['acc'])
# history = model.fit(train_features, train_labels,
# epochs=20,
# batch_size=10,
# validation_data=(validation_features, validation_labels))
#
# results_test = model.evaluate(test_features, test_labels)
#
# # ---
#
# train_acc = history.history['acc']
# val_acc = history.history['val_acc']
# train_loss = history.history['loss']
# val_loss = history.history['val_loss']
# epch = range(1, len(train_acc) + 1)
# plt.plot(epch, train_acc, 'g.', label='Training Accuracy')
# plt.plot(epch, val_acc, 'g', label='Validation acc')
# plt.title('Accuracy')
# plt.legend()
# plt.figure()
# plt.plot(epch, train_loss, 'r.', label='Training loss')
# plt.plot(epch, val_loss, 'r', label='Validation loss')
# plt.title('Loss')
# plt.legend()
# plt.show()
#
# #---
#
# ```
#
# ## Feature Extraction Method 2
# - this method is much more costly, but allows us to use data augmentation
#
# - The process:
# 1. Add the pretrained model as the first layer
# 2. Add some dense layers as a classifier on top
# 3. Freeze the convolutional base
# - This will prevent the weights from changing.
# - The layer.trainable attribute indicates if a layer is frozen
# 4. Train the model.
#
#
# ```python
#
# model = models.Sequential()
# model.add(cnn_base)
# model.add(layers.Flatten())
# model.add(layers.Dense(132, activation='relu'))
# model.add(layers.Dense(1, activation='sigmoid'))
#
# # ---
#
# #You can check whether a layer is trainable (or alter its setting) through the layer.trainable attribute:
# for layer in model.layers:
# print(layer.name, layer.trainable)
#
# #Similarly, we can check how many trainable weights are in the model:
# print(len(model.trainable_weights))
#
# # ---
#
# # Freeze the conv base
# cnn_base.trainable = False
#
#
# # ---
#
# # get all the data in the directory split/train (542 images), and reshape them
# train_datagen = ImageDataGenerator(
# rescale=1./255,
# rotation_range=40,
# width_shift_range=0.2,
# height_shift_range=0.2,
# shear_range=0.2,
# zoom_range=0.2,
# horizontal_flip=True,
# fill_mode='nearest')
#
# train_generator = train_datagen.flow_from_directory(
# train_folder,
# target_size=(64, 64),
# batch_size= 20,
# class_mode= 'binary')
#
# # get all the data in the directory split/validation (200 images), and reshape them
# val_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(
# val_folder,
# target_size=(64, 64),
# batch_size = 20,
# class_mode= 'binary')
#
# # get all the data in the directory split/test (180 images), and reshape them
# test_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(
# test_folder,
# target_size=(64, 64),
# batch_size = 180,
# class_mode= 'binary')
#
# test_images, test_labels = next(test_generator)
#
# # ---
#
# model.compile(loss='binary_crossentropy',
# optimizer=optimizers.RMSprop(lr=2e-5),
# metrics=['acc'])
#
# # ---
#
# history = model.fit_generator(
# train_generator,
# steps_per_epoch= 27,
# epochs = 10,
# validation_data = val_generator,
# validation_steps = 10)
#
# ```
#
#
# + [markdown] colab_type="text" id="d0sjUefMuIs0"
# ## Fine Tuning
# + [markdown] colab_type="text" id="ey5E6in_uGHO"
# Up till now, we have frozen the entire convolutional base. Again, it cannot be stressed enough how important this is before fine tuning the weights of the later layers of this base. Without training a classifier on the frozen base first, there will be too much noise in the model and initial epochs will overwrite any useful representations encoded in the pretrained model. That said, now that we have tuned a classifier to the frozen base, we can unfreeze a few of the deeper layers from this base and further fine tune them to our problem scenario. In practice, this is apt to be particularly helpful where adapted models span new domain categories. For example, if the pretrained model is on cats and dogs and this is adapted to a problem specific to cats (a closely related domain), there is apt to be little performance gain from fine tuning. On the other hand, if the problem domain is more substantially different, additional gains are more likely in adjusting these more abstract layers of the convolutional base. With that, let's take a look at how to unfreeze and fine tune these later layers.
#
#
#
# ```python
#
# cnn_base.trainable = True
#
# # ---
#
# cnn_base.trainable = True
# set_trainable = False
# for layer in cnn_base.layers:
# if layer.name == 'block5_conv1':
# set_trainable = True
# if set_trainable:
# layer.trainable = True
# else:
# layer.trainable = False
#
#
# # ---
#
# model.compile(loss='binary_crossentropy',
# optimizer=optimizers.RMSprop(lr=1e-4),
# metrics=['accuracy'])
#
# # ---
# history = model.fit_generator(
# train_generator,
# steps_per_epoch= 27,
# epochs = 10,
# validation_data = val_generator,
# validation_steps = 10)
#
# + colab_type="code" id="mtyhyKPOrgY5" colab={}
# + [markdown] colab_type="text" id="dkMVqngpLsKC"
# # NLP Content from Flation Data Science Bootcamp
# + [markdown] colab_type="text" id="lReWoVYsTbwE"
# ## Word Embeddings Lab
#
# + [markdown] colab_type="text" id="oBEb-dY-Lweq"
# - [Solution on Github](https://github.com/jirvingphd/dsc-04-45-04-generating-word-embeddings-lab-online-ds-ft-021119/tree/solution)
#
# - Use `nltk.word_tokenize` to tokenize new headlines data
# - Can use `dataframe['column'].map(word_tokenize)` to tokenize a specific column in a df.
# - After tokenization, leave in original order.
#
# - Use `gensim.models.Word2Vec`
# - [gensim website](https://radimrehurek.com/gensim/)
#
#
# - Instantiate a Word2Vec model:<br> `model = Word2Vec(data, size=100, window=5, min_count=1, workers=4)`
# - `data` = text
# - `size` =size of the embedding vectors to create
# - `window` = # of words to include in sliding window
# - `min_count` = number of times a word must appear to be counted
#     - `workers` = number of threads to use during training
#
# - `model.train(data, total_examples = model.corpus_count, epochs=10)`
#
# - Now can use the model.wv dictionary for methods
# - `wv = model.wv`
#
#
# - **Get Word Similarity**
# - Return the Most Similar Words
# - `wv.most_similar('Texas')`
#
# - Return the Least Similar Words (not so meaningful)
# - `wv.most_similar(negative='Texas')`
#
# - **To Get a Word's Vector**
# - Use wv as a dictionary
# - `wv['Texas']`
#
# - **To Get All Word Vectors**
# - `wv.vectors`
#
# - **To Perform Word *Arithmetic***
# - i.e. 'king' - 'man' + 'woman'
# - Words to add should be `positive=`, words to subtract are `negative=`
# - `wv.most_similar(positive=['king','woman'], negative=['man'])`
# + colab_type="code" id="Hs8MNYwNL7ka" colab={}
import pandas as pd
import numpy as np
from gensim.models import Word2Vec
from nltk import word_tokenize
import nltk
nltk.download('punkt')
# + colab_type="code" id="KIiiqS1hMC_x" colab={}
from google.colab import drive
drive.mount('/content/gdrive/')
file = '/content/gdrive/My Drive/Colab Notebooks/datasets/News_Category_Dataset_v2.json'
df = pd.read_json(file, lines=True)
df.head()
# + colab_type="code" id="poAr1lwHOiZK" colab={}
# Concatenate description and headline.
df['combined_text'] = df.headline+' '+df.short_description
# Tokenize the combined_text column.
data = df['combined_text'].map(word_tokenize)
# Preview first 5
data[:5]
# + colab_type="code" id="XaQMDoG0Ox2y" colab={}
model = Word2Vec(data, size=100, window=5, min_count=1, workers=4)
model.train(data, total_examples=model.corpus_count, epochs=10)
wv = model.wv
wv.most_similar('Texas')
# + colab_type="code" id="O1pgxlGeTK7G" colab={}
wv.most_similar(negative='Texas')
# + colab_type="code" id="dOCWkeEmTRai" colab={}
wv.most_similar(positive=['king','woman'], negative=['man'])
# + [markdown] colab_type="text" id="y7QKTYb9OiOK"
# ___
# + [markdown] colab_type="text" id="5t5C_rbCTdd5"
# ## Classification with Word Embeddings
# + [markdown] colab_type="text" id="QRaVtOWsTjVS"
# ### Using Pretrained Word Vectors with GloVe
# - Best to load top-tier, industry-standard word models
# - Most common is Global Vectors for Word Representation (GloVe) from the Stanford NLP Group.
# - Loading in weights removes the need to instantiate a Word2Vec model.
# - **Instead, the process is:**
# 1. Get the total vocabulary of our data
#     2. Download and unzip the GloVe file needed from Stanford NLP
# 3. Read the GloVe file, save only vectors for words in our dataset.
#
#
#
# - **File must be downloaded manually:**
#     - Must download the zip file for the Stanford Group's pretrained vectors [here](https://nlp.stanford.edu/projects/glove/) (the lab used the smallest one, which still covers 6B tokens).
# - Place the downloaded file directly into same folder as jupyter notebook.
# - **To use in python;**
# - First, must tokenize the words in the vocab as done above with nltk.
# - Second, must turn the vocabulary into a `set`:
# `total_vocabulary = set(word for headline in data for word in headline)`
# - Third, load in the glove file, check for and keep only the words that are inside of total_vocabulary
# ```python
# glove = {}
# with open('glove.6B.50d.txt', 'rb') as f:
# for line in f:
# parts = line.split()
# word = parts[0].decode('utf-8')
# if word in total_vocabulary:
# vector = np.array(parts[1:], dtype=np.float32)
# glove[word] = vector
# ```
# - The code above has created a dictionary called `glove`, which contains all of the vectors for our data's vocabulary.
#
# ### Mean Word Embeddings
#
# - Just loading in vectors does not describe sentence, only individual words.
# - To classify text, we need to calculate ***Mean Word Embeddings***.
#     - Simply get the vector for every word in a sentence and take the average (see the short sketch below).
#     - Mean word vectors will always match the size of each individual word vector, no matter how many words appear in a sentence.
# - Can easily put text into a form for Supervised Learning models, such as Support Vector Machines or Gradient Boosted Trees.
#
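# A minimal sketch of a mean word embedding for one headline (an illustration assuming the `glove` dictionary and the tokenized `data` from above; the helper name is hypothetical):
#
# ```python
# import numpy as np
# def mean_embedding(tokens, vectors, dim=50):
#     """Average the vectors of the tokens we actually have embeddings for."""
#     vecs = [vectors[w] for w in tokens if w in vectors]
#     return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
# headline_vector = mean_embedding(data[0], glove)
# headline_vector.shape   # (50,) for the glove.6B.50d vectors
# ```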
#
# ### Coding a Custom Vectorizer Class (compatible with sklearn)
#
# ### Deep Learning & Embedding Layers
#
# - Mean word embeddings lose some of the meaning, which is why **Sequence Models** exist.
# - Recurrent Neural Networks
# - Long Short Term Memory Cells
# - For deep learning, add **Embedding Layers** into the network.
#
# - **Embedding Layer Requirements**
# - Learn the word embeddings for our data 'on the fly', get the benefits of Word2Vec without needing to train a Word2Vec model separately.
# - Embedding Layers must always be the FIRST layer, immediately below the Input() layer.
# - All words in the text must be integer-encoded (each word its own unique integer)
# - Size of the embedding layer MUST be greater than the total vocabulary size
# - First parameter denotes vocab size, the second parameter denotes word vector size.
# - The size of sequences passed must be set when creating the layer.
#
# ### Embedding Layers in Keras
# - [Classification with Word Embeddings - Lab ](https://github.com/learn-co-students/dsc-04-45-06-classification-with-word-embeddings-lab-online-ds-ft-021119)
# - [Keras Documentation](https://keras.io/layers/embeddings/)
#
# ```python
# from keras.preprocessing.sequence import pad_sequences
# from keras.layers import Input, Dense, LSTM, Embedding
# from keras.layers import Dropout, Activation, Bidirectional, GlobalMaxPool1D
# from keras.models import Model
# from keras import initializers, regularizers, constraints, optimizers, layers
# from keras.preprocessing import text, sequence
#
# y = pd.get_dummies(target).values
# ```
# - **Preprocessing data for use with embedding layer:**
# - Tokenize each example
# - Convert to sequences
# - Pad the sequences so same length
#
# ```python
# tokenizer = text.Tokenizer(num_words=20000) # limiting to first 20000 words in vocab
# tokenizer.fit_on_texts(list(df.combined_text))
# list_tokenized_headlines = tokenizer.texts_to_sequences(df.combined_text)
# X_t = sequence.pad_sequences(list_tokenized_headlines, maxlen=100)
#
# ```
#
# - **Setting up the network architecture:**
# - Input layer first
# - Embedding layer second
# - pass size of vocab, embedding_size
# - embedding_size = 128
# - LSTM layer third
# - Followed by a GlobalMaxPooling1D layer
# - Followed by Dropout layer
# - Dense layer for classification (activation='relu')
# - Followed by another Dropout layer
# - Final Dense layer
# - Number of neurons = # of possible classes.
# - activation = 'softmax'
#
# ```python
#
# embedding_size = 128
# input_ = Input(shape=(100,))
# x = Embedding(20000, embedding_size)(input_)
# x = LSTM(25, return_sequences=True)(x)
# x = GlobalMaxPool1D()(x)
# x = Dropout(0.5)(x)
# x = Dense(50, activation='relu')(x)
# x = Dropout(0.5)(x)
# # There are 41 different possible classes, so we use 41 neurons in our output layer
# x = Dense(41, activation='softmax')(x)
#
# model = Model(inputs=input_, outputs=x)
#
# model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#
# model.fit(X_t, y, epochs=2, batch_size=32, validation_split=0.1)
#
#
# ```
#
#
# ### Lab's W2vVectorizer Class (edited for bs_ds compatibility)
# - Original did not have import numpy statements and did not accept glove during \_\_init__ (but still expected it to be present)
# ```python
# class W2vVectorizer(object):
# """From Learn.co Text Classification with Word Embeddings Lab.
#     An sklearn-compatible class containing the vectors for the fit Word2Vec."""
#
#     def __init__(self, w2v, glove=None):
#         # takes in a dictionary of words and vectors as input
#         import numpy as np
#         if glove is None:
#             glove = w2v
#         self.w2v = w2v
#         if len(w2v) == 0:
#             self.dimensions = 0
#         else:
#             self.dimensions = len(w2v[next(iter(glove))])
#
# # Note from Mike: Even though it doesn't do anything, it's required that this object implement a fit method or else
# # It can't be used in a sklearn Pipeline.
# def fit(self, X, y):
# return self
#
# def transform(self, X):
# import numpy as np
# return np.array([
# np.mean([self.w2v[w] for w in words if w in self.w2v]
# or [np.zeros(self.dimensions)], axis=0) for words in X])
# ```
#
# #### With W2vVectorizer, can use in sklearn Pipelines:
#
# ```python
# from sklearn.ensemble import RandomForestClassifier
# from sklearn.svm import SVC
# from sklearn.linear_model import LogisticRegression
# from sklearn.pipeline import Pipeline
# from sklearn.model_selection import cross_val_score
#
# rf = Pipeline([("Word2Vec Vectorizer", W2vVectorizer(glove)),
# ("Random Forest", RandomForestClassifier(n_estimators=100, verbose=True))])
# svc = Pipeline([("Word2Vec Vectorizer", W2vVectorizer(glove)),
# ('Support Vector Machine', SVC())])
# lr = Pipeline([("Word2Vec Vectorizer", W2vVectorizer(glove)),
# ('Logistic Regression', LogisticRegression())])
#
# # ---
# models = [('Random Forest', rf),
# ("Support Vector Machine", svc),
# ("Logistic Regression", lr)]
# # ---
# scores = [(name, cross_val_score(model, data, target, cv=2).mean()) for name, model, in models]
# scores
# # ---
# ```
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project: Investigate a Wine Quality Dataset
# [Dataset Source](https://archive.ics.uci.edu/ml/datasets/Wine+Quality)<br>
# ***Muthukumar Palavesam***
#
# ## Table of Contents
# <ul>
# <li><a href="#intro">Introduction</a></li>
# <li><a href="#wrangling">Data Wrangling</a></li>
# <li><a href="#eda">Exploratory Data Analysis</a></li>
# <li><a href="#conclusions">Conclusions</a></li>
# </ul>
# <a id='intro'></a>
# ## Introduction
#
# > Two datasets are included, related to red and white vinho verde wine samples, from the north of Portugal. The goal is to model wine quality based on physicochemical tests
#
# > In this project I will go ahead and explore the answers for the below questions:<br>
# > - What chemical characteristics are most important in predicting the quality of wine?
# > - Is a certain type of wine (red or white) associated with higher quality?
# > - Do wines with higher alcoholic content receive better ratings?
# > - Do sweeter wines (more residual sugar) receive better ratings?
# > - What level of acidity receives the highest average rating?
#import the necessary packages
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#https://discuss.analyticsvidhya.com/t/how-to-make-a-text-bold-within-print-statement-in-ipython-notebook/14552
from IPython.display import display,Markdown
# %matplotlib inline
# <a id='wrangling'></a>
# ## Data Wrangling
#
# ### General Properties
# Load your data and print out a few lines. Perform operations to inspect data
wine_red=pd.read_csv('winequality-red.csv')
wine_white=pd.read_csv('winequality-white.csv')
#print out a few lines of red wine
wine_red.head(2)
#print out a few lines of white wine
wine_white.head(2)
# We noticed that the separator is ';' in both datasets, so we will reload the data using the `sep` keyword
wine_red=pd.read_csv('winequality-red.csv',sep=';')
wine_white=pd.read_csv('winequality-white.csv',sep=';')
#print out a few lines of red wine
wine_red.head(2)
#print out a few lines of white wine
wine_white.head(2)
# ### Data Cleaning
# > - Assessing Data
# > - Rename the column
# > - Adding New column `color` in dataset
# > - Appending the dataset
# > - Adding New column `acidity_levels` in dataset
# > - Saving the merged Dataset
# #### Data Cleaning- Assessing Data
# > - number of samples in each dataset
# > - number of columns in each dataset
# > - features with missing values
# > - duplicate rows in the white wine dataset
# > - number of unique values for quality in each dataset
# > - mean density of the red wine dataset
#number of samples in each dataset
display(Markdown('***`Number of samples and columns in Each dataset`***'))
print('Red wine has {} rows and {} columns'.format(wine_red.shape[0],wine_red.shape[1]))
print('White wine has {} rows and {} columns'.format(wine_white.shape[0],wine_white.shape[1]))
#Missing Values
display(Markdown('***`Red Wine Missing values:`***'))
display(wine_red.isnull().sum())
display(Markdown('***`White Wine Missing values:`***'))
display(wine_white.isnull().sum())
# From the above results, no missing values were found
#Duplicate Rows count in each dataset
display(Markdown('***`Red Wine duplicated rows count`***'))
display(sum(wine_red.duplicated()))
display(Markdown('***`White Wine duplicated rows count`***'))
display(sum(wine_white.duplicated()))
#Unique Values of each dataset
display(Markdown('***`Red Wine unique values`***'))
display(wine_red.nunique())
display(Markdown('***`White Wine unique values`***'))
display(wine_white.nunique())
#mean density of the red wine dataset
display(Markdown('***`Mean density of the red wine dataset`***'))
display(wine_red['density'].mean())
display(Markdown('***`Mean density of the white wine dataset`***'))
display(wine_white['density'].mean())
# #### Data Cleaning - Rename the column
display(Markdown('***Red Wine Column Names:***'))
display(wine_red.columns.sort_values())
display(Markdown('***White Wine Column Names:***'))
display(wine_white.columns.sort_values())
# What I observed is that the red wine column name "total_sulfur-dioxide" does not match the corresponding white wine column, so it needs to be renamed
# +
#Renaming the column 'total_sulfur-dioxide' from red wine data frame using 'rename' function
wine_red.rename(columns={'total_sulfur-dioxide':'total_sulfur_dioxide'},inplace=True)
# -
#Recheck the column name by fetching sample records
wine_red.head(2)
# Now the column name has been changed and the column names match across both dataframes
# #### Data Cleaning- Adding New column in dataset
# ***Adding New Column `color`***<br>
#
# We are going to add a `color` column to both datasets, filling it with `red` for the red wine data and `white` for the white wine data, using NumPy's `repeat` function.[Numpy Repeat](https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html)<br>
# numpy.repeat(a, repeats, axis=None)
#Adding column color and value red into wine_red dataset
wine_red['color']=np.repeat('red',wine_red.shape[0])
#Adding column color and value white into wine_white dataset
wine_white['color']=np.repeat('white',wine_white.shape[0])
#fetching some rows to confirm whether the column is showing or not
wine_red.head(2)
wine_white.head(2)
#number of samples in each dataset after adding the column color
display(Markdown('***`Number of samples and columns in Each dataset`***'))
print('Red wine has {} rows and {} columns'.format(wine_red.shape[0],wine_red.shape[1]))
print('White wine has {} rows and {} columns'.format(wine_white.shape[0],wine_white.shape[1]))
# #### Data Cleaning- Appending the dataset
#Append dataframe using `append` function
df=wine_red.append(wine_white)
df.head()
# #### Data Cleaning- Adding New column `acidity_levels` in dataset
# ***Adding New Column `acidity_levels`***<br>
# We are going to add the column `acidity_levels` to show acidity level groups. To achieve this, we are going to use pandas' `cut` function
# [panda's cut](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.cut.html)
# View the min, 25%, 50%, 75%, max pH values with Pandas describe
df['pH'].describe()
# Bin edges that will be used to "cut" the data into groups
bin_edges=[2.72,3.11,3.21,3.32,4.01]
# Labels for the four acidity level groups; the length of bin_names is always `len(bin_edges)-1`
# pH 2.72 to 3.11 = high
# pH 3.12 to 3.21 = mod_high
# pH 3.22 to 3.32 = medium
# pH 3.33 to 4.01 = low
bin_names=['high','mod_high','medium','low']
#adding column
df['acidity_levels']=pd.cut(df['pH'],bin_edges,labels=bin_names)
#recheck the dataframe
df.head(2)
#check that the appended row counts match; if the result is True, the append worked
wine_red.shape[0]+wine_white.shape[0]==df.shape[0]
# #### Data Cleaning- Saving the merged Dataset
#Saving the merged Dataset
df.to_csv('cleaned_wine_data.csv',index=False)
# <a id='eda'></a>
# ## Exploratory Data Analysis
# ### Research Question 1 : which of the following feature variables appear skewed to the right? Fixed Acidity, Total Sulfur Dioxide, pH, Alcohol
# Loading the cleaned Data and Exploring the histogram of the data looks like
wine_data=pd.read_csv('cleaned_wine_data.csv')
wine_data.hist(figsize=(15,15));
# Based on the above histograms, the `alcohol` and `fixed_acidity` appear `skewed to the right`
# ### scatterplots of quality against different feature variables
#
# ### Research Question 2 : which of the following is most likely to have a positive impact on quality? Volatile Acidity, Residual Sugar, pH, Alcohol
wine_data.plot(x='volatile_acidity',y='quality',kind='scatter');
# Lower volatile acidity is associated with higher quality
wine_data.plot(x='pH',y='quality',kind='scatter');
wine_data.plot(x='residual_sugar',y='quality',kind='scatter');
wine_data.plot(x='alcohol',y='quality',kind='scatter');
# Based on the above scatter plots, `Alcohol` is most likely to have a `positive impact on quality`
# ### Research Question 3 : what are all the available Quality in dataset
#available quality levels
sorted(list(wine_data['quality'].unique()))
# So from above results, the available qualities are `3, 4, 5, 6, 7, 8, 9`
# ### Research Question 4 : What is the mean pH value of the each quality
wine_data.groupby('quality')['pH'].mean()
# ### Research Question 5 : What is the mean pH value of each quality in each color
wine_data.groupby(['quality','color'])['pH'].mean()
wine_data.groupby(['quality','color'],as_index=False)['pH'].mean().style.hide_index()
# ### Research Question 6 : Counts by Wine Color and Quality
#http://jonathansoma.com/lede/data-studio/matplotlib/changing-the-background-of-a-pandas-matplotlib-graph/
#Counts by Wine Color and Quality
counts=wine_data.groupby(['quality','color']).count()['pH']
display(counts)
ax=counts.plot(kind='bar',color=['r','w'],figsize=(10,6),grid=False)
ax.set_facecolor('lightslategrey')
ax.set_xlabel('Quality and Color',fontsize=13)
ax.set_ylabel('Counts',fontsize=13)
plt.title('Counts by Wine Color and Quality',fontsize=13);
# First, there are clearly more white samples than red samples, so it's hard to make a fair comparison using raw counts. To balance this out, let's divide each count by the total count for that color and use proportions instead.
# ### Research Question 7 : Proportion by Wine Color and Quality
#Proportion by Wine Color and Quality
totals=wine_data.groupby('color').count()['pH']
proportions=counts/totals
display(proportions)
ax=proportions.plot(kind='bar',color=['r','w'],figsize=(10,6),grid=False);
ax.set_facecolor('lightslategrey');
ax.set_xlabel('Quality and Color',fontsize=13);
ax.set_ylabel('Proportion',fontsize=13);
ax.set_title('Proportion by Wine Color and Quality',fontsize=13);
# ### Create arrays for red bar heights and white bar heights
# Remember, there's a bar for each combination of color and quality rating. Each bar's height is based on the proportion of samples of that color with that quality rating.
# 1. Red bar proportions = counts for each quality rating / total # of red samples
# 2. White bar proportions = counts for each quality rating / total # of white samples
# get counts for each rating and color
color_counts=wine_data.groupby(['color','quality']).count()['pH']
color_counts
# get total counts for each color
color_total=wine_data.groupby('color').count()['pH']
color_total
# get proportions by dividing red rating counts by total # of red samples
red_proportion=color_counts['red']/color_total['red']
#we're missing a red wine value for the 9 rating
red_proportion[9]=0
red_proportion
# get proportions by dividing white rating counts by total # of white samples
white_proportion=color_counts['white']/color_total['white']
white_proportion
# ### Plot proportions on a bar chart
# Set the x coordinate location for each rating group and the width of each bar.
# +
#https://www.youtube.com/watch?v=ffALfovKud4
#https://codeyarns.com/2014/10/27/how-to-change-size-of-matplotlib-plot/
import seaborn as sns
sns.set_style('darkgrid')
ind=np.arange(len(red_proportion)) # the x locations for the groups
fig_size = plt.rcParams["figure.figsize"]
print ("Current Figure size:{}".format(fig_size))
# plot bars
width = 0.35  # bar width, also used below to center the tick labels
plt.bar(ind, red_proportion, color='r', width=width, label='Red Wine')
plt.bar(ind + width, white_proportion, color='w', width=width, label='White Wine')
# title and labels
labels = ['3', '4', '5', '6', '7', '8', '9'] # xtick labels
locations = ind + width / 2
plt.xticks(locations, labels)
plt.xlabel('Quality',fontsize=13)
plt.ylabel('Proportion',fontsize=13)
plt.title('Proportion by Wine Color and Quality',fontsize=13)
fig_size[0] = 10
fig_size[1] = 6
plt.rcParams["figure.figsize"] = fig_size
plt.legend();
# -
# > - From the graph above, quality 6 is the most common rating for white wine and quality 5 is the most common for red wine
# ### Research Question 8 : What is the Average Wine Quality by color
#Average Wine Quality by color
display(wine_data.groupby('color')['quality'].mean())
ax=wine_data.groupby('color')['quality'].mean().plot(kind='bar',color=['r','w'],figsize=(6,4),grid=False)
ax.set_facecolor("lightslategray")
plt.title('Average Wine Quality by color');
plt.xlabel('Color');
plt.ylabel('Quality');
# > - So the mean quality of red wine is less than that of white wine
# ### Research Question 9 : What level of acidity receives the highest average rating?
#Average Quality Ratings by Acidity Level
display(wine_data.groupby('acidity_levels')['quality'].mean().sort_values(ascending=False))
wine_data.groupby('acidity_levels')['quality'].mean().plot(kind='bar',figsize=(8,4),\
color=['royalblue','slateblue','cornflowerblue','skyblue'])
plt.title('Average Quality Ratings by Acidity Level')
plt.xlabel('Acidity Level')
plt.ylabel('Average Quality Rating');
# > - So From above results, the low level of acidity receives the highest mean quality rating
# ### Research Question 10 : Do wines with higher alcoholic content receive better ratings?
# +
# Do wines with higher alcoholic content generally receive better ratings? Yes
mean_quality_low=wine_data.query('alcohol < alcohol.median()').quality.mean()
mean_quality_high=wine_data.query('alcohol >= alcohol.median()').quality.mean()
display(Markdown('***`mean quality rating for the Low alcohol`***'))
display(mean_quality_low)
display(Markdown('***`mean quality rating for the High alcohol`***'))
display(mean_quality_high)
plt.bar([1,2],[mean_quality_low,mean_quality_high],tick_label=['Low','High'],color=['lightcoral','lightsalmon']);
plt.title('Average Quality Ratings by Alcohol Content')
plt.xlabel('Alcohol Content')
plt.ylabel('Average Quality Rating');
# -
# > - From the plot above, wines with higher alcoholic content receive better ratings.
# ### Research Question 11 : Do sweeter wines receive higher ratings?
# +
#Do sweeter wines generally receive higher ratings? Yes
mean_quality_low_sugar=wine_data.query('residual_sugar < residual_sugar.median()').quality.mean()
mean_quality_high_sugar=wine_data.query('residual_sugar >= residual_sugar.median()').quality.mean()
display(Markdown('***`mean quality rating for the Low sugar`***'))
display(mean_quality_low_sugar)
display(Markdown('***`mean quality rating for the High sugar`***'))
display(mean_quality_high_sugar)
plt.bar([1,2],[mean_quality_low_sugar,mean_quality_high_sugar],tick_label=['Low','High'],color=['lightcoral','darkgrey']);
plt.xlabel('Residual Sugar')
plt.ylabel('Average Quality Rating')
plt.title('Average Quality Ratings by Residual Sugar');
# -
# > - From the plot above, sweeter wines receive higher ratings
# <a id='conclusions'></a>
# ## Conclusions
# > - Based on the histograms, the `alcohol` and `fixed_acidity` appear `skewed to the right`
# > - Based on the scatter plots, alcohol is most likely to have a positive impact on quality
# > - Quality 6 is the most common rating for white wine, and quality 5 for red wine
# > - The mean quality of red wine is less than that of white wine
# > - The low level of acidity receives the highest mean quality rating
# > - Wines with higher alcoholic content receive better ratings
# > - Sweeter wines receive higher ratings
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
# Separate features and target variable
housing_X = housing_fillNa.drop("SalePrice", axis=1)
housing_y = housing_fillNa["SalePrice"].copy()
# Get the list of names for numerical and categorical attributes separately
num_attribs = list(housing_X.select_dtypes(exclude='object'))
cat_attribs = list(housing_X.select_dtypes(include='object'))
# Numerical Pipeline to impute any missing values with the median and scale attributes
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('std_scaler', StandardScaler()),
])
# -
# We use ColumnTransformer to handle both categorical and numerical attributes and apply the transformation to all columns.
#
# The constructor of the ColumnTransformer class requires a list of tuples, each containing a name, a transformer, and the list of columns the transformer should be applied to. We specify that the numerical columns should be transformed using the num_pipeline and that the categorical columns should be transformed using a OneHotEncoder. We then apply the constructed ColumnTransformer to the housing data with fit_transform().
# +
full_pipeline = ColumnTransformer([
("num", num_pipeline, num_attribs),
("cat", OneHotEncoder(), cat_attribs),
])
# Description before applying transforms
print(housing_y.describe())
# Apply log-transform to SalePrice
housing_y_prepared = np.log(housing_y)
# Run the transformation pipeline on all the other attributes
housing_X_prepared = full_pipeline.fit_transform(housing_X)
# Description after applying transforms
print(housing_y_prepared.describe())
housing_X_prepared
# -
# # Machine learning model
# + active=""
# Create a test set and linear regression model
# +
# Split data into train and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(housing_X_prepared, housing_y_prepared, test_size=0.2, random_state=7)
# +
# Import modules
from sklearn.linear_model import LinearRegression
# Train the model on training data
model = LinearRegression()
model.fit(X_train, y_train)
# Evaluate the model on test data
print("Accuracy%:", model.score(X_test, y_test)*100)
| 17,538 |
/code/new_words.ipynb
|
ca7b081b0b02fdeb66dab4cda9fb10173a077d15
|
[
"MIT"
] |
permissive
|
HKCaesar/cc-topography
|
https://github.com/HKCaesar/cc-topography
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 101,444 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import pandas as pd
import numpy as np
import cv2
import scipy
import sys
import math
import matplotlib.pyplot as plt
import time
# +
# Stroke width transform - microsoft paper
def stroke_width_transform(filename):
img = cv2.imread(filename,0)
# initialise with infinite intensity at each pixel
swt_img = np.empty(img.shape)
swt_img[:]=np.infty
edges = cv2.Canny(img,175,320) # find out the edges using canny detector
grad_x = cv2.Sobel(img,cv2.CV_64F,1,0,ksize=-1) # scharr filter
grad_y = cv2.Sobel(img,cv2.CV_64F,0,1,ksize=-1)
direction_y = grad_x/np.sqrt(grad_x**2 + grad_y**2) # normalised step
direction_x = grad_y/np.sqrt(grad_x**2 + grad_y**2)
user = input('bright text on dark background? Press "y" for yes and any other key for no.')
if user.lower() != 'y':
direction_y = -direction_y
direction_x = -direction_x
vectors = []
for y in range(img.shape[1]):
for x in range(img.shape[0]):
if edges[x,y]>0:
vector=[]
vector.append((x,y))
n=0
prev_x,prev_y = x,y
while True:
n+=1
if np.isnan(direction_x[x,y]) and np.isnan(direction_y[x,y]):
break
new_x,new_y = math.floor(x + direction_x[x,y]*n), math.floor(y + direction_y[x,y]*n)
if new_x!=prev_x or new_y!=prev_y:
try:
if edges[new_x,new_y]>0:
vector.append((new_x,new_y))
if np.arccos(-direction_x[new_x,new_y]*direction_x[x,y] - direction_y[new_x,new_y]*direction_y[x,y])<np.pi/2.0:
for a,b in vector:
swt_img[a,b]=min(np.linalg.norm((new_x-x,new_y-y)),swt_img[a,b])
vectors.append(vector)
break
else:
vector.append((new_x,new_y))
except IndexError:
break
prev_x,prev_y = new_x,new_y
for elem in vectors:
median = np.median([swt_img[a,b] for a,b in elem])
for a,b in elem:
swt_img[a,b] = min(swt_img[a,b],median)
return swt_img
# Connected component labelling for stroke width transform
def connected_components(swt_im):
con_img = np.zeros(swt_im.shape)
con_list={}
rows = swt_im.shape[0]
cols = swt_im.shape[1]
n=1
for x in range(rows):
for y in range(cols):
if swt_im[x,y]<np.infty and swt_im[x,y]>0:
neigh = [(x,y-1),(x-1,y)]
neighbours=[]
for elem in neigh:
if -1 not in elem:
neighbours.append(elem)
l=[]
for i,j in neighbours:
try:
ratio = swt_im[x,y]/swt_im[i,j]
except IndexError:
continue
if ratio<3.0 and 1/ratio<3.0:
if con_img[i,j]>0 and con_img[i,j] not in l:
l.append(con_img[i,j])
if len(l)<1:
con_img[x,y]=n
n+=1
elif len(l)>1:
con_img[x,y] = min(l)
if min(l) in con_list:
con_list[min(l)].extend(l)
else:
con_list[min(l)]=l
else:
con_img[x,y] = min(l)
for key,value in con_list.items():
for val in value:
con_img[con_img==val] = key
return con_img
# -
swt_img = np.array([1,1,0,0,0,1,1,1,0,0,2,2,0,2,0,1,0,0,0,0,3,4,4,4,0,0,0,5,5,5,5,0,0,0,0,0]).reshape(6,6)
connected_components(swt_img)
swt_img
cv2.connectedComponents(np.uint8(swt_img),connectivity=4)[1]
# +
# finding letters
# def find_letters(con,swt):
# width = {}
# height = {}
# labels = np.unique(con[con>0])
# for label in labels:
# l = np.where(con==label)
# width[label],height[label] = max(l[0])-min(l[0]),max(l[1])-min(l[1])
# if len(con[con==label])<10:
# con[con==label]=255
# if width[label]<8 or height[label]<8:
# con[con==label]=255
# if width[label]/height[label]<0.1 or width[label]/height[label]>10:
# con[con==label]=255
# if width[label]/con.shape[1]>0.4 or height[label]/con.shape[0]>0.4:
# con[con==label]=255
# diameter = np.sqrt(width[label]**2 + height[label]**2)
# med_stroke = np.median(swt[l])
# if diameter/med_stroke>20:
# con[con==label]=255
# if width[label]<10 or height[label]>300:
# con[con==label]=255
# mu = np.mean(swt[l])
# mean_var_stroke = np.mean((swt[l]-mu)**2)
# if mean_var_stroke>20:
# con[con==label]=255
#     return con,width,height
# -
# image = '3.jpg'
# swt = stroke_width_transform(image)
# cc_img = connected_components(swt)
# +
# plt.imshow(swt)
# +
# cc_img = connected_components(swt)
# +
# plt.imshow(cc_img)
# +
# plt.imshow(find_letters(cc_img,swt)[0],'gray')
# +
# final_image,wid,hghts = find_letters(connected_components(swt),swt)
# plt.figure(figsize=[20,10])
# plt.subplot(1,3,1)
# plt.imshow(cv2.imread(image,0))
# plt.subplot(1,3,1)
# plt.imshow(swt)
# plt.subplot(1,3,2)
# plt.imshow(cc_img,'gray')
# +
# con_img = np.zeros(swt_img.shape)
# con_list={}
# rows = swt_img.shape[0]
# cols = swt_img.shape[1]
# n=1
# for x in range(rows):
# for y in range(cols):
# if swt_img[x,y]<np.infty and swt_img[x,y]>0:
# neigh = [(x,y-1),(x-1,y)]
# neighbours=[]
# for elem in neigh:
# if -1 not in elem:
# neighbours.append(elem)
# l=[]
# for i,j in neighbours:
# try:
# ratio = swt_img[x,y]/swt_img[i,j]
# except IndexError:
# continue
# if ratio<3.0 and 1/ratio<3.0:
# if con_img[i,j]>0 and con_img[i,j] not in l:
# l.append(con_img[i,j])
# if len(l)<1:
# con_img[x,y]=n
# n+=1
# elif len(l)>1:
# con_img[x,y] = min(l)
# if min(l) in con_list:
# con_list[min(l)].extend(l)
# else:
# con_list[min(l)]=l
# else:
# con_img[x,y] = min(l)
# for key,value in con_list.items():
# for val in value:
# con_img[con_img==val] = key
# -
#
#
| 7,243 |
/1. Web Scraping Basic (사전, 영화리뷰, 기사글).ipynb
|
c36b2249bdd655ed3a2c27df90ebd19a3afccb6b
|
[] |
no_license
|
jameshan54/likelion3
|
https://github.com/jameshan54/likelion3
| 2 | 0 | null | 2021-06-17T02:17:36 | 2021-06-17T02:11:39 | null |
Jupyter Notebook
| false | false |
.py
| 1,322,331 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <br>
#
# #### 1. Printing the search results for a word
# - Search the Daum dictionary (https://alldic.daum.net) for the word 'happiness' and print the word and its definitions from the results page
# !pip install beautifulsoup4==4.7.1
from bs4 import BeautifulSoup
from urllib.request import urlopen
# +
'''
pip install opencv-python
import cv2
pip install pillow
import PIL #Python Image Library
https://alldic.daum.net/search.do?q=happiness&age=22&new=True --> everything after the ? is the query string
'''
# +
# Enter the word you want to search for
word = 'happiness'
# Enter the URL to load
# Build the url variable by appending the word string to the base URL
url = 'https://alldic.daum.net/search.do?q=' + word
# Create the web variable with the urlopen function
web = urlopen(url) # urlopen(url).read().decode('utf-8')
# Parse the HTML structure of the web page with BeautifulSoup
web_page = BeautifulSoup(web, 'html.parser')
print(web_page)
# -
# #### Pros and cons of the different parsers
# 
#
# **What does "parsing" mean in scraping?**
# - Splitting sources written in HTML, XML, JavaScript, etc. into their individual elements
# - The tool that performs this parsing is called a parser.
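# For example, different parsers can repair the same slightly malformed snippet in different ways —
# a small sketch (the 'lxml' and 'html5lib' parsers only work if those packages are installed):
# +
from bs4 import BeautifulSoup

snippet = '<p>happiness<b>joy</p>'                    # note the unclosed <b> tag
print(BeautifulSoup(snippet, 'html.parser'))          # built-in parser, no extra install
# print(BeautifulSoup(snippet, 'lxml'))               # usually faster; needs: pip install lxml
# print(BeautifulSoup(snippet, 'html5lib'))           # most lenient; needs: pip install html5lib
# -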
# +
# The searched word (when there is a single match)
box1 = web_page.find('span', {'class': 'txt_emph1'}) # find: web_page.find('tag', {'attribute_name': 'attribute_value'})
print(box1.get_text())
print()
print(box1.attrs)
# print(box1.get_text()) # when you want to strip the tags and extract only the inner text
# +
# The word's definitions (when there are multiple matches)
box2 = web_page.find_all('span', {'class': 'txt_search'}) # find all matches
for element in box2:
print(element.get_text())
# +
# Print the word and its definitions
print(box1.get_text()) # get_ + Tab!
print()
for defintion in web_page.find_all('span', {'class': 'txt_search'}):
    print(defintion.get_text().strip()) # apply strip() to remove leading whitespace
# -
# <br>
# <br>
#
# #### 2. Printing movie information
# - Print the title and director of the movie Guardians of the Galaxy from the IMDb site
# 
from bs4 import BeautifulSoup
from urllib.request import urlopen
# +
# Enter the URL to load (IMDb - Guardians of the Galaxy Vol. 2 (2017))
url = 'http://www.imdb.com/title/tt3896198/?ref_=nv_sr_6'
# Create the web variable with the urlopen function
web = urlopen(url)
# Parse the HTML structure of the web page with BeautifulSoup
web_page = BeautifulSoup(web, 'html.parser')
# +
# Print the movie title
title = web_page.find('h1')
print('Movie Title:')
print(title.get_text())
# +
# Print the movie summary
summary = web_page.find('div', {'class': 'summary_text'}) # assumed selector for the 2017-era IMDb layout; may have changed since
print('Movie Summary:')
print(summary.get_text().strip())
# +
# Print the director's name (catch the outer box first, then the tag inside it!)
director = web_page.find('div', {'class': 'credit_summary_item'}).find('a')
print('Director:')
print(director.get_text().strip())
# -
# <br>
# <br>
#
# #### 3. Printing movie reviews and saving them to a file
# - Let's print the review text for Guardians of the Galaxy Vol. 2
# 
# 
from bs4 import BeautifulSoup
from urllib.request import urlopen
# +
# Enter the URL to load
url = 'http://www.imdb.com/title/tt3896198/reviews?ref_=tt_urv'
# Create the web variable with the urlopen function
web = urlopen(url)
# Parse the HTML structure of the web page with BeautifulSoup
source = BeautifulSoup(web, 'html.parser')
# +
# Print the review data and save it to a file
reviews = source.find_all('div', {'class': 'text show-more__control'}) # assumed selector for IMDb review text; may have changed
with open('moviereview.txt','w', encoding = "utf-8") as f:
for review in reviews:
print(review.get_text())
f.write(review.get_text())
# -
# <br>
# <br>
#
# #### 4. Printing and saving a newspaper article
# - Load an article from the Seattle Times, Seattle's leading newspaper, and save it to a file
# +
# Enter the URL to load
url = 'https://www.seattletimes.com/business/real-estate/zillows-zestimate-overvalued-a-washington-home-by-700-percent-in-a-case-of-algorithms-gone-wrong/'
# Create the web variable with the urlopen function
web = urlopen(url)
# Parse the HTML structure of the web page with BeautifulSoup
source = BeautifulSoup(web, 'html.parser')
source
# -
# 
article = source.find('div',{'id': 'article-content'})
for tag in article.find_all('p'):
print(tag.get_text())
# Load the article from the Seattle Times and save it to a file
with open('seattletimes.txt','w', encoding = 'utf-8') as f:
times = source.find('div',{'id': 'article-content'})
article = times.find_all('p')
for content in article:
print(content.get_text())
f.write(content.get_text() + '\n')
# <br>
# <br>
#
# #### (Extra) Printing and saving a Brunch article
# - Load an article published in the Brunch weekly and save it to a file
# +
# Enter the URL to load
url = 'https://brunch.co.kr/@imagineer/267'
# Create the web variable with the urlopen function
web = urlopen(url)
# Parse the HTML structure of the web page with BeautifulSoup
source = BeautifulSoup(web, 'html.parser')
# -
# 
all_text = source.find('div',{'class': 'wrap_body'})
article = all_text.find_all('p')
for p in article:
print(p.get_text())
# Load the article from Brunch and save it to a file
with open('brunch.txt','w',encoding = 'utf-8') as f:
all_text = source.find('div',{'class': 'wrap_body'})
article = all_text.find_all('p')
for content in article:
print(content.get_text())
f.write(content.get_text() + '\n')
for i in range(3):
text = 'Python' + str(i)
print(text)
# +
# Save multiple articles automatically
# Let's write code that automatically fetches and saves 10 of @imagineer's articles!
# Hint 1 : you can only add str to str!
# Hint 2 : what number do @imagineer's article URLs start from?
for i in range(10):
try:
url = 'https://brunch.co.kr/@imagineer/' + str(i)
web = urlopen(url)
source = BeautifulSoup(web, 'html.parser')
with open('brunch_all.txt', 'a', encoding = 'utf-8') as f:
all_text = source.find('div',{'class': 'wrap_body'})
article = all_text.find_all('p')
for content in article:
print(content.get_text())
f.write(content.get_text() + '\n')
except:
continue
# (Additional) Exception handling (try applying try & except / pass)
# -
act line number!
#
# We can also show the whole tablet.
#
# It is a bit of a puzzle to spot the `1(N24'')`.
# In the notebook on [search](search.ipynb) we'll show how you can highlight things on a tablet.
tabletDouble = L.u(primes[0], otype="tablet")[0]
A.pretty(tabletDouble, standardFeatures=True)
# The `L.u()` function takes a node as starting point and looks up all nodes that embed it.
# You can restrict those to nodes of a certain type, as we did by `otype='case'`.
# It yields a tuple of nodes, so if you want a single embedder, you have to select one,
# as we did by `[0]`.
# Earlier we collected all *quads* (composite signs).
# Let us look up info for them.
#
# The least technical way is ... a one-liner!
for q in quads[0:10]:
A.pretty(q)
# We can also assemble custom information.
#
# For each such quad we assemble the following pieces of information:
#
# * the P-number of the tablet
# * the transcription line number
# * a representation of the quad
# * the list of signs of which the quad is composed.
for q in quads[0:10]:
cl = A.lineFromNode(q)
(pNum, colNum, caseNum) = A.caseFromNode(cl)
lineNum = F.srcLnNum.v(cl)
qRep = A.atfFromQuad(q)
signs = L.d(q, otype="sign")
signReps = " , ".join([A.atfFromSign(s) for s in signs])
print(f"{lineNum:>5} {pNum} {caseNum:<5} {qRep:<15} with {signReps}")
# Admittedly, this was a bit advanced. We used things we haven't explained yet.
#
# * `A.lineFromNode()`: if your node is something that fits in a single transcription line (
# (a sign or quad or cluster), it will give you the node that corresponds to that
# transcription line (a terminal case or terminal line);
# * `A.caseFromNode()`: gives you the section heading for the node you pass it,
#   with case numbers instead of line numbers
#   (exactly the opposite of `A.nodeFromCase()`);
# * likewise, `T.sectionFromNode()` is opposite to `T.nodeFromSection()`.
# * we have functions to generate ATF transliterations for nodes, especially for
# quads and signs:
# * `A.atfFromQuad(n)` gives you the transliteration of the
# *quad* identified by node (barcode) `n`;
# * `A.atfFromSign(n)` likewise for *sign*s.
# With our mastery of starting points and navigation,
# we really do not have to see the actual node numbers (barcodes) anymore.
#
# We'll see less and less of them, but they are the invisible glue that
# holds the whole corpus together.
# # See also
#
# [jumps](jumps.ipynb)
#
# Because there are more ways to travel ...
#
# # Next
#
# [search](search.ipynb)
#
# *Don't get lost ...*
#
# All chapters:
# [start](start.ipynb)
# [imagery](imagery.ipynb)
# **steps**
# [search](search.ipynb)
# [calc](calc.ipynb)
# [signs](signs.ipynb)
# [quads](quads.ipynb)
# [jumps](jumps.ipynb)
# [cases](cases.ipynb)
#
# ---
#
# CC-BY Dirk Roorda
| 8,664 |
/AlexNet_model_9.ipynb
|
a3488404cb87338dc043db566458d7708c92d5ab
|
[] |
no_license
|
CHIANGEL/mini-project
|
https://github.com/CHIANGEL/mini-project
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 552,071 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 1. https://python-forum.io/Thread-converting-a-number-from-bytes-to-float-and-back
def inc(num):
return type(num)(float(num)+1)
# +
from functools import wraps
def retain_type(f):
@wraps(f)
def wrapper(inp):
res = f(inp)
if type(res) == type(inp):
return res
elif isinstance(res, str):
return res.encode('utf-8')
elif isinstance(res, bytes):
return res.decode('utf-8')
else:
            raise TypeError("Output type should be either bytes or str.")
return wrapper
# -
@retain_type
def inc(num):
if isinstance(num, bytes):
num = num.decode('utf-8')
return type(num)(float(num)+1)
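# A quick check of the decorated function (a sketch): the wrapper converts the result back to the
# input's type, so bytes in -> bytes out, str in -> str out, and plain numbers pass through unchanged.
print(inc(5))     # 6       (int in, int out)
print(inc('5'))   # '6.0'   (str in, str out)
print(inc(b'5'))  # b'6.0'  (bytes in: decoded, incremented, then re-encoded)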
import torch.nn as nn
NUM_CLASSES = 10
class AlexNet_model_9(nn.Module):
def __init__(self, num_classes=NUM_CLASSES):
super(AlexNet_model_9, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2),
nn.BatchNorm2d(64),
nn.Conv2d(64, 192, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2),
nn.BatchNorm2d(192),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2),
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(256 * 2 * 2, 4096),
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, num_classes),
)
self.train_loss_history = []
self.test_loss_history = []
self.train_acc_history = [0]
self.test_acc_history = [0]
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), 256 * 2 * 2)
x = self.classifier(x)
return x
# +
from solver import *
import torch.optim as optim
import torch.utils.data
import torch.backends.cudnn as cudnn
import torchvision
from torchvision import transforms as transforms
import numpy as np
# from models import *
from misc import progress_bar
CLASSES = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
class Solver(object):
def __init__(self, model, lr, trainBatchSize, testBatchSize, cuda, save_path):
self.model = model
self.lr = lr
self.train_batch_size = trainBatchSize
self.test_batch_size = testBatchSize
self.criterion = None
self.optimizer = None
self.scheduler = None
self.device = None
self.cuda = cuda
self.train_loader = None
self.test_loader = None
self.save_path = save_path
def load_data(self):
cifar_norm_mean = (0.49139968, 0.48215827, 0.44653124)
cifar_norm_std = (0.24703233, 0.24348505, 0.26158768)
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(cifar_norm_mean, cifar_norm_std),
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(cifar_norm_mean, cifar_norm_std),
])
train_set = torchvision.datasets.CIFAR10(root='./dataset', train=True, download=True, transform=transform_train)
self.train_loader = torch.utils.data.DataLoader(dataset=train_set, batch_size=self.train_batch_size, shuffle=True)
test_set = torchvision.datasets.CIFAR10(root='./dataset', train=False, download=True, transform=transform_test)
self.test_loader = torch.utils.data.DataLoader(dataset=test_set, batch_size=self.test_batch_size, shuffle=False)
def load_model(self):
if self.cuda:
self.device = torch.device('cuda')
cudnn.benchmark = True
else:
self.device = torch.device('cpu')
self.model = self.model.to(self.device)
self.optimizer = optim.SGD(self.model.parameters(), lr=self.lr, momentum=0.99, nesterov=True)
self.scheduler = optim.lr_scheduler.MultiStepLR(self.optimizer, milestones=[75, 150], gamma=0.5)
self.criterion = nn.CrossEntropyLoss().to(self.device)
def train(self):
print("train:")
self.model.train()
train_loss = 0
train_correct = 0
total = 0
for batch_num, (data, target) in enumerate(self.train_loader):
data, target = data.to(self.device), target.to(self.device)
self.optimizer.zero_grad()
output = self.model(data)
loss = self.criterion(output, target)
loss.backward()
self.optimizer.step()
train_loss += loss.item()
prediction = torch.max(output, 1) # second param "1" represents the dimension to be reduced
total += target.size(0)
# train_correct incremented by one if predicted right
train_correct += np.sum(prediction[1].cpu().numpy() == target.cpu().numpy())
progress_bar(batch_num, len(self.train_loader), 'Loss: %.4f | Acc: %.3f%% (%d/%d)'
% (train_loss / (batch_num + 1), 100. * train_correct / total, train_correct, total))
return train_loss, train_correct / total
def test(self):
print("test:")
self.model.eval()
test_loss = 0
test_correct = 0
total = 0
with torch.no_grad():
for batch_num, (data, target) in enumerate(self.test_loader):
data, target = data.to(self.device), target.to(self.device)
output = self.model(data)
loss = self.criterion(output, target)
test_loss += loss.item()
prediction = torch.max(output, 1)
total += target.size(0)
test_correct += np.sum(prediction[1].cpu().numpy() == target.cpu().numpy())
progress_bar(batch_num, len(self.test_loader), 'Loss: %.4f | Acc: %.3f%% (%d/%d)'
% (test_loss / (batch_num + 1), 100. * test_correct / total, test_correct, total))
return test_loss, test_correct / total
def save(self):
torch.save(self.model, self.save_path)
print("Checkpoint saved to {}".format(self.save_path))
def run(self, to_epoch):
self.load_data()
self.load_model()
accuracy = self.model.test_acc_history[-1]
from_epoch = len(self.model.train_loss_history) + 1
for epoch in range(from_epoch, to_epoch + 1):
self.scheduler.step(epoch)
print("\n===> epoch: %d/%d" % (epoch, to_epoch))
train_result = self.train()
print(train_result)
test_result = self.test()
accuracy = max(accuracy, test_result[1])
self.model.train_loss_history.append(train_result[0])
self.model.test_loss_history.append(test_result[0])
self.model.train_acc_history.append(train_result[1])
self.model.test_acc_history.append(test_result[1])
self.save()
if epoch == to_epoch:
print("===> BEST ACC. PERFORMANCE: %.3f%%" % (accuracy * 100))
# -
solver_model_9 = Solver(AlexNet_model_9(), 1e-3, 64, 64, torch.cuda.is_available(), \
'saved_models/model_9.pt')
solver_model_9.run(50)
solver_model_9 = Solver(torch.load('saved_models/model_9.pt'), 5e-4, 64, 64, torch.cuda.is_available(), \
'saved_models/model_9.pt')
solver_model_9.run(50)
solver_model_9 = Solver(torch.load('saved_models/model_9.pt'), 2.5e-4, 64, 64, torch.cuda.is_available(), \
'saved_models/model_9.pt')
solver_model_9.run(50)
solver_model_9 = Solver(torch.load('saved_models/model_9.pt'), 1.25e-4, 64, 64, torch.cuda.is_available(), \
'saved_models/model_9.pt')
solver_model_9.run(70)
solver_model_9 = Solver(torch.load('saved_models/model_9.pt'), 6.25e-5, 64, 64, torch.cuda.is_available(), \
'saved_models/model_9.pt')
solver_model_9.run(70)
solver_model_9.run(63)
)
age_normalized_tot
ax8 = plt.subplot()
age_normalized_tot.plot(kind='barh', legend=False, ax=ax8, color='r', edgecolor='k', linewidth=3)
ax8.set_title('Normalized Purchase Value by Age')
ax8.set_xlabel("Normalized Purchase Value (Dollars)")
# # Top 5 Spenders:
# +
top_spenders = purchase_data_df.groupby('SN').sum().sort_values('Price', ascending=False)
top_spenders = top_spenders.iloc[0:5,2].to_frame().rename(index=str, columns={'Price': 'Total Purchase Value'})
# -
top_spenders_index = list(top_spenders.index)
top_spenders_index
top_purchase_data_df = purchase_data_df.set_index('SN')
top_purchase_data_df = top_purchase_data_df.loc[top_spenders_index,:]
top_purchase_data_df = top_purchase_data_df.reset_index()
top_purchase_groupby = top_purchase_data_df.groupby('SN')
# ## Number of Items Purchased:
top_purchase_count =top_purchase_groupby.count()
top_purchase_item_count = top_purchase_count['Item ID'].to_frame()
top_purchase_item_count.rename(index=str, columns={'Item ID': 'Items Purchased'}, inplace=True)
top_purchase_item_count
# ## Average Price of Items Purchased:
top_purchase_avg =top_purchase_groupby.mean()
top_purchase_avg_price = top_purchase_avg['Price'].to_frame()
top_purchase_avg_price.rename(index=str, columns={'Price': 'Average Price Purchased'}, inplace=True)
top_purchase_avg_price
# ## Total Purchase Value of Top Spenders:
top_purchase_sum =top_purchase_groupby.sum()
top_purchase_tot_price = top_purchase_sum['Price'].to_frame()
top_purchase_tot_price.rename(index=str, columns={'Price': 'Total Purchased Value'}, inplace=True)
top_purchase_tot_price
top_spenders_df = top_purchase_tot_price.reset_index().merge(top_purchase_avg_price.reset_index()).merge(top_purchase_item_count.reset_index())
# ## Summary Table:
top_spenders_df = top_spenders_df.sort_values('Total Purchased Value', ascending=False)
top_spenders_df
plt_top_spenders = top_spenders_df.set_index('SN')
# ## Value Plots:
ax9 = plt.subplot()
plt_top_spenders['Items Purchased'].plot(kind='barh', ax=ax9, color='r', edgecolor='k', linewidth=3)
ax9.set_title('Number of Items Purchased by the 5 Top Spenders')
ax9.set_xlabel('Item Count')
ax9.set_ylabel('Screen Name')
ax10 = plt.subplot()
plt_top_spenders['Average Price Purchased'].plot(kind='barh', ax=ax10, color='r', edgecolor='k', linewidth=3)
ax10.set_title('Average Price of Purchased Items for Top 5 Spenders')
ax10.set_xlabel('Average Price of Items (Dollars)')
ax10.set_ylabel('Screen Name')
ax11 = plt.subplot()
plt_top_spenders['Total Purchased Value'].plot(kind='barh', ax=ax11, color='r', edgecolor='k', linewidth=3)
ax11.set_title('Total Purchased Value of Items for Top 5 Spenders')
ax11.set_xlabel('Total Purchased Value (Dollars)')
ax11.set_ylabel('Screen Name')
# # Most Popular Items:
#
# 1. ID and Name of top 5 Items
# 2. Item Price
# 3. number of Items sold
# 4. Total Purchased Value
pop_products = purchase_data_df['Item ID'].value_counts()
pop_products = pop_products.iloc[0:5].to_frame()
pop_products_index = pop_products.index
purchase_data_ID = purchase_data_df.set_index('Item ID')
top_pop = purchase_data_ID.loc[pop_products_index,:]
top_name = top_pop['Item Name'].unique()
top_price = top_pop['Price'].unique()
top_products = pd.DataFrame({'Item ID': pop_products_index, 'Item Name': top_name, 'Price': top_price})
pop_products = pop_products.reset_index().rename(index=str, columns={'index': 'Item ID', 'Item ID': 'Item Count'})
top_products = top_products.merge(pop_products)
# ## Summary Chart:
top_pop_group_sum = top_pop.groupby('Item Name').sum()
top_pop_tot_price = top_pop_group_sum['Price'].reset_index().rename(index=str, columns={'Price': 'Total Purchase Value'})
top_products = top_products.merge(top_pop_tot_price)
top_products
# ## Plots of Values:
ax12 = plt.subplot()
top_products.plot(x='Item Name', y='Item Count', kind='barh', ax=ax12, legend=False, color='r', edgecolor='k', linewidth=3)
ax12.set_xlabel('Item Count')
ax12.set_title('Items Sold for 5 Most Popular Items')
ax13 = plt.subplot()
top_products.plot(x='Item Name', y='Price', kind='barh', ax=ax13, legend=False, color='r', edgecolor='k', linewidth=3)
ax13.set_xlabel('Price (Dollars)')
ax13.set_title('Price of 5 Most Popular Items')
ax14 = plt.subplot()
top_products.plot(x='Item Name', y='Total Purchase Value', kind='barh', ax=ax14, legend=False, color='r', edgecolor='k', linewidth=3)
ax14.set_xlabel('Total Purchase Value (Dollars)')
ax14.set_title('Total Purchase Value of 5 most Popular Items')
# # Most Profitable Items:
# ## Total Purchase Value of 5 most Profitable Items:
items_df = purchase_data_df.assign()
items_df_tot_value = items_df.groupby('Item ID').sum()
top_items = items_df_tot_value.sort_values('Price',ascending=False).iloc[0:5,1].to_frame()
top_index =top_items.index
top_items.reset_index(inplace=True)
top_items.rename(index=str, columns={'Price': 'Total Purchase Value'}, inplace=True)
top_items
# ## Price of 5 Most Profitable Items:
top_items_df = items_df.set_index('Item ID').loc[top_index,:]
top_items_price = top_items_df['Price'].unique()
top_items_price_df = pd.DataFrame({'Item ID': top_index, 'Price': top_items_price})
top_items_price_df
# ## Number of Items Sold for 5 Most Profitable Items:
top_items_count = top_items_df.reset_index().groupby('Item ID').count()
top_items_count_df = top_items_count['SN'].to_frame().rename(index=str, columns={'SN': 'Item Count'})
top_items_count_df.reset_index(inplace=True)
top_items_count_df = top_items_count_df.astype('int64')
top_items_count_df
# ## Item Name and ID
top_items_df = items_df.set_index('Item ID').loc[top_index,:]
top_items_name = top_items_df['Item Name'].unique()
top_items_name_df = pd.DataFrame({'Item ID': top_index, 'Item Name': top_items_name})
top_items_name_df
# ## Summary Table
most_profitable_items = top_items.merge(top_items_price_df)
most_profitable_items = most_profitable_items.merge(top_items_count_df, on='Item ID')
most_profitable_items = most_profitable_items.merge(top_items_name_df, on='Item ID')
most_profitable_items
# ## Plots of Values for 5 Most Profitable Items:
ax14 = plt.subplot()
most_profitable_items.plot(x='Item Name', y='Total Purchase Value', kind='barh', ax=ax14, legend=False, color='r', edgecolor='k', linewidth=3)
ax14.set_xlabel('Total Purchase Value (Dollars)')
ax14.set_title('Total Purchase Value of 5 most Profitable Items')
ax14 = plt.subplot()
most_profitable_items.plot(x='Item Name', y='Price', kind='barh', ax=ax14, legend=False, color='r', edgecolor='k', linewidth=3)
ax14.set_xlabel('Purchase Price (Dollars)')
ax14.set_title('Purchase Price of 5 most Profitable Items')
ax14 = plt.subplot()
most_profitable_items.plot(x='Item Name', y='Item Count', kind='barh', ax=ax14, legend=False, color='r', edgecolor='k', linewidth=3)
ax14.set_xlabel('Purchase Count')
ax14.set_title('Number of Purchases of 5 most Profitable Items')
| 15,673 |
/notebooks/2.0 - accidents cleaning.ipynb
|
aab680b6964e3575ed7dc6573a49c324c7fc76b6
|
[
"MIT"
] |
permissive
|
nymarya/corvus
|
https://github.com/nymarya/corvus
| 1 | 0 |
MIT
| 2023-02-11T01:13:43 | 2022-11-08T18:14:39 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 114,305 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="5d5LBt7cyDtJ"
from prf_api.prf_api import PRFApi
import pandas as pd
import re
# -
data = PRFApi()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 70768, "status": "ok", "timestamp": 1569201784533, "user": {"displayName": "MAYRA DANTAS", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mC1Nk3Cdh9E0tWxPoHxnjsDZc7rkChaMFPbhtGo=s64", "userId": "09389505834223807513"}, "user_tz": 180} id="8koQXzgEyVm4" outputId="edcc2497-67df-4a79-849e-c99022305e52"
accidents_df = data.dataframe('acidentes_ocorrencia', estado='RN', caminho='../data/raw',
anos=list(range(2007,2019)))
# + colab={} colab_type="code" id="OzWYPv2Pk5f1"
accidents_df.info()
# + colab={} colab_type="code" id="sAEEDcm4i54W"
accidents_df.tail()
# + [markdown] colab_type="text" id="22b76QizjJbZ"
# ## Cleaning the data
#
# The first step is to normalize the format of the date.
# + colab={"base_uri": "https://localhost:8080/", "height": 221} colab_type="code" executionInfo={"elapsed": 963, "status": "ok", "timestamp": 1569201950006, "user": {"displayName": "MAYRA DANTAS", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mC1Nk3Cdh9E0tWxPoHxnjsDZc7rkChaMFPbhtGo=s64", "userId": "09389505834223807513"}, "user_tz": 180} id="T444XHGYjQXX" outputId="851bc80b-69c7-4639-d624-70788adb363f"
repl = lambda x: x.group(0)[-4:] + '-' + x.group(0)[-7:-5] + '-'+ x.group(0)[:2]
accidents_df['data'] = accidents_df.data_inversa.str.replace("[0-9]{2}/[0-9]{2}/[0-9]{4}",
repl, regex=True)
# Check out the result
print(accidents_df['data'].head())
print(accidents_df['data'].tail())
# -
# Then, we drop the missing data in `km` and `br` columns.
# + colab={} colab_type="code" id="ZxOueBoRgr4-"
accidents_df.dropna(subset=['km', 'br'], inplace=True)
# -
# Finally, the column `ano` (year) is correctly filled
# +
accidents_df['ano'] = accidents_df.data.str.split('-').str.get(0)
accidents_df.ano.head()
# + [markdown] colab_type="text" id="j8khx4n4Vcqt"
# ## Merging the datasets
#
# In this section, we will aggregate the info about driving violations
# + colab={} colab_type="code" id="Q7WdMO1aJ2vt"
# Initialize the new column
accidents_df['infracoes'] = 0
# + colab={"base_uri": "https://localhost:8080/", "height": 442} colab_type="code" executionInfo={"elapsed": 976, "status": "error", "timestamp": 1569201951132, "user": {"displayName": "MAYRA DANTAS", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mC1Nk3Cdh9E0tWxPoHxnjsDZc7rkChaMFPbhtGo=s64", "userId": "09389505834223807513"}, "user_tz": 180} id="cTeqFKd2YGdF" outputId="e60bab7e-9a5c-431e-ff92-79af2b262d54"
# Recover data
violations_df = pd.read_csv('../data/processed/violations_count.csv', sep=';',
names=['data','km','br', 'contagem'])
# Check out the data
violations_df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 163} colab_type="code" executionInfo={"elapsed": 722, "status": "error", "timestamp": 1569201951133, "user": {"displayName": "MAYRA DANTAS", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mC1Nk3Cdh9E0tWxPoHxnjsDZc7rkChaMFPbhtGo=s64", "userId": "09389505834223807513"}, "user_tz": 180} id="O_JtGFtQa3ab" outputId="1f0345f9-b5c9-4892-9a60-105d550f6dee"
violations_df.info()
# + [markdown] colab_type="text" id="GwJ7h_sfYwjo"
# Now it's possible to update the number of driving violations based on the date of the violation and the point and road.
# + colab={"base_uri": "https://localhost:8080/", "height": 340} colab_type="code" executionInfo={"elapsed": 549, "status": "error", "timestamp": 1569201951472, "user": {"displayName": "MAYRA DANTAS", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mC1Nk3Cdh9E0tWxPoHxnjsDZc7rkChaMFPbhtGo=s64", "userId": "09389505834223807513"}, "user_tz": 180} id="tmd22mcEYwDo" outputId="8ff3e633-958c-4ec4-ee4c-9cfea1fc7cc1"
def filter_violations(row):
data_value = row['data']
km_value = float(str(row['km']).replace(',', '.'))
br_value = float(row['br'])
    # Recover the violations at the same point and date
violation = violations_df.query("data == '{}' and km == {} and br == {}".format(
data_value,km_value, br_value))
return len(violation['contagem']) if violation['contagem'].sum() else 0
accidents_df['infracoes'] = accidents_df.apply(filter_violations, axis='columns')
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" executionInfo={"elapsed": 472, "status": "ok", "timestamp": 1569201951771, "user": {"displayName": "MAYRA DANTAS", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mC1Nk3Cdh9E0tWxPoHxnjsDZc7rkChaMFPbhtGo=s64", "userId": "09389505834223807513"}, "user_tz": 180} id="KAP3NRXZaTIM" outputId="bd615f9e-6a4b-4129-9bbd-be443a27fdd8"
accidents_df.query('infracoes > 0')
# + colab={} colab_type="code" id="yL-mLifCqO1j"
accidents_df.to_csv('../data/processed/accidents.csv', sep=';')
| 5,322 |
/WIP_dirs/Justin/clean_data.ipynb
|
8d865c916b9c486c0605262e09c71229a2963c19
|
[] |
no_license
|
tslindner/Script-Text-Classification-ML
|
https://github.com/tslindner/Script-Text-Classification-ML
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 12,951 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import re
#df = pd.read_csv("output.csv")
df = pd.read_csv("output_tv.csv") # new data with all episodes
df = df.drop(["Unnamed: 0"], axis=1)
df.count()
df.head()
# +
#Stackoverflow solution to remove data within () [].
'''
import re
x = "This is a sentence. (once a day) [twice a day]"
x = re.sub("[\(\[].*?[\)\]]", "", x)
'''
quotes3 = []
for x in df["Quotes"]:
x = re.sub("[\(\[].*?[\)\]]", "", str(x))
quotes3.append(str(x))
# -
len(quotes3)
df["Remove Data"] = quotes3
df.head(1)
#Removes punctuation and creates new column
df["Quotes w/o Pun."] = df["Remove Data"].str.replace('[^\w\s]','')
df.head()
#drop/rename columns to original state
new_df = df.drop(columns= ["Quotes","Remove Data"])
new_df.columns = ["Characters", "Quotes"]
new_df.head()
#convert to csv
new_df.to_csv("clean_output_tv.csv")
| 1,116 |
/single_cell_lineages/03_Visualising_Single_Cell_Traits.ipynb
|
7fcc89cc852842d74bc9a1c263552bbdf2250a62
|
[
"MIT"
] |
permissive
|
rikuturkki/DeepTree
|
https://github.com/rikuturkki/DeepTree
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,560,008 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="support_files/cropped-SummerWorkshop_Header.png">
#
# <h1 align="center">Python Bootcamp</h1>
# <h3 align="center">August 20-21, 2016</h3>
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
# <center><h1>Introduction to Matplotlib</h1></center>
#
# <p>
# **`matplotlib`** is a plotting library for Python.
# <p>
# **Pros:**
# <ul>
# <li>Huge amount of functionality/options.
# <li>Works with numpy arrays and python lists.
# <li>Comes with many prepackaged Python distros (anaconda, WinPython, etc.).
# <li>Easily saves plots to image (.png, .bmp, etc.) and vector (.svg, .pdf, etc.) formats.
# <li>Has an excellent set of examples (with code) at http://matplotlib.org/gallery.
# <li>Shares many syntactic conventions with Matlab.
# </ul>
#
# <p>
# **Cons:**
# <ul>
# <li>Slow for rapidly updating plots.
# <li>3D plotting support is not great.
# <li>Documentation is not always useful.
# <li>Essentially has two interfaces. One is intended to be close to Matlab, the other is object oriented. You will find examples that assume one or the other, but rarely the one you are after.
# <li>Shares many syntactic conventions with Matlab.
# </ul>
# </div>
# Import numpy and pyplot
import matplotlib.pyplot as plt
import numpy as np
from __future__ import print_function
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
# <left><h1>Enable Inline Plotting</h1></left>
# <p>One of the great features of the Jupyter Notebook is the ability to have your code, outputs, and graphics in a single document. But plots do not render in the notebook by default. To turn on inline plot rendering, we have to use a so-called ```Magic Command```, which is a special Jupyter command preceded by a %.
#
# <p>Two commonly used Matplotlib magic commands are:
# <ul>
# <li>```%matplotlib notebook``` - creates interactive plots. This is what we're going to use today. Interactive features depend on the kernel running in the background, which means they disappear without the kernel. This feature is somewhat new, so still has occasional bugs.
# <li>```%matplotlib inline``` - creates static (non-interactive) plots. This still remains the most common way to generate plots.
# </ul>
# <p>It's important to note that the above commands are specific to Jupyter. In other environments, you'll need to add a line of code to explicitly display your plots, or save them to disk. This will be discussed further when we cover other environments.
#
# </div>
# %matplotlib notebook
# # %matplotlib inline
# Feel free to replace the command above and see how the behavior of the notebook changes
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
# <h2>Example 1. Simple plot</h2>
#
# <p> start by making some sample data
# </div>
#
#
x = np.arange(0, 10, 0.01) #make evenly spaced points between 0 and 10 at intervals of 0.01
y = np.sin(2*np.pi*x)*np.exp(-0.5*x) #some function x
print("first five elements of x:",x[:5])
print("first five elements of y:",y[:5])
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
#
# <p> Create a figure and axis, the plot our data on the axis
# </div>
# +
fig,ax = plt.subplots() #subplots will make a single axis inside a new figure by default
ax.plot(x, y, color='red', linewidth=2)
# The figure can be saved by uncommenting the line below
# All major image formats (as well as PDFs) are accepted formats.
# fig.savefig('/testfig.png')
# -
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
# <h2>Example 2. Subplots</h2>
#
# <p>A given figure can have more than one axis. The ```subplot``` command, which we used above, generates a single axis by default. But we can specify the number of axes that we want.
#
# </div>
# +
#make a time array
t = np.arange(0, 10, 0.1)
# Create figure and axes objects. Make them share the x-axis
fig, axes = plt.subplots(nrows=2, ncols=1, sharex=True)
# Here, `axes` is a numpy array with two axes subplot objects
print('axes type: ' + str(type(axes)))
print('axes shape:', np.shape(axes))
print('axes object:\n' + str(axes))
# Plot on each subplot by indexing into 'ax'
axes[0].plot(t, np.sin(t), label='sin')
axes[1].plot(t, np.cos(t), label='cos', color='red')
# We can loop over the `axes` array to set properties in every subplot (no matter how many)
for ax in axes:
ax.legend(loc='best')
ax.set_ylabel('Amplitude',fontsize=14)
# We can also access individual axes to set the properties
axes[1].set_xlabel('Time',fontsize=14,weight='bold',style='italic')
# We can also edit attributes of the entire figure, such as the title
fig.suptitle('This is the figure title',fontsize=18);
# -
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
#
# Note that the x-axes are tied together in interactive mode due to the 'sharex = True' flag. Try setting that to False and regenerating
#
# </div>
# <div style="background: #DFF0D8; border-radius: 3px; padding: 10px;">
# <p>**Exercise 5.1:**
#
# <ol>
# <li>Remake the above plot with 4 subplots (2 rows, 2 columns; hint: you'll now have to index into axes like **`axes[0, 0]`**).
# <li>Create a loop over all the axes objects (hint: use **`axes.flatten()`**) so that the **`legend`** and **`set_ylabel`** functions are called for all subplots.
# <li>Use the loop from #2 to add a title to only the top row of plots using the **`set_title`** function.
# <li>Look at the documentation for the fig.tight_layout() command to optimize figure layout
# * Note that this doesn't play nice with the figure suptitle. try using the 'plt.subplots_adjust(top=0.92)' command to control the whitespace at the top of the plot.
# </ol>
#
# </div>
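# One possible solution sketch for Exercise 5.1 (not the only way to do it):
# +
t = np.arange(0, 10, 0.1)
fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, figsize=(8, 6))

curves = {'sin': np.sin(t), 'cos': np.cos(t),
          'damped sin': np.sin(t) * np.exp(-0.3 * t),
          'damped cos': np.cos(t) * np.exp(-0.3 * t)}

# Loop over every subplot to plot a curve, add a legend and label the y-axis
for ax, (name, y) in zip(axes.flatten(), curves.items()):
    ax.plot(t, y, label=name)
    ax.legend(loc='best')
    ax.set_ylabel('Amplitude')

# Titles on the top row only
for ax in axes[0, :]:
    ax.set_title('Top-row subplot')

fig.suptitle('Exercise 5.1', fontsize=16)
fig.tight_layout()
plt.subplots_adjust(top=0.92)
# -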
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
# <h2>Example 3. Plotting histograms </h2>
# <p> Use the ```hist``` command.
# </div>
# +
# Create gaussian distributed data with mu=10, sigma=3
x = 10 + 3 * np.random.randn(1000)
# Create figure and axes object
fig, ax = plt.subplots(1,1,figsize=(10,6))
# Create histogram
bins=25
ax.hist(x, bins=bins, label='Counts')
# Set other properties
ax.set_ylabel('# Unicorns Earned', fontsize=14)
ax.set_xlabel('Karma Points', fontsize=14)
ax.legend(loc='upper right')
# -
# <div style="background: #DFF0D8; border-radius: 3px; padding: 10px;">
# <p>**Exercise 5.2:**
#
# <p>Bin edges can be explicitly defined.
# <p> For example, you can use linspace to define bin edges:
# <p> ```bins = np.linspace(5,25,num=50,endpoint=True)```
#
# <p>Remake the histogram above but explicitly define the bin edges rather than the bin size.
#
# </div>
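# A possible sketch for Exercise 5.2, passing explicit bin edges instead of a bin count:
# +
bin_edges = np.linspace(5, 25, num=50, endpoint=True)
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
ax.hist(x, bins=bin_edges, label='Counts')
ax.set_xlabel('Karma Points', fontsize=14)
ax.legend(loc='upper right')
# -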
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
# <h2>Example 4 - Plotting Images</h2>
# <p> The ```imshow``` and ```matshow``` functions are useful for displaying matrices
# <p> Let's first grab an image that was saved on your hard drive using Matplotlib's ```image.mpimg``` function. This will turn the bitmapped image file into a Numpy array with dimensions ```HEIGHT x WIDTH x COLORS```
# </div>
# +
import matplotlib.image as mpimg
img=mpimg.imread('support_files/stinkbug.png')
print('the shape of img is: '+str(np.shape(img)))
# -
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
#
# <p> Now create a new figure and axis, then use ```imshow``` to display the data
# </div>
fig,ax=plt.subplots()
imgplot = ax.imshow(img)
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
# <p> Note that, despite this having data in three color channels, it renders as a grayscale image. Why is that?
# <p> Let's look at all three color values for a given point on the image:
# </div>
img[200,200,:]
# <div style="background: #DFF0D8; border-radius: 3px; padding: 10px;">
# <p>**Exercise 5.3:**
#
# <p>Plot a single color channel
#
# <ol>
# <li>Use Numpy slicing commands that you learned in the previous module to extract all pixels for one of the three color channels
# <li>Repeat the imshow command for that single color channel
# <li>Note that, in the absence of color data, Matplotlib applies the 'jet' colormap by default. Try adding the following keyword argument to your imshow command:
# <p> ```cmap = 'gray'```
# </ol>
#
# </div>
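# A possible sketch for Exercise 5.3: slice out a single colour channel and display it in grayscale.
# +
red_channel = img[:, :, 0]   # channel 0 of the HEIGHT x WIDTH x COLORS array
fig, ax = plt.subplots()
ax.imshow(red_channel, cmap='gray')
# -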
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
# <p> Now let's use the same functions to display some mathemically defined data
# </div>
# +
# Create some datapoints
x = np.linspace(0,10,num=500)
y = np.linspace(0,1,num=500)
#create a meshgrid (evenly spaced 2d array on each axis)
XX, YY = np.meshgrid(x, y)
## make ZZ a function of XX and YY
ZZ = np.sin(0.2*np.pi*XX**2) * YY
# Create figure and axes object
fig, ax = plt.subplots()
# Plot colormap and add colorbar scale
image_plot = ax.imshow(ZZ,cmap='coolwarm', extent=[0,10,1,0],aspect="auto")
# image_plot = ax.matshow(ZZ,cmap='coolwarm',origin="lower")
#make a colorbar
cbar = plt.colorbar(image_plot)
#set the colorbar's label properties, including an example of using LaTex code
cbar.set_label('$\sin(0.2 \pi x^2)*y$',fontsize=20,rotation=90)
ax.set_xlabel('x',fontsize=16)
ax.set_ylabel('y',fontsize=16)
print('the shape of z is: '+str(np.shape(ZZ)))
# -
# <div style="background: #DFF0D8; border-radius: 3px; padding: 10px;">
# <p>**Exercise 5.4:**
# <ol>
# <li>Try experimenting with some different colormaps. Here's a full list: http://matplotlib.org/examples/color/colormaps_reference.html
# <li>Try reducing the number of points on the x and y axes by changing the 'num' argument in the linspace command. What happens for low values (<= 100 points)?
# <li>Note that the imshow command turns on interpolation by default to reduce pixelation in images. When displaying quantitive data, this is generally not desireable. Here's a full list of interpolation methods: http://matplotlib.org/examples/images_contours_and_fields/interpolation_methods.html. Try setting interpolation to 'none'
# <li>Look up the documentation for 'matshow'. Try using it instead of 'imshow'.
# <li>Notice the ```extent``` and ```aspect``` keywords in the ```imshow``` function. Try removing or editing them to see what happens
# <li>Note that, in both imshow and matshow, the origin is in the upper left hand corner. Try using the following argument:
# <p> ```origin="lower"``` (but notice how this interacts with the ```extent``` keyword)
#
# </ol>
# </div>
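# A few of the variations suggested in Exercise 5.4, combined into one sketch:
# +
fig, ax = plt.subplots()
image_plot = ax.imshow(ZZ, cmap='viridis', interpolation='none',
                       origin='lower', extent=[0, 10, 0, 1], aspect='auto')
plt.colorbar(image_plot)
ax.set_xlabel('x', fontsize=16)
ax.set_ylabel('y', fontsize=16)
# -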
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
# <h2> Example 5 - Error bars </h2>
# <p>The ```errorbar``` function can be used to generate line plots with errorbars
# </div>
# +
# Create some data
x = np.arange(0., 1., 0.1)
y = x + 1
y_error = y ** 2
# Create figure and axes object
fig, ax = plt.subplots()
# Create errorbar plot using `y_error` and color errorbars red
ax.errorbar(x, y, yerr=y_error, ecolor='red')
ax.set_xlim([-.1, 1.])
# -
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
#
# <h2> Example 6 - Scatter Plots </h2>
# <p> The ```scatterplot``` function allows you to generate scatter plots with dots of different sizes, colors, transparencies, etc.
#
# </div>
# +
#make first dataset
n1 = 500
x1 = 5+1.5*np.random.randn(n1)
y1 = 1+0.2*np.random.randn(n1)
size1 = 100*np.random.rand(n1)
#make second dataset
n2 = 750
x2 = 2+0.5*np.random.randn(n2)
y2 = 0.25+0.5*np.random.randn(n2)
size2 = 100*np.random.rand(n2)
#make figure,axes handles
fig,ax=plt.subplots()
#plot scatter plots
ax.scatter(x1,y1,s=size1,color='red',alpha=0.25)
ax.scatter(x2,y2,s=size2,color='blue',alpha=0.25)
#set axis limits
ax.set_xlim(0,10)
ax.set_ylim(-2,2)
# -
# <div style="border-left: 3px solid #000; padding: 1px; padding-left: 10px; background: #F0FAFF; ">
#
# <h2>Example 7 - Gridspec </h2>
#
# <p>Gridspec is useful when you have uneven subplots. It can get tricky for more complex plots, so first try to use **`ax.subplots()`** (like in the previous examples) if possible.
#
# <p>The documentation for gridspec is here: http://matplotlib.org/users/gridspec.html
# </div>
#
# +
import matplotlib.gridspec as gridspec
t = np.arange(0., 5., 0.01)
fig=plt.figure()
# Create grispec object and define each subplot
gs = gridspec.GridSpec(2, 2)
ax0 = plt.subplot(gs[0, 0]) # Top left corner
ax1 = plt.subplot(gs[0, 1]) # Top right corner
ax2 = plt.subplot(gs[1, :]) # Bottom, span entire width
ax0.plot(t, np.cos(5 * t), c='b')
ax1.plot(t, np.exp(-1 * t), c='g')
ax2.plot(t, np.cos(5 * t) * np.exp(-1 * t), c='k')
# -
# <div style="background: #DFF0D8; border-radius: 3px; padding: 10px;">
# <p>**Exercise 5.5:**
# <p>Modify the above plot in the following ways:
# <ol>
# <li>Add another plot on the right edge that spans the full vertical distance (can be a scatter plot, imshow, line plot, etc.)
# <li>Change the size of the entire plot to 12 inches wide by 6 inches tall
# <li>Add a title to each subplot, and the figure as a whole
# <li>Label all axes
# <li>Modify the x-labels on the lower-left plot such that there is a tick every 0.5 points (0, 0.5, 1, 1.5, etc).
# <li>Add gridlines to the upper left plot
# <li>Make the line in the lower left plot thicker (linewidth of 3) and dashed
# </ol>
# </div>
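# One possible solution sketch for Exercise 5.5 (a starting point rather than the only layout):
# +
t = np.arange(0., 5., 0.01)
fig = plt.figure(figsize=(12, 6))                 # 12 inches wide by 6 inches tall
gs = gridspec.GridSpec(2, 3)

ax0 = plt.subplot(gs[0, 0])    # top left
ax1 = plt.subplot(gs[0, 1])    # top middle
ax2 = plt.subplot(gs[1, :2])   # bottom, spans the two left columns
ax3 = plt.subplot(gs[:, 2])    # right edge, spans the full vertical distance

ax0.plot(t, np.cos(5 * t), c='b')
ax0.set_title('cos(5t)')
ax0.grid(True)                                    # gridlines on the upper-left plot

ax1.plot(t, np.exp(-1 * t), c='g')
ax1.set_title('exp(-t)')

ax2.plot(t, np.cos(5 * t) * np.exp(-1 * t), c='k', linewidth=3, linestyle='--')
ax2.set_title('damped cosine')
ax2.set_xticks(np.arange(0, 5.5, 0.5))            # a tick every 0.5

ax3.scatter(np.random.randn(100), np.random.randn(100), alpha=0.5)
ax3.set_title('extra scatter panel')

for ax in [ax0, ax1, ax2, ax3]:
    ax.set_xlabel('x')
    ax.set_ylabel('y')

fig.suptitle('Exercise 5.5', fontsize=16)
# -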
| 13,849 |
/.ipynb_checkpoints/Analyzing Constitutions II-checkpoint.ipynb
|
6473bbf983bba6be996d7431d550b5ad8372dabb
|
[] |
no_license
|
mbaker21231/constitutions
|
https://github.com/mbaker21231/constitutions
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 707,794 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## More on Analyzing Constitutions
#
# In the previous workbook, we didn't get very far with our analysis. A reason for this is that we did not attempt to break down the constitutions very seriously. They have a definite structure, so we should take that into account and see where it gets us. Accordingly:
# +
import os
import numpy as np
import pandas as pd
# -
# We select a file randomly, just to look at its structure. So:
filelist = os.listdir()
file = filelist[38]
print(file)
# Let's read in the file line by line and make a list out of the non-blank components of the list.
# +
with open(file) as f:
content = f.readlines()
content = [x.strip() for x in content]
content = list(filter(None, content))
# -
len(content)
# We see that we have these definitive markers for the beginning and end of certain things. So, ASTART marks the beginning of an article, and AEND marks the end of an article. We also can add in beginnings and endings for Sections. Anyways:
# +
artbeg = np.zeros(len(content))
artend = np.zeros(len(content))
secbeg = np.zeros(len(content))
secend = np.zeros(len(content))
conbeg = np.zeros(len(content))
conend = np.zeros(len(content))
# Flag the lines where each constitution, article, and section begins and ends
for count, line in enumerate(content):
    if line.find("CSTART") != -1:
        conbeg[count] = 1
    if line.find("CEND") != -1:
        conend[count] = 1
    if line.find("ASTART") != -1:
        artbeg[count] = 1
    if line.find("AEND") != -1:
        artend[count] = 1
    if line.find("SSTART") != -1:
        secbeg[count] = 1
    if line.find("SEND") != -1:
        secend[count] = 1
# -
# So, we now can make a dataframe out of the constitution, for one:
Foo = pd.DataFrame([content, list(artbeg), list(artend), list(secbeg), list(secend), list(conbeg), list(conend)])
Foo = Foo.T
text_lines=np.zeros(len(content))
for count, line in enumerate(Foo[0]):
    if any(c for c in line if c.islower()):
        text_lines[count] = 1
TL = pd.DataFrame(text_lines)
Foo['textdum'] = TL
paragraphs = []
count = 0
for line in Foo[0]:
newline = []
if Foo['textdum'][count] == 1:
newline.append(Foo['textdum'][count])
count = count + 1
| 2,451 |
/shopping_data3.ipynb
|
f84392d75cb2a16cbee5dbe7330de46508181bd3
|
[] |
no_license
|
mohammad-assaad999/Cryptocurrencies
|
https://github.com/mohammad-assaad999/Cryptocurrencies
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,889,973 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Unit 29. Using Functions
# ### Defining and Calling Functions
def hello():
print('Hello, world!')
for i in range(3):
hello()
def add(a, b):
print(a + b)
add(10, 20)
# ### Returning a Result from a Function
def add(a, b):
return a + b
c = add(10, 20)
c
print(add(20,30))
# Exiting a function early with return
def not_ten(a):
if a == 10:
return
    print(f'{a} is not 10.')
not_ten(5)
def add_sub(a, b):
return a + b, a - b
x, y = add_sub(40,30)
print(x, y)
t = add_sub(40,30)
t
x, _ = add_sub(40,30) # the underscore means we will not use the second return value
x
x, y = map(int, input().split())
def calc(x,y):
return x + y, x - y, x * y, x / y
a, s, m, d = calc(x, y)
print('Addition: {0}, Subtraction: {1}, Multiplication: {2}, Division: {3}'.format(a, s, m, d))
    model.fit(data)
# Predict clusters
predictions = model.predict(data)
# Create return DataFrame with predicted clusters
data["class"] = model.labels_
return data
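# The top of this cell was cut off above. Below is a minimal sketch of what `get_clusters` most likely looks like, assuming scikit-learn's `KMeans`; the function name, the `class` column, and `model.labels_` come from the surrounding code, while the `KMeans` choice and `random_state` are assumptions.
# +
from sklearn.cluster import KMeans
def get_clusters(k, data):
    # Initialize a K-means model with k clusters (random_state assumed for reproducibility)
    model = KMeans(n_clusters=k, random_state=0)
    # Fit the model to the feature data
    model.fit(data)
    # Attach the predicted cluster label to each row and return the DataFrame
    data["class"] = model.labels_
    return data
# -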
five_clusters = get_clusters(5, df_shopping)
five_clusters.head()
six_clusters = get_clusters(6, df_shopping)
six_clusters.head()
# Plot the 2D-scatter with x="Annual Income", y="Spending Score (1-100)"
five_clusters.hvplot.scatter(x='Annual Income', y='Spending Score (1-100)', by='class')
# Plot the 3D-scatter with x="Annual Income", y="Spending Score (1-100)" and z="Age"
fig = px.scatter_3d(
five_clusters,
x="Age",
y="Spending Score (1-100)",
z="Annual Income",
color="class",
symbol="class",
width=800,
)
fig.update_layout(legend=dict(x=0, y=1))
fig.show()
# Plot the 2D-scatter with x="Annual Income", y="Spending Score (1-100)"
six_clusters.hvplot.scatter(x='Annual Income', y='Spending Score (1-100)', by='class')
# Plotting the 3D-Scatter with x="Annual Income", y="Spending Score (1-100)" and z="Age"
fig = px.scatter_3d(
six_clusters,
x="Age",
y="Spending Score (1-100)",
z="Annual Income",
color="class",
symbol="class",
width=800,
)
fig.update_layout(legend=dict(x=0, y=1))
fig.show()
cm = sklearn.metrics.confusion_matrix(y_test, y_predicted)
cm
# + colab={"base_uri": "https://localhost:8080/"} id="FJYlaIMkHpx9" outputId="167a6ec3-201e-46df-bee6-07cb8133f92e"
model.predict_proba(x_test)
# + colab={"base_uri": "https://localhost:8080/"} id="1Vqn-vRJIEI2" outputId="51a37456-bf09-4d13-81f8-115d845b900e"
model.partial_fit(x_test, y_test)
y_predicted = model.predict(x_test)
accuracy = sklearn.metrics.accuracy_score(y_test, y_predicted)
print("Accuracy =", accuracy)
| 2,677 |
/h2o/sparkling-water.ipynb
|
96de7d7d3daebc176f486142447c0afb468603bb
|
[
"Apache-2.0"
] |
permissive
|
danielfrg/ml-notes
|
https://github.com/danielfrg/ml-notes
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 33,153 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Start Spark and H2O
# ### Create Spark session
import pyspark
from pyspark.sql import SparkSession, functions
spark = SparkSession.builder.appName("sparkling").getOrCreate()
spark
# ### Create H2OContext
import h2o
from pysparkling import H2OContext
hc = H2OContext.getOrCreate(spark)
# ## Data
#
# ### Loading data
#
# Here we load using H2O and move to Spark DataFrames because it's easy to use `h2o.import_file`,
# but in general it would be more common to use Spark directly to load from HDFS, for example.
allFlights = h2o.import_file(path="http://h2o-public-test-data.s3.amazonaws.com/smalldata/airlines/year2005.csv.gz")
weatherTable = h2o.import_file(path="http://h2o-public-test-data.s3.amazonaws.com/smalldata/chicago/Chicago_Ohare_International_Airport.csv")
allFlights.head(5)
weatherTable.head(5)
allFlightsDF = hc.as_spark_frame(allFlights)
weatherTableDF = hc.as_spark_frame(weatherTable)
allFlightsDF.count(), allFlightsDF.head()
weatherTableDF.count(), weatherTableDF.head()
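# As noted above, it would also be common to load the data with Spark directly rather than through `h2o.import_file`. A rough sketch (the path and CSV options below are placeholders, not part of this notebook):
# +
# Read a CSV straight into a Spark DataFrame from HDFS (or any Spark-supported path)
flightsSparkDF = spark.read.csv(
    "hdfs:///data/airlines/year2005.csv",  # placeholder path
    header=True,        # first row contains column names
    inferSchema=True)   # let Spark infer column types
flightsSparkDF.printSchema()
# -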
# ### Clean data using Spark
flightsToORD = allFlightsDF.filter(allFlightsDF.Dest == "ORD")
flightsToORD.count()
datasetDF = flightsToORD.join(weatherTableDF, (flightsToORD.Year == functions.year(weatherTableDF.Date)) &
(flightsToORD.Month == functions.month(weatherTableDF.Date)) &
(flightsToORD.DayofMonth == functions.dayofmonth(weatherTableDF.Date))
)
datasetDF.count()
datasetDF.head()
datasetDF = datasetDF.select("Year", "Month", "DayofMonth", "CRSDepTime", "CRSArrTime", "CRSElapsedTime",
"UniqueCarrier", "FlightNum", "TailNum", "Origin", "Distance",
"TmaxF", "TminF", "TmeanF", "PrcpIn", "SnowIn", "CDD", "HDD", "GDD", "ArrDelay")
# ## Build model
#
# We take the Spark DataFrame and convert it to an H2OFrame to train a model
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
flightsWithWeather = hc.as_h2o_frame(datasetDF)
flightsWithWeather.head(5)
train, valid, test = flightsWithWeather.split_frame(ratios=[.7, .15])
predictor_columns = list(range(0, flightsWithWeather.ncol - 1))
target_col = flightsWithWeather.ncol - 1
dl_model = H2ODeepLearningEstimator(hidden=[100, 100], epochs=5)
dl_model.train(x=predictor_columns, y=target_col, training_frame=train, validation_frame=valid)
dl_model.model_performance(test)
test["ArrDelay"].head(5)
dl_model.predict(test.head(5))
x=math.log((math.sqrt(1-2*par[2]*z+z**2)+z-par[2])/(1-par[2]))
A=1+(((1-par[1])**2*par[0]**2)/(24.*(V**2))+(par[0]*par[1]*par[3]*par[2])/(4.*V)+
((par[3]**2)*(2-3*(par[2]**2))/24.))*expiry
B=1+(1/24.)*(((1-par[1])*logFK)**2)+(1/1920.)*(((1-par[1])*logFK)**4)
ivol=(par[3]*logFK*A)/(x*B)
diff=ivol-MKT[j]
elif method=='Obloj':
logFK=math.log(F/K[j])
one_beta=1-par[1]
one_betasqr=one_beta*one_beta
fK=F*K[j]
fK_beta=math.pow(fK,one_beta/2.0)
sigma_exp=(one_betasqr/24.0*par[0]*par[0]/fK_beta/fK_beta+0.25*par[2]*par[1]*par[3]*par[0]/fK_beta+
(2.0-3.0*par[2]*par[2])/24.0*par[3]*par[3])
if par[3]==0:
sigma=(1-par[1])*par[0]*logFK/(math.pow(F,(1-par[1]))-math.pow(K[j],(1-par[1])))
elif par[1]==1:
z=par[3]*logFK/par[0]
sigma=par[3]*logFK/math.log((math.sqrt(1-2*par[2]*z+z*z)+z-par[2])/(1-par[2]))
else:
z=par[3]*(math.pow(F,(1-par[1]))-math.pow(K[j],(1-par[1])))/par[0]/(1-par[1])
sigma=par[3]*logFK/math.log((math.sqrt(1-2*par[2]*z+z*z)+z-par[2])/(1-par[2]))
ivol=sigma*(1.0+sigma_exp*expiry)
diff=ivol-MKT[j]
res+=diff**2
obj=math.sqrt(res)
return obj
def calibration(self,starting_par=np.array([0.001,0.5,0,0.001]),method='Hagan',eqc='none'):
[F,K,expiry,MKT]=[self.F,self.K,self.expiry,self.MKT]
starting_guess=starting_par
if eqc=='none':
pass
else:
starting_guess[eqc[0]]=eqc[1]
alpha=len(F)*[starting_guess[0]]
beta=len(F)*[starting_guess[1]]
rho=len(F)*[starting_guess[2]]
nu=len(F)*[starting_guess[3]]
jacmat=len(F)*[starting_guess[3]]
for i in range(len(F)):
x0=starting_guess
bnds=((0.001,None),(0,1),(-0.999,0.999),(0.001,None))
if eqc=='none':
res=minimize(self.objfunc,x0,(F[i],K[i],expiry[i],MKT[i],method),bounds=bnds,method='SLSQP')
else:
res=minimize(self.objfunc,x0,(F[i],K[i],expiry[i],MKT[i],method),bounds=bnds,constraints={'type':'eq','fun':lambda par: par[eqc[0]]-eqc[1]},method='SLSQP')
alpha[i]=res.x[0]
beta[i]=res.x[1]
rho[i]=res.x[2]
nu[i]=res.x[3]
jacmat[i]=res.jac
jacmat=pd.DataFrame(jacmat)
params=pd.DataFrame(data=[list(expiry),list(F),alpha,beta,rho,nu],index=['expiry','F','alpha','beta','rho','nu'])
return {'alpha':alpha,'beta':beta,'rho':rho,'nu':nu,'params':params,'jacmat':jacmat}
def ivol_SABR(self,alpha,beta,rho,nu,method='Hagan'):
sabr=SABR_model(0.5,0,0.25) #random nos
[F,K,expiry]=[self.F,self.K,self.expiry]
return sabr.ivol_matrix(alpha,beta,rho,nu,F,K,expiry,method)
fitter=Fitter('market_data.xlsx')
fitter.input_read()
results=fitter.calibration()
fitter.ivol_SABR(results['alpha'],results['beta'],results['rho'],results['nu'])
| 6,091 |
/Machine Learning with Python and scikit-learn.ipynb
|
0dd289074182597ec4f9b2002d0de30bf1ae488c
|
[] |
no_license
|
zhongzhu/learn-jupiter-notes
|
https://github.com/zhongzhu/learn-jupiter-notes
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,717 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## LEARNING PATH: Machine Learning with Python and scikit-learn
# * https://www.safaribooksonline.com/learning-paths/learning-path-machine/9781789536911/
# * This path navigates across the following products (in sequential order):
# * Fundamentals of Machine Learning with scikit-learn (2h 33m)
# * Hands-On Machine Learning with Python and Scikit-Learn (2h 39m)
# * Learn Machine Learning in 3 Hours (2h 14m)
# * related code https://github.com/PacktPublishing/Fundamentals-of-Machine-Learning-with-scikit-learn
# ## three types of learning
# * supervised learning
# * unsupervised learning
# * reinforcement learning
# * feedback (reward) provided by the environment, e.g., automatically playing a game
# * understand whether a given action was positive or not
# ## Training
# * underfitting
# * normal fitting
# * overfitting
#
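# A quick illustration of underfitting vs. overfitting with scikit-learn; the synthetic dataset and the polynomial degrees below are arbitrary choices for demonstration only.
# +
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 1, 30))[:, None]
y = np.cos(1.5 * np.pi * X.ravel()) + rng.normal(0, 0.1, 30)
for degree in (1, 4, 15):   # underfit, reasonable fit, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, X, y, cv=5, scoring='neg_mean_squared_error').mean()
    print(f'degree={degree:2d}  cross-validated MSE={-score:.3f}')
# -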
| 1,094 |
/misc/others/ExperimentBiggerAutoEncoderSigmoid-OneImage.ipynb
|
9b31d1a86c86ae0bf4c26ec173a8ad7ae7df9b8c
|
[] |
no_license
|
WangLiwen1994/2DSceneRelighting
|
https://github.com/WangLiwen1994/2DSceneRelighting
| 1 | 0 | null | 2020-06-05T13:00:29 | 2020-05-29T21:56:40 | null |
Jupyter Notebook
| false | false |
.py
| 2,330,966 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Inspired from https://medium.com/@vaibhaw.vipul/building-autoencoder-in-pytorch-34052d1d280c and https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
# ## Imports
import torch
import torchvision as tv
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torchvision.utils import save_image
from torch.utils.data import Dataset, DataLoader
from torch.optim.lr_scheduler import MultiStepLR
# +
import sys
sys.path.append('../')
from .summary import summarize
from models.BiggerConvAutoencoderSigmoidModel import Autoencoder
# -
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# %matplotlib inline
filePath1 = "../../../VIDIT/train/scene_abandonned_city_54/2500/E/image1015.png"
img1 = mpimg.imread(filePath1)[:,:,0:3]
implot1 = plt.imshow(img1)
img1=torch.from_numpy(img1)
print(img1.shape)
img1 = img1.permute(2,0,1)
print(img1.shape)
filePath2 = "../../../VIDIT/train/scene_abandonned_city_54/2500/S/image1025.png"
img2 = mpimg.imread(filePath2)[:,:,0:3]
implot2 = plt.imshow(img2)
img2=torch.from_numpy(img2)
print(img2.shape)
img2 = img2.permute(2,0,1)
print(img2.shape)
# ## Loading and Transforming data
# +
class OneImageDataset(Dataset):
def __init__(self):
self.samples=[(Variable(img1).cuda(),Variable(img2).cuda())]
def __len__(self):
return len(self.samples)
def __getitem__(self, idx):
return self.samples[idx]
trainloader = OneImageDataset()
testloader = OneImageDataset()
# -
# ## About the model
model = Autoencoder().cuda()
distance = nn.MSELoss().cuda()  # We could modify this loss, e.g., require that input and output share the same edges; we should experiment with different ones
optimizer = torch.optim.Adam(model.parameters(), weight_decay=1e-5) #Weight Decay is L2 Regularization
scheduler = MultiStepLR(optimizer, milestones=[3000,8000,13000,18000], gamma=0.1) #divide learning rate by 10 at each milestone
summarize(model, input_size=img1.shape)
# ## Training
#defining some params
num_epochs = 20000 #you can go for more epochs
for epoch in range(num_epochs):
for i,data in enumerate(trainloader):
img, groundtruth = data
img = img.unsqueeze(0)
groundtruth = groundtruth.unsqueeze(0)
# ===================forward=====================
output = model(img)
loss = distance(output, groundtruth)
# ===================backward====================
optimizer.zero_grad()
loss.backward()
optimizer.step()
scheduler.step()
if epoch%1000 == 0:
implotpred = plt.imshow(output.cpu().detach().permute(0,2,3,1).numpy().squeeze(), vmin=0, vmax=1)
plt.show()
# ===================log========================
if epoch%10 == 0:
print('epoch [{}/{}], loss:{:.4f}'.format(epoch+1, num_epochs, loss.data))
print('Finished Training')
# A part of the next cell should be in utils/save_model.py
PATH = './OneImageBiggerAutoEncoderSigmoid_net.pth'
torch.save(model.state_dict(), PATH)
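# A minimal sketch of the `utils/save_model.py` helper suggested above; the function name and signature are assumptions, not existing project code.
# +
import os
import torch
def save_model(model, path):
    # Create the target directory if needed, then store only the state dict
    os.makedirs(os.path.dirname(path) or '.', exist_ok=True)
    torch.save(model.state_dict(), path)
# Usage would mirror the cell above, e.g. save_model(model, PATH)
# -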
for i,data in enumerate(testloader):
img, groundtruth = data
img = img.unsqueeze(0)
groundtruth = groundtruth.unsqueeze(0)
# ===================forward=====================
output = model(img)
print ("Input:")
implotin = plt.imshow(img.cpu().detach().permute(0,2,3,1).numpy().squeeze())
plt.show()
print ("Predicted:")
implotpred = plt.imshow(output.cpu().detach().permute(0,2,3,1).numpy().squeeze(), vmin=0, vmax=1)
plt.show()
print ("GroundTruth:")
implotgt = plt.imshow(groundtruth.cpu().detach().permute(0,2,3,1).numpy().squeeze())
plt.show()
| 4,012 |
/sliderule_dsi_json_try.ipynb
|
e53e6f7430cc926d7945f8b143b46767375d7bd7
|
[] |
no_license
|
Nee-DS/Json_data_wrrangling
|
https://github.com/Nee-DS/Json_data_wrrangling
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 42,046 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # JSON examples and exercise
# ****
# + get familiar with packages for dealing with JSON
# + study examples with JSON strings and files
# + work on exercise to be completed and submitted
# ****
# + reference: http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-reader
# ****
import pandas as pd
# ## imports for Python, Pandas
import json
from pandas.io.json import json_normalize
# ## JSON example, with string
#
# + demonstrates creation of normalized dataframes (tables) from nested json string
# + source: http://pandas.pydata.org/pandas-docs/stable/io.html#normalization
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
# ****
# ## JSON example, with file
#
# + demonstrates reading in a json file as a string and as a table
# + uses small sample file containing data about projects funded by the World Bank
# load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
world_bank=pd.read_json('data/world_bank_projects.json')
print(world_bank.columns)
# ****
# ## JSON exercise
#
# Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
# 1. Find the 10 countries with most projects
# 2. Find the top 10 major project themes (using column 'mjtheme_namecode')
# 3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
import pandas as pd
import numpy as np
world_bank=pd.read_json('data/world_bank_projects.json')
#print(world_bank.head())
#grouped_code=world_bank.groupby('countrycode').size()
#grouped_code.sort_values(ascending=False).head(9)
grouped_name=world_bank.groupby('countryshortname').size()
grouped_name.sort_values(ascending=False).head(10)
# +
#grouped=world_bank.groupby('mjtheme_namecode').size()
#print(world_bank['mjtheme_namecode'].head(1))
# +
data_projects=json.load((open('data/world_bank_projects.json')))
#data
table_projects=json_normalize(data_projects,'mjtheme_namecode')
#print(table_projects['name'].head())
#print(table.head())
#table_projects.groupby('code').size()
#table_projects1=table_projects.drop_duplicates()
#table_projects1['name'].replace(' ','NaN')
#table_projects_tidy=table_projects1.dropna(how='all')
#print(table_projects1)
#table_projects.name.unique()
#unique_projects = (table_projects['name'].append(table_projects['code'])).unique()
#print(unique_projects)
#table_projects.groupby(['name','code'],as_index=False).size()
table_projects.groupby(['name','code']).size().reset_index()
# -
#projects_name = ['Human development', 'Economic management','Social protection and risk management',
#'Trade and integration','Public sector governance','Environment and natural resources management',
#'Social dev/gender/inclusion', 'Financial and private sector development', 'Rural development',
#'Urban development', 'Rule of law']
#table_projects.loc[table_projects['name'].isin(projects_name)].unique()
#create a dictionary
name_code={1:"Economic management",2:"Public sector governance",3:"Rule of law",
4:"Financial and private sector development",
5:"Trade and integration",6:"Social protection and risk management", 7:"Social dev/gender/inclusion",
8:"Human development",9:"Urban development",10:"Rural development",
11:"Environment and natural resources management"}
#name_code[1]
empty_name=np.where(table_projects['name'].map(lambda x: x == ''))
print(empty_name)
#table_projects['name'].''.sum()
#table_projects.replace('')
#name_project=table_projects['name']
#name_project.notnull().sum()
# +
sd2=table_projects.drop_duplicates().replace('',np.nan).dropna(how='any')
print(sd2.loc[sd2['code'] == '11']['name'].values[0])
for index, row in table_projects.iterrows():
    if row['name'] == '':
        # Assign via .at so the DataFrame itself is updated (mutating `row` would not persist)
        table_projects.at[index, 'name'] = sd2.loc[sd2['code'] == row['code']]['name'].values[0]
#print(table_projects)
#print(grouped_projects.sort_values(ascending=False).head(10))
#grouped_projects=table_projects.groupby('name').size()
grouped_projects=table_projects.groupby('name').count()[['code']].sort_values('code', ascending=False)
print(grouped_projects)
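# An alternative way to fill in the missing names is to use the `name_code` dictionary defined above instead of a row-wise loop; this assumes the dictionary covers every code that appears.
# +
# The codes in the table are strings while the dictionary keys are integers, so cast before mapping
table_projects['name'] = table_projects['code'].astype(int).map(name_code)
table_projects.groupby('name').size().sort_values(ascending=False).head(10)
# -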
| 5,262 |
/Portfolio/Summary Statistics/Data Transformation/.ipynb_checkpoints/Startup Transformation-checkpoint.ipynb
|
8dd1e9445385b57644e616260851926b03a852b1
|
[] |
no_license
|
Manivelas23/Data-Science
|
https://github.com/Manivelas23/Data-Science
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 108,042 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Startup Transformation
# In this project, you’ll work as a data analyst for a tech startup that is looking to improve its operations after a global pandemic has taken the world by storm.
#
# You will apply data transformation techniques to make better sense of the company’s data and help answer important questions such as:
#
# * Is the company in good financial health?
# * Does the company need to let go of any employees?
# * Should the company allow employees to work from home permanently?
# ### Analyzing Revenue and Expenses
#
# 1. The management team of the company you work for is concerned about the status of the company after a global pandemic.
#
# The CFO (Chief Financial Officer) asks you to perform some data analysis on the past six months of the company’s financial data, which has been loaded in the variable `financial_data`.
#
# First, examine the first few rows of the data using `print()` and `.head()`.
# +
from sklearn import preprocessing
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import numpy as np
# load in financial data
financial_data = pd.read_csv('financial_data.csv')
# code goes here
#exercise 1
display(financial_data.head())
# -
# 2. Notice that `financial_data` has three columns – `Month, Revenue, and Expenses`.
#
# Store each column in three separate variables called `month`, `revenue`, and `expenses`.
#exercise 2
month = financial_data.Month
revenue = financial_data.Revenue
expenses = financial_data.Expenses
# 3. Next, use the following code to create a plot of revenue over the past six months:
# ```python
# plt.plot(month,revenue)
# plt.show()
# ```
#exercise 3
plt.plot(month,revenue)
plt.xlabel('Month')
plt.ylabel('Amount ($)')
plt.title('Revenue')
plt.show()
# 4. On the right, you should now see a plot of revenue over time. You can label and format the figure using the following functions:
#
# ```python
# plt.xlabel('Month')
# plt.ylabel('Amount ($)')
# plt.title('Revenue')
# ```
#
# These should be added before `plt.show()`. Add the labels to your plot.
# 5. Repeat steps 3 and 4 for monthly expenses. In other words, create a second plot of monthly expenses over the past 6 months. Note that you’ll need to use the function `plot.clf()` prior to creating this new plot. Otherwise, it will be plotted on-top of the revenue plot. The code should look something like this:
# ```python
# plt.clf()
# #insert code to create plot here
# #add labels to the plot here
# plt.show()
# ```
#
# How are monthly expenses changing over time?
#exercise 5
plt.clf()
plt.plot(month,expenses)
plt.xlabel('Month')
plt.ylabel('Amount ($)')
plt.title('Expenses')
plt.show()
# 6. As shown, revenue seems to be quickly decreasing while expenses are increasing. If the current trend continues, expenses will soon surpass revenues, putting the company at risk.
#
# After you show this chart to the management team, they are alarmed. They conclude that expenses must be cut immediately and give you a new file to analyze called `expenses.csv`.
#
# Use pandas to read in expenses.csv and store it in a variable called `expense_overview`.
#
# Print the first seven rows of the data.
#exercise 6
expense_overview = pd.read_csv('expenses.csv')
display(expense_overview.head(7))
# ***
# ### Analyzing Revenue and Expenses
# 7. Notice that there are two columns:
#
# * Expense: indicates the expense category
# * Proportion: indicates how much of the overall expenses a specific category takes up
#
# Store the Expense column in a variable called `expense_categories` and the `Proportion` column in a variable called `proportions`.
#exercise 7
expense_categories = expense_overview.Expense
proportions = expense_overview.Proportion
# 8. Next, we want to create a pie chart of the different expense categories. Use `plt.clf()` again to clear the previous plot, then create a pie chart using the `plt.pie()` method, passing in two arguments:
#
# * `proportions`
# * `labels = expense_categories`
#
# Give your pie chart a title using `plt.title()`, then use `plt.show()` at the end to show the plot.
#
# 9. Notice that the pie chart currently looks deformed.
#
# Above plt.show(), add in the following two lines of code to set the axis and adjust the spacing:
#
# ```python
# plt.axis('Equal')
# plt.tight_layout()
# ```
#
# Take a moment to look at the pie chart. Which expense categories make up most of the data, and which ones aren’t so significant?
#exercise 8 and 9
plt.clf()
plt.pie(proportions, labels = expense_categories)
plt.title('Expenses Proportions')
plt.axis('Equal')
plt.tight_layout()
plt.show()
# 10. It seems that `Salaries`, `Advertising`, and `Office Rent` make up most of the expenses, while the rest of the categories make up a small percentage.
#
# Before you hand this pie chart back to management, you would like to update the pie chart so that all categories making up less than 5% of the overall expenses (Equipment, Utilities, Supplies, and Food) are collapsed into an “Other” category.
#
# Update the pie chart accordingly.
#exercise 10
expense_categories = ['Salaries', 'Advertising', 'Office Rent', 'Other']
proportions = [0.62, 0.15, 0.15, 0.08]
plt.clf()
plt.pie(proportions, labels = expense_categories)
plt.title('Expense Categories')
plt.axis('Equal')
plt.tight_layout()
plt.show()
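# The same grouping can also be done programmatically instead of hard-coding the numbers; a sketch assuming the 5% cutoff described above.
# +
# Collapse every category below the 5% cutoff into a single "Other" slice
small = expense_overview['Proportion'] < 0.05
labels_prog = list(expense_overview.loc[~small, 'Expense']) + ['Other']
proportions_prog = list(expense_overview.loc[~small, 'Proportion']) + [expense_overview.loc[small, 'Proportion'].sum()]
plt.clf()
plt.pie(proportions_prog, labels=labels_prog)
plt.title('Expense Categories (programmatic grouping)')
plt.axis('Equal')
plt.tight_layout()
plt.show()
# -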
# 11. You should now see four categories in your updated pie chart:
#
# * Salaries
# * Advertising
# * Office Rent
# * Other
#
# This simplified pie chart helps the management team see a big picture view of the company’s expenses without getting distracted by noisy data.
# ***
# ### Employee Productivity
# 12. Salaries make up 62% of expenses. The management team determines that to cut costs in a meaningful way, they must let go of some employees.
#
# Each employee at the company is assigned a productivity score based on their work. The management would like to keep the most highly productive employees and let go of the least productive employees.
#
# First, use pandas to load in `employees.csv` and store it in a variable called `employees`.
#
# Print the first few rows of the data.
#exercise 12
employees = pd.read_csv('employees.csv')
display(employees.head())
# 13. Notice that there is a Productivity column, which indicates the productivity score assigned to that employee.
#
# Sort the employees data frame (in ascending order) by the Productivity column and store the result in a variable called `sorted_productivity`.
#
# To sort a data frame, you can do the following:
#
# ```python
# sorted_data = dataframe_name.sort_values(by=['Column Name'])
# ```
#
# Print `sorted_productivity`.
#exercise 13
sorted_productivity = employees.sort_values(by=['Productivity'])
display(sorted_productivity)
# 14. You should now see the employees with the lowest productivity scores at the top of the data frame.
#
# The company decides to let go of the 100 least productive employees.
#
# Store the first 100 rows of `sorted_productivity` in a new variable called `employees_cut` and print out the result.
#
# Unfortunately, this batch of employees won’t be so lucky.
#exercise 14
employees_cut = sorted_productivity.head(100)
display(employees_cut)
# 15. Your colleague Sarah, a data scientist at the company, would like to explore the relationship between `Income` and `Productivity` more in depth, but she points out that these two features are on vastly different scales.
#
# For example, productivity is a feature that ranges from 0-100, but income is measured in the thousands of dollars.
#
# Moreover, there are outliers in the data that add an additional layer of complexity.
#
# She asks you for advice on how she should transform the data. Should she perform normalization, standardization, log transformation, or something else?
#
# Put your answer in a string in a variable called `transformation`.
#exercise 15
transformation = 'standardization'
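# For reference, a sketch of what standardizing those two features might look like with scikit-learn. The column names 'Income' and 'Productivity' are taken from the description above and are assumed to exist in the employees data.
# +
from sklearn.preprocessing import StandardScaler
# Standardize Income and Productivity so each has mean 0 and standard deviation 1
scaler = StandardScaler()
scaled = scaler.fit_transform(employees[['Income', 'Productivity']])
employees_scaled = pd.DataFrame(scaled, columns=['Income', 'Productivity'])
print(employees_scaled.describe())
# -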
# ### Commute Times and Log Transformation
#
#
# 16. The COO (Chief Operating Officer) is debating whether to allow employees to continue to work from home post-pandemic.
#
# He first wants to take a look at roughly how long the average commute time is for employees at the company. He asks for your help to analyze this data.
#
# The `employees` data frame has a column called `Commute` Time that stores the commute time (in minutes) for each employee.
#
# Create a variable called `commute_times` that stores the `Commute Time` column.
#exercise 16 and 19
commute_times = employees['Commute Time']
commute_times_log = np.log(commute_times)
# 17. Let’s do some quick analysis on the commute times of employees.
#
# Use `print()` and `.describe()` to print out descriptive statistics for `commute_times`.
#
# What are the average and median commute times? Might it be worth it for the company to explore allowing remote work indefinitely so employees can save time during the day?
#exercise 17
print(commute_times.describe())
# 18. Let’s explore the shape of the commute time data using a histogram.
#
# First, use `plt.clf()` to clear the previous plots. Then use `plt.hist()` to plot the histogram of `commute_times`. Finally, use `plt.show()` to show the plot. Feel free to add labels above `plt.show()` if you would like to practice!
#
# What do you notice about the shape of the data? Is it symmetric, left skewed, or right skewed?
#exercise 18
plt.clf()
plt.hist(commute_times)
plt.show()
# 19. The data seems to be skewed to the right. To make it more symmetrical, we might try applying a log transformation.
#
# Right under the `commute_times` variable, create a variable called `commute_times_log` that stores a log-transformed version of `commute_times`.
#
# To apply log-transform, you can use numpy’s `log()` function.
# 20. Replace the histogram for `commute_times` with one for `commute_times_log`.
#
# Notice how the shape of the data changes from being right skewed to a more symmetrical (and even slightly left-skewed) in shape. After applying log transformation, the transformed data is more “normal” than before.
# +
#exercise 20
plt.clf()
plt.hist(commute_times_log)
plt.show()
| 10,336 |
/.ipynb_checkpoints/2_Training-checkpoint.ipynb
|
73cab3ecb872a724ffca1cb5fceca626137ae698
|
[] |
no_license
|
aish27/-Image-Captioning
|
https://github.com/aish27/-Image-Captioning
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 26,999 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import random
import scipy.linalg as sp
import scipy.stats as ss
random.seed(1)
# ### Generating State space model for reactor:
# +
W = 2104.7 # in kg
X00 = np.array([0.0874, 0.3896, 0.0153, 0.2907, 0.1075, 0.1095])
P00 = np.identity(6, dtype = float) # Initial Covariance Estimate
R = np.identity(4, dtype = float) # Measurement Noise Covariance Matrix at Ts=30
R[0][0] = (0.00389)**2
R[1][1] = (0.00015)**2
R[2][2] = (0.0029)**2
R[3][3] = (0.0011)**2
Q = np.identity(6, dtype = float) # State Noise Covariance Matrix
Q[0][0] = (0.00044)**2
Q[1][1] = (0.0019)**2
Q[2][2] = (0.00008)**2
Q[3][3] = (0.0014)**2
Q[4][4] = (0.00053)**2
Q[5][5] = (0.00054)**2
Ns = 1900
timestep = 30
Fa = 1.8270 # in kg/s
b1 = 1.6599 * 10**6
b2 = 7.2117 * 10**8
b3 = 2.6745 * 10**12
# -
# ### Creating a class for reactor
from math import exp
class ottoReactor(object):
def __init__(self, Tr):
#The states:
self.xa = 0.0874
self.xb = 0.3896
self.xc = 0.0153
self.xe = 0.2907
self.xg = 0.1075
self.xp = 0.1095
#The reaction constants:
self.k1 = b1 * exp(-6666.7/Tr)
self.k2 = b2 * exp(-8333.3/Tr)
self.k3 = b3 * exp(-11111/Tr)
#The reaction rates:
self.r1 = self.k1*self.xa*self.xb*W
self.r2 = self.k2*self.xb*self.xc*W
self.r3 = self.k3*self.xc*self.xp*W
#Differentiation of rates w.r.t Tr of 11.2:
self.dr1 = self.xa*self.xb*W*self.r1*6666.7/Tr**2
self.dr2 = self.xb*self.xc*W*self.r2*8333.3/Tr**2
self.dr3 = self.xc*self.xp*W*self.r3*11111/Tr**2
#Setting state from input TO class:
def setState(self, X):
self.xa=X[0]
self.xb=X[1]
self.xc=X[2]
self.xe=X[3]
self.xg=X[4]
self.xp=X[5]
#Obtaining current state FROM class:
def getState(self):
return np.array([self.xa, self.xb, self.xc, self.xe, self.xg, self.xp])
#The differentiating equations (11.2a, 11.2b, 11.2c, 11.2d, 11.2e, 11.2f):
def dxdt(self, Fb):
dxadt = (Fa - (Fa + Fb)*self.xa - self.r1)/W
dxbdt = (Fa - (Fa + Fb)*self.xb - self.r1 - self.r2)/W
dxcdt = (-(Fa + Fb)*self.xc + 2*self.r1 - 2*self.r2 - self.r3)/W
dxedt = (-(Fa + Fb)*self.xe + 2*self.r2)/W
dxgdt = (-(Fa + Fb)*self.xg + 1.5*self.r3)/W
dxpdt = (-(Fa + Fb)*self.xp + self.r2 - 0.5*self.r3)/W
A = np.array([dxadt, dxbdt, dxcdt, dxedt, dxgdt, dxpdt])
return A
#Euler method to update states:
def euler(self, timestep, Fb, Tr, noise=False):
X=self.getState()
X+=timestep*self.dxdt(Fb)
if noise==True:
X+=np.random.multivariate_normal([0, 0, 0, 0, 0, 0], Q)
self.setState(X)
def Jacobian(self, Fb):
X=self.getState()
xa=X[0]
xb=X[1]
xc=X[2]
xe=X[3]
xg=X[4]
xp=X[5]
j1 = np.array([-(Fa+Fb) - self.k1*xb*W, -self.k1*xa*W, 0, 0, 0, 0])/W
j2 = np.array([(-self.k1*xb*W), (-(Fa+Fb) - self.k1*xa*W - self.k2*xc*W), (-self.k2*xb*W), 0,0,0])/W
j3 = np.array([(2*self.k1*xb*W), (2*self.k1*xa*W - 2*self.k2*xc*W), (-Fa-Fb-2*self.k2*xb*W-self.k3*xp*W), 0,0, (-self.k3*xc*W)])/W
j4 = np.array([0,2*self.k2*xc*W, 2*self.k2*xb*W, -Fa-Fb, 0,0])/W
j5 = np.array([0,0, 1.5*self.k3*xp*W, 0, -Fa-Fb, 1.5*self.k3*xc*W])/W
j6 = np.array([0,0, self.k2*xb*W - 0.5*self.k3*xp*W, 0,0, -Fa-Fb-0.5*self.k3*xc*W])/W
j = np.array([j1, j2, j3, j4, j5, j6])
A = -np.array([xa,xb,xc,xe,xg,xp])
B = np.array([-self.dr1, -self.dr1-self.dr2, 2*self.dr1 - 2*self.dr2 - self.dr3, 2*self.dr2, 1.5*self.dr3, self.dr2-0.5*self.dr3])
A = np.array([A,B])
return j, A
def y_measure(self, noise=False):
c = np.zeros((4,6), dtype = float)
c[0][1] = 1
c[1][2] = 1
c[2][4] = 1
c[3][5] = 1
Y=np.dot(c, self.getState())
if noise:
Y+=np.random.multivariate_normal([0, 0, 0, 0], R)
return Y
def del_y(self):
c = np.zeros((4,6), dtype = float)
c[0][1] = 1
c[1][2] = 1
c[2][4] = 1
c[3][5] = 1
return c
# +
a = ottoReactor(362.85)
X = a.getState()
Xall = []
Fb = 4.789
Tr = 362.85
Y_pred = []
# Defining perturbation inputs:
S1=[]
S2=[]
for i in range(100):
bit1=random.choice([-0.2394, 0.2394])
bit2=random.choice([-4.485, 4.485])
for j in range(19):
S1.append(4.789+bit1)
S2.append(362.85+bit2)
for i in range(0, 1900):
Fb=S1[i]
Tr=S2[i]
a.euler(timestep, Fb, Tr, True)
X=a.getState()
Xall.append(X)
y=a.y_measure(True)
Y_pred.append(y)
Xall=np.array(Xall)
Y_pred=np.array(Y_pred)
plt.figure(figsize = (6,6))
plt.plot(Xall[:, 0])
plt.ylabel('$X_a$')
plt.xlabel('Time Step k')
plt.show()
plt.plot(Xall[:, 1])
plt.ylabel('$X_b$')
plt.xlabel('Time Step k')
plt.show()
plt.plot(Xall[:, 2])
plt.ylabel('$X_c$')
plt.xlabel('Time Step k')
plt.show()
plt.plot(Xall[:, 3])
plt.ylabel('$X_e$')
plt.xlabel('Time Step k')
plt.show()
plt.plot(Xall[:, 4])
plt.ylabel('$X_g$')
plt.xlabel('Time Step k')
plt.show()
plt.plot(Xall[:, 5])
plt.ylabel('$X_p$')
plt.xlabel('Time Step k')
plt.show()
# -
# ### Generating Perturbed Inputs:
plt.plot(S1) #Fb
plt.plot(S2) #Tr
# +
b=ottoReactor(Tr)
Fb=4.789
Tr=362.85
for i in range(0, 1900):
b.euler(timestep, Fb, Tr, False)
x_steady = b.getState()
E_x = []
A, C = b.Jacobian(Fb)
# -
# ### KALMAN FILTER
# +
Phi=np.identity(6)+A*30
Tau=30*C
T=30
C = b.del_y()
r=ottoReactor(Tr)
X0=x_steady
P0=P00
Xstore=[]
ee=[] #Measurement Error
E_x=[] #Actual Error
Bk=[] #Bk
Rp=[] #Predicted spectral radii
Ru=[] #Updated spectral radii
X_est=X0
Cov_est_meas=P0
E=[]
for i in range(0, 1900):
    Fb=S1[i]
    Tr=S2[i]
#initial estimates
r.setState(X_est)
X_est+=T*r.dxdt(Fb)
Cov_est_meas=np.linalg.multi_dot([Phi, Cov_est_meas, np.transpose(Phi)])+Q
Max_eigen, eigenvector=np.linalg.eig(Cov_est_meas)
Rp.append(max(Max_eigen))
#Compute Kalman Gain Matrix
t1=np.linalg.multi_dot([C, Cov_est_meas, np.transpose(C)])+R
t1=np.linalg.inv(t1)
L=np.linalg.multi_dot([Cov_est_meas, np.transpose(C), t1])
#L=L*0
#Compute Innovation
    e=Y_pred[i]-np.dot(C, X_est)
ee.append(e)
#Update Estimates
X_est=X_est+np.dot(L, e)
t1=np.identity(6)-np.dot(L, C)
Cov_est_meas=np.dot(t1, Cov_est_meas)
Max_eigen, eigenvector=np.linalg.eig(Cov_est_meas)
Ru.append(max(Max_eigen))
#Storing estimated results
Xstore.append(X_est)
e_x=Xall[i]-X_est
E_x.append(e_x)
Pinverse=sp.inv(Cov_est_meas)
bk=np.dot(e_x, np.dot(Pinverse, e_x))
Bk.append(bk)
# +
E_x_kf=np.array(E_x)
bk_kf = np.array(Bk)
ee_kf=np.array(ee)
Rp_kf = np.array(Rp)
Ru_kf = np.array(Ru)
Xstore_kf=np.array(Xstore)
print(bk_kf)
# -
def IMlin(x, z):
    S=ottoReactor(362.85)  # the class requires Tr; using the nominal reactor temperature used elsewhere
S.setState(x)
A, B=S.Jacobian(z)
C=np.identity(6)
return A, B, C
# # Extended Kalman Filter
# +
b=ottoReactor(Tr)
x_steady = b.getState()
E_x = []
# +
T=30
r=ottoReactor(362.85)
X0=x_steady
P0=P00
Xstore=[]
ee=[] #Measurement Error
E_x=[] #Actual Error
Bk=[] #Bk
Rp=[] #Spectral Radii-Predicted
Ru=[] #Spectral Radii-Updated
X_est=X0
P_cov_est=P0
E=[]
for i in range(0, 1900):
r.setState(X_est)
A, C = r.Jacobian(Fb)
Phi=sp.expm(A*30)
Tau=30*C
C = r.del_y()
    Fb=S1[i]
    Tr=S2[i]
r.setState(X_est)
X_est+=T*r.dxdt(Fb)
P_cov_est=np.linalg.multi_dot([Phi, P_cov_est, np.transpose(Phi)])+Q
Max_eigen, eigenvector=np.linalg.eig(P_cov_est)
Rp.append(max(Max_eigen))
#Kalman Gain Matrix
t1=np.linalg.multi_dot([C, P_cov_est, np.transpose(C)])+R
t1=np.linalg.inv(t1)
L=np.linalg.multi_dot([P_cov_est, np.transpose(C), t1])
#Innovation
    e=Y_pred[i]-np.dot(C, X_est)
ee.append(e)
#Update Estimates
X_est=X_est+np.dot(L, e)
t1=np.identity(6)-np.dot(L, C)
P_cov_est=np.dot(t1, P_cov_est)
Max_eigen, eigenvector=np.linalg.eig(P_cov_est)
Ru.append(max(Max_eigen))
#estimated results
Xstore.append(X_est)
e_x=Xall[i]-X_est
E_x.append(e_x)
Pinverse=sp.inv(P_cov_est)
bk=np.dot(e_x, np.dot(Pinverse, e_x))
Bk.append(bk)
# -
E_x_ekf = np.array(E_x)
ee_ekf = np.array(ee)
bk_ekf = np.array(Bk)
Rp_ekf = np.array(Rp)
Ru_ekf = np.array(Ru)
Xstore_ekf = np.array(Xstore)
Xall_ekf = np.array(Xall)
# ### Predicted and Actual States plot
# +
plt.plot(Xall[:,0])
plt.plot(Xstore_kf[:,0])
plt.plot(Xstore_ekf[:,0])
plt.legend(['True value', 'KF', 'EKF'])
plt.show()
plt.plot(Xall[:,1])
plt.plot(Xstore_kf[:,1])
plt.plot(Xstore_ekf[:,1])
plt.legend(['True value', 'KF', 'EKF'])
plt.show()
plt.plot(Xall[:,2])
plt.plot(Xstore_kf[:,2])
plt.plot(Xstore_ekf[:,2])
plt.legend(['True value', 'KF', 'EKF'])
plt.show()
plt.plot(Xall[:,3])
plt.plot(Xstore_kf[:,3])
plt.plot(Xstore_ekf[:,3])
plt.legend(['True value', 'KF', 'EKF'])
plt.show()
plt.plot(Xall[:,4])
plt.plot(Xstore_kf[:,4])
plt.plot(Xstore_ekf[:,4])
plt.legend(['True value', 'KF', 'EKF'])
plt.show()
plt.plot(Xall[:,5])
plt.plot(Xstore_kf[:,5])
plt.plot(Xstore_ekf[:,5])
plt.legend(['True value', 'KF', 'EKF'])
plt.show()
# -
# ### Innovation Plot
# +
plt.plot(ee_kf[:,0])
plt.plot(ee_ekf[:,0])
plt.legend(['KF', 'EKF'])
plt.show()
plt.plot(ee_kf[:,1])
plt.plot(ee_ekf[:,1])
plt.legend(['KF', 'EKF'])
plt.show()
plt.plot(ee_kf[:,2])
plt.plot(ee_ekf[:,2])
plt.legend(['KF', 'EKF'])
plt.show()
plt.plot(ee_kf[:,3])
plt.plot(ee_ekf[:,3])
plt.legend(['KF', 'EKF'])
plt.show()
# -
# ### Estimation error: xtrue - xest
# +
plt.plot(E_x_kf[:,0])
plt.plot(E_x_ekf[:,0])
plt.legend(['KF', 'EKF'])
plt.show()
plt.plot(E_x_kf[:,1])
plt.plot(E_x_ekf[:,1])
plt.legend(['KF', 'EKF'])
plt.show()
plt.plot(E_x_kf[:,2])
plt.plot(E_x_ekf[:,2])
plt.legend(['KF', 'EKF'])
plt.show()
plt.plot(E_x_kf[:,3])
plt.plot(E_x_ekf[:,3])
plt.legend(['KF', 'EKF'])
plt.show()
plt.plot(E_x_kf[:,4])
plt.plot(E_x_ekf[:,4])
plt.legend(['KF', 'EKF'])
plt.show()
plt.plot(E_x_kf[:,5])
plt.plot(E_x_ekf[:,5])
plt.legend(['KF', 'EKF'])
plt.show()
# -
# ### spectral radii for kf
plt.plot(Rp_kf[0:100])
plt.plot(Ru_kf[0:100])
plt.legend(['predicted', 'updated'])
plt.show()
# ### Spectral radii for EKF: max(|eigenvalue|) of the covariance matrix
plt.plot(Rp_ekf[0:100])
plt.plot(Ru_ekf[0:100])
plt.legend(['predicted', 'updated'])
plt.show()
# ### RMSE
# +
print('S.D. of estimation error for xa with KF is: ')
print(np.std(E_x_kf[:, 0]))
print()
print('S.D. of estimation error for xb with KF is: ')
print(np.std(E_x_kf[:, 1]))
print()
print('S.D. of estimation error for xc with KF is: ')
print(np.std(E_x_kf[:, 2]))
print()
print('S.D. of estimation error for xe with KF is: ')
print(np.std(E_x_kf[:, 3]))
print()
print('S.D. of estimation error for xg with KF is: ')
print(np.std(E_x_kf[:, 4]))
print()
print('S.D. of estimation error for xp with KF is: ')
print(np.std(E_x_kf[:, 5]))
print()
# +
print('S.D. of estimation error for xa with EKF is: ')
print(np.std(E_x_ekf[:, 0]))
print()
print('S.D. of estimation error for xb with EKF is: ')
print(np.std(E_x_ekf[:, 1]))
print()
print('S.D. of estimation error for xc with EKF is: ')
print(np.std(E_x_ekf[:, 2]))
print()
print('S.D. of estimation error for xe with EKF is: ')
print(np.std(E_x_ekf[:, 3]))
print()
print('S.D. of estimation error for xg with EKF is: ')
print(np.std(E_x_ekf[:, 4]))
print()
print('S.D. of estimation error for xp with EKF is: ')
print(np.std(E_x_ekf[:, 5]))
print()
# -
# ### Normalised estimation error squared
plt.plot(bk_kf)
plt.plot(bk_ekf)
ome approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset.
# +
import torch.utils.data as data
import numpy as np
import os
import requests
import time
# Open the training log file.
f = open(log_file, 'w')
old_time = time.time()
response = requests.request("GET",
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
headers={"Metadata-Flavor":"Google"})
for epoch in range(1, num_epochs+1):
for i_step in range(1, total_step+1):
if time.time() - old_time > 60:
old_time = time.time()
requests.request("POST",
"https://nebula.udacity.com/api/v1/remote/keep-alive",
headers={'Authorization': "STAR " + response.text})
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Zero the gradients.
decoder.zero_grad()
encoder.zero_grad()
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
outputs = decoder(features, captions)
# Calculate the batch loss.
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
# Backward pass.
loss.backward()
# Update the parameters in the optimizer.
optimizer.step()
# Get training statistics.
stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
# Print training statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# Print training statistics to file.
f.write(stats + '\n')
f.flush()
# Print training statistics (on different line).
if i_step % print_every == 0:
print('\r' + stats)
# Save the weights.
if epoch % save_every == 0:
torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))
# Close the training log file.
f.close()
# -
# <a id='step3'></a>
# ## Step 3: (Optional) Validate your Model
#
# To assess potential overfitting, one approach is to assess performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.
#
# If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:
# - the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and
# - the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.
#
# The suggested approach to validating your model involves creating a json file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as TEOR and Cider) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/#download) for the COCO dataset.
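# For reference, a sketch of computing a corpus-level BLEU score with NLTK once you have generated captions; the `references`/`candidates` toy data below are placeholders, not project code.
# +
from nltk.translate.bleu_score import corpus_bleu
# references: for each image, a list of reference captions (each a list of tokens)
# candidates: for each image, the generated caption (a list of tokens)
references = [[['a', 'dog', 'runs', 'on', 'the', 'beach'],
               ['a', 'dog', 'running', 'along', 'a', 'beach']]]
candidates = [['a', 'dog', 'runs', 'along', 'the', 'beach']]
bleu_4 = corpus_bleu(references, candidates)  # default weights give BLEU-4
print('BLEU-4: %.3f' % bleu_4)
# -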
# +
# (Optional) TODO: Validate your model.
| 17,150 |
/book/tutorials/database/4_get_spiral_example.ipynb
|
a8711cbdaac3f7d5a524336d7cdc51837b415311
|
[
"MIT"
] |
permissive
|
snowex-hackweek/website2022
|
https://github.com/snowex-hackweek/website2022
| 3 | 34 |
MIT
| 2023-07-12T15:48:33 | 2023-02-08T17:21:50 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 5,560 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Forming Queries: Example Visualizing Depths
#
# During the SnowEx campaigns a TON of manual snow depths were collected, and surveys for hackweek showed an overwhelming interest in the manual
# snow depths dataset. This tutorial shows how easy it is to get at that data in the database while learning how to build queries.
#
# Don't forget your [cheat sheets](https://snowexsql.readthedocs.io/en/latest/cheat_sheet.html)!
#
# **Goal**: Visualize a small subset of snow depths
#
# **Approach**:
#
# 1. Connect to the DB
# 2. Build a query filtering by dataset and date
# 3. Convert to a GeoDataFrame and plot
# ## Process
# ### Step 1: Get connected
# +
# Import the function to get connect to the db
from snowexsql.db import get_db
# Import our class for the points table
from snowexsql.data import PointData
# Import a useful function to format that data into a dataframe
from snowexsql.conversions import query_to_geopandas
# Import some tools to build dates
from datetime import date
# This is what you will use for all of hackweek to access the db
db_name = 'snow:[email protected]/snowex'
# -
# ### Step 2: Build a query
# +
# Pick a dataset
dataset = 'depth'
# Pick a date
collection_date = date(2020, 2, 7)
# Site name
site_name = "Grand Mesa"
# Get a session
engine, session = get_db(db_name)
# The part inside the query function is what we want back, in this case all columns for the point data
qry = session.query(PointData)
# Filter by site
qry = qry.filter(PointData.site_name == site_name)
# We then want to filter by the selected the data type depth.
qry = qry.filter(PointData.type == dataset)
# Filter by a date
qry = qry.filter(PointData.date == collection_date)
# Limit it to a couple hundred - just for exploration
qry = qry.limit(200)
# Execute the query and convert to geopandas in one handy function
df = query_to_geopandas(qry, engine)
# how many did we retrieve?
print(f'{len(df.index)} records returned!')
session.close()
# -
# ### Step 3: Plot it!
# + tags=["nbsphinx-gallery", "nbsphinx-thumbnail"]
# Get the Matplotlib Axes object from the dataframe object, color the points by snow depth value
ax = df.plot(column='value', legend=True, cmap='PuBu')
# Use non-scientific notation for x and y ticks
ax.ticklabel_format(style='plain', useOffset=False)
# Set the various plots x/y labels and title.
ax.set_title(f'{len(df.index)} {dataset.title()}s collected on {collection_date.strftime("%Y-%m-%d")}')
ax.set_xlabel('Easting [m]')
ax.set_ylabel('Northing [m]')
# Close the session to avoid hanging transactions
session.close()
# -
# Let's try filtering the data so that it shows only a depth spiral.
# Let's see what instruments are available
result = session.query(PointData.instrument).filter(PointData.type == 'depth').distinct().all()
print(result)
# **Try This:**
# Go back and add a filter to reduce to just one spiral. Do you know what instrument was used to make depth spirals?
#
#
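# One way to add that filter is sketched below; the instrument string is only an illustration -- check the `result` list above for the value actually used for depth spirals.
# +
# Re-open a session and stack an instrument filter on top of the earlier query
engine, session = get_db(db_name)
qry = session.query(PointData)
qry = qry.filter(PointData.site_name == site_name)
qry = qry.filter(PointData.type == dataset)
qry = qry.filter(PointData.date == collection_date)
qry = qry.filter(PointData.instrument == 'magnaprobe')  # placeholder instrument name
df_spiral = query_to_geopandas(qry, engine)
print(f'{len(df_spiral.index)} records returned!')
session.close()
# -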
# ## Recap
# You just plotted snow depths and reduce the scope of the data by compounding filters on it
#
# **You should know:**
# * How to build queries using filtering
# * Where a useful tools like [`query_to_geopandas`](https://snowexsql.readthedocs.io/en/latest/snowexsql.html#snowexsql.conversions.query_to_geopandas) live in the snowexsql library
#
#
# If you don't feel comfortable with these, you are probably not alone, let's discuss it!
#
| 3,678 |
/notebooks/Functions.ipynb
|
57e066b1e0a73454768d3372da98d1c0ad1ba5df
|
[] |
no_license
|
wildart/CSCI271
|
https://github.com/wildart/CSCI271
| 0 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.cpp
| 42,309 |
// -*- coding: utf-8 -*-
#include <iostream>
using namespace std;
// # Functions
// ## Intro
//
// Develop and maintain a large program by constructing it from small, simple pieces, or components.
//
// divide and conquer
//
// Emphasize how to declare and use **functions** to facilitate the design, implementation, operation and maintenance of large programs.
//
// - Function prototypes
// - Function overloading
// - Function templates
// - Recursion
// math f(x) = x^2
double val = x*x;
double val = f(x);
// ## General
//
// Functions allow you to **modularize** a program by separating its tasks into self-contained units.
//
// - To promote software reusability, every function should be limited to performing a single, well-defined task, and the name of the function should express that task effectively.
//
// A function is invoked by a **function call**, and when the called function completes its task:
// - it either **returns a result**,
// - or simply **returns control** to the caller.
//
// This hiding of implementation details promotes good software engineering.
// ## Global Functions
//
// Sometimes functions are not members of a class. They are called **global functions**.
//
// *Example:* The `<cmath>` header provides a collection of functions that enable you to perform common mathematical calculations.
// - All functions in the `<cmath>` header are global functions
// - Each is called simply by specifying the name of the function followed by parentheses containing the function's arguments.
//
// ## Problem
//
// Write a program that returns the largest of three input integer numbers.
// +
#include <iostream>
using namespace std;
int main()
{
int a, b, c;
cout << "Enter three integer numbers: ";
cin >> a >> b >> c;
int maxval = a;
// determine if b is larger than maximum
if (maxval < b) {
maxval = b; // make b new maximum
}
// determine if c is larger than maximum
if (maxval < c) {
maxval = c; // make c new maximum
}
cout << "Maximum number is " << maxval << endl;
}
// -
// Rewrite above problem with function.
// +
#include <iostream>
using namespace std;
int maximum(int, int, int z); // function prototype
int main()
{
int a, b, c;
cout << "Enter three integer numbers: ";
cin >> a >> b >> c;
cout << "Maximum number is " << maximum(a,b,c) << endl;
}
int maximum(int x, int y, int z) // function implementation
{
int maxval = x;
// determine if y is larger than maximum
if (maxval < y) {
int m;
maxval = y; // make y new maximum
}
// determine if z is larger than maximum
if (maxval < z) {
maxval = z; // make z new maximum
}
return maxval;
}
// -
// ## Function Prototypes
//
// You must either **define the function before using it** or you must **declare that the function exists** through function prototype:
// ```c++
// <return_type> <function_name>(<par1_type>, <par2_type>, ...);
// ```
// A function prototype is a declaration of a function tells the compiler
// - the function's name,
// - its return type,
// - and the types of its parameters.
// ## Function Prototypes (cont.)
//
// This is a function prototype, which describes the maximum function without revealing its implementation.
int maximum(int, int y, int z); // function prototype
// Above prototype indicates that the function returns an `int`, has the name `maximum` and requires **three** `int` parameters to perform its task.
// - The function prototype is the same as the first line of the corresponding function definition, but ends with a required semicolon.
// ## Function Prototypes (cont.)
//
// A function prototype is required unless the function is defined before it's used.
// - When you use a standard library function like `sqrt`, you do not have access to the function's definition, therefore it cannot be defined in your code before you call the function.
// - Instead, you must include its corresponding header (`<cmath>`), which contains the function's prototype.
//
// **Always provide function prototypes**, even though it's possible to omit them when functions are defined before they're used.
// - Providing the prototypes avoids tying the code to the order in which functions are defined (which can easily change as a program evolves).
// ## Tips
//
// - Parameter names in function prototypes are optional (they're ignored by the compiler), but many programmers use these names for documentation purposes.
//
// - Declaring function parameters of the same type as `int x, y` instead of `int x, int y` is a syntax error - a **type** is required for each parameter in the parameter list.
//
// - Compilation (linker) errors occur if the function prototype, header and calls do not all agree in the number, type and order of arguments and parameters, and in the return type.
// ## Tips (cont.)
//
// - A function that has many parameters may be performing too many tasks. Consider dividing the function into smaller functions that perform the separate tasks. Limit the function header to one line if possible.
//
// - Multiple parameters are specified in both the function prototype and the function header as a comma-separated list, as are multiple arguments in a function call.
//
// - In a function that **does not return** a result (i.e., it has a void return type), we showed that control returns when the program reaches the function-ending right brace.
// - You also can explicitly return control to the caller by executing the statement `return;`
// ## Argument Coercion
//
// An important feature of function prototypes is **argument coercion**:
// - forcing arguments to the appropriate types specified by the parameter declarations.
// - these conversions occur as specified by C++'s promotion rules.
//
// The promotion rules indicate how to convert between types without losing data.
// - The promotion rules apply to expressions containing values of two or more data types.
// - The type of each value in a mixed-type expression is promoted to the "highest" type in the expression.
// - Converting values to lower fundamental types can result in incorrect values.
// - a value can be converted to a lower fundamental type only by explicitly assigning the value to a variable of lower type or by using a cast operator.
// ## Argument Coercion (cont.)
//
// Lists the arithmetic data types in order from "highest type" to "lowest type."
// [Figure 1.1: promotion hierarchy for arithmetic data types]
// ## Random Number Generator
//
// The function **rand** generates an unsigned integer between `0` and `RAND_MAX`.
//
// - The function prototype for the **rand** function is in `<cstdlib>`.
// - The value of `RAND_MAX` must be at least 32767 - the maximum positive value for a two-byte (16-bit) integer (a symbolic constant defined in the `<cstdlib>` header file).
// - For GNU C++, the value of `RAND_MAX` is 2147483647; for Visual Studio, the value of `RAND_MAX` is 32767.
// - If **rand** truly produces integers at random, every number between `0` and `RAND_MAX` has an equal chance (or probability) of being chosen each time **rand** is called.
//
// **rand** generates pseudorandom numbers:
// - a sequence of numbers that appears to be random.
// - sequence repeats itself each time the program executes.
// ## Random Number Generator (cont.)
//
// Program can be conditioned to produce a different sequence of random numbers for each execution.
// - This is called *randomizing* and is accomplished with the C++ Standard Library function **srand**.
// - Takes an unsigned integer argument and seeds **rand** to produce a different sequence of random numbers for each execution.
//
// To randomize without having to enter a seed use
srand(static_cast<unsigned>(time(0)));
// - Causes the computer to read its clock to obtain the value for the seed.
// - Function **time** (with the argument 0 as written in the preceding statement) returns the current time as the number of seconds since January 1, 1970.
// - The function prototype for time is in `<ctime>`.
{
    // print five pseudorandom values scaled to the range [0, 1]
    for (int i = 0; i < 5; ++i)
        cout << rand() / (float)RAND_MAX << endl;
}
#include <cstdlib>
#include <ctime> // required for time()
{
    // seed the generator, then print five pseudorandom values in the range [0, 2]
    srand(static_cast<unsigned>(time(0)));
    for (int i = 0; i < 5; ++i)
        cout << rand() % 3 << endl;
}
// ## Scope
//
// The portion of the program where an identifier can be used is known as its **scope**.
// - block scope
// - global namespace scope
// ## Scope (cont.)
//
// Identifiers declared **inside** a block have **block scope**, which begins at the identifier's declaration and ends at the terminating right brace (}) of the enclosing block.
// - Local variables have block scope, as do function parameters.
// - Any block can contain variable declarations.
// - In nested blocks, if an identifier in an outer block has the same name as an identifier in an inner block, the one in the outer block is "hidden" until the inner block terminates.
// - The inner block "sees" its own local variable's value and not that of the enclosing block's identically named variable.
//
// _Note: Avoid variable names in inner scopes that hide names in outer scopes. Most compilers will warn you about this issue._
// ## Example
{
int a{1};
{
int b{2};
{
int c{3};
}
// cout << c << endl; // error
}
cout << a << endl;
// cout << b << endl; // error
}
// ## Scope (cont.)
//
// An identifier declared **outside** any function or class has **global namespace scope**.
// - "known" in all functions from the point at which it’s declared until the end of the file.
// - Function definitions, function prototypes placed outside a function, class definitions and global variables all have global namespace scope.
// - Global variables are created by placing variable declarations outside any class or function definition. Such variables retain their values throughout a program’s execution.
//
// _Note: Variables used only in a particular function should be declared as local variables in that function rather than as global variables._
// +
// Scoping example.
#include <iostream>
using namespace std;
void useLocal(); // function prototype
void useStaticLocal(); // function prototype
void useGlobal(); // function prototype
int x{1}; // global variable
int main() {
cout << "global x in main is " << x << endl;
int x{5}; // local variable to main
cout << "local x in main's outer scope is " << x << endl;
{ // block starts a new scope
int x{7}; // hides both x in outer scope and global x
cout << "local x in main's inner scope is " << x << endl;
}
cout << "local x in main's outer scope is " << x << endl;
useLocal(); // useLocal has local x
useStaticLocal(); // useStaticLocal has static local x
useGlobal(); // useGlobal uses global x
useLocal(); // useLocal reinitializes its local x
useStaticLocal(); // static local x retains its prior value
useGlobal(); // global x also retains its prior value
cout << "\nlocal x in main is " << x << endl;
}
// useLocal reinitializes local variable x during each call
void useLocal() {
int x{25}; // initialized each time useLocal is called
cout << "\nlocal x is " << x << " on entering useLocal" << endl;
++x;
cout << "local x is " << x << " on exiting useLocal" << endl;
}
// useStaticLocal initializes static local variable x only the
// first time the function is called; value of x is saved
// between calls to this function
void useStaticLocal() {
static int x{50}; // initialized first time useStaticLocal is called
cout << "\nlocal static x is " << x << " on entering useStaticLocal" << endl;
++x;
cout << "local static x is " << x << " on exiting useStaticLocal" << endl;
}
// useGlobal modifies global variable x during each call
void useGlobal() {
cout << "\nglobal x is " << x << " on entering useGlobal" << endl;
x *= 10;
cout << "global x is " << x << " on exiting useGlobal" << endl;
}
// -
// ### Unary Scope Resolution Operator
//
// - C++ provides the **unary scope resolution operator (::)** to access a global variable when a local variable of the same name is in scope.
// - Using the unary scope resolution operator (::) with a given variable name is optional when the only variable with that name is a global variable.
// +
// Unary scope resolution operator.
#include <iostream>
using namespace std;
int number{7}; // global variable named number
int main() {
double number{10.5}; // local variable named number
// display values of local and global variables
cout << "Local double value of number = " << number
<< "\nGlobal int value of number = " << ::number << endl;
}
// -
// ## Tips
//
// - Always using the unary scope resolution operator (::) to refer to global variables (even if there is no collision with a local-variable name) makes it clear that you're intending to access a global variable rather than a local variable.
// - Always using the unary scope resolution operator (::) to refer to a global variable eliminates logic errors that might occur if a nonglobal variable hides the global variable.
// - Avoid using variables of the same name for different purposes in a program. Although this is allowed in various circumstances, it can lead to errors.
// ## Function Call Stack
//
// To understand how C++ performs function calls, we first need to consider a data structure (i.e., collection of related data items) known as a **stack**. It is analogous to a pile of dishes.
// - When a dish is placed on the pile, it's normally placed at the top - referred to as **pushing**.
// - Similarly, when a dish is removed from the pile, it's normally removed from the top - referred to as **popping**.
//
// Last-in, first-out (**LIFO**) data structures - the last item pushed (inserted) is the first item popped (removed).
//
// Function call stack mechanism:
//
// - supports the function call/return mechanism
// - supports the creation, maintenance and destruction of each called function's automatic variables
// ## Stack Frames
//
// - Each function eventually must return control to the function that called it.
// - Each time a function calls another function, an entry is pushed onto the function call stack.
// - This entry, called a stack frame or an activation record, contains the return address that the called function needs in order to return to the calling function.
// - When a function call returns, the stack frame for the function call is popped, and control transfers to the return address in the popped stack frame.
// ## Stack Overflow
//
// - The amount of memory in a computer is finite, so only a certain amount of memory can be used to store activation records on the function call stack.
// - If more function calls occur than can have their activation records stored on the function call stack, a fatal error known as stack overflow occurs.
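//
// As a cautionary sketch (not from the original examples), the hypothetical function below recurses with no terminating condition, so each call pushes another stack frame; the call in main is left commented out because executing it would eventually exhaust the stack:
// +
void runForever() {
   runForever(); // every call pushes a new stack frame - a stack overflow eventually occurs
}

int main() {
   // runForever(); // uncommenting this line would crash the program with a stack overflow
}
// -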
// ## Example
// +
#include <iostream>
using namespace std;
int square(int); // prototype
int main() {
int a = 10;
cout << a << " squared: " << square(a) << endl;
}
int square(int x){ // x is a local variable
return x*x;
}
// -
// First, the operating system calls `main` - this pushes an activation record onto the stack.
// - The activation record tells main how to return to the operating system (i.e., transfer to return address R1) and contains the space for main's automatic variable (i.e., a, which is initialized to 10).
//
// 
// Function `main` - before returning to the operating system - now calls function `square`.
// - This causes a stack frame for `square` to be pushed onto the function call stack.
// - This stack frame contains the return address that square needs to return to main (i.e., R2) and the memory for square's automatic variable (i.e., x).
//
// 
// After `square` calculates the square of its argument, it needs to return to `main` - and no longer needs the memory for its automatic variable `x`. So `square`'s stack frame is popped from the stack - giving `square` the return location in `main` (i.e., R2) and discarding `square`'s automatic variable.
//
// 
// ## References and Reference Parameters
//
// Two ways to pass arguments to functions in many programming languages are
// - **pass-by-value**
// - **pass-by-reference**
//
// 
// ## Pass-by-Value
//
// When an argument is __passed by value__, a copy of the argument's value is made and passed (on the function call stack) to the called function.
// - Changes to the copy do not affect the original variable's value in the caller.
// - To specify a reference to a constant, place the `const` qualifier before the type specifier in the parameter declaration.
//
// - One disadvantage of pass-by-value is that, if a large data item is being passed, copying that data can take a considerable amount of execution time and memory space.
// ## Pass-by-Reference
//
// With __pass-by-reference__, the caller gives the called function the ability to access the caller's data directly, and to modify that data.
// - A reference parameter is an alias for its corresponding argument in a function call.
// - To indicate that a function parameter is passed by reference, simply follow the parameter's type in the function prototype by an ampersand (&); use the same convention when listing the parameter's type in the function header.
// - Pass-by-reference is good for performance reasons, because it can eliminate the pass-by-value overhead of copying large amounts of data.
// +
// Passing arguments by value and by reference.
#include <iostream>
using namespace std;
int squareByValue(int); // function prototype (value pass)
void squareByReference(int&); // function prototype (reference pass)
int main() {
int x{2}; // value to square using squareByValue
int z{4}; // value to square using squareByReference
// demonstrate squareByValue
cout << "x = " << x << " before squareByValue\n";
cout << "Value returned by squareByValue: "
<< squareByValue(x) << endl;
cout << "x = " << x << " after squareByValue\n" << endl;
// demonstrate squareByReference
cout << "z = " << z << " before squareByReference" << endl;
squareByReference(z);
cout << "z = " << z << " after squareByReference" << endl;
}
// squareByValue multiplies number by itself, stores the
// result in number and returns the new value of number
int squareByValue(int number) {
return number *= number; // caller's argument not modified
}
// squareByReference multiplies numberRef by itself and stores the result
// in the variable to which numberRef refers in function main
void squareByReference(int& numberRef) {
numberRef *= numberRef; // caller's argument modified
}
// -
// ## References
//
// - References can also be used as aliases for other variables within a function.
// - Reference variables must be initialized in their declarations and cannot be reassigned as aliases to other variables.
// - Once a reference is declared as an alias for another variable, all operations supposedly performed on the alias are actually performed on the original variable.
// - To specify that a reference parameter should not be allowed to modify the corresponding argument, place the `const` qualifier before the type name in the parameter's declaration.
// - `string` objects can be large, so they should be passed to functions by reference.
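//
// A brief sketch of these points (our own illustration): `c` below is declared as an alias for `count`, and the hypothetical function `printLength` receives its `string` argument by reference to `const`, so no copy is made and the argument cannot be modified:
// +
#include <iostream>
#include <string>
using namespace std;

void printLength(const string& text) { // pass-by-reference-to-const: no copy, read-only access
   cout << "Length: " << text.size() << endl;
}

int main() {
   int count{3};
   int& c{count}; // c is an alias for count and must be initialized in its declaration
   ++c;           // actually increments count
   cout << "count = " << count << endl; // prints 4

   string message{"hello world"};
   printLength(message);
}
// -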
// ## References (cont.)
//
// Functions can return references, but this can be dangerous.
// - When returning a reference to a variable declared in the called function, the variable should be declared static in that function.
// - Returning a reference to a local variable in a called function is a **logic error** for which compilers typically issue a warning.
// - Compilation warnings indicate potential problems, so most software-engineering teams have policies requiring code to compile without warnings.
//
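// As a hedged sketch of this point (not from the original examples), the hypothetical function `counter` returns a reference to a static local variable, which is safe because that variable outlives each call; the commented-out variant would return a reference to a destroyed local - the logic error described above:
// +
#include <iostream>
using namespace std;

int& counter() {
   static int calls{0}; // static local: retains its value between calls
   ++calls;
   return calls;        // safe - the referenced object still exists after the function returns
}

// int& badReference() {
//    int local{42};
//    return local; // logic error: local is destroyed when the function returns
// }

int main() {
   counter();
   counter();
   cout << "counter called " << counter() << " times" << endl; // prints 3
}
// -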
// ## Function Overloading
//
// - C++ enables several functions of the same name to be defined, as long as they have different signatures.
// - This is called **function overloading**.
// - The C++ compiler selects the proper function to call by examining the number, types and order of the arguments in the call.
// - Function overloading is used to create several functions of the same name that perform similar tasks, but on different data types.
// +
// Overloaded square functions.
#include <iostream>
using namespace std;
// function square for int values
int square(int x) {
cout << "square of integer " << x << " is ";
return x * x;
}
// function square for double values
double square(double y) {
cout << "square of double " << y << " is ";
return y * y;
}
int main() {
cout << square(7); // calls int version
cout << endl;
cout << square(7.5); // calls double version
cout << endl;
}
// -
// ## Function Overloading (cont.)
//
// Compiler differentiates among overloaded functions.
// - Overloaded functions are distinguished by their signatures.
// - A signature is a combination of a function's name and its parameter types (in order).
// - The compiler encodes each function identifier with the types of its parameters.
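//
// To make the signature rule concrete, here is a small sketch of our own: the two `twice` functions below differ in their parameter types, so they are valid overloads, whereas a variant differing only in return type (shown commented out) would not compile:
// +
#include <iostream>
using namespace std;

int twice(int x) { return 2 * x; }
double twice(double x) { return 2 * x; } // valid overload - the parameter type differs

// double twice(int x) { return 2.0 * x; } // error: differs from int twice(int) only by return type

int main() {
   cout << twice(5) << " " << twice(2.5) << endl; // calls the int version, then the double version
}
// -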
// ## Function Templates
//
// - If the program logic and operations are identical for each data type, overloading may be performed more compactly and conveniently by using function templates.
// - You write a single function template definition.
// - Given the argument types provided in calls to this function, C++ automatically generates separate **function template specializations** to handle each type of call appropriately.
// - All function template definitions begin with the **template** keyword followed by a **template parameter list** enclosed in angle brackets (< and >).
// - Every parameter in the template parameter list is preceded by keyword typename or keyword class.
// - The type parameters are placeholders for fundamental types or user-defined types.
// - Used to specify the types of the function’s parameters, to specify the function’s return type and to declare variables within the body of the function definition.
// Function template maximum header.
template <typename T> // or template<class T>
T maximum(T value1, T value2, T value3) {
T maximumValue{value1}; // assume value1 is maximum
// determine whether value2 is greater than maximumValue
if (value2 > maximumValue) {
maximumValue = value2;
}
// determine whether value3 is greater than maximumValue
if (value3 > maximumValue) {
maximumValue = value3;
}
return maximumValue;
}
maximum(1, 2, 3)
maximum(1.0, 3.0, 3.5)
// force type parameter during the function call
maximum<int>(1.0, 3.0, 3.5)
// ## Recursion
//
// A **recursive function** is a function that calls itself, either directly, or indirectly (through another function).
//
// Recursive problem-solving approaches have a number of elements in common.
// - A recursive function is called to solve a problem.
// - The function actually knows how to solve only the simplest case(s), or so-called base case(s).
// - If the function is called with a **base case**, the function simply returns a result (value).
// - If the function is called with a more complex problem, it typically divides the problem into two conceptual pieces - a piece that the function knows how to do and a piece that it does not know how to do.
// - This new problem looks like the original, so the function calls a copy of itself to work on the smaller problem - this is referred to as a **recursive call** and is also called the **recursion step**.
//
// ## Example: Factorial
//
// Write a function that calculates a factorial value:
//
// $$n! = 1 \cdot 2 \cdot \cdots \cdot n$$
//
// - This is an iterative approach
// iterative definition of factorial function
unsigned long factorial(unsigned int number) {
unsigned long result{1};
// iterative factorial calculation
for (unsigned int i{number}; i >= 1; --i) {
result *= i;
}
return result;
}
// Recursive formula for the factorial function:
//
// $$n! = n \cdot (n-1)!$$
//
// - Base case: if $n$ is 0 or 1, then $n!$ is 1
// recursive definition of factorial function
unsigned long factorial(unsigned long number) {
if (number <= 1) { // test for base case
return 1; // base cases: 0! = 1 and 1! = 1
}
else { // recursion step
return number * factorial(number - 1);
}
}
// Calling the factorial function
#include <iostream>
using namespace std;
int main() {
unsigned int n;
cout << "Enter an integer number: ";
cin >> n;
cout << "Factorial of "<< n
<< " is " << factorial(n) << endl;
return 0;
}
// 
// ## Recursion
//
// - The recursion step often includes the keyword `return`, because its result will be combined with the portion of the problem the function knew how to solve to form the result passed back to the original caller, possibly `main`.
// - The recursion step executes while the original call to the function is still "open", i.e., it has not yet finished executing.
// - The recursion step can result in many more such recursive calls.
// ## Recursion vs Iteration
//
// Both iteration and recursion are based on a control statement:
// - Iteration uses an iteration statement
// - Recursion uses a selection statement.
//
// Both iteration and recursion involve iteration:
// - Iteration explicitly uses an iteration statement
// - Recursion achieves repetition through repeated function calls.
//
// Iteration and recursion each involve a termination test:
// - Iteration terminates when the loop-continuation condition fails
// - Recursion terminates when a base case is recognized.
// ## Recursion vs Iteration (cont.)
//
// Counter-controlled iteration and recursion both gradually approach termination:
// - Iteration modifies a counter until the counter assumes a value that makes the loop-continuation condition fail
// - Recursion produces simpler versions of the original problem until the base case is reached.
//
// Both iteration and recursion can occur infinitely:
// - An infinite loop occurs with iteration if the loop-continuation test never becomes false
// - Infinite recursion occurs if the recursion step does not reduce the problem during each recursive call in a manner that converges on the base case.
| 26,523 |
/Additional.ipynb
|
eb6052bf93336b9a2eed25e4aaef59f7a23067ee
|
[] |
no_license
|
JediKnightChan/CAD-Research
|
https://github.com/JediKnightChan/CAD-Research
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 262,500 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# +
D = 2
K = 3
N = int(K*1e3)
X0 = np.random.randn((N//K),D) + np.array([2,2])
X1 = np.random.randn((N//K),D) + np.array([0,-2])
X2 = np.random.randn((N//K),D) + np.array([-2,2])
X = np.vstack((X0,X1,X2))
y = np.array([0]*(N//K) + [1]*(N//K) + [2]*(N//K))
plt.figure()
plt.scatter(X[:,0], X[:,1], c = y, alpha = 0.5)
# +
def one_hot_encode(y):
N = len(y)
K = len(set(y))
Y = np.zeros((N,K))
for i in range(N):
Y[i,y[i]] = 1
return Y
def softmax(H):
eH = np.exp(H)
return eH/eH.sum(axis = 1, keepdims = True)
def feed_forward(X, W1, b1, W2, b2):
Z = np.tanh(np.matmul(X,W1) + b1)
P_hat = softmax(np.matmul(Z,W2) + b2)
return Z, P_hat
def cross_entropy(Y, P_hat):
return -np.sum(Y*np.log(P_hat))
def accuracy(y, P_hat):
return np.mean(y == P_hat.argmax(axis = 1))
# +
M = 4
W1 = np.random.randn(D,M)
b1 = np.random.randn(M)
W2 = np.random.randn(M,K)
b2 = np.random.randn(K)
# -
Z, P_hat = feed_forward(X, W1, b1, W2, b2)
print("Accuracy: {:0.4f}".format(accuracy(y, P_hat)))
| 1,404 |
/2. 고급스크래핑/2-1) 로그인이 필요한 사이트에서 다운받기.ipynb
|
08c74f821e1d244c93413e718e1bd45d1c14207b
|
[] |
no_license
|
wonji0129/Study_Play_with_data
|
https://github.com/wonji0129/Study_Play_with_data
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 4,887 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 2-1 Downloading from sites that require login
# ## HTTP communication (Hyper Text Transfer Protocol)
# - A communication protocol created so that a server can deliver HTML documents (well-designed documents) to users
# - A connectionless and stateless protocol
# > Because HTTP is stateless, it does not keep login state information; without cookies and sessions you would have to log in again every time you move to another page while checking a bulletin board or your mail.
# - Domain, e.g. http://www.testsitexxx.com:8080/test.html
# > schema :// hostname : port / page
# > default port: 80 (used when no port is shown)
# - Fundamentally stateless communication (no state is stored at all: this is what makes it possible to handle requests and responses from a very large number of users)
#
# ### Cookies
# - The name comes from Hansel and Gretel dropping cookie crumbs to mark the path they had walked
# - A mechanism for temporarily storing data about a visitor who reaches a site through a web browser
# > A cookie is a small data file of keys and values stored on the client. It contains a name, a value, an expiration date and path information. The server can create a cookie on the client by using the Set-Cookie attribute of the response header. Once created, the browser puts the cookie into the request header and sends it to the server on every request, without the user having to do anything.
#
# ### Sessions
# - Use cookies to store data
# A session is information stored in server memory. Because it is kept on the server, unlike a cookie, the user's information is not exposed.
# - Login processing flow
# 1. The user enters an id/password on the login page and clicks the login button
# 2. The server checks the submitted id/password and, if the user exists, creates a unique session ID in server memory and stores its mapping to the user id
# 3. The session ID is stored on the client as a cookie
# 4. On every request, the server checks the cookie (session ID) in the request header and identifies the user mapped to that session ID
# > Sessions live in server memory, but it is important to note that the session ID itself is still stored on the client as a cookie.
# > Because a session ID can still be intercepted and tampered with, this approach also has security weaknesses that need to be addressed.
#
#
# ### Using the urllib.request package
# - Provides HTTP functionality across the board
# - GET
# - POST
# - DELETE
#
# ### Examples of using open APIs
# - https://developer.github.com/v3/
# - https://www.instagram.com/developer/
# - http://openapi.11st.co.kr/openapi/OpenApiFrontMain.tmall
#
# +
# Log in with Python
# Import the modules needed for logging in
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
# Set the user id and password [use your own] --- (※1)
USER = "hyungsok7"
PASS = "XXXXX"
# Start a session --- (※2)
session = requests.session()
# Log in --- (※3)
login_info = {
    "m_id": USER,     # user id
    "m_passwd": PASS  # password
}
url_login = "http://www.hanbit.co.kr/member/login_proc.php"
res = session.post(url_login, data=login_info)
res.raise_for_status() # raises an exception if an error occurred
# Access the "My page" section --- (※4)
url_mypage = "http://www.hanbit.co.kr/myhanbit/myhanbit.html"
res = session.get(url_mypage)
res.raise_for_status()
# Extract the mileage and e-coin values --- (※5)
soup = BeautifulSoup(res.text, "html.parser")
mileage = soup.select_one(".mileage_section1 span").get_text()
ecoin = soup.select_one(".mileage_section2 span").get_text()
print("Mileage: " + mileage)
print("E-coin: " + ecoin)
# +
# Fetch data
import requests
r = requests.get("http://api.aoikujira.com/time/get.php")
# Extract the data as text
text = r.text
print(text)
# Extract the data as binary
bin = r.content
print(bin)
# +
# Download image data
import requests
r = requests.get("http://wikibook.co.kr/wikibook.png")
# Save the data in binary form
with open("test.png", "wb") as f:
    f.write(r.content)
print("saved")
| 3,058 |
/reto_1/.ipynb_checkpoints/reto_1-checkpoint.ipynb
|
d0065a93c5571e886e97bb3272ad3eb9b1780416
|
[] |
no_license
|
maurogome/PlatziDataChallenge
|
https://github.com/maurogome/PlatziDataChallenge
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 26,979 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PlatziDataChallenge #1
# PlatziDataChallenge Part 1; you will find the instructions and tips on the Platzi blog every Monday.
#
# To solve this first challenge the steps are:
# - Pass the "Data Manipulation and Analysis with Pandas and Python" course
# - Download the Titanic data from kaggle.com/c/titanic
# - Merge the three CSVs (gender_submission, test and train)
# - Answer the following questions:
#   1. How many people were aboard the Titanic?
#   2. How many men and women survived?
#   3. What were the top 10 ages with the most survivors and the top 10 ages that did not make it?
#   4. How many ranks or titles were aboard the ship? Example: captains, Mrs., Miss, etc. (You will use regular expressions here)
#   5. What is the sum of the ticket values in USD (yep, in USD)?
import pandas as pd
import numpy as np
# ### Import datasets:
df_gs = pd.read_csv("data/gender_submission.csv")
df_test = pd.read_csv("data/test.csv")
df_train = pd.read_csv("data/train.csv")
# ## Exploring dataset:
#
# ### Gender submission dataset:
df_gs.info()
df_gs.head()
df_gs.shape
# ### Train dataset:
df_train.info()
df_train.head()
df_train.shape
# ### Test dataset:
df_test.info()
df_test.head()
df_test.shape
# ## Merging datasets
df_temp = pd.merge(df_test, df_gs, on = ['PassengerId'], how = 'inner')
df_final = pd.concat([df_train, df_temp]).reset_index(drop = True)
# ## Questions:
# 1. How many people were aboard the Titanic?
# > Each row corresponds to one passenger, so the number of rows equals the number of passengers...
df_final.shape
# There were 1309 passengers aboard the Titanic
# 2. How many men and women survived?
df_final.head()
df_final.groupby(['Sex','Survived'])['Sex'].count()
# 385 women and 109 men survived
# 3. What were the top 10 ages with the most survivors and the top 10 ages that did not make it?
df_final.groupby(['Age','Survived'])['Age'].count().sort_values(ascending=False).head(10)
# 4. How many ranks or titles were aboard the ship? Example: captains, Mrs., Miss, etc. (Regular expressions are used here)
# If we look at the Name column we can see that it consists of Surname, Title, First name, so we need to extract the second word of the string and check how many unique values there are
df_final['Title'] = df_final.Name.apply(lambda name: name.split(',')[1].split('.')[0].strip())
df_final.groupby(['Title'])['PassengerId'].nunique().shape
# There were 18 ranks or titles aboard the Titanic
# 5. What is the sum of the ticket values in USD (yep, in USD)?
df_final['Fare'].sum()
| 2,831 |
/Python_tutorials/Intro_xarray_dask.ipynb
|
e51a54b6d2cd5eb5bffec808a64877903062527b
|
[] |
no_license
|
paigem/COESSING2020_pythonLabs
|
https://github.com/paigem/COESSING2020_pythonLabs
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 11,846 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Very brief introduction to the python libraries: xarray and dask
#
# - `xarray` is similar to the `pandas` library, but for multi-dimensional data. Whereas `pandas` works well with 2-d or tabular data, it is common in oceanography (at least in physical oceanography) to have 3-d or 4-d data (with 3 spatial dimensions and 1 time dimension). `xarray` is also very useful for analyzing large volumes of data often in netcdf (.nc) format.
# - `dask` is a library that parallelizes code (i.e. can run multiple computations at the same time) in a relatively easy and efficient way. It is very useful when using big data and integrates nicely with `xarray`.
#
# This notebook is meant to only be a *very brief* introduction to these two libraries, so you know where to start if you'd like to use these tools in your own research/coding. There are a few very nice tutorials online about both of these libraries, including these two by Ryan Abernathey:
#
# - "Xarray Fundamentals": https://rabernat.github.io/research_computing_2018/xarray.html
# - "Dask for Parallel Computing and Big Data": https://rabernat.github.io/research_computing_2018/dask-for-parallel-computing-and-big-data.html#
#
# In this notebook, we will also download some free sea surface temperature (SST) data from NOAA using ftp (file transfer protocol) and the command 'wget'. This same method is used in the cartopy tutorial.
#
# If you have not yet used the command 'wget', you will need to install it by typing `!conda install -c anaconda wget` in a new code cell (or in Anaconda Prompt).
# # Download the SST data
#
# Uncomment the cell below to download mean SST data.
#
# #### Note: you only need to run this once! Once you run the cell, you should see the data 'sst.mnmean.nc' appear in the same folder as this notebook is in. The data is ~57MB.
# +
# #!wget ftp://ftp.cdc.noaa.gov/Datasets/noaa.oisst.v2/sst.mnmean.nc
# -
# # xarray
#
# #### First we need to import xarray and set our plots to show up in the notebook
# %matplotlib inline
import xarray as xr
# #### Now let's open the data into an xarray dataset, which we will call 'ds' for short.
ds = xr.open_dataset('../sst.mnmean.nc')
# #### To view details of the dataset, simply type its name
ds
# #### So you can see that the data has a few things to note:
#
# - the data is an `xarray.Dataset` type
# - 4 dimensions of different sizes: lat, lon, nbnds, time
# - 3 coordinates: lat, lon, time
# - 2 variables: sst and time_bnds
# - a list of attributes (i.e. metadata: telling us where and when the data is from, etc.)
#
# #### You can start to understand just how useful xarray is with netcdf files, as it can load all different types of information about a dataset.
#
# #### To access items within the dataset, we just type `ds.` followed by the aspect of the dataset we are interested in. The following few cells show a few examples.
#
# See information about the variable 'sst':
ds.sst
# List information about the time dimension. Notice that the time interval is in months, and the data goes from 1981 to 2020.
ds.time
# List all dimensions:
ds.dims
# List all variables:
list(ds.keys())
# List all attributes:
ds.attrs
# #### One of the very nice features of xarray is the ease in which you can do simple data manipulation, such as taking the mean of a dataset. First we select the sst variable, and then we write out the function `.mean()` with the argument 'time' to take the time average.
ds.sst.mean('time')
# #### Another one of the great aspects of `xarray` is that it supports calling dimensions and variables by their names, instead of trying to remember which dimension is first or second, etc. (as we would need to do, for instance, with numpy).
#
# #### Great, so we just took the mean of a dataset in one line! What if we want to plot the mean SST that we just calculated? It turns out that you can do this on the same line as well! All you have to do is add `.plot()` to the end!
ds.sst.mean('time').plot()
# #### This plot doesn't look very nice at the moment, but that's where the map plotting package cartopy comes to the rescue! If you want to try out cartopy, go over to the cartopy tutorial that Josué made for you!
#
# ### Ok, that is all I'm going to do with xarray for now. I hope you can see how useful it can be, and you can check out the tutorial linked at the top of this notebook if you want to learn about xarray in more depth.
# # dask
#
# ### In this brief dask tutorial (largely based on the one at https://tutorial.dask.org/00_overview.html), we will see how dask can help speed up your computations.
#
# #### We will start by defining some very basic adding and multiplying functions that use the function `sleep()` from the `time` library. This `sleep()` call causes the functions `add()` and `mlt()` that we define below to pause for the number of seconds given inside the parentheses.
# +
from time import sleep
# Define a function to add two numbers
def add(x,y):
sleep(2) # pause for 2 seconds
return x+y
# Define a function to multiply two numbers
def mlt(x,y):
sleep(1) # pause for 1 second
return x*y
# -
# #### In the next cell, we will use another 'magic' function similar to the `%matplotlib inline` function you have probably used numerous times. These function calls that begin with a `%` are called 'magic' functions in ipython notebooks. This time we will use `%%time`, which prints out the amount of time it takes to run all of the code in that particular cell.
#
# #### If we call add() once and mlt() twice, can you guess how long it will take to run? It should take almost exactly 2 + 1 + 1 seconds.
# +
# %%time
a = add(1,2)
b = mlt(1,2)
c = mlt(a,b)
# -
# #### On my computer, it took 4.01 seconds to run - pretty darn close to 4 seconds!
#
# #### But, we could theoretically run all of these functions at the same time, since they are all independent calculations. `dask` can help us do that! We are now going to import a dask function called 'delayed'. It is so called because it doesn't run a function immediately, but stores the information to run a function until the user specifies a `.compute()` function, at which point the calculation is run, and in the most optimized way. Let's see an example:
from dask import delayed
# +
# %%time
a = delayed(add)(1,2)
b = delayed(mlt)(1,2)
c = delayed(mlt)(a,b)
# -
# #### Hmm, this claims that the calculations ran in 640 microseconds - but that doesn't make sense! Our two functions require at least a 1 second pause when run. The catch is that we haven't actually done the calculation yet. We have just created delayed objects that will run once we compute them.
# +
# %%time
c.compute()
# -
# #### Now it looks like it only took 3.01 seconds to run the same calculation as before when it took 4 seconds! While 1 second doesn't sound like much, this can be scaled up with large amounts of data. You can save hours or even days or your time by parallelizing with dask!
#
# #### Dask can do way more interesting things, but I'll leave you to explore it in more depth on your own - a good place to start is the tutorial linked at the top of this notebook.
#
# #### Note that dask may not be necessary to use if you do not have big data! If you do have large amounts of data that take a long time to run, then dask is a great resource, and you can use it on your local computer or on computer clusters at universities or on the cloud!
) # [batch_size, 2 ** level, 2]
level_decisions = decisions[
:, begin_idx:end_idx, :
] # [batch_size, 2 ** level, 2]
mu = mu * level_decisions # [batch_size, 2**level, 2]
begin_idx = end_idx
end_idx = begin_idx + 2 ** (level + 1)
mu = tf.reshape(mu, [batch_size, self.num_leaves]) # [batch_size, num_leaves]
probabilities = keras.activations.softmax(self.pi) # [num_leaves, num_classes]
outputs = tf.matmul(mu, probabilities) # [batch_size, num_classes]
return outputs
class NeuralDecisionForest(keras.Model):
def __init__(self, num_trees, depth, num_features, used_features_rate, num_classes):
super(NeuralDecisionForest, self).__init__()
self.ensemble = []
# Initialize the ensemble by adding NeuralDecisionTree instances.
# Each tree will have its own randomly selected input features to use.
for _ in range(num_trees):
self.ensemble.append(
NeuralDecisionTree(depth, num_features, used_features_rate, num_classes)
)
def call(self, inputs):
# Initialize the outputs: a [batch_size, num_classes] matrix of zeros.
batch_size = tf.shape(inputs)[0]
outputs = tf.zeros([batch_size, num_classes])
# Aggregate the outputs of trees in the ensemble.
for tree in self.ensemble:
outputs += tree(inputs)
# Divide the outputs by the ensemble size to get the average.
outputs /= len(self.ensemble)
return outputs
# +
learning_rate = 0.01
batch_size = 265
num_epochs = 10
hidden_units = [64, 64]
def run_experiment(model):
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
print("Start training the model...")
train_dataset = get_dataset_from_csv(
train_data_file, shuffle=True, batch_size=batch_size
)
model.fit(train_dataset, epochs=num_epochs)
print("Model training finished")
print("Evaluating the model on the test data...")
test_dataset = get_dataset_from_csv(test_data_file, batch_size=batch_size)
_, accuracy = model.evaluate(test_dataset)
print("Test accuracy: ", round(accuracy * 100, 2))
# +
num_trees = 10
depth = 10
used_features_rate = 1.0
num_classes = 2 #len(TARGET_LABELS)
def create_tree_model():
inputs = create_model_inputs()
features = encode_inputs(inputs)
features = layers.BatchNormalization()(features)
num_features = features.shape[1]
tree = NeuralDecisionTree(depth, num_features, used_features_rate, num_classes)
outputs = tree(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
tree_model = create_tree_model()
run_experiment(tree_model)
| 10,604 |
/Amazon_Food_Review_Decision_Trees_Sriram-Final_14Nov2018.ipynb
|
106fcffaad05e14eb4fe729d83df2d9fcd6818f8
|
[] |
no_license
|
SRIRAM777/Amazon-Fine-Food-Reviews-Analysis
|
https://github.com/SRIRAM777/Amazon-Fine-Food-Reviews-Analysis
| 0 | 1 | null | 2019-08-12T07:44:27 | 2019-08-12T07:43:41 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 192,377 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Amazon Food Reviews - Decision Trees
# +
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import sqlite3
import nltk
import string
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve,auc
from nltk.stem.porter import PorterStemmer
import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
import pickle
from sklearn.model_selection import train_test_split
from collections import Counter
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn import cross_validation
from sklearn.preprocessing import normalize
from scipy.sparse import find
from scipy.sparse import csr_matrix
from prettytable import PrettyTable
import warnings
warnings.filterwarnings("ignore")
# -
df= pd.read_csv("../../../../Desktop/Prep/ML_Repo/amazon-fine-food-reviews/Reviews.csv")
df.head()
df.shape
# +
import pickle
def save_data(data,file):
pickle.dump(data,open(file+".pkl","wb"))
def load_data(file):
data = pickle.load(open(file+".pkl","rb"))
return data
# -
dt_data = load_data('../Amazon_Food_Review_KNN/150k_nb')
dt_data = dt_data.head(75000)
def confusion_matrix_plot(y_test,pred):
df_bow = pd.DataFrame(confusion_matrix(y_test, pred))
sns.heatmap(df_bow, annot=True)
tn, fp, fn, tp = confusion_matrix(y_test,pred).ravel()
print('True Positive',tp)
print('True Negative',tn)
print('False Positive',fp)
print('False Negative',fn)
def metric_scores(y_test,pred):
acc_scores = [[accuracy_score(y_test, pred)*100,precision_score(y_test,pred),recall_score(y_test,pred),f1_score(y_test,pred)]]
acc_scores = pd.DataFrame(acc_scores,columns=['Accuracy','Precision score','Recall score','F1 score'])
return acc_scores
def acc_vs_max_depth_plot(max_depth_list,acc_scores):
sns.set_style("whitegrid")
plt.plot(max_depth_list,acc_scores)
plt.xlabel("Max Depth")
plt.ylabel("Accuracy")
plt.title("Accuracy vs Max-Depth")
plt.show()
def summary_of_scores():
x = PrettyTable()
x.field_names = ["Model", "CV - Type", "Best Max-Depth","Best Accuracy"]
x.add_row(["BOW","Grid Search",load_data('bow_grid_search_dt').best_params_.get('max_depth'),load_data('bow_grid_search_dt').best_score_*100])
x.add_row(["Bigram","Grid Search", load_data('bigram_grid_search_dt').best_params_.get('max_depth'),load_data('bigram_grid_search_dt').best_score_*100])
x.add_row(["Tf-Idf","Grid Search", load_data('tfidf_grid_search_dt').best_params_.get('max_depth'),load_data('tfidf_grid_search_dt').best_score_*100])
x.add_row(["W2V","Grid Search", load_data('w2v_grid_search_dt').best_params_.get('max_depth'),load_data('w2v_grid_search_dt').best_score_*100])
x.add_row(["Tf-Idf - W2v","Grid Search", load_data('tfidf_w2v_grid_search_dt').best_params_.get('max_depth'),load_data('tfidf_w2v_grid_search_dt').best_score_*100])
print(x)
# ### Bag Of Words - Decision Trees
# ### Best Hyperparameter Max-Depth using GridSearchCV
x_train, x_test, y_train, y_test = train_test_split(dt_data['Cleaned_Text'] ,dt_data['Score'], test_size=0.3, shuffle = False)
x_test.head()
from sklearn.preprocessing import Normalizer
count_vect = CountVectorizer()
x_train = count_vect.fit_transform(x_train)
normal_scale = Normalizer().fit(x_train)
x_train = normal_scale.transform(x_train)
x_test = count_vect.transform(x_test)
x_test = normal_scale.transform(x_test)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import TimeSeriesSplit
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
dtc = DecisionTreeClassifier()
time_split_cv = TimeSeriesSplit(n_splits = 5)
max_depth_list = sorted(np.random.randint(1,40,5))
param_grid = {'max_depth':max_depth_list}
grid_search = GridSearchCV(dtc,param_grid,cv=time_split_cv,verbose=1,n_jobs= -1,scoring = {'f1_micro'},refit= 'f1_micro')
grid_search.fit(x_train,y_train)
save_data(grid_search,'bow_grid_search_dt')
a1 = load_data('bow_grid_search_dt')
print('Best Max-Depth ',a1.best_params_['max_depth'])
print('Best Accuracy %f% %'%(a1.best_score_*100))
from sklearn.metrics import accuracy_score
dt_optimal = DecisionTreeClassifier(max_depth = a1.best_params_['max_depth'])
dt_optimal.fit(x_train, y_train)
pred = dt_optimal.predict(x_test)
acc = accuracy_score(y_test, pred) * 100
metric_scores(y_test,pred)
confusion_matrix_plot(y_test,pred)
# ### Accuracy vs Maximum Depth Plot
acc_vs_max_depth_plot(max_depth_list,a1.cv_results_['mean_test_f1_micro']*100)
from sklearn.tree import export_graphviz
export_graphviz(dt_optimal, out_file='dot_data_bow.dot',filled=True, rounded=True,special_characters=True,feature_names=count_vect.get_feature_names())
# ### Bigram - Decision Trees
# ### Best Hyperparameter Max-Depth using GridSearchCV
x_train, x_test, y_train, y_test = train_test_split(dt_data['Cleaned_Text'] ,dt_data['Score'], test_size=0.3, shuffle = False)
x_test.head()
ngram_vect = CountVectorizer(ngram_range=(1,2))
x_train = ngram_vect.fit_transform(x_train)
normal_scale = Normalizer().fit(x_train)
x_train = normal_scale.transform(x_train)
x_test = ngram_vect.transform(x_test)
x_test = normal_scale.transform(x_test)
dtc = DecisionTreeClassifier()
time_split_cv = TimeSeriesSplit(n_splits = 3)
max_depth_list = sorted(np.random.randint(1,40,5))
param_grid = {'max_depth':max_depth_list}
grid_search = GridSearchCV(dtc,param_grid,cv=time_split_cv,verbose=1,n_jobs= -1,scoring = {'f1_micro'},refit= 'f1_micro')
grid_search.fit(x_train,y_train)
save_data(grid_search,'bigram_grid_search_dt')
a1 = load_data('bigram_grid_search_dt')
print('Best Max-Depth ',a1.best_params_['max_depth'])
print('Best Accuracy %f% %'%(a1.best_score_*100))
from sklearn.metrics import accuracy_score
dt_optimal = DecisionTreeClassifier(max_depth = a1.best_params_['max_depth'])
dt_optimal.fit(x_train, y_train)
pred = dt_optimal.predict(x_test)
acc = accuracy_score(y_test, pred) * 100
metric_scores(y_test,pred)
confusion_matrix_plot(y_test,pred)
# ### Accuracy vs Maximum Depth Plot
acc_vs_max_depth_plot(max_depth_list,a1.cv_results_['mean_test_f1_micro']*100)
from sklearn.tree import export_graphviz
export_graphviz(dt_optimal, out_file='dot_data_bigram.dot',filled=True, rounded=True,special_characters=True,feature_names=ngram_vect.get_feature_names())
# ### Tf-Idf Decision Trees
# ### Best Hyperparameter Max-Depth using GridSearchCV
x_train, x_test, y_train, y_test = train_test_split(dt_data['Cleaned_Text'] ,dt_data['Score'], test_size=0.3, shuffle = False)
x_test.head()
tf_idf_vect = TfidfVectorizer(ngram_range=(1,2))
x_train = tf_idf_vect.fit_transform(x_train)
normal_scale = Normalizer().fit(x_train)
x_train = normal_scale.transform(x_train)
x_test = tf_idf_vect.transform(x_test)
x_test = normal_scale.transform(x_test)
dtc = DecisionTreeClassifier()
time_split_cv = TimeSeriesSplit(n_splits = 3)
max_depth_list = sorted(np.random.randint(1,40,5))
param_grid = {'max_depth':max_depth_list}
grid_search = GridSearchCV(dtc,param_grid,cv=time_split_cv,verbose=1,n_jobs= -1,scoring = {'f1_micro'},refit= 'f1_micro')
grid_search.fit(x_train,y_train)
save_data(grid_search,'tfidf_grid_search_dt')
a1 = load_data('tfidf_grid_search_dt')
print('Best Max-Depth ',a1.best_params_['max_depth'])
print('Best Accuracy %f% %'%(a1.best_score_*100))
from sklearn.metrics import accuracy_score
dt_optimal = DecisionTreeClassifier(max_depth = a1.best_params_['max_depth'])
dt_optimal.fit(x_train, y_train)
pred = dt_optimal.predict(x_test)
acc = accuracy_score(y_test, pred) * 100
metric_scores(y_test,pred)
confusion_matrix_plot(y_test,pred)
# ### Accuracy vs Maximum Depth Plot
acc_vs_max_depth_plot(max_depth_list,a1.cv_results_['mean_test_f1_micro']*100)
from sklearn.tree import export_graphviz
export_graphviz(dt_optimal, out_file='dot_data_tfidf.dot',filled=True, rounded=True,special_characters=True,feature_names=tf_idf_vect.get_feature_names())
# ### Word2Vec - Decision Trees
# ### Best Hyperparameter Max-Depth using GridSearchCV
x_train, x_test, y_train, y_test = train_test_split(dt_data['Cleaned_Text'] ,dt_data['Score'], test_size=0.3, shuffle = False)
list_of_train_sent = []
for sent in x_train.values:
list_of_train_sent.append(sent.split())
w2v_model = Word2Vec(list_of_train_sent,min_count=5,workers=4,size=100)
w2v_words = list(w2v_model.wv.vocab)
from tqdm import tqdm
sent_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sent in tqdm(list_of_train_sent): # for each review/sentence
sent_vec = np.zeros(100) # as word vectors are of zero length
cnt_words =0; # num of words with a valid vector in the sentence/review
for word in sent: # for each word in a review/sentence
if word in w2v_words:
vec = w2v_model.wv[word]
sent_vec += vec
#print(sent_vec)
cnt_words += 1
#print(cnt_words)
if cnt_words != 0:
sent_vec /= cnt_words
sent_vectors.append(sent_vec)
sent_vectors_arr = np.asarray(sent_vectors)
x_train = normalize(sent_vectors_arr)
list_of_test_sent = []
for sent in x_test.values:
list_of_test_sent.append(sent.split())
sent_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sent in tqdm(list_of_test_sent): # for each review/sentence
sent_vec = np.zeros(100) # as word vectors are of zero length
cnt_words =0; # num of words with a valid vector in the sentence/review
for word in sent: # for each word in a review/sentence
if word in w2v_words:
vec = w2v_model.wv[word]
sent_vec += vec
#print(sent_vec)
cnt_words += 1
#print(cnt_words)
if cnt_words != 0:
sent_vec /= cnt_words
sent_vectors.append(sent_vec)
sent_vectors_arr = np.asarray(sent_vectors)
x_test = normalize(sent_vectors_arr)
dtc = DecisionTreeClassifier()
time_split_cv = TimeSeriesSplit(n_splits = 5)
max_depth_list = sorted(np.random.randint(1,100,10))
param_grid = {'max_depth':max_depth_list}
grid_search = GridSearchCV(dtc,param_grid,cv=time_split_cv,verbose=1,n_jobs= -1,scoring = {'f1_micro'},refit= 'f1_micro')
grid_search.fit(x_train,y_train)
save_data(grid_search,'w2v_grid_search_dt')
a1 = load_data('w2v_grid_search_dt')
print('Best Max-Depth ',a1.best_params_['max_depth'])
print('Best Accuracy %f% %'%(a1.best_score_*100))
from sklearn.metrics import accuracy_score
dt_optimal = DecisionTreeClassifier(max_depth = a1.best_params_['max_depth'])
dt_optimal.fit(x_train, y_train)
pred = dt_optimal.predict(x_test)
acc = accuracy_score(y_test, pred) * 100
metric_scores(y_test,pred)
confusion_matrix_plot(y_test,pred)
# ### Accuracy vs Maximum Depth Plot
acc_vs_max_depth_plot(max_depth_list,a1.cv_results_['mean_test_f1_micro']*100)
# ### Tf-Idf Word2Vec Decision Trees
# ### Best Hyperparameter Max-Depth using GridSearchCV
x_train, x_test, y_train, y_test = train_test_split(dt_data['Cleaned_Text'],dt_data['Score'], test_size=0.3, random_state=0)
tf_idf_vect = TfidfVectorizer()
x_train = tf_idf_vect.fit_transform(x_train)
dict_dt = dict(zip(tf_idf_vect.get_feature_names(), list(tf_idf_vect.idf_)))
from tqdm import tqdm_notebook as tqdm
tfidf_feat= tf_idf_vect.get_feature_names()
tfidf_sent_vectors = []
row=0;
for sent in tqdm(list_of_train_sent): # for each review/sentence
sent_vec = np.zeros(100) # as word vectors are of zero length
weight_sum =0; # num of words with a valid vector in the sentence/review
for word in sent: # for each word in a review/sentence
if ((word in w2v_words) and (word in tfidf_feat)):
vec = w2v_model.wv[word]
# obtain the tf_idfidf of a word in a sentence/review
#tf_idf = x_train[row, tfidf_feat.index(word)]
tf_idf = dict_dt[word]*sent.count(word)
sent_vec += (vec * tf_idf)
weight_sum += tf_idf
if weight_sum != 0:
sent_vec /= weight_sum
tfidf_sent_vectors.append(sent_vec)
row += 1
tfidf_sent_vectors_arr = np.asarray(tfidf_sent_vectors)
x_train = normalize(tfidf_sent_vectors_arr)
x_test = tf_idf_vect.transform(x_test)
save_data(x_train,'tfidf_w2v_train_set_dt')
x_train = load_data('tfidf_w2v_train_set_dt')
tfidf_feat= tf_idf_vect.get_feature_names()
tfidf_sent_vectors = []
row=0;
for sent in tqdm(list_of_test_sent): # for each review/sentence
sent_vec = np.zeros(100) # as word vectors are of zero length
weight_sum =0; # num of words with a valid vector in the sentence/review
for word in sent: # for each word in a review/sentence
if ((word in w2v_words) and (word in tfidf_feat)):
vec = w2v_model.wv[word]
            tf_idf = dict_dt[word]*sent.count(word)
sent_vec += (vec * tf_idf)
weight_sum += tf_idf
if weight_sum != 0:
sent_vec /= weight_sum
tfidf_sent_vectors.append(sent_vec)
row += 1
tfidf_sent_vectors_arr = np.asarray(tfidf_sent_vectors)
x_test = normalize(tfidf_sent_vectors_arr)
save_data(x_test,'tfidf_w2v_test_set_dt')
x_test = load_data('tfidf_w2v_test_set_dt')
dtc = DecisionTreeClassifier()
time_split_cv = TimeSeriesSplit(n_splits = 5)
max_depth_list = sorted(np.random.randint(1,100,10))
param_grid = {'max_depth':max_depth_list}
grid_search = GridSearchCV(dtc,param_grid,cv=time_split_cv,verbose=1,n_jobs= -1,scoring = {'f1_micro'},refit= 'f1_micro')
grid_search.fit(x_train,y_train)
save_data(grid_search,'tfidf_w2v_grid_search_dt')
a1 = load_data('tfidf_w2v_grid_search_dt')
print('Best Max-Depth ',a1.best_params_['max_depth'])
print('Best Accuracy %f% %'%(a1.best_score_*100))
from sklearn.metrics import accuracy_score
dt_optimal = DecisionTreeClassifier(max_depth = a1.best_params_['max_depth'])
dt_optimal.fit(x_train, y_train)
pred = dt_optimal.predict(x_test)
acc = accuracy_score(y_test, pred) * 100
metric_scores(y_test,pred)
confusion_matrix_plot(y_test,pred)
# ### Accuracy vs Maximum Depth Plot
acc_vs_max_depth_plot(max_depth_list,a1.cv_results_['mean_test_f1_micro']*100)
# ### Summary
# | Model | CV - Type | Best Max-Depth | Best Accuracy |
# | --- | --- | --- | ---|
# | BOW | Grid Search | 16 | 89.46971428571429 |
# | Bigram | Grid Search | 8 | 89.54412698412698 |
# | Tf-Idf | Grid Search | 14 | 89.66349206349207 |
# | W2V | Grid Search | 6 | 88.7222857142857 |
# | Tf-Idf - W2v | Grid Search | 8 | 87.99542857142856 |
# ### Procedure to solve the assignment
# 1. Preprocessing of text and time based splitting of train and test data. I have performed it already and have loaded it from a file.
# 2. Vectorizing the train data and applying the same vectorizer to test data to transform it into vectors.
# 3. Finding out the optimal maximum depth of the tree using Grid Search CV.
# 4. Finding out the performance of the model on test data using the Optimal Max. Depth that is obtained.
# 5. Evaluating the various performance metric values of the model that is obtained.
# 6. Plot the Confusion matrix using Seaborn.
# 7. Plot the Accuracy vs Maximum Depth plot to find out the Optimal Max. Depth.
# 8. Visualize the Decision Trees for various models using Graphviz.
# 9. Repeat steps 2 to 8 for vectorizers BOW,Bigram, Tf-Idf, Avg. Word2Vec, Tf-Idf Word2Vec
| 16,088 |
/DivorceCategories.ipynb
|
f278bed1f5c83cc853c89740a16ce659fd31da5b
|
[] |
no_license
|
rknightly/rice-datathon
|
https://github.com/rknightly/rice-datathon
| 0 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 6,116 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Controlling the variance of PANDA networks
#
# Kalyan Palepu<sup>1</sup>, Marouen Ben Guebila<sup>2</sup>
#
# <sup>1</sup> Harvard College, Harvard University, Boston, MA, USA.
#
# <sup>2</sup> Harvard School of Public Health, Harvard University, Boston, MA, USA.
# ## Motivation
# Passing Attributes between Networks for Data Assimilation (PANDA)<sup>1</sup> allows inferring complete bipartite gene regulatory networks (GRNs) between Transcription Factors (TFs) and their target genes using three input data matrices: TF PPI interactions ($P$), gene coexpression ($C$), and TF target motif-based predictions ($W_0$). A common procedure for interpreting PANDA GRNs consists of thresholding the resulting network ($W$) edges to retain the most important ones and to reduce the network's size in memory.
#
# A common misconception is to assume that only strictly positive edges exist, in other words, that zero is a natural threshold for PANDA networks. In reality, PANDA edges follow a bimodal distribution, with the edges present in $W_0$ in the first mode and all other edges in the second.
#
# To illustrate this, let's download a network from GRAND (http://grand.networkmedicine.org) and plot the network edges. First let's load the needed libraries.
import os
import pandas as pd # to read the network
import matplotlib.pyplot as plt # to plot the distribution
import numpy as np # for linear algebra operations
import scipy.stats as st # for the densities of common distributions
#os.system('curl -O https://granddb.s3.amazonaws.com/tissues/networks/Adrenal_Gland.csv')
net = pd.read_csv('/opt/data/Adrenal_Gland.csv', index_col=0)
# Then, let's plot the distribution of the network edges.
plt.hist(net.values.flatten(), density=1, bins=30);
plt.title('Fig1. Weight distribution of adrenal gland network edges');
# We clearly see two modes in the histogram, and we also see that zero is not a central value in the network. Controlling the distribution of PANDA networks is important because it allows comparing the networks to each other, such as in case versus control settings. The current approach to achieve such control is to produce the networks using the same input data, in particular using the same $W_0$, which exerts a large influence on the reconstruction of the inferred GRN.
#
# However, as motif mappings are refined<sup>2</sup>, $W_0$ can change to include additional TFs, genes, and edges. Comparing PANDA networks in those conditions can be less straightforward. Therefore, controlling the distribution of PANDA network edges in a principled approach could offer an alternative to compare networks that were generated using different input data. In particular, we are interested in comparing edge weights from different networks and make inferences of the type: if $W_{(i,j)}^{disease} > W_{(i,j)}^{control}$ means that there an upregulation of gene $j$ by TF $i$ in disease state.
#
# The suggested approach consists of i) identifying the network edge distribution and then ii) modeling the edge distribution using a common law. Since the input matrices are standardized prior to the PANDA loop, the standard normal distribution seems a good candidate.
# ## Modeling network edge distribution
# ### The distribution of the distance of two random variables
# At its core, PANDA iteratively measures the distance between three sets of inputs after standardizing them in the first step. The distance considered in the original implementation is a continuous modification of the Tanimoto distance. Although we mentioned that the distance is measured between two variables $X$ and $Y$ that follow a standard normal distribution, we will consider the general case where $X \sim\ N(0,a)$ and $Y \sim\ N(0,b)$ respectively.
# Let's define the variances $a$ and $b$.
a = 4
b = 2
# Then, we will investigate the distribution of the variable $Z = T(X, Y)$, where $T(x,y)$ is the modified Tanimoto similarity.
#
# Tanimoto similarity, or the Jaccard index, is a measure computed between two discrete sample sets $A$ and $B$ as follows: $J(A,B)=\frac{|A \cap B|}{|A \cup B|}$. The modification or extension of the Tanimoto similarity to continuous variables $X$ and $Y$ is usually computed as follows: $T(x,y)=\frac{x.y}{|x|^{2}+|y|^{2}-x.y}$. In PANDA<sup>1</sup>, the square root of the denominator is taken in the previous formula, giving: $T(x,y)=\frac{x.y}{\sqrt{|x|^{2}+|y|^{2}-x.y}}$.
#
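# To make the formula concrete, here is a minimal sketch of the modified Tanimoto similarity; the function name below is illustrative and not part of the PANDA code base.
def modified_tanimoto(x, y):
    # numerator: dot product of the two vectors
    num = np.dot(x, y)
    # denominator: square root of |x|^2 + |y|^2 - x.y, following the PANDA modification
    denom = np.sqrt(np.dot(x, x) + np.dot(y, y) - num)
    return num / denom
# quick check on two random standard normal vectors
modified_tanimoto(np.random.normal(size=1000), np.random.normal(size=1000))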
# We will use the expression from equation 13 in the supplementary data of the PANDA paper<sup>1</sup>. We will simply generate random samples of the numerator, the denominator, and the resulting variable $Z$.
results = []
nums = []
denoms = []
for i in range(10000):
n = 1000
x = np.random.normal(0, np.sqrt(a), n)
y = np.random.normal(0, np.sqrt(b), n)
num = np.dot(x, y)
denom = np.linalg.norm(x - y)
nums.append(num)
denoms.append(denom)
results.append(num / denom)
# Assuming that $n$ is the number of samples from the distribution, we can verify that the numerator follows $N(0, \frac{n(a + b)^2}{4})$.
fig, ax = plt.subplots(1, 1)
ax.hist(nums, bins=100, density=True)
x = np.linspace(-1000, 1000, num=1000)
ax.plot(x, st.norm.pdf(x, 0, (a + b) / 2 * np.sqrt(n)));
# Although we can't tell much about the denominator, we can check that it follows the square root of a $(a + b) \chi^2(n)$ distribution, which we don't need to develop further to derive the law of the whole expression.
fig, ax = plt.subplots(1, 1)
_ = ax.hist(denoms, bins=100, density=True)
dist = np.sqrt((a + b) * st.chi2.rvs(n, size=100000))
_ = ax.hist(dist, bins=100, density=True)
# Finally, the variable $Z$, which is the ratio of the two previous quantities, follows a Student $t$-distribution of parameter $n$ multiplied by the constant $\frac{\sqrt{(a+b)}}{2}$. Since $n$ is very large, we can approximate the Student distribution by $N(0,1)$; therefore $Z \sim\ N(0,\frac{a+b}{4})$.
fig, ax = plt.subplots(1, 1)
ax.hist(results, bins=100, density=True)
x = np.linspace(-4, 4, num=10000)
ax.plot(x, st.norm.pdf(x, 0, np.sqrt(a + b) / 2));
# ### Calculating the variance of the updated $W$
# Starting from this point, we will assume that all considered variables are i.i.d.
#
# In the next step of the PANDA algorithm, $W_{i}$, the estimated regulatory network at the current step $i$, is computed as a weighted sum of two $Z$-distributed variables. The weights ($respWeight$) are usually set to $\frac{1}{2}$ in the standard implementation of PANDA and can be changed in [optPANDA](https://github.com/netZoo/netZooM/blob/master/tutorials/opt_panda/opt_panda.pdf), for example; therefore $W_{i} \sim\ N(0,\frac{a+b}{8})$.
#
# Next, a learning step is performed using the rate $\alpha$ to infer the updated regulatory network $W_{i+1}$: $W_{i+1} = (1-\alpha)W_{i-1} + \alpha W_{i}$.
# Because we standardize the inputs, we can assume that $a=b=c=1$, therefore, $W_{i-1} \sim\ N(0,1)$ and $W_{i+1} \sim\ N(0,(1-\alpha)^{2} + \frac{\alpha^2}{2})$.
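# A quick simulation can check the variance of this learning-step update; it assumes W_{i-1} ~ N(0,1), W_i ~ N(0, 1/2), and an illustrative learning rate (the value of alpha below is a placeholder, not a PANDA setting).
alpha = 0.1  # illustrative learning rate
w_prev = np.random.normal(0, 1, 1000000)
w_curr = np.random.normal(0, np.sqrt(1 / 2), 1000000)
w_next = (1 - alpha) * w_prev + alpha * w_curr
print(np.var(w_next), (1 - alpha) ** 2 + alpha ** 2 / 2)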
# ### Calculating the variance of the updated $P$ and $C$
# Similarly, updating $P$ and $C$ roughly follows the same steps. First, the distance between two $W_{i+1}$-distributed normal variables of variances $a'$ and $b'$ is computed and therefore follows $N(0, \frac{a'+b'}{4})$, as previously determined. Second, in contrast to updating $W$, the weighted sum step is skipped. Third, let's assume $P_{i-1} \sim\ N(0,c')$; since we standardize the input, $c'=1$. Finally, let's assume that we controlled the variance of $W$ in the previous step to 1, therefore $a'=b'=1$.
# Taken together, after the learning step, $P_{i+1} \sim\ N(0,(1-\alpha)^{2} + \frac{\alpha^2}{2})$.
#
# Using the same approach, $C_{i+1} \sim\ N(0,(1-\alpha)^{2} + \frac{\alpha^2}{2})$, with $C_{i-1} \sim\ N(0,c'')$ and $C_{i}$ computed as the distance between two $W_{i+1}$-distributed normal variables of variances $a''$ and $b''$, which we assumed equal to 1 since we standardized the intermediary quantity $W_i$ and the input $C_{i-1}$.
#
# For a complete description of the steps of the PANDA algorithm, please check the corresponding publication<sup>1</sup> and an implementation in your [favorite language](https://github.com/netZoo/netZooPy/tree/master/netZooPy/panda).
# ## Controlling the variance of network edge distribution
# Knowing the number of steps until convergence, we could control the variance of the PANDA network in a single step at the end of the algorithm, using the previous formulation with the unknown variances $a$, $b$, $c$, $c'$, and $c''$. However, while such an implementation can be the object of future research, for its obvious benefits in speed and its ability to standardize any previously generated PANDA network, we take a step-wise correction approach in the current work.
#
# First, we will assume that $a=b=c=1$, since all the input matrices are standardized.
# However, keep in mind that while this assumption can hold for $P$ (the PPI network) and $C$ (the correlation matrix), which we can assume to be normally distributed, $W_0$ is a binary matrix; therefore, this assumption can be challenged.
#
# Therefore, $\frac{1}{\sqrt{(1-\alpha)^2 + \frac{\alpha^2}{2}}}W \sim\ N(0,1)$. After correcting the variance of $W$, $a"=b"=a'=b'=1$, therefore $\frac{1}{\sqrt{(1-\alpha)^2 + \frac{\alpha^2}{2}}}P \sim\ N(0, 1)$, and $\frac{1}{\sqrt{(1-\alpha)^2 + \frac{\alpha^2}{2} }}C \sim\ N(0, 1)$.
#
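# A minimal sketch of this per-step correction, assuming the updated matrix and the learning rate alpha are available inside the loop (the helper below is illustrative, not the netZoo implementation):
def standardize_step(M, alpha):
    # rescale an updated matrix (W, P, or C) so that its edge weights have unit variance
    return M / np.sqrt((1 - alpha) ** 2 + alpha ** 2 / 2)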
# ## Practical considerations
# To test the validity of our approach on a real-life example, we can download colon cancer<sup>3</sup> input data from [GRAND](https://grand.networkmedicine.org) and test our hypotheses.
#os.system('curl -O https://granddb.s3.amazonaws.com/cancer/colon_cancer/cancer_colon_ppi.txt')
#os.system('curl -O https://granddb.s3.amazonaws.com/cancer/colon_cancer/cancer_colon_motif.txt')
#os.system('curl -O https://granddb.s3.amazonaws.com/cancer/colon_cancer/cancer_colon_expression_tcga.txt')
ppi_data_cancer ='/opt/data/cancer_colon_ppi.txt'
motif_data_cancer ='/opt/data/cancer_colon_motif.txt'
expression_data_cancer='/opt/data/cancer_colon_expression_tcga.txt'
ppi_data_c = pd.read_csv(ppi_data_cancer,header=0,index_col=0,sep='\t')
motif_data_c = pd.read_csv(motif_data_cancer,header=0,index_col=0,sep='\t')
expression_data_c = pd.read_csv(expression_data_cancer,header=0,index_col=0,sep='\t')
# Simply plotting the distribution of edge weights of the input matrices shows that while this normality assumption can hold for $C$, $P$ seems lognormally distributed while $W_0$ is a discrete variable.
fig, (ax0, ax1, ax2) = plt.subplots(1, 3, sharey=False, figsize=(12, 6))
ax0.hist(ppi_data_c.iloc[:,1], density=1, bins=30);
ax1.hist(motif_data_c.iloc[:,1], density=1, bins=30);
ax2.hist(expression_data_c.values.flatten(), density=1, bins=30);
ax0.set_title('Edge weight distribution in P');
ax1.set_title('Edge weight distribution in W0');
ax2.set_title('Edge weight distribution in C');
# ## Future steps and the case for a continuous $W_0$
#
# Therefore, to be able to control the variance of the computed regulatory network, the current framework needs to be extended to address the distribution of the modified Tanimoto distance between continuous and discrete variables.
#
# In addition, a more viable approach consists of computing a continuous $W_{0}$ using, for example, the distance between the TF motif and the promoter or the transcription start site of the target gene. In this case, the current framework lends itself naturally to a more accurate approximation of the variance of regulatory networks. Such an approach would also make it possible to correct the strong bias of the final network $W$ toward $W_0$ (Figure 1), accelerate the convergence of the algorithm, and find a natural stopping threshold without relying on a forced convergence.
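# One possible sketch of such a continuous $W_0$ is an exponential decay of the motif-to-TSS distance; both the `distances` argument and the decay constant `d0` below are hypothetical choices, not taken from the PANDA paper.
def continuous_w0(distances, d0=10000.0):
    # distances: TF x gene array of motif-to-TSS distances in base pairs; d0: decay constant
    return np.exp(-np.asarray(distances) / d0)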
# ## References
# 1 - Glass, Kimberly, et al. "Passing messages between biological networks to refine predicted interactions." PloS one 8.5 (2013): e64832.
#
# 2 - Lambert, Samuel A., et al. "The human transcription factors." Cell 172.4 (2018): 650-665.
#
# 3 - Lopes-Ramos, Camila M., et al. "Gene regulatory network analysis identifies sex-linked differences in colon cancer drug metabolism." Cancer research 78.19 (2018): 5538-5547.
| 12,495 |
/3rdAssignment/option_2/.ipynb_checkpoints/cluster_starter-checkpoint.ipynb
|
f9d0cd8965567352150f4d3873b97b90e6cb2d46
|
[] |
no_license
|
acdreyer/machine-learning
|
https://github.com/acdreyer/machine-learning
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 344,193 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://colab.research.google.com/github/JRasmusBm/chatbot-epsilon/blob/master/Trainer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#
# # This is the file in which we perform training of the NN
#
# # Load Data
#
# ## In Colab
# + attributes={"classes": [], "id": "", "n": "1"}
#from google.colab import files
#uploaded = files.upload()
#file_name = "amazon_cells_labelled.txt"
#uploaded[file_name].decode("utf-8")
# -
# ## Locally
# + attributes={"classes": [], "id": "", "n": "2"}
data_folder = "../../data"
trained_models_folder = "../../trained_models"
file_name = f"{data_folder}/amazon_cells_labelled.txt"
json_file = f"{data_folder}/amazon_cells_labelled.json"
# -
# # Import code (from TA)
#
# # Imports
# + attributes={"classes": [], "id": "", "n": "3"}
from torchtext import data
import torch
import torch.nn as nn
import json
import time
import numpy as np
from transformers import BertModel, BertTokenizer
# -
# # Extract Data
#
# First, we create lists of labels and sentences; the indices in one list correspond to those in the other. Due to restrictions in torchtext, we write the data to disk as JSON.
# + attributes={"classes": [], "id": "", "n": "4"}
with open(file_name) as f:
contents = f.read()
labels = []
sentences = []
for line in (l for l in contents.split("\n") if l):
labels.append(int(line[-1]))
sentence = str.strip(line[:-1])
while len(sentence.split(" ")) < 5:
sentence += " a"
sentences.append(sentence)
data_json = [
dict(label=label, text=text) for label, text in zip(labels, sentences)
]
with open(json_file, "w") as f:
text = "\n".join(json.dumps(line) for line in data_json)
f.write(text)
# -
# ## Validate
# + attributes={"classes": [], "id": "", "n": "5"}
with open(json_file) as f:
json_written = [json.loads(line) for line in f.read().split("\n")]
for line in json_written:
if line["label"] not in [0, 1]:
print(line)
if len(line["text"].split(" ")) < 5:
print(line)
# -
# # Generate Torchtext Dataset
# + attributes={"classes": [], "id": "", "n": "6"}
def generate_bigrams(x):
n_grams = set(zip(*[x[i:] for i in range(2)]))
for n_gram in n_grams:
x.append(" ".join(n_gram))
return x
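# Illustrative call (the input list below is made up): generate_bigrams mutates and returns the
# token list, appending space-joined bigrams such as 'this movie' and 'movie is'; the order of
# the appended bigrams is not guaranteed because they come from a set.
generate_bigrams(["this", "movie", "is", "great"])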
# + attributes={"classes": [], "id": "", "n": "7"}
import random
from IPython.core.debugger import set_trace
from torch.utils.data.dataset import random_split
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
# -
# # help(dataset)
# + attributes={"classes": [], "id": "", "n": "8"}
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
init_token_index = tokenizer.cls_token_id
end_of_string_token_index = tokenizer.sep_token_id
padding_token_index = tokenizer.pad_token_id
unknown_token_index = tokenizer.unk_token_id
# + attributes={"classes": [], "id": "", "n": "9"}
max_input_length = tokenizer.max_model_input_sizes["bert-base-uncased"]
TEXT = data.Field(
batch_first=True,
use_vocab=False,
preprocessing=tokenizer.convert_tokens_to_ids,
init_token=init_token_index,
eos_token=end_of_string_token_index,
pad_token=padding_token_index,
unk_token=unknown_token_index,
)
LABEL = data.LabelField(dtype=torch.float)
fields = dict(text=("text", TEXT), label=("label", LABEL),)
dataset = data.TabularDataset(path=json_file, format="json", fields=fields,)
# + attributes={"classes": [], "id": "", "n": "10"}
training_data, test_data, validation_data = dataset.split(
split_ratio=[0.7, 0.2, 0.1], random_state=random.seed(SEED)
)
# + attributes={"classes": [], "id": "", "n": "11"}
def tokenize_and_cut(sentence):
tokens = tokenizer.tokenize(sentence)
    tokens = tokens[: max_input_length - 2]
return tokens
# -
# ## Validate
# + attributes={"classes": [], "id": "", "n": "12"}
print(f"Length (Training Data): {len(training_data)}")
print(f"Length (Test Data): {len(test_data)}")
print(f"Length (Validation Data): {len(validation_data)}")
# -
# # Build Vocab
# + attributes={"classes": [], "id": "", "n": "13"}
MAX_VOCAB_SIZE = 25_000
LABEL.build_vocab(training_data)
# -
# ## Validate
# + attributes={"classes": [], "id": "", "n": "14"}
print(f"Unique tokens in TEXT vocabulary: {len(tokenizer.vocab)}")
print(f"Unique tokens in LABEL vocabulary: {len(LABEL.vocab)}")
# -
# # Create Iterators
# + attributes={"classes": [], "id": "", "n": "15"}
BATCH_SIZE = 64
# Use GPU if available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
training_iterator, validation_iterator, test_iterator = data.BucketIterator.splits(
(training_data, validation_data, test_data),
batch_size = BATCH_SIZE,
sort_key = lambda x: len(x.text),
sort_within_batch = True,
device = device)
# -
# # Build Model
# + attributes={"classes": [], "id": "", "n": "16"}
bert = BertModel.from_pretrained("bert-base-uncased")
class BERTGRUSentiment(nn.Module):
def __init__(
self, bert, hidden_dim, output_dim, n_layers, bidirectional, dropout
):
super().__init__()
self.bert = bert
embedding_dim = bert.config.to_dict()["hidden_size"]
self.rnn = nn.GRU(
embedding_dim,
hidden_dim,
num_layers=n_layers,
bidirectional=bidirectional,
batch_first=True,
dropout=0 if n_layers < 2 else dropout,
)
self.out = nn.Linear(
hidden_dim * 2 if bidirectional else hidden_dim, output_dim
)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
with torch.no_grad():
            embedded = self.bert(text)[0]
_, hidden = self.rnn(embedded)
if self.rnn.bidirectional:
hidden = self.dropout(
torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1)
)
else:
hidden = self.dropout(hidden[-1, :, :])
output = self.out(hidden)
return output
# -
# # Instantiate Model
# + attributes={"classes": [], "id": "", "n": "18"}
HIDDEN_DIM = 256
OUTPUT_DIM = 1
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.25
model = BERTGRUSentiment(bert,
HIDDEN_DIM,
OUTPUT_DIM,
N_LAYERS,
BIDIRECTIONAL,
DROPOUT,
)
# -
# ## Validate
# + attributes={"classes": [], "id": "", "n": "19"}
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
# -
# # Freeze Parameters
# + attributes={"classes": [], "id": "", "n": "20"}
for name, param in model.named_parameters():
if name.startswith("bert"):
param.requires_grad = False
# -
# ## Validate
# + attributes={"classes": [], "id": "", "n": "25"}
for name, param in model.named_parameters():
if param.requires_grad:
print(name)
# + attributes={"classes": [], "id": "", "n": "81"}
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
# + attributes={"classes": [], "id": "", "n": "80"}
optimizer = torch.optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()
model = model.to(device)
criterion = criterion.to(device)
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
        epoch_acc += acc.item()
    return epoch_loss / len(iterator), epoch_acc / len(iterator)
| 8,192 |
/HW4_xz1809/.ipynb_checkpoints/HW4_assignment3-checkpoint.ipynb
|
d914788708ee045de17a5757b8c76b8b2df2a6fd
|
[] |
no_license
|
xingezhong/PUI2016_xz1809
|
https://github.com/xingezhong/PUI2016_xz1809
| 0 | 4 | null | 2016-11-10T04:03:34 | 2016-09-19T15:59:09 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 3,198,860 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Q8UsFXfVC2W0" outputId="f49963e7-6251-420f-abbe-7a15469c45a4" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# !wget "http://datasets.d2.mpi-inf.mpg.de/mateusz14visual-turing/nyu_depth_images.tar"
# !wget "https://datasets.d2.mpi-inf.mpg.de/mateusz14visual-turing/qa.894.raw.txt"
# !wget "http://nlp.stanford.edu/data/glove.6B.zip"
# !unzip glove.6B.zip
# !tar -xvf nyu_depth_images.tar
# + id="fMDTfN5nCaYw" outputId="965c7198-e96b-476e-c7b8-49b376aa9cbc" colab={"base_uri": "https://localhost:8080/", "height": 106}
from torch import nn
from torch.autograd import Variable
import torch
import numpy as np
import glob
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms
import torch.cuda as cuda
import torch.utils.data as torchdata
import torchvision.models as models
import torch.nn.functional as F
import random
from torch.utils.data.sampler import SubsetRandomSampler
import copy
import matplotlib.pyplot as plt
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
glove_path = '/content/glove.6B.300d.txt'
# create word dictionary
def load_glove(path):
with open(path, encoding="utf8") as f:
glove = {}
for line in f.readlines():
values = line.split()
word = values[0]
vector = np.array(values[1:], dtype='float32')
glove[word] = vector
return glove
glove = load_glove(glove_path)
def create_emb_layer(weights_matrix, non_trainable=False):
num_embeddings, embedding_dim = weights_matrix.shape
emb_layer = nn.Embedding(num_embeddings, embedding_dim)
emb_layer.load_state_dict({'weight': torch.from_numpy(weights_matrix)})
if non_trainable:
emb_layer.weight.requires_grad = False
return emb_layer, num_embeddings, embedding_dim
my_vocab = set()
all_questions = []
all_answers = []
with open('/content/qa.894.raw.txt', encoding="utf8") as f:
lines = f.readlines()
for line in lines:
vocab = list(line.split())
vocab = [x.replace(',', '') for x in vocab]
if any('?' in s for s in vocab):
temp = copy.deepcopy(vocab)
all_questions.append(temp)
else:
temp = copy.deepcopy(vocab)
all_answers.append(temp)
for i in range(len(vocab)):
if 'image' in vocab[i]:
vocab[i] = 'image'
my_vocab.update(vocab)
matrix_len = len(my_vocab)
weights_matrix = np.zeros((matrix_len, 300))
words_found = 0
final_vocab = dict()
for i, word in enumerate(my_vocab):
try:
if '_' in word:
temp = copy.deepcopy(word)
temp = temp.split('_')
try:
s1 = glove[temp[0]]
except KeyError:
s1 = np.random.normal(scale=0.6, size=(300,))
try:
s2 = glove[temp[1]]
except KeyError:
s2 = np.random.normal(scale=0.6, size=(300,))
weights_matrix[i] = (s1 + s2) / 2
final_vocab[word] = i
continue
weights_matrix[i] = glove[word]
words_found += 1
except KeyError:
weights_matrix[i] = np.random.normal(scale=0.6, size=(300,))
final_vocab[word] = i
#############################################################################
akbar = []
with open('/content/captions.txt', encoding="utf8") as f:
lines = f.readlines()
for line in lines:
akbar.append(line)
#############################################################################
class my_dataset(Dataset):
def __init__(self, questions, answers, file_path, transform=None):
self.path = file_path
self.transforms = transform
self.questions = questions
self.answers = answers
def __getitem__(self, item):
current_question = self.questions[item]
current_answer = self.answers[item]
akbar_index = 0
for i in range(len(current_question)):
if 'image' in current_question[i]:
index = copy.deepcopy(current_question[i])
if index.replace('image', '') == '':
index = 'image1'
current_question[i] = 'image'
img_path = self.path + index + '.png'
akbar_index = int(index[5:])
img = Image.open(img_path)
if self.transforms is not None:
img = self.transforms(img)
designed_answer = []
designed_answer.append(current_answer[0])
sample = random.sample(current_question, 5)
designed_answer.extend(sample)
        sample = random.sample(list(my_vocab), 26)
designed_answer.extend(sample)
ans_ind = [final_vocab[x] for x in designed_answer]
ques_ind = [final_vocab[x] for x in current_question]
length = len(ques_ind)
while (len(ques_ind) < 31):
ques_ind.append(0)
return img, torch.tensor(ques_ind), torch.tensor(ans_ind), torch.tensor(length), akbar[
akbar_index-1] #########################################################################
def __len__(self):
return len(self.questions)
batch_size = 1
total_epoch = 20
class network(nn.Module):
def __init__(self, matrix_weights):
super(network, self).__init__()
self.resnet = models.resnet18(pretrained=True)
self.resnet.fc = nn.Sequential()
self.linear_from_resnet_to_lstm = nn.Linear(512, 150)
        self.embedding, num_embeddings, embedding_dim = create_emb_layer(matrix_weights, True)
self.gru = nn.GRU(embedding_dim, 150, 1, batch_first=True)
self.fc1 = nn.Linear(662, 300)
self.fc2 = nn.Linear(300, 300)
##############################################################################
def find_similar_word(self, output, caption):
caption = caption[0].split()
list_of_encodings = []
for i in caption:
if i == '<unk>' or i == '<end>':
continue
try:
asghar = glove[i]
except KeyError:
asghar = np.random.normal(scale=0.6, size=(300,))
list_of_encodings.append(asghar)
minimum = 0
min_dist = np.Inf
for i in list_of_encodings:
dst = (np.dot(output, i)
/ np.linalg.norm(output)
/ np.linalg.norm(i))
if dst < min_dist:
minimum = i
min_dist = dst
return minimum
##############################################################################
def forward(self, image, question, answers, lengths, caption, flag=False):
with torch.no_grad():
image_features1 = self.resnet(image)
image_features = self.linear_from_resnet_to_lstm(image_features1)
image_features = image_features.unsqueeze(0)
embedded = self.embedding(question)
embedded = torch.nn.utils.rnn.pack_padded_sequence(embedded, batch_first=True, lengths=lengths,
enforce_sorted=False)
lstm_output = self.gru(embedded, image_features)
concated = torch.cat((lstm_output[1].squeeze(0), image_features1), 1)
output = self.fc1(concated)
output = F.relu(output)
output = self.fc2(output)
answer_vec = self.embedding(answers)
###################################################
if flag:
similar_word = self.find_similar_word(output.cpu().detach().numpy(), caption)
answer_vec[0][0] = torch.from_numpy(similar_word)
###################################################
final_output = torch.einsum('bwf,bf->bw', answer_vec, output)
# print('Final output:')
# print(final_output)
return final_output
images_path = '/content/nyu_depth_images/'
trans = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
test_length = 2500
my_data = my_dataset(all_questions, all_answers, images_path, trans)
data_range = range(len(my_data))
test_index = random.sample(range(len(my_data)), test_length)
train_index = [x for x in data_range if x not in test_index]
train_sampler = SubsetRandomSampler(train_index)
test_sampler = SubsetRandomSampler(test_index)
train_loader = torchdata.DataLoader(my_data, batch_size=batch_size, shuffle=False, sampler=train_sampler)
print(len(train_loader))
test_loader = torchdata.DataLoader(my_data, batch_size=batch_size, shuffle=False, sampler=test_sampler)
print(len(test_loader))
my_net = network(weights_matrix)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(my_net.parameters(), lr=0.01, weight_decay=0.005)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
my_net.to(device)
optimizer.zero_grad()
all_loss = list()
test_loss = list()
# + id="hgDhBHGyChvR" outputId="7d47e442-b8d8-4046-92ee-f52a23887582" colab={"base_uri": "https://localhost:8080/", "height": 1000}
for epoch in range(total_epoch):
this_epoch_loss = list()
this_epoch_test_loss = list()
optimizer.zero_grad()
my_net.train()
for i, (image, question, answer, length, caption) in enumerate(
train_loader): ##################################################################################
#######################################################
flag = False
if epoch < 5:
if i % 10 != 0:
continue
else:
flag = True
if i % 10 == 0:
continue
#######################################################
image = image.to(device)
question = question.to(device)
answer = answer.to(device)
image = Variable(image)
question = Variable(question)
answer = Variable(answer)
optimizer.zero_grad()
output = my_net(image, question, answer, length,
caption, flag) ###################################################################################
label = torch.zeros([len(image)], dtype=torch.long)
loss = criterion(output, label.to(device))
# print("Label", np.shape(label))
# print(label)
# print(loss)
loss.backward()
this_epoch_loss.append(loss.item())
# print('in epoch {} and index {} loss is {}'.format(epoch, i, this_epoch_loss[-1]))
print('in epoch {} average loss was {}'.format(epoch, np.mean(this_epoch_loss)))
all_loss.append(np.mean(this_epoch_loss))
my_net.eval()
for i, (image, question, answer, length, caption) in enumerate(test_loader): ############################################
image = image.to(device)
question = question.to(device)
answer = answer.to(device)
image = Variable(image)
question = Variable(question)
answer = Variable(answer)
output = my_net(image, question, answer, length, caption) ###################################
label = torch.zeros([len(image)], dtype=torch.long)
loss = criterion(output, label.to(device))
this_epoch_test_loss.append(loss.item())
# print('in test epoch {} and index {} loss is {}'.format(epoch, i, this_epoch_test_loss[-1]))
print('in test epoch {} average loss was {}'.format(epoch, np.mean(this_epoch_test_loss)))
test_loss.append(np.mean(this_epoch_test_loss))
# + id="PWpMQlawDhJ9" outputId="8de1c9e6-a580-440e-ae96-fb74da4a84b9" colab={"base_uri": "https://localhost:8080/", "height": 445}
torch.save(my_net, 'model.pt')
# + id="x5-NixZdCh0O" outputId="5ad20f3a-3973-426c-d0cd-58ce21e45e3e" colab={"base_uri": "https://localhost:8080/", "height": 332}
fig = plt.figure()
plt.plot(all_loss, 'r', label='train_loss')
plt.plot(test_loss, 'b', label='test_loss')
plt.legend(loc='upper right')
plt.title('average loss in epoch')
plt.xlabel('# of epoch')
plt.ylabel('average Loss')
plt.show()
print(all_loss)
# + id="rtkcWTCOVpt3"
import scipy.io as sio
# + id="QnTjM-wWWsj7" outputId="babc60cf-805c-49e2-8b0a-4175389018f1" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(test_loss)
# + id="Cq2zODweWwrL"
train_loss = all_loss[10:]
# + id="zGECNMi_W1VK" outputId="8ac1d6b0-9111-4491-f2f1-df7780093edf" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(train_loss)
# + id="G2GuuTSRW5Iv" outputId="76355996-7ad8-47b0-932e-442cb7298230" colab={"base_uri": "https://localhost:8080/", "height": 295}
fig = plt.figure()
plt.plot(train_loss, 'r', label='train_loss')
plt.plot(test_loss, 'b', label='test_loss')
plt.legend(loc='upper right')
plt.title('average loss in epoch')
plt.xlabel('# of epoch')
plt.ylabel('average Loss')
plt.show()
# + id="VF-3KE7EXACO"
sio.savemat('matrices.mat', {'train': train_loss, 'test': test_loss})
# + id="Mw-AJ-6UXlHJ"
| 13,171 |
/Fed Demo Without Pw.ipynb
|
2807fa287861e000ee3a7cd6924620cff167a4e0
|
[] |
no_license
|
db2Dean/db2Dean-Share
|
https://github.com/db2Dean/db2Dean-Share
| 3 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,514,253 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FEDERATION DEMO FOR AFCU
# Show features of Db2 and Db2 Federation including:
# - Postgres source
# - Db2 source
# - File Source using an External Table
# - Query the Sources
# - Joining heterogeneous sources
# - Creating a View that joins heterogeneous sources
# - Caching the File source using a Materialized Query Table
# - Query using the cache
# - Build REST Service to Query the Nicknames
# - Execute the Rest Api that Queries the Nicknames
# <div>
# <img src="attachment:image.png" width="500"/>
# </div>
#
# ## Connect to the Federation Database
# +
# #!wget https://raw.githubusercontent.com/IBM/db2-jupyter/master/db2.ipynb
# -
# This is the newest notebook and has graphing capabilities
# %run "db2 (4).ipynb"
DB="SAMPLE"
USER="db2inst1"
PW="xxxxxxxx"
HOST="localhost"
PORT=50000
# %sql CONNECT TO {DB} USER {USER} USING {PW} HOST {HOST} PORT {PORT}
# ## Configure PostgreSQL source and Create Nickname on the customer_loyalty table
# Create the wrapper and server to configure the connection and then a nickname used to reference the customer loyalty table. Finally query the nickname.
# + language="sql"
# drop wrapper "pgwrap";
# create wrapper "pgwrap" library 'libdb2rcjdbc.so' options(db2_fenced 'y');
# + language="sql"
# CREATE SERVER PGSERV1
# TYPE JDBC
# VERSION 3.0
# WRAPPER "pgwrap"
# OPTIONS (
# DRIVER_PACKAGE '/x/db2fs/postgresql-42.5.1.jar',
# DRIVER_CLASS 'org.postgresql.Driver',
# URL 'jdbc:postgresql://85331fa6-6b56-4355-935e-290f3ac8aa8c.8117147f814b4b2ea643610826cd2046.databases.appdomain.cloud:31128/3RDPARTY');
# + language="sql"
# create user mapping for db2inst1 server PGSERV1 options (REMOTE_AUTHID 'cpdemo', REMOTE_PASSWORD 'xxxxxxx');
# create user mapping for service_user1 server PGSERV1 options (REMOTE_AUTHID 'cpdemo', REMOTE_PASSWORD 'xxxxx');
# + language="sql"
# create or replace nickname feddemo.pg_cust_loyalty for PGSERV1."CUSTOMER"."CUSTOMER_LOYALTY";
# GRANT SELECT ON feddemo.pg_cust_loyalty TO USER service_user1
# + language="sql"
# select * from feddemo.pg_cust_loyalty limit 10
# -
# %sql -pb select loyalty_status, count(*) from feddemo.pg_cust_loyalty group by loyalty_status
# ## Configure Db2 source and Create Nickname on the customer table
# Create the server to configure the connection and then a nickname used to reference the customer table. For Db2 the DRDA wrapper exists by default. Finally, query the nickname.
# +
# #%%sql
#create wrapper drda options(db2_fenced 'y')
# + language="sql"
# DROP SERVER "DB2WOC1";
# CREATE SERVER "DB2WOC1"
# TYPE DB2/LUW
# VERSION '11.5'
# WRAPPER "DRDA"
# AUTHORIZATION "cpdemo"
# PASSWORD "C!oudP@k4DataDem0s"
# OPTIONS
# (DB2_MAXIMAL_PUSHDOWN 'Y'
# ,DBNAME 'DB2WOC'
# );
# + language="sql"
# CREATE USER MAPPING FOR db2inst1
# SERVER "DB2WOC1"
# OPTIONS
# (REMOTE_AUTHID 'cpdemo'
# ,REMOTE_PASSWORD 'xxxxxxxxxxx'
# );
# + language="sql"
# create or replace nickname feddemo.db2_customer for DB2WOC1."CUSTOMER"."CUSTOMER"
# + language="sql"
# select * from feddemo.db2_customer limit 10
# -
# ## Use DCW to show Another App connecting to the Nicknames (virtual tables)
# - Demonstrate that any application can use the objects
# - Demonstrate that Nicknames are only available to users granted access.
# ## Create an External Table on the CUSTOMER LOYALTY HISTORY file in Amazon AWS S3 storage
# Federation doesn't have a way to create a nickname on S3 files yet.
# + magic_args="-a " language="sql"
# DROP TABLE IF EXISTS FEDDEMO.EXT_CUST_LOYALTY_HIST ;
#
# CREATE EXTERNAL TABLE FEDDEMO.EXT_CUST_LOYALTY_HIST
# (LOYALTY_NBR INTEGER
# ,ORDER_YEAR INTEGER
# ,QUARTER VARCHAR(100)
# ,MONTHS_AS_MEMBER INTEGER
# ,LOYALTY_STATUS VARCHAR(100)
# ,PRODUCT_LINE VARCHAR(100)
# ,COUPON_RESPONSE VARCHAR(100)
# ,COUPON_COUNT INTEGER
# ,QUANTITY_SOLD INTEGER
# ,UNIT_SALE_PRICE DECIMAL
# ,UNIT_COST DECIMAL
# ,REVENUE DECIMAL
# ,PLANNED_REVENUE DECIMAL
# ,SHIPPING_DAYS INTEGER
# ,CUSTOMER_LIFETIME_VALUE DECIMAL
# ,LOYALTY_COUNT BOOLEAN
# ,BACKORDER_STATUS VARCHAR(100)
# ,SATISFACTION_RATING INTEGER
# ,SATISFACTION_REASON VARCHAR(100)
# )
#
# USING (dataobject 'CUSTOMER_LOYALTY_HISTORY.csv'
# s3('s3.us-east-2.amazonaws.com',
# 'SKJ5KJDOUjlj$--934i',
# '4j=0ejPKPjp4usz',
# 'cpd-outcomes-s3/Customer')
# maxerrors 100000
# DELIMITER ','
# DATEDELIM '-'
# Y2BASE 2000
# DATESTYLE 'DMONY2'
# MAXROWS 2000
# STRING_DELIMITER DOUBLE
# SKIPROWS 1
# FILLRECORD True
# NOLOG True
# )
# ;
# + language="sql"
# select * from FEDDEMO.EXT_CUST_LOYALTY_HIST limit 15
# -
# ## Query a join of the Db2 and Postgres tables
# + language="sql"
# SELECT DB2.CUSTOMER_ID AS CUSTOMER_ID, DB2.LOYALTY_NBR AS LOYALTY_NBR, DB2.FIRST_NAME AS FIRST_NAME,
# DB2.LAST_NAME AS LAST_NAME, DB2.CUSTOMER_NAME AS CUSTOMER_NAME, DB2.COUNTRY AS COUNTRY,
# DB2.STATE_NAME AS STATE_NAME, DB2.STATE_CODE AS STATE_CODE, DB2.CITY AS CITY,
# DB2.LATITUDE AS LATITUDE, DB2.LONGITUDE AS LONGITUDE, DB2.POSTAL_CODE AS POSTAL_CODE,
# DB2.LOCATION_CODE AS LOCATION_CODE, DB2.INCOME AS INCOME, DB2.MARITAL_STATUS AS MARITAL_STATUS,
# DB2.CREDIT_CARD_TYPE AS CREDIT_CARD_TYPE, DB2.CREDIT_CARD_NUMBER AS CREDIT_CARD_NUMBER,
# DB2.CREDIT_CARD_CVV AS CREDIT_CARD_CVV, DB2.CREDIT_CARD_EXPIRY AS CREDIT_CARD_EXPIRY,
# PG.ORDER_YEAR AS ORDER_YEAR, PG.QUARTER AS QUARTER,
# PG.MONTHS_AS_MEMBER AS MONTHS_AS_MEMBER, PG.LOYALTY_STATUS AS LOYALTY_STATUS,
# PG.PRODUCT_LINE AS PRODUCT_LINE, PG.COUPON_RESPONSE AS COUPON_RESPONSE,
# PG.COUPON_COUNT AS COUPON_COUNT, PG.QUANTITY_SOLD AS QUANTITY_SOLD,
# PG.UNIT_SALE_PRICE AS UNIT_SALE_PRICE, PG.UNIT_COST AS UNIT_COST,
# PG.REVENUE AS REVENUE, PG.PLANNED_REVENUE AS PLANNED_REVENUE,
# PG.SHIPPING_DAYS AS SHIPPING_DAYS, PG.CUSTOMER_LIFETIME_VALUE AS CUSTOMER_LIFETIME_VALUE,
# PG.LOYALTY_COUNT AS LOYALTY_COUNT, PG.BACKORDER_STATUS AS BACKORDER_STATUS,
# PG.SATISFACTION_RATING AS SATISFACTION_RATING, PG.SATISFACTION_REASON AS SATISFACTION_REASON
# FROM feddemo.db2_customer DB2,
# feddemo.pg_cust_loyalty PG
# WHERE DB2.LOYALTY_NBR=PG.LOYALTY_NBR
# LIMIT 10;
# -
# ## Create a VIEW using the same query making life easier for the analyst
# + language="sql"
# CREATE OR REPLACE VIEW FEDDEMO.CUSTOMER_SUMMARY_V2 AS
# SELECT DB2.CUSTOMER_ID AS CUSTOMER_ID, DB2.LOYALTY_NBR AS LOYALTY_NBR, DB2.FIRST_NAME AS FIRST_NAME,
# DB2.LAST_NAME AS LAST_NAME, DB2.CUSTOMER_NAME AS CUSTOMER_NAME, DB2.COUNTRY AS COUNTRY,
# DB2.STATE_NAME AS STATE_NAME, DB2.STATE_CODE AS STATE_CODE, DB2.CITY AS CITY,
# DB2.LATITUDE AS LATITUDE, DB2.LONGITUDE AS LONGITUDE, DB2.POSTAL_CODE AS POSTAL_CODE,
# DB2.LOCATION_CODE AS LOCATION_CODE, DB2.INCOME AS INCOME, DB2.MARITAL_STATUS AS MARITAL_STATUS,
# DB2.CREDIT_CARD_TYPE AS CREDIT_CARD_TYPE, DB2.CREDIT_CARD_NUMBER AS CREDIT_CARD_NUMBER,
# DB2.CREDIT_CARD_CVV AS CREDIT_CARD_CVV, DB2.CREDIT_CARD_EXPIRY AS CREDIT_CARD_EXPIRY,
# PG.ORDER_YEAR AS ORDER_YEAR, PG.QUARTER AS QUARTER,
# PG.MONTHS_AS_MEMBER AS MONTHS_AS_MEMBER, PG.LOYALTY_STATUS AS LOYALTY_STATUS,
# PG.PRODUCT_LINE AS PRODUCT_LINE, PG.COUPON_RESPONSE AS COUPON_RESPONSE,
# PG.COUPON_COUNT AS COUPON_COUNT, PG.QUANTITY_SOLD AS QUANTITY_SOLD,
# PG.UNIT_SALE_PRICE AS UNIT_SALE_PRICE, PG.UNIT_COST AS UNIT_COST,
# PG.REVENUE AS REVENUE, PG.PLANNED_REVENUE AS PLANNED_REVENUE,
# PG.SHIPPING_DAYS AS SHIPPING_DAYS, PG.CUSTOMER_LIFETIME_VALUE AS CUSTOMER_LIFETIME_VALUE,
# PG.LOYALTY_COUNT AS LOYALTY_COUNT, PG.BACKORDER_STATUS AS BACKORDER_STATUS,
# PG.SATISFACTION_RATING AS SATISFACTION_RATING, PG.SATISFACTION_REASON AS SATISFACTION_REASON
# FROM feddemo.db2_customer DB2,
# feddemo.pg_cust_loyalty PG
# WHERE DB2.LOYALTY_NBR=PG.LOYALTY_NBR
# LIMIT 10
# -
# loyalty_df=%sql select * from feddemo.CUSTOMER_SUMMARY_V2 limit 10
type(loyalty_df)
loyalty_df.head()
# ### Cache the file in a Materialized Query Table for faster access
# + language="sql"
# DROP TABLE if EXISTS FEDDEMO.MQT_FILE_LOYALTY_HIST_CACHE;
# CREATE TABLE FEDDEMO.MQT_FILE_LOYALTY_HIST_CACHE
# AS (SELECT * FROM FEDDEMO.EXT_CUST_LOYALTY_HIST)
# DATA INITIALLY DEFERRED REFRESH DEFERRED
# ENABLE QUERY OPTIMIZATION MAINTAINED BY USER;
#
# SET INTEGRITY FOR FEDDEMO.MQT_FILE_LOYALTY_HIST_CACHE ALL IMMEDIATE UNCHECKED;
#
# INSERT INTO FEDDEMO.MQT_FILE_LOYALTY_HIST_CACHE (SELECT * FROM FEDDEMO.EXT_CUST_LOYALTY_HIST);
#
# SET CURRENT REFRESH AGE ANY;
# + language="sql"
# select count(*) from FEDDEMO.MQT_FILE_LOYALTY_HIST_CACHE
# -
# ## Now you can query all three tables
# Do NOT Run this in the Demo. It takes too long
# #%%sql
SELECT DB2.COUNTRY AS DB2_COUNTRY,
PG.ORDER_YEAR AS PG_LOYALTY_NBR,
AVG(FILE.SATISFACTION_RATING) AS AVG_SATISFACTION_RATING
FROM feddemo.db2_customer DB2,
feddemo.pg_cust_loyalty PG,
FEDDEMO.EXT_CUST_LOYALTY_HIST FILE
WHERE DB2.LOYALTY_NBR=PG.LOYALTY_NBR
AND PG.LOYALTY_NBR=FILE.LOYALTY_NBR
AND PG.ORDER_YEAR=FILE.ORDER_YEAR
AND PG.QUARTER=FILE.QUARTER
group by DB2_COUNTRY, PG_LOYALTY_NBR
;
# %sql CONNECT RESET
# ## Create a REST Service to Query the Postgres table
# ### By creating rest services:
# - Developers don't need database drivers
# - Developers don't need database Connectivity
# - Developers don't even need to know SQL
# <div>
# <img src="attachment:image.png" width="500"/>
# </div>
# ## DBA or SQL Developer Writes a Query and Creates a RESTful Service on it
# Get a token from the REST service by calling the application that provides it.
# +
usertype="db2admin"
# %run "Db2 RESTful Endpoint Get Token Notebook.ipynb"
admin_headers = {
"authorization": token,
"content-type": "application/json"
}
# -
# ##### Write a query to return the data for a given year and quarter from the Postgres Loyalty table
query= "select * from feddemo.pg_cust_loyalty \
where order_year = @ORDER_YEAR \
and quarter = @QUARTER \
limit 4"
print(query)
# #### Define a service called "get_loyalty" that uses the Query defined above.
# Notice that isQuery is set to true because a row will be returned from the service.
body = {"isQuery": True,
"parameters": [
{
"datatype": "INTEGER",
"name": "@ORDER_YEAR"
},
{
"datatype": "VARCHAR(2)",
"name": "@QUARTER"
}
],
"schema": "REST_SERVICES",
"serviceDescription": "Return Loyalty Data for a given Year and Quarter",
"serviceName": "get_loyalty",
"sqlStatement": query,
"version": "1.0"
}
print(body)
# ##### Create the Service and show the results
# +
Db2RESTful = "http://localhost:50050"
API_makerest = "/v1/services"
try:
response = requests.post("{}{}".format(Db2RESTful,API_makerest), headers=admin_headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
# A response of 400 frequently means that the service already exists or there is an error in the SQL
# and you need to delete the service using the delete cells below.
print(response)
if (response.status_code == 201):
print("Service Created")
else:
print(response.json())
# -
# ## Developer Executes the Query getting results for the desired YEAR and QUARTER
# ### Run the "get_loyalty" Service
# Define the service we will call
API_runrest = "/v1/services/get_loyalty/1.0"
# +
# Define values for year and quarter and put them into the Python Dictionary
year=2019
quarter="Q4"
body = {
"parameters": {
"@ORDER_YEAR": year,
"@QUARTER": quarter
},
"sync": True
}
print(body)
# -
# ##### Run the service
# +
try:
response = requests.post("{}{}".format(Db2RESTful,API_runrest), headers=admin_headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
print(response)
print(response.json())
# -
# # What did I show?
# - Created table-like objects called nicknames
# - Created views on the objects to make it easier for analysts and developers
# - Cached results of a slow object in an MQT to speed queries
# - These allowed my application (Jupyter notebook) to query ONE data source but get data from many
# - Created and executed a REST service to allow access from applications that don't have DB drivers or don't use SQL
# <div>
# <img src="attachment:image.png" width="700"/>
# </div>
# ###### Delete the service. Needed as I was developing the service.
# +
API_deleteService = "/v1/services"
Service = "/get_loyalty"
Version = "/1.0"
try:
response = requests.delete("{}{}{}{}".format(Db2RESTful,API_deleteService,Service,Version), headers=admin_headers)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
# A response of 204 indicates success.
print (response)
# -
#
| 13,800 |
/_rmd/extra_BCa/bca_python.ipynb
|
f41fba47fa1a7f204d2821c96bd1f5bb4dde75a6
|
[
"MIT"
] |
permissive
|
erikdrysdale/erikdrysdale.github.io
|
https://github.com/erikdrysdale/erikdrysdale.github.io
| 2 | 2 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 406,413 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Loading, Storage, and File Formats
import numpy as np
import pandas as pd
np.random.seed(12345)
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
np.set_printoptions(precision=4, suppress=True)
# ## Reading and Writing Data in Text Format
# !type ../pydata-book/examples/ex1.csv
df = pd.read_csv('examples/ex1.csv')
df
pd.read_table('examples/ex1.csv', sep=',')
# !cat examples/ex2.csv
pd.read_csv('examples/ex2.csv', header=None)
pd.read_csv('examples/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])
names = ['a', 'b', 'c', 'd', 'message']
pd.read_csv('examples/ex2.csv', names=names, index_col='message')
# !cat examples/csv_mindex.csv
parsed = pd.read_csv('examples/csv_mindex.csv',
index_col=['key1', 'key2'])
parsed
list(open('examples/ex3.txt'))
result = pd.read_table('examples/ex3.txt', sep='\s+')
result
# !cat examples/ex4.csv
pd.read_csv('examples/ex4.csv', skiprows=[0, 2, 3])
# !cat examples/ex5.csv
result = pd.read_csv('examples/ex5.csv')
result
pd.isnull(result)
result = pd.read_csv('examples/ex5.csv', na_values=['NULL'])
result
sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
pd.read_csv('examples/ex5.csv', na_values=sentinels)
# ### Reading Text Files in Pieces
pd.options.display.max_rows = 10
result = pd.read_csv('examples/ex6.csv')
result
pd.read_csv('examples/ex6.csv', nrows=5)
chunker = pd.read_csv('examples/ex6.csv', chunksize=1000)
chunker
# +
chunker = pd.read_csv('examples/ex6.csv', chunksize=1000)
tot = pd.Series([])
for piece in chunker:
tot = tot.add(piece['key'].value_counts(), fill_value=0)
tot = tot.sort_values(ascending=False)
# -
tot[:10]
# ### Writing Data to Text Format
data = pd.read_csv('examples/ex5.csv')
data
data.to_csv('examples/out.csv')
# !cat examples/out.csv
import sys
data.to_csv(sys.stdout, sep='|')
data.to_csv(sys.stdout, na_rep='NULL')
data.to_csv(sys.stdout, index=False, header=False)
data.to_csv(sys.stdout, index=False, columns=['a', 'b', 'c'])
dates = pd.date_range('1/1/2000', periods=7)
ts = pd.Series(np.arange(7), index=dates)
ts.to_csv('examples/tseries.csv')
# !cat examples/tseries.csv
# ### Working with Delimited Formats
# !cat examples/ex7.csv
# +
import csv
f = open('examples/ex7.csv')
reader = csv.reader(f)
# -
for line in reader:
print(line)
with open('examples/ex7.csv') as f:
lines = list(csv.reader(f))
header, values = lines[0], lines[1:]
data_dict = {h: v for h, v in zip(header, zip(*values))}
data_dict
# class my_dialect(csv.Dialect):
# lineterminator = '\n'
# delimiter = ';'
# quotechar = '"'
# quoting = csv.QUOTE_MINIMAL
# reader = csv.reader(f, dialect=my_dialect)
# reader = csv.reader(f, delimiter='|')
# with open('mydata.csv', 'w') as f:
# writer = csv.writer(f, dialect=my_dialect)
# writer.writerow(('one', 'two', 'three'))
# writer.writerow(('1', '2', '3'))
# writer.writerow(('4', '5', '6'))
# writer.writerow(('7', '8', '9'))
# ### JSON Data
obj = """
{"name": "Wes",
"places_lived": ["United States", "Spain", "Germany"],
"pet": null,
"siblings": [{"name": "Scott", "age": 30, "pets": ["Zeus", "Zuko"]},
{"name": "Katie", "age": 38,
"pets": ["Sixes", "Stache", "Cisco"]}]
}
"""
import json
result = json.loads(obj)
result
asjson = json.dumps(result)
siblings = pd.DataFrame(result['siblings'], columns=['name', 'age'])
siblings
# !cat examples/example.json
data = pd.read_json('examples/example.json')
data
print(data.to_json())
print(data.to_json(orient='records'))
# ### XML and HTML: Web Scraping
# conda install lxml
# pip install beautifulsoup4 html5lib
tables = pd.read_html('examples/fdic_failed_bank_list.html')
len(tables)
failures = tables[0]
failures.head()
close_timestamps = pd.to_datetime(failures['Closing Date'])
close_timestamps.dt.year.value_counts()
# #### Parsing XML with lxml.objectify
# <INDICATOR>
# <INDICATOR_SEQ>373889</INDICATOR_SEQ>
# <PARENT_SEQ></PARENT_SEQ>
# <AGENCY_NAME>Metro-North Railroad</AGENCY_NAME>
# <INDICATOR_NAME>Escalator Availability</INDICATOR_NAME>
# <DESCRIPTION>Percent of the time that escalators are operational
# systemwide. The availability rate is based on physical observations performed
# the morning of regular business days only. This is a new indicator the agency
# began reporting in 2009.</DESCRIPTION>
# <PERIOD_YEAR>2011</PERIOD_YEAR>
# <PERIOD_MONTH>12</PERIOD_MONTH>
# <CATEGORY>Service Indicators</CATEGORY>
# <FREQUENCY>M</FREQUENCY>
# <DESIRED_CHANGE>U</DESIRED_CHANGE>
# <INDICATOR_UNIT>%</INDICATOR_UNIT>
# <DECIMAL_PLACES>1</DECIMAL_PLACES>
# <YTD_TARGET>97.00</YTD_TARGET>
# <YTD_ACTUAL></YTD_ACTUAL>
# <MONTHLY_TARGET>97.00</MONTHLY_TARGET>
# <MONTHLY_ACTUAL></MONTHLY_ACTUAL>
# </INDICATOR>
# +
from lxml import objectify
path = 'datasets/mta_perf/Performance_MNR.xml'
parsed = objectify.parse(open(path))
root = parsed.getroot()
# +
data = []
skip_fields = ['PARENT_SEQ', 'INDICATOR_SEQ',
'DESIRED_CHANGE', 'DECIMAL_PLACES']
for elt in root.INDICATOR:
el_data = {}
for child in elt.getchildren():
if child.tag in skip_fields:
continue
el_data[child.tag] = child.pyval
data.append(el_data)
# -
perf = pd.DataFrame(data)
perf.head()
from io import StringIO
tag = '<a href="http://www.google.com">Google</a>'
root = objectify.parse(StringIO(tag)).getroot()
root
root.get('href')
root.text
# ## Binary Data Formats
frame = pd.read_csv('examples/ex1.csv')
frame
frame.to_pickle('examples/frame_pickle')
pd.read_pickle('examples/frame_pickle')
# !rm examples/frame_pickle
# ### Using HDF5 Format
frame = pd.DataFrame({'a': np.random.randn(100)})
store = pd.HDFStore('mydata.h5')
store['obj1'] = frame
store['obj1_col'] = frame['a']
store
store['obj1']
store.put('obj2', frame, format='table')
store.select('obj2', where=['index >= 10 and index <= 15'])
store.close()
frame.to_hdf('mydata.h5', 'obj3', format='table')
pd.read_hdf('mydata.h5', 'obj3', where=['index < 5'])
import os
os.remove('mydata.h5')
# ### Reading Microsoft Excel Files
xlsx = pd.ExcelFile('examples/ex1.xlsx')
pd.read_excel(xlsx, 'Sheet1')
frame = pd.read_excel('examples/ex1.xlsx', 'Sheet1')
frame
writer = pd.ExcelWriter('examples/ex2.xlsx')
frame.to_excel(writer, 'Sheet1')
writer.save()
frame.to_excel('examples/ex2.xlsx')
# !rm examples/ex2.xlsx
# ## Interacting with Web APIs
import requests
url = 'https://api.github.com/repos/pandas-dev/pandas/issues'
resp = requests.get(url)
resp
data = resp.json()
data[0]['title']
issues = pd.DataFrame(data, columns=['number', 'title',
'labels', 'state'])
issues
# ## Interacting with Databases
import sqlite3
query = """
CREATE TABLE test
(a VARCHAR(20), b VARCHAR(20),
c REAL, d INTEGER
);"""
con = sqlite3.connect('mydata.sqlite')
con.execute(query)
con.commit()
data = [('Atlanta', 'Georgia', 1.25, 6),
('Tallahassee', 'Florida', 2.6, 3),
('Sacramento', 'California', 1.7, 5)]
stmt = "INSERT INTO test VALUES(?, ?, ?, ?)"
con.executemany(stmt, data)
con.commit()
cursor = con.execute('select * from test')
rows = cursor.fetchall()
rows
cursor.description
pd.DataFrame(rows, columns=[x[0] for x in cursor.description])
import sqlalchemy as sqla
db = sqla.create_engine('sqlite:///mydata.sqlite')
pd.read_sql('select * from test', db)
# !rm mydata.sqlite
# ## Conclusion
import pandas as pd
a=pd.read_table('./examples/volume.csv',names=[1])
a
a['nc']=a[1]
a
lambda '%3%.2f'
a = pd.Index([1, 2, 3])
a.append(pd.Index([4]))
a.difference([4])
a.union(a)
a=pd.Series([[1,2,3,4],[5,6,7,8]])
states = ['Texas', 'Utah', 'California']
a.reindex(columns=states)
a
a=pd.read_csv('../pydata-book/examples/ex4.csv',sep=',')
a
b=pd.read_csv('../pydata-book/examples/ex4.csv')
b
c=pd.read_table('../pydata-book/examples/spx.csv',sep='\t')
c
d=pd.read_table('../pydata-book/e
print("\n Results from Grid Search " )
print("\n The best estimator across ALL searched params:\n", grid.best_estimator_)
print("\n The best score across ALL searched params:\n", grid.best_score_)
print("\n The best parameters across ALL searched params:\n", grid.best_params_)
# -
# ### Fitting model
# +
#Fit the algorithm on the data
CBC = CatBoostClassifier(depth = 10, iterations = 100, learning_rate = 0.1)
CBC.fit(X_train_std, y_train_smote)
#Predict training set:
y_pred_CBC = CBC.predict(X_test_std)
#Print model report:
print("\nModel Report Train")
#print("The Training Score of XGboost is: %.4g"% format(xgb1.score(X_train_sm, y_train_sm)*100))
print("Test Accuracy score : %.4g" % accuracy_score(y_test, y_pred_CBC))
#print("precision_score : %.4g" % precision_score(y_test, y_pred_CBC))
#print("recall score : %.4g" % recall_score(y_test, y_pred_CBC))
#print("F1 score : %.4g" % f1_score(y_test, y_pred_CBC))
print("Auc score : %.4g" % roc_auc_score(y_test, y_pred_CBC))
print("classification report :{}".format(classification_report(y_test, y_pred_CBC)))
# -
plot_roc_curve(CBC, X_test_std, y_test)
#Test (Split 15% from training data)GridSearchCV
from sklearn import metrics
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_CBC)
metrics.auc(fpr, tpr)
# +
cm_CBC = confusion_matrix(y_test,y_pred_CBC)
plt.figure(figsize=(14,5))
conf_matrix_CBC = pd.DataFrame(data=cm_CBC,columns=['Predicted:0','Predicted:1'],index=['Actual:0','Actual:1'])
sns.heatmap(conf_matrix_CBC, annot=True,fmt='d',cmap="Greens");
print(accuracy_score(y_test,y_pred_CBC))
# -
# ## Without Upsampling the data
X_train.shape
y_train.shape
X_test.shape
y_test.shape
sc=StandardScaler()
X_train=sc.fit_transform(X_train)
X_test=sc.transform(X_test)
# ### XGboost
# +
clf1 = XGBClassifier()
# A parameter grid for XGBoost
params = {
'min_child_weight': [1, 5, 10],
'gamma': [0.5, 1, 1.5, 2, 5],
'subsample': [0.6, 0.8, 1.0],
'colsample_bytree': [0.6, 0.8, 1.0],
'max_depth': [3, 4, 5]
}
random_cv1=RandomizedSearchCV(estimator=clf1,param_distributions=params,
cv=5,n_iter=5,scoring='roc_auc',n_jobs=1,verbose=3,return_train_score=True,random_state=121)
random_cv1.fit(X_train,y_train)
# -
#best parameter
random_cv1.best_params_
# +
clf1 = XGBClassifier(colsample_bytree= 0.6,
gamma= 5,
max_depth= 5,
                     min_child_weight= 10,
subsample= 1.0)
clf1.fit(X_train, y_train)
# +
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score,confusion_matrix,roc_auc_score,ConfusionMatrixDisplay,precision_score,recall_score,f1_score,classification_report,roc_curve
y_pred_xgb1 = clf1.predict(X_test)
print("The Training Score of XGboost is: {}%".format(clf1.score(X_train, y_train)*100))
print("The Accuracy Score of XGboost is: {}%".format(accuracy_score(y_test, y_pred_xgb1)*100))
print("The Confusion Matrix for XGboost is: \n{}\n".format(confusion_matrix(y_test, y_pred_xgb1)))
print('\n')
print(classification_report(y_test, y_pred_xgb1))
cm_xgb1 = confusion_matrix(y_test,y_pred_xgb1)
plt.figure(figsize=(14,5))
conf_matrix_xgb1 = pd.DataFrame(data=cm_xgb1,columns=['Predicted:0','Predicted:1'],index=['Actual:0','Actual:1'])
sns.heatmap(conf_matrix_xgb1, annot=True,fmt='d',cmap="Greens");
print(accuracy_score(y_test,y_pred_xgb1))
plot_roc_curve(clf1, X_test, y_test)
# -
TN=cm_xgb1[0,0]
TP=cm_xgb1[1,1]
FN=cm_xgb1[1,0]
FP=cm_xgb1[0,1]
sensitivity=TP/float(TP+FN)
specificity=TN/float(TN+FP)
# +
print('The accuracy of the model = TP+TN/(TP+TN+FP+FN) = ',(TP+TN)/float(TP+TN+FP+FN),'\n',
      'The Misclassification = 1-Accuracy = ',1-((TP+TN)/float(TP+TN+FP+FN)),'\n',
'Sensitivity or True Positive Rate = TP/(TP+FN) = ',TP/float(TP+FN),'\n',
'Specificity or True Negative Rate = TN/(TN+FP) = ',TN/float(TN+FP),'\n')
# -
# - XGBoost on the original (non-upsampled) data gives a lower misclassification rate, meaning it correctly identifies candidates who will stay with the company
# - It also has a high specificity (True Negative rate), meaning it identifies those who are looking for a job change
# - Precision is also higher compared to the other models.
import pickle
pickle.dump(clf1, open('job_change.pkl','wb'))
| 12,530 |
/src/processing.ipynb
|
9784b7585738471361358fd82fcbd23640c546f4
|
[] |
no_license
|
abu-rayyan/oil_futures
|
https://github.com/abu-rayyan/oil_futures
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 83,636 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#pip install seaborn
# -
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sb
from sklearn.preprocessing import MinMaxScaler
# # Data Loading
# +
csv1 = pd.read_excel('data/Cushing Inventory.xlsx') # Loading of cushing file
csv2=pd.read_excel(open('data/Combined.xlsx', 'rb'),sheet_name='Main') # Loading of pricing data
df_inventory=csv1[['SeriesDate', 'Inventory']].copy()
# -
# # Data processing of inventory data
df_inventory['SeriesDate'] = pd.to_datetime(df_inventory.SeriesDate, format='%Y/%m/%d')
df_inventory=df_inventory.set_index('SeriesDate').resample('B').ffill().reset_index()
start_date = '2011-01-01'
mask = (df_inventory['SeriesDate'] > start_date)
df_inventory = df_inventory.loc[mask]
df_inventory.dropna(axis=0, inplace=True)
df_inventory.describe()
df_inventory.head()
# # Data processing of pricing data
csv2.dropna(axis=0, inplace=True) # dropping all NAN valued rows
csv2['Date'] = pd.to_datetime(csv2['Date']) # converting Date to datetime format
csv2.describe()
csv2.head()
# # Merging both data frames and further processing
#combined=pd.merge(df_inventory,csv2, how='right', on='SeriesDate')
combined=pd.merge(
df_inventory,
csv2,
left_on=['SeriesDate'],
right_on=['Date']
)
# %store combined
combined['year'] = pd.DatetimeIndex(combined['Date']).year
combined['month'] = pd.DatetimeIndex(combined['Date']).month
combined['day'] = pd.DatetimeIndex(combined['Date']).day
combined['weekday'] = pd.DatetimeIndex(combined['Date']).weekday # Monday is 0 and Sunday is 6
# Drop features
combined=combined.drop(columns=['SeriesDate','Date'])
combined.head()
combined.describe()
# # Getting Lag copies of all features
# ## Getting lag copies of input features
df_input_lag=combined.loc[:,'CL':'3&12']
df_input_lag.head()
df_input_lag.count()
# +
lags = range(1, 50)  # lags 1 through 49
df_input_lag=df_input_lag.assign(**{'{} (t-{})'.format(col, t): df_input_lag[col].shift(t)
for t in lags
for col in df_input_lag
})
# -
df_input_lag.head()
df_input_lag.count()
df_input_lag.dropna(axis=0, inplace=True) # dropping all NAN valued rows
df_input_lag.head()
df_input_lag.count()
df_input_lag.to_csv('input_data.csv') # saving data as csv file to be used with other modules
# ## Creating Lag copies of Output
out_columns=['Inventory']
df_inventory_lag=combined[out_columns]
# +
lags = range(1, 50, 6)  # lags 1, 7, 13, ..., 49
df_inventory_lag=df_inventory_lag.assign(**{'{} (t-{})'.format(col, t): df_inventory_lag[col].shift(t)
for t in lags
for col in df_inventory_lag
})
# -
df_inventory_lag.describe()
df_inventory_lag.count()
df_inventory_lag.dropna(axis=0, inplace=True) # dropping all NAN valued rows
df_inventory_lag.head()
df_inventory_lag.to_csv('inventory_data.csv') # saving data as csv file to be used with other modules
# **Which feature would you expect to be the most correlated with Yearly Amount Spent?**
# +
# length of membership ( most linear or correlated )
# -
# **Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership. **
sns.lmplot(x= 'Length of Membership',y ='Yearly Amount Spent',data = df)
# ## Training and Testing Data
#
# Now that we've explored the data a bit, let's go ahead and split the data into training and testing sets.
# ** Set a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column. **
df.columns
X = df[['Avg. Session Length', 'Time on App', 'Time on Website', 'Length of Membership']] # X should contain only the numerical features
Y = df['Yearly Amount Spent']
# ** Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101**
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split( X, Y, test_size=0.3, random_state=101)
# ## Training the Model
#
# Now its time to train our model on our training data!
#
# ** Import LinearRegression from sklearn.linear_model **
from sklearn.linear_model import LinearRegression
# **Create an instance of a LinearRegression() model named lm.**
lm = LinearRegression()
# ** Train/fit lm on the training data.**
lm.fit(X_train,Y_train)
# **Print out the coefficients of the model**
lm.coef_
# ## Predicting Test Data
# Now that we have fit our model, let's evaluate its performance by predicting off the test values!
#
# ** Use lm.predict() to predict off the X_test set of the data.**
predictions = lm.predict(X_test)
# ** Create a scatterplot of the real test values versus the predicted values. **
plt.scatter(Y_test,predictions)
plt.xlabel('Y_test(true values)')
plt.ylabel('Predicted values')
# ## Evaluating the Model
#
# Let's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2).
#
# ** Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas**
from sklearn import metrics
print('MAE:' , metrics.mean_absolute_error(Y_test , predictions))
print('MSE:' , metrics.mean_squared_error(Y_test,predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(Y_test,predictions)))
metrics.explained_variance_score(Y_test , predictions) # model explains 99% of variance
# ## Residuals
#
# You should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data.
#
# **Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist().**
sns.distplot((Y_test - predictions), bins = 50)
# ## Conclusion
# We still want to answer the original question: should we focus our efforts on the mobile app or on website development? Or maybe that doesn't really matter, and Length of Membership is what is truly important. Let's see if we can interpret the coefficients to get an idea.
#
# ** Recreate the dataframe below. **
cdf = pd.DataFrame(lm.coef_,X.columns,columns=['Coeff'])
cdf
# Holding the other features constant, a 1-unit increase in Avg. Session Length is associated with an
# increase of about 25.98 in Yearly Amount Spent; the other coefficients are read the same way.
# ** How can you interpret these coefficients? **
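# A small follow-up sketch using the cdf table above: putting the App and Website coefficients
# side by side makes the comparison explicit (a larger coefficient means a larger increase in
# Yearly Amount Spent per unit, holding the other features constant).
cdf.loc[['Time on App', 'Time on Website']].sort_values('Coeff', ascending=False)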
| 6,558 |
/Insurance claim.ipynb
|
d55aea9c54b690cd8b9cfcfd31ea904e46981e8d
|
[] |
no_license
|
Sai-Rho/share
|
https://github.com/Sai-Rho/share
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 2,937,810 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Predict the Automobile Insurance claim
# Using the provided dataset, we predict the automobile insurance claim amount with several regression models.
#
# Importing the required libraries as shown below
#
# Data Analysing
#
# Data Visualization
#
# Exploratory data analysis(EDA)
#
# Data Preprocessing
#
# Model Building
#
# Cross Validation and Prediction
#
# Grid Search
#
# Saving the model with joblib
#
# Conclusion
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn import metrics
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import GridSearchCV
import joblib  # sklearn.externals.joblib is deprecated/removed in recent scikit-learn; use the standalone joblib package
import warnings
warnings.filterwarnings('ignore')
data=pd.read_csv('insur_claim.csv')
data
# +
#dataset columns
data.columns
# +
#data types of columns level
data.dtypes
# +
#data info of columns
data.info()
# +
#finding null values column level
data.isnull().sum()
# +
#finging unique values in the columns
data.nunique()
# -
# # Univariate Plot
print(data['State'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['State'])
# Most of the customers are from:
#
# Missouri 3150,
# Iowa 2601,
# Nebraska 1703,
# Oklahoma 882,
# Kansas 798..
print(data['Response'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Response'])
# Here only 1,308 customers got a "Yes" response.
print(data['Coverage'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Coverage'])
print(data['Education'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Education'])
print(data['EmploymentStatus'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['EmploymentStatus'])
print(data['Gender'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Gender'])
print(data['Marital Status'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Marital Status'])
print(data['Number of Open Complaints'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Number of Open Complaints'])
print(data['Number of Policies'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Number of Policies'])
print(data['Policy Type'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Policy Type'])
print(data['Policy'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Policy'])
print(data['Claim Reason'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Claim Reason'])
print(data['Vehicle Class'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Vehicle Class'])
print(data['Vehicle Size'].value_counts())
plt.figure(figsize=(12,5))
sb.countplot(data['Vehicle Size'])
# # Bivariate Plot
plt.figure(figsize=(12,8))
sb.violinplot(x='Claim Reason',y='Claim Amount',hue='Response',data=data)
plt.figure(figsize=(28,8))
sb.lineplot(x='Income',y='Claim Amount',data=data)
plt.figure(figsize=(28,8))
sb.violinplot(x='Education',y='Claim Amount',data=data)
plt.figure(figsize=(28,8))
sb.barplot(x='Coverage',y='Claim Amount',hue='Response',data=data)
# +
#To get the descriptive analysis
data.describe()
# -
# # Data Preprocessing
col=[ 'Country','State Code', 'State',
'Response', 'Coverage', 'Education', 'Effective To Date',
'EmploymentStatus', 'Gender', 'Location Code',
'Marital Status', 'Policy Type', 'Policy', 'Claim Reason',
'Sales Channel', 'Vehicle Class', 'Vehicle Size']
# +
#label encoding the above columns
le=LabelEncoder()
data[col]=data[col].apply(lambda x:le.fit_transform(x))
data
# +
#data types by column
data.dtypes
# +
#dropping the columns that are not useful for building the model
data1=data.drop(['Customer', 'Country', 'State Code', 'State', 'Effective To Date', 'Gender', 'Location Code',
'Marital Status'],axis=1)
# +
# the Correlation between the different variables
data1.corr()
# +
# This heatmap shows the Correlation between the different variables
plt.figure(figsize=(12,12))
sb.heatmap(data1.corr(),annot=True)
# +
# This clustermap shows the Correlation between the different variables
plt.figure(figsize=(20,28))
sb.clustermap(data1.corr(),annot=True)
# -
data1.dtypes
# +
#one-hot encoding of the categorical columns
data2 = pd.get_dummies(data=data1, columns=["Policy","Claim Reason", "Sales Channel", "Vehicle Class", "Vehicle Size"])
data2.head(3)
# -
#
# Pairwise plot of the relationships in the dataset
#
# Multivariate plotting of the dataset
sb.pairplot(data1)
# # Model Building
# Assigning Independent and Dependent Variables
#
# Here x is considered as Independent variable
#
# y is Dependent Variables
x=data2.drop(['Claim Amount'],axis=1)
y=data2['Claim Amount']
print(x.shape)
print(y.shape)
# We will split the data into a training and a test part. The models will be trained on the training data set and tested on the test data set
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=.20,random_state=1)
# +
#working on RandomForestRegressor
#training and testing data using RandomForestRegressor
rfr=RandomForestRegressor()
rfr.fit(x_train,y_train)
pred=rfr.predict(x_test)
# +
#Finding mean_absolute_error,mean_squared_error, root_mean_square,r2 score on RandomForestRegressor
print('MAE :',metrics.mean_absolute_error(y_test,pred))
print('MSE :',metrics.mean_squared_error(y_test,pred))
print('RMSE :',np.sqrt(metrics.mean_squared_error(y_test, pred)))
print()
print('RMSLE:',np.sqrt(metrics.mean_squared_log_error(y_test, pred)))
print('r2 score :',metrics.r2_score(y_test, pred))
# -
# cross validation and prediction
#cross validation
score=cross_val_score(rfr,x,y,cv=5)
print('mean score :',score.mean())
print('STD score:',score.std())
print()
#cross prediction
predscore=cross_val_predict(rfr,x,y,cv=5)
print("cross prediction",predscore)
# +
#working on DecisionTreeRegressor
#training and testing data using DecisionTreeRegressor
dtr=DecisionTreeRegressor()
dtr.fit(x_train,y_train)
pred=dtr.predict(x_test)
# +
#Finding mean_absolute_error,mean_squared_error, root_mean_square,r2 score and rms_log_error on DecisionTreeRegressor
print('MAE :',metrics.mean_absolute_error(y_test,pred))
print('MSE :',metrics.mean_squared_error(y_test,pred))
print('RMSE :',np.sqrt(metrics.mean_squared_error(y_test, pred)))
print()
print('RMSLE:',np.sqrt(metrics.mean_squared_log_error(y_test, pred)))
print('r2 score :',metrics.r2_score(y_test, pred))
# +
#working on GradientBoostingRegressor
#training and testing data using GradientBoostingRegressor
gbr=GradientBoostingRegressor()
gbr.fit(x_train,y_train)
pred=gbr.predict(x_test)
# +
#Finding mean_absolute_error,mean_squared_error, root_mean_square on GradientBoostingRegressor
print('MAE :',metrics.mean_absolute_error(y_test,pred))
print('MSE :',metrics.mean_squared_error(y_test,pred))
print('RMSE :',np.sqrt(metrics.mean_squared_error(y_test, pred)))
print()
print('RMSLE:',np.sqrt(metrics.mean_squared_log_error(y_test, pred)))
print('r2 score :',metrics.r2_score(y_test, pred))
# -
# cross validation and prediction
#cross validation
score=cross_val_score(gbr,x,y,cv=5)
print('mean score :',score.mean())
print('STD score:',score.std())
print()
#cross prediction
predscore=cross_val_predict(gbr,x,y,cv=5)
print("cross prediction",predscore)
# # Grid Search
#
# GridSearchCV for RandomForestRegressor
param_grid = {
"n_estimators" : [10,20,30],
"max_features" : ["auto", "sqrt", "log2"],
"min_samples_split" : [2,4,8],
"bootstrap": [True],
}
gridscv = GridSearchCV(estimator=rfr, param_grid=param_grid, n_jobs=-1, cv=5)
gridscv.fit(x_train, y_train)
print(gridscv)
print('best score :',gridscv.best_score_)
print('best params :',gridscv.best_params_)
# GridSearchCV for GradientBoostingRegressor
param_grid = {'learning_rate': [0.1, 1],
'max_depth': [5,10]
}
gridscv = GridSearchCV( estimator=gbr, param_grid=param_grid, cv=5, n_jobs = -1)
gridscv.fit(x_train,y_train)
print(gridscv)
print('best score :',gridscv.best_score_)
print('best params :',gridscv.best_params_)
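# Hedged sketch: GridSearchCV refits the best parameter combination on the training split
# (refit=True by default), so the tuned GradientBoostingRegressor can be evaluated directly.
best_gbr = gridscv.best_estimator_
best_pred = best_gbr.predict(x_test)
print('tuned GBR r2 score :', metrics.r2_score(y_test, best_pred))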
# # Saving the model with joblib
# +
# Save the best model with the help of joblib and pickle
joblib.dump(rfr,'insurance.pkl')
# -
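# +
# Minimal sketch: the saved model can be loaded back and reused for prediction
# (assumes the 'insurance.pkl' file written above exists).
loaded_rfr = joblib.load('insurance.pkl')
print(loaded_rfr.predict(x_test[:5]))
# -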
# # Conclusion
# Imported the required libraries and analysed the dataset from several angles.
#
# Data visualization was done at the univariate, bivariate and multivariate levels to gain better insight into the data.
#
# Exploratory data analysis and data preprocessing were used to prepare the data for modeling.
#
# The data were trained and tested with different models, including evaluation metrics, cross validation and grid search.
#
# The best model was saved with joblib.
loss, corr, _ = session.run(variables,feed_dict=feed_dict)
# aggregate performance stats
losses.append(loss*actual_batch_size)
correct += np.sum(corr)
# print every now and then
if training_now and (iter_cnt % print_every) == 0:
print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"\
.format(iter_cnt,loss,np.sum(corr)/actual_batch_size))
iter_cnt += 1
total_correct = correct/Xd.shape[0]
total_loss = np.sum(losses)/Xd.shape[0]
print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"\
.format(total_loss,total_correct,e+1))
if plot_losses:
plt.plot(losses)
plt.grid(True)
plt.title('Epoch {} Loss'.format(e+1))
plt.xlabel('minibatch number')
plt.ylabel('minibatch loss')
plt.show()
return total_loss,total_correct
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# -
# ## Training a specific model
#
# In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model.
#
# Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture:
#
# * 7x7 Convolutional Layer with 32 filters and stride of 1
# * ReLU Activation Layer
# * Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
# * 2x2 Max Pooling layer with a stride of 2
# * Affine layer with 1024 output units
# * ReLU Activation Layer
# * Affine layer from 1024 input units to 10 outputs
#
#
# +
# clear old variables
tf.reset_default_graph()
# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
# define model
def complex_model(X,y,is_training):
pass
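    # One possible completion (a sketch, not the only solution) of the architecture
    # listed above; 'valid' padding and the use of tf.layers are assumptions here.
    conv = tf.layers.conv2d(X, filters=32, kernel_size=7, strides=1,
                            padding='valid', activation=tf.nn.relu)   # 7x7 conv, 32 filters, stride 1 + ReLU
    bn = tf.layers.batch_normalization(conv, training=is_training,
                                       center=True, scale=True)       # spatial batch norm
    pool = tf.layers.max_pooling2d(bn, pool_size=2, strides=2)        # 2x2 max pool, stride 2
    flat = tf.reshape(pool, [-1, 13 * 13 * 32])                       # 26x26 feature map -> 13x13 after pooling
    fc = tf.layers.dense(flat, 1024, activation=tf.nn.relu)           # affine layer with 1024 units + ReLU
    y_out = tf.layers.dense(fc, 10)                                   # affine 1024 -> 10 logits
    return y_out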
y_out = complex_model(X,y,is_training)
# -
# To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
# Now we're going to feed a random batch into the model
# and make sure the output is the right size
x = np.random.randn(64, 32, 32,3)
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
# %timeit sess.run(y_out,feed_dict={X:x,is_training:True})
print(ans.shape)
print(np.array_equal(ans.shape, np.array([64, 10])))
# You should see the following from the run above
#
# `(64, 10)`
#
# `True`
# ### GPU!
#
# Now, we're going to try and start the model under the GPU device, the rest of the code stays unchanged and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80ms/batch running the above, while the Google Cloud GPUs (run below) should be around 2-5ms/batch.
try:
with tf.Session() as sess:
with tf.device("/gpu:0") as dev: #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
# %timeit sess.run(y_out,feed_dict={X:x,is_training:True})
except tf.errors.InvalidArgumentError:
print("no gpu found, please use Google Cloud if you want GPU acceleration")
# rebuild the graph
# trying to start a GPU throws an exception
# and also trashes the original graph
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = complex_model(X,y,is_training)
# You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.
# ### Train the model.
#
# Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the complex_model you created above).
#
# Make sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation.
#
# First, set up an **RMSprop optimizer** (using a 1e-3 learning rate) and a **cross-entropy loss** function. See the TensorFlow documentation for more information
# * Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn
# * Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers
# Inputs
# y_out: is what your model computes
# y: is your TensorFlow variable with label information
# Outputs
# mean_loss: a TensorFlow variable (scalar) with numerical loss
# optimizer: a TensorFlow optimizer
# This should be ~3 lines of code!
mean_loss = None
optimizer = None
pass
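# One possible completion (sketch): softmax cross-entropy on the logits and an
# RMSProp optimizer with the 1e-3 learning rate mentioned above.
mean_loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=y_out))
optimizer = tf.train.RMSPropOptimizer(learning_rate=1e-3)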
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
train_step = optimizer.minimize(mean_loss)
# ### Train the model
# Below we'll create a session and train the model over one epoch. You should see a loss of 1.4 to 2.0 and an accuracy of 0.4 to 0.5. There will be some variation due to random seeds and differences in initialization
# +
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step)
# -
# ### Check the accuracy of the model.
#
# Let's see the train and test code in action -- feel free to use these methods when evaluating the models you develop below. You should see a loss of 1.3 to 2.0 with an accuracy of 0.45 to 0.55.
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# ## Train a _great_ model on CIFAR-10!
#
# Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves ** >= 70% accuracy on the validation set** of CIFAR-10. You can use the `run_model` function from above.
# ### Things you should try:
# - **Filter size**: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
# - **Number of filters**: Above we used 32 filters. Do more or fewer do better?
# - **Pooling vs Strided Convolution**: Do you use max pooling or just stride convolutions?
# - **Batch normalization**: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
# - **Network architecture**: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
# - [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
# - [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
# - [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
# - **Use TensorFlow Scope**: Use TensorFlow scope and/or [tf.layers](https://www.tensorflow.org/api_docs/python/tf/layers) to make it easier to write deeper networks. See [this tutorial](https://www.tensorflow.org/tutorials/layers) for how to use `tf.layers`.
# - **Use Learning Rate Decay**: [As the notes point out](http://cs231n.github.io/neural-networks-3/#anneal), decaying the learning rate might help the model converge. Feel free to decay every epoch, when loss doesn't change over an entire epoch, or any other heuristic you find appropriate. See the [Tensorflow documentation](https://www.tensorflow.org/versions/master/api_guides/python/train#Decaying_the_learning_rate) for learning rate decay.
# - **Global Average Pooling**: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image picture (1, 1 , Filter#), which is then reshaped into a (Filter#) vector. This is used in [Google's Inception Network](https://arxiv.org/abs/1512.00567) (See Table 1 for their architecture).
# - **Regularization**: Add l2 weight regularization, or perhaps use [Dropout as in the TensorFlow MNIST tutorial](https://www.tensorflow.org/get_started/mnist/pros)
#
# ### Tips for training
# For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
#
# - If the parameters are working well, you should see improvement within a few hundred iterations
# - Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
# - Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
# - You should use the validation set for hyperparameter search, and we'll save the test set for evaluating your architecture on the best parameters as selected by the validation set.
#
# ### Going above and beyond
# If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these; however they would be good things to try for extra credit.
#
# - Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
# - Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
# - Model ensembles
# - Data augmentation
# - New Architectures
# - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output.
# - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.
# - [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32)
#
# If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
#
# ### What we expect
# At the very least, you should be able to train a ConvNet that gets at **>= 70% accuracy on the validation set**. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
#
# You should use the space below to experiment and train your network. The final cell in this notebook should contain the training and validation set accuracies for your final trained network.
#
# Have fun and happy training!
# +
# Feel free to play with this cell
def my_model(X,y,is_training):
pass
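    # Illustrative baseline only (a sketch, not a tuned final model): two
    # [conv-relu-batchnorm-pool] blocks followed by affine layers, one of the
    # suggested patterns from the list above.
    h = tf.layers.conv2d(X, 32, 3, padding='same', activation=tf.nn.relu)
    h = tf.layers.batch_normalization(h, training=is_training)
    h = tf.layers.max_pooling2d(h, 2, 2)
    h = tf.layers.conv2d(h, 64, 3, padding='same', activation=tf.nn.relu)
    h = tf.layers.batch_normalization(h, training=is_training)
    h = tf.layers.max_pooling2d(h, 2, 2)
    h = tf.reshape(h, [-1, 8 * 8 * 64])   # 32x32 -> 16x16 -> 8x8 spatial size
    h = tf.layers.dense(h, 512, activation=tf.nn.relu)
    return tf.layers.dense(h, 10)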
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = my_model(X,y,is_training)
mean_loss = None
optimizer = None
pass
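# Sketch of a matching loss/optimizer pair for the baseline above (Adam here is an
# assumption; RMSProp or SGD+momentum would work just as well).
mean_loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=y_out))
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)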
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
train_step = optimizer.minimize(mean_loss)
# +
# Feel free to play with this cell
# This default code creates a session
# and trains your model for 10 epochs
# then prints the validation set accuracy
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,10,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# -
# Test your model here, and make sure
# the output of this cell is the accuracy
# of your best model on the training and val sets
# We're looking for >= 70% accuracy on Validation
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# ### Describe what you did here
# In this cell you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network
# _Tell us here_
# ### Test Set - Do this only once
# Now that we've gotten a result that we're happy with, we test our final model on the test set. This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.
print('Test')
run_model(sess,y_out,mean_loss,X_test,y_test,1,64)
# ## Going further with TensorFlow
#
# The next assignment will make heavy use of TensorFlow. You might also find it useful for your projects.
#
# # Extra Credit Description
# If you implement any additional features for extra credit, clearly describe them here with pointers to any code in this or other files if applicable.
| 23,276 |
/session_14/session_14_solutions/Time Series Practice Solution.ipynb
|
6da1204da31b63ecd0406a62970284be2441ecaa
|
[] |
no_license
|
paulhoffman98/Python-Data-Analytics
|
https://github.com/paulhoffman98/Python-Data-Analytics
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 53,939 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Steph Curry Shot Analysis
#
#
# In this practice, we will answer the questions posed at the beginning of the PowerPoint lecture. We will plot Curry's cumulative field goal percentage over the games and then, on the same axis, plot his cumulative average distance from the closest defender (for each shot) over the games.
#Import pandas, numpy, and matplotlib here
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# +
#Read in the data here - make the date the index
df_curry = pd.read_csv("Data/Stephen_Curry_Shots.csv",\
parse_dates=["Date"], index_col = "Date")
#Sort the data frame by the index using sort_index
df_curry.sort_index(inplace = True, ascending = True)
df_curry.columns = df_curry.columns.str.lower()
df_curry.head()
# -
# Next we need to find the number of shots taken and made by Curry in each game. The method cumsum() might be useful.
# +
#Resample daily
df_shots = df_curry.fgm.resample(rule="D").\
agg(["sum", "count"]).dropna()
df_shots.columns = ["Made", "Taken"]
#do cumulative sums
df_shots_cumulative = df_shots.cumsum()
#compute FG percentage
df_shots_cumulative["FG_Percentage"] = df_shots_cumulative.Made/\
df_shots_cumulative.Taken
df_shots_cumulative.head()
# -
# Now do the same thing for the average distance from the closest defender. You might find the expanding() method useful.
# +
df_def = df_curry.close_def_dist.resample("D").mean().dropna()
df_def_cumulative = df_def.expanding().mean()
df_def_cumulative.head()
# -
# Now we create the plot. You may find the ax.twinx() useful.
# +
# %matplotlib inline
plt.style.use("fivethirtyeight")
fig, ax = plt.subplots()
df_shots_cumulative.FG_Percentage.plot(ax = ax,\
color = "b",\
label = "FG %")
ax.legend(loc = 0)
ax1 = ax.twinx()
df_def_cumulative.plot(ax=ax1, color = "g", label = "Dist. Def.")
ax1.legend(loc= 0)
# -
import os
import sys
import math
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import cv2
# own python codes
sys.path.append(os.path.join(os.getcwd(), '..'))
from utils import *
from tensorflow.keras.datasets.cifar10 import load_data
from skimage import feature
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
import numpy as np
class LocalBinaryPatterns:
def __init__(self, numPoints, radius):
# store the number of points and radius
self.numPoints = numPoints
self.radius = radius
def describe(self, image):
# compute the Local Binary Pattern representation
# of the image, and then use the LBP representation
# to build the histogram of patterns
lbp = feature.local_binary_pattern(image, self.numPoints,
self.radius)
#flatten
(hist, _) = np.histogram(lbp.ravel(),
bins=16)
# return the histogram of Local Binary Patterns
return hist, lbp
def get_batch(num,data,labels):
idx = np.arange(0,len(data))
np.random.shuffle(idx)
idx = idx[:num]
data_shuffle = [data[i] for i in idx]
labels_shuffle = [labels[i] for i in idx]
return np.asarray(data_shuffle), np.asarray(labels_shuffle)
# -
# skimage.feature.local_binary_pattern(image, P, R, method)
# - image: (N, M) array
# - P: number of neighbors
# - R: number of radius
# (P = R * 8)
# - method: {'default', 'ror', 'uniform', 'var'}
#
# output: (N, M) array
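# A minimal usage sketch on a random dummy image (purely illustrative), reusing the numpy
# and skimage imports above; the output map keeps the input shape.
dummy_img = np.random.randint(0, 256, (32, 32)).astype('uint8')
dummy_lbp = feature.local_binary_pattern(dummy_img, P=8, R=1)
print(dummy_lbp.shape)  # -> (32, 32)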
# ### Download the CIFAR-10 dataset
# The **CIFAR-10** dataset consists of 32x32 images of airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. Each class has 6,000 images; 50,000 of them are used for training and 10,000 for testing. Using the code provided by the TF-Slim library, the CIFAR-10 dataset is saved into a subfolder of the current working directory.
#
# *CIFAR datasets URL*: https://www.cs.toronto.edu/~kriz/cifar.html
#
# *CIFAR-10 download link*: https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
(x_train,y_train), (x_test,y_test) = load_data()
# ### Create data loader and check the CIFAR-10 dataset
# Declare a data loader class that can read the downloaded data in batches. The current data loader simply returns images and labels in batch-sized chunks through get_batch() and applies no special preprocessing here; in practice, preprocessing such as random crops and flipping is usually added for better performance.
loader = CIFAR10_loader()
class_names = loader.get_class_names()
batch = loader.get_batch(9)
fig = plot_images(batch['images'], batch['labels'], class_names)
fig.show()
# Define the number of points and the radius for the LBP descriptor, and declare the variables that will store the LBP features, HOG features and ground-truth labels extracted from each training image.
# +
# initialize the local binary patterns descriptor along with
# the data and label lists
desc = LocalBinaryPatterns(8, 1) # num of points and radius
train_lbp = []
train_hog = []
train_labels = []
# -
# To extract the LBP and HOG features, convert each image to grayscale and compute the features with the functions provided by the scikit-image library.
# +
# set iterator as 0
loader.reset()
for ii in range(50000):
# Load a batch data
batch = loader.get_batch(1, 'train')
images = batch['images'].reshape(32,32,3) * 255
images = images.astype('uint8')
gray = cv2.cvtColor(images, cv2.COLOR_BGR2GRAY)
fd = feature.hog(gray, orientations=9, pixels_per_cell=(8, 8),
cells_per_block=(4, 4))
hist, lbp = desc.describe(gray)
if ii % 1000 == 0:
print("feature gen: %d" % ii)
print(hist)
print(len(fd))
print(lbp.max())
print(lbp.min())
train_labels.append(batch['labels'])
train_lbp.append(hist)
train_hog.append(fd)
# -
# ### Declaring the Random Forest classifier
# The classifier is created with RandomForestClassifier() from the scikit-learn library. The arguments it accepts are documented in detail at the link below; the most important ones are listed here, followed by a purely illustrative instantiation.
#
# link: *http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier*
#
#
# - n_estimators: number of trees
# - criterion: gini / entropy (how impurity is computed)
# - max_features: number of features to consider at each split
# - max_depth: maximum depth of each tree
# - min_samples_split: minimum number of samples required to split an internal node
#
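# Purely illustrative instantiation of the arguments listed above (the values are arbitrary, not tuned):
clf_demo = RandomForestClassifier(n_estimators=100, criterion='gini',
                                  max_features='sqrt', max_depth=20, min_samples_split=2)
print(clf_demo)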
# +
# Create a random forest Classifier. By convention, clf means 'Classifier'
clf = RandomForestClassifier()
# Train the Classifier to take the training features and learn how they relate
# to the training y (the species)
clf.fit(np.concatenate((train_lbp, train_hog), axis=1), train_labels)
# -
# Declare the variables that will store the LBP and HOG features and the ground-truth labels extracted from the test set.
# +
loader.reset()
test_lbp = []
test_hog = []
test_labels = []
for ii in range(10000):
# Load a batch data
batch = loader.get_batch(1, 'test')
images = batch['images'].reshape(32,32,3) * 255
images = images.astype('uint8')
gray = cv2.cvtColor(images, cv2.COLOR_BGR2GRAY)
fd = feature.hog(gray, orientations=9, pixels_per_cell=(8, 8),
cells_per_block=(4, 4))
hist, lbp = desc.describe(gray)
if ii % 1000 == 0:
print("feature gen: %d" % ii)
print(len(hist))
print(len(fd))
test_labels.append(batch['labels'])
test_lbp.append(hist)
test_hog.append(fd)
# -
# Classify the test-set feature vectors with the trained Random Forest classifier.
# +
# Apply the Classifier we trained to the test data (which, remember, it has never seen before)
pred_labels = []
pred_labels = clf.predict(np.concatenate((test_lbp, test_hog), axis=1))
print(len(pred_labels))
print(len(test_labels))
test_labels = [ int(x) for x in test_labels ]
# -
# Build a confusion matrix from the ground-truth and predicted labels.
# Create confusion matrix
confusion_matrix(test_labels, pred_labels)
# Measure the accuracy.
#
# - true positive (TP): number of hits
# - true negative (TN): number of correct rejections
# - false positive (FP): number of false hits
# - false negative (FN): number of misses
#
#
# - precision: TP / (TP + FP)
# - recall: TP / (TP + FN)
# - f1-score (harmonic mean of precision and recall): 2 * precision * recall / (precision + recall)
#
# A small worked example follows after the classification report below.
print(classification_report(test_labels, pred_labels))
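# A small worked example with made-up counts, to make the precision/recall/F1 definitions above concrete:
TP, FP, FN = 70, 30, 20
precision = TP / (TP + FP)                          # 0.70
recall = TP / (TP + FN)                             # ~0.778
f1 = 2 * precision * recall / (precision + recall)  # ~0.737
print(precision, recall, f1)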
| 8,397 |
/산학_기성모델 코드/Private_6위, Public 점수 _0.43211, 1DCNN+Transformer.ipynb
|
1b22b59b6eef2ea459e25fe326b54dfa253b2290
|
[] |
no_license
|
kdhyun2/Study-for-Bigdata
|
https://github.com/kdhyun2/Study-for-Bigdata
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 65,625 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lists and Tuples in Python
# Lists and tuples are among the most versatile data types in Python. Let's dive into them one by one.
# ## Python Lists
# A list is an ordered collection of elements, similar to an array, but it does not have to be homogeneous.
a = ['anish', 'debayan', 'richard', 'alexis']
print(a) #Simple way to print a list
b = ['debayan', 'alexis', 'richard', 'anish']
a==b #Lists are ordered so the lists are not the same in this case
a = ['hello', 123, 'Richard', True, 5.675] #List can contain non-homogenous objects
print(a)
# +
def foo():
pass
import math
a = [int, len, foo, math] #List can even hold complex objects
print(a)
# +
#Indexing of the List
a = ['debayan', 'alexis', 'richard', 'anish']
#Printing single element
print(a[0])
print(a[3])
print(a[-2])
print(a[-3])
#Printing multiple elements
print(a[:])
print(a[2:])
print(a[:3])
print(a[2:3])
# -
#Appending to list
a=a+['harshit', 'umang']
print(a)
#Warning: using += with a string extends the list with the string's individual characters
a = ['foo', 'bar', 'baz']
a += 'dorm'
print(a)
#Multiplying entries
print(a*2) #The list now contains the elements twice but in the original order
#Few basic properties
a = [1, 4, 6, 8, 2, 9]
print(len(a))
print(max(a))
print(min(a))
#Lists can be nested
b = [1, 2, [4, 5], 6, [3], 9]
print(b[0])
print(b[2])
#Deleting from a list
del a[3]
print(a)
#Lists are modifiable
a[2:4] = [10, 11]
print(a)
a = [1, 2, 3]
a[1] = [5, 6]
print(a)
# ## Tuples
# Python tuples are similar to lists in almost every respect, except for the following two differences:
# 1. They are defined within parentheses '(' and ')'
# 2. They are immutable
# +
tup = ('anish', 'debayan', 'subha', 'harshit')
print(tup)
print(tup[0])
print(tup[2: 4])
# -
#Printing in reverse
print(tup[::-1])
#Proving the point about immutability: assigning to an element raises a TypeError
tup[2] = 'umang'
print(tup)
# Reasons to use tuples over lists (an example for point 3 follows below):
# 1. Program execution is slightly faster
# 2. Sometimes the developer wants to safeguard the data and make it unmodifiable
# 3. Tuples are immutable (hashable), so they can be used as keys in Python dictionaries
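# Example for point 3: tuples are immutable (hashable), so they can serve as dictionary keys
coords = {(52.52, 13.40): 'Berlin', (48.86, 2.35): 'Paris'}
print(coords[(52.52, 13.40)])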
# +
#Another trick: tuple unpacking
print(tup)
(a, b, c, d) = tup
print(a)
print()
ft_test = te_id.copy()
# - Create explanatory features by aggregating the max, min, mean and std of each variable
# - Create explanatory features by aggregating the mean, std and sum of each variable's gradient
cols = ['id']
cols.extend(train.iloc[:,2:].columns.tolist())
f_list = [
('max', 'max' ),
('min', 'min' ),
('mean', 'mean' ),
('std', 'std' ),
(
'gradient_mean',
lambda x : np.gradient(x).mean()
),
(
'gradient_std',
lambda x : np.gradient(x).std()
),
(
'gradient_sum',
lambda x : np.gradient(x).sum()
),
]
# training data
f_ = train[cols].groupby("id").agg(f_list)
f_.columns = [ f"{c1}_{c2}" for c1, c2 in f_.columns ]
f_ = f_.reset_index()
f_
ft_train = pd.merge(ft_train,f_,how="left")
# test data
f_ = test[cols].groupby("id").agg(f_list)
f_.columns = [ f"{c1}_{c2}" for c1, c2 in f_.columns ]
f_ = f_.reset_index()
f_
ft_test = pd.merge(ft_test,f_,how="left")
# ### - Creating additional explanatory features to feed the Transformer encoder
# - Adds the explanatory feature shared by datamanim on the code-sharing board (thank you.)
# training data
train['acc_t'] =(train['acc_x']**2+train['acc_y']**2+train['acc_z']**2)**(1/3)
# test data
test['acc_t'] =(test['acc_x']**2+test['acc_y']**2+test['acc_z']**2)**(1/3)
# - Create explanatory features from the first difference of each variable (the first value is filled with 0)
# +
# training data
f_ = train.groupby("id").progress_apply(
lambda x : x.iloc[:,2:].diff().fillna(0)
).add_prefix("diff_")
train = pd.concat([train,f_],axis=1)
# test data
f_ = test.groupby("id").progress_apply(
lambda x : x.iloc[:,2:].diff().fillna(0)
).add_prefix("diff_")
test = pd.concat([test,f_],axis=1)
# -
# ### - Scaling
ft_sc = StandardScaler()
train.iloc[:,2:] = ft_sc.fit_transform(train.iloc[:,2:]) # training data
test.iloc[:,2:] = ft_sc.transform(test.iloc[:,2:]) # test data
ft_sc = StandardScaler()
ft_train = ft_sc.fit_transform(ft_train.iloc[:,1:]) # training data
ft_test = ft_sc.transform(ft_test.iloc[:,1:]) # test data
# ### - Setting up the training and test data
# +
ft_cnt = train.iloc[:,2:].columns.shape[0] # number of explanatory variables
X_train = np.array(train.iloc[:,2:])
X_test = np.array(test.iloc[:,2:])
# reshape dimensions
X_train = X_train.reshape(-1, 600, ft_cnt)
X_test=X_test.reshape(-1, 600, ft_cnt)
y = tf.keras.utils.to_categorical(train_labels['label'])
X_train.shape , X_test.shape , y.shape
# -
# <font color="red"><Br>
# # 3. Training and prediction
# - The Transformer part follows the example at the link below.
#
# https://keras.io/examples/nlp/text_classification_with_transformer/
# ### - Modeling
def transformer_block(inputs,node,drop_rate,activation):
attn_output = keras.layers.MultiHeadAttention(num_heads=2,
key_dim=node)(inputs, inputs)
attn_output = keras.layers.Dropout(drop_rate)(attn_output)
out1 = keras.layers.LayerNormalization(epsilon=1e-6)(inputs + attn_output)
ffn_output = keras.layers.Dense(node, activation=activation)(out1) #
ffn_output = keras.layers.Dense(node)(ffn_output) #
ffn_output = keras.layers.Dropout(drop_rate)(ffn_output)
return keras.layers.LayerNormalization(epsilon=1e-6)(out1 + ffn_output)
# - The model takes a total of three inputs for training
# - Two inputs pass through 1D CNNs and then feed the Transformer encoder
# - One input goes through a separate small hidden layer
# - The three outputs pass through an Average layer and then a softmax to produce the final prediction
def my_dnn_model(node=64,activation='relu', drop_rate = 0.2 ,loss="categorical_crossentropy",
optimizer="rmsprop",metrics=['accuracy']):
avg_list = []
inputs_list = []
for i in range(3):
if i < 2:
inputs = keras.Input(shape=(600, 7))
x = keras.layers.Conv1D(node*2, 5, activation=activation)(inputs)
x = keras.layers.MaxPooling1D(3)(x)
x = keras.layers.Dropout(drop_rate)(x)
x = keras.layers.Conv1D(node, 5, activation=activation)(x)
x = keras.layers.MaxPooling1D(3)(x)
x = keras.layers.Dropout(drop_rate)(x)
positions = tf.range(start=0, limit=x.shape[1], delta=1,dtype="float32")
positions = keras.layers.Embedding(input_dim=x.shape[1], output_dim=node)(positions)
x = x + positions
x = transformer_block(x,node,drop_rate,activation)
x = keras.layers.GlobalMaxPooling1D()(x)
x = keras.layers.Dropout(drop_rate)(x)
avg_list.append(x)
else:
inputs = keras.Input(shape=(42,))
x = inputs
x = keras.layers.Dense(node, activation=activation)(x)
x = keras.layers.Dropout(drop_rate)(x)
x = keras.layers.Dense(node, activation='softmax')(x)
avg_list.append(x)
inputs_list.append(inputs)
x = keras.layers.Average()(avg_list)
outputs = keras.layers.Dense(61, activation='softmax')(x)
model = keras.Model(inputs=inputs_list, outputs=outputs)
model.compile(loss=loss, optimizer=optimizer,metrics=metrics)
return model
model = my_dnn_model()
model.summary()
# - The augmentation code shared by DACON.Dobby was adapted into the function below (thank you.)
def aug_data(data , data_name, n=0 ,shift=False,list_ = False):
"""데이터 증강 함수
Args:
data (numpy array): 증강할 데이터
data_name (str): print 용
n (int): 증강 데이터 세트수
shift (bool): shift 사용 여부
list_ (bool): list 묶음으로 반환 여부
Returns:
numpy array or list:
"""
data_ = data.copy()
if list_:
data_ = [data]
print(f"##### {data_name} 데이터 {n} 개 증강... #####")
for _ in range(n):
if shift:
shift_n = int(random.random()*600)
print(f"shift num : {shift_n}")
r_idx = np.roll(np.arange(600), shift_n)
if list_:
data_.append(np.array(data[:,r_idx], np.float32))
else:
data_ = np.concatenate( ( data_, np.array(data[:,r_idx], np.float32) ),axis=0 )
else:
if list_:
data_.append(data)
else:
data_ = np.concatenate( ( data_, data ),axis=0 )
print("# 완료!!")
return data_
# ### - Training and prediction
# - Training is done on augmented training data (the validation data is not augmented during fitting)
# - For validation, in each CV fold the validation data is augmented, each augmented set is predicted, the predictions are arithmetically averaged, and the resulting log loss is checked and stored
# - Test-set predictions for each fold are generated in the same way as the validation predictions
# - The predictions from the 5 CV folds are arithmetically averaged to form the final prediction
# +
idx = int(X_train.shape[-1] / 2) # index separating the original features from their differenced copies
holdout_break = False
aug_n = 10 # number of augmented data sets
final_pred_list = [] # list of final predictions
log_loss_list = [] # list of log loss scores
# augment the test data
reset_seeds(SEED)
X_test_aug = aug_data(X_test,"X_test" ,n=aug_n,shift=True,list_=True)
ft_test_aug = aug_data(ft_test,"ft_test",n=aug_n,list_=True)
model_idx = 0
kf = KFold(n_splits=5,random_state=0,shuffle=True)
for tri, tei in kf.split(X_train,y):
reset_seeds(SEED)
early_stop = EarlyStopping(monitor='val_loss', patience=7)
mc = ModelCheckpoint(f'best_model{model_idx}.h5', monitor = 'val_loss', mode = 'min',
verbose = 1, save_best_only = True)
# model = my_dnn_model()
model
    #augment the training data
X_tri = aug_data(X_train[tri],"X_train[tri]",n=aug_n,shift=True)
y_tri = aug_data(y[tri],"y[tri]",n=aug_n)
ft_train_tri = aug_data(ft_train[tri],"ft_train[tri]",n=aug_n)
tri_list = [ X_tri[:,:,:idx] , X_tri[:,:,idx:] , ft_train_tri ]
    #validation data is not augmented
tei_list = [ X_train[tei,:,:idx] , X_train[tei,:,idx:] , ft_train[tei] ]
with tf.device("/CPU:0"):
history = model.fit(tri_list, y_tri , epochs=100, batch_size=128,callbacks=[early_stop,mc],
validation_data=(tei_list, y[tei]),
)
    # augment the validation data, predict each augmented set, average, and check the log loss
reset_seeds(SEED)
X_tei_aug = aug_data(X_train[tei],"X_train[tei]",n=aug_n,shift=True,list_=True)
ft_train_tei_aug = aug_data(ft_train[tei],"ft_train[tei]",n=aug_n,list_=True)
aug_preds = []
    loaded_model = load_model(f'best_model{model_idx}.h5') # load the best model
for X_aug,ft_aug in zip(X_tei_aug,ft_train_tei_aug):
aug_preds.append(
loaded_model.predict([ X_aug[:,:,:idx] , X_aug[:,:,idx:] ,ft_aug ])
)
aug_preds = np.mean(aug_preds,axis=0)
score = log_loss( y[tei] , aug_preds )
log_loss_list.append(score)
print(f"log_loss : {score}")
    # predict each augmented test set, average, and store the result
aug_preds = []
for X_aug,ft_aug in zip(X_test_aug,ft_test_aug):
aug_preds.append(
loaded_model.predict([ X_aug[:,:,:idx] , X_aug[:,:,idx:] ,ft_aug ])
)
aug_preds = np.mean(aug_preds,axis=0)
final_pred_list.append(aug_preds)
model_idx += 1
if holdout_break:
break
# -
print(f"log_loss mean: {np.array(log_loss_list).mean()}")
print(f"log_loss std: {np.array(log_loss_list).std()}")
final_pred = np.mean(final_pred_list,axis=0)
log_loss_list
# ### - Exporting the prediction (submission) file
final_submit = submission.copy()
final_submit.iloc[:,1:]=final_pred
final_submit
final_submit.to_csv(f'{SUB_PATH}submission_{np.array(log_loss_list).mean()}.csv', index=False)
print("끝!!")
# # End.
| 11,405 |
/script/cavitanalysis.ipynb
|
f2bb7faba35416963beea697b26c5641ad769896
|
[] |
no_license
|
xbouteiller/FormanriskAnalysis
|
https://github.com/xbouteiller/FormanriskAnalysis
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 5,473,479 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Load data and define variables
# +
#encoding: utf8
# %load_ext autoreload
# %autoreload 2
import matplotlib.pyplot as pl
import interactive as intr
import visualize as vis
import danframe as dan
import kontin as con
import lines as lin
import numpy as np
s6405_t5p = dan.frameseries("data/6405_aS1","top 5%")
s6405_seg = dan.frameseries("data/6405_aS1","segments")
lins = lin.make_lines_from_wins(s6405_t5p,s6405_t5p.pkwindows)
FeI = lins[1]
SiFe = lins[2]
myst = lins[3]
mes = {}
mes["FeI top 5%"] = FeI.measure_linecores(s6405_t5p)
mes["FeI segm" ] = FeI.measure_linecores(s6405_seg)
mes["Myst top 5%"] = myst.measure_linecores(s6405_t5p)
mes["Myst segm" ] = myst.measure_linecores(s6405_seg)
Myst_t5_cont = mes["Myst top 5%"][slice(lin.lc.cont,63,3),1:-1].reshape((-1))
Myst_t5_lcen = mes["Myst top 5%"][slice(lin.lc.lbot,63,3),1:-1].reshape((-1))
mes["FeI top 5% by line"] = FeI.byline_measure_linecores(s6405_t5p)
mes["Myst top 5% by line"] = myst.measure_linecores(s6405_t5p)
#Odd measurements
# First frame
# Myst segm : 10,406,693, (695)
# Myst top 5% : 10,406,693, (695)
# 406,693,695 - an extreme value at the edge
# -
# ### Examining scatter of line bottom
# Plotting the relative line intensity at the line bottom versus the continuum value (as given by the linear continuum fit) at the line centre, we see clear clustering, but with some suspicious outliers: negative intensities as well as relative intensities larger than one...
# +
# %matplotlib inline
pl.rcParams["figure.figsize"] = (10,6) # Bigger figures
pl.plot(mes["FeI top 5%"][lin.lc.cont,:],mes["FeI top 5%"][lin.lc.lbot,:],'bo',alpha=0.2);
pl.title("Bottom, FeI " + str(FeI))
pl.ylabel("Relative intensity of line bottom")
pl.xlabel("Continuum value at line centre")
pl.show()
# -
# ### Manually checking the fit on one of the outlier fits
# Choosing one of the outliers with negative relative line intensity, specifically the one from row 313.
#
# First load the data
# %matplotlib inline
pl.rcParams["figure.figsize"] = (10,6) # Bigger figures
frm = s6405_t5p.frames[0];
ref,lmd = frm.group.ref,frm.group.lmbd
s313 = frm.spec(313) # The frame row that gives the outlier
# The method for identifying the line bottom is very simple: take the minimal value (of the reference spectrum in the line window) as the guess of the central value, then use that point and four points on either side to fit a second-order polynomial by least squares.
#
# Doing this with the same FeI line shown in the scatter plot above, for one of the outliers, results in the plot below; the blue line is the spectrum, the red is the fit, and the black stars are the points that the method outlined above selected as the bottom of the line.
# +
guess = frm.group.ref[FeI.idx].argmin()
bottom = FeI.idx[slice(guess-4,guess+5)]
a,b,c = np.polyfit(lmd[bottom],s313[bottom],2)
fit = np.polyval((a,b,c),lmd[FeI.idx])
pl.step(lmd[FeI.idx],s313[FeI.idx],'b')
pl.plot(lmd[bottom],s313[bottom],'*k')
pl.plot(lmd[FeI.idx[23:38]],fit[23:38],'r')
pl.show()
# -
# It is clear that the simple procedure above for identifying the line bottom does not handle cases where the line centre shifts too far away from its position in the reference spectrum.
#
# A better method would examine each row of a frame individually, finding the minimum at each row and taking the neighbourhood of that as the line bottom on which to perform the fit. The result of this method on the same line as above is shown below
# +
bottom = select_bottom(FeI,s313)
a,b,c = np.polyfit(lmd[bottom],s313[bottom],2)
fit = np.polyval((a,b,c),lmd[FeI.idx])
pl.step(lmd[FeI.idx],s313[FeI.idx],'b')
pl.plot(lmd[bottom],s313[bottom],'*k')
pl.plot(lmd[FeI.idx[31:43]],fit[31:43],'r')
pl.show()
# -
# Employing this method on the entire frame used for the scatter plot shown above significantly improves the most extreme scatter:
pl.plot(mes["FeI top 5% by line"][lin.lc.cont,:-1],mes["FeI top 5% by line"][lin.lc.lbot,:-1],'bo',alpha=0.2);
pl.title("Bottom, FeI" + str(FeI) + " fitted by row")
pl.show()
# #### The largest outlier of the unknown line
# Investigating one of the outliers of the unknown line that remains after using the improved method. It seems like this line requires smoothing before a fit can meaningfully be applied, in contrast to the line above.
# +
# %matplotlib inline
pl.rcParams["figure.figsize"] = (10,6) # Bigger figures
frm = s6405_t5p.frames[0]
spe = frm.spec(10) # Outlier
#spe = s6405_t5p.ref
ref = frm.group.ref
lmd = frm.group.lmbd
ln = myst
guess = frm.group.ref[ln.idx].argmin()
bottom = select_bottom(ln,spe,3)
pl.step(lmd[ln.idx],spe[ln.idx],'b')
pl.plot(lmd[bottom],spe[bottom],'.k')
pl.plot(lmd[ln.idx[guess]],spe[ln.idx[guess]],'xr')
pl.show()
# +
a,b,c = np.polyfit(lmd[bottom],spe[bottom],2)
lam_min = -b/(2*a)
lin_bot = np.polyval((a,b,c),lam_min)
fit = np.polyval((a,b,c),lmd[ln.idx])
pl.step(lmd[ln.idx],spe[ln.idx],'b')
pl.plot(lmd[bottom],spe[bottom],'.k')
pl.plot(lmd[ln.idx[40:63]],fit[40:63],'r')
pl.plot(lmd[ln.idx[guess]],spe[ln.idx[guess]],'xg')
pl.show()
# -
# Some more examples of the unknown line
# +
spe = frm.spec(431) # Outlier
guess = frm.group.ref[ln.idx].argmin()
bottom = select_bottom(ln,spe,3)
pl.step(lmd[ln.idx],spe[ln.idx],'b')
pl.show()
spe = frm.spec(701) # Outlier
guess = frm.group.ref[ln.idx].argmin()
bottom = select_bottom(ln,spe,3)
pl.step(lmd[ln.idx],spe[ln.idx],'b')
pl.show()
spe = frm.spec(211) # Outlier
guess = frm.group.ref[ln.idx].argmin()
bottom = select_bottom(ln,spe,3)
pl.step(lmd[ln.idx],spe[ln.idx],'b')
pl.show()
# -
def select_bottom(line,spectra,width=3):
    # Pick the per-row minimum as the line centre and return the indices of the
    # surrounding window of 2*width+1 points (note: this cell must be run before
    # the cells above that call select_bottom).
    cent = spectra[line.idx].argmin()
    return line.idx[cent+np.arange(-width,width+1)]
# %matplotlib qt
intr.select_linecore(mes["Myst top 5% by line"])
def plot_features(self, label):
self._compute_corvar()
fig, axes = plt.subplots(figsize=(8,8))
axes.set_xlim(-1,1)
axes.set_ylim(-1,1)
        #display the labels (variable names)
assert self.p == label.shape[0], 'cols number should have the same length than label'
for j in range(self.p):
plt.annotate(label[j],(self.corvar[j,0],self.corvar[j,1]))
        #add the axes
plt.plot([-1,1],[0,0],color='silver',linestyle='-',linewidth=1)
plt.plot([0,0],[-1,1],color='silver',linestyle='-',linewidth=1)
        #add a circle
cercle = plt.Circle((0,0),1,color='blue',fill=False)
axes.add_artist(cercle)
        #display
plt.show()
def compute_cos2(self):
try:
self.corvar
except:
print("corvar is not defined, use plot_features before")
self.cos2var = self.corvar**2
print('Axis 1\n---------------------------------------------\n')
print(pd.DataFrame({'id':self.df.columns,'COS2_1':self.cos2var[:,0],'COS2_2':self.cos2var[:,1]}).sort_values('COS2_1', ascending = False))
print('Axis 2\n---------------------------------------------\n')
print(pd.DataFrame({'id':self.df.columns,'COS2_1':self.cos2var[:,0],'COS2_2':self.cos2var[:,1]}).sort_values('COS2_2', ascending = False))
# -
# ## Plot of the pop
# Temperature
# 
# Aridity Index (lower = more arid)
# 
# in black : *Pinus pinaster* populations
# >Trabucco, A., and Zomer, R.J. 2018. Global Aridity Index and Potential
# >Evapo-Transpiration (ET0) Climate Database v2. CGIAR Consortium for Spatial Information
# >(CGIAR-CSI). Published online, available from the CGIAR-CSI GeoPortal at
# >https://cgiarcsi.community
# ## Importing data
# import df
df = pd.read_table("/home/xavier/Documents/research/FORMANRISK/data/data_formanrisk/individual_join.csv", sep = ";")
# remove few columns
# df = df.drop(columns = ["X","Y",'X_TYPE_', 'X_FREQ_', 'individual', 'branch_diam', 'branch_diamn','num',
# 'P50n','P12n','P88n','slopen','Kmaxn'])
df = df.drop(columns=['email', 'info', 'x.1', 'y.1'])
print('dimensions of df are \nnrows : {0}\nncols : {1}'.format(df.shape[0], df.shape[1]))
df.columns
# ## Some data cleaning
# ### Renaming
# remove the _15 from bioclim var
df.columns = [re.sub("_15", "", c) for c in df.columns]
df = df.rename(columns={'AI':'bio20'})
# extracting index of bioclim var
bio_index = [i for i, item in enumerate(df.columns) if re.search('bio\d{1,2}', item)]
[item for i, item in enumerate(df.columns) if re.search('bio\d{1,2}', item)]
# renaming bioclim var with meaningful names
keys = ["bio1" ,"bio2" ,"bio3" ,"bio4" ,"bio5" ,"bio6" ,"bio7" ,"bio8" ,"bio9" ,"bio10" ,"bio11" ,"bio12" ,"bio13" ,"bio14" ,"bio15" ,"bio16" ,"bio17" ,"bio18" ,"bio19", 'bio20']
values = ["Tmean_annual" ,"Mean_D_range" ,"Isothermality" ,"T_seasonality" ,"Tmax_warmerM" ,"Tmin_coldestM" ,"T_annual_range" ,"Tmean_wettestQ" ,"Tmean_driestQ" ,"Tmean_warmerQ" ,"Tmean_coldestQ" ,"P_annual" ,"P_wettestM" ,"P_driestM" ,"P_seasonality" ,"P_wettestQ" ,"P_driestQ" ,"P_warmestQ" ,"P_coldestQ", "Aridity_Index"]
dictionary = create_dict(keys,values)
df = df.rename(columns = dictionary)
# +
df_oin = df[(df.site == 'oin_fr') | (df.site == 'oin_P') | (df.site == 'oin_es')].reset_index()
df_oin = df_oin[['P50', 'site', 'Treatment']]
# keep only pop from oin_es
if False:
df = df[(df.site != 'oin_fr') & (df.site != 'oin_P')].reset_index(drop = True)
print(df.site.unique())
# Convert all pop from oin to oin_es whatever the origin
if True:
df.loc[(df.site == 'oin_fr') | (df.site == 'oin_P'),'site'] = 'oin_es'
print(df.site.unique())
# remove all pop from oin
if False:
df = df[(df.site != 'oin_fr') & (df.site != 'oin_P') & (df.site != 'oin_es')].reset_index(drop = True)
print(df.site.unique())
# remove san vicente
if True:
df = df[df.site != 'san vicente'].reset_index(drop = True)
print(df.site.unique())
# Remove REP
if True:
df = df[df.REP<2].reset_index(drop = True)
print(df.REP.unique())
# -
# ### summarizing df at pop level
# creating summary tables with mean, std and n values per group defined by level
df_pop_mean,df_pop_std,df_pop_n = grouping_pop(df=df, level=['Species','site'], start_rename=2)
# extracting labels of columns of interest
label_num = df_pop_mean.iloc[:,2::].columns
# concat mean, std and n summary tables
df_pop_mean = pd.concat([df_pop_mean,df_pop_std,df_pop_n], axis = 1)
# remove duplicated columns
df_pop_mean =df_pop_mean.loc[:,~df_pop_mean.columns.duplicated()]
# ### summarizing df at pop level with Treatment
# creating summary tables with mean, std and n values per group defined by level
df_pop_mean_T,df_pop_std_T ,df_pop_n_T = grouping_pop(df=df, level=['Species','site','Treatment'], start_rename=3)
# concat mean, std and n summary tables
df_pop_mean_T = pd.concat([df_pop_mean_T ,df_pop_std_T ,df_pop_n_T ], axis = 1)
# remove duplicated columns
df_pop_mean_T =df_pop_mean_T.loc[:,~df_pop_mean_T.columns.duplicated()]
# ## Short Analysis of Oin Populations
fig = px.box(df_oin, x="site", y="P50", color = 'Treatment')
fig.show()
# #### statistical tests: mixed model
# +
# Import the linear regression model class
from pymer4.models import Lmer, Lm
# Initialize model using 2 predictors and sample data
model = Lm("P50 ~ Treatment ", data=df_oin)
# Fit it
print(model.fit())
# +
# Import the linear regression model class
from pymer4.models import Lmer
# Initialize model using 2 predictors and sample data
model = Lmer("P50 ~ Treatment + (1|site)", data=df_oin)
# Fit it
print(model.fit())
# -
# ### **Conclusion: the best AIC is obtained without site as a random effect; site does not improve the model and can be removed**
# #### statistical tests: linear model
# +
import statsmodels.api as sm
from statsmodels.formula.api import ols
oin_lm = ols('P50 ~ C(Treatment, Sum)+C(site, Sum)',
data=df_oin).fit()
table = sm.stats.anova_lm(oin_lm, typ=2) # Type 2 ANOVA DataFrame
print(table)
# +
# import python data frame to R global environment
with localconverter(ro.default_converter + pandas2ri.converter):
df_oin_r = ro.conversion.py2rpy(df_oin[['P50','site', 'Treatment']])
base.dim(df_oin_r)
# -
# %load_ext rpy2.ipython
# + magic_args="-i df_oin_r" language="R"
# require(basics)
# + language="R"
# lm1 = lm(P50 ~ site + Treatment, data = df_oin_r)
# summary(lm1)
# + language="R"
# anova(lm1)
# -
# The Python and R statistical packages lead to the same F and p values.
# ### **Conclusion: no difference between sites (F=3.06, p=0.055); pooling all provenances from Oin should be considered**
# ## Filter Pinus pinaster populations
# keeping only p pinaster pop
df_pp = df[df.Species == "pinus pinaster"]
df_mean_pp = df_pop_mean[df_pop_mean.Species == "pinus pinaster"]
df_mean_pp_T = df_pop_mean_T[df_pop_mean_T.Species == "pinus pinaster"]
fig = px.bar(df_mean_pp_T, x='site', y='counts',
hover_data=['Treatment','counts'], color='Treatment',
labels={'counts':'Nb indivs per site per Treatment (P. pinaster)'},
height=400, barmode='group')
fig.show()
# ___
# The data are now clean; let's move on to some data exploration.
# ## P50 measurements
# ### Individual level
fig = px.histogram(df_pp, x="P50", color = 'Treatment', marginal="rug",
hover_data=df_pp.columns)
fig.show()
print("Mean P50 value is {0:.3f} with sd {1:.3f}".format(df_pp.P50.mean(),df_pp.P50.std()))
print('\n'.join('{}: {:.3f}'.format(['P50 adult', 'P50 young'][k],j) for k,j in enumerate(df_pp.groupby('Treatment').P50.mean().values)))
fig = px.box(df_pp, x="site", y="P50", color = 'Treatment')
fig.show()
# ### Population level
mci = mean_confidence_interval(sm = df_mean_pp_T.loc[df_mean_pp_T['Treatment']=='adult', "P50_std"],
n = df_mean_pp_T.loc[df_mean_pp_T['Treatment']=='adult', "counts"],
verbose = False)
# +
import matplotlib.pyplot as plt
plt.figure(figsize = (10,10))
X = df_mean_pp_T.loc[df_mean_pp_T['Treatment']=='young', "P50_mean"]
Y = df_mean_pp_T.loc[df_mean_pp_T['Treatment']=='adult', "P50_mean"]
plt.errorbar(X, Y, yerr=mci, fmt='o', label = 'mean P50 + CI')
plt.xlim([-4.15,-3.55])
plt.ylim([-4.15,-3.55])
m, b = np.polyfit(X, Y, 1)
plt.plot(X, m*X + b, label = 'fitted line')
plt.plot( [-5,1],[-5,1], c = 'green', lw=2, label = 'identity line')
plt.xlabel('P50 young')
plt.ylabel('P50 adult')
for i, j in enumerate(X.tolist()):
plt.text(X.tolist()[i], Y.tolist()[i], df_mean_pp_T.loc[df_mean_pp_T['Treatment']=='young', "site"].tolist()[i])
plt.legend()
plt.show()
# +
fig = px.scatter(x=df_mean_pp_T.loc[df_mean_pp_T['Treatment']=='young', "P50_mean"],
y=df_mean_pp_T.loc[df_mean_pp_T['Treatment']=='adult', "P50_mean"],
trendline="ols",
error_y=mci,
error_y_minus=mci,
labels={
"x": "P50 mean young",
"y": "P50 Mean adult"
},
title="P50 of adults vs P50 pof youngs per population with 95% confidence interval of the mean",
range_x=[-4.15,-3.55],
range_y=[-4.15,-3.55],
width = 700,
height = 700)
fig.show()
results = px.get_trendline_results(fig)
print('Statistics summary\n-------------------------\n')
results.px_fit_results.iloc[0].summary()
# -
# There is a significant correlation between young and adult P50 (R² = 0.54, F = 15.24, p = 0.0018). The estimated slope is 0.69 (t = 3.9, p = 0.002).
# ### Populations ratio between Treatment
ratio = df_mean_pp_T.loc[df_mean_pp_T['Treatment']=='young', "P50_mean"].values/df_mean_pp_T.loc[df_mean_pp_T['Treatment']=='adult', "P50_mean"].values
df_mean_pp_T['P50_ratio']=np.repeat(ratio,2)
df_mean_pp_T['P50_ratio'].plot(kind='density')
plt.scatter(df_mean_pp_T['Aridity_Index_mean'], df_mean_pp_T['P50_ratio'])
plt.ylim([0.8,1.2])
plt.xlim([0,1])
# ## Link between bioclim variables & P50
# ### Correlations between variables at the population level
# mean values per population
df_corr= df_mean_pp[label_num].corr()
plot_heatmap(cor=df_corr, mode='abs', ann=True)
# Strongest correlation between P50 and:
# - Mean d range
# - T season
# - Tmin coldest
# - T annual range
# - T coldest Q
#
# ### Pairwise comparisons
# #### Individual level
# +
import plotly.express as px
fig = px.scatter(df_pp, x="Aridity_Index", y="P50", color="Treatment", trendline="ols", title = 'P50 vs Aridity_index')
fig.show()
results = px.get_trendline_results(fig)
print(results)
# -
# Very weak R² between individual measures of P50 and the mean annual T of the population (R² = 0.02 and 0.06)
# #### Population level
# *Without the Treatment effect*
# +
df_mean_pp_wide = df_mean_pp[["Tmean_annual_mean",
'Tmean_coldestQ_mean',
'T_seasonality_mean',
'T_annual_range_mean',
'Tmin_coldestM_mean',
"P50_mean",
"Aridity_Index_mean",
'site']]
df_mean_pp_wide = pd.melt(df_mean_pp_wide,
id_vars=[
'site',
"P50_mean"],
value_vars=["Tmean_annual_mean",
'Tmean_coldestQ_mean',
'T_seasonality_mean',
'T_annual_range_mean',
'Tmin_coldestM_mean',
"Aridity_Index_mean"
])
df_mean_pp_wide.columns
# -
fig = px.scatter(df_mean_pp_wide,
x="value",
y="P50_mean",
trendline="ols",
text = "site",
facet_col="variable",
facet_col_wrap=3,
facet_row_spacing=0.04, # default is 0.07 when facet_col_wrap is used
facet_col_spacing=0.04, # default is 0.03
height=800, width=1000)
fig.update_traces(textposition='top center')
fig.update_xaxes(matches=None)
fig.show()
# **NB : Oin (es, fr & P) is a common garden located in Coruña (Spain) with 3 provenances (Spanish, Portuguese (Leiria) and French)**
# *With the Treatment effect*
# +
df_mean_pp_T_wide = df_mean_pp_T[["Tmean_annual_mean",
'Tmean_coldestQ_mean',
'T_seasonality_mean',
'T_annual_range_mean',
'Tmin_coldestM_mean',
"P50_mean",
"Treatment",
"Aridity_Index_mean",
'site']]
df_mean_pp_T_wide = pd.melt(df_mean_pp_T_wide,
id_vars=["Treatment",
'site',
"P50_mean"],
value_vars=["Tmean_annual_mean",
'Tmean_coldestQ_mean',
'T_seasonality_mean',
'T_annual_range_mean',
'Tmin_coldestM_mean',
"Aridity_Index_mean"
])
df_mean_pp_T_wide.columns
# -
fig = px.scatter(df_mean_pp_T_wide,
x="value",
y="P50_mean",
color="Treatment",
trendline="ols",
text = "site",
facet_col="variable",
facet_col_wrap=3,
facet_row_spacing=0.04, # default is 0.07 when facet_col_wrap is used
facet_col_spacing=0.04, # default is 0.03
height=800, width=1000)
fig.update_traces(textposition='top center')
fig.update_xaxes(matches=None)
fig.show()
# # ARIDITY INDEX
# +
import plotly.express as px
fig = px.scatter(df_mean_pp_T, x="Aridity_Index_mean", y="P50_mean", color="Treatment", trendline="ols",
text = "site", title = 'P50 vs Aridity_index')
fig.update_traces(textposition='top center')
fig.show()
# -
# Results:
#
# Some traits seem to be more correlated with P50, as seen on the heatmap, with possibly some slight differences between treatments
# ## PCA on bioclim var
mypca = MyPCA(df_mean_pp_T[[v +'_mean' for v in values]] )
mypca.standardize()
mypca.dopca()
# mypca.assess_pca()
# +
# mypca.plot_indiv(label = df_mean_pp_T.site)
# +
# mypca.plot_features(label = df_mean_pp_T[[v +'_mean' for v in values]].columns)
# +
# mypca.compute_cos2()
# -
# ### Plot P50 against PCA axes
acp_coord = pd.DataFrame(mypca.coord, columns = ['acp_'+str(i) for i in np.arange(0,mypca.coord.shape[1])])
df_mean_pp_T_acp = pd.concat([df_mean_pp_T, acp_coord], axis = 1)
fig = px.scatter(df_mean_pp_T_acp, x="acp_0", y="P50_mean", color="Treatment", trendline="ols",
text = "site")
fig.update_traces(textposition='top center')
fig.show()
fig = px.scatter(df_mean_pp_T_acp, x="acp_1", y="P50_mean", color="Treatment", trendline="ols",
text = "site")
fig.update_traces(textposition='top center')
fig.show()
# The correlation seems better between P50 and the second axis of the PCA (R² = 0.23, 0.26), but it remains weak; fitting a non-linear model could be worth trying
#
# first axis is associated with :
# - (+) Tmean warmer Quarter
# - (+) Tmean driest Quarter
# - (-) P coldest Quarter
# - (-) Aridity_Index_mean
#
# second axis is associated with :
# - (+) Tmin coldest Month
# - (+) Tmean coldest Quarter
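#
# As a follow-up to the "non-linear model" note above, here is a minimal sketch of adding a quadratic term for the second PCA axis with statsmodels OLS. It assumes the df_mean_pp_T_acp frame built above; the quadratic form is only one illustrative choice, not an analysis actually performed in this notebook.
# +
from statsmodels.formula.api import ols
# compare a linear and a quadratic fit of P50 on the second PCA axis (illustrative only)
lin_fit = ols('P50_mean ~ acp_1', data=df_mean_pp_T_acp).fit()
quad_fit = ols('P50_mean ~ acp_1 + I(acp_1 ** 2)', data=df_mean_pp_T_acp).fit()
print('linear    R2 = {:.3f}, AIC = {:.1f}'.format(lin_fit.rsquared, lin_fit.aic))
print('quadratic R2 = {:.3f}, AIC = {:.1f}'.format(quad_fit.rsquared, quad_fit.aic))
# -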
# ## PCA on the complete data set
mypca = MyPCA(df_pp.iloc[:,bio_index])
mypca.standardize()
# mypca.dopca()
# mypca.assess_pca()
# +
# mypca.plot_indiv(label = df_pp.site)
# -
acp_coord = pd.DataFrame(mypca.coord, columns = ['acp_'+str(i) for i in np.arange(0,mypca.coord.shape[1])])
df_pp_acp = pd.concat([df_pp, acp_coord], axis = 1)
# ### Saving data frame
def create_group(df, group, col = 'site', Name = 'Groupclim'):
gr = 1
for g in group:
for i in g:
df.loc[df[col]==i,Name] = 'group_'+str(gr)
gr+=1
return df
ll=['leiria', 'oin_es', 'san vicente']+['cerbere','ceret', 'perpignan', 'spain dune']
pp= ['biscarrosse', 'cerbere', 'ceret', 'hourtin', 'la teste', 'leiria',
'lit et mixe', 'mimizan', 'oin_es' ,'perpignan', 'ribeira' ,'san vicente','spain dune' ,'branas' ,'orzaduero', 'llanos']
# +
# Group based on ACP clusters
group = [['leiria', 'oin_es', 'san vicente'],
['cerbere','ceret', 'perpignan', 'spain dune'],
['biscarrosse', 'branas', 'hourtin', 'la teste', 'lit et mixe',
'llanos', 'mimizan', 'orzaduero', 'ribeira']]
# -
df_mean_pp_T_acp= create_group(df= df_mean_pp_T_acp,
group = group,
col = 'site')
df_pp_acp = create_group(df= df_pp_acp,
group = group,
col = 'site')
# #### Create a data frame for QGIS
# +
df_mean_young = df_mean_pp_T_acp[df_mean_pp_T_acp.Treatment == 'young']
df_mean_adult = df_mean_pp_T_acp[df_mean_pp_T_acp.Treatment == 'adult']
df_mean_young = df_mean_young[['Species', 'site', 'Tmean_annual_mean', 'Aridity_Index_mean','Y_mean', 'X_mean', 'P50_mean']]
df_mean_young = df_mean_young.rename(columns = {'P50_mean':'P50_mean_young'}).reset_index(drop = True)
df_mean_adult = df_mean_adult[['P50_mean']]
df_mean_adult = df_mean_adult.rename(columns = {'P50_mean':'P50_mean_adult'}).reset_index(drop = True)
df_mean_sig = pd.concat([df_mean_young,df_mean_adult], axis = 1)
df_mean_sig.head()
# -
if True:
df_mean_pp_T_acp.to_csv("/home/xavier/Documents/research/FORMANRISK/analyse/forman_cavit/output/table/df_mean_PP.csv")
df_pp_acp.to_csv("/home/xavier/Documents/research/FORMANRISK/analyse/forman_cavit/output/table/df_PP.csv")
df_mean_sig.to_csv("/home/xavier/Documents/research/FORMANRISK/analyse/forman_cavit/output/table/df_mean_sig.csv")
df_pp_acp.columns
# ## Stats models
# +
# Import the linear regression model class
from pymer4.models import Lm
# Fixed-effects model with Treatment, Aridity_Index and site as predictors
model = Lm("P50 ~ Treatment + Aridity_Index + site", data=df_pp_acp)
# Fit it
print(model.fit())
# +
import statsmodels.api as sm
from statsmodels.formula.api import ols
oin_lm = ols('P50 ~ C(Treatment)+Aridity_Index',
data=df_pp_acp).fit()
# print(oin_lm.summary())
table = sm.stats.anova_lm(oin_lm, typ=2) # Type 2 ANOVA DataFrame
print(table)
# -
df_restricted = df_pp_acp[['P50','site', 'Treatment', 'acp_0', 'acp_1', 'Groupclim', 'Aridity_Index']]
# # Random intercept model
# +
from pymer4.models import Lmer
# Null model: no fixed predictors, only a random intercept for site
model = Lmer("P50 ~ (1|site)", data=df_restricted)
# Fit it
print(model.fit())
# -
# ### Full model
# +
from pymer4.models import Lmer
# Full model: Treatment and the first two PCA axes as fixed effects, random intercept for site
model = Lmer("P50 ~ Treatment + acp_0 + acp_1 + (1|site)", data=df_restricted)
# Fit it
print(model.fit())
# -
print(model.anova())
# ### Full model with Aridity Index but no ACP axis
# +
from pymer4.models import Lm
# Fixed-effects-only model: Treatment x Aridity_Index interaction, no random effect
model = Lm("P50 ~ Treatment * Aridity_Index ", data=df_restricted)
# Fit it
print(model.fit())
# +
from pymer4.models import Lmer
# Mixed model: Treatment x Aridity_Index interaction with a random intercept for site
model = Lmer("P50 ~ Treatment * Aridity_Index + (1|site)", data=df_restricted)
# Fit it
print(model.fit())
# -
print(model.anova())
# Visualize coefficients with group/cluster fits overlaid ("forest plot")
model.plot_summary()
model.plot("Aridity_Index", plot_ci=False, ylabel="predicted DV")
# ### Simplest model with only Treatment & site as random
# +
# Initialize model instance using 1 predictor with random intercepts
# no climatic variables
model = Lmer("P50 ~ Treatment + (1|site)", data=df_restricted)
# Fit it
print(model.fit())
# -
# ### Selected model (with & without interaction)
# +
# Mixed model: Treatment + acp_1 with random intercepts and random Treatment slopes by site
model = Lmer("P50 ~ Treatment + acp_1 + (Treatment|site)", data=df_restricted)
# Fit it
print(model.fit())
# +
# Mixed model: Treatment + acp_1 with random intercepts by site only
model = Lmer("P50 ~ Treatment + acp_1 + (1|site)", data=df_restricted)
# Fit it
print(model.fit())
# -
# This is the best model based on the AIC value (the random-intercept and random-intercept + slope models perform very similarly)
#
# There is no effect of Treatment.
# There is an effect of acp_1, the second axis of the PCA, which is correlated with:
#
# - (+) Tmin coldest Quarter
# - (+) Tmin coldest Month
# Visualize coefficients with group/cluster fits overlaid ("forest plot")
model.plot_summary()
model.plot("acp_1", plot_ci=False, ylabel="predicted DV")
# +
# Initialize model instance using 1 predictor with random intercepts
model = Lmer("P50 ~ Groupclim + (1|site)", data=df_restricted)
# Fit it
print(model.fit())
# -
print(model.anova())
# ## Test with R
# +
import rpy2.robjects as ro
from rpy2.robjects.packages import importr
from rpy2.robjects import pandas2ri
from rpy2.robjects.packages import importr
from rpy2.robjects import r, pandas2ri
base = importr('base')
stats = importr('stats')
graphics = importr('graphics')
utils = importr('utils')
ade4 = importr('ade4')
nlme = importr('nlme')
lme4 = importr('lme4')
lmertest = importr('lmerTest')
from rpy2.robjects.conversion import localconverter
import rpy2.ipython.html
rpy2.ipython.html.init_printing()
# +
with localconverter(ro.default_converter + pandas2ri.converter):
df_restricted_r = ro.conversion.py2rpy(df_restricted)
base.dim(df_restricted_r)
# -
from rpy2.robjects.packages import importr
utils = importr('utils')
# %load_ext rpy2.ipython
# + magic_args="-i df_restricted_r" language="R"
#
# require(tidyverse)
# require(dplyr)
# require(lme4)
# require(nlme)
# require(lmerTest)
# # glimpse(df_restricted_r)
# + language="R"
# par(mfrow=c(2,2))
# # df_restricted_r = df_restricted_r[(df_restricted_r[,'site']!='oin_es' & df_restricted_r[,'site']!='san vicente'),]
# plot(df_restricted_r[,'P50']~as.factor(df_restricted_r[,'Groupclim']))
# plot(df_restricted_r[,'P50']~as.factor(df_restricted_r[,'Treatment']))
# plot(df_restricted_r[,'P50']~as.factor(df_restricted_r[,'site']))
# plot(df_restricted_r[,'P50']~df_restricted_r[,'Aridity_Index'])
# + language="R"
# levels(as.factor(df_restricted_r[,'site']))
# + language="R"
# df = df
# df_restricted_r[,'Treatment'] = as.factor(df_restricted_r[,'Treatment'] )
# df_restricted_r[,'site'] = as.factor(df_restricted_r[,'site'] )
# df_restricted_r[,'Groupclim'] = as.factor(df_restricted_r[,'Groupclim'] )
# mm1 = lmer(P50 ~ Treatment + acp_1 + (1|site), data = df_restricted_r)
# summary(mm1)
#
# + language="R"
# df_restricted_r[,'Treatment'] = as.factor(df_restricted_r[,'Treatment'] )
# df_restricted_r[,'site'] = as.factor(df_restricted_r[,'site'] )
# df_restricted_r[,'Groupclim'] = as.factor(df_restricted_r[,'Groupclim'] )
# mm1 = lmer(P50 ~ Treatment + Groupclim + (1|site), data = df_restricted_r)
# summary(mm1)
#
# + language="R"
# df_restricted_r[,'Treatment'] = as.factor(df_restricted_r[,'Treatment'] )
# df_restricted_r[,'site'] = as.factor(df_restricted_r[,'site'] )
# df_restricted_r[,'Groupclim'] = as.factor(df_restricted_r[,'Groupclim'] )
# mm1 = lmer(P50 ~ Treatment + Aridity_Index + (1|site), data = df_restricted_r)
# summary(mm1)
# + language="R"
# # intraclass correlation coefficient
# 0.02207/(0.02207+0.08448)
# # Design effect
# 1 + (10-1)*0.2071328
# # Neffective
# (12*10) / 2.864195
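# # General formulas behind the hard-coded numbers above, assuming m = average group size and
# # N = total number of observations (variance components taken from the mixed-model summary):
# #   ICC   = var_between / (var_between + var_within)
# #   DEFF  = 1 + (m - 1) * ICC
# #   N_eff = N / DEFF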
# + language="R"
# anova(mm1, type='II')
# + language="R"
# AIC(mm1)
# + language="R"
# hist(residuals(mm1))
# qqnorm(residuals(mm1))
# qqline(residuals(mm1))
#
# plot(fitted(mm1)~residuals(mm1))
#
# + language="R"
# plot(residuals(mm1)~as.factor(df_restricted_r[,'Treatment']))
# plot(residuals(mm1)~as.factor(df_restricted_r[,'Groupclim']))
# + language="R"
# plot(residuals(mm1)~df_restricted_r[,'acp_0'])
# plot(residuals(mm1)~df_restricted_r[,'acp_1'])
# plot(residuals(mm1, type = 'pearson')~df_restricted_r[,'Aridity_Index'])
# + language="R"
#
# M0 = gls(P50 ~ Treatment + Aridity_Index , method = "REML", data = df_restricted_r)
# M1 = lme(P50 ~ Treatment, random = ~ 1|site, method = "REML", data = df_restricted_r)
# M2 = lme(P50 ~ Treatment + Aridity_Index, random = ~ 1|site, method = "REML", data = df_restricted_r)
#
# AIC(M0,M1, M2)
# + language="R"
# anova(M0,M2)
# -
# **There is a need for a random site effect**
# # Conclusions
# **The main results are :**
#
# - **There is no Treatment effect**
# - **There is no link between P50 & Aridity**
# - **There is a site effect (i.e. variability among sites)**
# - **Variability within site is higher than between sites**
#
# NOTE: execution was intentionally stopped here in the original notebook (via a bare `break`,
# which is a SyntaxError outside a loop); the cells below are exploratory map-plotting tests.
raise SystemExit("Stopping before the exploratory map-plotting tests below")
# ___
# ## TEST FOR PLOTTING MAP
# https://jakevdp.github.io/PythonDataScienceHandbook/04.13-geographic-data-with-basemap.html
import os
os.environ['PROJ_LIB'] = r'/home/xavier/anaconda3/pkgs/proj4-5.2.0-he6710b0_1/share/proj'
from mpl_toolkits.basemap import Basemap
# +
# cities = pd.read_csv('data/california_cities.csv')
# Extract the data we're interested in
lat = df_mean_pp.y_mean.values # cities['latd'].values
lon = df_mean_pp.x_mean.values
population = df_mean_pp.P50_mean.values*1
area = df_mean_pp.P50_mean.values*-1
# -
population
# +
# 1. Draw the map background
fig = plt.figure(figsize=(12, 12))
m = Basemap(projection='lcc', resolution='h',
lat_0=df_mean_pp.y_mean.min(), lon_0=df_mean_pp.x_mean.min(),
width=2.2E6, height=1.4E6)
m.shadedrelief()
m.drawcoastlines(color='gray')
m.drawcountries(color='gray')
m.drawstates(color='gray')
# 2. scatter city data, with color reflecting population
# and size reflecting area
m.scatter(lon, lat, latlon=True,
c=population,
s=area,
cmap='Reds',
alpha=0.5)
# 3. create colorbar and legend
plt.colorbar(label='mean P50 per population')
plt.clim(-4.2, -3.5)
# make legend with dummy points
for a in [100, 300, 500]:
plt.scatter([], [], c='k', alpha=0.5, s=a,
label=str(a) + ' km$^2$')
plt.legend(scatterpoints=1, frameon=False,
labelspacing=1, loc='lower left');
# -
# https://towardsdatascience.com/reading-and-visualizing-geotiff-images-with-python-8dcca7a74510
import rasterio
from rasterio.plot import show
strmap = '/home/xavier/Downloads/7504448/global-ai_et0/ai_et0/ai_et0.tif'
dataset = rasterio.open(strmap)
show(dataset)
df_mean_pp.y_mean.tolist()
lat = df_mean_pp_T.y_mean.values.min()# cities['latd'].values
lon = df_mean_pp.x_mean.values
population = df_mean_pp.P50_mean.values*1
area = df_mean_pp.P50_mean.values*-1
Basemap()
# +
# https://stackoverflow.com/questions/55854988/subplots-onto-a-basemap
from mpl_toolkits.basemap import Basemap
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.patches as mpatches
# prep values for map extents and more
llcrnrlat = df_mean_pp.y_mean.values.min()-0.5
llcrnrlon = df_mean_pp.x_mean.values.min()-0.5
urcrnrlat = df_mean_pp.y_mean.values.max()+0.5
urcrnrlon = df_mean_pp.x_mean.values.max()+0.8
mid_lon = (urcrnrlon+llcrnrlon)/2.0
hr_lon = (urcrnrlon-llcrnrlon)/2.0
mid_lat = (urcrnrlat+llcrnrlat)/2.0
hr_lat = (urcrnrlat-llcrnrlat)/2.0
# function to create inset axes and plot bar chart on it
# this is good for 3 items bar chart
def build_bar(mapx, mapy, ax, width, xvals=['a','b'], yvals=[1,4], fcolors=['r','b']):
ax_h = inset_axes(ax, width=width, \
height=width, \
loc=3, \
bbox_to_anchor=(mapx, mapy), \
bbox_transform=ax.transData, \
borderpad=0, \
axes_kwargs={'alpha': 0.35, 'visible': True})
for x,y,c in zip(xvals, yvals, fcolors):
ax_h.bar(x, y, label=str(x), fc=c)
#ax.xticks(range(len(xvals)), xvals, fontsize=10, rotation=30)
ax_h.axis('off')
return ax_h
fig, ax = plt.subplots(figsize=(12, 12)) # bigger is better
bm = Basemap(llcrnrlat= llcrnrlat,
llcrnrlon= llcrnrlon,
urcrnrlat= urcrnrlat,
urcrnrlon= urcrnrlon,
ax = ax,
resolution='h',
projection='lcc',
lon_0=mid_lon,
lat_0=mid_lat)
# bm.fillcontinents(color='gray', zorder=0)
# bm.drawcoastlines(color='gray', linewidth=0.3, zorder=2)
bm.shadedrelief()
bm.drawcoastlines(color='gray')
bm.drawcountries(color='gray')
bm.drawstates(color='gray')
plt.title('site_scores', fontsize=20)
# ======================
# make-up some locations
# ----------------------
# you may use 121 here
lon1s = df_mean_pp.x_mean + np.random.normal(loc=0,scale=0.1, size = len(df_mean_pp.x_mean))
lat1s = df_mean_pp.y_mean + np.random.normal(loc=0,scale=0.1, size = len(df_mean_pp.x_mean))
# build the per-site bar values (adult and young P50) for the locations above
# ---------------------------------------------------------------------------
bar_data = np.array([[item] for i, item in enumerate(df_mean_pp_T.P50_mean)]).reshape(-1,2).tolist() # list of 2-item lists (adult, young)
# create a barchart at each location in (lon1s,lat1s)
# ---------------------------------------------------
bar_width = 0.1 # inch
colors = ['r','b']
for ix, lon1, lat1 in zip(list(range(len(lon1s))), lon1s, lat1s):
x1, y1 = bm(lon1, lat1) # get data coordinates for plotting
bax = build_bar(x1, y1, ax, 0.2, xvals=['a','b'], \
yvals=bar_data[ix], \
fcolors=colors)
# create legend (of the 2 classes)
patch0 = mpatches.Patch(color=colors[0], label='P50 adult')
patch1 = mpatches.Patch(color=colors[1], label='P50 young')
ax.legend(handles=[patch0,patch1], loc=1)
plt.show()
# -
# https://stackoverflow.com/questions/45677300/how-to-plot-geotiff-data-in-specific-area-lat-lon-with-python
import georaster
# +
fig = plt.figure(figsize=(8,8))
# path to the geotiff file: reuse the aridity-index raster opened above
# (the original Stack Overflow example pointed to a local SRTM tile instead)
fpath = strmap
# read extent of image without loading
# good for values in degrees lat/long
# geotiff may use other coordinates and projection
my_image = georaster.SingleBandRaster(strmap, load_data=False)
# grab limits of image's extent
minx, maxx, miny, maxy = my_image.extent
# set Basemap with slightly larger extents
# set resolution at intermediate level "i"
m = Basemap( projection='cyl', \
llcrnrlon=minx-2, \
llcrnrlat=miny-2, \
urcrnrlon=maxx+2, \
urcrnrlat=maxy+2, \
resolution='i')
m.drawcoastlines(color="gray")
m.fillcontinents(color='beige')
# load the geotiff image, assign it a variable
image = georaster.SingleBandRaster( fpath, \
load_data=(minx, maxx, miny, maxy), \
latlon=True)
# plot the image on matplotlib active axes
# set zorder to put the image on top of coastlines and continent areas
# set alpha to let the hidden graphics show through
plt.imshow(image.r, extent=(minx, maxx, miny, maxy), zorder=10, alpha=0.6)
plt.show()
# -
lat = df_mean_pp_T.y_mean.values.min()# cities['latd'].values
lon = df_mean_pp.x_mean.values
population = df_mean_pp.P50_mean.values*1
area = df_mean_pp.P50_mean.values*-1
# +
# https://stackoverflow.com/questions/55854988/subplots-onto-a-basemap
from mpl_toolkits.basemap import Basemap
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.patches as mpatches
# prep values for map extents and more
llcrnrlat = df_mean_pp.y_mean.values.min()-0.5
llcrnrlon = df_mean_pp.x_mean.values.min()-0.5
urcrnrlat = df_mean_pp.y_mean.values.max()+0.5
urcrnrlon = df_mean_pp.x_mean.values.max()+0.8
mid_lon = (urcrnrlon+llcrnrlon)/2.0
hr_lon = (urcrnrlon-llcrnrlon)/2.0
mid_lat = (urcrnrlat+llcrnrlat)/2.0
hr_lat = (urcrnrlat-llcrnrlat)/2.0
# function to create inset axes and plot bar chart on it
# this is good for 3 items bar chart
def build_bar(mapx, mapy, ax, width, xvals=['a','b'], yvals=[1,4], fcolors=['r','b']):
ax_h = inset_axes(ax, width=width, \
height=width, \
loc=3, \
bbox_to_anchor=(mapx, mapy), \
bbox_transform=ax.transData, \
borderpad=0, \
axes_kwargs={'alpha': 0.35, 'visible': True})
for x,y,c in zip(xvals, yvals, fcolors):
ax_h.bar(x, y, label=str(x), fc=c)
#ax.xticks(range(len(xvals)), xvals, fontsize=10, rotation=30)
ax_h.axis('off')
return ax_h
fig, ax = plt.subplots(figsize=(12, 12)) # bigger is better
bm = Basemap(llcrnrlat= llcrnrlat,
llcrnrlon= llcrnrlon,
urcrnrlat= urcrnrlat,
urcrnrlon= urcrnrlon,
ax = ax,
resolution='h',
projection='lcc',
lon_0=mid_lon,
lat_0=mid_lat)
# bm.fillcontinents(color='gray', zorder=0)
# bm.drawcoastlines(color='gray', linewidth=0.3, zorder=2)
bm.shadedrelief()
bm.drawcoastlines(color='gray')
bm.drawcountries(color='gray')
bm.drawstates(color='gray')
plt.title('site_scores', fontsize=20)
# ======================
# make-up some locations
# ----------------------
# you may use 121 here
lon1s = df_mean_pp.x_mean.tolist()
lat1s = df_mean_pp.y_mean.tolist()
# per-site difference (adult - young) of the P50 values computed above
# ---------------------------------------------------------------------
bd = [[pair[0] - pair[1]] for pair in bar_data] # list of 1-item lists
# create a barchart at each location in (lon1s,lat1s)
# ---------------------------------------------------
bar_width = 0.1 # inch
colors = ['r']
for ix, lon1, lat1 in zip(list(range(len(lon1s))), lon1s, lat1s):
x1, y1 = bm(lon1, lat1) # get data coordinates for plotting
bax = build_bar(x1, y1, ax, 0.2, xvals=['a'], \
yvals=bd[ix], \
fcolors=colors)
# create legend (single class: the adult - young difference)
patch0 = mpatches.Patch(color=colors[0], label='P50 adult - young')
ax.legend(handles=[patch0], loc=1)
plt.show()
| 41,323 |
/code/.ipynb_checkpoints/[Coffee Data] Location Cleaning-checkpoint.ipynb
|
bbc61153fcf0cb4bc4009f7da0e8d1083c89685c
|
[] |
no_license
|
hfarb/209-Final-Project
|
https://github.com/hfarb/209-Final-Project
| 0 | 0 | null | 2021-02-28T23:41:09 | 2021-02-28T23:28:20 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 25,252 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
df = pd.read_excel (r'coffee_raw2.xlsx')
df.head()
df = df[df['Est. Price'].notna()]
df['Coffee Origin_split'] = df['Coffee Origin'].str.split(";", n = 6, expand = False)
df = df.explode('Coffee Origin_split')
df.reset_index(inplace=True)
df['Coffee Origin_last'] = df['Coffee Origin_split'].apply(lambda x: str(x.split(",")[-1]))
df['Coffee Origin_last'] = df['Coffee Origin_last'].str.strip()
# https://www.britannica.com/topic/list-of-countries-1993160#ref326808
countries_df = pd.read_excel (r'countries.xlsx')
countries_list = countries_df['Country'].tolist()
countries_list.append("Puerto Rico")
countries_list.append('Saint Helena Island')
for i, item in enumerate(df['Coffee Origin_last']):
if "Hawai" in item:
df.loc[i,'Coffee Origin_last'] = "United States"
if "Costa RIca" in item:
df.loc[i,'Coffee Origin_last'] = "Costa Rica"
if "Sumatra" in item:
df.loc[i,'Coffee Origin_last'] = "Indonesia"
if "Congo" in item:
df.loc[i,'Coffee Origin_last'] = "Democratic Republic of the Congo"
if "California" in item:
df.loc[i,'Coffee Origin_last'] = "United States"
if "Huehuetenango" in item:
df.loc[i,'Coffee Origin_last'] = "Guatemala"
if "Java" in item:
df.loc[i,'Coffee Origin_last'] = "Indonesia"
if "USA" in item:
df.loc[i,'Coffee Origin_last'] = "United States"
if "Bali" in item:
df.loc[i,'Coffee Origin_last'] = "Indonesia"
if "Panana" in item:
df.loc[i,'Coffee Origin_last'] = "Panama"
if "Ethopia" in item:
df.loc[i,'Coffee Origin_last'] = "Ethiopia"
if "Sulawesi" in item:
df.loc[i,'Coffee Origin_last'] = "Indonesia"
if "Minas Gerais" in item:
df.loc[i,'Coffee Origin_last'] = "Brazil"
if "Columbia" in item:
df.loc[i,'Coffee Origin_last'] = "Colombia"
if "Kona" in item:
df.loc[i,'Coffee Origin_last'] = "United States"
for country in countries_list:
for i in range(len(df['Coffee Origin_last'])):
if country in df['Coffee Origin_last'][i]:
df.loc[i, 'Coffee Origin_last'] = country
# +
country_present = df['Coffee Origin_last'].apply(lambda x: any(t in str(x) for t in countries_list))
country_present.value_counts()
df.head()
# +
non_disclosed = df[(country_present == False) & ((df['Coffee Origin_last'] != 'Not disclosed') & (df['Coffee Origin_last'] != 'Not disclosed.'))]
df[country_present == False]['Coffee Origin_last'].unique()
# -
df['Coffee Origin_last'] = df['Coffee Origin_last'].replace(['Africa', 'South America', 'Central America', 'Not disclosed',
'Asia', 'Americas', 'Central and South America', 'Asia Pacific',
'the Pacific', 'South and Central America', 'East Africa',
'Not disclosed.', 'Various Latin American origins.',
'other Central America origins.', 'East and Central Africa.',
'Central America.', 'Central and South America.', 'Africa.',
'other undisclosed origins.', 'South America.', 'undisclosed.',
'the Americas.', 'undisclosed Central America origins.',
'East Africa.', 'Asia.', 'Asia and Latin America.',
'Latin America.', 'Latin America', 'the Americas',
'presumably East and-or Central Africa',
'other undisclosed origins',
'Not disclosed. Contains coffee of the Robusta species.',
'Not disclosed. Almost certainly contains coffee of the Robusta species.',
'Not disclosed. Composed entirely of coffees of the Arabica species.'], np.nan)
df['Coffee Origin_last'].unique()
len(df['Coffee Origin_last'].unique())
df.to_excel('coffee_clean_location.xlsx',index = False)
| 3,947 |
/Basics (1).ipynb
|
f496c5fd1e1a5a095a40c79b5834dbfd26e0b3a9
|
[] |
no_license
|
KishanKunwar/Exploring-Ebay-Car-Sales-Data
|
https://github.com/KishanKunwar/Exploring-Ebay-Car-Sales-Data
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 68,931 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# https://www.geeksforgeeks.org/submatrix-sum-queries/
# https://en.wikipedia.org/wiki/Summed-area_table
import numpy as np
A = np.random.randint(0, 10, (3, 4))
print (A)
print (A.cumsum(axis=0).cumsum(axis=1))
# -
A.shape
# +
r = A.copy()  # start from a copy of A; the first row is already correct
# column-wise prefix sums
for i in range(1,len(A)):
for j in range(0,len(A[0])):
        r[i][j] = A[i][j]+r[i-1][j]  # accumulate down each column (use r, not A, for the running sum)
#row wise
for i in range(0,len(A)):
for j in range(1,len(A[0])):
r[i][j] += r[i][j-1]
r,A
# -
len(A[0]),len(A)
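# The table `r` above is a summed-area table (integral image): `r[i][j]` holds the sum of `A[0..i][0..j]`. A minimal sketch of answering a submatrix-sum query with it via inclusion-exclusion (this helper is an illustration added here, not part of the original notebook):
# +
def submatrix_sum(prefix, r1, c1, r2, c2):
    """Sum of A[r1..r2][c1..c2] (inclusive) read off the summed-area table `prefix`."""
    total = prefix[r2][c2]
    if r1 > 0:
        total -= prefix[r1 - 1][c2]      # remove the rows above the block
    if c1 > 0:
        total -= prefix[r2][c1 - 1]      # remove the columns left of the block
    if r1 > 0 and c1 > 0:
        total += prefix[r1 - 1][c1 - 1]  # add back the doubly-removed corner
    return total

# sanity check against a direct NumPy sum over the same block
print(submatrix_sum(r, 1, 1, 2, 3), A[1:3, 1:4].sum())
# -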
u would expect from a scraped dataset (the version uploaded to Kaggle was cleaned to be easier to work with)
#
# The aim of this project is to clean the data and analyze the included used car listings.
import pandas as pd
import numpy as np
autos = pd.read_csv("autos.csv", encoding = "Latin-1")
autos.head()
autos.info()
# +
import re
autos.rename({"yearOfRegistration":"registration_year",
"monthOfRegistration":"registration_month",
"notRepairedDamage":"unrepaired_damage",
"dateCreated":"ad_created"}, axis = 1, inplace = True)
columns = autos.columns
def cleaning(name):
name = re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()
return name
new_columns = []
for name in columns:
new_column = cleaning(name)
new_columns.append(new_column)
autos.columns = new_columns
print(autos.columns)
# -
autos.head()
# Above, we renamed some columns:
# - yearOfRegistration to registration_year
# - monthOfRegistration to registration_month
# - notRepairedDamage to unrepaired_damage
# - dateCreated to ad_created
# - The rest of the column names from camelcase to snakecase
autos.describe(include = "all")
autos.info()
# We can see that , "Seller" and "offer_type" column has almost one type i.e "private" and "Angebot" respectively. So we can asume that its just one time and we can no longer need it .
#
# As we see "nr_of_picture" seems a bit more supicious. And it required bit more investigation.
#Dropping Seller and offer_type columns
autos.drop(columns= {"seller","offer_type"}, inplace = True)
autos.nr_of_pictures.value_counts()
# Since all the value In "nr_of_pictures" columns has 0 pictures, We can remove it.
# Removing nr_of_pictures columns
autos.drop(columns= {"nr_of_pictures"}, inplace = True)
# Changing data type form string to int
autos["price"]= autos["price"].str.replace("$","").str.replace(",","").astype(int)
autos["odometer"]= autos["odometer"].str.replace(",","").str.replace("km","").astype(int)
#Renaming odometer to odometer_km
autos.rename({"odometer":"odometer_km"}, axis = 1, inplace = True)
# ## Exploring Price And Odometer_km
# ### Price
# Total number of Unique values in price
autos.price.unique().shape
# +
#Finding maximum and minimum of price columns
autos.price.describe()
# -
autos.price.value_counts().sort_index().head(20)
autos.price.value_counts().sort_index(ascending = False).head(20)
# There are 1,421 rows in which the price of a car is 0. We need to remove the rows where the price is equal to 0.
#
# The highest price for a car on eBay was almost 1 million dollars.
# Removing rows where the price of a car is equal to 0
# (assigning the filtered Series back would only set the dropped entries to NaN, so filter the frame instead)
autos = autos[autos.price != 0]
autos.price.value_counts().sort_index().head(20)
# ## Odometer_km
autos.odometer_km.describe()
autos.odometer_km.value_counts()
# It seems that about two thirds of the cars on eBay have travelled about 150,000 km, and few of the vehicles are below 50,000 km.
# ## Exploring the Date columns
# There are 5 columns that should represent date values. Some of these columns were created by the crawler, some came from the website itself.
#
# Right now, the date_crawled, last_seen, and ad_created columns are all identified as string values by pandas. Because these three columns are represented as strings, we need to convert the data into a numerical representation so we can understand it quantitatively. The other two columns are represented as numeric values, so we can use methods like Series.describe() to understand the distribution without any extra data processing.
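#
# For illustration, a minimal sketch of such a conversion with pandas (the analysis below keeps the string columns and works on their first 10 characters instead):
# +
# parse the full timestamps without modifying the original string columns
crawled = pd.to_datetime(autos["date_crawled"])
print(crawled.min(), crawled.max())
print(crawled.dt.date.value_counts(normalize=True).sort_index().head())
# -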
autos[["date_crawled","last_seen","ad_created"]][0:5]
# In frequency
autos["date_crawled"].str[:10].value_counts()
# In percentage
autos["date_crawled"].str[:10].value_counts(normalize = True, dropna= False)
# Ranking the date in ascending order(earliest to latest)
autos["date_crawled"].str[:10].value_counts(normalize = True, dropna= False).sort_index()
autos["last_seen"].str[:10].value_counts(normalize = True, dropna= False)
# The crawler recorded the date it last saw any listing, which allows us to determine on what day a listing was removed, presumably because the car was sold.
#
# The last three days contain a disproportionate amount of 'last seen' values. Given that these are 6-10x the values from the previous days, it's unlikely that there was a massive spike in sales, and more likely that these values are to do with the crawling period ending and don't indicate car sales.
autos["ad_created"].str[:10].value_counts(normalize = True, dropna= False)
#
# There is a large variety of ad created dates. Most fall within 1-2 months of the listing date, but a few are quite old, with the oldest at around 9 months.
autos.registration_year.describe()
# We can see that the registration_year column contains some odd values:
#
# - The minimum value is 1000, before cars were invented
# - The maximum value is 9999, many years into the future
# ## Dealing with incorrect Registration Year Data
# Because a car can't be first registered after the listing was seen, any vehicle with a registration year above 2016 is definitely inaccurate. Determining the earliest valid year is more difficult. Realistically, it could be somewhere in the first few decades of the 1900s.
#
# Let's count the number of listings with cars that fall outside the 1900 - 2016 interval and see if it's safe to remove those rows entirely, or if we need more custom logic.
(~autos["registration_year"].between(1900,2016)).sum()/autos.shape[0]
# Since only about 4% of the rows have a registration year outside 1900-2016, we will remove them.
autos = autos[autos["registration_year"].between(1900,2016)]
# Top 20 registration years
autos.registration_year.value_counts(normalize = True, dropna= False).head(20)
# It seems that most vehicles were registered within the last 20 years.
# ## Exploring Price by Brand
# Performing aggregation of the mean price for the top 10 brands on eBay
print(autos["brand"].value_counts().head(10))
print(autos["brand"].value_counts().tail(10))
# Almost one fifth of the cars are of the "volkswagen" brand, whereas just 29 cars are from the "lada" brand.
# +
# Finding average price of top 10 branded car
brand_price = {}
top_10 = autos["brand"].value_counts().head(10).index
for key in top_10:
selected_rows = autos[autos["brand"]==key]
mean_value = selected_rows["price"].mean()
brand_price[key] =int( mean_value)
print(brand_price)
price_series = pd.Series(brand_price)
price_df= pd.DataFrame(price_series, columns=["mean_price"]).sort_values(by = "mean_price", ascending = False)
print(price_df)
# -
# Among the top 10 brands on eBay, the highest mean price is for "mercedes_benz" at about $31,000, and the lowest is for the "renault" brand at only about $2,500 on average. Also:
# - Audi, BMW and Mercedes Benz are more expensive
# - Ford and Opel are less expensive
# - Volkswagen is in between
# +
#Finding average milage for top 10 brand cars
brand_milage = {}
top_10 = autos["brand"].value_counts().head(10).index
for key in top_10:
selected_rows = autos[autos["brand"]==key]
mean_value = selected_rows["odometer_km"].mean()
brand_milage[key] = int(mean_value)
print(brand_milage)
milage_series = pd.Series(brand_milage)
milage_df= pd.DataFrame(milage_series, columns=["mean_milage"]).sort_values(by= "mean_milage", ascending = False)
print(milage_df)
# +
# Comparing mean_milage and mean_price for the top 10 brands
brand_info = milage_df
brand_info["mean_price"]= price_series
print(brand_info)
# -
# We can see that the price of a brand's cars does not depend mainly on their mileage, i.e. "lower mean mileage (odometer) means lower price" does not hold here. Other factors may push the price up or down, e.g. registration year, unrepaired_damage, etc.
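#
# As a side note, the dictionary loops used above can also be written more compactly with groupby/agg; a minimal sketch assuming the `autos` frame and the `top_10` index built above:
# +
top10_summary = (autos[autos["brand"].isin(top_10)]
                 .groupby("brand")
                 .agg(mean_price=("price", "mean"), mean_mileage=("odometer_km", "mean"))
                 .sort_values("mean_price", ascending=False))
print(top10_summary)
# -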
| 8,647 |
/finalProject/Data processing for ML.ipynb
|
9911d657c98c4e200944692b43fa0420d32a750d
|
[] |
no_license
|
lewiechiu/DataScience
|
https://github.com/lewiechiu/DataScience
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 41,224 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Heap's algorithm - Generate All Possible Permutations of n objects
# ### Overall Review - Recursive
objects = list('123')
# +
def heapsAlg(obj, size, n):
if size==1:
print('---------------------------------------------------------------')
print('result: ' + str(obj))
print('---------------------------------------------------------------')
return obj
for i in range(size):
heapsAlg(obj, size-1, n);
if size&1:
# size sub-array is odd: swap first element with last (buffer)
# essentially rotates all one place to right
obj[0],obj[size-1] = obj[size-1],obj[0]
else:
# size sub-array is even: ith element swapped with last element (buffer)
# handles swapping with buffer for a reversed version of the original
obj[i],obj[size-1] = obj[size-1],obj[i]
# -
heapsAlg(objects, len(objects), len(objects))
# ### Reviewing ODD operation - rotate one spot right, results in same array
# +
def heapsAlg(obj, size, n):
if size==1:
return obj
for i in range(size):
heapsAlg(obj, size-1, n);
if size&1:
# size sub-array is odd: swap first element with last (buffer)
# essentially rotates all one place to right
obj[0],obj[size-1] = obj[size-1],obj[0]
print(str(obj) + 'ODD with 0th index and index ' + str(size-1) + ' element swapped')
else:
# size sub-array is even: ith element swapped with last element (buffer)
# handles swapping with buffer for a reversed version of the original
obj[i],obj[size-1] = obj[size-1],obj[i]
# print(str(obj) + 'EVEN swap index ' + str(i) + ' value to index ' + str(size-1) + ' position')
# -
heapsAlg(objects, len(objects), len(objects))
# ### Reviewing EVEN operation - swap ith element, results in reversal of array
def heapsAlg(obj, size, n):
if size==1:
return obj
for i in range(size):
heapsAlg(obj, size-1, n);
if size&1:
# size sub-array is odd: swap first element with last (buffer)
# essentially rotates all one place to right
obj[0],obj[size-1] = obj[size-1],obj[0]
# print(str(obj) + 'ODD with 0th index and index ' + str(size-1) + ' element swapped')
else:
# size sub-array is even: ith element swapped with last element (buffer)
# handles swapping with buffer for a reversed version of the original
obj[i],obj[size-1] = obj[size-1],obj[i]
print(str(obj) + 'EVEN swap index ' + str(i) + ' value to index ' + str(size-1) + ' position')
heapsAlg(objects, len(objects), len(objects))
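# For comparison, here is a minimal sketch of the non-recursive form of Heap's algorithm, which replaces the call stack with an explicit counter array (added for illustration; it is not part of the review above):
# +
def heaps_iterative(seq):
    """Return all permutations of seq using the iterative form of Heap's algorithm."""
    a = list(seq)
    n = len(a)
    c = [0] * n              # per-position counters playing the role of the recursion stack
    perms = [a.copy()]
    i = 0
    while i < n:
        if c[i] < i:
            if i % 2 == 0:
                a[0], a[i] = a[i], a[0]        # even index: swap with the first element
            else:
                a[c[i]], a[i] = a[i], a[c[i]]  # odd index: swap with the c[i]-th element
            perms.append(a.copy())
            c[i] += 1
            i = 0
        else:
            c[i] = 0
            i += 1
    return perms

print(heaps_iterative('123'))
# -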
ore it, what we need at most is $int16$. This can save up to 48 bit. When we have, say 1 million rows, we save upto 48 Mb.
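#
# The `reduce_mem_usage` helper used below is defined earlier in the notebook and is not shown in this excerpt; as an illustration, a typical downcasting helper of this kind looks roughly like the sketch below (an assumed stand-in, not necessarily the exact implementation used here):
# +
def reduce_mem_usage_sketch(df):
    """Downcast numeric columns to the smallest dtype that can hold their value range."""
    for col in df.columns:
        col_type = df[col].dtype
        if np.issubdtype(col_type, np.integer):
            c_min, c_max = df[col].min(), df[col].max()
            for int_type in (np.int8, np.int16, np.int32, np.int64):
                if np.iinfo(int_type).min <= c_min and c_max <= np.iinfo(int_type).max:
                    df[col] = df[col].astype(int_type)
                    break
        elif np.issubdtype(col_type, np.floating):
            df[col] = df[col].astype(np.float32)
    return df
# -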
# +
def featureModify(isTrain):
if isTrain:
all_data = pd.read_csv("data/train_V2.csv")
all_data = all_data[all_data['maxPlace'] > 1]
all_data = reduce_mem_usage(all_data)
all_data = all_data[all_data['winPlacePerc'].notnull()]
else:
all_data = pd.read_csv('../input/test_V2.csv')
all_data['matchType'] = all_data['matchType'].map({
'crashfpp':1,
'crashtpp':2,
'duo':3,
'duo-fpp':4,
'flarefpp':5,
'flaretpp':6,
'normal-duo':7,
'normal-duo-fpp':8,
'normal-solo':9,
'normal-solo-fpp':10,
'normal-squad':11,
'normal-squad-fpp':12,
'solo':13,
'solo-fpp':14,
'squad':15,
'squad-fpp':16
})
all_data = reduce_mem_usage(all_data)
print("Match size")
matchSizeData = all_data.groupby(['matchId']).size().reset_index(name='matchSize')
all_data = pd.merge(all_data, matchSizeData, how='left', on=['matchId'])
del matchSizeData
gc.collect()
all_data.loc[(all_data['rankPoints']==-1), 'rankPoints'] = 0
all_data['_killPoints_rankpoints'] = all_data['rankPoints']+all_data['killPoints']
all_data["_Kill_headshot_Ratio"] = all_data["kills"]/all_data["headshotKills"]
all_data['_killStreak_Kill_ratio'] = all_data['killStreaks']/all_data['kills']
all_data['_totalDistance'] = 0.25*all_data['rideDistance'] + all_data["walkDistance"] + all_data["swimDistance"]
all_data['_killPlace_MaxPlace_Ratio'] = all_data['killPlace'] / all_data['maxPlace']
all_data['_totalDistance_weaponsAcq_Ratio'] = all_data['_totalDistance'] / all_data['weaponsAcquired']
all_data['_walkDistance_heals_Ratio'] = all_data['walkDistance'] / all_data['heals']
all_data['_walkDistance_kills_Ratio'] = all_data['walkDistance'] / all_data['kills']
all_data['_kills_walkDistance_Ratio'] = all_data['kills'] / all_data['walkDistance']
all_data['_totalDistancePerDuration'] = all_data["_totalDistance"]/all_data["matchDuration"]
all_data['_killPlace_kills_Ratio'] = all_data['killPlace']/all_data['kills']
all_data['_walkDistancePerDuration'] = all_data["walkDistance"]/all_data["matchDuration"]
all_data['walkDistancePerc'] = all_data.groupby('matchId')['walkDistance'].rank(pct=True).values
all_data['killPerc'] = all_data.groupby('matchId')['kills'].rank(pct=True).values
all_data['killPlacePerc'] = all_data.groupby('matchId')['killPlace'].rank(pct=True).values
all_data['weaponsAcquired'] = all_data.groupby('matchId')['weaponsAcquired'].rank(pct=True).values
all_data['_walkDistance_kills_Ratio2'] = all_data['walkDistancePerc'] / all_data['killPerc']
all_data['_kill_kills_Ratio2'] = all_data['killPerc']/all_data['walkDistancePerc']
all_data['_killPlace_walkDistance_Ratio2'] = all_data['walkDistancePerc']/all_data['killPlacePerc']
all_data['_killPlace_kills_Ratio2'] = all_data['killPlacePerc']/all_data['killPerc']
all_data['_totalDistance'] = all_data.groupby('matchId')['_totalDistance'].rank(pct=True).values
all_data['_walkDistance_kills_Ratio3'] = all_data['walkDistancePerc'] / all_data['kills']
all_data['_walkDistance_kills_Ratio4'] = all_data['kills'] / all_data['walkDistancePerc']
all_data['_walkDistance_kills_Ratio5'] = all_data['killPerc'] / all_data['walkDistance']
all_data['_walkDistance_kills_Ratio6'] = all_data['walkDistance'] / all_data['killPerc']
all_data[all_data == np.Inf] = np.NaN
all_data[all_data == np.NINF] = np.NaN
all_data.fillna(0, inplace=True)
features = list(all_data.columns)
features.remove("Id")
features.remove("matchId")
features.remove("groupId")
features.remove("matchSize")
features.remove("matchType")
if isTrain:
features.remove("winPlacePerc")
print("Mean Data")
meanData = all_data.groupby(['matchId','groupId'])[features].agg('mean')
meanData = reduce_mem_usage(meanData)
meanData = meanData.replace([np.inf, np.NINF,np.nan], 0)
meanDataRank = meanData.groupby('matchId')[features].rank(pct=True).reset_index()
meanDataRank = reduce_mem_usage(meanDataRank)
all_data = pd.merge(all_data, meanData.reset_index(), suffixes=["", "_mean"], how='left', on=['matchId', 'groupId'])
del meanData
gc.collect()
all_data = all_data.drop(["vehicleDestroys_mean","rideDistance_mean","roadKills_mean","rankPoints_mean"], axis=1)
all_data = pd.merge(all_data, meanDataRank, suffixes=["", "_meanRank"], how='left', on=['matchId', 'groupId'])
del meanDataRank
gc.collect()
all_data = all_data.drop(["numGroups_meanRank","rankPoints_meanRank"], axis=1)
all_data = all_data.join(reduce_mem_usage(all_data.groupby('matchId')[features].rank(ascending=False).add_suffix('_rankPlace').astype(int)))
print("Std Data")
stdData = all_data.groupby(['matchId','groupId'])[features].agg('std').replace([np.inf, np.NINF,np.nan], 0)
stdDataRank = reduce_mem_usage(stdData.groupby('matchId')[features].rank(pct=True)).reset_index()
del stdData
gc.collect()
all_data = pd.merge(all_data, stdDataRank, suffixes=["", "_stdRank"], how='left', on=['matchId', 'groupId'])
del stdDataRank
gc.collect()
print("Max Data")
maxData = all_data.groupby(['matchId','groupId'])[features].agg('max')
maxData = reduce_mem_usage(maxData)
maxDataRank = maxData.groupby('matchId')[features].rank(pct=True).reset_index()
maxDataRank = reduce_mem_usage(maxDataRank)
all_data = pd.merge(all_data, maxData.reset_index(), suffixes=["", "_max"], how='left', on=['matchId', 'groupId'])
del maxData
gc.collect()
all_data = all_data.drop(["assists_max","killPoints_max","headshotKills_max","numGroups_max","revives_max","teamKills_max","roadKills_max","vehicleDestroys_max"], axis=1)
all_data = pd.merge(all_data, maxDataRank, suffixes=["", "_maxRank"], how='left', on=['matchId', 'groupId'])
del maxDataRank
gc.collect()
all_data = all_data.drop(["roadKills_maxRank","matchDuration_maxRank","maxPlace_maxRank","numGroups_maxRank"], axis=1)
print("Min Data")
minData = all_data.groupby(['matchId','groupId'])[features].agg('min')
minData = reduce_mem_usage(minData)
minDataRank = minData.groupby('matchId')[features].rank(pct=True).reset_index()
minDataRank = reduce_mem_usage(minDataRank)
all_data = pd.merge(all_data, minData.reset_index(), suffixes=["", "_min"], how='left', on=['matchId', 'groupId'])
del minData
gc.collect()
all_data = all_data.drop(["heals_min","killStreaks_min","killPoints_min","maxPlace_min","revives_min","headshotKills_min","weaponsAcquired_min","_walkDistance_kills_Ratio_min","rankPoints_min","matchDuration_min","teamKills_min","numGroups_min","assists_min","roadKills_min","vehicleDestroys_min"], axis=1)
all_data = pd.merge(all_data, minDataRank, suffixes=["", "_minRank"], how='left', on=['matchId', 'groupId'])
del minDataRank
gc.collect()
all_data = all_data.drop(["killPoints_minRank","matchDuration_minRank","maxPlace_minRank","numGroups_minRank"], axis=1)
print("group Size")
groupSize = all_data.groupby(['matchId','groupId']).size().reset_index(name='group_size')
groupSize = reduce_mem_usage(groupSize)
all_data = pd.merge(all_data, groupSize, how='left', on=['matchId', 'groupId'])
del groupSize
gc.collect()
print("Match Mean")
matchMeanFeatures = features
matchMeanFeatures = [ v for v in matchMeanFeatures if v not in ["killPlacePerc","matchDuration","maxPlace","numGroups"] ]
matchMeanData= reduce_mem_usage(all_data.groupby(['matchId'])[matchMeanFeatures].transform('mean')).replace([np.inf, np.NINF,np.nan], 0)
all_data = pd.concat([all_data,matchMeanData.add_suffix('_matchMean')],axis=1)
del matchMeanData,matchMeanFeatures
gc.collect()
print("matchMax")
matchMaxFeatures = ["walkDistance","kills","_walkDistance_kills_Ratio","_kill_kills_Ratio2"]
all_data = pd.merge(all_data, reduce_mem_usage(all_data.groupby(['matchId'])[matchMaxFeatures].agg('max')).reset_index(), suffixes=["", "_matchMax"], how='left', on=['matchId'])
print("match STD")
matchMaxFeatures = ["kills","_walkDistance_kills_Ratio2","_walkDistance_kills_Ratio","killPerc","_kills_walkDistance_Ratio"]
all_data = pd.merge(all_data, reduce_mem_usage(all_data.groupby(['matchId'])[matchMaxFeatures].agg('std')).reset_index().replace([np.inf, np.NINF,np.nan], 0), suffixes=["", "_matchSTD"], how='left', on=['matchId'])
all_data = all_data.drop(["Id","groupId"], axis=1)
all_data = all_data.drop(["DBNOs","assists","headshotKills","heals","killPoints","_killStreak_Kill_ratio","killStreaks","longestKill","revives","roadKills","teamKills","vehicleDestroys","_walkDistance_kills_Ratio","weaponsAcquired"], axis=1)
all_data = all_data.drop(["_walkDistance_heals_Ratio","_totalDistancePerDuration","_killPlace_kills_Ratio","_totalDistance_weaponsAcq_Ratio","_killPlace_MaxPlace_Ratio","_walkDistancePerDuration","rankPoints","rideDistance","boosts","winPoints","swimDistance","_kills_walkDistance_Ratio"], axis=1)
all_data = all_data.drop(["_Kill_headshot_Ratio","maxPlace","_totalDistance","numGroups","walkDistance","killPlace"], axis=1)
all_data = reduce_mem_usage(all_data)
gc.collect()
print("done")
features_label = all_data.columns
features_label = features_label.drop('matchId')
if isTrain:
features_label = features_label.drop('winPlacePerc')
gc.collect()
return all_data,features_label
def split_train_val(data, fraction):
matchIds = data['matchId'].unique().reshape([-1])
train_size = int(len(matchIds)*fraction)
random_idx = np.random.RandomState(seed=2).permutation(len(matchIds))
train_matchIds = matchIds[random_idx[:train_size]]
val_matchIds = matchIds[random_idx[train_size:]]
data_train = data.loc[data['matchId'].isin(train_matchIds)]
data_val = data.loc[data['matchId'].isin(val_matchIds)]
return data_train, data_val
# -
X_train,features_label = featureModify(True)
# +
X_train, X_train_test = split_train_val(X_train, 0.91)
print("Y time")
y = X_train['winPlacePerc']
y_test = X_train_test['winPlacePerc']
X_train = X_train.drop(columns=['matchId', 'winPlacePerc'])
X_train_test = X_train_test.drop(columns=['matchId', 'winPlacePerc'])
print("X test np time")
X_train_test = np.array(X_train_test)
print("y test np time")
y_test = np.array(y_test)
y = np.array(y)
X_train = np.array(X_train)
np.save("y", y)
np.save("x", X_train)
np.save("x_test",X_train_test)
np.save("y_test",y_test)
# -
X_train.shape
# Loading the file, so the next time I don't have to spend time waiting.
X_train = np.load("x.npy", allow_pickle=True)
X_train_test = np.load("x_test.npy", allow_pickle=True)
y = np.load('y.npy', allow_pickle=True)
y_test = np.load('y_test.npy', allow_pickle=True)
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation, BatchNormalization, Dropout
from keras.callbacks import EarlyStopping
# +
model = Sequential()
model.add(Dense(450, input_dim=409))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(450))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(450))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('tanh'))
model.compile(optimizer='Adam', loss='mse', metrics=['mae'])
model.summary()
# -
start = time.time()
es = EarlyStopping(patience=4)
model.fit(X_train,y, validation_data=(X_train_test,y_test), epochs=40, batch_size=2048, callbacks=[es])
end = time.time()
# The results above seem to have exploded on the validation set (the validation loss diverges).
end - start, (1 - 0.0359)/ (end-start)
# It took us 863.37 seconds to reach a best MAE of 0.0359. That is, each additional second of training buys us roughly 0.0011 of (1 - MAE) on average.
# +
# model.save('450_3.h5')
# del model
model = Sequential()
model.add(Dense(450, input_dim=409))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(450))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(450))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
model.add(Activation('tanh'))
model.compile(optimizer='Adam', loss='mse', metrics=['mae'])
model.summary()
# -
start = time.time()
es = EarlyStopping(patience=10)
model.fit(X_train,y, validation_data=(X_train_test,y_test), epochs=80, batch_size=30000, callbacks=[es])
end = time.time()
end - start, (1 - 0.037)/ (end-start)
#
# From there we see if we apply some more dropout layers to the neural network, although it did perform somewhat better
| 16,045 |
/notebooks/homecdt_model/ss_archive/archive/.ipynb_checkpoints/ss_model_20200129_homecdt_submission_LGBM-checkpoint.ipynb
|
9e1d7146c9dfb5e396f1f84e379b2e78f72ffb02
|
[] |
no_license
|
stansuo/BDSE12-Group3
|
https://github.com/stansuo/BDSE12-Group3
| 6 | 2 | null | 2020-01-02T12:29:43 | 2020-01-02T10:09:12 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 99,191 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ZGrWeffvX7vk"
# ## ML Part 1-3. 데이터 셋 분할 (Hold-Out)
# + [markdown] id="g4MvpJsZ7hDe"
# ## 1.머신러닝의 Training vs Testing
# - 머신러닝은 Training 단계와 Testing 단계로 구분됨
# - Training 단계 : Training Data와 Learning Algorithm을 가지고 Model을 만듦
# - Testing 단계 : Model에 Test Data를 적용하여 결과를 얻음
#
# + [markdown] id="TBhA3aFM_bJu"
# 
# + [markdown] id="HQMTGgYp0wqm"
# ## 2.머신러닝의 Datasets
# - 지도학습에서는 레이블링된 데이터가 있으므로 결과의 정확도를 측정할 수 있음
# - 머신러닝 모델의 효과를 검증하기 위해 데이터 셋을 나누어 사용함
#
#
# + [markdown] id="KL2W-hrE9nON"
# ## 3.머신러닝의 Datasets 종류
# - Training Set (학습 세트)
# - 알고리즘이 학습에 사용할 데이터
# - Validation Set (검증 세트)
# - 학습 세트를 사용해 모델을 학습하고 난 뒤,
# - 검증 세트를 사용해 모델의 예측/분류 정확도를 계산
# - Overfitting 을 줄이거나 Parameter 결정에 도움
# - Test Set(평가 세트)
# - 모델이 예측/분류해준 값과 실제 값을 비교하여 '모델 성능 평가'
# - 정확도(Accuracy), 정밀도(precision), 재현율(recall), F1 Score 등을 계산
# - 알고리즘이 현실 세계에서 얼마나 잘 수행되는지 이해할 수 있게 됨
#
# + [markdown] id="RDSmu-MC_PvC"
# 
# + [markdown] id="k9gL2fh7U-9z"
# ## 4.sklearn.model_selection
#
# - X_train, X_test, y_train, y_test = train_test_split(sample, label, test_size, train_size, random_state, suffle, stratify)
# - 같은 크기의 Numpy 배열 2개를 지정된 비율로 나눠서 반환
# - test_size = 0.25 : 0.0~1.0 테스트 데이터셋 비율
# - train_size = None : 0.0~1.0 훈련 데이터셋 비율
# - random_state = None : 정수 값, 난수 발생의 시드(seed) 값
# - suffle = True : boolean 값을 전달해서 섞을지 말지 결정
# - stratify : y의 지정한 데이터 비율을 유지(층화추출), y가 범주형일 때 사용함
# - 예) 레이블 y가 0,1로 이루어진 binary이고, 비율이 25:75일 때, stratify=y이면 데이터셋도 0,1을 같은 비율로 유지함
# - https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
# + colab={"base_uri": "https://localhost:8080/"} id="PwByO5KbOyKS" outputId="fcc47e93-eed9-4daf-f0ae-a67abfe6cac4"
import pandas as pd
from sklearn.datasets import load_iris
# load the iris data
iris = load_iris()
# print the list of keys
iris.keys()
# + colab={"base_uri": "https://localhost:8080/"} id="lOAkLLwivZg-" outputId="46ca45ee-932a-4131-d101-5db94f89e08f"
# store the data arrays
X = iris.data
y = iris.target
print(X.shape, y.shape)
# + id="6iHsvIvt0kp8" colab={"base_uri": "https://localhost:8080/"} outputId="f2a4df2d-f80f-4383-f99f-475dd0b5a232"
# split into training and test sets
from sklearn.model_selection import train_test_split
# split X, y with train_size : test_size = 0.8 : 0.2, using random_state = 0
X_train, X_test , y_train, y_test = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state = 0)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 129} id="RNVc_Sd5VzDl" outputId="23f256a5-a43f-4f1a-d59c-fbab67c1b3ab"
# with random_state set - the same result on every run
X_train, X_test , y_train, y_test = train_test_split(X, y, train_size=0.8, random_state = 0)
display(X_train[:3], X_test[:3])
# + colab={"base_uri": "https://localhost:8080/", "height": 148} id="pHM7g5GUP9MZ" outputId="855443d6-a5ad-4693-936e-81e3ab5ea457"
# without random_state, the result changes on every run
# if train_size is not specified, the default split is 0.75 / 0.25
X_train, X_test , y_train, y_test = train_test_split(X, y) # 0.75, 0.25
print(X_train.shape, X_test.shape)
display(X_train[:3], X_test[:3])
# + colab={"base_uri": "https://localhost:8080/"} id="cLAyWaNxnKkn" outputId="ae2eec8d-5f2b-4b45-e00e-4c96887696ea"
# numpy's random
import numpy as np
np.random.random()
# + colab={"base_uri": "https://localhost:8080/"} id="v1p3a99enUSr" outputId="ffc271cc-a0be-4d0f-fc74-2cec6624b600"
# the two lines below have no effect if run separately (the seed and the call must run together)
np.random.seed(0)
np.random.random()
# + [markdown] id="B0hljbaKJNKy"
# ## 5. Things to Consider When Splitting Data
# 1. Class proportions (stratified sampling)
# 2. Are the class proportions in the original y balanced?
#  - Under Sampling : match the size of the smaller class
#  - Over Sampling : match the size of the larger class
# + [markdown] id="bVOa6nsUwFra"
# ![alt text](https://miro.medium.com/max/1400/1*ENvt_PTaH5v4BXZfd-3pMA.png)
# - Source : ```https://www.kdnuggets.com/2020/01/top-tweets-jan22-28.html```
# + id="MTyYsd6UwQtg"
# Further reading : https://github.com/ufoym/imbalanced-dataset-sampler
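#
# A minimal sketch of a stratified split on the iris data loaded above (illustrative addition): `stratify=y` keeps the class proportions identical in both splits.
# +
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8, random_state=0, stratify=y)
print(np.bincount(y_tr), np.bincount(y_te))  # class counts keep the original 1:1:1 ratio
# -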
| 96,866 |
/ahem_detector.ipynb
|
5b4a2632b535a92257a4c2a4bc687db23debde19
|
[
"MIT"
] |
permissive
|
1102ankit/Help_detector
|
https://github.com/1102ankit/Help_detector
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 7,590,801 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
fig=plt.figure(figsize=(12,12),facecolor='w')
xmin=-1.2; xmax=1.2
ymin=-1.2; ymax=1.2
plt.subplot(121)
plt.xlim([xmin,xmax])
plt.ylim([ymin,ymax])
plt.quiver(1,0, color='#0000ff',angles='xy',scale_units='xy',scale=1,headwidth=6)
plt.quiver(0,1, color='#00ff00',angles='xy',scale_units='xy',scale=1,headwidth=6)
plt.quiver(1,1, color='#00ffff',angles='xy',scale_units='xy',scale=1,headwidth=6)
plt.quiver(-1,1, color='#ff00ff',angles='xy',scale_units='xy',scale=1,headwidth=6)
plt.text(1,0,"a_1",fontsize=20, color='#0000ff')
plt.text(0,1,"a_2",fontsize=20, color='#00ff00')
plt.text(1,1,"b_1",fontsize=20, color='#00ffff')
plt.text(-1,1,"b_2",fontsize=20, color='#ff00ff')
plt.xlabel("x",fontsize=20)
plt.ylabel("y",fontsize=20)
plt.tick_params(labelsize = 20)
plt.grid()
plt.draw()
plt.gca().set_aspect('equal', adjustable='box')
plt.savefig('E3-1-2Ans.eps')
plt.show()
# +
import numpy as np
import matplotlib.pyplot as plt
fig=plt.figure(figsize=(12,12),facecolor='w')
xmin=-1.2; xmax=1.2
ymin=-1.2; ymax=1.2
plt.subplot(121)
plt.xlim([xmin,xmax])
plt.ylim([ymin,ymax])
plt.quiver(1,0, color='#0000ff',angles='xy',scale_units='xy',scale=1,headwidth=6)
plt.quiver(0,1, color='#00ff00',angles='xy',scale_units='xy',scale=1,headwidth=6)
plt.quiver(1,1, color='#00ffff',angles='xy',scale_units='xy',scale=1,headwidth=6)
plt.quiver(1,1, color='#ff0000',angles='xy',scale_units='xy',scale=1,width=0.02, headwidth=3)
plt.quiver(-1,1, color='#ff00ff',angles='xy',scale_units='xy',scale=1,headwidth=6)
plt.text(1,0,"a_1",fontsize=20, color='#0000ff')
plt.text(0,1,"a_2",fontsize=20, color='#00ff00')
plt.text(1,1,"b_1",fontsize=20, color='#00ffff')
plt.text(1,1,"c",fontsize=20, color='#ff0000')
plt.text(-1,1,"b_2",fontsize=20, color='#ff00ff')
plt.xlabel("x",fontsize=20)
plt.ylabel("y",fontsize=20)
plt.tick_params(labelsize = 20)
plt.grid()
plt.draw()
plt.gca().set_aspect('equal', adjustable='box')
plt.savefig('E3-1-4Ans.eps')
plt.show()
# -
# +
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
from keras.models import model_from_json
import skimage.io as io
import os
from os import listdir
from os.path import isfile, join
import utils as ut
import librosa
import librosa.display
import IPython.display
import numpy as np
from skimage.measure import block_reduce
import skimage.io as io
# -
# network configuration
batch_size = 32
# number of epochs
nb_epoch = 5
# number of convolutional filters to use
nb_filters = 32
# number of classes
nb_classes = 2
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)
# +
# the data, shuffled and split between train and test sets
# we save generated images here (make sure there is space)
path_class_0 = '/data/class_0/'
path_class_1 = '/data/class_1/'
# load filenames into lists
class0_files = [f for f in listdir(path_class_0) if isfile(join(path_class_0, f))]
class1_files = [f for f in listdir(path_class_1) if isfile(join(path_class_1, f))]
# +
# prepare training set
X_t = []
Y_t = []
for fn in class0_files[:100]:
img = io.imread(os.path.join(path_class_0, fn))
img = img.transpose((2,0,1))
img = img[:3, :, :]
X_t.append(img)
Y_t.append(0)
for fn in class1_files[:100]:
img = io.imread(os.path.join(path_class_1, fn))
img = img.transpose((2,0,1))
img = img[:3, :, :]
X_t.append(img)
Y_t.append(1)
X_t = np.asarray(X_t)
X_t = X_t.astype('float32')
X_t /= 255
Y_t = np.asarray(Y_t)
Y_t = np_utils.to_categorical(Y_t, nb_classes)
# -
img_rows, img_cols = X_t.shape[2], X_t.shape[3]
# input image dimensions
img_channels = 3 # RGB
input_shape = (3, img_rows, img_cols)
# +
## test set
X_test = []
Y_test = []
for fn in class0_files[6000:8000]:
img = io.imread(os.path.join(path_class_0, fn))
img = img.transpose((2,0,1))
img = img[:3, :, :]
X_test.append(img)
Y_test.append(0)
for fn in class1_files[6000:8000]:
img = io.imread(os.path.join(path_class_1, fn))
img = img.transpose((2,0,1))
img = img[:3, :, :]
X_test.append(img)
Y_test.append(1)
X_test = np.asarray(X_test)
Y_test = np.asarray(Y_test)
X_test = X_test.astype('float32')
X_test /= 255
Y_test = np_utils.to_categorical(Y_test, nb_classes)
# +
def make_model():
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Convolution2D(nb_filters, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='binary_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
return model
def load_image(filename):
img = io.imread(filename)
img = img.transpose((2,0,1))
img = img[:3, :, :]
return img
# -
model = make_model()
model.compile(loss='binary_crossentropy', optimizer='adadelta', metrics=['accuracy'])
for e in range(3):
model.fit(X_t, Y_t,
#validation_data=(X_test, Y_test),
batch_size=batch_size,
nb_epoch=1, verbose=1)
# ## Trying if all this stuff works
# +
predictions = model.predict_classes(X_test)
y = []
for e in Y_test:
if e[0]> e[1]:
y.append(0)
else:
y.append(1)
print('how many did we guess out of ', Y_test.shape)
np.sum(y == predictions)
# -
# # Predictions on new audio sample
# load the spectrogram images generated from the new audio sample
path_newsample = '/archive/'
newsample_files = [f for f in listdir(path_newsample) if isfile(join(path_newsample, f))]
# +
# prepare test set as we did for training set
X_test = []
for fn in newsample_files:
img = io.imread(os.path.join(path_newsample, fn))
img = img.transpose((2,0,1))
img = img[:3, :, :]
X_test.append(img)
X_test = np.asarray(X_test)
X_test = X_test.astype('float32')
X_test /= 255
# -
# grab a large cup of coffee this will take a while
predictions = model.predict_classes(X_test)
# collect all indices of noisy samples (class 1)
# start position is encoded in filename (a trick to run this in parallel with no sequential order)
noisy_frames = np.where(predictions==1)[0]
noisy_files = [newsample_files[n] for n in noisy_frames]
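# For illustration only (this exact filename is hypothetical, but it follows the convention described above,
# and the same parsing is applied to noisy_files further below): a frame image named
# "provocation_img_132000.png" encodes that its window starts at sample 132000.
fn_example = "provocation_img_132000.png"
print(int(fn_example.split('_')[2].split('.')[0]))   # -> 132000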
# ## Playback and clean new samples
# Load a sound with a lot of "ahem" in it
path = '/data'
sound_file_paths = [os.path.join(path, "provocation_dirty.wav")]
sound_names = ["dirty"]
raw_sounds = ut.load_sound_files(sound_file_paths)
windowsize = 6000
# create positive samples
audiosamples = raw_sounds[0]
numsamples = audiosamples.shape[0]
original_audio = audiosamples
clean_audio = audiosamples
# Playback from ipython (cool uh?)
IPython.display.Audio(data=original_audio, rate=44100)
# +
noisy_start = []
for fn in noisy_files:
noisy_start.append(int(fn.split('_')[2].split('.')[0]))
noisy_start.sort(reverse=True)
# -
clean_audio = audiosamples
prev_idx = 0
for start in range(1, len(noisy_start)):
prev_pos = noisy_start[prev_idx]
current_pos = noisy_start[start]
diff = prev_pos - current_pos
prev_idx += 1
# set volume to zero for 'ahem' samples
clean_audio[current_pos:current_pos+windowsize] = 0
# Play it back!
IPython.display.Audio(data=clean_audio, rate=44100)
# save to file and enjoy the clean episode!
librosa.output.write_wav('/archive/cleaned.wav', clean_audio, sr=44100)
# # Audio analytics without deep learning
# +
ut.plot_waves(sound_names,raw_sounds)
ut.plot_specgram(list(sound_names[3:]), list(raw_sounds[3:]))
ut.plot_log_power_specgram(sound_names,raw_sounds)
# traditional audio features
mfccs, chroma, mel, contrast,tonnetz = ut.extract_feature('./data/jingle.wav')
ut.specgram_frombuffer(raw_sounds[0][0:44100], 6, 6, fname='/archive/buffer.png', show=True)
# found a good model to analyze the audio features above
# and... good luck!
| 8,779 |
/prime no..ipynb
|
a58f0d207f74e1c74f8446b5611ff015fad345b7
|
[] |
no_license
|
akash7481/codes
|
https://github.com/akash7481/codes
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,258 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import pandas as pd
import scipy as sp
from sklearn import preprocessing
from sklearn.cross_validation import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve,auc,log_loss
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.cross_validation import train_test_split
import matplotlib
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore') # suppress warnings to keep the output tidy
pd.set_option('precision', 5) # set display precision
pd.set_option('display.float_format', lambda x: '%.5f' % x) # display numbers plainly instead of scientific notation
# %matplotlib inline
df = pd.read_csv("train.csv")
df = df[~np.isnan(df['3'])]
encode = preprocessing.LabelEncoder()
for column in df.columns:
df.loc[:, column] = encode.fit_transform(df[column])
# Handle missing values
df = df.fillna(10)
x = df.values[:,1:3]
y = df.values[:,3]
print x.shape
print y.shape
#Gradient Boosting Decision Tree
gbdt = GradientBoostingClassifier(n_estimators=200)
def cv(x, y, model, n, model_name,mars=False):
k_folds = KFold(x.shape[0], n_folds=n, shuffle=True)
scores = []
loss_list = []
for train_indices, validation_indices in k_folds:
# Generate training data
x_train_cv = x[train_indices]
y_train_cv = y[train_indices]
# Generate validation data
x_validate = x[validation_indices]
y_validate = y[validation_indices]
# Fit model on training data
model.fit(x_train_cv, y_train_cv)
# Score on validation data
scores += [model.score(x_validate, y_validate)]
# log_loss on validation data
proba = model.predict_proba(x_validate)
loss_list += [log_loss(y_validate,proba)]
# Record and report accuracy
average_score = np.mean(scores)
average_log_loss = np.mean(loss_list)
print "Score:", average_score
print "Log_loss:", average_log_loss
return average_score,average_log_loss
gbdt_score, gbdt_log_loss = cv(x,y,gbdt,5,"Gradient Boosting Decision Tree")
model2 = gbdt.fit(x, y)
# # Model with five features
data5 = pd.read_csv('MICEimputedTrain.csv')
data5.head()
data5 = data5[~np.isnan(data5['5'])]
encode = preprocessing.LabelEncoder()
for column in data5.columns:
data5.loc[:, column] = encode.fit_transform(data5[column])
x5 = data5.values[:,2:6]
y5 = data5.values[:,1]
print x5.shape
print y5.shape
gbdt5_score, gbdt5_log_loss = cv(x5,y5,gbdt,5,"Gradient Boosting Decision Tree")
# # Test set with 5 features
model5 = gbdt.fit(x5, y5)
test5 = pd.read_csv('testdf.csv')
# # Test set with 2 features
test2 = pd.read_csv('test_feature2.csv')
test2.head()
user_id = test2['id'].values
encode = preprocessing.LabelEncoder()
for column in test2.columns:
test2.loc[:, column] = encode.fit_transform(test2[column])
# Handle missing values
test2 = test2.fillna(10)
x2 = test2.values[:,1:3]
x2.shape
proba2 = model2.predict_proba(x2)
result = pd.DataFrame({'0id':user_id,'event_30018':proba2[:,0],'event_30021':proba2[:,1],'event_30024':proba2[:,2],
'event_30027':proba2[:,3],'event_30039':proba2[:,4],'event_30042':proba2[:,5],'event_30045':proba2[:,6],
'event_30048':proba2[:,7],'event_36003':proba2[:,8],'event_45003':proba2[:,9]})
result.head()
result.to_csv('result_2feature.csv',index=False)
# The first track plots the insertions and their read counts. Each dot represents an insertion and the height is log(reads+1). The middle track plots the insertion density. The third track represents the reference genes and peaks. Finally, the last track represents peak calls. Below you can see that regions with high densities of insertions are accurately called as Sp1 binding sites.
#
cc.pl.draw_area("chr1",999921,1000324,20000,peak_data_HCT116, HCT116_SP1, "hg38", HCT116_brd4, font_size=2)
df_ind.to_csv(path,header=headers)
# Appends
df_ind.to_csv(path_t,header=False,mode="a")
# -
# ### Import full dataset with Article Data
# Import as csv
file_name='df_exec.csv'
path=root_path_data+file_name
df_fexec=pd.read_csv(path,index_col=0)
# Import as csv
file_name='df_ind.csv'
path=root_path_data+file_name
df_find=pd.read_csv(path,index_col=0)
# ## Initial Data Exploration
df_fexec.head()
df_fexec.describe()
df_find.head()
df_find.columns
# +
# table6=df_find.pivot(index=['Variation', 'Solution'])
# -
table6=pd.pivot_table(df_find,index=["Variation","Solution"])
table6
df_find.loc[df_find.Variation!="Article"].describe()
df_find.columns
df_ind_short=df_find.drop(['Execution'],axis=1)
df_ind_short
# Export as csv
file_name='df_ind_short.csv'
path=root_path_data+file_name
df_find_short=df_ind_short.to_csv(path)
# Import as csv
file_name='df_ind_short.csv'
path=root_path_data+file_name
df_find_short=pd.read_csv(path,index_col=0)
df_find_short.columns
df_find_short.reset_index(drop=True)
df_find_short
table6=df_find_short.T
table6
# +
# Export table to txt for latex Report and Beamer
file_name = "resultado.txt"
path = root_path_report + file_name
with codecs.open(path, 'w', 'utf-8') as outfile:
outfile.write(table6.to_latex(index=True,header=False,escape=True))
path = root_path_beamer + file_name
with codecs.open(path, 'w', 'utf-8') as outfile:
outfile.write(table6.to_latex(index=True,header=False,escape=True))
mtx_HCT116 = cc.pl.calculate_signal(peak_data_HCT116, chipseq_signal = "ENCFF587ZMX.bigWig")
# Visualize it by plotting the signal values.
cc.pl.signal_plot(mtx_HCT116, alpha = 0.05, figsize=(6, 4))
# Visualize it with a signal heatmap plot.
cc.pl.signal_heatmap(mtx_HCT116,pad = 0.035)
# We can now use HOMER to call motifs. We hope to find the canonical Sp1 motif enriched under the call peaks.
cc.tl.call_motif("peak_HCT116.bed",reference ="hg38",save_homer = "Homer/peak_HCT116",
homer_path = "/home/juanru/miniconda3/bin/", num_cores=12)
# In the motif analysis result, SP1 motif and many other family members rank top.
#
#
# <img src="peak_HCT116.png" alt="drawing" width="800"/>
# Do the exact same thing for K562 SP1 data.
# read experiment data
K562_SP1 = cc.datasets.SP1_K562HCT116_data(data="K562_SP1_qbed")
K562_SP1
# read background data
K562_brd4 =cc.datasets.SP1_K562HCT116_data(data="K562_brd4_qbed")
K562_brd4
peak_data_K562 = cc.pp.call_peaks(K562_SP1, K562_brd4, method = "MACCs", reference = "hg38", window_size = 2000, step_size = 500,
pvalue_cutoffTTAA = 0.0001, pvalue_cutoffbg = 0.1, lam_win_size = None, pseudocounts = 0.1, record = True, save = "peak_k562.bed")
peak_data_K562
cc.pl.draw_area("chr10",3048452,3049913,60000,peak_data_K562,K562_SP1, "hg38", K562_brd4 , font_size=2,
figsize = (30,15),peak_line = 4,save = False,bins =400, plotsize = [1,1,5], example_length = 1000)
qbed = {"SP1":K562_SP1, "Brd4": K562_brd4}
bed = {"peak":peak_data_K562}
cc.pl.WashU_browser_url(qbed = qbed,bed = bed,genome = 'hg38')
cc.pl.whole_peaks(peak_data_K562, reference = "hg38",figsize=(100, 70),height_scale = 1.7)
# We can see that SP1 binds much more frequently in K562 than HCT116.
# We can then check with reference Chip-seq data of SP1 in K562 from [ENCSR372IML](https://www.encodeproject.org/experiments/ENCSR372IML/) (and use the bigWig file [ENCFF588UII](https://www.encodeproject.org/files/ENCFF588UII/) generated by it).
#
# Download the data if needed:
#
# ``` Python
# # !wget https://www.encodeproject.org/files/ENCFF588UII/@@download/ENCFF588UII.bigWig
# ```
mtx_K562 = cc.pl.calculate_signal(peak_data_K562, chipseq_signal = "ENCFF588UII.bigWig")
cc.pl.signal_plot(mtx_K562, alpha = 0.05, figsize=(6, 4))
cc.pl.signal_heatmap(mtx_K562,pad = 0.023, belowlength = 100)
# We can see that calling cards peaks are consistent with Chip-seq data. Peak centers tend to have a higher signal and the signal goes lower as the distance increases.
# Call motif to check the peak results.
cc.tl.call_motif("peak_k562.bed",reference ="hg38",save_homer = "Homer/peak_k562",
homer_path = "/home/juanru/miniconda3/bin/", num_cores=12)
# In the motif analysis result, SP1 motif and many other family members rank top.
#
#
# <img src="peak_k562.png" alt="drawing" width="800"/>
# Next we want to identify binding sites that are differentially bound in K562 and Hct-116 cells. This can be challenging as the two samples may have slightly shifted peaks centers at a given genomic region, leading to false positive differential peak calls. To handle this, Pycallingcards first combines the insertions from the two samples and calls peaks on the joint dataset. We do this using [bedtools](https://bedtools.readthedocs.io/en/latest/) and [pybedtools](https://daler.github.io/pybedtools/).
import pybedtools
peak = cc.rd.combine_qbed([peak_data_HCT116, peak_data_K562])
peak = pybedtools.BedTool.from_dataframe(peak).merge().to_dataframe()
peak_data = peak.rename(columns={"chrom":"Chr", "start":"Start", "end":"End"})
peak_data
# We can now visualize the peaks called on the joint dataset.
cc.pl.draw_area("chr1",999921,1000324,15000,peak_data, HCT116_SP1, "hg38", HCT116_brd4, font_size=2,
figsize = (30,10),peak_line = 2,save = False,plotsize = [1,1,3], example_length = 1000,
title = "HCT116_SP1")
cc.pl.draw_area("chr1",999921,1000324,15000,peak_data, K562_SP1, "hg38", K562_brd4, font_size=2,
figsize = (30,10),peak_line = 2,save = False,plotsize = [1,1,3], example_length = 1000,
title = "K562_SP1")
cc.pl.draw_area("chr10",3048452,3049913,60000,peak_data, HCT116_SP1, "hg38", HCT116_brd4, font_size=2,
figsize = (30,14), peak_line = 3,save = False, bins = 200, plotsize = [1,1,5],
example_length = 1000, title = "HCT116_SP1")
cc.pl.draw_area("chr10",3048452,3049913,60000,peak_data, K562_SP1, "hg38", K562_brd4, font_size=2,
figsize = (30,14), peak_line = 3,save = False, bins = 200, plotsize = [1,1,5],
example_length = 1000, title = "K562_SP1")
# The results seem to be good! Congratulations! Now we can annotate the peaks using bedtools.
peak_annotation = cc.pp.annotation(peak_data, reference = "hg38")
peak_annotation = cc.pp.combine_annotation(peak_data,peak_annotation)
peak_annotation
# Combine the two experiment qbed files to make anndata object.
exp_qbed = pd.concat([K562_SP1,HCT116_SP1])
exp_qbed
# Read the barcode file.
barcodes = cc.datasets.SP1_K562HCT116_data(data = "barcodes")
barcodes = barcodes.drop_duplicates(subset=['Index'])
barcodes
# Now we will connect the peaks (and insertions under the peaks) to the cell barcode data. To do so, we will use the qbed data, peak data and barcodes data to make a cell by peak anndata object.
adata_cc = cc.pp.make_Anndata(exp_qbed, peak_annotation, barcodes)
adata_cc
# Although one peak should have many insertions, there is a chance that all the cells from the peak were filtered out during RNA preprocessing. In this case, we advise filtering the peaks. Additionally, we also recommend filtering cells that have very few insertions.
cc.pp.filter_peaks(adata_cc, min_counts = 5)
cc.pp.filter_peaks(adata_cc, min_cells = 5)
adata_cc
# Next we can perform differential peak analysis to determine which peaks are cell type specific. In this example, we use the *fisher exact test* to find peaks enriched in K562 versus Hct116 cells.
cc.tl.rank_peak_groups(adata_cc, "cluster", method = 'fisher_exact', key_added = 'fisher_exact')
# We can plot the results for differential peak analysis.
cc.pl.rank_peak_groups(adata_cc, key = 'fisher_exact')
# Now let's visualize some peaks that are differentially bound. The colored ones are the insertions for the cluster of interest (i.e. cell type) and the grey ones are insertions in the rest of the clusters. In this case there are only two clusters, HCT116 and K562. We observe large differences in Sp1 binding in HCT116 and K562 cells.
bg_qbed = pd.concat([K562_brd4, HCT116_brd4])
bg_qbed
# In the tracks above, we see a strong peak on Chr 15 in HCT116 cells (purple) that is not present in K562 cells (red).
cc.pl.draw_area("chr7", 143706539, 143718962, 100000, peak_data, exp_qbed, "hg38", adata = adata_cc,
bins = 250, font_size=2, name = "K562", key = 'cluster', figsize = (30,13),
name_insertion2 = 'Total Insertions', name_density2 = 'Total Insertion Density',
name_insertion1 = 'K562 Insertions', name_density1 = 'K562 Insertion Density',
peak_line = 4, color = "red", plotsize = [1,1,5], title = "chr7_143706539_143718962_K562")
cc.pl.draw_area("chr7",143706539,143718962,100000,peak_data,exp_qbed,"hg38",adata = adata_cc,
bins = 250, font_size=2, name = "HCT116", key ='cluster', figsize = (30,13),
name_insertion2 = 'Total Insertions', name_density2 = 'Total Insertion Density',
name_insertion1 = 'HCT116 Insertions', name_density1 = 'HCT116 Insertion Density',
peak_line = 4, color = "purple", plotsize = [1,1,5], title = "chr7_143706539_143718962_HCT116")
# In the tracks above we see a number of Sp1 peaks on chr7 that are tightly bound in K562, but not in HCT116.
cc.pl.draw_area("chr8", 17799370, 17802353, 100000, peak_data, exp_qbed, "hg38", adata = adata_cc,
bins = 250, font_size=2, name = "K562", key = 'cluster', figsize = (30,13), peak_line = 3,
name_insertion2 = 'Total Insertions', name_density2 = 'Total Insertion Density',
name_insertion1 = 'K562 Insertions', name_density1 = 'K562 Insertion Density',
color = "red", plotsize = [1,1,6], title = "chr19_9818643_9820060")
cc.pl.draw_area("chr8", 17799370, 17802353, 100000, peak_data, exp_qbed, "hg38", adata = adata_cc,
bins = 250, font_size=2, name = "HCT116",key ='cluster',figsize = (30,13),peak_line = 3,
name_insertion2 = 'Total Insertions', name_density2 = 'Total Insertion Density',
name_insertion1 = 'HCT116 Insertions', name_density1 = 'HCT116 Insertion Density',
color = "purple", plotsize = [1,1,6], title = "chr19_9818643_9820060")
# Here we find peaks on Chr8 that are bound in HCT116 but not in K562 cells.
# Save the file if needed.
adata_cc.write("SP1_qbed.h5ad")
| 14,413 |
/day04_NLP/中文分词.ipynb
|
e2c0a2c37cc644f007622e3837e4403991230375
|
[] |
no_license
|
baimax321/machine_learning_Regression_Model
|
https://github.com/baimax321/machine_learning_Regression_Model
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,869 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import numpy as np
import jieba
# Full mode
seg_list = jieba.cut("我来到北京清华大学",cut_all=True)
"/".join(seg_list)
# Accurate (precise) mode
seg_list = jieba.cut("我来到北京清华大学",cut_all=False)
"/".join(seg_list)
# Search-engine mode
seg_list = jieba.cut_for_search("小明毕业于中国科学院研究所,后在日本京都大学深造")
"/".join(seg_list)
# +
# Add a user-defined segmentation dictionary
jieba.load_userdict("mydict.txt")
seg_list = jieba.cut("乒乓球拍卖完了",cut_all=False)
"/".join(seg_list)
seg_list = jieba.cut("张平平安全到家了",cut_all=False)
"/".join(seg_list)
# -
# # Hotel review sentiment analysis
| 765 |
/L3/Action2.ipynb
|
244e6d9bb41a757affc4724a286d6aa4d439ccb0
|
[] |
no_license
|
Polaris-Huang/AI
|
https://github.com/Polaris-Huang/AI
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 46,818 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Word-cloud analysis of the Market Basket data: a visual exploration of the TOP 10 products
# -
# -*- coding:utf-8 -*-
# Word cloud display
from wordcloud import WordCloud
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
from lxml import etree
from nltk.tokenize import word_tokenize
data=pd.read_csv("./Market_Basket_Optimisation.csv",header=None,sep=',')
content=[]
for i in data.values:
for j in i:
if str(j) is not None and str(j)!='nan':
content.append(str(j).strip())
else:
pass
content
# Generate the word cloud
def create_word_cloud(input_text):
print('Generating the word cloud based on word frequencies!')
wc = WordCloud(
max_words=10,
width=2000,
height=1200,
)
wordcloud = wc.generate(input_text)
# Write the word cloud image to a file
wordcloud.to_file("wordcloud.jpg")
# Display the word cloud file
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
# +
# Generate the word cloud
create_word_cloud(",".join(content))
# -
# ?create_word_cloud
# ?WordCloud
| 1,163 |
/Week5_Continuity_IVT_DeltaEpsilon.ipynb
|
b8df357ba20eb7f254fed3ace35fd208930bb6d3
|
[
"Apache-2.0"
] |
permissive
|
ctralie/Math111_F2019_Review
|
https://github.com/ctralie/Math111_F2019_Review
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 76,238 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: SageMath 8.8
# language: sage
# name: sagemath
# ---
# # Week 5 Review
# ## Problem 1
#
# $\Large f(x) = \{ \begin{array}{cc} x^2 + 4x & x > 2 \\ 2x & x \leq 2 \end{array}$
#
#
# 1) Use the continuity checklist to show that $f$ is not continuous at 2
#
#
# a. $f(2)$ is not defined
#
# b. Although $\lim_{x \to 2} f(x)$ exists, it does not equal f(2)
#
# c. $\lim_{x \to 2} f(x)$ does not exist
#
#
# 2) Is $f(x)$ continuous from the left or from the right at 2?
#
#
# 3) State the intervals of continuity
#
def f(x):
if x > 2:
return x^2 + 4*x
else:
return 2*x
print(f(2.001))
plot(f, (x, 0, 4))
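# One way to reason through Problem 1: $f(2) = 2 \cdot 2 = 4$ is defined, $\lim_{x \to 2^-} f(x) = 4$ while $\lim_{x \to 2^+} f(x) = 2^2 + 4 \cdot 2 = 12$, so $\lim_{x \to 2} f(x)$ does not exist (case c). Since the left-hand limit equals $f(2)$, $f$ is continuous from the left at 2, and the intervals of continuity are $(-\infty, 2]$ and $(2, \infty)$.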
# ## Problem 2
# Does the following equation have a solution on $(-1, 0)$?
#
# $ \Large 2x^3 + x + 2 = 0 $
#
# Yes / No / Uncertain
f = 2*x^3 + x + 2
print(f(-1))
print(f(0))
plot(f, (x, -1, 0))
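# One way to reason through Problem 2: $f(x) = 2x^3 + x + 2$ is a polynomial, hence continuous on $[-1, 0]$, and
#
# $ f(-1) = -2 - 1 + 2 = -1 < 0, \qquad f(0) = 2 > 0 $
#
# so by the Intermediate Value Theorem the equation has a solution in $(-1, 0)$.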
# ## Problem 3
# If $f(x)$ is continuous on $(0, 1)$, $f(0) = -10$ and $f(1) = -5$, then does the equation $f(x) = 0$ have a solution on the interval $(0, 1)$?
#
# Yes/No/Uncertain
# ## Problem 4
# Does $f(x) = \log_2(\frac{x^2}{x+3})$
#
# have a zero on the interval $(2, 3)$?
#
# Yes / No / Uncertain
#
f = log(x^2/(x+3))/log(2)
plot(f, (x, 2, 3))
# ## Problem 5
# Find the points of discontinuity of the following function
#
# $\Large f(x) = \frac{x-1}{|3(x-1)|} + \frac{1}{x}$
f = (x-1)/abs(3*(x-1)) + 1/x
plot(f, (x, -2, 2), ymin=-2, ymax=2)
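# One way to reason through Problem 5: for $x > 1$ the term $\frac{x-1}{|3(x-1)|}$ equals $\frac{1}{3}$, while for $x < 1$ it equals $-\frac{1}{3}$, so $f$ is undefined and has a jump at $x = 1$; the term $\frac{1}{x}$ gives an infinite discontinuity at $x = 0$. Hence the points of discontinuity are $x = 0$ and $x = 1$.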
# ## Intermezzo:
#
# https://www.desmos.com/calculator/iejhw8zhqd
#
#
# ## Problem 6
# For the function in problem 5, find an $\epsilon$ for which it is impossible to find a $\delta$ so that $|f(1+x) - f(x)| < \epsilon$ for $-\delta < x < \delta$
#
# ## Problem 7
#
# Let
#
#
# $ \Large g(x) = \frac{2x^2 + 12x + 18}{x+3} $
#
# What is
#
# $\Large \lim_{x \to -3} g(x)$ ?
#
# Given an $\epsilon$, find a $\delta$ for which
#
# $ |f(-3 + x)| < \epsilon $ whenever $-\delta < x < \delta$
g = (2*x^2+12*x+18)/(x+3)
plot(g, (x, -4, -2))
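# One way to reason through Problem 7: for $x \neq -3$,
#
# $ \Large g(x) = \frac{2(x+3)^2}{x+3} = 2(x+3) $
#
# so $\lim_{x \to -3} g(x) = 0$. Given $\epsilon > 0$, note $|g(-3+x)| = |2x| < \epsilon$ whenever $0 < |x| < \delta = \epsilon/2$, so $\delta = \epsilon/2$ works.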
| 2,132 |
/Camaras/.ipynb_checkpoints/Efecto Acuerdos-checkpoint.ipynb
|
f042b12447e0a2dd991b1bb8abff5c853d255439
|
[] |
no_license
|
SantiagoCuratPeya/Planning
|
https://github.com/SantiagoCuratPeya/Planning
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 15,049 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### IMPORTS
import sys
path = 'C:\\Users\\santiago.curat\\Pandas\\PEYA'
sys.path.insert(0,path)
import pandas as pd
import numpy as np
import calendar
import datetime
from dateutil.relativedelta import relativedelta
from Roster_FEHGRA import *
from gspread_pandas import Spread, conf
# ### CREDENTIALS AND CONNECTIONS
cred = conf.get_config('C:\\Users\\santiago.curat\\Pandas\\PEYA', 'PedidosYa-8b8c4d19f61c.json')
# ### GOOGLE SHEETS
# Pull the impacts sheet
sheet_id = '18TKPCJy746jqLvD6u2fkuOYkgpXVYoOUe5cIU-LTgaI'
wks_name = 'Analisis Post'
sheet = Spread(sheet_id, wks_name, config=cred)
base = sheet.sheet_to_df(index=0,header_rows=1)
# Drop empty rows
base = base[base['Grids'] != ''].copy()
# Filter columns
base = base[['Grids','Ejecución Impacto','Impacto','Comi Vieja 2','Comision Nueva','Camara']].copy()
base.columns = ['Grids','Ejecucion','Impacto','Comision Vieja','Comision Nueva','Camara']
# Change the date format
base['Ejecucion'] = [x.replace('/', '-') for x in base['Ejecucion']]
base['Ejecucion'] = base['Ejecucion'].apply(lambda x: str(datetime.date.today().replace(day=1))[:-3] if x == '' else x)
# Change the commission format
base['Comision Vieja'] = [float(x.replace(',','.').replace('%','')) for x in base['Comision Vieja']]
# Build the list of Grids
grids = str(base['Grids'].to_list()).replace('[','(').replace(']',')')
# ### CONSTANTS
inicio = '2021-04-01'
fin = '2021-04-30'
# ### QUERIES
# +
q_p = '''WITH orders_table AS (
SELECT o.restaurant.id AS id,
SUBSTR(CAST(o.registered_date AS STRING),1,7) AS month,
SUM(CASE WHEN o.with_logistics THEN o.total_amount + o.discount_paid_by_company + o.shipping_amount - o.shipping_amount_no_discount
ELSE o.total_amount + o.shipping_amount + o.discount_paid_by_company END) AS income,
SUM(o.commission_amount) AS Revenue
FROM `peya-bi-tools-pro.il_core.fact_orders` AS o
WHERE o.country_id = 3
AND o.registered_date >= DATE('2020-12-01')
AND o.order_status = 'CONFIRMED'
GROUP BY 1,2)
SELECT p.salesforce_id AS Grids,
ot.month AS Month,
CASE WHEN p.is_online THEN 'Si' ELSE 'No' END AS Online,
CASE WHEN p.is_logistic THEN 'Si' ELSE 'No' END AS Logistic,
IFNULL(p.billingInfo.sap_id,'-') AS SAP_Id,
IFNULL(p.billingInfo.rut,'-') AS CUIT,
p.city.name AS City,
a.area_name AS Area,
p.billingInfo.partner_commission AS Commission,
ot.income AS Income,
ot.revenue AS Revenue
FROM `peya-bi-tools-pro.il_core.dim_partner` AS p
LEFT JOIN orders_table AS ot ON p.partner_id = ot.id
LEFT JOIN `peya-bi-tools-pro.il_core.dim_area` AS a ON p.address.area_id = a.area_id
WHERE p.country_id = 3
AND p.salesforce_id IN {0}'''.format(grids)
q_resto = '''SELECT p.city.name AS City,
a.area_name AS Area,
SUM(CASE WHEN o.with_logistics THEN o.total_amount + o.discount_paid_by_company + o.shipping_amount - o.shipping_amount_no_discount ELSE 0 END) AS Income_Log,
SUM(CASE WHEN o.with_logistics = FALSE THEN o.total_amount + o.shipping_amount + o.discount_paid_by_company ELSE 0 END) AS Income_Mktp,
SUM(CASE WHEN o.with_logistics THEN o.commission_amount ELSE 0 END) AS Revenue_Log,
SUM(CASE WHEN o.with_logistics= FALSE THEN o.commission_amount ELSE 0 END) AS Revenue_Mktp
FROM `peya-bi-tools-pro.il_core.fact_orders` AS o
LEFT JOIN `peya-bi-tools-pro.il_core.dim_partner` AS p ON o.restaurant.id = p.partner_id
LEFT JOIN `peya-bi-tools-pro.il_core.dim_area` AS a ON p.address.area_id = a.area_id
WHERE p.country_id = 3
AND o.registered_date BETWEEN DATE('{0}') AND DATE('{1}')
AND p.salesforce_id NOT IN {1}
AND o.order_status = 'CONFIRMED'
GROUP BY 1,2'''.format(inicio,fin,grids)
# -
# Download the data
hue_p = pd.io.gbq.read_gbq(q_p, project_id='peya-argentina', dialect='standard')
hue_resto = pd.io.gbq.read_gbq(q_resto, project_id='peya-argentina', dialect='standard')
# Copy the dataframes
partners = hue_p.copy()
resto = hue_resto.copy()
# ### FUNCTIONS
# +
def comisiones(i):
if i['Comision Vieja'] == 0:
bandera = 0
for j in tuplas_com:
if i['Grids'] == j[0]:
val = j[1]
bandera = 1
if bandera == 0:
val = 'Error'
else:
val = i['Comision Vieja']
return val
def incomes_revenues(i):
bandera = 0
for j in tuplas_inc:
if i['Grids'] == j[1] and str(j[0]) == (i['Ejecucion'] - pd.DateOffset(months=1)).strftime('%Y-%m'):
income = j[2]
revenue = j[3]
bandera = 1
if bandera == 0:
income = 0
revenue = 0
return pd.Series([income,revenue])
def nuevas_comisiones(i):
if i['Comision Nueva'] == '-':
if i['Logistic'] == 'Si':
val = 18
else:
val = 10
else:
val = i['Comision Nueva']
return val
# -
# ### PROCESSING
# Create the list of tuples for commissions
tuplas_com = list(partners[['Grids','Commission']].drop_duplicates().to_records(index=False))
# Assign the commission to the partners that are missing it
base['Comision Vieja'] = base.apply(comisiones,axis=1)
base = base[base['Comision Vieja'] != 'Error'].copy()
# Format the dates
partners['Month'] = pd.to_datetime(partners['Month'], format='%Y-%m').dt.strftime('%Y-%m')
base['Ejecucion'] = pd.to_datetime(base['Ejecucion'], format='%Y-%m')
# Create a list of tuples for Income
tuplas_inc = list(partners[['Month','Grids','Income','Revenue']].to_records(index=False))
# Set the Income LM (last month)
base[['Income LM','Revenue LM']] = base.apply(incomes_revenues,axis=1)
# Change the month format
base['Ejecucion'] = pd.to_datetime(base['Ejecucion'], format='%Y-%m-%d').dt.strftime('%Y-%m')
# Add Feudo and Reino to partners
partners['Feudo'] = partners.apply(feudos,axis=1)
partners['Reino'] = partners.apply(reinos,axis=1)
# Add Online and Logistic to the base
final_base = base.merge(partners[['Grids','Online','Logistic','Feudo','Reino','SAP_Id','CUIT']].drop_duplicates(),on=['Grids'],how='left')
# Assign the estimated new commission to the new partners
final_base['Comision Nueva'] = final_base.apply(nuevas_comisiones,axis=1)
# Compute the new Revenue
final_base['Income LM'] = final_base['Income LM'].astype(float)
final_base['Comision Nueva'] = final_base['Comision Nueva'].astype(float)
final_base['Revenue Nuevo'] = final_base['Income LM'] * (final_base['Comision Nueva']/100)
final_base['Revenue Nuevo'].replace([np.nan,np.inf,-np.inf],0,inplace=True)
# Add Feudo and Reino to the rest
resto['Feudo'] = resto.apply(feudos,axis=1)
resto['Reino'] = resto.apply(reinos,axis=1)
# Create a pivot table (PT) from the rest
values = ['Income_Log','Income_Mktp','Revenue_Log','Revenue_Mktp']
resto[values] = resto[values].astype(float)
pt_resto = resto.pivot_table(index=['Reino','Feudo'],values=values,aggfunc='sum',fill_value=0).reset_index()
# Fix the number format in Final_Base
final_base['Comision Vieja'] = [str(x).replace('.', ',') for x in final_base['Comision Vieja']]
final_base['Comision Nueva'] = [str(x).replace('.', ',') for x in final_base['Comision Nueva']]
final_base['Income LM'] = [str(x).replace('.', ',') for x in final_base['Income LM']]
final_base['Revenue LM'] = [str(x).replace('.', ',') for x in final_base['Revenue LM']]
final_base['Revenue Nuevo'] = [str(x).replace('.', ',') for x in final_base['Revenue Nuevo']]
# Fix the number format in PT_Resto
pt_resto['Income_Log'] = [str(x).replace('.', ',') for x in pt_resto['Income_Log']]
pt_resto['Income_Mktp'] = [str(x).replace('.', ',') for x in pt_resto['Income_Mktp']]
pt_resto['Revenue_Log'] = [str(x).replace('.', ',') for x in pt_resto['Revenue_Log']]
pt_resto['Revenue_Mktp'] = [str(x).replace('.', ',') for x in pt_resto['Revenue_Mktp']]
# Reorder the columns of Final_Base
cols = ['Grids','Ejecucion','Impacto','Comision Vieja','Comision Nueva','Camara','Income LM','Revenue LM','Online','Logistic',
'Feudo','Reino','Revenue Nuevo','SAP_Id','CUIT']
final_base = final_base[cols].copy()
# ### UPLOAD
# Final upload - Impacts
sheet_id = '18TKPCJy746jqLvD6u2fkuOYkgpXVYoOUe5cIU-LTgaI'
wks_name = 'Analisis Economico Impactos'
sheet = Spread(sheet_id, wks_name, config=cred)
sheet.df_to_sheet(final_base, index=False, sheet=wks_name, replace=True)
# Final upload - Rest
sheet_id = '18TKPCJy746jqLvD6u2fkuOYkgpXVYoOUe5cIU-LTgaI'
wks_name = 'Analisis Economico Resto'
sheet = Spread(sheet_id, wks_name, config=cred)
sheet.df_to_sheet(pt_resto, index=False, sheet=wks_name, replace=True)
# We saw that the most common words include "the" and others above - start by making these stop words.
#
# N-grams are sequences of adjacent words (e.g. adding 2-grams includes every pair of consecutive words as a feature)
#
#
# Look at the docs: `CountVectorizer()` and `TfidfVectorizer()` can be modified to handle all of these things. Work in groups and try a few different combinations of these settings for anything you want: binary counts, numeric counts, tf-idf counts. Here is how you would use these settings:
#
# - "`ngram_range=(1,2)`": would include unigrams and bigrams (ie including combinations of words in sequence)
# - "`stop_words="english"`": would use a standard set of English stop words
# - "`lowercase=False`": would turn off lowercase transformation (it is actually on by default)!
#
# You can use some of these like this:
#
# `tfidf_vectorizer = TfidfVectorizer(ngram_range=(1,2), lowercase=False)`
#
# #### Models
# Next, swap out the line creating a logistic regression for one creating a naive Bayes or support vector machine (SVM) classifier. SVMs have been shown to be very effective in text classification. Naive Bayes has also been used widely.
#
# For example see: http://www.cs.cornell.edu/home/llee/papers/sentiment.pdf
#
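# As a concrete illustration (the settings and the two-document mini-corpus below are chosen for demonstration, not prescribed by the exercise), the options above can be combined in a single vectorizer:
# +
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer

# tf-idf over unigrams + bigrams, with English stop words removed (lowercasing stays on by default)
combo_tfidf = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")

# binary (presence/absence) counts instead of raw term frequencies
binary_counts = CountVectorizer(binary=True, stop_words="english")

docs = ["The movie was great", "The movie was terrible, truly terrible"]   # hypothetical mini-corpus
print(combo_tfidf.fit_transform(docs).shape)
print(binary_counts.fit_transform(docs).toarray())
# -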
# +
# Try different features, models, or both!
# What is the highest accuracy you can get?
#Tfidf
from sklearn.svm import LinearSVC
tfidf_vectorizer = TfidfVectorizer(ngram_range=(1,2), lowercase=False)
tfidf_vectorizer.fit(X_text)
# Turn these tokens into a numeric matrix
X = tfidf_vectorizer.transform(X_text)
NB = BernoulliNB()
SVM = LinearSVC()
acc = cross_validation.cross_val_score(NB, X, Y, scoring="accuracy", cv=5)
acc1 = cross_validation.cross_val_score(SVM, X, Y, scoring="accuracy", cv=5)
# Print out the average accuracy rounded to three decimal places
print( "Accuracy for naive Bayes is " + str(round(np.mean(acc), 3)) )
print( "Accuracy for SVM is " + str(round(np.mean(acc1), 3)) )
# +
#Count
count_vectorizer = CountVectorizer()
# Let the vectorizer learn what tokens exist in the text data
count_vectorizer.fit(X_text)
# Turn these tokens into a numeric matrix
X = count_vectorizer.transform(X_text)
NB = BernoulliNB()
SVM = LinearSVC()
acc = cross_validation.cross_val_score(NB, X, Y, scoring="accuracy", cv=5)
acc1 = cross_validation.cross_val_score(SVM, X, Y, scoring="accuracy", cv=5)
# Print out the average accuracy rounded to three decimal places
print( "Accuracy for naive Bayes is " + str(round(np.mean(acc), 3)) )
print( "Accuracy for SVM is " + str(round(np.mean(acc1), 3)) )
| 11,353 |