| Column | Type |
|---|---|
| path | string (length 8-204) |
| content_id | string (length 40) |
| detected_licenses | list |
| license_type | string (2 classes) |
| repo_name | string (length 8-100) |
| repo_url | string (length 27-119) |
| star_events_count | int64 (0-6.26k) |
| fork_events_count | int64 (0-3.52k) |
| gha_license_id | string (10 classes) |
| gha_event_created_at | timestamp[ns] |
| gha_updated_at | timestamp[ns] |
| gha_language | string (12 classes) |
| language | string (1 class) |
| is_generated | bool (1 class) |
| is_vendor | bool (1 class) |
| conversion_extension | string (6 classes) |
| size | int64 (172-10.2M) |
| script | string (length 367-7.46M) |
| script_size | int64 (367-7.46M) |
path: /notebook_history/model_with_increased_layers.ipynb | content_id: 9d02f0d9114c2927f70218c96ed816d23ad4bcd3 | detected_licenses: [] | license_type: no_license | repo_name: aarushiibisht/Face-Detection-and-attribute-classification | repo_url: https://github.com/aarushiibisht/Face-Detection-and-attribute-classification | star_events_count: 0 | fork_events_count: 0 | gha_license_id: null | gha_event_created_at: null | gha_updated_at: null | gha_language: null | language: Jupyter Notebook | is_generated: false | is_vendor: false | conversion_extension: .py | size: 199,430 | script:
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pandas
# - open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for Python
# +
# import pandas
import pandas as pd
import numpy as np
# -
df = pd.DataFrame(np.arange(0,10).reshape(5,2),
index=['a','b','c','d','e'],
columns=['x','y'])
df.head()
df.to_csv('Test.csv') # write the DataFrame to a CSV file with the built-in to_csv method
# the file appears in the Jupyter home directory
# Accessing elements in two ways:
# 1. .loc  -> label-based indexing (select rows/columns by their labels)
# 2. .iloc -> integer position-based indexing (select rows/columns by position); a contrast example follows below
#for indexing row
df.loc['a']
#for indexing column
#df['x']
df[['x','y']]
type(df.loc['a'])
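# A small added contrast example using the df defined above: .loc selects by label (and a label slice
# includes its end point), while .iloc selects by integer position (and a position slice excludes its stop).
df.loc['b':'d', 'x']    # rows labelled 'b' through 'd' inclusive, column 'x'
df.iloc[1:3, 0]         # rows at positions 1 and 2 only, first column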
# DataFrame vs Series
# - A DataFrame is a combination of multiple rows and columns (more than one of each); the distinction is illustrated below
# - A Series is a single row or a single column
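# A quick added illustration of the distinction, using the same df: single brackets return a Series,
# double brackets return a one-column DataFrame.
type(df['x'])      # pandas Series
type(df[['x']])    # pandas DataFrame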
df.iloc[:,:]
type(df.iloc[:,:])
type(df.iloc[2:,:1])
type(df.iloc[1:4,0:1])
df.iloc[1:4,0:1]
type(df.iloc[1:4,0:1])
df.iloc[1:4,0]
type(df.iloc[1:4,0])
#convert dataframes into array
df.iloc[:,:].values
#checking if there is null value in column
df.isnull().sum()
#checking unique categories in column
df['x'].value_counts()
# repeated values are not shown; unique() lists each distinct value only once
df['x'].unique()
df.info()
df.describe()
df = pd.read_csv('mercedesbenz.csv')
df.head()
df.value_counts('X0')
df['X11'].value_counts()
df.value_counts('X11')
df[df['y']<100]
df.corr()
df.iloc[:,9]
# # CSV
from io import StringIO, BytesIO
data = ('col1,col2,col3\n'
'x,y,1\n'
'a,b,2\n'
'c,d,3')
type(data)
pd.read_csv(StringIO(data))
# Reading from specific columns
df = pd.read_csv(StringIO(data), usecols=['col1','col3'])
df
df.to_csv('Test.csv')
# +
# specifying columns data types
data = ('a,b,c,d\n'
'1,2,3,4\n'
'5,6,7,8\n'
'9,10,11')
# -
print(data)
df = pd.read_csv(StringIO(data),dtype=object)
df
df['d']
# for the same data, dtype=int raises an error because the missing value in the last row becomes NaN, which cannot be stored in an integer column, so complete the data first
data = ('a,b,c,d\n'
'1,2,3,4\n'
'5,6,7,8\n'
'9,10,11,12')
df = pd.read_csv(StringIO(data),dtype=float)
df
df['a'][2]
# using dictionary
df = pd.read_csv(StringIO(data),dtype={'a':int,'b':float,'c':'Int64'})
df
df.dtypes
# ### Index columns and trailing delimiters
# +
data = ('a,b,c,d\n'
'1,2,3,4\n'
'5,6,7,8\n'
'9,10,11')
print(data)
# -
pd.read_csv(StringIO(data))
# now making column d as index
pd.read_csv(StringIO(data),index_col=3)
data = ('a,b,c,d\n'
'1,apple,banana,orange,\n'
'5,parrot,sparrow,crow,\n'
'9,dog,cat,pig,')
pd.read_csv(StringIO(data))
# Above, the values 1, 5, 9 end up as the index because each data row carries a trailing delimiter; they belong in column a, so pass index_col=False to keep them as a regular column
pd.read_csv(StringIO(data),index_col=False)
# combining usecols and index_col
data = ('a,b,c,d\n'
'1,apple,banana,orange,\n'
'5,parrot,sparrow,crow,\n'
'9,dog,cat,pig,')
pd.read_csv(StringIO(data),usecols=['a','d'],
index_col=False)
# +
# Quoting and escape characters (useful for NLP)
data = 'a,b\n"hello,\\"bob\\",wassup?","hh"'
# -
pd.read_csv(StringIO(data), escapechar='\\')
# URL to CSV
df = pd.read_csv('https://download.bls.gov/pub/time.series/cu/cu.item',
sep='\t')
df.head()
# # Read Json to CSV
Data = '{"employee_name": "James", "email": "[email protected]", "job_profile": [{"title1":"Team Lead", "title2":"Sr. Developer"}]}'
d=pd.read_json(Data)
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data',
header=None)
df.head()
# converting json to csv
df.to_csv('wine.csv')
# converting json to different json formats
d.to_json()
d.to_json(orient='records')
# +
# whole dataframe converted into json record wise
df.to_json(orient='records')
p_datapoint_generator(validation_dataset_files)
# + colab={"base_uri": "https://localhost:8080/", "height": 715} colab_type="code" executionInfo={"elapsed": 1220, "status": "ok", "timestamp": 1543111120685, "user": {"displayName": "Jagpreet Chawla", "photoUrl": "https://lh3.googleusercontent.com/-vBzP1UZ9bmg/AAAAAAAAAAI/AAAAAAAAAPY/BFFMcbuhRgE/s64/photo.jpg", "userId": "02909728349369560624"}, "user_tz": 300} id="YphR7elqYXQc" outputId="f1babc18-28b1-4bf1-eed5-eb5464afa841"
img, out = next(gen)
see_and_compare_yolo_outputs(sess, X, pred, img, out)
# +
gen = single_np_datapoint_generator(validation_dataset_files)
batch_size = 100
print("Reading images")
img_gen = group_iterable_into_list(gen, batch_size, 2)
images, _ = next(img_gen)
print("Getting predictions")
st = time.time()
for imgs in group_iterable_into_list(images, 50):
o = sess.run([pred], feed_dict={X: imgs})
total_time = time.time()-st
print("Processing", batch_size, "images took", total_time, "seconds.")
print("Average rate is", batch_size/total_time, "images per second.")
img_gen.close()
gen.close()
# + colab={} colab_type="code" id="nVfkNM8nYXQ0"
tf.saved_model.simple_save(sess, './models/yolo_saved_model',
inputs={"x": X},
outputs={"pred": tf.identity(pred, name="Output")})
# + colab={} colab_type="code" id="jT9JSIRyYXQ4"
script_size: 5,439

path: /notebooks/pyspark_sql_dev.ipynb | content_id: 66d6fed0915cb75a08887e8809f5619de7db0299 | detected_licenses: ["MIT"] | license_type: permissive | repo_name: mpetyx/pyspark-social-example | repo_url: https://github.com/mpetyx/pyspark-social-example | star_events_count: 0 | fork_events_count: 0 | gha_license_id: null | gha_event_created_at: null | gha_updated_at: null | gha_language: null | language: Jupyter Notebook | is_generated: false | is_vendor: false | conversion_extension: .py | size: 5,830 | script:
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Section 1-2 - Creating Dummy Variables
# In previous sections, we replaced the categorical values {C, S, Q} in the column Embarked by the numerical values {1, 2, 3}. The latter, however, has a notion of ordering not present in the former (which is simply arranged in alphabetical order). To get around this problem, we shall introduce the concept of dummy variables.
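# +
# A toy illustration of the point above (added for clarity; the values below are examples only,
# not the project data): an integer mapping imposes an order, dummy columns do not.
import pandas as pd
s = pd.Series(['C', 'S', 'Q', 'S'], name='Embarked')
s.map({'C': 1, 'S': 2, 'Q': 3})        # numeric codes imply the ordering C < S < Q
pd.get_dummies(s, prefix='Embarked')   # one 0/1 column per category, with no implied ordering
# -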
# ## Pandas - Extracting data
# +
import pandas as pd
import numpy as np
df = pd.read_csv('data.csv')
df_train = df.iloc[:712, :]
df_test = df.iloc[712:, :]
# -
# ## Pandas - Cleaning data
# +
df_train = df_train.drop(['Name', 'Ticket', 'Cabin'], axis=1)
age_mean = df_train['Age'].mean()
df_train['Age'] = df_train['Age'].fillna(age_mean)
df_train['Embarked'] = df_train['Embarked'].fillna('S')
# -
# As there are only two unique values for the column Sex, we have no problems of ordering.
df_train['Sex'] = df_train['Sex'].map({'female': 0, 'male': 1})
# For the column Embarked, however, replacing {C, S, Q} by {1, 2, 3} would seem to imply the ordering C < S < Q when in fact they are simply arranged alphabetically.
# To avoid this problem, we create dummy variables. Essentially this involves creating new columns to represent whether the passenger embarked at C with the value 1 if true, 0 otherwise. Pandas has a built-in function to create these columns automatically.
pd.get_dummies(df_train['Embarked'], prefix='Embarked').head(5)
# We now concatenate the columns containing the dummy variables to our main dataframe.
df_train = pd.concat([df_train, pd.get_dummies(df_train['Embarked'], prefix='Embarked')], axis=1)
# **Exercise**
#
# - Write the code to create dummy variables for the column Sex.
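# One possible solution sketch for the exercise above (added, not part of the original notebook). Since
# Sex was already mapped to 0/1 a few cells earlier, the dummy columns are only displayed here rather
# than concatenated, so the rest of the notebook is left unchanged.
pd.get_dummies(df_train['Sex'], prefix='Sex').head(5)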
df_train = df_train.drop(['Embarked'], axis=1)
# We review our processed training data.
df_train.head(10)
X_train = df_train.iloc[:, 2:].values
y_train = df_train['Survived']
# ## Scikit-learn - Training the model
# +
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=100, random_state=0)
model = model.fit(X_train, y_train)
# -
# ## Scikit-learn - Making predictions
# +
df_test = df_test.drop(['Name', 'Ticket', 'Cabin'], axis=1)
df_test['Age'] = df_test['Age'].fillna(age_mean)
df_test['Embarked'] = df_test['Embarked'].fillna('S')
df_test['Sex'] = df_test['Sex'].map({'female': 0, 'male': 1})
# -
# Similarly we create dummy variables for the test data.
df_test = pd.concat([df_test, pd.get_dummies(df_test['Embarked'], prefix='Embarked')], axis=1)
# +
df_test = df_test.drop(['Embarked'], axis=1)
X_test = df_test.iloc[:, 2:]
y_test = df_test['Survived']
y_prediction = model.predict(X_test)
# -
# ## Evaluation
np.sum(y_prediction == y_test) / float(len(y_test))
ImplicitRecommenderWithNull_online_df = \
eval_loop_ColdStartImplicitRecommender(ColdStartRecommenderClass=ColdStartImplicitRecommender,
ImplicitRecommenderClass=ImplicitRecommenderWithNull,
n_loop=n_loop, n_pred=n_pred,
online_batch_size=1,
user_dim=user_dim, item_dim=item_dim,
n_hidden=n_hidden, hidden_size=hidden_size,
dropout=dropout, l2_reg=l2_reg,
ID=ID)
ImplicitRecommenderWithNull_online_df_normalized = normalized_results(ImplicitRecommenderWithNull_online_df, 'Null special, en ligne')
ImplicitRecommenderWithNull_online_df_normalized
# ### Implicit Recommender with nulls and special indicator, online batch version
#
# %%time
ImplicitRecommenderWithNull_online_batch_df = \
eval_loop_ColdStartImplicitRecommender(ColdStartRecommenderClass=ColdStartImplicitRecommender,
ImplicitRecommenderClass=ImplicitRecommenderWithNull,
n_loop=n_loop, n_pred=n_pred,
online_batch_size=online_batch_size,
user_dim=user_dim, item_dim=item_dim,
n_hidden=n_hidden, hidden_size=hidden_size,
dropout=dropout, l2_reg=l2_reg,
ID=ID)
ImplicitRecommenderWithNull_online_batch_df_normalized = \
normalized_results(ImplicitRecommenderWithNull_online_batch_df, 'Null special, en ligne (batch)')
ImplicitRecommenderWithNull_online_batch_df_normalized
# ### Fully Implicit Recommender with nulls and binary indicator, online batch version
#
# %%time
FullyImplicitRecommenderWithNull_binary_indicator_online_batch_df = \
eval_loop_ColdStartImplicitRecommender(ColdStartRecommenderClass=ColdStartFullyImplicitRecommender,
ImplicitRecommenderClass=ImplicitRecommenderWithNull_binary_indicator,
n_loop=n_loop, n_pred=n_pred,
online_batch_size=online_batch_size,
user_dim=user_dim, item_dim=item_dim,
n_hidden=n_hidden, hidden_size=hidden_size,
dropout=dropout, l2_reg=l2_reg,
ID=ID)
FullyImplicitRecommenderWithNull_binary_indicator_online_batch_df_normalized = \
normalized_results(FullyImplicitRecommenderWithNull_binary_indicator_online_batch_df, 'Fully Implicit Null binaire, en ligne (batch)')
FullyImplicitRecommenderWithNull_binary_indicator_online_batch_df_normalized
# ## Aggregation of the results
all_results = pd.concat( (
ImplicitRecommenderWithNull_no_indicator_online_df_normalized,
ImplicitRecommenderWithNull_df_normalized,
ImplicitRecommenderWithNull_online_df_normalized,
ImplicitRecommenderWithNull_online_batch_df_normalized,
FullyImplicitRecommenderWithNull_binary_indicator_online_batch_df_normalized,
), axis = 0)
aggregat = all_results.groupby('recommendation').mean()
# ### Ranking by ratio of effective recommendations
col = 'good_reco_ratio'
aggregat.sort_values(col, ascending=False)\
.style.apply((lambda x: ['background: lightgreen' if x.name == col else '' for _ in x]))
# ### Ranking by normalized average reward (max)
col = 'average_reward_normalized_max'
aggregat.sort_values(col, ascending=False)\
.style.apply((lambda x: ['background: lightgreen' if x.name == col else '' for _ in x]))
# ### Ranking by normalized average reward (mean)
col = 'average_reward_normalized_mean'
aggregat.sort_values(col, ascending=False)\
.style.apply((lambda x: ['background: lightgreen' if x.name == col else '' for _ in x]))
# ### Summary (ranked by average_reward)
cm = sns.light_palette("green", as_cmap=True)
col = ['average_reward_normalized_max', 'average_reward_normalized_mean']
aggregat.sort_values(col, ascending=False)\
.style.background_gradient(cmap=cm).highlight_max(color='red')
# The recommender that handles nulls appears to perform best. <br>
# 1) Null special, online<br>
# 2) Null, online<br>
# 3) Fully Implicit Null binary, online (batch)<br>
# Previously: <br>
# The recommender that handles nulls appears to perform best. <br>
# 1) Null special, online (batch) <br>
# 2) Fully Implicit Null special, online (batch) <br>
# 3) Null, offline <br>
5715750891, "user": {"displayName": "giovanni buroni", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjarG9foST_wpJ_PfvHTR0iWGuChdFnmm4ong7n=s64", "userId": "06127736531396060751"}, "user_tz": -60} id="oYHppaM9shP3"
l2_reg = 5e-4 # Regularization rate for l2
# Build model
class GCN_Net(Model):
def __init__(self, state_size):
super(GCN_Net, self).__init__()
self.street = 5975
self.head = 4
self.state_size = 64
self.state_size_gru = state_size
# GCN 1st order approximation
self.gcn_flow_0_enc = GCNConv(12, activation="relu", kernel_regularizer=l2(l2_reg), use_bias=False)
self.gcn_flow_1_enc = GCNConv(12, activation="relu", kernel_regularizer=l2(l2_reg), use_bias=False)
self.gcn_vel_0_enc = GCNConv(12, activation="relu", kernel_regularizer=l2(l2_reg), use_bias=False)
self.gcn_vel_1_enc = GCNConv(12, activation="relu", kernel_regularizer=l2(l2_reg), use_bias=False)
###################
self.lstm_flow_init = tf.compat.v1.keras.layers.CuDNNGRU(self.state_size_gru, #activation ='relu',
kernel_initializer='glorot_uniform',
recurrent_initializer='glorot_uniform',
kernel_regularizer=regularizers.l2(0.001),
bias_initializer='zeros', return_sequences = True, return_state =True)
self.lstm_flow_fin = tf.compat.v1.keras.layers.CuDNNGRU(self.state_size_gru, #activation ='relu',
kernel_initializer='glorot_uniform',
recurrent_initializer='glorot_uniform',
kernel_regularizer=regularizers.l2(0.001),
bias_initializer='zeros')
self.drop_flow = tf.keras.layers.Dropout(0.1)
self.dense_flow_fin = tf.keras.layers.Dense(self.street*12,
kernel_regularizer=regularizers.l2(0.001))
##########################
self.lstm_vel_init = tf.compat.v1.keras.layers.CuDNNGRU(self.state_size_gru, #activation ='relu',
kernel_initializer='glorot_uniform',
recurrent_initializer='glorot_uniform',
kernel_regularizer=regularizers.l2(0.001),
bias_initializer='zeros', return_sequences = True, return_state =True)
self.lstm_vel_fin = tf.compat.v1.keras.layers.CuDNNGRU(self.state_size_gru, #activation ='relu',
kernel_initializer='glorot_uniform',
recurrent_initializer='glorot_uniform',
kernel_regularizer=regularizers.l2(0.001),
bias_initializer='zeros')
self.drop_vel = tf.keras.layers.Dropout(0.1)
self.dense_vel_fin = tf.keras.layers.Dense(self.street*12,
kernel_regularizer=regularizers.l2(0.001))
##################################
self.attention_flow = tf.keras.layers.MultiHeadAttention(num_heads=self.head, key_dim=self.head)
self.attention_vel = tf.keras.layers.MultiHeadAttention(num_heads=self.head, key_dim=self.head)
self.dense_temporal_0 = tf.keras.layers.Dense(self.state_size_gru, activation ='relu') #tf.keras.layers.TimeDistributed()
self.dense_temporal_1 = tf.keras.layers.Dense(self.state_size, activation ='relu')
self.dense_temporal_2 = tf.keras.layers.Dense(self.state_size_gru)
self.drop = tf.keras.layers.Dropout(0.1)
self.add = tf.keras.layers.Add()
self.norm = tf.keras.layers.LayerNormalization()
self.reshape = tf.keras.layers.Reshape([12, self.street])
self.repeat = tf.keras.layers.RepeatVector(12)
def call(self, flow_vel, a, past_cov, fut_cov):
# sparse matrix
sparse_a = tf.sparse.from_dense(a)
# flow
flow_0 = flow_vel[:, :, :-self.street]
# velocity
vel_0 = flow_vel[:, :, -self.street:]
# shape for gcn
flow_sh = tf.reshape(flow_0 , [flow_0.shape[0], flow_0 .shape[2], flow_0.shape[1]])
vel_sh = tf.reshape(vel_0 , [vel_0.shape[0], vel_0.shape[2], vel_0.shape[1]])
# two gcn on flow and vel
gcn_flow = self.gcn_flow_0_enc([flow_sh, sparse_a])
gcn_flow = self.gcn_flow_1_enc([gcn_flow, sparse_a])
gcn_vel = self.gcn_vel_0_enc([vel_sh, sparse_a])
gcn_vel = self.gcn_vel_1_enc([gcn_vel, sparse_a])
# shape for lstm
flow_sh = tf.reshape(gcn_flow, [gcn_flow.shape[0], gcn_flow.shape[2], gcn_flow.shape[1]])
vel_sh = tf.reshape(gcn_vel, [gcn_vel.shape[0], gcn_vel.shape[2], gcn_vel.shape[1]])
# two lstm models
flow_init, h_flow = self.lstm_flow_init(flow_sh)
vel_init, h_vel = self.lstm_vel_init(vel_sh)
# merge layer
# output
flow_vel_i = tf.concat([flow_init, vel_init], axis=2) #
flow_vel = self.dense_temporal_0(flow_vel_i)
flow_vel = self.dense_temporal_1(flow_vel)
flow_vel_f = self.dense_temporal_2(flow_vel)
# multi-head attention
# # Q: flow & vel
# # K: flow
att_flow, weight_flow = self.attention_flow(flow_vel_f, flow_init,
return_attention_scores=True)
# # Q: vel & vel
# # K: vel
att_vel, weight_vel = self.attention_vel(flow_vel_f, vel_init,
return_attention_scores=True)
# -----
# concatenate covariates
# output flow_vel - covariates
fut_cov = tf.cast(fut_cov, dtype=tf.float32)
concat_flow = tf.concat([att_flow, fut_cov], axis=2)
concat_vel = tf.concat([att_vel, fut_cov], axis=2)
# two models for flow and speed respectively
# flow
flow_final = self.lstm_flow_fin(concat_flow , initial_state = [flow_vel_f[:,-1,:]]) # , initial_state = [flow_vel_f]) # flow_vel_f, h_flow flow_vel_f[:,-1,:]
flow = self.drop_flow(flow_final)
flow = self.dense_flow_fin(flow)
flow = self.reshape(flow)
# velocity
vel_final = self.lstm_vel_fin(concat_vel, initial_state = [flow_vel_f[:,-1,:]]) # , initial_state = [flow_vel_f])
vel = self.drop_vel(vel_final)
vel = self.dense_vel_fin(vel)
vel = self.reshape(vel)
# concatenate two finals results
final = tf.concat([flow, vel], axis=-1)
return final
loss_fn = tf.keras.losses.MeanAbsoluteError()
# + executionInfo={"elapsed": 31896, "status": "ok", "timestamp": 1615715759853, "user": {"displayName": "giovanni buroni", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjarG9foST_wpJ_PfvHTR0iWGuChdFnmm4ong7n=s64", "userId": "06127736531396060751"}, "user_tz": -60} id="v4iRDYOyi6FE"
adj_matrix = gcn_filter(adj_matrix, symmetric=True)
# +
# del data_tr, data_te, data_flow, data_vel, data, scaled_tr, scaled_te
# gc.collect()
# -
def prepare_data_DL(INPUT, FEAT, BATCH):
dataset = FEAT.reshape(FEAT.shape[0], FEAT.shape[1])
dataset = tf.data.Dataset.from_tensor_slices(dataset)
inputs = dataset.window(INPUT, shift=1, stride=1, drop_remainder=True)
inputs = inputs.flat_map(lambda window: window.batch(INPUT))
targets = dataset.window(INPUT, shift=1, stride=1, drop_remainder=True).skip(INPUT)
targets = targets.flat_map(lambda window: window.batch(INPUT))
dataset = tf.data.Dataset.zip((inputs, targets))
dataset = dataset.batch(BATCH).prefetch(tf.data.experimental.AUTOTUNE)
return dataset
# + executionInfo={"elapsed": 181346, "status": "ok", "timestamp": 1615716832009, "user": {"displayName": "giovanni buroni", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjarG9foST_wpJ_PfvHTR0iWGuChdFnmm4ong7n=s64", "userId": "06127736531396060751"}, "user_tz": -60} id="mDbqw5X4shP3"
import time
import gc
start = time.time()
list_results = []
state_size_lst = [50, 150, 250]
epochs = 5
for state_size in state_size_lst:
print('')
print('state_size: '+str(state_size))
print('EPOCHS: '+str(epochs))
print('')
list_avg_loss_split = []
timeseriesplit = zip(list_training_validation, list_time)
optimizer = Adam(lr=0.001)
# Training function
@tf.function
def train_on_batch(inputs, target, past_cov, cov):
loss = 0
with tf.GradientTape() as tape:
predictions = model(inputs, adj_matrix, past_cov, cov, training=True)
loss = loss_fn(target, predictions)
variables = model.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return loss
# Create model
model = GCN_Net(state_size)
for data, time in timeseriesplit:
data_tr, data_val = data[0], data[1]
time_tr, time_val = time[0], time[1]
scaler = MinMaxScaler(feature_range=(0, 1))
scaler_cov = MinMaxScaler(feature_range=(0, 1))
# fit and transform
scaled_tr = scaler.fit_transform(data_tr)
# transform
scaled_val= scaler.transform(data_val)
# fit and transform
scaled_tr_cov = scaler_cov.fit_transform(time_tr)
# transform
scaled_val_cov = scaler_cov.transform(time_val)
batch_train = 32
batch_val = 32
# features
loader_tr = prepare_data_DL(12, scaled_tr, batch_train)
loader_val = prepare_data_DL(12, scaled_val, batch_val)
# covariates
loader_tr_cov = prepare_data_DL(12, scaled_tr_cov, batch_train)
loader_val_cov = prepare_data_DL(12, scaled_val_cov, batch_val)
# Keep results for plotting
train_loss_results = []
val_loss_results = []
samples_cov = list(loader_tr_cov)
samples_cov_val = list(loader_val_cov)
for epoch in range(epochs):
## training
step = 0
epoch_loss_avg = tf.keras.metrics.Mean()
for batch in loader_tr:
cov = samples_cov[step]
past_cov = cov[0]
fut_cov = cov[1]
# Training step
inputs, target = batch
loss = train_on_batch(inputs, target, past_cov, fut_cov)
# Track progress
epoch_loss_avg.update_state(loss)
step += 1
## validation
step_val = 0
epoch_loss_avg_val = tf.keras.metrics.Mean()
for batch in loader_val:
cov = samples_cov_val[step_val]
past_cov = cov[0]
fut_cov = cov[1]
# Validation step
inputs, targ = batch
pred = model(inputs, adj_matrix, past_cov, fut_cov, training=False)
loss = loss_fn(targ, pred)
epoch_loss_avg_val.update_state(loss)
step_val += 1
list_avg_loss_split.append(epoch_loss_avg_val.result())
print('loss per split: '+str(epoch_loss_avg_val.result()))
del inputs, target, past_cov, fut_cov
gc.collect()
print('')
print('average loss per 3 split: '+str(np.mean(list_avg_loss_split)))
list_results.append(np.mean(list_avg_loss_split))
epochs = epochs + 5
# -
list_results_array = [np.array(x) for x in list_results]
list_results_avg = [np.mean(x) for x in list_results_array]
# +
import pickle
# Saving the objects:
with open('results_GRU_bel.pkl', 'wb') as f:
pickle.dump([list_results_array, list_results_avg], f)
script_size: 19,768

path: /CanadaPostCode.ipynb | content_id: 170b9a57218ab91b97f6727ad4ea3388344c3e31 | detected_licenses: [] | license_type: no_license | repo_name: samguan2020/sFL_restaraunt_jupiter | repo_url: https://github.com/samguan2020/sFL_restaraunt_jupiter | star_events_count: 0 | fork_events_count: 0 | gha_license_id: null | gha_event_created_at: null | gha_updated_at: null | gha_language: null | language: Jupyter Notebook | is_generated: false | is_vendor: false | conversion_extension: .py | size: 10,029 | script:
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python
# language: python
# name: conda-env-python-py
# ---
# !conda install -c conda-forge BeautifulSoup4 --yes
# !conda install -c conda-forge lxml --yes
# !conda install -c conda-forge html5lib --yes
# !conda install -c conda-forge requests --yes
from bs4 import BeautifulSoup
import requests
import pandas as pd
# +
page = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M')
html_file = page.content
# with open('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M') as html_file:
soup = BeautifulSoup(html_file, 'lxml')
# print(soup.prettify())
# +
My_table = soup.find('table',{'class':'wikitable'})
table_rows = My_table.find_all('tr')
row_data = []
for tr in table_rows:
td = tr.find_all('td')
row = [i.text for i in td]
row_data.append(row)
# print(row_data)
df = pd.DataFrame()
df = row_data
df[0] = My_table.find_all('th')
t = []
for tt in df[0]:
tt = str(tt).replace('<th>','')
tt = str(tt).replace('</th>','')
tt = str(tt).replace('\n','')
t.append(tt)
# df = str(df)
df2 = pd.DataFrame(df[1:len(df)], columns=t)
df2 = df2.replace('\n',' ', regex=True)
print(df2)
# +
na = df2.iloc[0]['Borough']
for i in range (len(df2)-1, -1, -1):
if df2.iloc[i]['Borough'] == na:
df2.drop(index=i, inplace = True)
df2.reset_index()
print(df2)
# -
df2.reset_index(inplace = True)
#df2.drop(['level_0','index'], axis=1, inplace = True)
print(df2)
script_size: 1,658

path: /Genomes2Fragments_Version_Beta_ESTOY_AQUI.ipynb | content_id: 8da4954411b8542a14645e330105faa78dd7c249 | detected_licenses: [] | license_type: no_license | repo_name: KamyNz/Thesis | repo_url: https://github.com/KamyNz/Thesis | star_events_count: 0 | fork_events_count: 0 | gha_license_id: null | gha_event_created_at: null | gha_updated_at: null | gha_language: null | language: Jupyter Notebook | is_generated: false | is_vendor: false | conversion_extension: .py | size: 12,149 | script:
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Lab 2
#
# Completed by Pavel Kravchenko, group 9381
#
# ## Objective
# The goal of this work is the practical application of the discrete Fourier transform (DFT), using Python libraries, to the analysis of speech signals.
#
# ## Input data
# 1) Record your own speech signal using the operating system's tools.
# 2) The phrase: "Good afternoon, I study at LETI University, in the Faculty of Computer Technologies and Informatics."
# 3) Recording format: mono signal, 16000 Hz or 22000 Hz, uncompressed (WAV format).
#
# ## Lab tasks
#
# 1) Using the DFT, plot the fundamental frequency (F0) trajectory of your voice recording.
# 2) Using the DFT, plot the trajectories of the second, third, and fourth formant frequencies of your voice recording.
# 3) Plot a sonogram of your voice recording.
#
# +
from scipy.io import wavfile
import scipy.io
sampleRate, data = wavfile.read('2.wav')
# +
import matplotlib.pyplot as plt
import numpy as np
import more_itertools as mit
lengthSignal = len(data) / sampleRate
time = np.linspace(0, lengthSignal, len(data))
f = plt.figure()
f.set_figwidth(40)
f.set_figheight(10)
plt.title("Signal Wave")
plt.plot(time, data, label="Signal Wave")
plt.legend()
plt.xlabel("Time")
plt.ylabel("Amplitude")
plt.show()
# -
# # Using the DFT, plot the fundamental frequency (F0) trajectory of the voice recording.
# +
dimension = 1024
spectr_res = round(sampleRate/dimension)
overlap = 0.25
intervals = list(mit.windowed(data, n=dimension, step=int(overlap*dimension)))
intervals[-1] = [i for i in intervals[-1] if i]
intervals = [i*np.hamming(len(i)) for i in intervals]
# +
def DFT(x):
N = len(x)
n = np.arange(N)
k = n.reshape((N, 1))
M = np.exp(-2j * np.pi * k * n / N)
return np.dot(M, x)
intervals = [DFT(i) for i in intervals]
# -
amplitude_spec = [sum([j**2 for j in i[:int(len(i)/2)]])**0.5 for i in intervals]
amplitude_spec = np.array([i.real for i in amplitude_spec])
plt.plot(amplitude_spec)
# +
F0 = []
for i in list(mit.chunked(amplitude_spec, spectr_res)):
segment = np.array(i)
if segment.max() > 2.5*segment.mean():
F0 += segment.tolist()
else:
F0 += [0 for i in range(len(segment))]
plt.plot(F0)
# -
# # Using the DFT, plot the trajectories of the second, third, and fourth formant frequencies of the voice recording.
# +
F1 = []
i = 0
while i < len(F0):
try:
inter_max = max([F0[2*i], F0[2*i-1], F0[2*i+1]])
if F0[i] > 2*inter_max:
F1.append(inter_max)
else:
F1.append(0)
except:
F1.append(0)
i += 1
plt.plot(F1)
# +
F2 = []
i = 0
while i < len(F0):
try:
inter_max = max([F0[3*i], F0[3*i-1], F0[3*i+1]])
if F0[i] > 4*inter_max:
F2.append(inter_max)
else:
F2.append(0)
except:
F2.append(0)
i += 1
plt.plot(F2)
# +
F3 = []
i = 0
while i < len(F0):
try:
inter_max = max([F0[4*i], F0[4*i-1], F0[4*i+1]])
if F0[i] > 8*inter_max:
F3.append(inter_max)
else:
F3.append(0)
except:
F3.append(0)
i += 1
plt.plot(F3)
# -
# # Plot a sonogram of the voice recording.
# +
matrix_amp = []
for i in list(mit.chunked(amplitude_spec[:-2], spectr_res)):
matrix_amp.append(i)
matrix_amp = np.array(matrix_amp)
fig, ax = plt.subplots()
h = ax.imshow(matrix_amp, cmap = 'Reds')
fig.colorbar(h)
plt.show()
# -
# ## Conclusion
#
# Using the DFT, the fundamental frequency (F0) trajectory, the trajectories of the second, third, and fourth formant frequencies, and a sonogram of the voice recording were constructed.
script_size: 3,916

path: /.ipynb_checkpoints/test-checkpoint.ipynb | content_id: 7a77aa7c0c06beba064f065ec13947a303d6e6d9 | detected_licenses: [] | license_type: no_license | repo_name: SFurnace/Notebooks | repo_url: https://github.com/SFurnace/Notebooks | star_events_count: 0 | fork_events_count: 0 | gha_license_id: null | gha_event_created_at: null | gha_updated_at: null | gha_language: null | language: Jupyter Notebook | is_generated: false | is_vendor: false | conversion_extension: .py | size: 782,566 | script:
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import numpy as np
import scipy as sp
import sympy as sy
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import rcParams
sy.init_printing()
sns.set_context('notebook')
rcParams['font.family'] = 'STFangsong'
# +
a = np.linspace(1, 10, 1000)
b = np.sin(a)
plt.plot(a, b)
# +
from bokeh.models import HoverTool
from bokeh.palettes import Viridis6
from bokeh.plotting import figure, show, output_notebook, ColumnDataSource
from bokeh.sampledata.us_counties import data as counties
from bokeh.sampledata.unemployment import data as unemployment
counties = {
code: county for code, county in counties.items() if county["state"] == "tx"
}
county_xs = [county["lons"] for county in counties.values()]
county_ys = [county["lats"] for county in counties.values()]
county_names = [county['name'] for county in counties.values()]
county_rates = [unemployment[county_id] for county_id in counties]
county_colors = [Viridis6[int(rate/3)] for rate in county_rates]
source = ColumnDataSource(data=dict(
x=county_xs,
y=county_ys,
color=county_colors,
name=county_names,
rate=county_rates,
))
TOOLS="pan,wheel_zoom,box_zoom,reset,hover,save"
p = figure(title="Texas Unemployment 2009", tools=TOOLS,
x_axis_location=None, y_axis_location=None)
p.grid.grid_line_color = None
p.patches('x', 'y', source=source,
fill_color='color', fill_alpha=0.7,
line_color="white", line_width=0.5)
hover = p.select_one(HoverTool)
hover.point_policy = "follow_mouse"
hover.tooltips = [
("Name", "@name"),
("Unemployment rate)", "@rate%"),
("(Long, Lat)", "($x, $y)"),
]
output_notebook()
show(p)
es_num_c = body['data']['pages']['count']
for j in range(1, pages_num_c + 1):
comment_url = comment_url_format.format(product_id=id_, page_num=j)
time.sleep(0.5)
resp = requests.get(comment_url)
body = json.loads(resp.text)
data = body['data']['comments']
for item in data:
res = {'user': item['usertitle'], 'mark': item['mark'], 'text': item['text'],
'pros': item['dignity'], 'cons': item['shortcomings']}
lang = lang_detect(res['text'].strip())
if lang == 'uk' and res not in lst:
lst.append(res)
counter += 1
if counter >= limit:
return pd.DataFrame(lst)
elif counter % 100 == 0:
print(counter)
return pd.DataFrame(lst)
# -
# ### Parse
# %%time
try:
df_tv = pd.read_csv('data/reviews_tv.csv')
except FileNotFoundError:
df_tv = get_data(tv_cat, limit=10000)
df_tv.to_csv('data/reviews_tv.csv', index=False)
# %%time
try:
df_laptop = pd.read_csv('data/reviews_laptop.csv')
except FileNotFoundError:
df_laptop = get_data(notebook_cat, limit=10000)
df_laptop.to_csv('data/reviews_laptop.csv', index=False)
# %%time
try:
df_smart = pd.read_csv('data/reviews_smart.csv')
except FileNotFoundError:
df_smart = get_data(smart_cat, limit=10000)
df_smart.to_csv('data/reviews_smart.csv', index=False)
# %%time
try:
df_cpu = pd.read_csv('data/reviews_cpu.csv')
except FileNotFoundError:
df_cpu = get_data(cpu_cat, limit=10000)
df_cpu.to_csv('data/reviews_cpu.csv', index=False)
# %%time
try:
df_gpu = pd.read_csv('data/reviews_gpu.csv')
except FileNotFoundError:
df_gpu = get_data(gpu_cat, limit=10000)
df_gpu.to_csv('data/reviews_gpu.csv', index=False)
# %%time
try:
df_ssd = pd.read_csv('data/reviews_ssd.csv')
except FileNotFoundError:
df_ssd = get_data(ssd_cat, limit=10000)
df_ssd.to_csv('data/reviews_ssd.csv', index=False)
# %%time
try:
df_ram = pd.read_csv('data/reviews_ram.csv')
except FileNotFoundError:
df_ram = get_data(ram_cat, limit=10000)
df_ram.to_csv('data/reviews_ram.csv', index=False)
# ### Preprocess
# +
import ast
def mark_allign(x):
if isinstance(x, str):
try:
return ast.literal_eval(x)
except:
return 0
else:
return x
# +
df = pd.concat([df_tv, df_laptop, df_smart, df_cpu, df_gpu, df_ssd, df_ram])
df = df.loc[(df.mark.notna())]
df['mark'] = df.mark.map(mark_allign)
df = df.loc[df.mark != 0]
df.fillna(" ", inplace=True)
df['target'] = np.where(df.mark < 3, 'neg', np.where(df.mark == 3, 'neu', 'pos'))
# df['text'] = df.apply(lambda row: " ".join([row.text, row.pros, row.cons]).strip(), 1)
df['text'] = df.text.map(lambda x: BeautifulSoup(x).get_text())
df['pros'] = df.pros.map(lambda x: BeautifulSoup(x).get_text())
df['cons'] = df.cons.map(lambda x: BeautifulSoup(x).get_text())
df.drop(['user', 'mark'], 1, inplace=True)
df.reset_index(inplace=True, drop=True)
# -
print(df.shape)
df.head()
target_map = {"neg": 0, "neu": 1, "pos": 2}
df.target = df.target.map(target_map)
df.target.value_counts()
def tokenize(x):
if len(x.strip()) == 0:
return ""
filter_pos = ('PUNCT', 'ADP', 'SYM', 'CCONJ', 'SCONJ', 'PROPN')
filter_words = ["і", "та", "або", "й", "то", "б", "але"]
sentences = nlp_uk(x).sentences
res = []
for sent in sentences:
if '?' in list(sent.words)[-1].text:
continue
res.append([token.lemma for token in sent.words if token.upos not in filter_pos])
try:
return " ".join(list(filter(lambda x: x not in filter_words, sum(res, [])))).lower()
except:
return " "
# +
# df = df[:100]
# -
try:
df = pd.read_csv('data/clean_data.csv')
df = df.loc[~df.text.isna()]
except FileNotFoundError:
text_l = []
pros_l = []
cons_l = []
for text, pros, cons, _ in tqdm(df.values):
text_l.append(tokenize(text))
pros_l.append(tokenize(pros))
cons_l.append(tokenize(cons))
df['clean_text'] = text_l
df['clean_pros'] = pros_l
df['clean_cons'] = cons_l
df['clean_text_all'] = df['clean_pros'] + " " + df['clean_cons'] + " " + df['clean_text']
df.to_csv('data/clean_data.csv', index=False)
df.head()
import matplotlib.pyplot as plt
from wordcloud import WordCloud
# %matplotlib inline
neg_text = " ".join(df.loc[df.target == 0].clean_text_all.values)
neu_text = " ".join(df.loc[df.target == 1].clean_text_all.values)
pos_text = " ".join(df.loc[df.target == 2].clean_text_all.values)
wc_pos = WordCloud(background_color='white', width=600, height=300, max_font_size=50,
max_words=40).generate(pos_text)
wc_neu = WordCloud(background_color='white', width=600, height=300, max_font_size=50,
max_words=40).generate(neu_text)
wc_neg = WordCloud(background_color='white', width=600, height=300, max_font_size=50,
max_words=40).generate(neg_text)
# +
plt.figure(figsize=(12,20))
plt.subplot(3, 1, 1)
plt.imshow(wc_pos)
plt.title("Positive sentiment", fontsize=30)
plt.axis("off")
plt.subplot(3, 1, 2)
plt.imshow(wc_neu)
plt.title("Neutral sentiment", fontsize=30)
plt.axis("off")
plt.subplot(3, 1, 3)
plt.imshow(wc_neg)
plt.title("Negative sentiment", fontsize=30)
plt.axis("off")
plt.show()
# -
# ### Train model
# #### NB baseline
RANDOM_STATE = 0
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import *
# +
import re
def regex_token(x):
return " ".join(re.findall(r"\w+", x)).lower()
# -
(data, clean_data, target) = ((df["pros"] + " " + df['cons'] + " " + df['text']).map(regex_token),
df['clean_text_all'],
df['target'])
# ##### NB, CountVect, removed only punct
X_train, X_test, y_train, y_test = train_test_split(data, target,
test_size=0.2,
random_state=RANDOM_STATE,
stratify=target)
# +
c_vect = CountVectorizer()
X_train_vec = c_vect.fit_transform(X_train)
X_test_vec = c_vect.transform(X_test)
# +
# %%time
nb_model = GaussianNB()
nb_model.fit(X_train_vec.toarray(), y_train);
# +
y_pred = nb_model.predict(X_test_vec.toarray())
print(classification_report(y_test, y_pred))
# -
# ##### The same with cleaned text
X_train, X_test, y_train, y_test = train_test_split(clean_data, target,
test_size=0.2,
random_state=RANDOM_STATE,
stratify=target)
# +
c_vect = CountVectorizer()
X_train_vec = c_vect.fit_transform(X_train)
X_test_vec = c_vect.transform(X_test)
# +
# %%time
nb_model = GaussianNB()
nb_model.fit(X_train_vec.toarray(), y_train);
# +
y_pred = nb_model.predict(X_test_vec.toarray())
print(classification_report(y_test, y_pred))
# -
# Higher recall on the neutral class; everything else is slightly worse
# ##### trying tf-idf instead CountVect, standard params, no punct
X_train, X_test, y_train, y_test = train_test_split(data, target,
test_size=0.2,
random_state=RANDOM_STATE,
stratify=target)
# +
tfidf_vect = TfidfVectorizer()
X_train_vec = tfidf_vect.fit_transform(X_train)
X_test_vec = tfidf_vect.transform(X_test)
# +
# %%time
nb_model = GaussianNB()
nb_model.fit(X_train_vec.toarray(), y_train);
# +
y_pred = nb_model.predict(X_test_vec.toarray())
print(classification_report(y_test, y_pred))
# -
# ##### The same with cleaned text
X_train, X_test, y_train, y_test = train_test_split(clean_data, target,
test_size=0.2,
random_state=RANDOM_STATE,
stratify=target)
# +
tfidf_vect = TfidfVectorizer()
X_train_vec = tfidf_vect.fit_transform(X_train)
X_test_vec = tfidf_vect.transform(X_test)
# +
# %%time
nb_model = GaussianNB()
nb_model.fit(X_train_vec.toarray(), y_train);
# +
y_pred = nb_model.predict(X_test_vec.toarray())
print(classification_report(y_test, y_pred))
# -
# ### Trying other algorithms and approaches
X_train, X_test, y_train, y_test = train_test_split(clean_data, target,
test_size=0.2,
random_state=RANDOM_STATE,
stratify=target)
class_weights = (1 / y_train.value_counts(normalize=True)).to_dict()
# ##### svc with tf-idf on clean text and weighted classes
from sklearn.svm import SVC, LinearSVC
from sklearn.linear_model import LogisticRegression
def train_eval(clf):
clf.fit(X_train_vec, y_train)
y_pred = clf.predict(X_test_vec)
print("f1 macro:", f1_score(y_test, y_pred, average='macro'))
print(clf)
# +
tf_idf = TfidfVectorizer(min_df=5, max_df=0.75)
X_train_vec = tf_idf.fit_transform(X_train)
X_test_vec = tf_idf.transform(X_test)
# +
reg_interval = [0.01, 0.1, 0.2, 0.5, 1, 2, 5, 10, 100]
for i in reg_interval:
train_eval(LogisticRegression(C=i, class_weight=class_weights))
# +
model_svc = LinearSVC(C=5, class_weight=class_weights)
model_svc.fit(X_train_vec, y_train);
# +
y_pred = model_svc.predict(X_test_vec)
print(classification_report(y_test, y_pred))
# +
model_lr = LogisticRegression(C=5, class_weight=class_weights)
model_lr.fit(X_train_vec, y_train);
# +
y_pred = model_lr.predict(X_test_vec)
print(classification_report(y_test, y_pred))
# -
# #### trying pipeline
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import FeatureUnion, Pipeline
from scipy.stats import uniform
from sklearn.metrics import make_scorer
from sklearn.decomposition import TruncatedSVD
tf_idf = FeatureUnion([
('TfIdf_Unigram', TfidfVectorizer(min_df=5, max_df=0.75, ngram_range=(1, 1), strip_accents='unicode')),
('TfIdf_Bigram', TfidfVectorizer(min_df=2, max_df=0.75, ngram_range=(2, 2), strip_accents='unicode'))
])
def f1_macro(y_true, y_pred):
return f1_score(y_true, y_pred, average='macro')
# +
# N_COMP = 400
pipeline = Pipeline([
("main_union", FeatureUnion([
("pipe1", Pipeline([
('tf_idf', tf_idf),
])),
("pipe2", Pipeline([
('tf_idf', tf_idf),
("SVD", TruncatedSVD())
])),
])),
# ('LinearSVC', LinearSVC(class_weight=class_weights))
("LogReg", LogisticRegression(max_iter=1000, class_weight=class_weights))
])
distributions = {
# "LinearSVC__C": [0.5, 1, 5],
"LogReg__C": [1, 5, 10],
"LogReg__penalty": ["l2"],
"main_union__pipe2__SVD__n_components": [300, 400, 500]
}
clf = RandomizedSearchCV(pipeline,
distributions,
random_state=0,
scoring=make_scorer(f1_macro),
n_iter=10,
cv=5,
verbose=5,
n_jobs=-1)
search = clf.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
# +
y_pred = search.predict(X_test)
print(classification_report(y_test, y_pred))
# -
# #### out-of-fold LGB training
from sklearn.model_selection import StratifiedKFold
from lightgbm import plot_importance, LGBMClassifier
use_sample_weight = True
N_FOLDS = 4
num_threads = cpu_count()
N_COMP = 400
# https://lightgbm.readthedocs.io/en/latest/Parameters.html
# +
params = {
'num_class': 3,
'num_rounds': 2000,
'max_depth': -1, # 8
'learning_rate': 0.01, # 0.007
'num_leaves': 31, # was 127
'verbose': 100,
'early_stopping_rounds': 300,
'min_data_in_leaf': 20,
'lambda_l2': 0.7,
'feature_fraction': 0.2, # 0.8
'metric': 'custom',
'random_state': RANDOM_STATE
}
classifier = LGBMClassifier(**params)
# -
def lgb_fscore(y_true, y_pred):
y_pred = y_pred.reshape(len(np.unique(y_true)), -1)
y_pred = y_pred.argmax(axis=0)
res = f1_score(y_true, y_pred, average='macro')
return 'macro_f1', res, True
strategy = StratifiedKFold(n_splits=N_FOLDS, random_state=RANDOM_STATE, shuffle=True)
# +
train = df.loc[X_train.index]
sample_weight = y_train.map(class_weights).values
test = df.loc[X_test.index]
# -
pred_oof = np.zeros(len(train), dtype=np.float32)
pred_test = np.zeros((len(test), params['num_class'], N_FOLDS), dtype=np.float32)
fold_metrics = np.zeros(N_FOLDS)
for i, (tr_ind, val_ind) in enumerate(strategy.split(X=np.ones(len(train)), y=train['target'])):
print(f'Fold: {i + 1}\n\tTrain len: {len(tr_ind)}\n\tVal len: {len(val_ind)}')
pipe = Pipeline([
('TFIDF', tf_idf),
("SVD", TruncatedSVD(n_components=N_COMP))
])
pipe.fit(train.iloc[tr_ind]['text'])
X = pipe.transform(train.iloc[tr_ind]['text'].copy())
y = train.iloc[tr_ind]['target'].copy()
X_val = pipe.transform(train.iloc[val_ind]['text'].copy())
y_val = train.iloc[val_ind]['target'].copy()
X_test_ = pipe.transform(test['text'])
# fit model
print('\tFITTING MODEL...')
classifier.fit(
X=X,
y=y,
eval_set=[(X_val, y_val)],
early_stopping_rounds=params['early_stopping_rounds'],
verbose=params['verbose'],
eval_metric=lgb_fscore,
sample_weight=sample_weight[tr_ind] if use_sample_weight else None,
)
# predict OOF val
print('\tPREDICT OOF...')
pred_oof[val_ind] = classifier.predict(X_val, num_threads=num_threads)
# predict test
print('\tPREDICTING TEST...')
pred_test[..., i] = classifier.predict_proba(
X_test_, num_threads=num_threads)
fold_metrics[i] = f1_macro(y_val, pred_oof[val_ind])
print(f'\tFold score: {fold_metrics[i]}')
print(f'Total score: ', f1_macro(train['target'], pred_oof))
# +
y_pred_raw = pred_test.mean(axis=-1)
y_pred = y_pred_raw.argmax(axis=1).astype(np.int32)
print(classification_report(y_test, y_pred))
# -
# Actually, in practice OOF training works better and avoids overfitting. I didn't use X_test here, just to make it easy to compare results with the previous models.
#
# What could also be done:
# * add word-tone features from tone-dict-uk.tsv and concatenate them as a sparse matrix
# * replace sentiment-bearing words with tokens like `<positive>` or `<negative>`
# * classical features like `word_num`, `text_len`, `has_pros`, `has_cons`, `mean_words_tone`, etc. (a small sketch follows below), though I'm not sure this is allowed by the task constraints, since it should be BoW only
# * get BoW features without concatenating pros and cons
#
# Currently, the pipeline with uni- and bi-grams extended by SVD features shows the best result, without sacrificing recall on the neutral class.
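# A minimal sketch of the "classical features" idea above (added for illustration; the column names
# text/pros/cons come from the dataframe built earlier in this notebook, everything else is an assumption):
feats = pd.DataFrame({
    'word_num': df['text'].str.split().str.len(),
    'text_len': df['text'].str.len(),
    'has_pros': (df['pros'].fillna('').str.strip() != '').astype(int),
    'has_cons': (df['cons'].fillna('').str.strip() != '').astype(int),
})
feats.head()
# These dense columns could then be stacked next to the sparse BoW matrix with scipy.sparse.hstack.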
# ### Model explain with eli5
# +
# import eli5
# +
# eli5.show_weights(model_lr, vec=tf_idf)
# -
script_size: 17,377

path: /Sequence_classification_with_Roberta.ipynb | content_id: e00950405272302cedcfeee399ce9655dcfa1c6a | detected_licenses: [] | license_type: no_license | repo_name: mitramir55/Kaggle_NLP_competition | repo_url: https://github.com/mitramir55/Kaggle_NLP_competition | star_events_count: 1 | fork_events_count: 2 | gha_license_id: null | gha_event_created_at: null | gha_updated_at: null | gha_language: null | language: Jupyter Notebook | is_generated: false | is_vendor: false | conversion_extension: .py | size: 121,963 | script:
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/mitramir55/Kaggle_NLP_competition/blob/master/Sequence_classification_with_Roberta.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="K0bXCqwk6io9" outputId="c5feed99-bd54-42c9-a50d-be872be7e910" colab={"base_uri": "https://localhost:8080/", "height": 37}
import tensorflow as tf
tf.__version__
# + id="R0Xryk4r6yRq"
# !pip install transformers
# + id="UwQcnZ2G6im6"
from transformers import (TFRobertaForSequenceClassification,
RobertaTokenizer,)
# + id="GitlTNOM6-mI" outputId="0d01f300-ef85-4141-b3ab-cd6fd3aecbd9" colab={"base_uri": "https://localhost:8080/", "height": 241, "referenced_widgets": ["521105bba5c447b690dae23775fde59d", "9651156bb63e443cbf21cc3ed4ba9a94", "cc1c0bb3867a4e2f822e6a5e22c2e243", "a69e2061779c4c3ab00c96e4057a3e2a", "b1c3a3b0b13e4a248625233bdd56151e", "fba814b316ff4527ba9c0f729a0ab885", "6adb730c4f4c4fc4a98b9d820d92b03a", "4fd9d1286fb9460d9d485c61348e8c0f", "732df25ef7604ebab4bc337f50911279", "267f47db1c7645c18cee7759c515d4d4", "0dad413d08d245dca1040999d3f2ada6", "54c5a52c63ab4a9d8b2df81588a5aba7", "511e6a3edc8a438b8ecf8e08314bda21", "6bc15dec1f164546855fdb9426d8c0e3", "8c30eaed903b44938744643a3a19066c", "1f7a3186a9e346c79a37347439155f5c"]}
roberta_model = TFRobertaForSequenceClassification.from_pretrained("roberta-base")
roberta_tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# + id="7Iji5KsJ6iip" outputId="e3060b8c-aca2-4c2a-b125-58322a838817" colab={"base_uri": "https://localhost:8080/", "height": 37}
sentence = "babe you see me? I see you"
roberta_tokenizer.tokenize(sentence)
# + id="qaeL-ldn6igR"
# + id="IlVFrbni6f2m" outputId="bcb46964-60f5-4992-f907-a058d81dc7ce" colab={"base_uri": "https://localhost:8080/", "height": 157}
# Bert Tokenizer for all of them
import tensorflow_hub as hub
# !wget --quiet https://raw.githubusercontent.com/tensorflow/models/master/official/nlp/bert/tokenization.py
import tokenization
FullTokenizer = tokenization.FullTokenizer
BERT_MODEL_HUB = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2'
disc = 'Base_uncased'
bert_layer = hub.KerasLayer(BERT_MODEL_HUB, trainable=True)
print('Bert layer is ready to use!')
to_lower_case = bert_layer.resolved_object.do_lower_case.numpy()
vocabulary_file = bert_layer.resolved_object.vocab_file.asset_path.numpy()
tokenizer = FullTokenizer(vocabulary_file, to_lower_case)
print('Bert Tokenizer is ready!!!')
# + id="dTHeWu6b7UMf" outputId="fcdc1c99-1b5f-419c-f29c-3c9da42477ce" colab={"base_uri": "https://localhost:8080/", "height": 277}
tokenizer.tokenize("Systolic arrays are cool. This 🐳 is cool too.")
# + id="XmDuE6GQ7UQ_"
['sy','##sto','##lic','arrays','are','cool','.','this','[UNK]', 'is','cool','too','.']
# + id="0a9PLmAX7UX4" outputId="1acfd925-48d0-4702-d87b-dfc984bd64d4" colab={"base_uri": "https://localhost:8080/", "height": 817, "referenced_widgets": ["74463b2925a7497199ad3176b3d98438", "d06819844eee4d82b0ec7bfd2be141eb", "f894411ef231470aa884c6f8f6021f7a", "4c2a582cc52a4a009366d4cdf7685ee8", "0b9c9a1b1f6644e49f5a35898a8a6cd0", "0588b4f3f2f0415eae99fdf4a74bd0fa", "45738e96776d495090d8ff320adc1039", "f73476f3761b4a67985dfff34ccaf64e", "c68f428e995d41ae96a11731093f103e", "07b7c2ca3cc0428ab815355a7f8a3be4", "fcdbd6cd989c4eb3bcfa4eb2b4d0740d", "38501c43a82c418bb0df8e66c8c000ed", "799de146619b49eea865ccc10cb9c59e", "e41f33ce8d554735a2c7ac7cc228119f", "347eb8f98186457da5ed7f9cebf4f2ae", "9ccb8caee49549e9b9a3cc70628e70c7", "08b0f5e4cb9641498715f1891307eea2", "b6e22b299ec9464f89dea48a94ec3536", "61068b91d9f145b38dffa55e0787d477", "8303fbb3520f489c88bf9db604420910", "648b845c8ccb47ad88b01c675e3d3f08", "f571565f3c3b49dea17a5f8388e50cb5", "c69a57fc2ad24b749308627be6d1cacd", "a9040d3150b7440685479318022d3a17", "84182ec241b8405583fa025fd947db09", "a1ff8ea34be94b2180a036ca1fad0b74", "141db5648a104f2da970e7dc7285e9e1", "232992d4e68848cb8893646d1c1edf4b", "85a4f63b17264cdf9e108a7c8e4fc72a", "1f30da48c634448ca97442d05eb6daa4", "ae7126e3c743429b8c4ddd563927ede1", "94e652fd704b410a9d7b2a86fec472a7", "a876e42c3fad4bbc91f1e01386c139f6", "4f0965726a804088af9f56def20ca198", "06cf1750e6a74375a464a115035e8889", "98b57415b8c147ba8e201f51627fc6d6", "d395cf0ab4c84e8a974062bf19e31383", "c3b0c635ff4f4d20963c44bf3de4a44f", "3f1be28a6ab14625b2faef9dd41002e7", "2ec4c9c9bc664ec6b89742c5fddff9d3", "c2cd6a9c5ba7410984f0ab609c2fd869", "39fde0a670fd4c08928ac12381739fff", "dba020bb36d34a7cac8ff80605c42546", "ce13bdf54a014386a0c02d60513d4ce1", "5897bc59f617421d960957c4135bbfa2", "191a776288cc4c398da484716fd2ab20", "56dbbde45feb4fde9f999db6b6ecc0ec", "84c165c6f66342bcb579acd88ee42bc6", "739f2c24f1ed44928f04a3225e79cc6e", "1b22c256516d445a835bb7ce813e8b5e", "bc64b2ae9627466b96e4f6bd1d5893f3", "82e7501bb8e64f7aa1f746104fd119bd", "23f4c0b4d82a4bf9b824c5312605f6a1", "41af81b0b58e459aa345b55bbe42b6f7", "f003269283434950943acf13b46d60a8", "49dac417307a4298a0811d6c80af904a", "bdfc25bdbc254741a0a18adb67066d86", "28f4657110344191a3b50e7f2b4f8dd3", "29b42c869c274724b7f10c645414f550", "3e6376b2b1f74b6b8d8b399a7f43b7cd", "3519731bd0704526ad625ab40fea7b13", "d3500b95c8fb4aa683398e7e245b64fc", "04303d8a545145b08acd172d6bd98bde", "9d3cdcbafdf34e7fb1f945bb2fb8e736"]}
import tensorflow_datasets
data = tensorflow_datasets.load("glue/mrpc")
train_dataset = data["train"]
validation_dataset = data["validation"]
# + id="2lc0ug8a7Ubp" outputId="ac2e6a50-d093-4934-8691-ded172f76993" colab={"base_uri": "https://localhost:8080/", "height": 117}
example = list(train_dataset.__iter__())[2]
print('',
'idx: ', example['idx'], '\n',
'label: ', example['label'], '\n',
'sentence1:', example['sentence1'], '\n',
'sentence2:', example['sentence2'],
)
# + id="D5XVTzYH7UgH" outputId="b3f7852a-e2b3-47cc-8c8c-b3980564ce05" colab={"base_uri": "https://localhost:8080/", "height": 77}
seq0 = example['sentence1'].numpy().decode('utf-8') # Obtain bytes from tensor and convert it to a string
seq1 = example['sentence2'].numpy().decode('utf-8') # Obtain bytes from tensor and convert it to a string
print("First sequence:", seq0)
print("Second sequence:", seq1)
# + id="0TWzesVi7Ukt"
encoded_roberta_sequence = roberta_tokenizer.encode(seq0, seq1,max_length=128)
# + id="DoQyDUb87UpX"
roberta_special_tokens = [roberta_tokenizer.sep_token_id, roberta_tokenizer.cls_token_id]
# + id="CHxnuKA67Uty"
encoded_roberta_sequence
# + id="iznV95cQKRy-"
from transformers import glue_convert_examples_to_features
def token_type_ids_removal(example, label):
del example["token_type_ids"]
return example, label
# + id="v2zSJs3w7UyA"
roberta_train_dataset = glue_convert_examples_to_features(train_dataset, roberta_tokenizer, 128, 'mrpc') # outputs examples and labels
#roberta_train_dataset = roberta_train_dataset.map(token_type_ids_removal)
roberta_train_dataset = roberta_train_dataset.shuffle(100).batch(32)
roberta_validation_dataset = glue_convert_examples_to_features(validation_dataset, roberta_tokenizer, 128, 'mrpc')
#roberta_validation_dataset = roberta_validation_dataset.map(token_type_ids_removal)
roberta_validation_dataset = roberta_validation_dataset.batch(64)
# + id="xeSHH4gN7Uv3"
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
roberta_model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
# + id="i3T63Y5o7Urc" outputId="d0211cee-4c10-4c90-c687-be863fc13cfc" colab={"base_uri": "https://localhost:8080/", "height": 237}
roberta_history = roberta_model.fit(roberta_train_dataset, epochs=3, validation_data=roberta_validation_dataset)
# + id="afoefMCO7UnK"
# + id="SipaG63W7UiZ"
# + id="6GxNTG6P7UeJ"
# + id="khgeyUtj7UaK"
# + id="o5Q3Kbh37UVd"
# + id="jeSVkAnC7UOf"
# + id="-jrQtNUn7UFU"
# + id="VbQdKetP6ik5" outputId="b395f14e-4121-4c10-9a09-c946d0bcf307" colab={"base_uri": "https://localhost:8080/", "height": 97}
import time, psutil
uptime = time.time() - psutil.boot_time()
print('How much have I used?\n {:.0f} hours and {:.2f} minutes'.format(uptime//3600, (uptime % 3600) / 60))
remain = 12*60*60 - uptime
print('How much is remaining?\n {:.0f} hours and {:.2f} minutes'.format(remain//3600, (remain % 3600) / 60))
script_size: 8,672

path: /examples/naiveBayes_titanic.ipynb | content_id: cd4da82b81c369089d32573683a82d928aad50ea | detected_licenses: [] | license_type: no_license | repo_name: jaziri-hamza/Naive-Bayes-Model-From-scratch | repo_url: https://github.com/jaziri-hamza/Naive-Bayes-Model-From-scratch | star_events_count: 0 | fork_events_count: 0 | gha_license_id: null | gha_event_created_at: null | gha_updated_at: null | gha_language: null | language: Jupyter Notebook | is_generated: false | is_vendor: false | conversion_extension: .py | size: 12,540 | script:
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
train=pd.read_csv('project_train.csv')
test=pd.read_csv('project_test.csv')
train.head()
test.head()
x = train.drop('Loan_Status',axis=1)
y = train['Loan_Status']
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.33, random_state=42)
x_train.reset_index(drop=True)
y_train.reset_index(drop=True)
x_train=x_train.iloc[:,1:len(train)-1]
x_test=x_test.iloc[:,1:len(x_train)-1]
x_train.head()
x_test.head()
k=int(input('enter the value of k'))
# Euclidean distance from one test point to every training point
def distance(train_value,test_value):
distance=[]
for i in train_value:
summ=0
for j in range(len(i)):
summ+=(i[j]-test_value[j])**2
distance.append(np.sqrt(summ))
return distance
def knn(train,y_train,test,k):
predicted=[]
train=train.values
y_train=y_train.values
test=test.values
for i in test:
dis=distance(train,i)
temp=dis.copy()
temp.sort()
index=[]
for j in range(k):
index.append(dis.index(temp[j]))
#print(index,k)
count_0=0
count_1=0
for l in index:
if y_train[l]==0:
count_0+=1
else:
count_1+=1
if count_0>count_1:
predicted.append(0)
else:
predicted.append(1)
return predicted
p=knn(x_train,y_train,x_test,k)
def accuracy(predi,actual):
act=actual.values
count=0
for i in range(len(act)):
if predi[i]==act[i]:
count+=1
#print(count)
print((count/len(act))*100)
accuracy(p,y_test)
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=5)
neigh.fit(x_train,y_train)
p=neigh.predict(x_test)
p=list(p)
accuracy(p,y_test)
p=knn(x_train,y_train,test,k)
print(p)
script_size: 2,208

path: /HW10/clusteringGenes_tFessaras.ipynb | content_id: a41e4067a80c713dd71ba28965214307a8669003 | detected_licenses: [] | license_type: no_license | repo_name: TedFess/DSPS_tFessaras | repo_url: https://github.com/TedFess/DSPS_tFessaras | star_events_count: 0 | fork_events_count: 3 | gha_license_id: null | gha_event_created_at: 2019-11-03T22:20:45 | gha_updated_at: 2019-10-30T03:10:19 | gha_language: Jupyter Notebook | language: Jupyter Notebook | is_generated: false | is_vendor: false | conversion_extension: .py | size: 535,263 | script:
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/TedFess/DSPS_tFessaras/blob/master/HW10/clusteringGenes_tFessaras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="VZJtP61lvYgB" colab_type="code" outputId="e85f0e59-dcb6-4a58-a1d4-9772431aec60" colab={"base_uri": "https://localhost:8080/", "height": 34}
import numpy as np
import pandas as pd
import pylab as pl
import sklearn as skl
from sklearn import cluster
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.cluster import AgglomerativeClustering
# %pylab inline
# + [markdown] id="JI2KBXS0wJUX" colab_type="text"
# read the data in https://github.com/fedhere/DSPS/tree/master/HW10
# + id="qI5qQ0YRpkX8" colab_type="code" colab={}
genes = pd.read_csv("https://raw.githubusercontent.com/fedhere/DSPS/master/HW10/kidpackgenes.csv")
# + id="3prQGIoWp6vs" colab_type="code" colab={}
genes.drop(genes.columns[0], axis = 1, inplace = True)
# + id="4mUXbPFs65LH" colab_type="code" outputId="88499b09-9dc1-495b-f66f-583f32f74095" colab={"base_uri": "https://localhost:8080/", "height": 444}
genes
# + [markdown] id="pkE4re17H6cx" colab_type="text"
# # 1. explore the data.
# + id="RPLLzVLTHylF" colab_type="code" outputId="ff809c69-7354-4f96-8df7-c5df4b5936e0" colab={"base_uri": "https://localhost:8080/", "height": 34}
genes.shape
# + id="vZuyzajXvTDR" colab_type="code" outputId="d1a97314-c9ce-4619-e489-99a00b3faf8c" colab={"base_uri": "https://localhost:8080/", "height": 320}
genes.describe()
# + [markdown] id="vlcivICbIDug" colab_type="text"
# # 2 preprocess the data
# 2.1 whiten the data (scale it) with https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html
# + id="lAcAlI_Iw9n8" colab_type="code" colab={}
scaledgenes = skl.preprocessing.scale(genes)
# + id="lUuS2Pf_7fC-" colab_type="code" outputId="0ef0e789-c96f-4218-fd4c-6f8e068a9f0a" colab={"base_uri": "https://localhost:8080/", "height": 208}
scaledgenes.mean(0).round(2), scaledgenes.std(0)
# + [markdown] id="ZW8bRqIKIJhq" colab_type="text"
# 2.1 use TSNE to make a projection of the data on an optimal 2D plane using https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html
# + id="ha2IMIqrzAtG" colab_type="code" colab={}
twodproj = TSNE(n_components = 2).fit_transform(scaledgenes)
# + id="0m_MHk2D8IKi" colab_type="code" outputId="53a86996-f738-4b7b-aad0-9a436b05c33b" colab={"base_uri": "https://localhost:8080/", "height": 34}
twodproj.shape
# + [markdown] id="4XI_oOjeIc6i" colab_type="text"
# 2.3 plot the TSNE projection
#
# + id="wKWnm-Wv0LME" colab_type="code" outputId="080dc659-e217-42c4-c0f7-de061452ac73" colab={"base_uri": "https://localhost:8080/", "height": 298}
pl.scatter(twodproj[:,0], twodproj[:,1], alpha = 0.15)
pl.title("Plot of 2-D project of kidpackgenes")
# + [markdown] id="t_dkeH1OY-XS" colab_type="text"
# **Figure 1:** Scatterplot of the 2-D TSNE projection of the kidpackgenes data. The varying density of points across the distribution is already visible.
# + [markdown] id="gbL-sqSwIi1K" colab_type="text"
# 2.4 calculate a function that measures the intracluster variance (i did it in class)
# + id="S-E07JcK0olN" colab_type="code" colab={}
def calICVar(X, labels):
icvar = 0
for n in np.unique(labels):
        #print (n, X[labels == n].std() ** 2)
icvar += np.sum((X[labels == n] - X[labels == n].mean())**2)
#X[labels == n].var()
print(icvar)
return icvar
# + [markdown] id="dNtMb3JcItsv" colab_type="text"
# # 3 K-Means clustering
# 3.1 cluster the data with K-Means using 1 to 10 clusters. Calculate and plot the intracluster variance as a function of number of clusters and look for an "elbow" in the value of the intracluster variance. What is the optimal number of clusters? discuss
# + id="A9PEx8Kh1bf2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 608} outputId="afd8bca9-9507-4442-cdda-ab8a5bc2c353"
variances = []
for i in range(1, 10):
results = skl.cluster.k_means(scaledgenes,i)
variances.append([i, calICVar(scaledgenes, results[1])])
print('%d clusters: variance = %.f' % (variances[i - 1][0], variances[i - 1][1]))
variances = np.array(variances)
plt.plot(variances[:,0], variances[:,1])
plt.title('Intra-cluster variance vs Number of Clusters')
plt.xlabel('Number of Clusters')
plt.ylabel('Intra-cluster variance');
# + [markdown] id="wkYxB_qLZXiM" colab_type="text"
# **Figure 2:** Plot of the intra-cluster variance vs the number of clusters for the kidpackgenes data. The most significant drop in variance takes place between 1 and 3 clusters; the shape of that drop produces an elbow, implying that three is a good choice for the number of clusters.
# + [markdown] id="xC97yL8bJE2U" colab_type="text"
# 3.2 plot the cluster on the 2D TSNE projection colorcoded by clusters
# + id="_A-R9c0XCOTH" colab_type="code" outputId="dfe7f27c-8f69-4afb-fa4c-3b0c08e00f1d" colab={"base_uri": "https://localhost:8080/", "height": 265}
#plots dont need to look exactly like mine
geneclustersKM = cluster.KMeans(n_clusters=3).fit(scaledgenes)
pl.scatter(twodproj[:,0], twodproj[:,1], c=geneclustersKM.labels_/ geneclustersKM.n_clusters)
pl.colorbar();
# + [markdown] id="hAuM1opFZr6B" colab_type="text"
# **Figure 3:** Plot of the kidpackgenes data in 3 clusters through use of k_means. First cluster seems centered around (-20,20), second cluster at (10,-20), and third cluster at (40,-40)
# + [markdown] id="a-eHKQ87JZOr" colab_type="text"
# # Choose to use DBSCAN or hierarchical clustering (EC also to the other method)
# + [markdown] id="5_6yn7nkJflN" colab_type="text"
# # 4a DBSCAN
# 4a.1 calculate and plot the distance matrix if you have not yet. Discuss: is there structure?
# + [markdown] id="5Ku9uw2bJxs9" colab_type="text"
# 4a.2 make a histogram of the pairwise distances. You should choose a value to initialize dbscan that is just below the mean
# + [markdown] id="jFl0v2CoKDBq" colab_type="text"
# 4a.3 initialize the dbscan eps value appropriately and fit a dbscan model to the data plot the 2D TSNE projection colorcoded as before.
#
# 4a.4 How many clusters do you have, how many outliers? is that a significant number?
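# Although the agglomerative route is taken below, here is a hedged sketch (not part of the
# original notebook) of how 4a.1-4a.2 could be approached, assuming the `scaledgenes` array
# defined above and scipy being available: plot the distance matrix, histogram the pairwise
# distances, and take a candidate DBSCAN `eps` just below their mean.

# +
from scipy.spatial.distance import pdist, squareform

pairdist = pdist(scaledgenes)      # condensed vector of pairwise distances
distmat = squareform(pairdist)     # full symmetric distance matrix

pl.figure(figsize=(12, 4))
pl.subplot(1, 2, 1)
pl.imshow(distmat, cmap='viridis')
pl.colorbar()
pl.title('Pairwise distance matrix')

pl.subplot(1, 2, 2)
pl.hist(pairdist, bins=50)
pl.axvline(pairdist.mean(), color='k', ls='--', label='mean distance')
pl.legend()
pl.title('Histogram of pairwise distances')

# candidate eps just below the mean pairwise distance (illustrative factor, not a tuned value)
eps_guess = 0.9 * pairdist.mean()
print(eps_guess)
# -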
# + [markdown] id="8NaGfMrmKupd" colab_type="text"
# # 4b Agglomerative clustering
# 4b.1 cluster the data with the ward linkage
#
# + id="vcdUTbd8V1bV" colab_type="code" colab={}
from scipy.cluster.hierarchy import dendrogram, linkage
# + id="LHaXudIuVmnI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="309118e7-ba9b-4a1f-ea82-8fc06b4febe8"
aggClusters = AgglomerativeClustering(n_clusters = 3, linkage = 'ward').fit(scaledgenes)
aggClusters.labels_
# + [markdown] id="6wveDa6ELOBY" colab_type="text"
# 4b.2 calculate the linkage and plot the dendrogram of the clusters
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html
#
# + id="Pr30oop4E2jN" colab_type="code" outputId="63a3b2e4-ab6d-4052-d4fd-c599cd6dd21c" colab={"base_uri": "https://localhost:8080/", "height": 610}
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram using Ward Linkage')
Z = linkage(scaledgenes, 'ward')
dendrogram(Z);
plt.show()
# + [markdown] id="Zi4poJDSaJAk" colab_type="text"
# **Figure 4:** Hierarchical dendrogram of the kidpackgenes data. It shows the division into three separate groups (similar in size to the previous figure).
# + [markdown] id="uqkWdEnTMSes" colab_type="text"
# 4b.3 repeat with a different linkage and comment on differences
# + id="LK72-k7qWu-V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 610} outputId="608a072e-4739-432d-b3e1-9fefe7987d75"
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram using complete linkage')
Z = linkage(scaledgenes, 'complete')
dendrogram(Z);
plt.show()
# + [markdown] id="2HgYvk1MagCD" colab_type="text"
# **Figure 5:** Rerun of the hierarchical clustering, this time with complete linkage. One group absorbs the large majority of the points.
# + [markdown] id="lNsd1HxkML35" colab_type="text"
# 4b.4 rerun agglomerative clustering to get the "ideal" number of clusters as decided by k-means. plot the 2D TSNE projection colorcoded as before.
# + id="1unHLA-HN_vO" colab_type="code" outputId="8db61603-69e7-4d89-ae53-967eef97812a" colab={"base_uri": "https://localhost:8080/", "height": 608}
variances = []
for i in range(1,10):
# Now using Agglomerative clustering to fit the data
results = AgglomerativeClustering(n_clusters = i, linkage = 'ward').fit(scaledgenes)
    # intra-cluster variance for this number of clusters (for comparison with k-means above)
variances.append([i, calICVar(scaledgenes, results.labels_)])
print('%d clusters: variance = %.f' % (variances[i - 1][0], variances[i - 1][1]))
variances = np.array(variances)
plt.plot(variances[:,0], variances[:,1])
plt.title('Intra-cluster variance vs Number of Clusters')
plt.xlabel('Number of Clusters')
plt.ylabel('Intra-cluster variance');
# + [markdown] id="eVmPu23Ea0sr" colab_type="text"
# **Figure 6:** Plot of ICV vs number of clusters. Similar in shape to the plot in figure 2. The ideal number of clusters is again at 3 due to where the elbow is positioned.
# + id="AaCPFGw_Xz-u" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="aacd2392-dd21-4bb5-921a-a905add8ee69"
geneclustersAg = AgglomerativeClustering(n_clusters = 3, linkage = 'ward').fit(scaledgenes)
geneclustersAg.labels_
# + id="DNVtekUaX4O4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="5fcc3a59-dee1-4785-8012-01cdd9cf9fd7"
labels = np.unique(geneclustersAg.labels_)
print('The cluster labels are', labels)
print('The ideal number of clusters as decided by k-means is %d.' % len(labels))
plt.scatter(twodproj[:,0], twodproj[:,1], c = geneclustersAg.labels_/ geneclustersAg.n_clusters, alpha = 0.55)
plt.colorbar();
# + [markdown] id="OUHGOz62bLi9" colab_type="text"
# **Figure 7:** The plot of the genes data grouped into three separate clusters using agglomerative clustering. It is similar but has slightly different boundaries for the clusters.
# + [markdown] id="RttbRU7uOSCr" colab_type="text"
# # EC, 667: do the other method as well: agglomerative if you used DBSCAn, DBSCAN if you used agglomerative
#
# + id="djsJTGukRiPe" colab_type="code" colab={}
| 10,886 |
/Data Preparation/.ipynb_checkpoints/Testing-checkpoint.ipynb
|
9cc6427d801f7856d43bdda8777a090544fb0392
|
[] |
no_license
|
nrodri86/ForecastingTimeSeries
|
https://github.com/nrodri86/ForecastingTimeSeries
| 0 | 0 | null | 2018-06-05T20:19:09 | 2018-05-31T17:35:53 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 4,697 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Test Model in data test
#
# Load the models in sklearn form
import numpy as np
import pandas as pd
import pickle
from sklearn.externals import joblib
from collections import namedtuple
from sklearn.preprocessing import StandardScaler
from IPython.display import display, HTML
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.svm import SVR,LinearSVR,NuSVR
from sklearn.kernel_ridge import KernelRidge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor,RandomForestRegressor,ExtraTreesRegressor,AdaBoostRegressor
from sklearn.metrics import r2_score,make_scorer
from keras.models import model_from_json
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (RBF, Matern, RationalQuadratic,
ExpSineSquared, DotProduct,
ConstantKernel)
from keras.wrappers.scikit_learn import KerasRegressor
from keras import losses
from keras.optimizers import RMSprop, Adam, SGD, Nadam
Dataset=namedtuple('Dataset','exchange df')
DatasetMLModel= namedtuple('DatasetMLModel','exchange train_size tscv_split test_size X_train y_train X_test y_test scaler_features scaler_target')
Regressor= namedtuple('Regressor','name regressor_class params type')
FeatureSelection= namedtuple('FeatureSelection','dataset regressor params RFECV')
# +
models={}
def load_model_deep_learning():
# loading model
model = model_from_json(open('models/deep_learning_arch.json').read())
model.load_weights('models/deep_learning_weights.h5')
model.compile(loss=losses.logcosh, optimizer=Adam(lr=0.001))#Dependencies
return KerasRegressor(build_fn=model)
deeplearning_model=load_model_deep_learning()
def load_model_sklearn(name):
return joblib.load('models/'+name)
def load_models():
model_names=[
'AdaBoostRegressor',
'GaussianProcessRegressor',
'GradientBoostingRegressor',
'KernelRidge',
'NuSVR',
'SVR']
pca='pca'
models={}
for model_name in model_names:
models[model_name]= load_model_sklearn(model_name)
pca=load_model_sklearn(pca)
return (pca,models)
pca, models=load_models()
# -
deeplearning_model
pca
# Calculate proportion of deaths per no. births
yearly['proportion_deaths'] = yearly['deaths'] / yearly['births']
# Extract Clinic 1 data into clinic_1 and Clinic 2 data into clinic_2
clinic_1 = yearly[0:6]
clinic_2 = yearly[6:12]
# Print out clinic_1
print(clinic_1)
# + dc={"key": "2bc9206960"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 3. Death at the clinics
# <p>If we now plot the proportion of deaths at both Clinic 1 and Clinic 2 we'll see a curious pattern…</p>
# + dc={"key": "2bc9206960"} tags=["sample_code"]
# This makes plots appear in the notebook
# %matplotlib inline
# Plot yearly proportion of deaths at the two clinics
ax = clinic_1.plot(x = "year", y = "proportion_deaths", label = "clinic_1")
clinic_2.plot(x = "year", y = "proportion_deaths", label = "clinic_2",
ax = ax, ylabel = "proportion_deaths")
# + dc={"key": "0c9fdbf550"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 4. The handwashing begins
# <p>Why is the proportion of deaths consistently so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. </p>
# <p>Semmelweis started to suspect that something on the corpses spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: <em>Wash your hands!</em> This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. </p>
# <p>Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.</p>
# + dc={"key": "0c9fdbf550"} tags=["sample_code"]
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv', parse_dates = ['date'])
# Calculate proportion of deaths per no. births
monthly["proportion_deaths"] = monthly['deaths'] / monthly['births']
# Print out the first rows in monthly
print(monthly.head())
# + dc={"key": "2da2a84119"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 5. The effect of handwashing
# <p>With the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!</p>
# + dc={"key": "2da2a84119"} tags=["sample_code"]
# Plot monthly proportion of deaths
ax = monthly.plot(x = 'date', y = 'proportion_deaths', ylabel = 'Proportion deaths')
# + dc={"key": "518e95acc5"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 6. The effect of handwashing highlighted
# <p>Starting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. </p>
# <p>The effect of handwashing is made even more clear if we highlight this in the graph.</p>
# + dc={"key": "518e95acc5"} tags=["sample_code"]
# Date when handwashing was made mandatory
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly['date'] < handwashing_start]
after_washing = monthly[monthly['date'] >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax = before_washing.plot(x = 'date', y = 'proportion_deaths', label = 'before_washing')
after_washing.plot(x = 'date', y = 'proportion_deaths', label = 'after_washing', ax = ax )
# + dc={"key": "586a9f9803"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 7. More handwashing, fewer deaths?
# <p>Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?</p>
# + dc={"key": "586a9f9803"} tags=["sample_code"]
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing['proportion_deaths']
after_proportion = after_washing['proportion_deaths']
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
# + dc={"key": "d8ff65292a"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 8. A Bootstrap analysis of Semmelweis handwashing data
# <p>It reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). </p>
# <p>To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).</p>
# + dc={"key": "d8ff65292a"} tags=["sample_code"]
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac = 1, replace = True)
boot_after = after_proportion.sample(frac = 1, replace = True)
boot_mean_diff.append( boot_after.mean() - boot_before.mean() )
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
# + dc={"key": "0645423069"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 9. The fate of Dr. Semmelweis
# <p>So handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.</p>
# <p>The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as <em>bacteria</em>) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.</p>
# <p>One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.</p>
# + dc={"key": "0645423069"} jupyter={"outputs_hidden": true} tags=["sample_code"]
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
| 9,257 |
/src/pytransit+MH.ipynb
|
8e57118daad80d6f95f624e01d6d066254bbd0ae
|
[
"MIT"
] |
permissive
|
mileslucas/yeehaw
|
https://github.com/mileslucas/yeehaw
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,301,720 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: 'Python 3.7.4 64-bit (''.venv'': venv)'
# language: python
# name: python374jvsc74a57bd0114dc88d8f07b6f071ef262b38b3b95aebd22776352c24eaca614a89ac124591
# ---
import pytransit
import corner
import arviz as az
import matplotlib.pyplot as plt
import exoplanet as xo
import numpy as np
import pymc3 as pm
import pymc3_ext as pmx
# +
import os  # os.path.join is used below but os was not imported with the packages above

def rootdir(*args):
return os.path.join("..", *args)
def datadir(*args):
return rootdir("data", *args)
# -
ground_truth = {
"period": 3.5,
"t0": 1.3,
"us": [0.5, 0.2],
"r": 0.03,
"a": 10.0,
"yerr": 1e-4
}
with np.load(datadir("generated_data_100ppm.npz")) as data:
t = data["t"]
flux = data["flux"]
# +
from ops import PyTransitlightcurve
with pm.Model() as model:
# evaluate period in log-space
logP = pm.Normal("logP", mu=np.log(ground_truth["period"]), sd=0.1)
period = pm.Deterministic("period", pm.math.exp(logP))
# wide prior on t0 to try and help latch onto truth
t0 = pm.Normal("t0", sd=10, testval=ground_truth["t0"])
# Kipping (2013) parameterization
us = xo.distributions.QuadLimbDark("us", testval=ground_truth["us"])
r = pm.Uniform("r", lower=0.01, upper=0.1, testval=ground_truth["r"])
# evaluate a in log-space
loga = pm.Normal("loga", mu=np.log(ground_truth["a"]), sd=0.1)
a = pm.Deterministic("a", pm.math.exp(loga))
y = pm.Deterministic("y", PyTransitlightcurve(t0, period, a, r, us))
# for error model use basic half-cauchy
yerr = pm.HalfCauchy("yerr", 0.1, testval=ground_truth["yerr"])
    # Gaussian likelihood
pm.Normal("obs", mu=y, sd=yerr, observed=flux)
# -
with model:
map_soln = pm.find_MAP(start=model.test_point)
import matplotlib.pyplot as plt
plt.plot(t, flux, ".k", ms=4, label="data")
plt.plot(t, map_soln["y"], lw=1)
np.random.seed(8462852)
with model:
trace = pm.sample(
tune=5000,
draws=10000,
start=map_soln,
step=pm.Metropolis(),
cores=4,
chains=4,
return_inferencedata=True
)
var_names = ["period", "t0", "r", "a", "us", "yerr"]
with model:
summary = az.summary(trace, var_names=var_names)
summary
trace.to_netcdf(datadir("pytransit+MH_trace.nc"))
# make sure loads
# trace = az.InferenceData.from_netcdf(datadir("pytransit+MH_trace.nc"))
with model:
az.plot_trace(trace, var_names=var_names);
_ = corner.corner(
trace,
var_names=var_names,
truths=ground_truth,
)
# Get the posterior median orbital parameters
p = np.median(trace["posterior"]["period"])
Pmean = np.mean(trace["posterior"]["period"]).values
Pstd = np.std(trace["posterior"]["period"]).values
t0 = np.median(trace["posterior"]["t0"])
pred = trace["posterior"]["y"]
pred = np.median(pred, axis=(0, 1))
with np.load(datadir("generated_data.npz")) as data:
true_mod = data["flux"]
# +
# Plot the folded data
x_fold = (t - ground_truth["t0"] + 0.5 * ground_truth["period"]) % ground_truth["period"] - 0.5 * ground_truth["period"]
plt.errorbar(
x_fold, flux, yerr=ground_truth["yerr"], fmt=".k", label="data", zorder=-1000
)
# Plot the folded model
maxphase = 0.1
inds = np.argsort(x_fold)
inds = inds[np.abs(x_fold)[inds] < maxphase]
plt.plot(x_fold[inds], pred[inds], color="C1", label="model")
plt.plot(x_fold[inds], true_mod[inds], color="C2", label="original")
# Annotate the plot with the planet's period
txt = f"period = {Pmean:.4f} +/- {Pstd:.4f} d"
plt.annotate(
txt,
(0, 0),
xycoords="axes fraction",
xytext=(5, 5),
textcoords="offset points",
ha="left",
va="bottom",
fontsize=12,
)
plt.legend(fontsize=10, loc=4)
plt.xlabel("time since transit [days]")
plt.ylabel("relative flux")
plt.xlim(-maxphase, maxphase);
# -
# Plot the folded model residuals
plt.plot(x_fold[inds], pred[inds] - true_mod[inds], color="k")
plt.xlabel("time since transit [days]")
plt.ylabel("flux residual")
plt.xlim(-maxphase, maxphase);
| 4,130 |
/exploratory-data-analysis-pandas-numpy-seaborn.ipynb
|
62c6acbc3c832b472bebd75439abcf16f0dc4ff4
|
[] |
no_license
|
97joseph/Economic_Analysis
|
https://github.com/97joseph/Economic_Analysis
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 980,908 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Armenia
#
# * Homepage of project: https://oscovida.github.io
# * Plots are explained at http://oscovida.github.io/plots.html
# * [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Armenia.ipynb)
# +
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
# -
# %config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Armenia", weeks=5);
overview("Armenia");
compare_plot("Armenia", normalise=True);
# +
# load the data
cases, deaths = get_country_data("Armenia")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
# -
# # Explore the data in your web browser
#
# - If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Armenia.ipynb)
# - and wait (~1 to 2 minutes)
# - Then press SHIFT+RETURN to advance code cell to code cell
# - See http://jupyter.org for more details on how to use Jupyter Notebook
# # Acknowledgements:
#
# - Johns Hopkins University provides data for countries
# - Robert Koch Institute provides data for within Germany
# - Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)
# - Open source and scientific computing community for the data tools
# - Github for hosting repository and html files
# - Project Jupyter for the Notebook and binder service
# - The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))
#
# --------------------
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# +
# to force a fresh download of data, run "clear_cache()"
# -
print(f"Notebook execution took: {datetime.datetime.now()-start}")
# 1. [Why EDA ?](#1.-Why-EDA-?)
# 2. [Pandas, Numpy, Matplotlib, Seaborn](#2.-Pandas,-Numpy,-Matplotlib,-Seaborn)
# 3. [Data types](#3.-Data-types)
# 4. [Exploring categorical features](#4.-Exploring-categorical-features)
# 5. [Exploring numerical features](#5.-Exploring-numerical-features)
# 6. [Bivariate analysis](#6.-Bivariate-analysis)
# 7. [Outliers](#7.-Outliers)
# + [markdown] papermill={"duration": 0.020861, "end_time": "2021-02-09T14:59:52.038459", "exception": false, "start_time": "2021-02-09T14:59:52.017598", "status": "completed"}
# ## 1. Why EDA ?
#
# Because in order to start working with our data, we need to know what kind of data we are dealing with. And this detective work got itself the dry name of exploratory data analysis (which I don't think does justice to it).
#
# These are only some of the questions that we ask ourselves. Depending on the answer, we have to proceed with different processing steps before we can use any algorithms on our data:
# - Do we have 1000 or 1 million entries in our data ?
# - Are we dealing with text or numbers ?
# - Do we have dates ? What format to these dates have ?
# - Do we have outliers ? (Data points that are extremely different than all the other ones)
# - Do we have missing data ? That is, is any of the cells in our dataset empty ?
# + [markdown] papermill={"duration": 0.02098, "end_time": "2021-02-09T14:59:52.081412", "exception": false, "start_time": "2021-02-09T14:59:52.060432", "status": "completed"}
# If I just open my data, the csv file, in a spreadsheet application and look at it with the naked eye, I won't be able to tell much.<br/>
# <img src="https://mihaelagrigore.info/wp-content/uploads/2020/10/Happiness-CSV.png"></img>
# + [markdown] papermill={"duration": 0.021138, "end_time": "2021-02-09T14:59:52.123748", "exception": false, "start_time": "2021-02-09T14:59:52.102610", "status": "completed"}
# I will open the csv file and read all my data.
# + papermill={"duration": 0.054588, "end_time": "2021-02-09T14:59:52.199684", "exception": false, "start_time": "2021-02-09T14:59:52.145096", "status": "completed"}
import pandas as pd
df = pd.read_csv('../input/world-happiness-report/2020.csv')
# + [markdown] papermill={"duration": 0.021107, "end_time": "2021-02-09T14:59:52.242936", "exception": false, "start_time": "2021-02-09T14:59:52.221829", "status": "completed"}
# ## 2. Pandas, Numpy, Matplotlib, Seaborn
#
# 
# These are red pandas. We are mostly used to the black & white ones. This image by <a href="https://pixabay.com/users/1443435-1443435/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=970798">1443435</a> from <a href="https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=970798">Pixabay</a> is a tribute to diversity.
#
# ### 2.1 Pandas
# In the code right above (before the cute furry animals), I just imported pandas library and used **read_csv** to read my csv data in a **Pandas DataFrame.**
#
#
# Pandas is a software library created for data manipulation and analysis. Using pandas we can read various file formats easily into data structures specifically created for data manipulation procedures.
#
# The most commonly used data structures in pandas are [Series](https://pandas.pydata.org/pandas-docs/stable/reference/series.html) and [DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/frame.html). **Series** stores one-dimensional data (like a table with only one column) and **DataFrame** stores 2-dimensional data (tables with multiple columns).
#
# The best place to learn pandas is the official documentation. If during or after reading this you feel like you need a more thorough work session with pandas, have a look at this [10 minutes to pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html) tutorial. # Mind that it only takes 10 min if you're some species from another planet. For humans, most likely, it takes way more than that.
#
# ### 2.2 Numpy
# Numpy is a library mainly used for the Mathematical functions it implements. This way we don't have to write the functions ourselves all the time.
#
# ### 2.3 Matplotlib
# Matplotlib brings us data visualizations.
#
# ### 2.4 Seaborn
# Seaborn takes visualisations to the next level: more powerful and more beautiful. You'll see..
# + papermill={"duration": 0.060977, "end_time": "2021-02-09T14:59:52.325651", "exception": false, "start_time": "2021-02-09T14:59:52.264674", "status": "completed"}
#let's set the precision to 2 decimal places
pd.set_option("display.precision", 2)
#the first 3 rows of our pandas DataFrame object
#if we run df.head(), it will display the first 5 rows by default
df.head(3)
# + [markdown] papermill={"duration": 0.021807, "end_time": "2021-02-09T14:59:52.370128", "exception": false, "start_time": "2021-02-09T14:59:52.348321", "status": "completed"}
# Pandas makes it very easy to handle tabular data.
#
# Tabular data means that our data fits or belongs in a table. Other types of data can be visual (that is, images, for which it doesn't really make sense to be stored as csv files).
#
# The standard way to store tabular data is that:
# - **each row** represents a different **observation**. Observation is a fancy Statistics term, but it just means a new data point, a new measurement we did.
# If our data is about happiness in various countries, each row contains data for a new country.
# - **each column** is a different **feature** (or attribute) of our observations. For the World Happiness Report dataset, examples of features can be the Country name, the Regional indicator or the Social Support score.
# + [markdown] papermill={"duration": 0.021725, "end_time": "2021-02-09T14:59:52.414585", "exception": false, "start_time": "2021-02-09T14:59:52.392860", "status": "completed"}
# Let's use the numpy library to see the maximum value of the **feature** *Ladder score* across **all observations** in our dataset (all countries).
# + papermill={"duration": 0.032016, "end_time": "2021-02-09T14:59:52.469648", "exception": false, "start_time": "2021-02-09T14:59:52.437632", "status": "completed"}
#Let's import the numpy library
import numpy as np
#and use a numpy function to see what's the maximum value for our Ladder score feature
np.max(df["Ladder score"])
# + [markdown] papermill={"duration": 0.022309, "end_time": "2021-02-09T14:59:52.514630", "exception": false, "start_time": "2021-02-09T14:59:52.492321", "status": "completed"}
# And since we're here, I'll do a quick demo of how convenient it is to use pandas DataFrame structure.
# We found the maximum values for "Ladder score" feature. What is the row number of the entry with the max Ladder score ?
# + papermill={"duration": 0.033274, "end_time": "2021-02-09T14:59:52.570301", "exception": false, "start_time": "2021-02-09T14:59:52.537027", "status": "completed"}
df['Ladder score'].argmax()
# + [markdown] papermill={"duration": 0.023028, "end_time": "2021-02-09T14:59:52.616298", "exception": false, "start_time": "2021-02-09T14:59:52.593270", "status": "completed"}
# It only took one line of code to find the row number. Let's see this observation's features, to convince ourselves we got the right entry. # Mind that when displaying one single entry from the DataFrame, the feature values won't appear o a row anymore, but will be displayed as a column (I find this switch a bit confusing).
# + papermill={"duration": 0.035165, "end_time": "2021-02-09T14:59:52.674283", "exception": false, "start_time": "2021-02-09T14:59:52.639118", "status": "completed"}
df.iloc[df['Ladder score'].argmax()]
# + [markdown] papermill={"duration": 0.022651, "end_time": "2021-02-09T14:59:52.721018", "exception": false, "start_time": "2021-02-09T14:59:52.698367", "status": "completed"}
# ## 3. Data types
#
# We have some idea about or features types just by looking a the CSV file. But a better method is the one below.
# + papermill={"duration": 0.040264, "end_time": "2021-02-09T14:59:52.784373", "exception": false, "start_time": "2021-02-09T14:59:52.744109", "status": "completed"}
#DataFrame has this very handy method.
df.info()
# + [markdown] papermill={"duration": 0.02336, "end_time": "2021-02-09T14:59:52.833098", "exception": false, "start_time": "2021-02-09T14:59:52.809738", "status": "completed"}
# What I see in the output above:
# - my data is a DataFrame, with 153 entries (from 0 to 152)
# - I have 20 columns (from 0 to 19)
# - all my columns have 153 non-null values (I don't have "missing" data in any of these columns)
# - my column types are: object (2 of them) and float64* (18 of them)
#
# *float64 means they can store fractional numbers and each number takes 64 bits
#
# The 'object' type I see above most likely refers to a string. I'll use [DataFrame indexing / selection](https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html) to look at one particular value to verify my assumption.
# + papermill={"duration": 0.03224, "end_time": "2021-02-09T14:59:52.889135", "exception": false, "start_time": "2021-02-09T14:59:52.856895", "status": "completed"}
print(df['Country name'][0])
print(df['Regional indicator'][0])
# + [markdown] papermill={"duration": 0.024262, "end_time": "2021-02-09T14:59:52.937125", "exception": false, "start_time": "2021-02-09T14:59:52.912863", "status": "completed"}
# Ok, so, in this case, 'object' means String.
# + [markdown] papermill={"duration": 0.023582, "end_time": "2021-02-09T14:59:52.984936", "exception": false, "start_time": "2021-02-09T14:59:52.961354", "status": "completed"}
# ## 4. Exploring categorical features
#
# We have 2 features which contain text:
# - Country
# - Region
#
# ### Country
#
# Our intuition is that each country is unique in our dataset (one country per row). This is what we would expect from a study of happiness levels in different countries across the world. We can verify this assumption, to make sure we don't have errors in our data. For example, the social scientist running this study could have accidentally entered the same observation twice because she was working late to finish her data analysis.
# + papermill={"duration": 0.037713, "end_time": "2021-02-09T14:59:53.047565", "exception": false, "start_time": "2021-02-09T14:59:53.009852", "status": "completed"}
#how many entries we have for each country
#shown in descending order (highest value first)
df["Country name"].value_counts().sort_values(ascending = False)
# + papermill={"duration": 0.03258, "end_time": "2021-02-09T14:59:53.104752", "exception": false, "start_time": "2021-02-09T14:59:53.072172", "status": "completed"}
#Uncomment the line below to see what data type we used. This is a nice way to explore the functioning of pandas.
#print("\nThe code above returns a date of type: ", type(df['Country name'].value_counts()))
# + [markdown] papermill={"duration": 0.024048, "end_time": "2021-02-09T14:59:53.153399", "exception": false, "start_time": "2021-02-09T14:59:53.129351", "status": "completed"}
# ### Region
#
# Let's have a look at the **regions** now. It would be interesting to see what different regions we have. This would open the door for questions like: 'Are people happier in Western Europen than in Eastern Europe ?'. We don't know yet what question we can ask and exploring our data informs our next steps.
#
# By the way, since we are dealing with long column names, it's worth mentioning that I don't have to type the whole column name. I just input the first 3 letters and press Tab for autocomplete.
#
# We see in the output below that:
# - Europe is split into 2: 'Western Europe' and 'Central and Eastern Europe'
# - The Americas are divided into 2: 'Latin America and Caribbean' and 'North America and ANZ' (which is North America, Australia and New Zealand)
# - Africa is split into 2: 'Sub-Saharan Africa' and 'Middle East and North Africa'
# - Asia is divided into 3: 'Southeast Asia', 'South Asia' and 'East Asia'
# - There is a group of post-Soviet republics in Eurasia making up the 'Commonwealth of Independent States'
# + papermill={"duration": 0.040873, "end_time": "2021-02-09T14:59:53.218694", "exception": false, "start_time": "2021-02-09T14:59:53.177821", "status": "completed"}
#here's each individual region and its corresponding frequency (the statistical term
#for the number of times this region appears in our dataset)
df['Regional indicator'].value_counts()
# + papermill={"duration": 0.033186, "end_time": "2021-02-09T14:59:53.277806", "exception": false, "start_time": "2021-02-09T14:59:53.244620", "status": "completed"}
#we have 10 regions and pandas DataFrame has a method to find this out
print(f"The number of regions in our dataset is: {df['Regional indicator'].nunique()}")
# + [markdown] papermill={"duration": 0.024566, "end_time": "2021-02-09T14:59:53.328193", "exception": false, "start_time": "2021-02-09T14:59:53.303627", "status": "completed"}
# I just used Python's fancy formatting in the line of code above. If you like it and want to read more, know that it's called Literal String Interpolation (but the popular name is f-string). You can read more [here](https://www.programiz.com/python-programming/string-interpolation).
# + [markdown] papermill={"duration": 0.02489, "end_time": "2021-02-09T14:59:53.378196", "exception": false, "start_time": "2021-02-09T14:59:53.353306", "status": "completed"}
# ### Visualisation for categorical features
#
# Since the frequencies (the number of times they appear in our dataset) of our regions is greater than one, it invites us to look at them in a more intuitive way rather than the text displayed above.
#
# It is generally much better for the audience to present any data in visual form, whenever possible. For countries, nothing else made sense since each country appeared once in our data. But for regions, we can use a **bar chart**.
#
# The bar chart below shows the same information as the table we've seen earlier.
# But in visual form it's so much easier to gain insights like "Sub-Saharan Africa is present in our dataset approximately twice as much as the next region in line, Western Europe".
# + papermill={"duration": 0.255638, "end_time": "2021-02-09T14:59:53.659667", "exception": false, "start_time": "2021-02-09T14:59:53.404029", "status": "completed"}
df['Regional indicator'].value_counts().plot(kind='bar', title='Absolute frequency distribution of Regional indicator')
# + [markdown] papermill={"duration": 0.026355, "end_time": "2021-02-09T14:59:53.713926", "exception": false, "start_time": "2021-02-09T14:59:53.687571", "status": "completed"}
# In the code above I've used **Pandas built-in capabilities for data visualization**. I didn't feel a need to turn to matplotlit or seaborn for basic visualisation that can be provided by pandas.
# If you feel like you want to read more abour Pandas visualisation, see the [official documentation.](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html)
# + [markdown] papermill={"duration": 0.026226, "end_time": "2021-02-09T14:59:53.766647", "exception": false, "start_time": "2021-02-09T14:59:53.740421", "status": "completed"}
# Another obsvervation for the plot above is that those numbers are absolute frequencies. That is, the bar chart shows the number of times each region is present in our dataset. Sometimes it's enough to know that we have 39 countries from Sub-Saharan Africa. But there are times when we're wondering how much this represents in terms of percentage.
# + papermill={"duration": 0.275942, "end_time": "2021-02-09T14:59:54.069047", "exception": false, "start_time": "2021-02-09T14:59:53.793105", "status": "completed"}
(df['Regional indicator'].value_counts()/df.shape[0]).plot(kind='bar', title='Relative frequency of Regional indicators')
# + [markdown] papermill={"duration": 0.027089, "end_time": "2021-02-09T14:59:54.123725", "exception": false, "start_time": "2021-02-09T14:59:54.096636", "status": "completed"}
# Now we know that Sub-Saharan Africa represents 25% of our data. For this dataset this is not unusual. But imagine you're trying to see how happy people are in a single country: you broadcast a digital survey that people can take, and during data analysis you realize that 25% of the people who filled it in are from the same city in this country. That sample would be heavily biased toward one place.
# + [markdown] papermill={"duration": 0.028249, "end_time": "2021-02-09T14:59:54.179579", "exception": false, "start_time": "2021-02-09T14:59:54.151330", "status": "completed"}
# ## 5. Exploring numerical features
#
# Pandas has a nice built-in method that performs descriptive statistics on a DataFrame.
# It shows us:
# - the number of values for each feature (again, an opportunity to see if we have missing values for any feature)
# - the mean value
# - the standard error
# - the min and max value
# - the median of our data (50%)
# - the lower and upper quartile (25% and 75%)
# + papermill={"duration": 0.088254, "end_time": "2021-02-09T14:59:54.298541", "exception": false, "start_time": "2021-02-09T14:59:54.210287", "status": "completed"}
df.describe()
# + [markdown] papermill={"duration": 0.027858, "end_time": "2021-02-09T14:59:54.354873", "exception": false, "start_time": "2021-02-09T14:59:54.327015", "status": "completed"}
# Insights from the descriptive statistics above:
# - Ladder score actually goes from 2.5 to 7.8. There's no 0 or 10.
# - Healthy life expectancy has a minimum of 45 and a maximum of 76. This is a large range. There are countries in our dataset where life expectancy is 45 years!
# - Generosity can be negative. It's the only feature that has negative values.
# - Other features are more difficult to interpret from the descriptive stats above.
# + [markdown] papermill={"duration": 0.028615, "end_time": "2021-02-09T14:59:54.412645", "exception": false, "start_time": "2021-02-09T14:59:54.384030", "status": "completed"}
# Numerical data is best viewed as histograms. We will use both matplotlit and seaborn for this.
# + papermill={"duration": 2.042832, "end_time": "2021-02-09T14:59:56.484003", "exception": false, "start_time": "2021-02-09T14:59:54.441171", "status": "completed"}
import matplotlib.pyplot as plt
import seaborn as sns
columns = ['Logged GDP per capita', 'Social support', 'Healthy life expectancy', 'Freedom to make life choices', 'Generosity',\
'Perceptions of corruption']
scols = int(len(columns)/2)
srows = 2
fig, axes = plt.subplots(scols, srows, figsize=(10,6))
for i, col in enumerate(columns):
ax_col = int(i%scols)
ax_row = int(i/scols)
sns.distplot(df[col], hist=True, ax=axes[ax_col, ax_row])
axes[ax_col, ax_row].set_title('Frequency distribution '+ col, fontsize=12)
axes[ax_col, ax_row].set_xlabel(col, fontsize=8)
axes[ax_col, ax_row].set_ylabel('Count', fontsize=8)
fig.tight_layout()
plt.show()
# + [markdown] papermill={"duration": 0.030241, "end_time": "2021-02-09T14:59:56.544737", "exception": false, "start_time": "2021-02-09T14:59:56.514496", "status": "completed"}
# Insights from the visual exploration of our numerical data:
# - the distributions of GDP, social support, healthy life expectancy, freedom and corruption are all [left skewed](http://www.cvgs.k12.va.us/DIGSTATS/main/descriptv/d_skewd.html) (or negative skew). That is to say, most of our values do not happen to be in the middle of the min-max range, but are pushed towards the upper end of our range. For all but Perception of corruption this is good news.
# - generosity, though, is right skewed. The majority of the countries are in the bottom half of the generosity scale (unfortunately)
#
# If you feel the need to read more about why we might want to look at the distribution of our data, [here is a very quick overview](http://www.cvgs.k12.va.us/DIGSTATS/main/descriptv/).
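# The skew described above can also be quantified directly (a small illustrative addition,
# not in the original notebook; `df` and `columns` are the objects already defined here,
# and negative values indicate left skew, positive values right skew).

# +
df[columns].skew().sort_values()
# -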
# + [markdown] papermill={"duration": 0.029829, "end_time": "2021-02-09T14:59:56.604690", "exception": false, "start_time": "2021-02-09T14:59:56.574861", "status": "completed"}
# ## 6. Bivariate analysis
#
# All the explorations above belong to univariate analysis (that is, we looked at each variable individually).
# We can also perform bivariate analysis - we can look at pairs of two variables to explore a possible relation between them.
#
# When Data Scientists perform a bivariate analysis, they look at scatterplots like the ones below and they search for clouds of dots that arrange themselves into straight diagonal lines. This is a visual representation of two variables that correlate.
#
# Here's how to read the plots below:
# Let's look at the **second plot on the first row**. On the **far left** of the image we see "Logged GDP per capita". All plots on the first row have on the y axis (the vertical axis) the Logged GDP per capita as the label of the Y axis. Now look at the bottom of the plots, all the way down, under the second column we have "Social support" as the name of the X axis. All plots on the second columns have the Social support on the x axis (the horizontal axis).
#
# Armed with this information, let's look at the contents of the second plot, first row. As the 'Social support' increases, so does 'Logged GDP per capita'. What does this mean ? Nothing more than the fact that the two features seem to be correlated (correlation, not causation). Most likely (intuition dictates) as the country gets richer it can afford to offer more social support to its inhabitants.
#
# Now look at the fourth subplot on the same row. The datapoints are all over the place and there seems to be no correlation between 'GDP per capita' and 'Freedom to make life choices'.
#
# Correlation is not assessed only by looking at a scatterplot, but this is a good start.
#
# Take a few moments to explore the plots below. Look on the diagonal, from upper left to lower right. Do you recognize them from the univariate analysis section ? These are the histograms we've seen earlier.
# + papermill={"duration": 8.509348, "end_time": "2021-02-09T15:00:05.144201", "exception": false, "start_time": "2021-02-09T14:59:56.634853", "status": "completed"}
#This will take slightly longer than other plots, don't worry if the plots don't show up immediately.
columns = ['Logged GDP per capita', 'Social support', 'Healthy life expectancy', 'Freedom to make life choices', 'Generosity',\
'Perceptions of corruption']
sns.pairplot(df[columns])
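
# +
# A small numeric companion to the visual reading above (not in the original notebook):
# the Pearson correlation coefficient for the two pairs just discussed, using the `df`
# and column names already defined in this notebook.
corr_gdp_support = df['Logged GDP per capita'].corr(df['Social support'])
corr_gdp_freedom = df['Logged GDP per capita'].corr(df['Freedom to make life choices'])
print(f'GDP vs Social support: {corr_gdp_support:.2f}')
print(f'GDP vs Freedom to make life choices: {corr_gdp_freedom:.2f}')
# -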
# + [markdown] papermill={"duration": 0.038577, "end_time": "2021-02-09T15:00:05.219117", "exception": false, "start_time": "2021-02-09T15:00:05.180540", "status": "completed"}
# Seaborn allows us to add a 'hue' to our plots.
# We will set our scatterplots to assign **different colors to datapoints that belong to different global regions.**
#
# You can read about [Seaborn pairplot here](https://seaborn.pydata.org/generated/seaborn.pairplot.html)
#
# This helps us gain insight like: Sub-Saharan African countries (the purple dots, according to the legend on the right) have the lowest GDP and the lowest Healthy life expectancy, but they are not less generous than more fortunate countries.
# + papermill={"duration": 17.174463, "end_time": "2021-02-09T15:00:22.430448", "exception": false, "start_time": "2021-02-09T15:00:05.255985", "status": "completed"}
#This will take slightly longer than other plots, don't worry if the plots don't show up immediately.
columns = ['Regional indicator','Logged GDP per capita', 'Social support', 'Healthy life expectancy', 'Freedom to make life choices', 'Generosity',\
'Perceptions of corruption']
sns.pairplot(df[columns], hue="Regional indicator", palette="Paired")
# + [markdown] papermill={"duration": 0.044808, "end_time": "2021-02-09T15:00:22.521211", "exception": false, "start_time": "2021-02-09T15:00:22.476403", "status": "completed"}
# Correlation is not assessed only by looking at a scatterplot, but the mono-coloured pairplot above was a good start.
# Another useful tool in the EDA toolset is the **correlation matrix.**
# + papermill={"duration": 0.444054, "end_time": "2021-02-09T15:00:23.011918", "exception": false, "start_time": "2021-02-09T15:00:22.567864", "status": "completed"}
meaningful_columns = ['Ladder score','Logged GDP per capita', 'Social support', 'Healthy life expectancy',
'Freedom to make life choices', 'Generosity',
'Perceptions of corruption', 'Ladder score in Dystopia']
plt.figure(figsize=(8,6))
#sns.heatmap(df.corr(), annot = True, fmt='.1g', cmap= 'coolwarm')
sns.heatmap(df[columns].corr(), annot = True, fmt='.1g', cmap= 'coolwarm')
# + [markdown] papermill={"duration": 0.048719, "end_time": "2021-02-09T15:00:23.108925", "exception": false, "start_time": "2021-02-09T15:00:23.060206", "status": "completed"}
# ## 7. Outliers
#
# A nice way to spot outliers is a Box and Whiskers plot.
# + papermill={"duration": 0.478245, "end_time": "2021-02-09T15:00:23.635916", "exception": false, "start_time": "2021-02-09T15:00:23.157671", "status": "completed"}
small = ['Social support', 'Freedom to make life choices', 'Generosity', 'Perceptions of corruption']
medium = ['Ladder score', 'Logged GDP per capita']
large = ['Healthy life expectancy']
f, axs = plt.subplots(1,3,figsize=(15,5))
# equivalent but more general
ax1=plt.subplot(1, 3, 1)
df.boxplot(column=small, ax = ax1)
plt.xticks(rotation=90)
ax2=plt.subplot(1, 3, 2)
df.boxplot(column=medium, ax = ax2)
ax3=plt.subplot(1, 3, 3)
df.boxplot(column=large, ax = ax3)
# + [markdown] papermill={"duration": 0.047292, "end_time": "2021-02-09T15:00:23.730441", "exception": false, "start_time": "2021-02-09T15:00:23.683149", "status": "completed"}
# The classical interpretation in Statistics is that whatever falls outside the 'whiskers' represents an outlier.
#
# If you'd like to read more about box plots and what the box, the line that splits the box and the whiskers represent, [this resource](https://publiclab.org/notes/mimiss/06-18-2019/creating-a-boxplot-to-identify-outliers-using-codap) seemed to have nice visuals.
#
# In practice, deciding what to do with outliers depends on many factors (whether you think they can be a mistake in data collection, for example).
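# As a hedged illustration (not in the original notebook) of the 1.5 * IQR whisker rule
# mentioned above, the bounds can be computed explicitly for the feature examined next,
# using the `df` already loaded in this notebook.

# +
q1 = df['Perceptions of corruption'].quantile(0.25)
q3 = df['Perceptions of corruption'].quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print(f'Whisker bounds: [{lower:.2f}, {upper:.2f}]')
print('Points below the lower whisker:', (df['Perceptions of corruption'] < lower).sum())
# -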
# + [markdown] papermill={"duration": 0.047599, "end_time": "2021-02-09T15:00:23.827446", "exception": false, "start_time": "2021-02-09T15:00:23.779847", "status": "completed"}
# Let's examine the case of Perceptions of corruption.
# + papermill={"duration": 0.340652, "end_time": "2021-02-09T15:00:24.218390", "exception": false, "start_time": "2021-02-09T15:00:23.877738", "status": "completed"}
f, axs = plt.subplots(1,2,figsize=(12,4))
# equivalent but more general
ax1=plt.subplot(1, 2, 1)
sns.distplot(df['Perceptions of corruption'], hist=True, ax=ax1)
ax2=plt.subplot(1, 2, 2)
df.boxplot(column=['Perceptions of corruption'], ax = ax2)
# + [markdown] papermill={"duration": 0.048548, "end_time": "2021-02-09T15:00:24.316341", "exception": false, "start_time": "2021-02-09T15:00:24.267793", "status": "completed"}
# Because 'Perceptions of corruption' feature is left skewed, countries with lowest perception of corruption are automatically categorized as outliers in the boxplot.
#
# But just because they are technically outliers does not necessarily mean we should do something about them. The next question is: is the data correct ? Let's see who these outliers are.
# + papermill={"duration": 0.066481, "end_time": "2021-02-09T15:00:24.431711", "exception": false, "start_time": "2021-02-09T15:00:24.365230", "status": "completed"}
(df[df['Perceptions of corruption'] < 0.4])[['Country name', 'Perceptions of corruption']].sort_values(by = 'Perceptions of corruption', axis=0, ascending=True)
# + [markdown] papermill={"duration": 0.050822, "end_time": "2021-02-09T15:00:24.533494", "exception": false, "start_time": "2021-02-09T15:00:24.482672", "status": "completed"}
# It's no surprise to find almost all these countries near the bottom of the Perceptions of Corruption ranking. I admit I did not know about the low perceived corruption in Rwanda!
# + [markdown] papermill={"duration": 0.050814, "end_time": "2021-02-09T15:00:24.635299", "exception": false, "start_time": "2021-02-09T15:00:24.584485", "status": "completed"}
# *If you find the line of code above confusing, I did too, in the beginning. When I found lines like this in someone else's code, I used to dissect them to examine the output and the data types. Maybe this tip helps.*
# + papermill={"duration": 0.057331, "end_time": "2021-02-09T15:00:24.743317", "exception": false, "start_time": "2021-02-09T15:00:24.685986", "status": "completed"}
#Uncomment the code below, line by line, if you want to dissect the previous line of code.
#I find it useful to first make a hypothesis about what I expect the line of code does before running it.
#print(f'df has {len(df)} entries')
#df['Perceptions of corruption'] < 0.4
#df[df['Perceptions of corruption'] < 0.4]
#print(f"our selection has {len(df[df['Perceptions of corruption'] < 0.4])} entries")
#(df[df['Perceptions of corruption'] < 0.4])[['Country name', 'Perceptions of corruption']]
# + [markdown] papermill={"duration": 0.053487, "end_time": "2021-02-09T15:00:24.850249", "exception": false, "start_time": "2021-02-09T15:00:24.796762", "status": "completed"}
# That's it for EDA for this rather simple tabular dataset.
| 31,723 |
/Wavenumber_Spectra/.ipynb_checkpoints/2020-02-10-AA-coord-box-spectra-NATL60GS-checkpoint.ipynb
|
47dc315cf3d12745138e6a080621e483d9d51566
|
[] |
no_license
|
auraoupa/diags-CMEMS-on-occigen
|
https://github.com/auraoupa/diags-CMEMS-on-occigen
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 10,624 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 3 boites en GS :
#
# - box1 : 376 740 124 585
# - box2 : 736 1099 123 574
# - box3 : 1096 1457 122 565
# ## 4 boites en EU :
#
# - box1 : 75 256 98 326
# - box2 : 70 253 324 568
# - box3 : 60 245 563 826
# - box4 : 21 208 1089 1391
#
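# A small organisational sketch (not part of the original notebook): the GS box corners
# listed above can be collected in a dictionary, so that the per-box cells below could loop
# over them instead of hard-coding indices. The values are copied from the list above; the
# dictionary itself is illustrative and not used by the rest of the notebook.

# +
GS_boxes = {
    'box1': dict(imin=376, imax=740, jmin=124, jmax=585),
    'box2': dict(imin=736, imax=1099, jmin=123, jmax=574),
    'box3': dict(imin=1096, imax=1457, jmin=122, jmax=565),
}
# -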
# +
import sys
import xarray as xr
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import glob
from netCDF4 import Dataset
import cartopy.crs as ccrs
import os
import warnings
warnings.filterwarnings('ignore')
# %matplotlib inline
# %autosave 60
# -
sys.path.insert(0,"/scratch/cnt0024/hmg2840/albert7a/DEV/git/powerspec/powerspec")
import powerspec as pp
sys.path.insert(0,"/scratch/cnt0024/hmg2840/albert7a/DEV/git/diags-CMEMS-on-occigen/common-lib/")
import GriddedData
import WavenumberSpectrum as ws
# +
# Datasets of daily surface meridional and zonal velocities
fileu = sorted(glob.glob("/store/colombo/GS36/GS36-MPC001-S/1d/2012/GS36-MPC001_y2012m??d??.1d_gridU.nc"))
filev = sorted(glob.glob("/store/colombo/GS36/GS36-MPC001-S/1d/2012/GS36-MPC001_y2012m??d??.1d_gridV.nc"))
# +
# Some functions that allow us to compute spectra in boxes
##########################################
def get_values_in_box(imin,imax,jmin,jmax,data):
values = data[:,0,jmin:jmax+1,imin:imax+1]
values = ma.masked_invalid(values)
return values
##########################################
def compute_spec_for_box(imin,imax,jmin,jmax,data,Mth,navlon,navlat,name):
var = get_values_in_box(imin,imax,jmin,jmax,data)
kspec,pspec = compute_spec(var,navlon,navlat)
np.savez(WaveSpecResult+'WaveSpec_'+name+'_'+param+'_'+Mth, kspec=kspec ,pspec=pspec)
##########################################
def compute_spec(data,navlon,navlat):
days = len(data)
mth_pspec = []
for it in np.arange(0,days):
arr = data[it]
datab = arr.squeeze()
pspec,kstep = pp.wavenumber_spectra(datab,navlon,navlat)
mth_pspec.append(pspec)
mthly_pspec = np.array(mth_pspec)
mean_mthly_pspec = mthly_pspec.mean(axis=0)
return kstep,mean_mthly_pspec
# +
# Implementation for surface zonal velocities U
imin=376
imax=740
jmin=124
jmax=585
param='Uspec'
varname = 'vozocrtx'
filenames = fileu
WaveSpecResult = '/scratch/cnt0024/hmg2840/albert7a/GS36-spec/GS36-MPC001-'+str(param)+'/'
YrMth = ['2012m01','2012m02','2012m03','2012m04','2012m05','2012m06','2012m07','2012m08','2012m09','2012m10','2012m11','2012m12']
for ii in np.arange(12):
Mth = YrMth[ii]
if not os.path.exists(WaveSpecResult+'WaveSpec_box1_'+param+'_'+Mth+'.npz'):
filename="/store/colombo/GS36/GS36-MPC001-S/1d/2012/GS36-MPC001_y"+str(YrMth[ii])+"d??.1d_gridU.nc"
ds = xr.open_mfdataset(filename,chunks={'x':500,'y':500,'time_counter':1})
data = ds[varname]
navlon=ds.nav_lon[imin:imax+1,jmin:jmax+1]
navlat=ds.nav_lat[imin:imax+1,jmin:jmax+1]
compute_spec_for_box(imin,imax,jmin,jmax,data,Mth,navlon,navlat,'box1')
# +
# Implementation for surface meridional velocities V
imin=376
imax=740
jmin=124
jmax=585
param='Vspec'
varname = 'vomecrty'
filenames = filev
WaveSpecResult = '/scratch/cnt0024/hmg2840/albert7a/GS36-spec/GS36-MPC001-'+str(param)+'/'
YrMth = ['2012m01','2012m02','2012m03','2012m04','2012m05','2012m06','2012m07','2012m08','2012m09','2012m10','2012m11','2012m12']
for ii in np.arange(12):
Mth = YrMth[ii]
if not os.path.exists(WaveSpecResult+'WaveSpec_box1_'+param+'_'+Mth+'.npz'):
filename="/store/colombo/GS36/GS36-MPC001-S/1d/2012/GS36-MPC001_y"+str(YrMth[ii])+"d??.1d_gridV.nc"
ds = xr.open_mfdataset(filename,chunks={'x':500,'y':500,'time_counter':1})
data = ds[varname]
navlon=ds.nav_lon[imin:imax+1,jmin:jmax+1]
navlat=ds.nav_lat[imin:imax+1,jmin:jmax+1]
compute_spec_for_box(imin,imax,jmin,jmax,data,Mth,navlon,navlat,'box1')
# -
# Folders containing U and V spectral
u_database = '/scratch/cnt0024/hmg2840/albert7a/GS36-spec/GS36-MPC001-Uspec/'
v_database = '/scratch/cnt0024/hmg2840/albert7a/GS36-spec/GS36-MPC001-Vspec/'
# Folders to contain computed KE spectral
KEspecFolder = '/scratch/cnt0024/hmg2840/albert7a/GS36-spec/GS36-MPC001-KEspec/'
u_filenames = sorted(glob.glob(u_database + 'WaveSpec_box1_Uspec_*.npz'))
v_filenames = sorted(glob.glob(v_database + 'WaveSpec_box1_Vspec_*.npz'))
for i in range(len(u_filenames)):
uspec = np.load(u_filenames[i])['pspec']
vspec = np.load(v_filenames[i])['pspec']
kspec = np.load(u_filenames[i])['kspec']
KEspec = 0.5*(uspec + vspec)
np.savez(KEspecFolder+'WaveSpec_box1_KEspec_'+YrMth[i]+'.npz',kspec = kspec,KEspec = KEspec)
# +
# Some functions to make the plots
def mean_pspec(filenames,i,j):
''' Compute mean spectrum'''
_pspec = []
for filename in filenames:
spec = np.load(filename)
kspec = spec['kspec'];
pspec = spec['KEspec'];
_pspec.append(pspec)
pspec_ar = np.array(_pspec)
mean_pspec = pspec_ar[i:j].mean(axis=0)
return kspec,mean_pspec
def comp_slope(database,i,j):
    ''' Compute slope of averaged spectrum'''
slope_10_100 = []
slope_70_250 = []
for box in boxes:
filenames = sorted(glob.glob(database + 'KEspec/WaveSpec_'+box.name+'_KEspec_*.npz'))
kpsec,pspec = mean_pspec(filenames,i,j)
m1 = pp.get_slope(kpsec,pspec,10*1E3,100*1E3)
m2 = pp.get_slope(kpsec,pspec,70*1E3,250*1E3)
slope_10_100.append(m1)
slope_70_250.append(m2)
slope_10_100_arr = np.array(slope_10_100)
slope_70_250_arr = np.array(slope_70_250)
return slope_10_100_arr,slope_70_250_arr
# +
# Final Plots
fig, axs = plt.subplots(1,1, figsize=(18, 18))
title = 'Annual Mean of KE spectrum : GS36-MPC001'
plt.suptitle(title,size = 25,y=1.05)
# - general slope
k = np.array([1E-6,1E-3])
s3 = k**-3/1.e11
s2 = k**-2/1.e7
s53 = k**(-5./3.)/1.e5
GS36_filenames = sorted(glob.glob('/scratch/cnt0024/hmg2840/albert7a/GS36-spec/GS36-MPC001-KEspec/WaveSpec_box1_KEspec_*.npz'))
GS36_kpsec,GS36_pspec = mean_pspec(GS36_filenames,0,12)
# plt.subplots(1,1) returns a single Axes object, so use it directly (the index i was undefined here)
axs.loglog(GS36_kpsec,GS36_pspec,'r',linewidth=2.0,label='GS36-MPC001')
axs.loglog(k,s3,'k-',label=r'$k^{-3}$')
axs.loglog(k,s2,'k-.',label=r'$k^{-2}$')
axs.loglog(k,s53,'k--',label=r'$k^{-5/3}$')
axs.set_xlim(1E-6,1E-3)
axs.set_ylim(1E-4,1E5)
axs.set_title('box 1',size=15)
axs.legend()
axs.grid(True)
fig.tight_layout()
| 6,736 |
/datasets/movie-review-sentiment-analysis-kernels-only/kernels/NULL---diff77---features-from-words-n-gram-chars-together.ipynb
|
e0ec5234682771e75784db66d021e1bc2d96313d
|
[] |
no_license
|
mindis/GDS
|
https://github.com/mindis/GDS
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 12,331 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _uuid="ab68a22c96cea6e52eb27cca95761440b9bb0de4"
# ##### I wanted to see whether char n-grams perform better than word n-grams, and then combined both as features, which resulted in better performance. This still needs more testing.
#
# *** I borrowed some ideas from other Kernels here
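#
# The core idea, sketched on a toy corpus (illustration only, not part of the
# original kernel): fit a word-level and a char-level TF-IDF separately, then stack
# the two sparse matrices side by side so a single linear model sees both views.

# +
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.sparse import hstack as sparse_hstack

toy_docs = ["a gripping and moving film", "a dull , lifeless movie"]
word_vec = TfidfVectorizer(analyzer='word', ngram_range=(1, 2))
char_vec = TfidfVectorizer(analyzer='char', ngram_range=(3, 5))
toy_features = sparse_hstack([word_vec.fit_transform(toy_docs),
                              char_vec.fit_transform(toy_docs)])
print(toy_features.shape)  # (2, n_word_features + n_char_features)
# -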
# + _uuid="cc4dcf44ff148bf1ab4bd497b4ff5ffd427343c7"
import pandas as pd
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import SnowballStemmer,WordNetLemmatizer
from string import punctuation
import re
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from scipy.sparse import vstack, hstack
# + _uuid="1021e3c6463d5240448bb26bdea8ee4b79b432d0"
# + _uuid="c6428309b33fd58dc5f865fdcdfeb74fee862e8a"
train=pd.read_csv('../input/train.tsv', sep='\t')
test=pd.read_csv('../input/test.tsv', sep='\t')
sub=pd.read_csv('../input/sampleSubmission.csv')
# + [markdown] _uuid="b598b57a1c89bdad56b252df69d9138713afcd94"
# 0 - negative <br>
# 1 - somewhat negative <br>
# 2 - neutral <br>
# 3 - somewhat positive <br>
# 4 - positive
# + _uuid="d70c929b582d6d5e413c820e9fca7836c355c57c"
train[train['Sentiment']==0].head(10)
# + _uuid="c7dee4ce358d600254e4e6292252754bcedca84e"
test['Sentiment']=777
# + _uuid="13bf4341f41d7087362168fca97ae42363e9a136"
df=pd.concat([train, test], ignore_index=True, sort=False)
print(df.shape)
# + _uuid="cc33b57517902eb93b69fe8687514ba0f41497f3"
df.tail()
# + _uuid="ce99c6f143b2e4262d3d8b7a6bded18542b7e917"
stemmer=SnowballStemmer('english')
lemma=WordNetLemmatizer()
#nltk.download('punkt')
# + _uuid="b180e28181975e6b11c97f017f877c02c40bdb93"
def clean_phrase(review_col):
review_corpus=[]
for i in range(0,len(review_col)):
review=str(review_col[i])
review=re.sub('[^a-zA-Z]',' ',review)
#review=[stemmer.stem(w) for w in word_tokenize(str(review).lower())]
review=[lemma.lemmatize(w) for w in word_tokenize(str(review).lower())]
review=' '.join(review)
review_corpus.append(review)
return review_corpus
# + _uuid="d27d9d8522c3e7102a88383bffdcae3b9746078a"
df['clean_review']=clean_phrase(df.Phrase.values)
df.head()
# + _uuid="ba2866596a9ae233d1f69637884e78193979e0bd"
df_train=df[df.Sentiment!=777]
df_train.shape
# + _uuid="5f430624deac795dc32dadfe0bad5f7db905a495"
df_test=df[df.Sentiment==777]
df_test=df_test.drop('Sentiment',axis=1)
print(df_test.shape)
df_test.head()
# + _uuid="35d40a8ddbbce272bfb8df7364fde86a8dccc1ce"
# + [markdown] _uuid="99e975dd590c5996f70e2f543f6e32fac891d71b"
# #### Encoding the target labels (LabelEncoder)
# + _uuid="a96c2ff1a7657f35ecd2524704284b956e3beacf"
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
le=LabelEncoder()
y=le.fit_transform(df_train.Sentiment.values)
# + [markdown] _uuid="827e4b62e1f3fc3f29fcf13e06659f6b0198e4b1"
# ### TfIdf word n-grams (1-2)
# + _uuid="e0451c239f36bc18a4f8c67adaa2bfe7faafacc9"
from sklearn.feature_extraction.text import TfidfVectorizer
# + _uuid="7cc6359a86ed1af6a18aa93bd2b947d079db292b"
# + _uuid="556ffe1d5e7a8ff1894677edfba8c06e6457eacd"
tfidf_word=TfidfVectorizer(ngram_range=(1,2),
stop_words = 'english',
max_df=0.95,min_df=10,
sublinear_tf=True,
analyzer='word',
max_features=18000
)
tfidf_word.fit(df.clean_review)
tfidf_WordTrain=tfidf_word.transform(df_train.clean_review)
# + _uuid="9c0c3c5262b88b603ebcea73b3ea7e9a9ac0861a"
tfidf_WordTrain.shape
# + _uuid="219371de129ebb0b6717fdfd2b582f0e16b7975d"
list(tfidf_word.vocabulary_)[:10]
# + [markdown] _uuid="14fe4c4cdade35a6188a674d5a6f7da4330c05e6"
# ### TfIdf char n-grams (3-5)
# + _uuid="dd69063c40f3238b247cb8c07a96a36ec7d2201c"
tfidf_char=TfidfVectorizer(ngram_range=(3,5),
strip_accents='unicode',
analyzer='char',
stop_words='english',
sublinear_tf=True,
#max_features=50000,
#dtype=np.int32
)
tfidf_char.fit(df.clean_review)
tfidf_CharTrain=tfidf_char.transform(df_train.clean_review)
# + _uuid="048e9a039b78466db8d3c128caf3f05e7ea7bf02"
tfidf_CharTrain.shape
# + _uuid="8bc45ecb1bcd94f7bde306ad072b021d184e5a40"
list(tfidf_char.vocabulary_)[:10]
# + [markdown] _uuid="48c524c8c0acfaf88788ec0009d6127caffa41a8"
# ### TRAIN TEST SPLIT WORDS
# + _uuid="e8fc16acaa6bd5bedfac60054b61ba83f9d3c7ee"
from sklearn.model_selection import train_test_split
# + _uuid="52aa8685ce8429bef38b50a5a054f81a40fafd56"
X_train_word,X_val_word,y_train_word,y_val_word=train_test_split(tfidf_WordTrain,y,test_size=0.3)
# + [markdown] _uuid="41d7019cfb8b617e42fe27ac7fdffe59426c3824"
# #### LOGISTIC REGRESSION
# + _uuid="f77a3203871582e02c0aef2a566ae2c04ce9a6b0"
lr=LogisticRegression(penalty='l1', max_iter=100)
# + [markdown] _uuid="4e0dfa7176dae85af91f7dff21a2dd8b43cb6a63"
# ### Log Reg with Words
# + _uuid="ae30ec6db74f0375275f21e9245e49bd5833b19e"
lr.fit(X_train_word,y_train_word)
y_pred_word=lr.predict(X_val_word)
print("Test Accuracy ", accuracy_score(y_pred_word, y_val_word)*100 , '%')
# + [markdown] _uuid="d32b2650f89a954de446f345e2cf4db62ede3f0c"
# ### Log Reg with Chars
# + _uuid="e35303ab3f9fa8168434c72a7678472da1a5b68b"
X_train_char,X_val_char,y_train_char,y_val_char=train_test_split(tfidf_CharTrain, y ,test_size=0.3)
# + _uuid="506fc5c32f6d1cfcd2c63cf7d6380e2ea47f2c73"
lr.fit(X_train_char,y_train_char)
y_pred_char=lr.predict(X_val_char)
print("Test Accuracy ", accuracy_score(y_pred_char, y_val_char)*100 , '%')
# + [markdown] _uuid="2318fdd9470243ed3265b1506925e8d4990b6468"
# #### Words and chars gave us similar performance; let's see if we can get a better score by combining them
# + [markdown] _uuid="dfefc0b57e95b260a7d7edbb32dd7e85cca2cdef"
# ### CHARS + WORDS as features
# + _uuid="7f605264b14794631c4d9a6f7e47fb6cda0bb7b2"
tfidf_WordTrain.shape
# + _uuid="dfec57af0eea3a203442d4ec856610414061acbe"
tfidf_CharTrain.shape
# + _uuid="2c9d51c74e15552134ee62f2b86b1a1eb1ce77bd"
big_train=hstack([tfidf_WordTrain, tfidf_CharTrain])
big_train.shape
# + _uuid="a61e9e74a42defbffaad5cc194ba02361bb613dc"
X_train, X_val, y_train, y_val=train_test_split(big_train,y,test_size=0.3)
# + _uuid="6c5cbc7c9226a2d2e81ab26aca0c3f73cc5244ba"
lr.fit(X_train, y_train)
y_pred=lr.predict(X_val)
print("Test Accuracy ", accuracy_score(y_pred, y_val)*100 , '%')
# + [markdown] _uuid="382de7fd2e0925913ffde6d29f5f0501ed9cc97a"
# #### SUBMIT
# + _uuid="c92367f60c09243ec96af97465f2ff6913e7f72e"
X_val.shape
# + _uuid="f98c0e704f92a2bfd0b961be9411cb743e817cf1"
sub_test_char=tfidf_char.transform(df_test.clean_review)
sub_test_word=tfidf_word.transform(df_test.clean_review)
test_final=hstack([sub_test_word, sub_test_char])  # keep the same (word, char) column order used for big_train
# + _uuid="56bdf6e998db30e5a7b2cc6ab8df7f47258e36cf"
test_final.shape
# + _uuid="314c90f5a3d73042d49d1b6bd9d554d90833f552"
final_pred=lr.predict(test_final)
# + _uuid="6256b3f5279a4900849f7ac602ca02639cdbcfc0"
sub.shape
# + _uuid="a9b21c608d4fbaab268b086c9e9667557fab7864"
# + _uuid="d505e714cd39474a9320ca43a38771f50439ab9a"
sub.Sentiment=final_pred
sub.head()
sub.to_csv('submission.csv',index=False)
# + _uuid="2a1a11087dcc34cf0b71c82700f457d9a750cecf"
sub.head()
| 7,636 |
/archive/optimize_f_stat.ipynb
|
fe8794e8a1ec18f212200566fc655b9dd4fdcd24
|
[
"MIT"
] |
permissive
|
Sourav-lab/pothos
|
https://github.com/Sourav-lab/pothos
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 27,221 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Root Mean Square Error (RMSE)
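# The quantity computed below is $\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(p_i-a_i)^2}$,
# where $p_i$ are the predicted scores and $a_i$ the actual scores.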
import numpy as np
ab = [3, 76]
ab
data = [[2, 81], [4, 93], [6, 91], [8,97]]
data
x = [i[0] for i in data]
y = [j[1] for j in data]
def predict(x):
return ab[0]*x + ab[1]
def rmse(p, a):
return np.sqrt(((p-a)**2).mean())
def rmse_val(predict_result, y):
return rmse(np.array(predict_result), np.array(y))
# +
predict_result = []
for i in range(len(x)):
predict_result.append(predict(x[i]))
print(f"공부한 시간 = {x[i]}, 실제점수 = {y[i]}, 예측 점수 = {predict(x[i])}")
# -
print(f"rmse 최종값 : {str(rmse_val(predict_result, y))}")
| 831 |
/ref_genome_mapping.ipynb
|
9ae5d9864a809eb1f034cc8ebbbf7e50ef4f3fdb
|
[] |
no_license
|
szitenberg/gasp
|
https://github.com/szitenberg/gasp
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 87,019 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Dependencies
# ## Dependencies: import from modules
import wget
import gzip
from os import remove, path, mkdir, stat
import tarfile
from subprocess import PIPE, Popen
import glob
from jgh.Interface import Interface
from Bio import SeqIO
import numpy as np
import pandas as pd
from tqdm import tqdm
# %matplotlib inline
import seaborn as sns
import statsmodels.api as sm
import matplotlib.pyplot as plt
from collections import Counter
# bwa, samtools
# +
# #! conda install -y -c bioconda bwa samtools
# +
def gc(string):
string = string.upper()
atgc = {'A':0,'T':0,'G':0,'C':0}
for base in atgc:
atgc[base] = string.count(base)
length = len(string)
return 100*((atgc['G']+atgc['C'])/float(length))
def windows_coverage(ref, cov, scaffold):
windows = [] # pd.DataFrame(columns=['scaffold','start','end','GC','median_cov'])
records = SeqIO.index(ref,'fasta')
cov_df = pd.read_table(cov, header=None,names=['scaffold','position','coverage'])
scaffold_cov_df = cov_df.loc[cov_df.scaffold == scaffold]
start_index = scaffold_cov_df.index.tolist()[0]
scaff_str = str(records[scaffold].seq)
for i in range(len(scaff_str)):
if i+150 > len(scaff_str):
break
window = scaff_str[i:i+150]
window_gc = gc(window)
window_cov_df = scaffold_cov_df.loc[list(range(start_index,start_index+151))]
window_cov = np.median(window_cov_df.coverage.tolist())
#windows.loc[len(windows)] = [scaffold, i+1, i+150,window_gc,window_cov]
windows.append({'scaffold':scaffold,'start':i+1,'end':i+150,'GC':window_gc,'median_cov':window_cov})
start_index += 1
return windows
# -
# ## Dependencies: reference genome
# +
if not path.exists('ref_genomes'):
mkdir('ref_genomes')
url = 'ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/146/045/'
url += 'GCF_000146045.2_R64/GCF_000146045.2_R64_genomic.fna.gz'
fpath = wget.download(url,'ref_genomes')
ref_assembly = fpath.replace('.gz','')
with open(ref_assembly,'wt') as hndl:
hndl.write(gzip.open(fpath,'r').read())
remove(fpath)
# -
# ## Map reads to reference genome
# +
if not path.exists('mapping_to_ref'):
mkdir('mapping_to_ref')
cline = ""
cline += "bwa index {assembly} && "
cline += "bwa mem {assembly} {read1} {read2} > {sam} && "
cline += "samtools sort -o {bam} -T {smpl} -O bam {sam} && "
cline += "samtools depth -aa {bam} > {cov} && "
cline += "samtools view -f 4 {bam} > {un}"
for read1 in glob.glob('trimmed_reads/*_forward.paired.fastq.gz'):
smpl = read1.split('/')[-1].split('_')[0]
read2 = read1.replace('forward','reverse')
sam = 'mapping_to_ref/%s.sam' % smpl
bam = 'mapping_to_ref/%s.bam' % smpl
cov = 'mapping_to_ref/%s.cov' % smpl
un = 'mapping_to_ref/%s.un.sam' % smpl
if path.exists(bam):
continue
instance = cline.format(assembly=ref_assembly,read1=read1,read2=read2,sam=sam,bam=bam,smpl=smpl,cov=cov,un=un)
p = Popen(instance,shell=True,stdout=PIPE,stderr=PIPE)
out, err = p.communicate()
# -
# ## Coverage and GC content sliding window
for cov in glob.glob('mapping_to_ref/*.cov.gz'):
smpl = cov.split('/')[-1].replace('.cov.gz','')
scaffolds = [r.id for r in SeqIO.parse('ref_genomes/GCF_000146045.2_R64_genomic.fna','fasta')]
for scaff in scaffolds:
expected_output = 'mapping_to_ref/%s-%s.windows.tsv'%(smpl,scaff)
if path.exists(expected_output):
continue
windows = windows_coverage('ref_genomes/GCF_000146045.2_R64_genomic.fna', cov, scaff)
with open(expected_output,'wt') as log:
header = ['scaffold','start','end','GC','median_cov']
log.write('\t'.join(header) + '\n')
for l in windows:
row = []
for key in header:
row.append(str(l[key]))
log.write('\t'.join(row) + '\n')
# ## Coverage histograms
for f in sorted(list(glob.glob('./mapping_to_ref/*.windows.tsv')),key = lambda i: len(open(i,'r').readlines())):
smpl, scaffold = f.split('/')[-1].replace('.windows.tsv','').split('-')
windows = pd.read_table(f)
windows = windows.drop(windows.index[len(windows)-1])
mod = sm.OLS(windows.median_cov,windows.GC)
res = mod.fit()
fig, ax = plt.subplots()
mode_real = sorted(Counter(windows.median_cov).items(),key=lambda i: i[1],reverse=True)[0][0]
mode_resid = sorted(Counter(res.resid).items(),key=lambda i: i[1],reverse=True)[0][0]
delta_mode = mode_real - mode_resid
residuals = [i+delta_mode for i in res.resid]
sns.distplot(residuals,label='corrected residual coverage',ax=ax,bins=100)
sns.distplot(windows.median_cov,label='coverage',ax=ax,bins=100)
title = "%s mapped to %s" % (smpl, scaffold)
ax.set_title(title)
ax.legend()
fig.savefig('mapping_to_ref/%s.png' % (title.replace(' ','_')))
plt.close()
for smpl in ['404','424','Sefale','SRR5678584','SRR5678596','SRR5688143']:
coverage = []
for f in glob.glob('./mapping_to_ref/%s-*.windows.tsv' % smpl):
windows = pd.read_table(f)
coverage += windows.median_cov.tolist()[:-1]
fig, ax = plt.subplots()
sns.distplot(coverage,label='coverage',ax=ax,bins=1000,kde=False)
ax.set_title(smpl)
ax.set_xlim(0,200)
fig.savefig('mapping_to_ref/%s.png' % smpl)
# ## Sync repository
repo = Interface()
repo.evaladd('mapping_to_ref')
repo.evaladd('ref_genome_mapping.ipynb')
repo.commitpush('mapping to reference genome')
| 5,865 |
/pre and post processing/preproc_4_locations.ipynb
|
1aa73bc7c1c450c617932d7550b5283dea986bda
|
[] |
no_license
|
mccurrymitchell3/ems_simulation
|
https://github.com/mccurrymitchell3/ems_simulation
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 131,702 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import math
import datetime as dt
br = pd.read_csv("bronx.csv")
br2 = br[['INCIDENT_DATETIME', 'INITIAL_CALL_TYPE', 'INITIAL_SEVERITY_LEVEL_CODE', 'FINAL_CALL_TYPE', 'FINAL_SEVERITY_LEVEL_CODE', 'FIRST_ASSIGNMENT_DATETIME', 'FIRST_ON_SCENE_DATETIME', 'BOROUGH', 'INCIDENT_DISPATCH_AREA', 'ZIPCODE']]
br2
# ## Location frequencies
# +
zips_freq = {}
for index, row in br2.iterrows():
if not math.isnan(row['ZIPCODE']):
if row['ZIPCODE'] not in zips_freq.keys():
zips_freq[row['ZIPCODE']] = 1
else:
zips_freq[row['ZIPCODE']] += 1
zips_freq
# -
total_zips = 0
for k in zips_freq.keys():
total_zips += zips_freq[k]
total_zips
# +
zipcode_freq = {}
for k in zips_freq.keys():
zipcode_freq[k] = zips_freq[k]/total_zips
zipcode_freq
# -
total_zips = 0
for k in zipcode_freq.keys():
total_zips += zipcode_freq[k]
total_zips
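# The same relative frequencies can be obtained in one line with pandas;
# shown here only as a cross-check of the loop above.
br2['ZIPCODE'].value_counts(normalize=True)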
ambs = {10459: 4, 10458: 2, 11706: 38, 11223: 19, 10466: 1, 10461: 159, 10473: 169, 10465: 2}
ambs
import matplotlib.pyplot as plt
# +
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1, 2, 38, 169, 159, 19, 4, 2]
plt.plot(x, y)
# -
from scipy.stats import norm
mean,std=norm.fit(y)
plt.hist(y, bins=30, density=True)  # 'data' was undefined; histogram the fitted counts y ('normed' is deprecated)
xmin, xmax = plt.xlim()
print(y)
print(mean)
print(std)
y = norm.pdf(y, mean, std)
plt.plot(x, y)
plt.show()
print(x)
print(y)
print(mean)
print(std)
x
y
np.mean(y)
zips = zipcode_freq.keys()
zips
# +
dists_to_stations = {}
for k in zipcode_freq.keys():
dists_to_stations[k] = []
dists_to_stations
# -
# #### based on google maps data
# +
station1 = 10458
station2 = 10459
station3 = 10461
station4 = 10465
station5 = 10466
station6 = 10473
#0: 1028 Freeman Street, Bronx, NY 10459: 4
#1: 1624 Stillwell Ave, Bronx, NY 10461: 56
#2: 441 East Fordham Road, Bronx, NY 10458: 2
#3: 1028 Freeman Street, Bronx, NY 11706: 38
#4: 2593 West 13th St, Brooklyn, NY 11223 -> 1439 Ferris Pl, Bronx, NY 10461: 19
#5: 700 Havemeyer Avenue, Bronx, NY 10473: 1
#6: 111 East 210 Street, Bronx, NY 10466: 1
#7: 1624 Stillwell Avenue, Bronx, NY 10461: 103
#8: 700 Havermeyer Avenue, Bronx, NY 10473: 168
#9: 3955 E Tremont Ave, Bronx, NY, 10465: 2
dists_to_stations[10451.0] = [5.2, 9.5, 4.6, 2.6, 8.6, 7.9, 6.4, 9.5, 6.9, 7.9]
dists_to_stations[10452.0] = [4.0, 9.0, 3.2, 4.0, 5.7, 5.6, 7.3, 9.0, 5.6, 6.5]
dists_to_stations[10453.0] = [3.8, 8.8, 1.8, 3.8, 5.5, 5.3, 2.8, 8.8, 5.3, 6.3]
dists_to_stations[10454.0] = [2.5, 6.8, 5.8, 2.5, 5.9, 5.2, 7.4, 6.8, 5.2, 5.9]
dists_to_stations[10455.0] = [2.1, 7.0, 6.0, 2.1, 6.0, 5.4, 7.6, 7.0, 5.4, 6.1]
dists_to_stations[10456.0] = [2.2, 6.5, 2.7, 2.2, 5.3, 4.9, 6.6, 6.5, 5.1, 6.1]
dists_to_stations[10457.0] = [2.2, 7.2, 1.2, 2.2, 3.9, 3.8, 5.3, 7.2, 3.8, 4.7]
dists_to_stations[10458.0] = [4.3, 3.2, 0.3, 4.3, 3.8, 5.0, 2.2, 3.2, 5.0, 6.0]
dists_to_stations[10459.0] = [0.5, 5.4, 3.7, 0.5, 4.4, 3.9, 6.1, 5.4, 3.8, 4.5]
dists_to_stations[10460.0] = [1.4, 2.5, 1.4, 1.4, 2.2, 2.8, 3.2, 2.5, 2.8, 3.7]
dists_to_stations[10461.0] = [4.9, 1.3, 3.9, 4.9, 0.4, 2.2, 5.6, 1.3, 2.2, 2.1]
dists_to_stations[10462.0] = [2.3, 1.8, 2.4, 2.3, 1.2, 2.2, 4.2, 1.7, 2.2, 4.0]
dists_to_stations[10463.0] = [7.3, 5.3, 2.8, 9.7, 9.0, 8.2, 1.8, 5.3, 8.9, 9.8]
dists_to_stations[10464.0] = [8.7, 3.7, 6.2, 8.7, 4.6, 6.2, 8.0, 3.7, 6.2, 5.6]
dists_to_stations[10465.0] = [4.2, 3.0, 5.6, 4.2, 2.1, 1.5, 7.2, 3.0, 1.5, 0.9]
dists_to_stations[10466.0] = [6.7, 3.6, 5.0, 6.7, 4.6, 6.0, 3.3, 3.6, 6.0, 5.5]
dists_to_stations[10467.0] = [6.3, 4.0, 1.7, 4.9, 5.8, 5.7, 0.6, 4.0, 5.7, 6.6]
dists_to_stations[10468.0] = [4.9, 3.9, 1.2, 5.0, 4.5, 5.8, 1.5, 4.0, 5.8, 6.7]
dists_to_stations[10469.0] = [7.5, 1.5, 2.8, 7.5, 2.4, 4.9, 2.2, 2.5, 4.9, 4.4]
dists_to_stations[10470.0] = [6.5, 4.5, 4.8, 6.5, 6.1, 7.3, 3.1, 4.5, 7.3, 8.2]
dists_to_stations[10471.0] = [8.8, 13.0, 4.9, 8.4, 8.0, 9.2, 3.7, 7.4, 9.9, 11.0]
dists_to_stations[10472.0] = [1.5, 3.9, 3.2, 1.6, 3.0, 1.7, 5.1, 3.9, 1.7, 3.0]
dists_to_stations[10473.0] = [3.3, 4.2, 4.4, 3.3, 2.6, 1.2, 6.0, 4.2, 1.2, 3.3]
dists_to_stations[10474.0] = [2.2, 6.5, 5.9, 2.2, 5.5, 5.3, 7.5, 6.9, 5.3, 5.6]
dists_to_stations[10475.0] = [7.6, 2.6, 5.2, 7.6, 3.5, 4.9, 6.9, 2.6, 5.1, 4.4]
dists_to_stations[10803.0] = [8.5, 3.5, 6.0, 8.5, 4.4, 6.0, 7.8, 3.5, 6.0, 5.4]
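# One way this lookup table might be used (illustrative only, not part of the
# original preprocessing): for a given incident zipcode, pick the index of the
# closest of the ten stations listed in the comments above.
nearest_station_idx = {z: int(np.argmin(d)) for z, d in dists_to_stations.items()}
nearest_station_idx[10459.0]  # -> 0, i.e. the 1028 Freeman Street station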
# +
{10459: 42,
10458: 2,
10466: 1,
10461: 178,
10473: 169,
10465: 2}
station1 = 10458
station2 = 10459
station3 = 10461
station4 = 10465
station5 = 10466
station6 = 10473
dists_to_stations
# -
br2
| 4,755 |
/Business Data Science/Business Data Science 2.ipynb
|
3e747d3faf906aa1e503299d92c605085470a0e1
|
[] |
no_license
|
wilshirefarm/Data-Science-Jupyter-Notebooks
|
https://github.com/wilshirefarm/Data-Science-Jupyter-Notebooks
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 364,382 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
from environment import RNAInvEnvironment, make_vec_env, Monitor
from RNA_helper import get_puzzle
import torch as th
from models import EmbeddinsFeatureExtractor
from stable_baselines3.common import logger
# 1, 41, 84, 92, 97, 5
puzzle_idx=5
objective_structure, sequence, puzzle_name = get_puzzle(idx=puzzle_idx, return_name=True, verbose=False)
len(objective_structure)
max_steps = 1
features_dim = 512
EMBEDDING_DIM = 16
metric = 'energies_mse'
model_name = puzzle_name.lower().replace(' ', '_') + f'_{features_dim}_{EMBEDDING_DIM}_{metric}'
print(model_name)
env_kwargs = {
'objective_structure': objective_structure,
'max_steps': max_steps,
'tuple_obs_space': True,
'metric_type': metric,
'sequences_file': f'solved_puzzles/{model_name}.txt'
}
n_envs=12
env = make_vec_env(RNAInvEnvironment, n_envs=n_envs, env_kwargs=env_kwargs)
# env = RNAInvEnvironment(objective_structure=objective_structure, max_steps=max_steps, tuple_obs_space=True)
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import EvalCallback
from stable_baselines3.common.policies import ActorCriticPolicy
policy_kwargs = dict(
features_extractor_class=EmbeddinsFeatureExtractor,
features_extractor_kwargs=dict(EMBEDDING_DIM=EMBEDDING_DIM, features_dim=features_dim),
)
model = PPO(
ActorCriticPolicy,
env,
verbose=1,
tensorboard_log='tensorboard_logs',
n_steps=512,
gamma=0.99,
policy_kwargs=policy_kwargs
)
# +
# log_path = f"logs/{model_name}"
# # set up logger
# new_logger = logger.configure(log_path, ["stdout", "csv", "log", "tensorboard", "json"])
# model.set_logger(new_logger)
# +
# eval_env = make_vec_env(
# RNAInvEnvironment, n_envs=1,
# env_kwargs={'objective_structure': objective_structure, 'max_steps': max_steps, 'tuple_obs_space': True}
# )
eval_env = make_vec_env(
RNAInvEnvironment, n_envs=1,
env_kwargs=env_kwargs,
monitor_dir=f'logs/{model_name}',
monitor_kwargs={
'info_keywords': (
'free_energy',
'structure_distance',
'energy_to_objective',
'energy_reward',
'distance_reward',
'folding_struc',
'sequence',
'solved',
'unique_sequences_N'
)
}
)
# -
eval_callback = EvalCallback(
eval_env = eval_env,
eval_freq=512*5,
n_eval_episodes=1024,
deterministic=True,
verbose=1,
best_model_save_path=f'models/{model_name}',
)
# %%time
model.learn(
total_timesteps=1_000_000,
tb_log_name=model_name,
callback=[eval_callback]
)
# # Question 4
def displayTopK(year, k):
filepath = "Names/yob" + str(year) + ".txt"
df4 = pd.read_csv(filepath, header=None)
df4 = df4.sort_values(by=2, ascending=False)
return df4.head(k)
displayTopK(1997, 5)
def nameFrequency(year, name):
filepath = "Names/yob" + str(year) + ".txt"
df5 = pd.read_csv(filepath, header=None)
criteria = df5[0] == name
return df5[criteria]
nameFrequency(1997, 'William')
def nameRelativeFrequency(year, name):
filepath = "Names/yob" + str(year) + ".txt"
df6 = pd.read_csv(filepath, header=None)
criteria = df6[0] == name
maleNames = df6[1] == 'M'
femaleNames = df6[1] == 'F'
totalMaleNames = df6[maleNames][2].sum()
totalFemaleNames = df6[femaleNames][2].sum()
df6Male = df6[maleNames].apply(lambda x : x/totalMaleNames if x.name == 2 else x)
df6Female = df6[femaleNames].apply(lambda x : x/totalFemaleNames if x.name == 2 else x)
frames = df6Male, df6Female
df6 = pd.concat(frames)
return df6[criteria]
nameRelativeFrequency(1995, 'Joseph')
filesDict = {}
for year in range(1880,2016):
filepath = "Names/yob" + str(year) + ".txt"
data = pd.read_csv(filepath, header=None)
filesDict[year] = data
Names = []
for file in filesDict.values():
for name in file[0]:
if name not in Names:
Names.append(name)
len(Names)
# +
namesThatFlipped = []
#testNames = ['Sarah', 'Courtney']
for name in Names:
yearDict = {}
for year in range(1880,2016):
frequencies = nameFrequency(year, name)
if len(frequencies) == 0:
yearDict[year] = None
elif len(frequencies) > 1:
if frequencies[2].iloc[0] > frequencies[2].iloc[1]:
yearDict[year] = frequencies[1].iloc[0]
else:
yearDict[year] = frequencies[1].iloc[1]
else:
yearDict[year] = frequencies[1].iloc[0]
if 'M' in yearDict.values() and 'F' in yearDict.values():
namesThatFlipped.append(name)
#year = 1881
#filepath = "Names/yob" + str(year) + ".txt"
#df7 = pd.read_csv(filepath, header=None)
#df7
#d8 = pd.DataFrame()
# -
print(namesThatFlipped)
# # Question 5
from statsmodels.formula.api import ols
kidiq = pd.read_stata('kidiq.dta')
kidiq.head(5)
est = ols(formula="kid_score ~ mom_hs*mom_iq", data=kidiq).fit()
est.summary()
print("The interaction variable of mom_hs and mom_iq is statistically significant considering the p-value of 0.003 is less \t than 0.05")
kidiq_hs = kidiq[kidiq.mom_hs==1]
kidiq_nohs = kidiq[kidiq.mom_hs==0]
X_hs = kidiq_hs.as_matrix(["mom_iq"])
Y_hs = kidiq_hs.as_matrix(["kid_score"])
X_nohs = kidiq_nohs.as_matrix(["mom_iq"])
Y_nohs = kidiq_nohs.as_matrix(["kid_score"])
# And we can plot the fit again:
plt.scatter(X_hs, Y_hs, color='pink', label = 'hs')
plt.scatter(X_nohs, Y_nohs, color='green', label = 'no_hs')
plt.plot(X_hs, est.params[0]+est.params[2]*X_hs + est.params[1] +est.params[3]*X_hs, color='darkred',
linewidth=2)
plt.plot(X_nohs, est.params[0]+ est.params[2]*X_nohs, color='green',
linewidth=2)
plt.legend()
sns.despine()
print("Both the p-value and the plot above show that the interaction term should be there to make a better prediction. The interaction term, with an coefficient of -0.48, indicates that when a kid's mom went to high school, the coefficient \t of mom_iq should be (0.97-0.48). It indicates that when the mom if a kid's mom went to high school, her iq has a \t smaller impact on kid's iq compared to when a mom didn't go to high school.")
| 6,556 |
/DengAIML.ipynb
|
d7a1459db1d69348bca4b4ba3d0c207eca19c23a
|
[] |
no_license
|
Sampanna-Sharma/DengAI
|
https://github.com/Sampanna-Sharma/DengAI
| 0 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 324,615 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="https://bit.ly/2VnXWr2" width="100" align="left">
# # Bus
#
# This bus has a passenger entry and exit control system to monitor the number of occupants it carries and thus detect when there are too many.
#
# At each stop, the entry and exit of passengers is represented by a tuple consisting of two integer numbers.
# ```
# bus_stop = (in, out)
# ```
# The succession of stops is represented by a list of these tuples.
# ```
# stops = [(in1, out1), (in2, out2), (in3, out3), (in4, out4)]
# ```
#
# ## Tools
# You don't necessarily need to use all the tools. Maybe you opt to use some of them or completely different ones, they are given to help you shape the exercise. Programming exercises can be solved in many different ways.
# * Data structures: **lists, tuples**
# * Loop: **while/for loops**
# * Functions: **min, max, len**
#
# ## Tasks
# Variables
stops = [(10, 0), (4, 1), (3, 5), (3, 4), (5, 1), (1, 5), (5, 8), (4, 6), (2, 3)]
# #### 1. Calculate the number of stops.
stops_count = len(stops)
print("The number of stops is ",stops_count)
# #### 2. Assign to a variable a list whose elements are the number of passengers at each stop (in-out).
# Each item depends on the previous item in the list + in - out.
passenger_count = []  # running occupancy; was used before being initialised
for stop in stops:
    passengers_diff = stop[0] - stop[1]
    if len(passenger_count) == 0:
        passenger_count.append(passengers_diff)
    else:
        passenger_count.append(passenger_count[-1] + passengers_diff)
# #### 3. Find the maximum occupation of the bus.
print("maximum occupation is ",max(passenger_count))
# #### 4. Calculate the average occupation. And the standard deviation.
avg_occupation = sum(passenger_count)/len(passenger_count)
print("average occupation in the bus is ", avg_occupation)
ias=False)
df = poly.fit_transform(df)
df = pd.DataFrame(df)
df.columns = colnames
return df
df = add_interactions(df)
print(df.head(5))
# +
from sklearn.model_selection import train_test_split  # sklearn.cross_validation has been removed in recent scikit-learn
import sklearn.feature_selection
from sklearn.preprocessing import MinMaxScaler
try:
df = df.drop(columns = ['iq + sj'])
except:
pass
#medd = df.median()
#std = df.std()
maxx = df.max()
maxx['year'] = 2013
print(maxx)
minn = df.min()
df = (df-minn)/(maxx - minn)
print(df.head())
X_train, X_test, y_train, y_test = train_test_split(df, ef, train_size=1, random_state=1)
select = sklearn.feature_selection.SelectKBest(k=50)
selected_features = select.fit(X_train, y_train)
indices_selected = selected_features.get_support(indices=True)
colnames_selected = [df.columns[i] for i in indices_selected]
X_train_selected = X_train[colnames_selected]
X_test_selected = X_test[colnames_selected]
# -
print((colnames_selected))
# +
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import HuberRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.ensemble import BaggingClassifier
def find_model_perf(X_train, y_train, X_test, y_test,model):
model.fit(X_train, y_train)
y_hat = [int(max(x,0)) for x in model.predict(X_test)]
print(model,"\n","testing error", mean_absolute_error(y_test,y_hat))
print("\n")
return np.array(y_hat)
# -
model = [HuberRegressor(),
GradientBoostingRegressor(learning_rate=0.05,n_estimators =220 ,random_state=10,max_features=42,max_depth = 5, min_samples_split = 2),
ExtraTreesRegressor(n_estimators=131,max_depth=15,min_samples_split = 2)]
for mod in model:
find_model_perf(X_train_selected, y_train, X_test_selected, y_test,mod)
''''from sklearn.model_selection import GridSearchCV
param_test1 = {'n_estimators':range(130,300,10)}
gsearch1 = GridSearchCV(estimator = ExtraTreesRegressor(max_depth=15,min_samples_split = 2),
param_grid = param_test1, scoring='mean_absolute_error',n_jobs=4,iid=False, cv=5)
gsearch1.fit(X_train_selected,y_train)
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_''''
FinalModel = GradientBoostingRegressor(learning_rate=0.05,n_estimators =220 ,random_state=10,max_features=42,max_depth = 5, min_samples_split = 2)
FinalModel.fit(X_train_selected,y_train)
FinalModel.fit(X_test_selected,y_test)
FinalModel2 = ExtraTreesRegressor(n_estimators=131,max_depth=15,min_samples_split = 2)
FinalModel2.fit(X_train_selected,y_train)
FinalModel2.fit(X_test_selected,y_test)
# +
ind = ind.interpolate()
ind = add_interactions(ind)
ind = ind.drop(columns = ['iq + sj'])
#minn = ind.min()
#maxx = ind.max()
maxx['year'] = 2013
ind = (ind-minn)/(maxx-minn)
# -
X_inf_selected = ind[colnames_selected]
y_inf2 = np.array([int(max(x,0)) for x in FinalModel.predict(X_inf_selected)])
y_inf1 = np.array([int(max(x,0)) for x in FinalModel2.predict(X_inf_selected)])
print(y_inf2)
print(y_inf1)
y_inf2 = (y_inf1 + y_inf2)/2
print(y_inf2)
gf = pd.read_csv("Dengue/submission_format.csv")
mat = np.array(y_inf2, dtype=np.int64)  # np.int8 would overflow for weekly case counts above 127
gf['total_cases'] = pd.DataFrame(mat)
print(gf.head())
gf.to_csv("MLesemCV2av.csv",index=False)
| 5,386 |
/Sql in Spark.ipynb
|
1d86f2bb21380dc9e4044757f602f69818f88061
|
[] |
no_license
|
DianaAtef/Sql-And-Spark
|
https://github.com/DianaAtef/Sql-And-Spark
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 107,361 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="qhBfwzBgioTD"
# **Data about passengers:**
# * Name
# * Age
# * Gender.
#
# + [markdown] id="DIEH8iZqi-sk"
# ## Install and Import Libraries
# Let's install PySpark:
# + id="377K-5QTnyCL"
# 1. install all the dependencies in Colab environment i.e. Apache Spark 3.0.1 with hadoop 2.7, Java 8 and Findspark to locate the spark in the system
# !apt-get install openjdk-8-jdk-headless -qq > /dev/null
# !wget -q https://archive.apache.org/dist/spark/spark-3.0.1/spark-3.0.1-bin-hadoop2.7.tgz
# !tar xf spark-3.0.1-bin-hadoop2.7.tgz
# !pip install -q findspark
# 2. Setup Environment Variables
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-3.0.1-bin-hadoop2.7"
# + [markdown] id="SDp80mG9jmfU"
# ## Build Spark Session
# + id="ttzML9fpjE5a"
import findspark
findspark.init()
# + id="mM9XyYNFnrzw"
import pyspark
# + id="Rw_kl4Vbnrzx"
from pyspark.sql import SparkSession
# + id="vV7IaS5Jnrzy"
spark = SparkSession.builder.appName('Ml').getOrCreate()
# + [markdown] id="TiqECDzLj1Mg"
# ## Data Loading
#
# + [markdown] id="vn-hxNggkTqV"
# You have two datasets:
# * Train
# * Test.
# + [markdown] id="x8-A8M7QmKDJ"
# Read two datasets:
# * Train
# * Test.
#
#
# + id="Mx2qAccBk15y"
train_filepath = 'train.csv'
test_filepath = 'test.csv'
# + colab={"base_uri": "https://localhost:8080/"} id="kCg4KYdYnrz2" outputId="6508616a-c877-4025-c0ac-e9fed3165458"
train_df = spark.read.csv(train_filepath,header=True,inferSchema=True)
train_df.printSchema()
# + colab={"base_uri": "https://localhost:8080/"} id="9TRy2NmLnrz3" outputId="84d94ce0-c0d2-45e2-8579-7434457d77e0"
train_df.show(5)
# + colab={"base_uri": "https://localhost:8080/"} id="KdeI44m9nrz4" outputId="afa0d543-8243-4ed4-b11b-f55703de2541"
test_df = spark.read.csv(test_filepath,header=True,inferSchema=True)
test_df.printSchema()
# + colab={"base_uri": "https://localhost:8080/"} id="sjHT4Wplnrz4" outputId="353b12e7-4b60-4e56-d138-315c8afa3e06"
test_df.show(5)
# + [markdown] id="Lj2ANTnWmSCq"
# Let's work with train dataset:
# + [markdown] id="b5mWJR30lNs5"
# **Confirm if this is a dataframe or not:**
# + id="tEYTePrzk9yl" colab={"base_uri": "https://localhost:8080/"} outputId="4d7050c0-10d2-46da-a9f9-28fcf7d30b07"
type(train_df)
# + [markdown] id="lvLJElPrlT4i"
# **Show 5 rows.**
# + id="jYwhqvV8lnO0" colab={"base_uri": "https://localhost:8080/"} outputId="a8d7f348-2948-4e02-b5aa-a462d3ac8a56"
train_df.show(5)
# + [markdown] id="6QIYVxRXlnnw"
# **Display schema for the dataset:**
# + id="pcvERiICl1Ep" colab={"base_uri": "https://localhost:8080/"} outputId="6dfd5ee5-93e0-4ee3-eec2-9bc73b490dc9"
train_df.printSchema()
# + [markdown] id="xmE3Wd80l1S6"
# **Statistical summary:**
# + id="cNY0SItol5Mo" colab={"base_uri": "https://localhost:8080/"} outputId="58436a62-46f6-4b48-8270-7be525a971e8"
train_df.describe(train_df.columns).show(truncate = False)
# + [markdown] id="HiFaIEQTl70_"
# ## EDA - Exploratory Data Analysis
# + [markdown] id="PSNPOnP8mw2Q"
# **Display count for the train dataset:**
# + id="zrtpG11Fl9HM" colab={"base_uri": "https://localhost:8080/"} outputId="c5d94e2e-3017-468e-d359-87405c8bda7a"
train_df.count()
# + [markdown] id="t_6nnTfxm9_x"
# **Can you answer this question:**
#
# **How many people survived, and how many didn't survive?**
#
# **Please save data in a variable.**
# + id="QDoqPwyomYxA"
survived_df = train_df.groupby("Survived").count()
# + [markdown] id="P8DUtZXPn46m"
# **Display your result:**
# + id="0XHAK8ceoCMU" colab={"base_uri": "https://localhost:8080/"} outputId="0c4e7017-033a-4ba7-bd7d-66d178a7d040"
survived_df.show()
# + [markdown] id="Ygsg7wQqor9a"
# **Can you display your answer in ratio form?(Hint: Use UDF.)**
#
#
#
#
#
# + id="3uiaN29PoQnf" colab={"base_uri": "https://localhost:8080/"} outputId="dfa2395a-074e-4648-d316-4472d1d15f2d"
import pyspark.sql.functions as f
total = survived_df.select("count").agg({"count": "sum"}).collect().pop()['sum(count)']
result = survived_df.withColumn('percent', (survived_df['count']/total)).show()
# + id="PvzEwesgoQ3s" colab={"base_uri": "https://localhost:8080/"} outputId="135e12bb-32d4-4c6c-a397-538ee1160870"
total = train_df.select('survived').count()
total
# + colab={"base_uri": "https://localhost:8080/"} id="uGmbsCjzttap" outputId="72234e72-b4cc-4ecc-ec96-2b0bf9d26408"
import pyspark.sql.functions as F
survived_df.agg(F.sum("count")).collect()[0][0]
# + [markdown] id="d57UGXmxnrz9"
# # Another Method
# + colab={"base_uri": "https://localhost:8080/"} id="OA99A9hTnrz9" outputId="4d51ca97-2cfc-4403-b81c-b7204b2d4580"
survived_df.withColumn('ratio',survived_df['count'] / total).show()
# + id="mQxy8PJZzgiG"
from pyspark.sql.types import FloatType, IntegerType, StringType
from pyspark.sql.functions import udf,col
# + id="vqdSH2Zczpzv"
def ratio(number):
return number/total
# + [markdown] id="3HGHXg_n0GrL"
# # Using UDF
# + id="uu7qsFcmzk_6"
ratio_udf = udf(lambda z: ratio(z),FloatType())
# + colab={"base_uri": "https://localhost:8080/"} id="woXHtciQzxc6" outputId="59ba20aa-6cfb-45bb-c2e6-1317e0a4e5f0"
survived_df.withColumn("ratio", ratio_udf(col("count"))).show(truncate=False)
# + [markdown] id="Q7Aker_lp1h4"
# **Can you get the number of males and females?**
#
# + id="XllkDlo3ongJ" colab={"base_uri": "https://localhost:8080/"} outputId="46c4746d-b87e-4bd3-d1e5-bb36e1df7c22"
count_df = train_df.groupby("Sex").count()
count_df.show()
# + colab={"base_uri": "https://localhost:8080/"} id="SFkL-uhJnrz-" outputId="cdb4e8b0-f150-4739-dc3d-9e02c45bef90"
total_female = count_df.select('count').where(count_df.Sex == 'female').show()
# + colab={"base_uri": "https://localhost:8080/"} id="yvM2fO01nrz-" outputId="b1d7b11e-c951-43e0-ce36-decdddec5264"
total_female = train_df.select('Sex').where(train_df.Sex == 'female').count()
total_female
# + colab={"base_uri": "https://localhost:8080/"} id="HZ1RIm-ynrz-" outputId="f56f4c7b-0917-4ac1-d8d1-ee48fb4cf473"
total_male = train_df.select('Sex').where(train_df.Sex == 'male').count()
total_male
# + [markdown] id="YHFaJ15zqtEV"
# **1. What is the average number of survivors of each gender?**
#
# **2. What is the number of survivors of each gender?**
#
# (Hint: Group by the "sex" column.)
# + id="NUikH7MUqdKq" colab={"base_uri": "https://localhost:8080/"} outputId="d897fd1c-caf6-4e21-fabf-2623cea9d4f8"
survivors = train_df.groupby(["Sex","Survived"]).count()
survivors.show()
# + colab={"base_uri": "https://localhost:8080/"} id="z7J-0wQBnrz_" outputId="67221496-74d9-488a-b5d9-a9aaa3c5d563"
survivors_df = survivors.select("Sex","Survived","count").where(survivors.Survived == 1)
survivors_df.show()
# + colab={"base_uri": "https://localhost:8080/"} id="YwMk3uVjnrz_" outputId="d6608f9c-4dca-45e1-cc80-d5ce08d9a0c1"
from pyspark.sql.functions import when
from pyspark.sql.functions import col
survivors_avg = survivors_df.withColumn("avg", when(col("Sex") == "female",survivors_df['count']/total_female)
.when(col("Sex") == "male",survivors_df['count']/total_male)).show()
# + [markdown] id="kCEdYNdArtRN"
# **Create temporary view PySpark:**
# + id="YjlK6HDUqsI5"
train_df.createOrReplaceTempView('train_df_view')
# + [markdown] id="JXNePifnshHr"
# **How many people survived, and how many didn't survive? By SQL:**
# + id="0HxfPRTMslqk" colab={"base_uri": "https://localhost:8080/"} outputId="c65f4f62-c1b1-418c-f129-ed0e7e86c8fd"
spark.sql('SELECT Survived,COUNT(Survived) AS Count FROM train_df_view GROUP BY Survived').show()
# + [markdown] id="sVCdY6EasFWV"
# **Can you display the number of survivors from each gender as a ratio?**
#
# (Hint: Group by "sex" column.)
#
# **Can you do this via SQL?**
# + id="7xQc3pUUr3HF" colab={"base_uri": "https://localhost:8080/"} outputId="586dcb44-f93a-4694-ff76-343deda17fae"
spark.sql('SELECT Sex,round(SUM(Survived) / COUNT(Survived),2) AS Gender_ratio FROM train_df_view GROUP BY Sex').show()
# + [markdown] id="j6QXc5V8uu3Y"
# **Display a ratio for p-class:**
#
# + id="Mscs2mDFdFsD" colab={"base_uri": "https://localhost:8080/"} outputId="5b8b70bc-6d48-4afa-8bf5-3ef6ab7f4c4c"
spark.sql('SELECT Pclass,round(SUM(Survived) / COUNT(Survived),2) AS Gender_ratio FROM train_df_view GROUP BY Pclass').show()
# + [markdown] id="EX0klxwAvg6J"
# **Let's take a break and continue after this.**
# + [markdown] id="_ctM9t8atxJl"
# ## Data Cleaning
# + [markdown] id="7CfanZTCt6Wk"
# **First and foremost, we must merge both the train and test datasets. (Hint: The union function can do this.)**
#
#
# + id="8Nm8S1K0r4uY" colab={"base_uri": "https://localhost:8080/"} outputId="c2020039-055c-415c-dac8-3eb3f0240344"
all_data = train_df.union(test_df)
all_data.show(5)
# + [markdown] id="jI7AD8FLz3iO"
# **Display count:**
# + id="I4rd9e6nzzr5" colab={"base_uri": "https://localhost:8080/"} outputId="db80505d-1690-444a-e361-e5bb2847ebb1"
all_data.count()
# + [markdown] id="lVQlr9vDy7Y4"
# **Temporary view PySpark:**
# + id="s_WERAL8wvJa"
all_data.createOrReplaceTempView('all_df_view')
# + [markdown] id="5R4Miuy0z_uP"
# **Can you define the number of null values in each column?**
#
# + id="0LMOalKBxhpD" colab={"base_uri": "https://localhost:8080/"} outputId="4ee2b89c-315e-4583-bf2f-caf20638d765"
from pyspark.sql.functions import *
all_data.select([count(when(isnull(c), c)).alias(c) for c in all_data.columns]).show()
# + [markdown] id="tBX8cJ000aqe"
# **Create Dataframe for null values**
#
# 1. Column
# 2. Number of missing values.
# + id="ITmyUelNxjJM" colab={"base_uri": "https://localhost:8080/"} outputId="ae3aa86d-40f3-4a13-8c27-09b3c85d5d70"
new_df = all_data.select([count(when(isnull(c), c)).alias(c) for c in all_data.columns])
new_df.show()
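# + [markdown]
# The prompt asks for a two-column (column, number of missing values) table; one way
# to get that shape from the one-row result above, sketched via the pandas bridge:
# +
missing_pdf = new_df.toPandas().T.reset_index()
missing_pdf.columns = ['column', 'missing']
missing_pdf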
# + [markdown] id="cuKrOi5a0-Ma"
# ## Preprocessing
# + [markdown] id="Txa8NZIO1JaP"
# **Can you show me the name column from your temporary table?**
# + id="m7yXqJoJy35k" colab={"base_uri": "https://localhost:8080/"} outputId="e971830a-85e9-46d4-db6b-b552663b7a0d"
spark.sql('SELECT Name FROM all_df_view').show(5,truncate = False)
# + [markdown] id="3F0F9cTZ2Cuz"
# **Run this code:**
# + id="0kx6OcB-2BBT"
combined = all_data.withColumn('Title',regexp_extract(col("Name"),"([A-Za-z]+)\.",1))
combined.createOrReplaceTempView('combined_view')
# + [markdown] id="xbZeUWS12r59"
# **Display the title and count "Title" column:**
# + id="hGkFMtlp1FAI" colab={"base_uri": "https://localhost:8080/"} outputId="6c5aae42-f624-4781-f6b7-76a71eba8ae4"
spark.sql('SELECT Title,COUNT(Title) FROM combined_view GROUP BY TITLE').show(truncate = False)
# + [markdown] id="nLBQDKYu4JOa"
# **We can see that Dr, Rev, Major, Col, Mlle, Capt, Don, Jonkheer, Countess, Ms, Sir, Lady, and Mme are really rare titles, so create Dictionary and set the value to "rare".**
# + id="MqG4zzUFnr0E"
rare_list = ["Dr", "Rev", "Major", "Col", "Mlle", "Capt", "Don", "Jonkheer", "Countess", "Ms", "Sir", "Lady", "Mme"]
# + id="rjnx5l5r2Qaf"
title_dictionary = {"Dr":"rare","Rev":"rare","Major":"rare","Col":"rare", "Mlle":"rare", "Capt":"rare", "Don":"rare", "Jonkheer":"rare", "Countess":"rare", "Ms":"rare", "Sir":"rare", "Lady":"rare", "Mme":"rare"}
# + [markdown] id="9wrE95Cv7Oqh"
# **Run the function:**
# + id="HdDbWuDl7Pf4"
def impute_title(title):
    return title_dictionary.get(title, title)  # titles_map was undefined; the dictionary above maps rare titles to "rare"
# + id="jYD2-j4bnr0F"
def impute_title(title):
if title in rare_list:
return "rare"
else:
return title
# + [markdown] id="f5EQVIhK7a9R"
# **Apply the function on "Title" column using UDF:**
# + id="2myr0BfS2eJe"
udf_data = udf(lambda z: impute_title(z),StringType())
# + id="rBAiIOn77XFa"
#udf_data = udf(impute_title)
title_df = combined.withColumn("Title",udf_data(combined['Title']))
# + [markdown] id="sn8ewllf7kiV"
# **Display "Title" from table and group by "Title" column:**
# + id="J9sjQb084GU6" colab={"base_uri": "https://localhost:8080/"} outputId="758cd5fa-2aeb-47aa-d0b9-cbe1c4c5379c"
title_df.groupBy("Title").count().show()
# + colab={"base_uri": "https://localhost:8080/"} id="9_gCYuU-zjdE" outputId="1f393457-b30c-4250-b373-1f9ef2614201"
combined.show(truncate = False)
# + [markdown] id="-H45QNLj9vJp"
# ## **Preprocessing Age**
# + [markdown] id="XwRAhumK-u__"
# **Based on the age mean, you will fill in the missing age values:**
# + colab={"base_uri": "https://localhost:8080/"} id="RbOhJb9pzyYy" outputId="702b2f63-48b2-4b57-c513-dd3b2ed4f3d2"
avg_age = combined.select(F.mean("Age")).collect()
avg_age[0][0]
# + [markdown] id="JLPivde8_GI-"
# **Fill missing age with age mean:**
# + id="lBgW8aFD90PA" colab={"base_uri": "https://localhost:8080/"} outputId="4ca66f85-e9e0-49fc-a859-ea3e62738671"
df_filled = combined.na.fill(avg_age[0][0],subset = ['Age'])
df_filled.show(truncate= False)
# + [markdown] id="jGsnUz-m_P95"
# ## **Preprocessing Embarked**
# + [markdown] id="iHbbamcXMSYP"
# **Select Embarked, count them, order by count Desc, and save in grouped_Embarked variable:**
#
#
#
# + id="O1rINM412KjK"
grouped_Embarked = df_filled.groupBy("Embarked").count().orderBy('count',ascending = False)
# + [markdown] id="E1qf5u2IOQrx"
# **Show groupped_Embarked:**
# + id="jSFNDTNg_erb" colab={"base_uri": "https://localhost:8080/"} outputId="4afd8940-fcd1-4c7b-a2e5-3b1f50ccdfd9"
grouped_Embarked.show()
# + [markdown] id="mzQWYgKBMrbp"
# **Get the most frequent Embarked value (the mode):**
# + id="uu46QWrn_gCX"
mode = grouped_Embarked.orderBy("count",ascending = False).first()[0]
# + [markdown] id="L8vhoEs8N2w_"
# **Fill missing Embarked values with the mode:**
# + id="LdzQCRud_mAa" colab={"base_uri": "https://localhost:8080/"} outputId="e4e7ff1b-263f-436a-ff92-5e0ae7e3f469"
df_filled2 = df_filled.na.fill(mode,subset = ['Embarked'])
df_filled2.show()
# + [markdown] id="TEcdV5Vb_qR_"
# ## **Preprocessing Cabin**
# + [markdown] id="_BQzPs7tqhpA"
# **Replace "cabin" column with first char from the string:**
#
#
# + id="4b6L5pK0_nQz"
from pyspark.sql.functions import substring
df_filled3 = df_filled2.withColumn(
'Cabin', substring('Cabin', 1, 1))
# + [markdown] id="6H8XshnYj4k2"
# **Show the result:**
# + id="gJUQwnG1Oj2U" colab={"base_uri": "https://localhost:8080/"} outputId="913d409b-2515-4097-82bf-b1ae0c181641"
df_filled3.show()
# + [markdown] id="yzSDsWsUj9Im"
# **Create the temporary view:**
# + id="MR7CXTY7_tMJ"
df_filled3.createOrReplaceTempView("df_view")
# + [markdown] id="Gv7lfQFkrLlN"
# **Select "Cabin" column, count Cabin column, Group by "Cabin" column, Order By count DESC**
# + colab={"base_uri": "https://localhost:8080/"} id="IHdW3SsBwbP4" outputId="d74bda7e-c9c9-4e13-d543-6247c97b35db"
df_filled3.select("cabin").groupBy("cabin").count().orderBy("count",ascending = False).show()
# + id="A0tZG_mvrKXv" colab={"base_uri": "https://localhost:8080/"} outputId="3c5f53f1-5a7b-40a9-accf-021ac27c5be8"
spark.sql('''SELECT Cabin, count(*) AS count
FROM df_view
GROUP BY Cabin
ORDER BY count DESC''').show()
# + [markdown] id="1GR6j0LOsB4y"
# **Fill missing values with "U":**
# + id="mwq5CHEz_up_" colab={"base_uri": "https://localhost:8080/"} outputId="87fb1740-0013-4fa7-980f-c2a65d6cda32"
df_filled4 = df_filled3.na.fill("U",subset = ["Cabin"])
df_filled4.show(5,truncate = False)
# + colab={"base_uri": "https://localhost:8080/"} id="XTUjzy4Syhjr" outputId="2a5a1c03-52c2-4098-d488-a9b0f6145293"
df_filled4.select("cabin").groupBy("cabin").count().orderBy("count",ascending = False).show()
# + [markdown] id="RRnhA_5-0Hi4"
# **StringIndexer: A label indexer that maps a string column of labels to an ML column of label indices. If the input column is numeric, we cast it to string and index the string values. The indices are in [0, numLabels). By default, this is ordered by label frequencies so the most frequent label gets index 0. The ordering behavior is controlled by setting stringOrderType. Its default value is ‘frequencyDesc’.**
# + [markdown] id="1RIKlOX71GQ-"
# **StringIndexer(inputCol=None, outputCol=None)**
# + [markdown] id="c0c_Hf_b0R12"
# **Pipeline: ML Pipelines provide a uniform set of high-level APIs built on top of DataFrames that help users create and tune practical machine learning pipelines.**
# + [markdown] id="sWfXaZ0I4dXD"
# ____________________________________________
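# A minimal sketch of the frequencyDesc behaviour described above, run on a toy
# DataFrame (reuses the existing `spark` session; purely illustrative and not part
# of the original assignment):

# +
from pyspark.ml.feature import StringIndexer

toy = spark.createDataFrame([("a",), ("b",), ("b",), ("c",), ("b",)], ["letter"])
toy_indexer = StringIndexer(inputCol="letter", outputCol="letter_Index")
toy_indexer.fit(toy).transform(toy).show()  # 'b' is most frequent, so it gets index 0.0
# -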
# + [markdown] id="KmcT3Afi0tRf"
# **Create a list comprehension and use StringIndexer to convert the "Sex", "Embarked", "Title", and "Cabin" columns to column name+index columns like "Title_Index":**
# + colab={"base_uri": "https://localhost:8080/"} id="0t4l8Ufl4J2h" outputId="710c63a5-7838-4484-bec5-2f1aa82cd192"
df_filled4.show()
# + colab={"base_uri": "https://localhost:8080/"} id="T862HcQp9_x3" outputId="57fc7d77-3aa1-4856-d5db-422e6ad78555"
trainDF, testDF = df_filled4.randomSplit([.8,.2],seed=42)
print(f"There are {trainDF.count()} rows in the training set, and {testDF.count()} in the test set")
# + [markdown] id="RzFCa54R15-g"
# **Use Pipline to fit and transform:**
# + id="dZARTPDE0CQJ"
categoricalCols = ["Sex", "Embarked", "Title", "Cabin"]
# + colab={"base_uri": "https://localhost:8080/"} id="ODWqANXh0krP" outputId="6272dd07-bf70-4212-8719-b83312db5a46"
indexOutputCols = [x + "_Index" for x in categoricalCols]
indexOutputCols
# + id="9K0mX7SS3KsA"
from pyspark.ml.feature import StringIndexer
stringIndexer = StringIndexer(inputCols=categoricalCols,
outputCols=indexOutputCols,
handleInvalid='skip')
# + colab={"base_uri": "https://localhost:8080/"} id="UAEe9G0D5vbl" outputId="962c1870-c318-4ab9-bdb3-4b50f2521e91"
numericCols = [field for (field,dataType) in trainDF.dtypes
if ((dataType=='double') or (dataType=='int') and (field !='Survived') )]
numericCols
# + colab={"base_uri": "https://localhost:8080/"} id="DgbHsEzS67gp" outputId="364400b5-e361-4c4e-a97b-49cc96293333"
assemblerInputs = indexOutputCols + numericCols
assemblerInputs
# + id="on3Y7ptl7Ifd"
from pyspark.ml.feature import VectorAssembler
vecAssembler = VectorAssembler(inputCols=assemblerInputs,outputCol='features')
# + id="rqzHf7gP7QNg"
from pyspark.ml.classification import LogisticRegression
lr = LogisticRegression(labelCol='Survived',featuresCol='features')
from pyspark.ml import Pipeline
pipeline =Pipeline(stages = [stringIndexer,vecAssembler,lr])
# + id="Qwqw0O0Z8TLG"
pipelineModel = pipeline.fit(trainDF)
# + id="H7I0HMWU9VFs"
predDF = pipelineModel.transform(testDF)
# + colab={"base_uri": "https://localhost:8080/"} id="qxs60eXq-anH" outputId="6bd94578-7300-4a1c-879f-0c6bcfa639aa"
predDF.show()
# + [markdown] id="Qlm2ZbgR4Cyg"
# **Drop "Sex, PassengerId, Name, Title, SibSp, Parch, Ticket, Cabin, Embarked" columns**
# + id="XVj3iumKAiyv" colab={"base_uri": "https://localhost:8080/"} outputId="84823619-fc55-4713-d0c1-9273c6a5d89b"
drop_columns = ["Sex", "PassengerId", "Name", "Title", "SibSp", "Parch", "Ticket", "Cabin", "Embarked"]
df = df_filled4.drop(*drop_columns)
df.show(5)
# + [markdown] id="69Yc8MwC4ggb"
# **Convert to pandas**
# + id="R9Vfe1ZUAO9o"
df_panda = df.toPandas()
# + [markdown] id="nLJAstwQ4pgg"
# **Display result**
# + id="xiIo669gAmHb" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="cc3d8ea7-3bd5-4519-b843-6b2e63d45edb"
df_panda
# + [markdown] id="1FsiLsd9452v"
# **VectorAssembler: VectorAssembler(*, inputCols=None, outputCol=None) A feature transformer that merges multiple columns into a vector column.**
#
#
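# A tiny illustration of what "merging multiple columns into a vector column" means,
# on a toy frame (reuses the existing `spark` session; not part of the original assignment):

# +
from pyspark.ml.feature import VectorAssembler

toy2 = spark.createDataFrame([(1.0, 0.0, 22.0), (0.0, 1.0, 38.0)], ["a", "b", "c"])
VectorAssembler(inputCols=["a", "b", "c"], outputCol="features").transform(toy2).show(truncate=False)
# each row now carries a single vector column, e.g. [1.0,0.0,22.0]
# -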
# + [markdown] id="R0XLjbGQ5NUR"
# **Use VectorAssembler and set InputCols the to "train from first coloumn to end" and set the Output to "features"**
# + id="mWerPowK_cUn"
vector_input = list(df_panda.columns[1:])
# + id="GbZ_FxHCAstl"
vecAssembler = VectorAssembler(inputCols=vector_input,outputCol='features')
# + [markdown] id="aP9PrLe45Tvv"
# **Use VectorAssembler and set the "InputCols" to test all column and set the "Output" to "features":**
# + id="BPznOTfJAwji"
vecAssembler2 = VectorAssembler(inputCols=list(df_panda.columns),outputCol='features')
# + [markdown] id="dU8DeZfh7JIo"
# **Use the randomSplit function to split the data into x_train and X_test with 80% and 20% respectively**
# + id="8C11xf1iAzKp"
x_train, X_test = df.randomSplit([.8,.2],seed=42)
# + [markdown] id="IJQvmFai72O7"
# **Use RandomForestClassifier to fit and transform then display "prediction, Survived, features" columns**
# + id="MzIDSJzgA035"
from pyspark.ml.classification import RandomForestClassifier
rf = RandomForestClassifier(labelCol='Survived',featuresCol='features')
pipeline =Pipeline(stages = [vecAssembler,rf])
# + colab={"base_uri": "https://localhost:8080/"} id="XFzaR0sYBDBn" outputId="7973e6ef-c119-4615-c35d-741a27bc8577"
pipelineModel = pipeline.fit(x_train)
predDF = pipelineModel.transform(X_test)
predDF.select("prediction", "Survived", "features").show(5)
# + [markdown] id="FSXEI8-r8bKY"
# **Use MulticlassClassificationEvaluator and set the "labelCol" to "Survived", "predictionCol" to "prediction", "metricName" to "accuracy"**
# + id="Rl0UAKCaBDO-"
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
classificationEvaluator = MulticlassClassificationEvaluator(predictionCol='prediction',
labelCol='Survived',
metricName='accuracy')
# + [markdown] id="zRGSUvTt8-os"
# **Use the model to predict**
# + id="IpyrkChYBLjB"
acc = classificationEvaluator.evaluate(predDF)
# + id="cR4Ux2ymBOtc" colab={"base_uri": "https://localhost:8080/"} outputId="83a631b9-190f-4670-d6b0-b6dc45805178"
acc
# + [markdown] id="sO6_R1zJ9R1R"
# **When you are finished send the project via Google classroom**
# **Please let me know if you have any questions.**
# * [email protected]
# * +201015197566 (Whatsapp)
#
# **Don't Hate me, I push you to learn**
#
# **I will help you to become an awesome data engineer.**
#
# **Why did I say that "Data Engineer"?**
#
# **Tricky question, but an optional question, if you would like to know the answer, ask me.**
#
| 22,237 |
/.ipynb_checkpoints/SurfsUpInHawaii-checkpoint.ipynb
|
f25c73a50a89c7c1bd4871ec0f81cc8f99bcc30e
|
[] |
no_license
|
coralmaven/SurfsUpInHawaii
|
https://github.com/coralmaven/SurfsUpInHawaii
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 21,288 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from sklearn.preprocessing import Binarizer, binarize
# -
num_list = [[ -1000, 0],
[ 500, -3000],
[ 100, 650]]
binarizer = Binarizer()
binarizer
binarizer.fit(num_list)
# +
binarized_list = binarizer.transform(num_list)
binarized_list
# +
binarizer = Binarizer(threshold=500)
binarizer.fit_transform(num_list)
# +
binarizer = Binarizer(threshold=[0, 100])
binarizer.fit_transform(num_list)
# -
# ### Using the binarizer to make numeric values categorical
diet_data = pd.read_csv('Datasets/diet_data.csv')
diet_data.head()
# +
diet_data = diet_data.dropna()
diet_data = diet_data.drop(['Date', 'Stone', 'Pounds', 'Ounces'], axis=1)
# -
diet_data = diet_data.astype(np.float64)
diet_data.head()
diet_data.describe()
# +
median_calories = diet_data['calories'].median()
median_calories
# +
binarizer = Binarizer(threshold=median_calories)
diet_data['calories_above_median'] = binarizer.fit_transform(diet_data[['calories']])
# -
diet_data.head()
# +
mean_calories_per_oz = diet_data['cals_per_oz'].mean()
mean_calories_per_oz
# -
diet_data['cals_per_oz_above_mean'] = binarize(diet_data[['cals_per_oz']],
threshold=mean_calories_per_oz)
diet_data.sample(10)
Base.classes.keys()
# Or we can use the inspector to get the table names
inspector = inspect(engine)
inspector.get_table_names()
for c in inspector.get_columns('measurement'):
print(c['name'],c['type'])
for c in inspector.get_columns('station'):
print(c['name'],c['type'])
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# # Exploratory Climate Analysis
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# +
# Sets an object to utilize the default declarative base in SQL Alchemy
Base = declarative_base()
# Creates Classes which will serve as the anchor points for our Tables
class Measurement(Base):
__tablename__ = 'measurement'
id = Column(Integer, primary_key=True)
    station = Column(String(11))
date = Column(String(10))
prcp = Column(Float)
tobs = Column(Float)
def get_last_date(self):
return session.query(self.date)\
.order_by(Measurement.date.desc()).first()
def get_date_prev_yr(last_date):
yr = int(last_date.split("-")[0])
mn = int(last_date.split("-")[1])
dy = int(last_date.split("-")[2])
return dt.date(yr, mn, dy) - dt.timedelta(days=365)
class Station(Base):
__tablename__ = 'station'
id = Column(Integer, primary_key=True)
    station = Column(String(11))
name = Column(String(40))
latitude = Column(Float)
longitude = Column(Float)
elevation = Column(Float)
# -
# Calculate the date 1 year ago from the last data point in the database using SqlAlchemy
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
last_date
query_date = dt.date(2017, 8, 23) - dt.timedelta(days=365)
print("Query Date: ", query_date)
# Perform a query to retrieve the data and precipitation scores
query = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= query_date)
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
df = pd.read_sql(query.statement, engine).sort_values('date',ascending=False)
df.head()
plt.rcParams['figure.figsize']=[15,10]
hawaii_df = df.set_index('date')
ax=hawaii_df.plot(title='Precipitation over the last 12 months for Hawaii by date');
# Create a Month column in order to be able to plot by month
df['Month'] = [d.split('-')[1] for d in df['date']]
df["Month"] = pd.to_numeric(df["Month"])
groupedByMonth_df = df.groupby('Month')
avg_prcp_by_month = groupedByMonth_df['prcp'].mean()
max_prcp_by_month = groupedByMonth_df['prcp'].max()
prcp_df = pd.merge(avg_prcp_by_month, max_prcp_by_month, on='Month', how='outer')
prcp_df.rename(columns={'prcp_x':'prcp_avg','prcp_y':'prcp_max'},inplace=True)
prcp_df.plot.bar(title='Precipitation over the last 12 months for Hawaii by Month');
# Use Pandas to calculate the summary statistics for the precipitation data
df.describe()
# Design a query to show how many stations are available in this dataset?
session.query(Measurement.station).distinct().count()
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
session.query(func.count(Measurement.station),Measurement.station).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).\
all()
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature for the most active station?
query = session.query(Measurement.date,Measurement.tobs).filter(Measurement.station=='USC00519281')
active_df = pd.read_sql(query.statement, engine).sort_values('tobs',ascending=False)
active_df.describe()
active_df.tail()
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
query = session.query(Measurement.date, Measurement.tobs).filter(Measurement.date >= query_date).\
filter(Measurement.station=='USC00519281')
yr_active_df = pd.read_sql(query.statement, engine).sort_values('date',ascending=True)
yr_active_df['Month'] = [d.split('-')[1] for d in yr_active_df['date']]
yr_active_df["Month"] = pd.to_numeric(yr_active_df["Month"])
yr_active_df = yr_active_df.set_index('Month')
yr_active_df.head()
active_df.describe()
active_df.hist(column='tobs')
# +
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
        TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# -
def calc_temps_y(y):
args = calc_temps(str(y)+'-02-28',str(y)+'-03-05')
    return (*args[0], y)
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
prev_years_temp = [calc_temps_y(y) for y in range(2012,2018)]
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
trip_avg_temp_df = pd.DataFrame(prev_years_temp,columns=['TMIN','TAVG','TMAX','YEAR'])
trip_avg_temp_df.set_index('YEAR')
ax = trip_avg_temp_df.plot.bar(x='YEAR', title='Trip Avg Temp')
ax.set_ylim(50,80)
# +
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
def total_rain(station, start_date, end_date):
"""Total rainfall for the weather station for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
station (string): station id
Returns:
TotalRainfall
"""
return session.query(func.sum(Measurement.prcp)).\
filter(Measurement.station==station).\
filter(Measurement.date >= start_date).\
filter(Measurement.date <= end_date).first()
# function usage example
print(total_rain('USC00519281','2012-02-28', '2012-03-05'))
# +
def total_rain_y(s,y):
r = total_rain(s,str(y)+'-02-28',str(y)+'-03-05')
return r[0]
print(total_rain_y('USC00519281','2012'))
# -
stations = session.query(Measurement.station).distinct().all()
station_id = []
for s in stations:
station_id.append(s[0])
station_id
[total_rain_y('USC00519397',y) for y in range(2012,2018)]
prev_years_rain = []
for s in station_id:
rain = [total_rain_y(s,y) for y in range(2012,2018)]
row = [s, *rain]
prev_years_rain.append(row)
total_rain_df = pd.DataFrame(prev_years_rain, columns=['station','2012','2013','2014','2015','2016','2017'])
total_rain_df = total_rain_df.\
sort_values(by=['2017','2016','2015','2014','2013','2012'],ascending=False)
total_rain_df
# Convert Station table to DataFrame
query = session.query(Station)
Station_df = pd.read_sql(query.statement, engine)
del(Station_df['id'])
Station_df.head()
# Merge the two tables
hawaii_df = pd.merge(total_rain_df, Station_df, on='station', how='left')
hawaii_df.head()
# ## Optional Challenge Assignment
# +
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# -
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
date_range = ['02-28','03-01','03-02','03-03','03-04','03-05']
normals = [(d,*daily_normals(d)[0]) for d in date_range]
normals
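# The hard-coded `date_range` list above could also be generated programmatically; a small added sketch (assuming, purely for illustration, a 2018-02-28 to 2018-03-05 trip window):

[d.strftime('%m-%d') for d in pd.date_range('2018-02-28', '2018-03-05')]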
def getDTObj(s):
m = pd.to_numeric(s.split('-')[0])
d = pd.to_numeric(s.split('-')[1])
return dt.date(2017, m, d)
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
normals_df = pd.DataFrame(normals, columns=['trip_dates','TMIN','TAVG','TMAX'])
normals_df['trip_dates'] = normals_df['trip_dates'].apply(getDTObj)
normals_df.set_index('trip_dates')
# Plot the daily normals as an area plot with `stacked=False`
x_axis = np.arange(len(normals_df))
tick_locations = [value for value in x_axis]
ax = normals_df.plot.area(stacked=False, title='Daily Normals',use_index='trip_dates')
plt.xticks(tick_locations, date_range, rotation="vertical")
plt.show()
# Based on your system, you may need to change this notebook and, more importantly, the function 'forward'
# in line 259 of ./DIP_UNET_models/unet_and_tv/varnetn.py
for i,cas in enumerate(vmdl.cascades):
vmdl.cascades[i] = cas.to(devices[i//3])
# ### VarNet sample test
# +
e = 0.08
R,loss,clean_rec,pert_recs = runner(vmdl,
slice_ksp_torchtensor.type(dtype).to(devices[0]),
eps = e,
num_iter = 2000,
LR = 1e-6,
OPTIMIZER='adam',
model_type = 'varnet',
mask = mask.data.cpu(),
devices = devices,
lr_decay_epoch = 0,
weight_decay=0,
retain_graph = True,
find_best=True,
)
### pick the worst perturbation across the iterations
P = []
for j,im in enumerate([clean_rec]+pert_recs):
if j == 0:
im1 = orig.copy()
im2 = im.copy()
im1 = (im1-im1.mean()) / im1.std()
im1 *= im2.std()
im1 += im2.mean()
#ssim_ = ssim(np.array([im1]),np.array([im2]))
psnr_c = psnr(np.array([im1]),np.array([im2]))
if j > 0:
im1 = orig.copy()
im2 = im.copy()
im1 = (im1-im1.mean()) / im1.std()
im1 *= im2.std()
im1 += im2.mean()
ssim_ = ssim(np.array([im1]),np.array([im2]))
psnr_ = psnr(np.array([im1]),np.array([im2]))
P.append(psnr_)
arg = loss.argmin()
# -
plot_([orig,clean_rec,pert_recs[arg]],['ground truth','clean reconstruction','perturbed reconstruction'],[0,psnr_c,P[arg]])
# ### U-net sample test
# +
e = 0.08
R,loss,clean_rec,pert_recs = runner(umdl,
slice_ksp_torchtensor.type(dtype).to(devices[0]),
eps = e,
num_iter = 2000,
LR = 5e-9,
OPTIMIZER='adam',
model_type = 'unet',
mask = mask.data.cpu(),
devices = devices,
lr_decay_epoch = 0,
weight_decay=0,
retain_graph = True,
find_best=True,
)
### pick the worst perturbation across the iterations
P = []
for j,im in enumerate([clean_rec]+pert_recs):
if j == 0:
im1 = orig.copy()
im2 = im.copy()
im1 = (im1-im1.mean()) / im1.std()
im1 *= im2.std()
im1 += im2.mean()
#ssim_ = ssim(np.array([im1]),np.array([im2]))
psnr_c = psnr(np.array([im1]),np.array([im2]))
if j > 0:
im1 = orig.copy()
im2 = im.copy()
im1 = (im1-im1.mean()) / im1.std()
im1 *= im2.std()
im1 += im2.mean()
ssim_ = ssim(np.array([im1]),np.array([im2]))
psnr_ = psnr(np.array([im1]),np.array([im2]))
P.append(psnr_)
arg = loss.argmin()
# -
plot_([orig,clean_rec,pert_recs[arg]],['ground truth','clean reconstruction','perturbed reconstruction'],[0,psnr_c,P[arg]])
| 14,808 |
/TSF.ipynb
|
c5c87505d6fc15e3cf330192d06e4eb6e7e7323e
|
[] |
no_license
|
MahendraHere/TheSparksFoundation
|
https://github.com/MahendraHere/TheSparksFoundation
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 33,087 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mahendra Reddy Bolla
# # Data Science and Business Analytics(Task-1)
#
# +
# Simple Linear Regression Model using Supervised Machine Learning
# -
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
link="http://bit.ly/w-data"
sdata=pd.read_csv(link)
print("Data imported Successfully")
sdata.head(10)
sdata.plot(x='Hours',y='Scores',style='o')
plt.title('Hours vs Percentages')
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.show()
# # Preparing the data
X=sdata.iloc[:,:-1].values
y=sdata.iloc[:,1].values
X
y
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
# # Training the Algorithm
# +
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train,y_train)
print("Training Complete.")
# -
# plotting the regression line
line=regressor.coef_*X+regressor.intercept_
# plotting the test data
plt.scatter(X,y)
plt.plot(X,line);
plt.show()
# # Making Some Predictions
print(X_test) # testing the data
y_pred=regressor.predict(X_test) # predicting the scores
# comparing
df=pd.DataFrame({'Actual':y_test,'Predicted':y_pred})
df
# # Evaluating the Model
from sklearn import metrics
print("Mean Absolute Error:",metrics.mean_absolute_error(y_test,y_pred))
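# As an extra illustration (an added cell, not part of the original task output), the fitted model can also score a single new value, e.g. 9.25 study hours per day:

own_pred = regressor.predict([[9.25]])
print("Predicted score for 9.25 hours/day:", own_pred[0])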
| 1,626 |
/K - Nearest Neighbours Assignments/KNN - Diabetes Detection.ipynb
|
265226d004978694acaac12dcdd4a9de495ddfed
|
[] |
no_license
|
prithvi8117/AI-Mafia---Machine-Learning
|
https://github.com/prithvi8117/AI-Mafia---Machine-Learning
| 0 | 1 | null | 2020-07-05T13:30:56 | 2020-07-05T08:32:44 | null |
Jupyter Notebook
| false | false |
.py
| 187,093 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Problem Statement
# Given the transaction history of credit cards we have to predict the transactions as fraudulent or non-fraudulent.
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
# -
# ## Background
# Suppose one day you wake up and receive a text message saying that some amount has been deducted from your card for the purchase of some item 'x'. But you didn't make any purchase with your card; someone who somehow obtained your card credentials has made a purchase through it. It is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items that they did not purchase. The bank's fraud detection system saves you from this kind of fraudulent activity.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import style
style.use('ggplot')
# %matplotlib inline
plt.rcParams['figure.figsize'] = (15, 8)
seed = 999
# -
# ## Data Definition
# * The datasets contains transactions made by credit cards in September 2013 by european cardholders.
# * This dataset contains 284,807 records and 31 features.
# * Features V1, V2, … V28 are the principal components obtained with PCA.
# * The only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time'.
# * Feature 'Class' is the response variable and it takes value 1 in case of fraud and 0 otherwise.
# + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0"
train = pd.read_csv('/kaggle/input/creditcardfraud/creditcard.csv')
# -
# ### Check Data
#check shape
print(f'data has {train.shape[0]} number of rows and {train.shape[1]} number of columns')
# ### Check head of the data
train.head(10)
# ### Check the Information
train.info()
# From the information above we can see that the dataset contains only numeric features.
# ### Check Null Values
train.isnull().sum()
# There are no null values present in our dataset.
# ### Check Description
train.describe().T
# All the V-columns are principal components. The Time and Amount columns are not anonymized.
# When we look at the Amount column, the mean is 22 while the maximum is 25691, so the data seems to be positively skewed. Another thing we can observe is that the time is given in seconds, with a minimum of 0 and a maximum of 172792. One day has 86,400 seconds, so if we divide 172792 by 86,400 we see that the data spans nearly 2 days.
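# A quick added check of that arithmetic (an illustrative cell, not part of the original notebook): the span of the `Time` column in days is simply its maximum divided by 86,400.

train['Time'].max() / 86400   # roughly 2, i.e. about two days of transactions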
# ## Exploratory Data Analysis
# Since all the features are anonymized. We will focus our EDA on Time, Amount and target feature (Class).
# ### Plot Histogram to check the distribution of Amount
sns.distplot(train['Amount'])
plt.title('Amount Distribution',fontsize=20)
plt.show()
# **Interpretation from the graph**
# * Amount is positively skewed.
# * Most of the transaction amounts are low; only a few come close to the maximum.
# ### Plot histogram to see distribution of time
plt.hist(train['Time'])
plt.title('Distribution of Time',fontsize=20)
plt.xlabel('Time')
plt.show()
# **Interpretations from the graph**
# * Time is bimodal.
# * The number of transactions drops between the two modes, possibly because it was night time.
# ### Check the Target column (Class)
sns.countplot(train['Class'])
plt.title('Class Counts',fontsize=20)
plt.show()
# **Interpretations from the graph**
# * Class is highly imbalanced.
# * There are far fewer positive cases than negative cases.
# ### Count percentage of positive and negative class
train['Class'].value_counts()/train.shape[0] #check percentage of positive and negative class
# About 99.8273% of the cases in our dataset are negative (non-fraudulent) and only 0.1727% are positive. This affects the choice of evaluation metric: if we simply use the accuracy score, it will always give us a high score even if we classify all the positive cases as negative. We will therefore use the ROC_AUC score as our evaluation metric here.
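# To make that concrete, here is a small added check (illustrative only, not part of the original analysis): a baseline that predicts every transaction as non-fraudulent already reaches about 99.8% accuracy, yet its ROC AUC is only 0.5.

# +
from sklearn.metrics import accuracy_score, roc_auc_score

always_negative = np.zeros(len(train), dtype=int)   # predict class 0 for every transaction
print('Accuracy of the all-negative baseline:', accuracy_score(train['Class'], always_negative))
print('ROC AUC of the all-negative baseline :', roc_auc_score(train['Class'], always_negative))
# -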
# ### Check the Distribution of fraudulent and non-fraudulent transactions separately
fraud_cases = train[train['Class'] == 1] #keep all the fraud cases
non_fraud_cases = train[train['Class'] == 0] #keep all the non-fraud cases
# ### Plot histogram to check the distribution of fraud cases
#Distribution of amount in fraud cases
plt.hist(fraud_cases['Amount'])
plt.title('Distribution of Amount for fraudulent cases',fontsize=20)
plt.show()
# **Interpretations**
# * Amount is heavily positively skewed.
# * Few transaction amounts have higher values.
# * Most of the transaction amounts are low
# ### Plot histogram to check the distribution of non-fraud cases
plt.figure(figsize=(12,6))
plt.hist(non_fraud_cases['Amount'])
plt.title('Distribution of Amount for non-fraudulent cases')
plt.show()
# **Interpretations**
# * Transaction amounts for non-fraud cases are mostly small.
# * Most of the amounts are between 0 and 500.
# * From the above two graphs we can say that fraudulent cases tend to involve larger transaction amounts.
# ### Plot the Distribution of Time for fraud and non-fraud cases
plt.figure(figsize=(12,6))
plt.hist(fraud_cases['Time'])
plt.title('Distribution of Time for fraudulent cases')
plt.show()
plt.figure(figsize=(12,6))
plt.hist(non_fraud_cases['Time'])
plt.title('Distribution of Time for non-fraudulent cases')
plt.show()
# ### Check the Correlation between the variables using Heatmap
plt.figure(figsize=(17,8))
cor = train.corr()
cor
#sns.heatmap(cor,annot=True,fmt='.2g')
plt.figure(figsize=(17,8))
cor = train.corr()
sns.heatmap(cor,annot=True,fmt='.2g')
# ## Data Preparation
# ### Scale Amount and Time
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
train['Amount'] = sc.fit_transform(train[['Amount']])
train['Time'] = sc.fit_transform(train[['Time']])
# ## Handling Imbalanced Class
# To handle class imbalance mostly used techniques are -
# 
# * **Random Over Sampling**
#
# In over sampling we increase the instances of the minority class until they equal the majority class. Suppose out of 100 records there are only 2 fraud cases. By over sampling we duplicate the fraud cases until they match the 98 non-fraud cases.
#
# * **Random Down Sampling**
#
# In random down sampling we reduce the instances of the majority class to match the minority class. In our previous example, down sampling results in 2 fraud cases and 2 non-fraud cases.
#
# Each of these techniques has its own challenges. If our dataset is very big then **over sampling will make it even bigger, adding time complexity since we almost double the dataset, and if we down sample the majority class we will lose instances and specific patterns present in our dataset.**
#
# While splitting our dataset into training and testing sets we should make sure both sets have the same proportion of classes as the original dataset.
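# As a minimal added sketch (illustrative only; the rest of this notebook keeps the original class balance and relies on stratified splitting instead), random over-sampling of the minority class can be done with `sklearn.utils.resample`. Random down-sampling of the majority class works the same way with `replace=False` and `n_samples=len(fraud_cases)`.

# +
from sklearn.utils import resample

# duplicate minority-class rows (with replacement) until they match the majority class size
fraud_upsampled = resample(fraud_cases,
                           replace=True,
                           n_samples=len(non_fraud_cases),
                           random_state=seed)
balanced = pd.concat([non_fraud_cases, fraud_upsampled])
print(balanced['Class'].value_counts())
# -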
# ### Split the data into train and test
#
# keep all the predictor variables in X and Classes in Y
X = train.drop('Class', axis=1)
Y = train['Class']
#Check the shape of X and Y
print('Shape of X and Y')
print(X.shape)
print(Y.shape)
# +
#split the data into 67:33 ratio for training and testing
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
X_train,X_test,Y_train,Y_test = train_test_split(X,Y, stratify = Y, test_size = .33, random_state = 42)
#check proportion of fraud and non-fraud cases in tarining and testing sets
print('Proportion of classes in training set')
print(Y_train.value_counts() / len(Y_train))
print('Proportion of classes in test set')
print(Y_test.value_counts() / len(Y_test))
# -
# Both classes are present in almost the same proportions in the training and testing sets.
# ## Model Building
# We have a binary classification problem here, i.e. to predict whether a transaction is fraudulent or non-fraudulent, so we will apply logistic regression to our dataset. We will tune only the single parameter **C** using GridSearchCV. Then we will check the auc_score as specified and look at the confusion matrix. Finally, we will set the threshold that gives our model enough confidence to predict the fraudulent transactions. To read about the ROC AUC score [Click Here](https://www.dataschool.io/roc-curves-and-auc-explained/)
# +
from sklearn.model_selection import GridSearchCV #import GridSearchCV to find best parameters
from sklearn.metrics import roc_auc_score, roc_curve, confusion_matrix #import metrices
clf = LogisticRegression(verbose=3,warm_start=True) #create instance of LogisticRegression
params = {'C' : np.power(10.0, np.arange(-3,3))} #set c parameter
#find best parameter value for c
logit_grid = GridSearchCV(clf, param_grid = params, scoring='roc_auc', n_jobs=70)
logit_grid.fit(X_train,Y_train) #fit model on training set
predict = logit_grid.predict(X_test) #predict for test set
# -
# ### Check roc_auc_score
#check the roc_auc_score
print('Training score ',roc_auc_score(Y_train,logit_grid.predict(X_train)))
print('Testing score ',roc_auc_score(Y_test,predict))
# ### Plot confusion matrix
#Plot confusion matrix
from sklearn.metrics import plot_confusion_matrix
conf = confusion_matrix(Y_train,logit_grid.predict(X_train))
sns.heatmap(conf,annot=True,fmt='d')
plt.ylabel('Actual Class')
plt.xlabel('Predicted Class')
plt.show()
# ### Plot FPR vs TPR
pred_probas = logit_grid.predict_proba(X_train)[:, 1]
fpr,tpr, thresholds = roc_curve(Y_train,pred_probas)
plt.plot(fpr,tpr)
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.axvline(0.1,linestyle='--')
plt.show()
# ### Set The Threshold
# we will set the threshold as 0.1
predicted_class = []
for i in pred_probas:
if i > 0.1:
predicted_class.append(1)
else:
predicted_class.append(0)
plt.figure(figsize=(8,5))
from sklearn.metrics import plot_confusion_matrix
conf = confusion_matrix(Y_train,predicted_class)
sns.heatmap(conf,annot=True,fmt='d')
plt.title('Confusion Matrix')
plt.ylabel('Actual Class')
plt.xlabel('Predicted Class')
plt.show()
roc_auc_score(Y_train,predicted_class)
# ## Conclusion
# By using stratified sampling and setting the threshold of our Logistic Regression model to 0.1, we were able to build a model which can classify transactions as fraudulent or non-fraudulent with a roc_auc_score of 83.7%.
| 11,456 |
/labs/17-NLP_and_simple_sentiment.ipynb
|
a6ce2390ca92e211a215420c2ec7b3988012e387
|
[] |
no_license
|
almacaus/DAT_SF_19
|
https://github.com/almacaus/DAT_SF_19
| 0 | 0 | null | 2015-12-03T04:30:59 | 2015-12-03T04:17:37 | null |
Jupyter Notebook
| false | false |
.py
| 1,844,346 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1>Notebook 1A: Exploring the most useful and important features within a Jupyter notebook</h1>
# ***
# _This will be an incomplete and biased run-through of the important features and functions in a jupyter notebook_
#
# Incomplete, because no one notebook (or set) could cover all of the available features and abilities of the Jupyter project
# ***
# # 1. Cell Types
# # 2. Editing modes
# # 3. Imports and output
# # 4. Help
#
# ### Great sources of Jupyter notebooks to explore and tick off the list:
# - [Jupyter.org a range of tools for reproducible computing](https://jupyter.org/try)
#
# - [The Carpentries, great basic training for software and data handling](https://software-carpentry.org/lessons/index.html)
#
# - [And the main Jupyter notebook documentation site](https://jupyter-notebook.readthedocs.io/en/stable/)
#
# - [IPython, information for interactive computing, the predecessor of Jupyter](https://nbviewer.jupyter.org/github/ipython/ipython/blob/master/examples/IPython%20Kernel/Index.ipynb)
#
#
# ><font color='navy' size=4>We love stories. The ability to provide information and **context** to accompany _data_ and `code` means that, in addition to providing comments on the function and behavior of the `code`, you can return the results of each stage of your computation, as well as allowing a more general discussion of motivations and possible findings</font>
# # 1. Cell types:
#
# 1. Markdown
# 2. Code
# 3. Heading
# 4. Raw NBConvert
#
#
# ## 1_1: Markdown and Formatting
# # Heading 1
# ## Heading 2
# ### Heading 3
#
# body text
# _italic_ or *italic*
# __Bold__ or **Bold**
# <h1> Markdown cells also render HTML </h1>
# <br>
# <p><font size=5>When unsure you can always import 🐼 🐼</font></p>
# <br>
#
#
# <font size=4 color='green'>Not that I am encouraging even a light dusting of emoji, but a great number of options are available to communicate:
# https://www.w3schools.com/charsets/ref_emoji.asp </font>
#
# Further resources:
# - https://daringfireball.net/projects/markdown/
# - https://www.w3schools.com/
#
# <br>
# <br>
# <font size=8 color="navy" style="background-color:powderblue;"> Why this matters? </font>
# <br>
# <br>
# ***
# > ### The first thing we may need to look at is the types of information that we want to provide to the reader
# > ### (99% of the time, that will be you).
# ***
# Over time these might be:
#
# - Overall aims and research goals
# - Specific tasks to be achived here
# - Descriptions of data
# - Libraries and code
# ## 1_2: Code cells
#code and comments
a = 1
b = 2
a + b
# ### We can use markdown to format code in a number of ways, using \`\`\` followed by a language (_e.g._ \`\`\`python) then code on lines below finished with \`\`\` on a separate line
#
# ```python
# def TimesTable(val=1, n=10):
# for i in range(1,n):
# print (i*val)
# ```
# ### a formatted html example
#
# ```html
# <h1> Hello world!</h1>
# ```
# ### Equations: LaTeX and Mathjax options to illustrate
#
# Full LaTex is an option for those familiar with it
# + language="latex"
# \begin{align}
# F(k) = {\sum}_0^{\infty}(x) e^{2\pi}y
# \end{align}
# -
# ### MathJax provides
# https://www.mathjax.org/
# +
# https://www.mathjax.org/
from IPython.display import Math
Math(r'F(k) = \ {\sum}_0^{\infty}(x) e^{2\pi}y')
# -
# #### But markdown can also shorten this process
# by using '$' before and after your text
#
# $F(k) = \ {\sum}_0^{\infty}(x) e^{2\pi}y$
# # 2. Editing Modes
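# Jupyter has two keyboard modes:
#
# - **Edit mode** (green cell border, entered with `Enter`): type directly into the selected cell
# - **Command mode** (blue cell border, entered with `Esc`): act on whole cells, e.g. `A`/`B` to insert a cell above/below, `M` to convert a cell to Markdown, `Y` to convert it back to code, and `D`,`D` to delete it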
# # 3. Imports and output
# ### Well done for getting this far. You deserve a high-five.
# ### Just a little work to earn it...
# ---
# We will import a couple of packages and use these to show live code in the notebook.
# Remember, we are executing each cell in turn (either to render the HTML or Markdown, or to run the code), by hitting the 'Run' button, or by hitting **Ctrl+Enter** / **Shift+Enter** / **Cmd+Enter**
#
# Notice the `In [*]` near the left of each cell as the cell is executed. This changes to a number when finished.
#
# ---
#
#
import IPython.display as ipd
import random
image = ipd.Image('https://upload.wikimedia.org/wikipedia/commons/f/fb/High_five%21%21.jpg', width=200)
ipd.display(image) , image.metadata
h_fives= []
h_fives.append(r'<a title="Ingorr [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)],via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File:High_five!!.jpg"><img width="512" alt="High five!!" src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/fb/High_five%21%21.jpg/512px-High_five%21%21.jpg"></a>')
h_fives.append(r'<a title="Kiidos [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File:Annaviis.jpg"><img width="512" alt="Annaviis" src="https://upload.wikimedia.org/wikipedia/commons/4/4b/Annaviis.jpg"></a>')
h_fives.append(r'<a title="Jonas Vincent jonasvincentbe [CC0], via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File:Sally_high_five_(Unsplash).jpg"><img width="256" alt="Sally high five (Unsplash)" src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/29/Sally_high_five_%28Unsplash%29.jpg/256px-Sally_high_five_%28Unsplash%29.jpg"></a>')
h_fives.append(r'<a><img src=https://i.imgur.com/Ii0ALYG.jpg></a>')
# +
# a high-five for you (re-run the cell for another)
webout = ipd.HTML(h_fives[random.randint(0,3)])
webout
# -
# ###### You also have many options provided by IPython and Jupyter for other media to enrich your presentations and explanations.
#
# More detailed notes and notebooks are provided here:
#
# https://nbviewer.jupyter.org/github/ipython/ipython/blob/master/examples/IPython%20Kernel/Index.ipynb
#
# https://jupyter.org/try
#
# +
vid =ipd.YouTubeVideo('3VDw7XIulIk', autoplay='0', width=720, height=400)
ipd.display(vid)
# +
#a list of the available cell and line 'magics'
# %lsmagic
# + language="html"
# <h1> HTML magics <em style="background-color:lightgrey;"> <font color="red"> can </font> </em>
# be used for whole cells </h1>
# <br>
# -
# <br/>
def TimesTable(val=1, n=10):
for i in range(1,n):
print (i*val)
# #%timeit
# %time TimesTable(3,10)
# # 4. Help
# ?random.randint
#core python
import pandas as pd
# +
# pd.read_csv?
# -
# ## Conclusion: Jupyter notebooks are an environment in which you can learn (and recall), explain and explore. Code, data, and context
# # Please try:
# - creating and renaming a new notebook for yourself
# - making a copy of an existing notebook
# - Search for an example of an interactive notebook from your area of research
# -
#
d:
d[j.strip()].add(df["StartupName"][i].strip())
else:
s = set()
d[j.strip()] = s
d[j.strip()].add(df["StartupName"][i].strip())
else:
a = e.strip()
if a in d:
d[a].add(df["StartupName"][i].strip())
else:
s = set()
d[a] = s
d[a].add(df["StartupName"][i].strip())
d1 = {} #created a dictionary where key is investor's name and value is count of startup's in which they had invested..
for i in d:
if i == "":
continue
d1[i] = len(d[i])
#sorting the keys according to there values in descending order..and taking the top 5 investor's among all..
investors=[]
counts=[]
for key, value in sorted(d1.items(), key=lambda item: item[1],reverse=True)[0:5]:
a=key
investors.append(a)
b=value
counts.append(b)
print(key,value)
#plotting graph
plt.bar(investors,counts,edgecolor='violet')
plt.title('TOP 5 INVESTORS,INVESTED IN DIFFERENT NUMBER OF STARTUPS')
plt.xlabel('Investor Name')
plt.ylabel('No Of Times They Invested')
plt.xticks(rotation=40)
plt.grid()
plt.show()
'''
I have suggested the top 5 investors who have invested in the largest number of different startups. This list will be more helpful
than my previous list in finding investment for my friend's startup. The top 5 investors who have
invested the maximum number of times in different companies are:
1.Sequoia Capital, invested 48 times
2.Accel Partners, invested 47 times
3.Kalaari Capital, invested 41 times
4.Indian Angel Network, invested 40 times
5.Blume Ventures, invested 36 times
'''
# -
# 4. Even after putting so much effort into finding the probable investors, it didn't turn out to be helpful for your friend. So you went to your investor friend to understand the situation better, and your investor friend explained the different investment types and their features. This new information will be helpful in finding the right investor. Since your friend's startup is at an early stage, the best-suited investment types would be Seed Funding and Crowd Funding. Find the top 5 investors who have invested in the largest number of different startups and whose investment type is Crowd Funding or Seed Funding. The correct spellings of the investment types are - "Private Equity", "Seed Funding", "Debt Funding", and "Crowd Funding". Keep an eye out for any spelling mistakes; you can find them by printing the unique values of this column. There are many errors in startup names. Don't correct them all, just handle the important ones - Ola, Flipkart, Oyo and Paytm.
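# The spelling variants mentioned in the prompt can be spotted by first listing the unique values of the column (a small illustrative cell added here, reusing the same CSV path as in the cell below):

pd.read_csv('C:/Users/sowndariya/Desktop/CNN/startup_funding.csv')['InvestmentType'].unique()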
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
startup = pd.read_csv('C:/Users/sowndariya/Desktop/CNN/startup_funding.csv')
df = startup.copy()
#removing row's having nan's in Investors name and startupnames columns..
df.dropna(subset = ["StartupName","InvestorsName","InvestmentType"],inplace = True)
#ignoring the Undisclosed investors
df= df[df.InvestorsName != 'Undisclosed investors']
df= df[df.InvestorsName != 'Undisclosed Investors']
#replacing the wrong word with the correct one..
df["StartupName"].replace("Flipkart.com","Flipkart",inplace = True)
df["StartupName"].replace("Ola Cabs","Ola",inplace = True)
df["StartupName"].replace("Olacabs","Ola",inplace = True)
df["StartupName"].replace("Oyorooms","Oyo",inplace = True)
df["StartupName"].replace("OyoRooms","Oyo",inplace = True)
df["StartupName"].replace("OYO Rooms","Oyo",inplace = True)
df["StartupName"].replace("Oyo Rooms","Oyo",inplace = True)
df["StartupName"].replace("Paytm Marketplace","Paytm",inplace = True)
df['InvestmentType'].replace("SeedFunding","Seed Funding",inplace=True)
df['InvestmentType'].replace("Crowd funding","Crowd Funding",inplace=True)
df['InvestmentType'].replace("PrivateEquity","Private Equity",inplace=True)
#getting Seed Funding and Crowd Funding
df = df.loc[(df['InvestmentType'] == 'Seed Funding') | (df['InvestmentType'] == 'Crowd Funding')]
#to get seed funding and crowd funding investment type
#firstly ...created a dictionary ...for each investor names ... maintained a set..means each key(investor's name) having a value set(names of stratup's in which they invested)..
#set is taken as a value to avoid count of multiple investment in a single startup by an investor...
#in the set ..there are startup names in which investor's had invested...
#in case there are multiple investors for a single startup...used split function to split that ..and traversed through each name separately...
d = {}
for i in df.index:
e = df["InvestorsName"][i].strip()
if "," in e:
for j in e.strip().split(','):
if j.strip() in d:
d[j.strip()].add(df["StartupName"][i].strip())
else:
s = set()
d[j.strip()] = s
d[j.strip()].add(df["StartupName"][i].strip())
else:
a = e.strip()
if a in d:
d[a].add(df["StartupName"][i].strip())
else:
s = set()
d[a] = s
d[a].add(df["StartupName"][i].strip())
#created a dictionary where key is investor's name and value is count of startup's in which they had invested..
d1 = {}
for i in d:
if i == "":
continue
d1[i] = len(d[i])
#sorting the keys according to there values in descending order..and taking the top 5 investor's among all..
investors=[]
counts=[]
for key, value in sorted(d1.items(), key=lambda item: item[1],reverse=True)[0:5]:
a=key
investors.append(a)
b=value
counts.append(b)
print(key,value)
#plotting graph
plt.bar(investors,counts,edgecolor='black')
plt.title('TOP 5 INVESTORS IN DIFFERENT NUMBER OF STARTUPS AND THEIR INVESTMENT FOR CROWD FUNDING OR SEED FUNDING')
plt.xlabel('Investor Name')
plt.ylabel('No Of Times They Invested')
plt.xticks(rotation=40)
plt.grid()
plt.show()
'''
I suggest to my friend that the best-suited investment types would be Seed Funding and Crowd Funding.
The top 5 investors who have invested in the largest number of different startups with investment type Crowd Funding or
Seed Funding are:
1.Indian Angel Network, invested 33 times.
2.Rajan Anandan, invested 23 times.
3.LetsVenture, invested 16 times.
4.Anupam Mittal, invested 16 times.
5.Kunal Shah, invested 14 times.
'''
# -
# 5. Thanks to your immense help, your friend's startup successfully got seed funding and is now operational. Now your friend wants to expand the startup and is looking for new investors. Once again you come to the rescue and want to create a list of probable new investors. Before moving forward you remember your investor friend's advice about finding investors by analysing the investment type. Since your friend's startup is no longer in the early phase but in the growth stage, the best-suited investment type is Private Equity. Find the top 5 investors who have invested in the largest number of different startups and whose investment type is Private Equity. The correct spellings of the investment types are - "Private Equity", "Seed Funding", "Debt Funding", and "Crowd Funding". Keep an eye out for any spelling mistakes; you can find them by printing the unique values of this column. There are many errors in startup names. Don't correct them all, just handle the important ones - Ola, Flipkart, Oyo and Paytm.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
startup = pd.read_csv('C:/Users/sowndariya/Desktop/CNN/startup_funding.csv')
df = startup.copy()
#removing row's having nan's in Investors name and startupnames columns..
df.dropna(subset = ["StartupName","InvestorsName","InvestmentType"],inplace = True)
#ignoring the Undisclosed investors
df= df[df.InvestorsName != 'Undisclosed investors']
df= df[df.InvestorsName != 'Undisclosed Investors']
#replacing the wrong word with the correct one..
df["StartupName"].replace("Flipkart.com","Flipkart",inplace = True)
df["StartupName"].replace("Ola Cabs","Ola",inplace = True)
df["StartupName"].replace("Olacabs","Ola",inplace = True)
df["StartupName"].replace("Oyorooms","Oyo",inplace = True)
df["StartupName"].replace("OyoRooms","Oyo",inplace = True)
df["StartupName"].replace("OYO Rooms","Oyo",inplace = True)
df["StartupName"].replace("Oyo Rooms","Oyo",inplace = True)
df["StartupName"].replace("Paytm Marketplace","Paytm",inplace = True)
df['InvestmentType'].replace("SeedFunding","Seed Funding",inplace=True)
df['InvestmentType'].replace("Crowd funding","Crowd Funding",inplace=True)
df['InvestmentType'].replace("PrivateEquity","Private Equity",inplace=True)
#getting Private Equity Funding
df = df.loc[(df['InvestmentType'] == 'Private Equity')]
#to get seed funding and crowd funding investment type
#firstly ...created a dictionary ...for each investor names ... maintained a set..means each key(investor's name) having a value set(names of stratup's in which they invested)..
#set is taken as a value to avoid count of multiple investment in a single startup by an investor...
#in the set ..there are startup names in which investor's had invested...
#in case there are multiple investors for a single startup...used split function to split that ..and traversed through each name separately...
d = {}
for i in df.index:
e = df["InvestorsName"][i].strip()
if "," in e:
for j in e.strip().split(','):
if j.strip() in d:
d[j.strip()].add(df["StartupName"][i].strip())
else:
s = set()
d[j.strip()] = s
d[j.strip()].add(df["StartupName"][i].strip())
else:
a = e.strip()
if a in d:
d[a].add(df["StartupName"][i].strip())
else:
s = set()
d[a] = s
d[a].add(df["StartupName"][i].strip())
#created a dictionary where key is investor's name and value is count of startup's in which they had invested..
d1 = {}
for i in d:
if i == "":
continue
d1[i] = len(d[i])
#sorting the keys according to there values in descending order..and taking the top 5 investor's among all..
investors=[]
counts=[]
for key, value in sorted(d1.items(), key=lambda item: item[1],reverse=True)[0:5]:
a=key
investors.append(a)
b=value
counts.append(b)
print(key,value)
#plotting graph
plt.bar(investors,counts,edgecolor='black')
plt.title('TOP 5 INVESTORS IN DIFFERENT NUMBER OF STARTUPS AND THEIR INVESTMENT FOR PRIVATE EQUITY')
plt.xlabel('Investor Name')
plt.ylabel('No Of Times They Invested')
plt.xticks(rotation=40)
plt.grid()
plt.show()
'''
The best-suited investment type is Private Equity, so I suggest to my friend the following investors, who have invested via
Private Equity:
1.Sequoia Capital, invested 45 times
2.Accel Partners, invested 43 times
3.Kalaari Capital,invested 35 times
4.Blume Ventures, invested 27 times
5.SAIF Partners, invested 24 times
'''
# p-values as a bar plot, 10 most significant
# feature order
feature_order_idx=np.flip(np.argsort(feature_importance),0)
feature_to_show=20
plt.bar(np.arange(0,feature_to_show,1),feature_importance[feature_order_idx[0:feature_to_show]])
plt.ylabel('Feature weights')
plt.xlabel('Scaled Feature number')
plt.title('Random forest classifier')
# show the feature order idx
print('Names of the important features:')
print('\n')
for i in range(feature_to_show):
    print(feature_names[feature_order_idx[i]])
# save figure to eps
#plt.savefig('all_features_random_forest_10_largest.eps',format='eps',dpi=300)
# -
# # Show the random forest performance after 2 PCA
# +
# Plot the decision boundary only for Random forest classifier
# Parameters
n_classes = 2
n_estimators = 200
cmap = plt.cm.RdYlBu
plot_step = 0.02 # fine step width for decision surface contours
plot_step_coarser = 0.5 # step widths for coarse classifier guesses
RANDOM_SEED = 13 # fix the seed on each iteration
model=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='sqrt', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=n_estimators, n_jobs=1,
oob_score=True, random_state=0, verbose=0, warm_start=False)
pair=[0,1]
X = X_r[:, pair]
y = cell_type_14
# Shuffle
idx = np.arange(X.shape[0])
np.random.seed(RANDOM_SEED)
np.random.shuffle(idx)
X = X[idx]
y = y[idx]
# Train
clf = clone(model)
clf = model.fit(X, y)
# get the scores of the trees
scores = clf.score(X, y)
# print the model score
print('Classifier performance, PC1 & PC2 features only: ' + str(clf.oob_score_))
# Create a title for each column and the console by using str() and
# slicing away useless parts of the string
model_title = str(type(model)).split(
".")[-1][:-2][:-len("Classifier")]
model_details = model_title
if hasattr(model, "estimators_"):
model_details += " with {} estimators".format(
len(model.estimators_))
print(model_details + " with features", pair,
"has a score of", scores)
# Now plot the decision boundary using a fine mesh as input to a
# filled contour plot
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
# Plot either a single DecisionTreeClassifier or alpha blend the
# decision surfaces of the ensemble of classifiers
if isinstance(model, DecisionTreeClassifier):
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap)
else:
# Choose alpha blend level with respect to the number
# of estimators
# that are in use (noting that AdaBoost can use fewer estimators
# than its maximum if it achieves a good enough fit early on)
estimator_alpha = 1.0 / len(model.estimators_)
for tree in model.estimators_:
Z = tree.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, alpha=estimator_alpha, cmap=ListedColormap(['b','r']))
# Build a coarser grid to plot a set of ensemble classifications
# to show how these are different to what we see in the decision
# surfaces. These points are regularly space and do not have a
# black outline
xx_coarser, yy_coarser = np.meshgrid(
np.arange(x_min, x_max, plot_step_coarser),
np.arange(y_min, y_max, plot_step_coarser))
Z_points_coarser = model.predict(np.c_[xx_coarser.ravel(),
yy_coarser.ravel()]
).reshape(xx_coarser.shape)
#cs_points = plt.scatter(xx_coarser, yy_coarser, s=15,
# c=Z_points_coarser, cmap=cmap,
# edgecolors="none")
# Plot the training points, these are clustered together and have a blue-red outline
plt.scatter(X[:, 0], X[:, 1], c=y,
cmap=ListedColormap(['b','r']),
edgecolor='k', s=40)
plt.suptitle("Random forest classifier (2 classes, WG1 - blue, WG4 - red)")
plt.axis("tight")
plt.xlabel('Principal component 1')
plt.ylabel('Principal component 2')
#plt.savefig('Random_forest_visualisation.svg', format='svg', dpi=300)
# -
# # tSNE representation of features of all models
# +
n_samples = 24
# t_SNE parameter, 2D vs 3D
n_components = 2
(fig, subplots) = plt.subplots(1, 2, figsize=(15, 8))
perplexities = [30]
X = all_features_scaled[:,:]
y = cell_type_14
pca = PCA(n_components=2)
# get the cell positions in the new coordinates
X_r = pca.fit(all_features_scaled).transform(all_features_scaled)
red = y == 4
blue = y == 1
ax = subplots[0]
ax.scatter(X_r[red, 0], X_r[red, 1], c="r")
ax.scatter(X_r[blue, 0], X_r[blue, 1], c="b")
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
ax.set_title('PCA')
for i, perplexity in enumerate(perplexities):
ax = subplots[i + 1]
t0 = time()
tsne = manifold.TSNE(n_components=n_components, init='random',
random_state=0, perplexity=perplexity, metric='cosine',n_iter=20000)
Y = tsne.fit_transform(X)
t1 = time()
print("Ephys dataset, perplexity=%d in %.2g sec" % (perplexity, t1 - t0))
ax.set_title("Perplexity=%d" % perplexity)
ax.scatter(Y[red, 0], Y[red, 1], c="r")
ax.scatter(Y[blue, 0], Y[blue, 1], c="b")
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
ax.axis('tight')
plt.show()
# -
# # Train random forest classifier based on tSNE data
# +
# Plot the decision boundary only for Random forest classifier
# Parameters
n_classes = 2
n_estimators = 200
cmap = plt.cm.RdYlBu
plot_step = 0.1 # fine step width for decision surface contours
plot_step_coarser = 0.5 # step widths for coarse classifier guesses
RANDOM_SEED = 13 # fix the seed on each iteration
model=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='sqrt', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=n_estimators, n_jobs=1,
oob_score=True, random_state=0, verbose=0, warm_start=False)
pair=[0,1]
#X = Y[:, pair]
X=Y[:,pair]
y = cell_type_14
# Shuffle
idx = np.arange(X.shape[0])
np.random.seed(RANDOM_SEED)
np.random.shuffle(idx)
X = X[idx]
y = y[idx]
# Train
clf = clone(model)
clf = model.fit(X, y)
# get the scores of the trees
scores = clf.score(X, y)
# print the model score
print('Classifier performance, tSNE features only: ' + str(clf.oob_score_))
# Create a title for each column and the console by using str() and
# slicing away useless parts of the string
model_title = str(type(model)).split(
".")[-1][:-2][:-len("Classifier")]
model_details = model_title
if hasattr(model, "estimators_"):
model_details += " with {} estimators".format(
len(model.estimators_))
print(model_details + " with features", pair,
"has a score of", scores)
# Now plot the decision boundary using a fine mesh as input to a
# filled contour plot
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
# Plot either a single DecisionTreeClassifier or alpha blend the
# decision surfaces of the ensemble of classifiers
if isinstance(model, DecisionTreeClassifier):
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap)
else:
# Choose alpha blend level with respect to the number
# of estimators
# that are in use (noting that AdaBoost can use fewer estimators
# than its maximum if it achieves a good enough fit early on)
estimator_alpha = 1.0 / len(model.estimators_)
for tree in model.estimators_:
Z = tree.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, alpha=estimator_alpha, cmap=ListedColormap(['b','r']))
# Build a coarser grid to plot a set of ensemble classifications
# to show how these are different to what we see in the decision
# surfaces. These points are regularly space and do not have a
# black outline
xx_coarser, yy_coarser = np.meshgrid(
np.arange(x_min, x_max, plot_step_coarser),
np.arange(y_min, y_max, plot_step_coarser))
Z_points_coarser = model.predict(np.c_[xx_coarser.ravel(),
yy_coarser.ravel()]
).reshape(xx_coarser.shape)
#cs_points = plt.scatter(xx_coarser, yy_coarser, s=15,
# c=Z_points_coarser, cmap=cmap,
# edgecolors="none")
# Plot the training points, these are clustered together and have a blue-red outline
plt.scatter(X[:, 0], X[:, 1], c=y,
cmap=ListedColormap(['b','r']),
edgecolor='k', s=40)
plt.suptitle("Random forest classifier (2 classes, WG1 - blue, WG4 - red)")
plt.axis("tight")
plt.xlabel('tSNE dimension 1')
plt.ylabel('tSNE dimension 2')
#plt.savefig('Random_forest_visualisation_tSNE.svg', format='svg', dpi=300)
# -
# ### Compare PCA and tSNE by case
# +
cell_type_case=np.array(cell_type_14)
# WG1 group
cell_type_case[0]=11
cell_type_case[1]=11
cell_type_case[2]=11
cell_type_case[3]=11
cell_type_case[4]=11
cell_type_case[5]=1
cell_type_case[6]=1
cell_type_case[7]=1
cell_type_case[8]=1
cell_type_case[9]=1
cell_type_case[10]=1
cell_type_case[11]=1
# WG4 group
step=12
cell_type_case[step]=2
cell_type_case[step+1]=2
cell_type_case[step+2]=2
cell_type_case[step+3]=2
cell_type_case[step+4]=2
cell_type_case[step+5]=22
cell_type_case[step+6]=22
cell_type_case[step+7]=22
cell_type_case[step+8]=22
cell_type_case[step+9]=22
cell_type_case[step+10]=22
cell_type_case[step+11]=22
print(cell_type_case)
# +
# Doing PCA on data with only 2 labels
pca = PCA(n_components=2)
# get the cell positions in the new coordinates
X_r = pca.fit(all_features_scaled).transform(all_features_scaled)
# print the variance explained
print('explained variance ratio (first two components): %s'
% str(pca.explained_variance_ratio_))
# create the colors vectors
#colors = np.heaviside(cell_type-2,1)
# all indexes of WG
g1_idx=np.zeros(1)
g11_idx=np.zeros(1)
g2_idx=np.zeros(1)
g22_idx=np.zeros(1)
g1_idx=np.where(cell_type_case==1)
g11_idx=np.where(cell_type_case==11)
g2_idx=np.where(cell_type_case==2)
g22_idx=np.where(cell_type_case==22)
fig, ax = plt.subplots()
plt.scatter(X_r[g1_idx, 0],X_r[g1_idx, 1],c='blue',s=50,marker='o')
plt.scatter(X_r[g11_idx, 0],X_r[g11_idx, 1],c='blue',s=50,marker='x')
plt.scatter(X_r[g2_idx, 0],X_r[g2_idx, 1],c='red',s=50,marker='o')
plt.scatter(X_r[g22_idx, 0],X_r[g22_idx, 1],c='red',s=50,marker='x')
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.title('PCA on Granule cell model ephys (2 classes)')
plt.legend(['H17.06.015','H17.06.014','H16.06.013','H17.06.012'])
plt.xlabel('Principal component 1')
plt.ylabel('Principal component 2')
# save figure
#plt.savefig('PCA_WG1WG1TS_WG4.eps', format='eps', dpi=300)
# -
# ## tSNE and PCA representation: data split by patients
# +
n_samples = 24
# t_SNE parameter, 2D vs 3D
n_components = 2
(fig, subplots) = plt.subplots(1, 2, figsize=(15, 8))
perplexities = [30]
#X = all_features_scaled[:,:]
X = X_r
y = cell_type_case
wg1 = y == 1
wg11 = y == 11
wg2 = y == 2
wg22 = y == 22
ax = subplots[0]
ax.scatter(X[wg1, 0], X[wg1, 1],c='blue',s=50,marker='o')
ax.scatter(X[wg11, 0], X[wg11, 1],c='blue',s=50,marker='x')
ax.scatter(X[wg2, 0], X[wg2, 1],c='red',s=50,marker='o')
ax.scatter(X[wg22, 0], X[wg22, 1],c='red',s=50,marker='x')
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
ax.set_title('Original dataset')
for i, perplexity in enumerate(perplexities):
ax = subplots[i + 1]
t0 = time()
tsne = manifold.TSNE(n_components=n_components, init='random',
random_state=0, perplexity=perplexity, metric='cosine',n_iter=6000)
Y = tsne.fit_transform(X)
t1 = time()
print("Ephys dataset, perplexity=%d in %.2g sec" % (perplexity, t1 - t0))
ax.set_title("Perplexity=%d" % perplexity)
ax.scatter(Y[wg1, 0], Y[wg1, 1],c='blue',s=50,marker='o')
ax.scatter(Y[wg11, 0], Y[wg11, 1],c='blue',s=50,marker='x')
ax.scatter(Y[wg2, 0], Y[wg2, 1],c='red',s=50,marker='o')
ax.scatter(Y[wg22, 0], Y[wg22, 1],c='red',s=50,marker='x')
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
ax.axis('tight')
ax.legend(['H17.06.015','H17.06.014','H16.06.013','H17.06.012'])
#plt.show()
#plt.savefig('tSNE_PCA.eps', format='eps', dpi=300)
# -
| 31,053 |
/practice/seong/Untitled1.ipynb
|
1dfc3018d91033f88647d7cf652a0d71a1b11805
|
[] |
no_license
|
jwlee2218/zero
|
https://github.com/jwlee2218/zero
| 1 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 453,717 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernel_info:
# name: python3-azureml
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# # Optical Character Recognition
#
# 
#
# One of the most common Computer Vision challenges is to detect and interpret the text in an image. This kind of processing is usually known as *optical character recognition* (OCR).
#
# ## Use the Computer Vision service to find text in an image
#
# The **Computer Vision** service in Cognitive Services supports OCR tasks such as these:
#
# - An **OCR** API that you can use to read text in several languages. The API can be used synchronously and works well when you need to detect and read a small amount of text in an image.
# - A **Read** API that is optimized for larger documents. This API is used asynchronously and can be used for both printed and handwritten text.
#
# To use this service, create either a **Computer Vision** resource or a **Cognitive Services** resource.
#
# If you haven't done so already, create a **Cognitive Services** resource in your Azure subscription.
#
# > **Note**: If you already have a Cognitive Services resource, open its **Quick start** page in the Azure portal and copy its key and endpoint into the cell below. Otherwise, follow the steps below to create one.
#
# 1. In another browser tab, open the Azure portal (https://portal.azure.com) and sign in with your Microsoft account.
#
# 2. Click the **+Create a resource** button, search for *Cognitive Services*, and create a **Cognitive Services** resource with the following settings:
#     - **Subscription**: *your Azure subscription*.
#     - **Resource group**: *select or create a resource group with a unique name*.
#     - **Region**: *select any available region*.
#     - **Name**: *enter a unique name*.
#     - **Pricing tier**: S0
#     - **I confirm I have read and understood the notices**: selected.
# 3. Wait for the deployment to finish. Then go to your Cognitive Services resource and, on the **Overview** page, click the link to manage the keys for the service. You will need the endpoint and keys to connect to your Cognitive Services resource from client applications.
#
# ### Get the key and endpoint for your Cognitive Services resource
#
# To use your Cognitive Services resource, client applications need its authentication key and endpoint:
#
# 1. In the Azure portal, on the **Keys and Endpoint** page of your Cognitive Services resource, copy the **Key1** for your resource and paste it into the code below, replacing **YOUR_COG_KEY**.
# 2. Copy the **Endpoint** for your resource and paste it into the code below, replacing **YOUR_COG_ENDPOINT**.
# 3. Click **Run cell** (▷) to the left of the cell below to run its code.
# + gather={"logged": 1599694246277}
cog_key = 'YOUR_COG_KEY'
cog_endpoint = 'YOUR_COG_ENDPOINT'
print('Ready to use cognitive services at {} using key {}'.format(cog_endpoint, cog_key))
# + [markdown] nteract={"transient": {"deleting": false}}
# Now that you've set up the key and endpoint, you can use your Computer Vision resource to extract text from an image.
#
# We'll start with the **OCR** API, which lets you synchronously analyze an image and read any text it contains. In this case, we have an advertising image for the fictional Northwind Traders company that includes some text. Run the cell below to read it.
# + gather={"logged": 1599694257280}
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw
import os
# %matplotlib inline
# Get a Computer Vision service client
computervision_client = ComputerVisionClient(cog_endpoint, CognitiveServicesCredentials(cog_key))
# Read the image file
image_path = os.path.join('data', 'ocr', 'advert.jpg')
image_stream = open(image_path, "rb")
# Use the Computer Vision service to find text in the image
read_results = computervision_client.recognize_printed_text_in_stream(image_stream)
# Process the text line by line
for region in read_results.regions:
for line in region.lines:
# Read the words in the line of text
line_text = ''
for word in line.words:
line_text += word.text + ' '
print(line_text.rstrip())
# Open the image to display it.
fig = plt.figure(figsize=(7, 7))
img = Image.open(image_path)
draw = ImageDraw.Draw(img)
plt.axis('off')
plt.imshow(img)
# -
# The text found in the image is organized into a hierarchical structure of regions, lines, and words, and the code reads this structure to retrieve the results.
#
# In the results, the text that was read is shown above the image.
#
# ## Show bounding boxes
#
# The results also include *bounding box* coordinates for the lines of text and the individual words found in the image. Run the cell below to see the bounding boxes for the lines of text in the advertising image used above.
# + gather={"logged": 1599694266106}
# Open the image to display it.
fig = plt.figure(figsize=(7, 7))
img = Image.open(image_path)
draw = ImageDraw.Draw(img)
# Process the text line by line
for region in read_results.regions:
for line in region.lines:
# Show the position of the line of text
l,t,w,h = list(map(int, line.bounding_box.split(',')))
draw.rectangle(((l,t), (l+w, t+h)), outline='magenta', width=5)
# Read the words in the line of text
line_text = ''
for word in line.words:
line_text += word.text + ' '
print(line_text.rstrip())
# Show the image with the text locations highlighted
plt.axis('off')
plt.imshow(img)
# -
# In the output, the bounding box of each line of text is shown as a rectangle on the image.
#
# ## Use the Read API
#
# The OCR API used above works well for images that contain only a small amount of text. When you need to read larger amounts of text, such as scanned documents, you can use the **Read** API. This requires a multi-step process:
#
# 1. Submit an image to the Computer Vision service to be read and analyzed asynchronously.
# 2. Wait for the analysis operation to complete.
# 3. Retrieve the results of the analysis.
#
# Run the cell below to use this process to read the text in a scanned letter addressed to the manager of a Northwind Traders store.
# + gather={"logged": 1599694312346}
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials
import matplotlib.pyplot as plt
from PIL import Image
import time
import os
# %matplotlib inline
# Read the image file
image_path = os.path.join('data', 'ocr', 'letter.jpg')
image_stream = open(image_path, "rb")
# Get a Computer Vision service client
computervision_client = ComputerVisionClient(cog_endpoint, CognitiveServicesCredentials(cog_key))
# Submit a request to read the written text in the image and get the operation ID
read_operation = computervision_client.read_in_stream(image_stream,
raw=True)
operation_location = read_operation.headers["Operation-Location"]
operation_id = operation_location.split("/")[-1]
# Wait for the asynchronous operation to complete
while True:
read_results = computervision_client.get_read_result(operation_id)
if read_results.status not in [OperationStatusCodes.running]:
break
time.sleep(1)
# Once the operation is complete, process the text line by line
if read_results.status == OperationStatusCodes.succeeded:
for result in read_results.analyze_result.read_results:
for line in result.lines:
print(line.text)
# Open and display the image.
print('\n')
fig = plt.figure(figsize=(12,12))
img = Image.open(image_path)
plt.axis('off')
plt.imshow(img)
# -
# Review the results. They contain a full transcription of the letter, which consists mostly of printed text with a handwritten signature. The original image of the letter is shown below the OCR results (you may need to scroll down to see it).
#
# ## Read handwritten text
#
# In the previous example, the request to analyze the image specified a text recognition mode that optimized the operation for *printed* text. Even so, the handwritten signature was read.
#
# This ability to read handwritten text is extremely useful. For example, suppose you've written a note containing a shopping list and you want to use an app on your phone to read the note and transcribe it.
#
# Run the cell below to see an example of a read operation on a handwritten shopping list.
# + gather={"logged": 1599694340593}
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials
import matplotlib.pyplot as plt
from PIL import Image
import time
import os
# %matplotlib inline
# Read the image file
image_path = os.path.join('data', 'ocr', 'note.jpg')
image_stream = open(image_path, "rb")
# Get a Computer Vision service client
computervision_client = ComputerVisionClient(cog_endpoint, CognitiveServicesCredentials(cog_key))
# Submit a request to read the written text in the image and get the operation ID
read_operation = computervision_client.read_in_stream(image_stream,
raw=True)
operation_location = read_operation.headers["Operation-Location"]
operation_id = operation_location.split("/")[-1]
# Wait for the asynchronous operation to complete
while True:
read_results = computervision_client.get_read_result(operation_id)
if read_results.status not in [OperationStatusCodes.running]:
break
time.sleep(1)
# Once the operation is complete, process the text line by line
if read_results.status == OperationStatusCodes.succeeded:
for result in read_results.analyze_result.read_results:
for line in result.lines:
print(line.text)
# Open and display the image.
print('\n')
fig = plt.figure(figsize=(12,12))
img = Image.open(image_path)
plt.axis('off')
plt.imshow(img)
# -
# ## More information
#
# For more information about using the Computer Vision service for OCR, see the [Computer Vision documentation](https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-recognizing-text).
| 11,347 |
/Seaborn.ipynb
|
33806283a07ea7ca5a2e05a68a597f3401f71781
|
[] |
no_license
|
amanagarwal22/pandas
|
https://github.com/amanagarwal22/pandas
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 386,104 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Datetime
#
# Python has the datetime module to help deal with timestamps in your code. Time values are represented with the time class. Times have attributes for hour, minute, second, and microsecond. They can also include time zone information. The arguments to initialize a time instance are optional, but the default of 0 is unlikely to be what you want.
#
# ##### datetime class
# ##### time class
# ##### date class
# ##### timedelta class
# ##### timezone class
# ##### tzinfo class
#
# ## Time
# Let's take a look at how we can extract time information from the datetime module. We can create a timestamp by specifying datetime.time(hour,minute,second,microsecond)
# +
import datetime
from pytz import timezone
timedefault = datetime.time()
t = datetime.time(4, 20, 1,123456)
utctime = datetime.time(4, 20, 1,123456, timezone('UTC'))
Austime = datetime.time(4, 20, 1,123456, timezone('Australia/Sydney'))
Ustime = datetime.time(4, 20, 1,123456, timezone('America/St_Johns'))
# Let's show the different components
print(timedefault)
print(timedefault.tzinfo)
print("######################################")
print(t)
print("######################################")
print(utctime)
print("######################################")
print(Austime)
print("######################################")
print(Ustime)
print("######################################")
print('hour :', t.hour)
print('minute:', t.minute)
print('second:', t.second)
print('microsecond:', t.microsecond)
print('tzinfo:', Austime.tzinfo)
# -
# ##### Get all timezones
from pytz import all_timezones
print(type(all_timezones))
print(list(all_timezones))
# ### convert current utc time to another timezone
# +
from datetime import datetime
import pytz
#print(datetime.time())
utcmoment_naive = datetime.utcnow()
print(utcmoment_naive)
print(utcmoment_naive.tzinfo)
utcmoment = utcmoment_naive.replace(tzinfo=pytz.utc)
print(utcmoment)
# print "utcmoment_naive: {0}".format(utcmoment_naive) # python 2
print("utcmoment_naive: {0}".format(utcmoment_naive))
print("utcmoment: {0}".format(utcmoment))
localFormat = "%Y-%m-%d %H:%M:%S:%f"
timezones = ['America/Los_Angeles', 'Europe/Madrid', 'America/Puerto_Rico','Asia/Kathmandu']
print("######################")
for tz in timezones:
localDatetime = utcmoment.astimezone(pytz.timezone(tz))
print(localDatetime.strftime(localFormat))
#localFormat = "%Y$%m$%d - %H$%M$%S$%f"
localFormat = "%Y-%m-%d - %H:%M:%S:%f"
# +
from pytz import all_timezones
utcmoment_naive = datetime.utcnow()
print(utcmoment_naive)
print(utcmoment_naive.tzinfo)
utcmoment = utcmoment_naive.replace(tzinfo=pytz.utc)
print("Current Utc Time :", utcmoment)
print("##############################################")
utcmoment_india = datetime.now()
tzinfo_india = timezone('Asia/Kathmandu')
# note: for pytz timezones, tzinfo_india.localize(naive_dt) is preferred over replace(tzinfo=...), which can attach an incorrect (LMT) offset
utcmoment_india = utcmoment_india.replace(tzinfo = tzinfo_india)
print("Current India Time", utcmoment_india)
print("##############################################")
i =0
for timezonename in list(all_timezones):
if(i>500):
break
print("timezone name: {} - current time in that time zone - {}".format(timezonename,utcmoment_india.astimezone(pytz.timezone(timezonename))))
i+=1
# +
#### Timezones starting with 'B'
# -
from pytz import all_timezones
utcmoment_naive = datetime.now()
print(utcmoment_naive)
print(utcmoment_naive.tzinfo)
utcmoment = datetime.utcnow()
print(utcmoment)
print(utcmoment.tzinfo)
utcmoment = utcmoment.replace(tzinfo=pytz.utc)
print(utcmoment.tzinfo)
print("Current Utc Time :", utcmoment)
for timezonename in list(all_timezones):
if(timezonename.startswith('B')):
print("timezone name: {} - current time in that time zone - {}".format(timezonename,utcmoment.astimezone(pytz.timezone(timezonename))))
print(type(pytz.timezone(timezonename)),pytz.timezone(timezonename))
# +
import datetime as dt
t = dt.time(4, 20, 1)
# Let's show the different components
print(t)
print('hour :', t.hour)
print('minute:', t.minute)
print('second:', t.second)
print('microsecond:', t.microsecond)
print('tzinfo:', t.tzinfo)
print(dt.datetime.now(pytz.timezone('Asia/Katmandu')))
print(dt.datetime.now(pytz.timezone('Brazil/Acre')))
# -
# ### DateTime
# ##### able to represent both time and date details
import datetime
print(datetime.datetime.today())
print(datetime.datetime.today().ctime())
print('tuple:', datetime.datetime.today().timetuple())
print('ordinal:', datetime.datetime.today().toordinal()) # proleptic Gregorian ordinal: January 1 of year 1 is day 1
print('Year :', datetime.datetime.today().year)
print('Month:', datetime.datetime.today().month)
print('Day :', datetime.datetime.today().day)
print('Hour :', datetime.datetime.today().hour)
print('Minute :', datetime.datetime.today().minute)
print('Seconds :', datetime.datetime.today().second)
print('MicroSecond :', datetime.datetime.today().microsecond)
print('timezone :', datetime.datetime.today().tzinfo)
# Note: A time instance only holds values of time, and not a date associated with the time.
#
# We can also check the min and max values a time of day can have in the module:
# +
import datetime
print('Earliest :', datetime.time.min)
print('Latest :', datetime.time.max)
#print('Earliest Hour :', datetime.time.hour.min)
#print('Final Hour :', datetime.time.hour.min)
print('Resolution:', datetime.time.resolution)
# -
# The min and max class attributes reflect the valid range of times in a single day.
# ## Date
# datetime (as you might suspect) also allows us to work with date timestamps. Calendar date values are represented with the date class. Instances have attributes for year, month, and day. It is easy to create a date representing today’s date using the today() class method.
#
# Let's see some examples:
# ##### able to represent only date details
#from datetime import date
import datetime
today = datetime.date.today()
print(today)
print('ctime:', today.ctime())
print('tuple:', today.timetuple())
print('ordinal:', today.toordinal()) # proleptic Gregorian ordinal: January 1 of year 1 is day 1
print('Year :', today.year)
print('Month:', today.month)
print('Day :', today.day)
datetime.date.today()
#datetime.time()
today = datetime.date.today()
print(today)
today = datetime.date(today.year, today.month, day = today.day-2)
print(today)
today = datetime.date(today.year, today.month-2, day = today.day-2)
print(today)
# +
import time
pattern = '%d.%m.%Y %H:%M:%S:%f'
#pattern = '%H:%M:%S:%f'
print(datetime.datetime.now())
date_time = datetime.datetime.now().strftime(pattern)
print(date_time)
print(type(time.strptime(date_time, pattern)))
print(time.strptime(date_time, pattern))
epoch = int(time.mktime(time.strptime(date_time, pattern)))
#epoch = int(time.mktime(time.strptime(date_time, '%S')))
#epoch = int(time.mktime(datetime.datetime.now()))
print(epoch)
# -
d1 = datetime.date(2015, 3, 11)
print('d1:', d1)
print(d1.toordinal())
print("#####################################")
d2 = d1.replace(year=2014)
print('d2:', d2)
print(d2.toordinal())
print("#####################################")
d3 = d1.replace(year=2020, month=2,day=29)
print('d3:', d3)
print(d3.toordinal())
d1.replace(2020,9,30)
# As with time, the range of date values supported can be determined using the min and max attributes.
print('Earliest :', datetime.date.min)
print('Latest :', datetime.date.max)
print('Resolution:', datetime.date.resolution)
# Another way to create new date instances uses the replace() method of an existing date. For example, you can change the year, leaving the day and month alone.
# +
# d1 = datetime.date(2015, 3, 11)
print('d1:', d1)
d2 = d1.replace(year=2014)
print('d2:', d2)
d3 = d1.replace(month=12)
print('d3:', d3)
d4 = d1.replace(day=20)
print('d4:', d4)
d5 = d1.replace(1)
print('d5:', d5)
d6 = d1.replace(1,12)
print('d6:', d6)
d7 = d1.replace(9991,12,30)
print('d7:', d7)
# -
# # Arithmetic
# We can perform arithmetic on date objects to check for time differences. For example:
d1
d2
d1-d2
# #### time delta on datetime objects
current_datetime = datetime.datetime.now()
current_datetime
time_old_1 = datetime.datetime(2018,10,15)
time_old_1
current_datetime - time_old_1
print(current_datetime - time_old_1)
print((current_datetime - time_old_1).total_seconds())
print((current_datetime - time_old_1).total_seconds()/60)
print((current_datetime - time_old_1).total_seconds()/60/60)
print((current_datetime - time_old_1).total_seconds()/60/60/24)
# ### Formatting datetime objects into different string formats
from datetime import datetime
print(str(datetime.now()))
print(type(datetime.now()))
print("####################################################")
print(datetime.now().strftime('%Y-%m-%d-%H %M %S %f'))
print("####################################################")
print(datetime.now().strftime('%d-%H %M %S %f'))
print("####################################################")
print(datetime.now().strftime('%H %M %S %f'))
print("####################################################")
print(datetime.now().strftime('%H$%M$%S$%f'))
print("####################################################")
# +
from datetime import datetime
newtime = str(datetime.now().strftime('%Y-%m-%d-%H %M %S'))
print(newtime,"\t",type(newtime))
print("####################################################")
# the format directives passed to strptime must match the layout of the datetime string being parsed
Previoustime = datetime.strptime('2011-11-15 07 54 38 002300','%Y-%m-%d %H %M %S %f')
print(Previoustime,"\t",type(Previoustime))
print(type(Previoustime))
print(Previoustime.year)
print(Previoustime.month)
print(Previoustime.day)
print(Previoustime.hour)
print(Previoustime.minute)
print(Previoustime.second)
print(Previoustime.microsecond)
print("####################################################")
newtime1 = datetime.strptime(newtime,'%Y-%m-%d-%H %M %S')
print(newtime1)
# -
################################################
print("#######################Calculating time difference #############################")
timedifference = newtime1 - Previoustime
print(timedifference, type(timedifference))
secstime = (newtime1 - Previoustime).total_seconds()
print("total seconds", secstime)
totalminutes = secstime /60
# total timedifference between two timestamps in seconds, gives us the uptime between previous ping and current ping
print("total minutes",totalminutes)
totalhours = totalminutes /60
print("total hours",totalhours)
totaldays = totalhours /24
print("total days",totaldays)
from datetime import datetime
def datetimediffsecs(oldtime):
newtime = str(datetime.now().strftime('%Y-%m-%d %H %M %S'))
Previoustime = datetime.strptime(str(oldtime), '%Y-%m-%d %H:%M:%S.%f')
Currenttime = datetime.strptime(newtime, '%Y-%m-%d %H %M %S')
secstime = (Currenttime - Previoustime).total_seconds() # total timedifference between two timestamps in seconds, gives us the uptime between previous ping and current ping
return secstime
print(datetime.now())
print(datetimediffsecs(datetime.now()))
# This gives us the difference in days between the two dates. You can use the timedelta class to represent durations in various units of time (days, seconds, minutes, hours, etc.); a minimal sketch follows below.
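# A minimal sketch of the point above (the values and variable names here are arbitrary examples): timedelta objects represent durations and can be added to or subtracted from date and datetime values.
# +
from datetime import timedelta
one_week = timedelta(weeks=1)
shift = timedelta(days=3, hours=4, minutes=30)
print(one_week)                          # 7 days, 0:00:00
print(shift.total_seconds())             # the whole duration expressed in seconds
print(datetime.now() + shift)            # move a datetime forward by the duration
print(datetime.now().date() - one_week)  # date objects support the same arithmetic
# -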
#
# Great! You should now have a basic understanding of how to use datetime with Python to work with timestamps in your code!
| 11,690 |
/src/Boundary_Value_Problems/.ipynb_checkpoints/Eigen_Value_With_Potentials-checkpoint.ipynb
|
bc9a714260a8d32acb73238f0255a4500c949de5
|
[] |
no_license
|
kartavya2000/PHY473
|
https://github.com/kartavya2000/PHY473
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 37,020 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Solve the following Schrödinger equation:
# $$ -\frac{1}{2}y'' + V(x)\,y = E\,y $$
# and find the eigenvalues $E$.
#
# The procedure uses both `odeint` and a forward-Euler integrator, as in the earlier eigenvalue problems.
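#
# Concretely, the cells below treat this as a shooting problem (this note only restates the update already used in the loops that follow): the ODE is integrated across the interval for a trial eigenvalue $a$, and the value of the solution at the right boundary, $y_N(a)$, is driven to zero by the secant iteration
# $$ a_{n+1} = a_n - y_N(a_n)\,\frac{a_n - a_{n-1}}{y_N(a_n) - y_N(a_{n-1})}, $$
# which stops once successive estimates of the eigenvalue agree to within the tolerance `tol`.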
from numpy import *
import matplotlib.pyplot as plt
from scipy.integrate import odeint
def V(x):
return (1.1-x**2)**-1
def ydot(y, x):
return (y[1], 2.0*(V(x)-a)*y[0])
N = 100
x = linspace(-1.0, 1.0, N)
slope_a = 0.1 #initial slope can be taken any value,
#afterwards impose normalization condition
yinit = array([0, slope_a])
a = 4.0
y = odeint(ydot, yinit, x)
yfinal = y[-1, 0]
a_p = a
a = 5.0
tol = 0.001
iter = 0
while((abs(a-a_p)>tol) and (iter<10)):
y = odeint(ydot, yinit, x)
yfinal_p, yfinal = yfinal, y[-1, 0]
a_p, a = a, a - yfinal*(a-a_p)/(yfinal-yfinal_p)
iter +=1
plt.plot(x,y[:,0])
print(a)
# +
from numpy import *
import matplotlib.pyplot as plt
def V(x):
return (1.1-x**2)**-1
def ydot(y, x):
    return array([y[1], 2.0*(V(x)-a)*y[0]])  # a plain array works here; np.matrix is unnecessary
def EulerForward(ydot, yinit, x):
n = len(x)
y = zeros((n, len(yinit)))
dx = x[1]-x[0]
y[0,:] = yinit[:]
for i in range(1, n):
y[i, :] = y[i-1, :] + ydot(y[i-1, :], x[i-1]) * dx
return y
N = 100
x = linspace(-1.0, 1.0, N)
slope_a = 0.1 #initial slope can be taken any value,
#afterwards impose normalization condition
yinit = array([0, slope_a])
a = 0
y = EulerForward(ydot, yinit, x)
yfinal = y[-1, 0]
a_p = a
a = 1
tol = 0.001
iter = 0
while((abs(a-a_p)>tol) and (iter<10)):
y = EulerForward(ydot, yinit, x)
yfinal_p, yfinal = yfinal, y[-1, 0]
a_p, a = a, a - yfinal*(a-a_p)/(yfinal-yfinal_p)
iter +=1
plt.plot(x,y[:,0])
print(a)
fit_transform(X_test))
results = classifier.predict(X_test)
f = open('Test_set_results.txt','w')
for i in results:
if i == 0:
f.write('e')
else:
f.write('p')
f.write("\n")
f.close()
| 2,191 |
/COVID-19 Analysis on Bangladehs/COVID_19_Analysis_on_Bangladesh.ipynb
|
af23f5ada5a6f8693914cf327e0bb24f1c336394
|
[
"MIT"
] |
permissive
|
utshabkg/Machine-Learning-Projects
|
https://github.com/utshabkg/Machine-Learning-Projects
| 2 | 0 |
MIT
| 2020-10-09T20:02:41 | 2020-08-12T17:26:09 | null |
Jupyter Notebook
| false | false |
.py
| 26,317 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/AmitHasanShuvo/Machine-Learning-Projects/blob/master/COVID_19_Analysis_on_Bangladesh.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" id="TtBZGVkozLYa" colab_type="code" colab={}
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import plotly.express as px
import plotly.graph_objs as go
from plotly.subplots import make_subplots
from fbprophet.plot import plot_plotly, add_changepoints_to_plot
import plotly.offline as py
from datetime import date, timedelta
from statsmodels.tsa.arima_model import ARIMA
from sklearn.cluster import KMeans
from fbprophet import Prophet
# + id="Clie6MZ-zLYd" colab_type="code" colab={}
dataset=pd.read_csv("../input/corona-virus-report/covid_19_clean_complete.csv")
# + id="KIjnTIWWzLYg" colab_type="code" colab={}
dataset.shape
# + id="Yfhe7f-mzLYi" colab_type="code" colab={}
dataset.describe()
# + id="vn8VvCvmzLYk" colab_type="code" colab={}
daily = dataset.sort_values(['Date','Country/Region','Province/State'])
latest = dataset[dataset.Date == daily.Date.max()]
latest.head()
# + id="fnmOo2OgzLYp" colab_type="code" colab={}
data=latest.rename(columns={ "Country/Region": "country", "Province/State": "state","Confirmed":"confirm","Deaths": "death","Recovered":"recover"})
data.head()
# + id="ot43MS1czLYs" colab_type="code" colab={}
dgc=data.groupby("country")[['confirm', 'death', 'recover']].sum().reset_index()
dgc.head()
# + id="ixpYALCczLYv" colab_type="code" colab={}
import folium
worldmap = folium.Map(location=[32.4279,53.6880 ], zoom_start=4,tiles='Stamen Toner')
for Lat, Long, state in zip(data['Lat'], data['Long'],data['state']):
folium.CircleMarker([Lat, Long],
radius=5,
color='red',
popup =('State: ' + str(state) + '<br>'),
fill_color='red',
fill_opacity=0.7 ).add_to(worldmap)
worldmap
# + id="mwYTWThFzLY0" colab_type="code" colab={}
fig = px.bar(dgc[['country', 'confirm']].sort_values('confirm', ascending=False),
y="confirm", x="country", color='country',
log_y=True, template='ggplot2', title='Confirmed Cases')
fig.show()
# + id="AheDtO68zLY3" colab_type="code" colab={}
fig = px.bar(dgc[['country', 'recover']].sort_values('recover', ascending=False),
y="recover", x="country", color='country',
log_y=True, template='ggplot2', title='Recovered Cases')
fig.show()
# + id="A60ZcR2WzLY7" colab_type="code" colab={}
fig = px.bar(dgc[['country', 'death']].sort_values('death', ascending=False),
y="death", x="country", color='country',
log_y=True, template='ggplot2', title='Death')
fig.show()
# + id="gbqmqSWrzLY-" colab_type="code" colab={}
bd_data = dataset[dataset['Country/Region']=='Bangladesh']
bdata = bd_data.tail(22)
bdata.head(50)
# + id="GM1tgiLrzLZB" colab_type="code" colab={}
plt.figure(figsize=(23,10))
plt.bar(bdata.Date, bdata.Confirmed,label="Confirm")
plt.xlabel('Date')
plt.ylabel("Count")
plt.legend(frameon=True, fontsize=12)
plt.title("Confirmation",fontsize=50)
plt.show()
plt.figure(figsize=(23,10))
plt.bar(bdata.Date, bdata.Recovered,label="Recovery")
plt.xlabel('Date')
plt.ylabel("Count")
plt.legend(frameon=True, fontsize=12)
plt.title("Recoverey",fontsize=50)
plt.show()
plt.figure(figsize=(23,10))
plt.bar(bdata.Date, bdata.Deaths,label="Death")
plt.xlabel('Date')
plt.ylabel("Count")
plt.legend(frameon=True, fontsize=12)
plt.title("Death",fontsize=50)
plt.show()
# + id="XNE46iDnzLZE" colab_type="code" colab={}
plt.figure(figsize=(23,10))
plt.bar(bdata.Date, bdata.Confirmed,label="Confirm")
plt.bar(bdata.Date, bdata.Recovered,label="Recovery")
plt.bar(bdata.Date, bdata.Deaths,label="Death")
plt.xlabel('Date')
plt.ylabel("Count")
plt.legend(frameon=True, fontsize=12)
plt.title("Confirmation vs Recoverey vs Death",fontsize=50)
plt.show()
f, ax = plt.subplots(figsize=(23,10))
ax=sns.scatterplot(x="Date", y="Confirmed", data=bdata,
color="black",label = "Confirm")
ax=sns.scatterplot(x="Date", y="Recovered", data=bdata,
color="red",label = "Recovery")
ax=sns.scatterplot(x="Date", y="Deaths", data=bdata,
color="blue",label = "Death")
plt.plot(bdata.Date,bdata.Confirmed,zorder=1,color="black")
plt.plot(bdata.Date,bdata.Recovered,zorder=1,color="red")
plt.plot(bdata.Date,bdata.Deaths,zorder=1,color="blue")
# + id="h2euidTxzLZJ" colab_type="code" colab={}
bdata['Confirmed_new'] = bdata['Confirmed']-bdata['Confirmed'].shift(1)
bdata['Recovered_new'] = bdata['Recovered']-bdata['Recovered'].shift(1)
bdata['Deaths_new'] = bdata['Deaths']-bdata['Deaths'].shift(1)
# + id="eYj6PbnHzLZM" colab_type="code" colab={}
bdata = bdata.fillna(0)
# + id="rQB__CmJzLZO" colab_type="code" colab={}
plt.figure(figsize=(23,10))
plt.bar(bdata.Date, bdata.Confirmed_new,label="Confirm Cases")
plt.xlabel('Date')
plt.ylabel("Count")
plt.legend(frameon=True, fontsize=12)
plt.title('Confirmed Cases',fontsize = 35)
plt.show()
plt.figure(figsize=(23,10))
plt.bar(bdata.Date, bdata.Recovered_new,label="Recovered Cases")
plt.xlabel('Date')
plt.ylabel("Count")
plt.legend(frameon=True, fontsize=12)
plt.title('Recovered Cases',fontsize = 35)
plt.show()
plt.figure(figsize=(23,10))
plt.bar(bdata.Date, bdata.Deaths_new,label="Deaths")
plt.xlabel('Date')
plt.ylabel("Count")
plt.legend(frameon=True, fontsize=12)
plt.title('Deaths',fontsize = 35)
plt.show()
# + id="7cNMz4XZzLZR" colab_type="code" colab={}
bdata.head()
# + id="WPcW2niXzLZV" colab_type="code" colab={}
f, ax = plt.subplots(figsize=(23,10))
ax=sns.scatterplot(x="Date", y="Confirmed_new", data=bdata,
color="black",label = "Confirm")
ax=sns.scatterplot(x="Date", y="Recovered_new", data=bdata,
color="red",label = "Recovery")
ax=sns.scatterplot(x="Date", y="Deaths_new", data=bdata,
color="blue",label = "Death")
plt.plot(bdata.Date,bdata.Confirmed_new,zorder=1,color="black")
plt.plot(bdata.Date,bdata.Recovered_new,zorder=1,color="red")
plt.plot(bdata.Date,bdata.Deaths_new,zorder=1,color="blue")
# + id="DaPRHbkDzLZX" colab_type="code" colab={}
dgd=data.groupby("Date")[['confirm', 'death', 'recover']].sum().reset_index()
dgd.head()
# + id="oMQYQma-zLZa" colab_type="code" colab={}
r_cm = float(dgd.recover/dgd.confirm)
d_cm = float(dgd.death/dgd.confirm)
# + id="AyMJjswJzLZd" colab_type="code" colab={}
print("The percentage of recovery after confirmation is "+ str(r_cm*100) )
print("The percentage of death after confirmation is "+ str(d_cm*100) )
# + id="RlY60Xk2zLZf" colab_type="code" colab={}
global_data = pd.read_csv("../input/novel-corona-virus-2019-dataset/covid_19_data.csv")
# + id="H1vNmQ_OzLZi" colab_type="code" colab={}
# This function smooths data, thanks to Dan Pearson. We will use it to smooth the data for growth factor.
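# Restated as a formula (this only re-expresses what the loop below computes), for interior rows the result is a
# normalized, kernel-weighted average:
#     smoothed[i] = sum_{j=-imax..imax} w**abs(j) * data[i+j] / sum_{j=-imax..imax} w**abs(j)
# Rows within imax of either end of the series pick up NaNs from the shifted copies.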
def smoother(inputdata,w,imax):
data = 1.0*inputdata
data = data.replace(np.nan,1)
data = data.replace(np.inf,1)
#print(data)
smoothed = 1.0*data
normalization = 1
for i in range(-imax,imax+1):
if i==0:
continue
smoothed += (w**abs(i))*data.shift(i,axis=0)
normalization += w**abs(i)
smoothed /= normalization
return smoothed
def growth_factor(confirmed):
confirmed_iminus1 = confirmed.shift(1, axis=0)
confirmed_iminus2 = confirmed.shift(2, axis=0)
return (confirmed-confirmed_iminus1)/(confirmed_iminus1-confirmed_iminus2)
def growth_ratio(confirmed):
confirmed_iminus1 = confirmed.shift(1, axis=0)
return (confirmed/confirmed_iminus1)
# This is a function which plots (for an input country) the active, confirmed, and recovered cases, deaths, and the growth factor.
def plot_country_active_confirmed_recovered(country):
# Plots Active, Confirmed, and Recovered Cases. Also plots deaths.
country_data = global_data[global_data['Country/Region']==country]
table = country_data.drop(['SNo','Province/State', 'Last Update'], axis=1)
table['ActiveCases'] = table['Confirmed'] - table['Recovered'] - table['Deaths']
table2 = pd.pivot_table(table, values=['ActiveCases','Confirmed', 'Recovered','Deaths'], index=['ObservationDate'], aggfunc=np.sum)
table3 = table2.drop(['Deaths'], axis=1)
# Growth Factor
w = 0.5
table2['GrowthFactor'] = growth_factor(table2['Confirmed'])
table2['GrowthFactor'] = smoother(table2['GrowthFactor'],w,5)
# 2nd Derivative
table2['2nd_Derivative'] = np.gradient(np.gradient(table2['Confirmed'])) #2nd derivative
table2['2nd_Derivative'] = smoother(table2['2nd_Derivative'],w,7)
#Plot confirmed[i]/confirmed[i-1], this is called the growth ratio
table2['GrowthRatio'] = growth_ratio(table2['Confirmed'])
table2['GrowthRatio'] = smoother(table2['GrowthRatio'],w,5)
#Plot the growth rate, we will define this as k in the logistic function presented at the beginning of this notebook.
table2['GrowthRate']=np.gradient(np.log(table2['Confirmed']))
table2['GrowthRate'] = smoother(table2['GrowthRate'],0.5,3)
# horizontal line at growth rate 1.0 for reference
x_coordinates = [1, 100]
y_coordinates = [1, 1]
#plots
table2['Deaths'].plot(title='Deaths')
plt.show()
table3.plot()
plt.show()
table2['GrowthFactor'].plot(title='Growth Factor')
plt.plot(x_coordinates, y_coordinates)
plt.show()
table2['2nd_Derivative'].plot(title='2nd_Derivative')
plt.show()
table2['GrowthRatio'].plot(title='Growth Ratio')
plt.plot(x_coordinates, y_coordinates)
plt.show()
table2['GrowthRate'].plot(title='Growth Rate')
plt.show()
return
# + id="YueiH8kczLZk" colab_type="code" colab={}
plot_country_active_confirmed_recovered('Bangladesh')
# + id="amvgC8DPzLZp" colab_type="code" colab={}
prophet=bd_data.iloc[: , [4,5 ]].copy()
prophet.head()
prophet.columns = ['ds','y']
prophet.head()
# + id="1JGjVBvVzLZr" colab_type="code" colab={}
m=Prophet()
m.fit(prophet)
future=m.make_future_dataframe(periods=15)
forecast=m.predict(future)
forecast
# + id="28pOQT9QzLZu" colab_type="code" colab={}
cnfrm = forecast.loc[:,['ds','trend']]
cnfrm = cnfrm[cnfrm['trend']>0]
cnfrm=cnfrm.tail(15)
cnfrm.columns = ['Date','Confirm']
cnfrm.head()
# + id="3Zq-_BGQzLZx" colab_type="code" colab={}
figure = plot_plotly(m, forecast)
py.iplot(figure)
figure = m.plot(forecast,xlabel='Date',ylabel='Confirmed Count')
# + id="p03dCg7azLZ0" colab_type="code" colab={}
figure=m.plot_components(forecast)
# + id="z9zwXPjTzLZ3" colab_type="code" colab={}
prophet_rec=bd_data.iloc[: , [4,7 ]].copy()
prophet_rec.head()
prophet_rec.columns = ['ds','y']
prophet_rec.head()
# + id="rTDNK1YqzLZ8" colab_type="code" colab={}
m1=Prophet()
m1.fit(prophet_rec)
future_rec=m1.make_future_dataframe(periods=15)
forecast_rec=m1.predict(future_rec)
forecast_rec
# + id="sohawR9uzLZ-" colab_type="code" colab={}
rec = forecast_rec.loc[:,['ds','trend']]
rec = rec[rec['trend']>0]
rec=rec.tail(15)
rec.columns = ['Date','Recovery']
rec.head()
# + id="7zKlm8iRzLaV" colab_type="code" colab={}
figure_rec = plot_plotly(m1, forecast_rec)
py.iplot(figure_rec)
figure_rec = m1.plot(forecast_rec,xlabel='Date',ylabel='Recovery Count')
# + id="q3kDcYddzLaX" colab_type="code" colab={}
figure_rec=m1.plot_components(forecast_rec)
# + id="ZZmEBguHzLaZ" colab_type="code" colab={}
prophet_dth=bd_data.iloc[: , [4,6 ]].copy()
prophet_dth.head()
prophet_dth.columns = ['ds','y']
prophet_dth.head()
# + id="DLtEwN2gzLab" colab_type="code" colab={}
m2=Prophet()
m2.fit(prophet_dth)
future_dth=m2.make_future_dataframe(periods=15)
forecast_dth=m2.predict(future_dth)
forecast_dth
# + id="1TEA5M9yzLad" colab_type="code" colab={}
dth = forecast_dth.loc[:,['ds','trend']]
dth = dth[dth['trend']>0]
dth=dth.tail(15)
dth.columns = ['Date','Death']
dth.head()
# + id="dkOShz2dzLaf" colab_type="code" colab={}
figure_dth = plot_plotly(m2, forecast_dth)
py.iplot(figure_dth)
figure_dth = m2.plot(forecast_dth,xlabel='Date',ylabel='Death Count')
# + id="vDeP7agzzLak" colab_type="code" colab={}
figure_dth=m2.plot_components(forecast_dth)
# + id="dzcUq_JOzLan" colab_type="code" colab={}
prediction = cnfrm
prediction['Recover'] = rec.Recovery
prediction['Death'] = dth.Death
prediction.head()
# + id="MGU1eCMuzLap" colab_type="code" colab={}
pr_pps = float(prediction.Recover.sum()/prediction.Confirm.sum())
pd_pps = float(prediction.Death.sum()/prediction.Confirm.sum())
# + id="WaKY2WIhzLar" colab_type="code" colab={}
print("The percentage of Predicted recovery after confirmation is "+ str(pr_pps*100) )
print("The percentage of Predicted Death after confirmation is "+ str(pd_pps*100) )
# + id="uBPCdOEXzLau" colab_type="code" colab={}
| 13,329 |
/Classifying offensive in 3 categories.ipynb
|
a9ae3013f7fd2a1c34371da994a1f2d7a2403c61
|
[] |
no_license
|
shatakshisingh24/Offense_evaluation_in_memes
|
https://github.com/shatakshisingh24/Offense_evaluation_in_memes
| 6 | 2 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 33,610 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # PERCEPTRON
# ##### LOAD THE DATA
# +
import numpy as np
# each row contains Sepal length in cm, Sepal width in cm and type (0:setosa|1:versicolor)
filename = '../../dataset/iris-data.csv'
data = np.loadtxt(filename, delimiter=',')
data[:10] # print the first 10 items
# -
# ##### VISUALIZE THE DATA
# It is a good idea to visualize the data so we can confirm that the data is linearly separable.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
plt.grid()
for i in range(len(data)) :
point = data[i]
if point[2] == 0 :
color = 'r' # setosas will appear in red
else:
color = 'b' # versicolor will appear in blue
plt.scatter(point[0], point[1], c=color)
plt.xlabel('Sepal Length [cm]')
plt.ylabel('Sepal Width [cm]');
# +
from IPython.display import Image
from IPython.core.display import HTML
Image(url= 'https://farm9.staticflickr.com/8383/8675226902_e72273713f_k.jpg', width=350, height=350)
# -
# ##### DEFINE TRAINING AND TESTING SETS
# +
from sklearn.model_selection import train_test_split
target = data[:, -1]
data = data[:, :-1]
X_train, X_test, y_train, y_test = train_test_split(
data, target, test_size=0.30, random_state=42)
X_train.shape
# -
# ##### TRAIN THE MODEL
# +
from sklearn.metrics import mean_squared_error
np.random.seed(93)
class Perceptron(object):
def __init__(self, learning=0.01, n_epochs=20):
self.learning = learning
self.n_epochs = n_epochs
def predict(self, X):
pred = np.dot(X, self.w_) + self.b_
return 1.0 if pred >= 0.0 else 0.0
def fit(self, X, y):
        # initialize the weights and bias
self.w_ = np.random.uniform(0, 1, X.shape[1])
self.b_ = np.random.uniform(0, 1, 1)
self.costList_ = []
for ep in range(self.n_epochs):
cost_epoch = 0 # MSE
for Xi, target in zip(X, y):
# cost function
pred = self.predict(Xi)
cost = np.square(target - pred)
cost_epoch += float(cost/len(X)) # MSE calculation
# update weights and bias
update = self.learning * (target - pred) # Once the difference is zero it will stop updating
self.w_ += update * Xi
self.b_ += update
# store MSE through every epoch iteration
self.costList_.append(cost_epoch)
# print model improvements
print("Epoch: {:04}\tLoss: {:06.5f}".format((ep+1), cost_epoch), end='')
print("\t\tRegression: {:.2f}(X1) + {:.2f}(X2) + {:.2f}".format(self.w_[0],
self.w_[1],
float(self.b_)))
return self
# -
# ##### EXECUTE THE MODEL
clf = Perceptron()
clf.fit(X_train, y_train)
# ##### VISUALIZE MODEL IMPROVEMENT
plt.plot(clf.costList_)
plt.xlabel('epochs')
plt.ylabel('MSE');
plt.show()
# ##### VISUALIZE CONFUSION MATRIX
# +
import seaborn as sns
from sklearn.metrics import confusion_matrix
# this will make the plot better-looking
cmap = plt.cm.get_cmap('Reds', 10)
# create a confusion matrix for training set
print('Computing confusion matrix on training sets..')
matrix_labels = ['0:setosa', '1:versicolor']
train_predictions = [clf.predict(item) for item in X_train]
mat = confusion_matrix(y_train, train_predictions)
sns.heatmap(mat.T, square=True, annot=True, fmt='d',
cbar=True, xticklabels=matrix_labels,
yticklabels=matrix_labels, cmap=cmap)
plt.xlabel('Train Label set')
plt.ylabel('Predictions')
plt.show()
# create a confusion matrix for testing set
print('Computing confusion matrix on testing sets..')
matrix_labels = ['0:setosa', '1:versicolor']
test_predictions = [clf.predict(item) for item in X_test]
mat = confusion_matrix(y_test, test_predictions)
sns.heatmap(mat.T, square=True, annot=True, fmt='d',
cbar=True, xticklabels=matrix_labels,
yticklabels=matrix_labels, cmap=cmap)
plt.xlabel('Test Label set')
plt.ylabel('Predictions')
plt.show()
# -
# ###### COMPARING EVERY SINGLE RESULT
# +
for i, point in enumerate(data):
# Plot the samples with labels = 0
out = clf.predict(point)
if out==0:
plt.scatter(point[0], point[1], s=120, marker='_', linewidths=2, color='blue')
# Plot the samples with labels = 1
else:
plt.scatter(point[0], point[1], s=120, marker='+', linewidths=2, color='blue')
plt.xlabel('Sepal Length [cm]')
plt.ylabel('Sepal Width [cm]')
plt.show()
for i in range(len(y_test)):
print('expectedValue vs prediction:\t {} | {}'.format(y_test[i], clf.predict(X_test[i])))
# -
# ##### TRY IT YOURSELF
# +
sepal_length = 5.5  # the model was trained on sepal measurements, so name the inputs accordingly
sepal_width = 2.3
# 0: Iris-setosa | 1: Iris-versicolor
print('Iris-versicolor' if clf.predict([sepal_length, sepal_width]) else 'Iris-setosa')
# -
fs
f.close()
# + id="h4K3w8Ww7i_q" colab_type="code" colab={}
# Preparing the embedding layer
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(X_train)
X_train = tokenizer.texts_to_sequences(X_train)
X_test = tokenizer.texts_to_sequences(X_test)
vocab_size = len(tokenizer.word_index) + 1
maxlen = 15
X_train = pad_sequences(X_train, padding='post', maxlen=maxlen)
X_test = pad_sequences(X_test, padding='post', maxlen=maxlen)
# + id="UqUgV-JWAvKu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="42ad6f9a-281d-4f1c-d34d-a7ace3fe0084"
vocab_size = len(tokenizer.word_index) + 1
print(vocab_size)
# + id="JTgD3N3OA1F6" colab_type="code" colab={}
embedding_matrix = np.zeros((vocab_size, 300))
for word, index in tokenizer.word_index.items():
embedding_vector = embeddings_dictionary.get(word)
if embedding_vector is not None:
embedding_matrix[index] = embedding_vector
# + id="15sUaRoYA8r9" colab_type="code" colab={}
## Create model
model_glove = Sequential()
model_glove.add(Embedding(vocab_size, 300, input_length=maxlen, weights=[embedding_matrix], trainable=False))
model_glove.add(Dropout(0.2))
model_glove.add(Conv1D(64, 5, activation='relu'))
model_glove.add(MaxPooling1D(pool_size=4))
model_glove.add(LSTM(100))
model_glove.add(Dense(3, activation='softmax'))
model_glove.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# + id="fU2IWHvap3J1" colab_type="code" colab={}
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y_train = le.fit_transform(y_train)
y_test = le.transform(y_test)
from keras.utils import to_categorical
y_train = to_categorical(y_train)
# + id="o6bI3QAxp6s6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="58366654-aa39-44bc-cdfc-d752e17fe345"
history = model_glove.fit(X_train, y_train, batch_size=2048, epochs=7, verbose=1, validation_split=0.45)
# + id="n-_AWrGap-V0" colab_type="code" colab={}
| 7,261 |
/Competitions/NLP_Disaster Competition/Kaggle_PhilCulliton/Transcript.ipynb
|
fcb197e15b45b1425fda21e0e8297be4375d6fbc
|
[] |
no_license
|
EDA-KING/TIL
|
https://github.com/EDA-KING/TIL
| 0 | 0 | null | 2021-02-03T06:28:39 | 2021-02-02T14:13:55 | null |
Jupyter Notebook
| false | false |
.py
| 10,354 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="PowFTA2-yajz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} executionInfo={"status": "ok", "timestamp": 1594952626676, "user_tz": 300, "elapsed": 2468, "user": {"displayName": "Sravani Garikapati", "photoUrl": "https://lh4.googleusercontent.com/-jeffqYeBdYM/AAAAAAAAAAI/AAAAAAAABM8/5_s3K0I3R94/s64/photo.jpg", "userId": "01506930317160125465"}} outputId="2ecca6c0-ee68-4205-d117-09ed244c8cb9"
# Importing required libraries
import numpy
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout, Input, BatchNormalization
from keras.constraints import maxnorm
from keras.models import Model
from keras.optimizers import SGD, Adam
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.common.image_dim_ordering()
# + id="Xj0PLIuRyiUf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} executionInfo={"status": "ok", "timestamp": 1594952633383, "user_tz": 300, "elapsed": 6763, "user": {"displayName": "Sravani Garikapati", "photoUrl": "https://lh4.googleusercontent.com/-jeffqYeBdYM/AAAAAAAAAAI/AAAAAAAABM8/5_s3K0I3R94/s64/photo.jpg", "userId": "01506930317160125465"}} outputId="74ee9551-09f3-4034-9f3a-8d7387033ded"
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# normalize inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
yp=y_test
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
print(num_classes)
# + id="UpcazlmUyyHz" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594952636422, "user_tz": 300, "elapsed": 778, "user": {"displayName": "Sravani Garikapati", "photoUrl": "https://lh4.googleusercontent.com/-jeffqYeBdYM/AAAAAAAAAAI/AAAAAAAABM8/5_s3K0I3R94/s64/photo.jpg", "userId": "01506930317160125465"}}
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(32, 32, 3), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
# + id="oxC5nY9pzGEy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 832} executionInfo={"status": "ok", "timestamp": 1594952640935, "user_tz": 300, "elapsed": 516, "user": {"displayName": "Sravani Garikapati", "photoUrl": "https://lh4.googleusercontent.com/-jeffqYeBdYM/AAAAAAAAAAI/AAAAAAAABM8/5_s3K0I3R94/s64/photo.jpg", "userId": "01506930317160125465"}} outputId="9382926f-ac36-49d7-dc76-a1625318d615"
# Compile model
epochs = 2
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
print(model.summary())
# + id="Ejr074HdzoKd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 121} executionInfo={"status": "ok", "timestamp": 1594953596703, "user_tz": 300, "elapsed": 947183, "user": {"displayName": "Sravani Garikapati", "photoUrl": "https://lh4.googleusercontent.com/-jeffqYeBdYM/AAAAAAAAAAI/AAAAAAAABM8/5_s3K0I3R94/s64/photo.jpg", "userId": "01506930317160125465"}} outputId="2d50e333-0815-4efd-a8f0-7d18db663172"
# Fit the model
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=32)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
# + id="xNUmMI5ozz0M" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594953603421, "user_tz": 300, "elapsed": 534, "user": {"displayName": "Sravani Garikapati", "photoUrl": "https://lh4.googleusercontent.com/-jeffqYeBdYM/AAAAAAAAAAI/AAAAAAAABM8/5_s3K0I3R94/s64/photo.jpg", "userId": "01506930317160125465"}}
model.save("my_model.h5")
# + id="eNWNw2Z39mtu" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594953606130, "user_tz": 300, "elapsed": 978, "user": {"displayName": "Sravani Garikapati", "photoUrl": "https://lh4.googleusercontent.com/-jeffqYeBdYM/AAAAAAAAAAI/AAAAAAAABM8/5_s3K0I3R94/s64/photo.jpg", "userId": "01506930317160125465"}}
import tensorflow as tf
from tensorflow import keras
loaded_model = keras.models.load_model('my_model.h5')
# + id="GGg26t-R90FZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} executionInfo={"status": "ok", "timestamp": 1594953616285, "user_tz": 300, "elapsed": 476, "user": {"displayName": "Sravani Garikapati", "photoUrl": "https://lh4.googleusercontent.com/-jeffqYeBdYM/AAAAAAAAAAI/AAAAAAAABM8/5_s3K0I3R94/s64/photo.jpg", "userId": "01506930317160125465"}} outputId="8e4b6cfd-e700-4452-9949-18e0cd214ce2"
#Predicting the first four images of dataset
for i in range(0,4):
y=model.predict_classes(X_test[[i],:])
print("actual",yp[i],"predicted",y[0])
# + id="s63rqs5g-B27" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} executionInfo={"status": "ok", "timestamp": 1594953621159, "user_tz": 300, "elapsed": 552, "user": {"displayName": "Sravani Garikapati", "photoUrl": "https://lh4.googleusercontent.com/-jeffqYeBdYM/AAAAAAAAAAI/AAAAAAAABM8/5_s3K0I3R94/s64/photo.jpg", "userId": "01506930317160125465"}} outputId="02a590a4-1e41-472b-81af-97fe09bc18ed"
import matplotlib.pyplot as plt
#Plotting graphs for Accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# + id="XTUQpRPw-bhC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} executionInfo={"status": "ok", "timestamp": 1594953627346, "user_tz": 300, "elapsed": 520, "user": {"displayName": "Sravani Garikapati", "photoUrl": "https://lh4.googleusercontent.com/-jeffqYeBdYM/AAAAAAAAAAI/AAAAAAAABM8/5_s3K0I3R94/s64/photo.jpg", "userId": "01506930317160125465"}} outputId="8e5fdd3e-9c66-41df-d076-da1f3ae9ff9e"
#Plotting graphs for the Loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# + id="KXeh7q9t-4op" colab_type="code" colab={}
| 7,637 |
/Embedding_labeling_RNN.ipynb
|
5835f38a06f9bc68704d6c2274cc50cce432ed8d
|
[] |
no_license
|
Fusroda-h/Deepfake_Detection
|
https://github.com/Fusroda-h/Deepfake_Detection
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 19,367 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report,confusion_matrix
import operator
import random
def knn(x_test,x_data,y_data,k):
    # number of samples in the training data
    x_data_size = x_data.shape[0]
    # tile x_test so it can be subtracted from every training sample
    diffMat = np.tile(x_test,(x_data_size,1)) - x_data
    # square the differences
    sqDiffMat = diffMat**2
    # sum over the feature axis
    sqDistance = sqDiffMat.sum(axis = 1)
    # square root gives the Euclidean distances
    distance = sqDistance**0.5
    # indices that sort the distances in ascending order
    sortedDistaces = distance.argsort()
    classCount={}
    for i in range(k):
        votelabel= y_data[sortedDistaces[i]]
        # count the votes for each label
        classCount[votelabel] = classCount.get(votelabel,0)+1
    # sort classCount by vote count (operator.itemgetter(1)) in descending order
    sortedClassCount=sorted(classCount.items(),key=operator.itemgetter(1),reverse=True)
    # return the label with the most votes
    return sortedClassCount[0][0]
# +
iris = datasets.load_iris()
#x_train,x_test,y_train,y_test = train_test_split(iris.data,iris.target,test_size=0.2)  # alternative: split the data, 0.2 for testing and 0.8 for training
# shuffle the data
data_size = iris.data.shape[0]
index = [i for i in range(data_size)]
random.shuffle(index)
iris.data = iris.data[index]
iris.target = iris.target[index]
# split the dataset
test_size = 40
x_train = iris.data[test_size:]
x_test = iris.data[:test_size]
y_train = iris.target[test_size:]
y_test = iris.target[:test_size]
predictions=[]
for i in range(x_test.shape[0]):
predictions.append(knn(x_test[i],x_train,y_train,5))
print(classification_report(y_test,predictions))
).__init__()
conv1 = nn.Conv2d(3, 8, (3,3), 1)
pool1 = nn.MaxPool2d(2)
conv2 = nn.Conv2d(8, 8, (5,5), 1)
pool2 = nn.MaxPool2d(2)
conv3 = nn.Conv2d(8, 16, (5,5), 1)
pool3 = nn.MaxPool2d(2)
conv4 = nn.Conv2d(16, 16, (5,5), 1)
pool4 = nn.MaxPool2d(4)
self.conv_module = nn.Sequential(
conv1,
nn.ReLU(),
pool1,
conv2,
nn.ReLU(),
pool2,
conv3,
nn.ReLU(),
pool3,
conv4,
nn.ReLU(),
pool4
)
fc1 = nn.Linear(576, 1024)
dp1 = nn.Dropout(0.5)
fc2 = nn.Linear(1024, 16)
d2 = nn.Dropout(0.5)
fc3 = nn.Linear(16, 1024)
self.fc_module = nn.Sequential(
fc1,
dp1,
fc2,
nn.LeakyReLU(),
d2,
fc3
)
def forward(self, x):
out = self.conv_module(x)
dim = 1
for d in out.size()[1:]:
dim = dim * d
out = out.view(-1, dim)
out = self.fc_module(out)
return out
class LSTM_variable_input(torch.nn.Module) :
def __init__(self, INPUT_DIM, HIDDEN_DIM, N_LAYERS):
super().__init__()
self.hidden_dim = HIDDEN_DIM
self.dropout = nn.Dropout(0.3)
self.lstm = nn.LSTM(input_size = INPUT_DIM, hidden_size = HIDDEN_DIM, num_layers = N_LAYERS, batch_first=True)
self.linear = nn.Linear(self.hidden_dim, 1)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x = self.dropout(x)
#x = pack_padded_sequence(x, s, batch_first=True, enforce_sorted=False)
out_pack, (ht, ct) = self.lstm(x)
print(out_pack)
print(ht.shape)
out = self.linear(out_pack[-1])
print(out)
out = self.sigmoid(out)
return out
# +
Embedding_Model = CNNClassifier().double()
Embedding_Model.load_state_dict(torch.load('c:/Users/jungw/Desktop/new/GAN/dfdc_pract/deepfake_database/model_2nd.pt'))
print(Embedding_Model)
model = LSTM_variable_input(1024, 1024, 2).double()
model
# -
lr=0.005
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
x = torch.rand([1,3,1024])
x = torch.tensor([[1,2],[3,4]])
z=x.unsqueeze(1)
print(z,z.shape)
# +
num_epochs = 1
num_batches = len(trn_loader)
use_cuda = False
model.train()
trn_loss_list = []
val_loss_list = []
for epoch in range(num_epochs):
trn_loss = 0.0
for i,data in enumerate(trn_loader):
x = torch.DoubleTensor(data['image']/255)
label = data['label']
if use_cuda:
x = torch.cuda.DoubleTensor(x.cuda())
label = label.cuda()
inputs = Embedding_Model(x.squeeze(0))
model.zero_grad()
print(inputs.unsqueeze(0).shape)
output = model(inputs.unsqueeze(0))
loss = criterion(output, ((label+1)/2).type(torch.DoubleTensor))
loss.backward()
optimizer.step()
# trn_loss summary
trn_loss += loss.item()
# del (memory issue)
del loss
del output
if (i+1) % 100 == 0: # every 100 mini-batches
with torch.no_grad(): # very very very very important!!!
val_loss = 0.0
for j, val in enumerate(val_loader):
val_x = torch.DoubleTensor(data['image']/255)
val_label = data['label']
if use_cuda:
val_x = torch.cuda.DoubleTensor(val_x.cuda())
val_label = val_label.cuda()
inputs_val = Embedding_Model(val_x.squeeze(0))
# forward propagation
output = model(inputs_val.unsqueeze(0))
v_loss = criterion(output, ((val_label+1)/2).type(torch.DoubleTensor))
val_loss += v_loss
print("epoch: {}/{} | step: {}/{} | trn loss: {:.8f} | val loss: {:.8f}".format(
epoch+1, num_epochs, i+1, num_batches, trn_loss / 100, val_loss / len(val_loader)
))
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': trn_loss/100
}, 'c:/Users/jungw/Desktop/new/GAN/dfdc_pract/deepfake_database/rnn_model_1st.pt')
trn_loss_list.append(trn_loss/100)
val_loss_list.append(val_loss/len(val_loader))
trn_loss = 0.0
# -
| 6,555 |
/nDGP_implementation/Plots_python_Analysis/nDGP_smooth_screen.ipynb
|
6f187a4e031493fd10cbfb8d4650534197697cb2
|
[] |
no_license
|
GraCosPA/MG_evolution
|
https://github.com/GraCosPA/MG_evolution
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 573,953 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Web Scraping 101
#importing packages
from selenium import webdriver
import pandas as pd
driver = webdriver.Chrome(executable_path='/home/hazzy0606/chromedriver')
driver.get('https://forums.edmunds.com/discussion/2864/general/x/entry-level-luxury-performance-sedans/p702')
# '''
# #getting user_id
#
# userid_el = driver.find_elements_by_xpath('//*[@id="Comment_5561090"]/div/div[2]/div[1]/span[1]/a[2]')[0]
# userid = userid_el.text
#
# #getting date
#
# user_date = driver.find_elements_by_xpath('//*[@id="Comment_5561090"]/div/div[2]/div[2]/span[1]/a/time')[0]
# date = user_date.get_attribute('title')
#
# #getting comment
#
# user_message = driver.find_elements_by_xpath('//*[@id="Comment_5561090"]/div/div[3]/div/div[1]')[0]
# comment = user_message.text
#
# '''
# +
comments = pd.DataFrame(columns = ['Date', 'user_id', 'comments'])
#extracting all comment ids
ids = driver.find_elements_by_xpath("//*[contains(@id,'Comment_')]")
comment_ids = []
for i in ids:
comment_ids.append(i.get_attribute('id'))
for x in comment_ids:
#extract the date
user_date = driver.find_elements_by_xpath('//*[@id="' + x +'"]/div/div[2]/div[2]/span[1]/a/time')[0]
date = user_date.get_attribute('title')
#extract user id
userid_el = driver.find_elements_by_xpath('//*[@id="' + x +'"]/div/div[2]/div[1]/span[1]/a[2]')[0]
userid = userid_el.text
#extract comment
user_message = driver.find_elements_by_xpath('//*[@id="' + x +'"]/div/div[3]/div/div[1]')[0]
comment = user_message.text
#adding the above to dataframe
comments.loc[len(comments)] = [date,userid,comment]
#viewing dataframe
comments
# -
    pow_ndgp_delta_z_all.append(np.loadtxt(addressGev_ndgp+"/ndgp_pk"+str(i).zfill(3)+"_delta.dat"))
pow_ndgp_screen_delta_z_all.append(np.loadtxt(address_screen+"/ndgp_pk"+str(i).zfill(3)+"_delta.dat"))
pow_GR_delta_z_all.append(np.loadtxt(addressGev_lcdm+"lcdm_newt_pk"+str(i).zfill(3)+"_delta.dat"))
pow_ndgp_smooth_delta_z_all_r0.append(np.loadtxt(address_screen_smooth_r0+"/ndgp_pk_smooth"+str(i).zfill(3)+"_delta.dat"))
pow_ndgp_smooth_delta_z_all_r1.append(np.loadtxt(address_screen_smooth_r1+"/ndgp_pk_smooth"+str(i).zfill(3)+"_delta.dat"))
pow_ndgp_smooth_delta_z_all_r2.append(np.loadtxt(address_screen_smooth_r2+"/ndgp_pk_smooth"+str(i).zfill(3)+"_delta.dat"))
pow_ndgp_smooth_delta_z_all_r5.append(np.loadtxt(address_screen_smooth_r5+"/ndgp_pk_smooth"+str(i).zfill(3)+"_delta.dat"))
# -
# # matter power spectrum Plots
# +
plt.figure(figsize=(20,15))
z_list=[100, 50, 30,10, 3, 1, 0]
plt.figure(1)
#####################
#####################
#####################
string=r"$ H0r_c=0.5, Nparticles=128^3, Boxsize=200Mpc/h$";
ax = plt.gca()
ax.tick_params(axis = 'both', which = 'major', labelsize = 18)
ax.tick_params(axis = 'both', which = 'minor', labelsize = 18)
for i in range (6,7):
plt.loglog(pow_ndgp_delta_z_all[i][:,0], (pow_ndgp_delta_z_all[i][:,1])/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-4],linestyle='solid',lw=2.5,label=r"$\frac{(ndgp)}{(GR)}$" )
plt.loglog(pow_ndgp_screen_delta_z_all[i][:,0], (pow_ndgp_screen_delta_z_all[i][:,1])/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-6],linestyle='dashed',lw=2.5,label=r"$\frac{(ndgp.screen)}{(GR)}$" )
plt.loglog(pow_ndgp_smooth_delta_z_all_r0[i][:,0], (pow_ndgp_smooth_delta_z_all_r0[i][:,1])/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-5],linestyle='dashed',lw=2.5,label=r"$\frac{(ndgp.smooth)}{(GR)}$, R=0" )
    plt.loglog(pow_ndgp_smooth_delta_z_all_r1[i][:,0], (pow_ndgp_smooth_delta_z_all_r1[i][:,1])/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-2],linestyle='dashed',lw=2.5,label=r"$\frac{(ndgp.smooth)}{(GR)}$, R=1Mpc/h" )
    plt.loglog(pow_ndgp_smooth_delta_z_all_r2[i][:,0], (pow_ndgp_smooth_delta_z_all_r2[i][:,1])/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-0],linestyle='dashed',lw=2.5,label=r"$\frac{(ndgp.smooth)}{(GR)}$, R=2Mpc/h" )
plt.loglog(pow_ndgp_smooth_delta_z_all_r5[i][:,0], (pow_ndgp_smooth_delta_z_all_r5[i][:,1])/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-3],linestyle='dashed',lw=2.5,label=r"$\frac{(ndgp.smooth)}{(GR)}$, R=5Mpc/h" )
ax.hlines(y=1.25, xmin=0.01, xmax=20, linewidth=2, color='black')
# ax.hlines(y=1.15, xmin=0.01, xmax=20, linewidth=2, color='black')
plt.legend(bbox_to_anchor=(0.02, 0.88, 0.26, .102), loc=1,ncol=1,fontsize=16, mode="expand", borderaxespad=0.)
plt.xlabel("k[h/Mpc]",fontsize=18)
plt.ylabel(r"$\mathcal{P} $",fontsize=18)
# plt.xlim(0.0088,20)
# plt.ylim(0,2.5)
plt.grid(True)
plt.title(string+" "+", z="+str(z_list[i]))
plt.yticks(np.arange(1.0,1.7,0.05))
# ax.hlines(y=1.03, xmin=0.01, xmax=20, linewidth=2, color='black')
# ax.hlines(y=1.15, xmin=0.01, xmax=20, linewidth=2, color='black')
plt.xlim(0.0088,1.)
plt.ylim(1.0,1.5)
# plt.grid(True)
# plt.savefig('./Gev-Class_Images/CsCs.jpg', format='jpg', dpi=100)
plt.show()
# +
plt.figure(figsize=(20,15))
z_list=[100, 50, 30,10, 3, 1, 0]
plt.figure(1)
#####################
#####################
#####################
ax = plt.gca()
ax.tick_params(axis = 'both', which = 'major', labelsize = 18)
ax.tick_params(axis = 'both', which = 'minor', labelsize = 18)
for i in range (6,7):
# plt.loglog(pow_ndgp_delta_z_all[i][:,0], (pow_ndgp_delta_z_all[i][:,1])/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-4],linestyle='solid',lw=2.5,label=r"$\frac{(ndgp)}{(GR)}$, H0r_c=0.5, z= "+str(z_list[i]) )
# plt.loglog(pow_ndgp_screen_delta_z_all[i][:,0], (pow_ndgp_screen_delta_z_all[i][:,1])/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-4],linestyle='dashed',lw=2.5,label=r"$\frac{(ndgp-screen)}{(GR)}$, H0r_c=0.5, z= "+str(z_list[i]) )
plt.loglog(pow_ndgp_smooth_delta_z_all_r0[i][:,0], (pow_ndgp_smooth_delta_z_all_r0[i][:,1])/(pow_ndgp_delta_z_all[i][:,1]),color="red",linestyle='dashed',lw=2.5,label=r"over ndgp" )
plt.loglog(pow_ndgp_smooth_delta_z_all_r0[i][:,0], (pow_ndgp_smooth_delta_z_all_r0[i][:,1])/(pow_ndgp_screen_delta_z_all[i][:,1]),color="blue",linestyle='dashed',lw=2.5,label=r"over screen" )
# plt.loglog(pow_ndgp_smooth_delta_z_all[i][:,0], (pow_ndgp_smooth_delta_z_all[i][:,1])/(pow_GR_delta_z_all[i][:,1]),color="blue",linestyle='dashed',lw=2.5,label=r"over GR" )
# plt.loglog(pow_ndgp_delta_z_all[i][:,0], (pow_ndgp_delta_z_all[i][:,1]),color=ColorsI[i-3],linestyle='dashed',lw=2.5,label=r"ndgp" )
# plt.loglog(pow_GR_delta_z_all[i][:,0], (pow_ndgp_screen_delta_z_all[i][:,1]),color="r",linestyle='dashed',lw=2.5,label=r"lcdm" )
plt.legend(bbox_to_anchor=(0.02, 0.68, 0.26, .102), loc=1,ncol=1,fontsize=16, mode="expand", borderaxespad=0.)
plt.xlabel("k[h/Mpc]",fontsize=18)
plt.ylabel(r"$\mathcal{P} $",fontsize=18)
# plt.xlim(0.0088,20)
# plt.ylim(0,2.5)
plt.grid(True)
# plt.ylim(1.0,2.0)
plt.grid(True)
plt.show()
# +
plt.figure(figsize=(20,15))
z_list=[100, 50, 30,10, 3, 1, 0]
plt.figure(1)
#####################
#####################
#####################
#Blue,red,purple,olivedrab,darkblue,salmon,blueviolet,yellowgreen
#SubplotI Check the pi in class and Gevolution
# Hconf*pi comparison
# plt.subplot(211)
ax = plt.gca()
ax.tick_params(axis = 'both', which = 'major', labelsize = 18)
ax.tick_params(axis = 'both', which = 'minor', labelsize = 18)
for i in range (6,7):
plt.semilogx(pow_ndgp05_delta_z_all[i][:,0],pow_ndgp05_delta_z_all[i][:,1]/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-4],linestyle='-',lw=2.5,label=r"$\frac{\mathcal{P}_{\delta_m} (ndgp)}{\mathcal{P}_{\delta_m} (GR)}$, H0r_c=0.5, z= "+str(z_list[i]) )
plt.semilogx(pow_ndgp05_screen_delta_z_all[i][:,0], pow_ndgp05_screen_delta_z_all[i][:,1]/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-2],linestyle='--',lw=2.5,label=r"$\frac{\mathcal{P}_{\delta_m} (screen)}{\mathcal{P}_{\delta_m} (GR)}$, H0r_c=0.5, z= "+str(z_list[i]) )
ax.hlines(y=1.25, xmin=0.01, xmax=20, linewidth=2, color='black')
# ax.hlines(y=1.15, xmin=0.01, xmax=20, linewidth=2, color='black')
plt.legend(bbox_to_anchor=(0.02, 0.78, 0.26, .102), loc=1,ncol=1,fontsize=16, mode="expand", borderaxespad=0.)
plt.xlabel("k[h/Mpc]",fontsize=18)
plt.ylabel(r"$\mathcal{P} $",fontsize=18)
# plt.xlim(0.0088,20)
# plt.ylim(0,2.5)
plt.grid(True)
# ##Subplot 2
# plt.subplot(212)
# ax = plt.gca()
# ax.tick_params(axis = 'both', which = 'major', labelsize = 18)
# ax.tick_params(axis = 'both', which = 'minor', labelsize = 18)
# for i in range (1,5):
# plt.loglog(pow_gr_delta_zall[i][:,0], np.abs(pow_ndgp_delta_zall[i][:,1]-pow_gr_delta_zall[i+2][:,1])/pow_gr_delta_zall[i][:,1],color=tableau_color[i],linestyle='dashed',lw=2.5, label=r"$\frac{\mathcal{P}_{\delta_m} (ndgp) - \mathcal{P}_{\delta_m}(GR)}{\mathcal{P}_{\delta_m} (GR)}$, z="+str(z_list[i]) )
# plt.axvspan(0.6, 14, alpha=0.5, color='pink')
# plt.legend(bbox_to_anchor=(0.02, 0.88, 0.36, .102), loc=1,ncol=1,fontsize=15, mode="expand", borderaxespad=0.)
# plt.xlabel("k[h/Mpc]",fontsize=18)
# # plt.ylabel(r"$\mathcal{P} $",fontsize=18)
# plt.xlim(0.0088,0.7)
plt.ylim(1.0,1.5)
plt.grid(True)
# plt.savefig('./Gev-Class_Images/CsCs.jpg', format='jpg', dpi=100)
plt.show()
# +
plt.figure(figsize=(18,10))
z_list=[100, 50, 30,10, 3, 1, 0]
plt.figure(1)
ax = plt.gca()
ax.tick_params(axis = 'both', which = 'major', labelsize = 18)
ax.tick_params(axis = 'both', which = 'minor', labelsize = 18)
for i in range (2,7):
plt.semilogx(pow_ndgp_screen_delta_z_all[i][:,0],( pow_ndgp_screen_delta_z_all[i][:,1]/pow_GR_delta_z_all[i][:,1]) -(pow_ndgp_delta_z_all[i][:,1]/pow_GR_delta_z_all[i][:,1]) ,color=ColorsI[i-4],linestyle='dashed',lw=2.5,label=r"$\frac{\mathcal{P}_{\delta_m} (screen)}{\mathcal{P}_{\delta_m} (GR)} -\frac{\mathcal{P}_{\delta_m} (ndgp)}{\mathcal{P}_{\delta_m} (GR)} $, z= "+str(z_list[i]) )
# plt.loglog(pow_ndgp_delta_z_all[i][:,0], pow_ndgp_delta_z_all[i][:,1]/pow_GR_delta_z_all[i][:,1],color=ColorsI[i-4],linestyle='solid',lw=2.5,label=r"$\frac{\mathcal{P}_{\delta_m} (ndgp)}{\mathcal{P}_{\delta_m} (GR)}$, z= "+str(z_list[i]) )
plt.legend(bbox_to_anchor=(0.02, 0.88, 0.26, .102), loc=1,ncol=1,fontsize=16, mode="expand", borderaxespad=0.)
plt.xlabel("k[h/Mpc]",fontsize=18)
plt.ylabel(r"$\mathcal{P} $",fontsize=18)
# plt.xlim(0.0088,20)
# plt.ylim(0,2.5)
plt.grid(True)
# plt.savefig('./Gev-Class_Images/CsCs.jpg', format='jpg', dpi=100)
plt.show()
# -
# #### Phi power spectrum Plots
# +
plt.figure(figsize=(15,15))
z_list=[100, 50, 30,10, 3, 1, 0]
plt.figure(1)
#####################
#####################
#####################
#Blue,red,purple,olivedrab,darkblue,salmon,blueviolet,yellowgreen
#SubplotI Check the pi in class and Gevolution
# Hconf*pi comparison
x = np.linspace(0, 10, 1000)
plt.subplot(211)
ax = plt.gca()
ax.tick_params(axis = 'both', which = 'major', labelsize = 18)
ax.tick_params(axis = 'both', which = 'minor', labelsize = 18)
for i in range (0,maxNum-1):
# plt.loglog(pow_gr_phi_zall[i][:,0], pow_gr_phi_zall[i][:,1],color=tableau20[i],linestyle='dashed',lw=2.5, label=r"$\Phi$(ndgp), z="+str(z_list[i]) )
plt.loglog(pow_ndgp_phi_z_all[i][:,0], pow_ndgp_screen_phi_z_all[i][:,1]/pow_GR_screen_delta_z_all[i][:,1],color=ColorsI[i],linestyle='solid',lw=2.5,label=r"$\frac{\mathcal{P}_{\Phi}(ndgp)}{\mathcal{P}_{\Phi}(GR)}$, z= "+str(z_list[i]) )
plt.axvspan(0., 14, alpha=0.5, color='pink')
# plt.loglog(power_gr_phi_zall[i+1][:,0], power_gr_phi_zall[i+1][:,1],color="red",linestyle='dashed',lw=2.5, label=r"$\Phi$(ndgp), z="+str(z_list[i+1]) )
# plt.loglog(pow_ndgp_phi_zall[i+1][:,0], pow_ndgp_phi_zall[i+1][:,1],color="purple",linestyle='solid',lw=2.5,label=r"$\Phi$(GR), z= "+str(z_list[i+1]) )
plt.legend(bbox_to_anchor=(0.62, 0.78, 0.36, .102), loc=1,ncol=1,fontsize=12, mode="expand", borderaxespad=0.)
plt.xlabel("k[h/Mpc]",fontsize=18)
plt.ylabel(r"$\mathcal{P} $",fontsize=18)
plt.xlim(0.0088,0.7)
plt.ylim(0.,2.5)
plt.grid(True)
# ##Subplot 2
# plt.subplot(212)
# ax = plt.gca()
# ax.tick_params(axis = 'both', which = 'major', labelsize = 18)
# ax.tick_params(axis = 'both', which = 'minor', labelsize = 18)
# for i in range (2,5):
# plt.loglog(pow_gr_phi_zall[i][:,0], np.abs(pow_ndgp_phi_zall[i][:,1]-pow_gr_phi_zall[i+2][:,1])/pow_gr_phi_zall[i][:,1],color=tableau_color[i],linestyle='dashed',lw=2.5, label=r"$|\frac{\mathcal{P}_{\Phi}(ndgp) - \mathcal{P}_{\Phi} (GR)}{\mathcal{P}_{\Phi}(GR)}|$, z="+str(z_list[i]) )
# plt.axvspan(0.6, 14, alpha=0.5, color='pink')
# plt.legend(bbox_to_anchor=(0.02, 0.88, 0.36, .102), loc=1,ncol=1,fontsize=12, mode="expand", borderaxespad=0.)
# plt.xlabel("k[h/Mpc]",fontsize=18)
# # plt.ylabel(r"$\mathcal{P} $",fontsize=18)
# plt.xlim(0.0088,0.7)
# # plt.ylim(1.e-15,1.e-9)
# plt.grid(True)
# plt.savefig('./Gev-Class_Images/CsCs.jpg', format='jpg', dpi=100)
plt.show()
| 12,960 |
/Alura_Mod_1.ipynb
|
0449b2d549549e27fca547b418997f9cba25df23
|
[] |
no_license
|
gustavohn73/Complete-Python-3-Bootcamp
|
https://github.com/gustavohn73/Complete-Python-3-Bootcamp
| 0 | 0 | null | 2020-06-20T17:08:41 | 2020-06-20T16:09:29 | null |
Jupyter Notebook
| false | false |
.py
| 506,602 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/gustavohn73/Complete-Python-3-Bootcamp/blob/master/Alura_Mod_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="VEjKOHPrXxM7"
# #Alura Project
#
# ##Module 1 - Pandas and Python
# + [markdown] id="z-Oytpy0YBBW"
# **Objective**: Study the open COVID data from the state of São Paulo for the cities of **Colina** and **Barretos** in the years **2020** and **2021**
# + id="NUyssYPdX7dD"
import pandas as pd
pd.options.display.float_format = "{:.2f}".format
# + [markdown] id="xhoDqydgdE75"
# Using Google Drive to store the data so it does not have to be uploaded every time the notebook is restarted
# + [markdown] id="gBKuZk9CedD8"
# Extracting the Zip file (smaller to upload)
# + id="mR4rm1VZbJ6w"
# import zipfile
from zipfile import ZipFile
# load the file
file_name = "/content/drive/MyDrive/Covid/20210531_Casos-e-obitos-ESP.zip"
# Extract the archive
#with ZipFile(file_name, 'r') as zip:
#If running locally
#zip.extract()
# read the local case data
#dados_casos = pd.read_csv("/content/20210531_Casos-e-obitos-ESP.csv", encoding="UTF-8", skiprows = 0, sep=";", skipfooter=0, thousands=".", decimal=",")
# + id="LS4fW5DGdk_w"
#read the data from Google Drive
dados_casos = pd.read_csv( file_name, encoding="UTF-8", skiprows = 0, sep=";", skipfooter=0, thousands=".", decimal=",")
# + colab={"base_uri": "https://localhost:8080/", "height": 292} id="nxnrh5GTfa2Y" outputId="512fb719-6cd6-4014-b353-61702827be7e"
dados_casos.head()
# + id="Jboaz3pmhDYu"
#Read the SRAG (severe acute respiratory syndrome) data
dados_srag = pd.read_csv("/content/drive/MyDrive/Covid/20210528_SRAG.csv", encoding="UTF-8",
skiprows = 0, sep=";", skipfooter=0)
# + colab={"base_uri": "https://localhost:8080/", "height": 343} id="FmUS69gDkbjH" outputId="f85b9833-596b-43ef-b5bd-42f5d1765c97"
dados_srag.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="_lSLBhHBhiM0" outputId="e89d02fd-34a9-4a1c-90e0-20cd35dd04b7"
#Read the social isolation data
dados_iso = pd.read_csv("/content/drive/MyDrive/Covid/20210530_isolamento-1.csv", encoding="UTF-8",
skiprows = 0, sep=";", skipfooter=0,
thousands=".", decimal=",")
dados_iso.head()
# + [markdown] id="uZTTa-TErdAT"
# ##Working with Datetime
# + id="6Fvzc0rBrbjz"
import datetime
# + id="b-qdLCMVtHYy"
dados_srag["Data de Notificação"] = pd.to_datetime(dados_srag["Data de Notificação"], format="%d/%m/%Y", errors="ignore")
#dados_srag.index = pd.to_datetime(dados_srag["Data de Notificação"], format="%d/%m/%Y", errors="ignore")
# + colab={"base_uri": "https://localhost:8080/", "height": 343} id="VPoB1uabssTl" outputId="cea19a10-71d2-48ce-a7a8-f44e36cd617d"
dados_srag.head()
# + [markdown] id="fbwWTpGGjFPO"
# #Counting
# + colab={"base_uri": "https://localhost:8080/"} id="lqT9S63fpKVd" outputId="277d0ca3-8147-4e87-a72c-c34cdd8682c6"
#count the group sizes.
dados_srag.groupby("Data de Notificação").size()
# + [markdown] id="coYp5WHGjAzR"
# #Year Filter
#
# + colab={"base_uri": "https://localhost:8080/", "height": 660} id="hgtHAE0euBqP" outputId="1fa88e37-1f2c-46d2-ccd6-7f7369e5c3f6"
#Making sure the data refers to the years 2020 and 2021 (using append to combine them)
srag_filtered = dados_srag[dados_srag["Data de Notificação"].dt.strftime('%Y') == "2021"]
srag_filtered_2020 = dados_srag[dados_srag["Data de Notificação"].dt.strftime('%Y') == "2020"]
srag_filtered.append(srag_filtered_2020)
# + [markdown] id="g87GzC3hjLJ_"
# #Grouping
# + colab={"base_uri": "https://localhost:8080/"} id="1AhZ13LWy6IT" outputId="d131947e-b6d7-410b-de7e-a9a80a6f6b02"
srag_filtered.groupby("Municípios").size()
# + colab={"base_uri": "https://localhost:8080/"} id="mjdcO5ux0Nx8" outputId="251c7254-6442-4aa8-e55c-2822ee80012e"
srag_filtered.loc[(srag_filtered.Municípios == "COLINA")].groupby("Evolução").size()
# + colab={"base_uri": "https://localhost:8080/"} id="AE9LBxKF9Z60" outputId="2e576094-cee8-4ee3-f246-fdf02dca43f4"
#checking the amount of data per year.
dados_srag['ano'] = dados_srag["Data de Notificação"].dt.year
dados_srag.loc[(dados_srag.Municípios == "COLINA") | (dados_srag.Municípios == "BARRETOS") ].groupby("ano").size()
# + [markdown] id="RmAwogpQjazw"
# #Processing the data
# + colab={"base_uri": "https://localhost:8080/", "height": 643} id="yR26BblA16sS" outputId="f3af1e42-532f-417f-a649-7a2094c94f2c"
#To include the two cities I could use append
#dados_estudo_Colina = srag_filtered.loc[(srag_filtered.Municípios == "COLINA")]
#dados_estudo = srag_filtered.loc[(srag_filtered.Municípios == "BARRETOS")]
#dados_estudo.append(dados_estudo_Colina)
#Or I can use "|" for OR (if it had to match both at once, "&" would be used in the filter) - shorter and easier.
dados_estudo = srag_filtered.loc[(srag_filtered.Municípios == "COLINA") | (srag_filtered.Municípios == "BARRETOS")]
#Fix the name of the OUTRAS SRAG column
dados_estudo = dados_estudo.rename(columns={'OUTRAS SRAG': 'srag'})
#filter SRAG down to COVID
dados_estudo = dados_estudo.loc[(dados_estudo.srag == "COVID 19")]
#filter the Evolução (outcome) column
dados_estudo = dados_estudo.loc[(dados_estudo.Evolução != "Em avaliação")]
#add month and year columns
dados_estudo['ano'] = dados_estudo["Data de Notificação"].dt.year
dados_estudo['mes'] = dados_estudo["Data de Notificação"].dt.month
dados_estudo['mes_ano'] = dados_estudo["Data de Notificação"].dt.strftime('%b/%Y')
dados_estudo
# + colab={"base_uri": "https://localhost:8080/", "height": 638} id="sqhes31h-P1G" outputId="946798af-903d-402f-838d-3bb977185e8c"
cb_dados = dados_estudo.loc[(dados_estudo.srag == "COVID 19")].groupby(["Municípios", "mes","Evolução"]).agg({'Quantidade de Casos': ['sum']})
cb_dados.columns = ['Qtd Casos Total']
cb_dados = cb_dados.reset_index()
cb_dados
# + id="urYbvoWTal6y"
#importing the libraries
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
# + [markdown] id="hidEYGfAeEZ8"
# #CHALLENGES
#
# + [markdown] id="oOyf1FR7eP2A"
# ##Lesson 1
# + [markdown] id="7u-EMw3gbJDC"
# ###Challenge 01: Choose a more descriptive title that conveys the right message for the bar chart.
# + colab={"base_uri": "https://localhost:8080/", "height": 317} id="Fed8pqT2A0f1" outputId="7b5bd884-9a91-4357-c2e8-2969990d3a9c"
cb_dados.loc[(cb_dados.Municípios == "BARRETOS") & (cb_dados.Evolução == "Óbito")].plot(x="mes", y="Qtd Casos Total", kind="bar")
plt.title("Óbitos em Barretos por mês no ano de 2021")
# + [markdown] id="azYLC0fGeXal"
# ### Challenge 02: Do the same analysis carried out in class, but for the most recent month.
# + [markdown] id="ZCN3zZqZedMQ"
# Done during the lessons and redone using a new dataset.
# + [markdown] id="kdNksUnAeh2P"
# ##Lesson 2
# + [markdown] id="tF1a8iBZddMZ"
# ###Challenge 01: Move the chart legend to a more suitable position
# + colab={"base_uri": "https://localhost:8080/", "height": 317} id="mH-n76XZk1Ao" outputId="0e073a52-61e0-4cdf-81e7-b88ad2d82dc3"
cb_dados.loc[(cb_dados.Municípios == "BARRETOS") & (cb_dados.Evolução == "Cura")].plot(x="mes", y="Qtd Casos Total", kind="bar")
plt.title("Óbitos em Barretos por mês no ano de 2021")
#a posição 0 (best) seria a mesma escolhida por mim, que é a posição 2 (upper left)
plt.legend(loc = 2)
# + [markdown] id="yOC4H4YYfJxq"
# ###Challenge 02: Plot the line chart with only 5 states of your choice
# + [markdown] id="AEkj-sDufOKY"
# I had already filtered the data of interest and stored it in a table (cb_dados), which I used to plot the previous charts.
# + [markdown] id="i1FquzSOgVEN"
# ##Lesson 3
# + [markdown] id="ex_5PLy7f25O"
# ###Challenge 01: Choose a more suitable matplotlib color palette.
# + id="VB9d-IHo1fLZ"
#build a table with the Colina and Barretos data
b = dados_estudo.loc[(dados_estudo.srag == "COVID 19") & (dados_estudo.Municípios == "BARRETOS") & (dados_estudo.Evolução == "Cura")].groupby(["mes"]).agg({'Quantidade de Casos': ['sum']})
b.columns = ['Barretos']
c = dados_estudo.loc[(dados_estudo.srag == "COVID 19") & (dados_estudo.Municípios == "COLINA") & (dados_estudo.Evolução == "Cura")].groupby(["mes"]).agg({'Quantidade de Casos': ['sum']})
c.columns = ['Colina']
m = b.index
b = b.T
c = c.T
b = b.append(c)
b = b.T
# + id="vBj4d0Wk_ggC"
#build a table with the Colina and Barretos data
b = dados_estudo.loc[(dados_estudo.srag == "COVID 19") & (dados_estudo.Municípios == "BARRETOS")].groupby(["mes", "mes_ano", "Evolução"]).agg({'Quantidade de Casos': ['sum']})
b.columns = ['Barretos']
c = dados_estudo.loc[(dados_estudo.srag == "COVID 19") & (dados_estudo.Municípios == "COLINA")].groupby(["mes", "mes_ano", "Evolução"]).agg({'Quantidade de Casos': ['sum']})
c.columns = ['Colina']
m = b.index
b = b.T
c = c.T
b = b.append(c)
b = b.T
# + id="_Pwq8S4zx5Pd"
#drop mes (used only to organize the data)
b = b.reset_index(level = "mes", drop=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 408} id="nBGG6b77y3Pg" outputId="6bbdc168-81d7-437f-9a1d-be6eb5f2419e"
# Plot the chart
b.plot( kind="bar", color = ("gray", "navy"))
plt.title("Evolutivo de casos curados de COVID 19 em Barretos e Colina")
plt.xlabel('Meses de 2021')
plt.ylabel('N de casos Curados')
plt.grid(False)
plt.legend(loc = 2)
# + [markdown] id="dzz-_pYbgE5q"
# ###Challenge 03: Format the monthly cost chart for the 5 states, making it look nice ("Bonitão", according to Gui)
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 408} id="iNnXjaaMrmgr" outputId="dcb33e74-0cfe-4c03-e733-018966a67099"
import seaborn as sns
#
sns.set_palette("mako")
# Plot the chart
b.plot( kind="bar")
plt.title("Evolutivo de casos curados de COVID 19 em Barretos e Colina")
plt.xlabel('Meses de 2021')
plt.ylabel('N de casos Curados')
plt.grid(False)
plt.legend(loc = 2)
# + [markdown] id="xtQ5xXwpgHWp"
# ###Challenge 04: Add your own state to the 5 states plotted previously
# + [markdown] id="doKBW4g13trA"
# Exercise not needed, since I am using a different dataset and have already done this.
# + [markdown] id="XpS2pze-gLug"
# ###Challenge 05: Look up the dengue cases in Brazil (period with the highest number of cases and most affected regions) and check whether the February and, more generally, summer peaks in some states could reflect dengue cases
# + [markdown] id="h-XNOC8j34tA"
# Different datasets, exercise not needed.
# + [markdown] id="sDrqdal3gOyy"
# ###Challenge 06: Plot the cost chart only for the states of the Southeast region and check whether the Feb/2013 peaks behaved similarly in all the other states of the region
# + colab={"base_uri": "https://localhost:8080/", "height": 390} id="rDo5HM0fFq20" outputId="bdacefe3-448c-4cf6-ec54-b41a5c9b8334"
b = b.T
b
# + colab={"base_uri": "https://localhost:8080/", "height": 382} id="X0LvM3d2JZeK" outputId="c8c8289d-d531-43ad-df01-4f758acf557a"
sns.displot(data=dados_estudo, x="Data de Notificação", row ="Quantidade de Casos", col="Municípios", hue = "Evolução", kind = 'hist', multiple="dodge", shrink=.8, bins = 5, kde = True, palette = "dark")
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="CQy2l6T8HFB7" outputId="5e2800ec-4c17-4a09-94d7-49e304839000"
sns.histplot(data=dados_estudo, x="Data de Notificação", bins=5, kde=True, hue ="Municípios", palette = "dark", multiple="dodge", shrink=.8)
# + colab={"base_uri": "https://localhost:8080/", "height": 359} id="FlXjtip1NQ6F" outputId="e76781c4-5f66-4d90-ff7f-28c8b9923a2c"
#reset the index
#b = b.reset_index()
b
# + [markdown] id="AXYWFPnogR74"
# ###Challenge 07: Add your chosen state again, make the chart informative, and draw conclusions about your states compared with the others. Draw your conclusions and share them with us.
# + colab={"base_uri": "https://localhost:8080/", "height": 643} id="mG2b0pFjXaXV" outputId="517b8e78-b1de-4f42-dc1b-972e03b3e82f"
dados_estudo
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="VHyLZXNaXVo-" outputId="9ef6a509-4f79-47ee-f460-74884a4b6a48"
sns.displot(data=dados_estudo, x="Data de Notificação", row ="Quantidade de Casos", col ="Grupo de Idades", hue = "Evolução", kind = 'hist', multiple="dodge", shrink=.8, bins = 5, kde = True, palette = "dark")
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
venue_id = '4fa862b3e4b0ebff2f749f06' # ID of Harry's Italian Pizza Bar
url = 'https://api.foursquare.com/v2/venues/{}?client_id={}&client_secret={}&oauth_token={}&v={}'.format(venue_id, CLIENT_ID, CLIENT_SECRET,ACCESS_TOKEN, VERSION)
url
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Send GET request for result
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
result = requests.get(url).json()
print(result['response']['venue'].keys())
result['response']['venue']
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### B. Get the venue's overall rating
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
try:
print(result['response']['venue']['rating'])
except:
print('This venue has not been rated yet.')
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# That is not a very good rating. Let's check the rating of the second closest Italian restaurant.
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
venue_id = '4f3232e219836c91c7bfde94' # ID of Conca Cucina Italian Restaurant
url = 'https://api.foursquare.com/v2/venues/{}?client_id={}&client_secret={}&oauth_token={}&v={}'.format(venue_id, CLIENT_ID, CLIENT_SECRET,ACCESS_TOKEN, VERSION)
result = requests.get(url).json()
try:
print(result['response']['venue']['rating'])
except:
print('This venue has not been rated yet.')
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# Since this restaurant has no ratings, let's check the third restaurant.
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
venue_id = '3fd66200f964a520f4e41ee3' # ID of Ecco
url = 'https://api.foursquare.com/v2/venues/{}?client_id={}&client_secret={}&oauth_token={}&v={}'.format(venue_id, CLIENT_ID, CLIENT_SECRET,ACCESS_TOKEN, VERSION)
result = requests.get(url).json()
try:
print(result['response']['venue']['rating'])
except:
print('This venue has not been rated yet.')
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# Since this restaurant has a slightly better rating, let's explore it further.
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### C. Get the number of tips
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
result['response']['venue']['tips']['count']
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### D. Get the venue's tips
#
# > `https://api.foursquare.com/v2/venues/`**VENUE_ID**`/tips?client_id=`**CLIENT_ID**`&client_secret=`**CLIENT_SECRET**`&v=`**VERSION**`&limit=`**LIMIT**
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Create URL and send GET request. Make sure to set limit to get all tips
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
## Ecco Tips
limit = 15 # set limit to be greater than or equal to the total number of tips
url = 'https://api.foursquare.com/v2/venues/{}/tips?client_id={}&client_secret={}&oauth_token={}&v={}&limit={}'.format(venue_id, CLIENT_ID, CLIENT_SECRET,ACCESS_TOKEN, VERSION, limit)
results = requests.get(url).json()
results
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Get tips and list of associated features
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
tips = results['response']['tips']['items']
tip = results['response']['tips']['items'][0]
tip.keys()
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Format column width and display all tips
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
pd.set_option('display.max_colwidth', -1)
tips_df = json_normalize(tips) # json normalize tips
# columns to keep
filtered_columns = ['text', 'agreeCount', 'disagreeCount', 'id', 'user.firstName', 'user.lastName', 'user.id']
tips_filtered = tips_df.loc[:, filtered_columns]
# display tips
tips_filtered.reindex()
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# Now remember that because we are using a personal developer account, we can access only 2 of the restaurant's tips instead of all 15 tips.
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
#
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# <a id="item3"></a>
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ## 3. Search a Foursquare User
#
# > `https://api.foursquare.com/v2/users/`**USER_ID**`?client_id=`**CLIENT_ID**`&client_secret=`**CLIENT_SECRET**`&v=`**VERSION**
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Define URL, send GET request and display features associated with user
#
# +
idnumber = '484542633' # user ID with most agree counts and complete profile
url = 'https://api.foursquare.com/v2/users/{}/?client_id={}&client_secret={}&oauth_token={}&v={}'.format(idnumber,CLIENT_ID, CLIENT_SECRET, ACCESS_TOKEN,VERSION) # define URL
# send GET request
results = requests.get(url).json()
user_data=results['response']['user']['photos']['items']
#results
pd.set_option('display.max_colwidth', -1)
users_df = json_normalize(user_data)
#This mainly used later to display the photo of the user
filtered_columns = ['id','prefix','suffix','width','height']
tips_filtered = users_df.loc[:, filtered_columns]
#url
tips_filtered
# +
g = tips_df.loc[tips_df['user.id'] == '484542633']
print('First Name: ' + g['user.firstName'])
print('Last Name: ' + g['user.lastName'])
# -
# ### Retrieve the User's Profile Image
#
# 1. grab prefix of photo
# 2. grab suffix of photo
# 3. concatenate them using the image size
Image(url='https://fastly.4sqi.net/img/general/540x920/484542633_ELnUC1di2LwJTjPi04McysQZNqJHSCCSxS3i_GKGTEY.jpg')
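# The hard-coded URL above can also be assembled from the photo metadata retrieved
# earlier; a minimal sketch (it assumes the `prefix`, `suffix`, `width` and `height`
# columns kept in `tips_filtered` above, and the usual prefix + "WIDTHxHEIGHT" + suffix
# convention for Foursquare photo URLs):
photo = tips_filtered.iloc[0]
photo_url = '{}{}x{}{}'.format(photo['prefix'], int(photo['width']), int(photo['height']), photo['suffix'])
Image(url=photo_url)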
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# Wow! So it turns out that Nick is a very active Foursquare user, with more than 250 tips.
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Get User's tips
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
# define tips URL
user_id='484542633'
url = 'https://api.foursquare.com/v2/users/{}/tips?client_id={}&client_secret={}&oauth_token={}&v={}&limit={}'.format(user_id, CLIENT_ID, CLIENT_SECRET,ACCESS_TOKEN,VERSION, limit)
# send GET request and get user's tips
results = requests.get(url).json()
tips = results['response']['tips']['items']
# format column width
pd.set_option('display.max_colwidth', -1)
tips_df = json_normalize(tips)
# filter columns
filtered_columns = ['text', 'agreeCount', 'disagreeCount', 'id']
tips_filtered = tips_df.loc[:, filtered_columns]
# display user's tips
tips_filtered
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Let's get the venue for the tip with the greatest number of agree counts
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
tip_id = '5ab5575d73fe2516ad8f363b' # tip id
# define URL
url = 'https://api.foursquare.com/v2/users/{}/tips?client_id={}&client_secret={}&oauth_token={}&v={}'.format(idnumber, CLIENT_ID, CLIENT_SECRET,ACCESS_TOKEN, VERSION) # define URL
# send GET Request and examine results
result = requests.get(url).json()
print(result['response']['tips']['items'][0]['venue']['name'])
print(result['response']['tips']['items'][0]['venue']['location'])
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ## 4. Explore a location
#
# > `https://api.foursquare.com/v2/venues/`**explore**`?client_id=`**CLIENT_ID**`&client_secret=`**CLIENT_SECRET**`&ll=`**LATITUDE**`,`**LONGITUDE**`&v=`**VERSION**`&limit=`**LIMIT**
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### So, you just finished your gourmet dish at Ecco, and are just curious about the popular spots around the restaurant. In order to explore the area, let's start by getting the latitude and longitude values of Ecco Restaurant.
#
# + button=false jupyter={"outputs_hidden": true} new_sheet=false run_control={"read_only": false}
latitude = 40.715337
longitude = -74.008848
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Define URL
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
url = 'https://api.foursquare.com/v2/venues/explore?client_id={}&client_secret={}&ll={},{}&v={}&radius={}&limit={}'.format(CLIENT_ID, CLIENT_SECRET, latitude, longitude, VERSION, radius, LIMIT)
url
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Send GET request and examine results
#
# + button=false jupyter={"outputs_hidden": true} new_sheet=false run_control={"read_only": false}
import requests
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
results = requests.get(url).json()
'There are {} venues around Ecco restaurant.'.format(len(results['response']['groups'][0]['items']))
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Get relevant part of JSON
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
items = results['response']['groups'][0]['items']
items[0]
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Process JSON and convert it to a clean dataframe
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
dataframe = json_normalize(items) # flatten JSON
# filter columns
filtered_columns = ['venue.name', 'venue.categories'] + [col for col in dataframe.columns if col.startswith('venue.location.')] + ['venue.id']
dataframe_filtered = dataframe.loc[:, filtered_columns]
# filter the category for each row
dataframe_filtered['venue.categories'] = dataframe_filtered.apply(get_category_type, axis=1)
# clean columns
dataframe_filtered.columns = [col.split('.')[-1] for col in dataframe_filtered.columns]
dataframe_filtered.head(10)
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Let's visualize these items on the map around our location
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
venues_map = folium.Map(location=[latitude, longitude], zoom_start=15) # generate map centred around Ecco
# add Ecco as a red circle mark
folium.CircleMarker(
[latitude, longitude],
radius=10,
popup='Ecco',
fill=True,
color='red',
fill_color='red',
fill_opacity=0.6
).add_to(venues_map)
# add popular spots to the map as blue circle markers
for lat, lng, label in zip(dataframe_filtered.lat, dataframe_filtered.lng, dataframe_filtered.categories):
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
fill=True,
color='blue',
fill_color='blue',
fill_opacity=0.6
).add_to(venues_map)
# display map
venues_map
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
#
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# <a id="item5"></a>
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ## 5. Explore Trending Venues
#
# > `https://api.foursquare.com/v2/venues/`**trending**`?client_id=`**CLIENT_ID**`&client_secret=`**CLIENT_SECRET**`&ll=`**LATITUDE**`,`**LONGITUDE**`&v=`**VERSION**
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Now, instead of simply exploring the area around Ecco, you are interested in knowing the venues that are trending at the time you are done with your lunch, meaning the places with the highest foot traffic. So let's do that and get the trending venues around Ecco.
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
# define URL
url = 'https://api.foursquare.com/v2/venues/trending?client_id={}&client_secret={}&ll={},{}&v={}'.format(CLIENT_ID, CLIENT_SECRET, latitude, longitude, VERSION)
# send GET request and get trending venues
results = requests.get(url).json()
results
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Check if any venues are trending at this time
#
# + button=false jupyter={"outputs_hidden": true} new_sheet=false run_control={"read_only": false}
if len(results['response']['venues']) == 0:
trending_venues_df = 'No trending venues are available at the moment!'
else:
trending_venues = results['response']['venues']
trending_venues_df = json_normalize(trending_venues)
# filter columns
columns_filtered = ['name', 'categories'] + ['location.distance', 'location.city', 'location.postalCode', 'location.state', 'location.country', 'location.lat', 'location.lng']
trending_venues_df = trending_venues_df.loc[:, columns_filtered]
# filter the category for each row
trending_venues_df['categories'] = trending_venues_df.apply(get_category_type, axis=1)
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
# display trending venues
trending_venues_df
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# Now, depending on when you run the above code, you might get different venues since the venues with the highest foot traffic are fetched live.
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Visualize trending venues
#
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
if len(results['response']['venues']) == 0:
venues_map = 'Cannot generate visual as no trending venues are available at the moment!'
else:
venues_map = folium.Map(location=[latitude, longitude], zoom_start=15) # generate map centred around Ecco
# add Ecco as a red circle mark
folium.CircleMarker(
[latitude, longitude],
radius=10,
popup='Ecco',
fill=True,
color='red',
fill_color='red',
fill_opacity=0.6
).add_to(venues_map)
# add the trending venues as blue circle markers
for lat, lng, label in zip(trending_venues_df['location.lat'], trending_venues_df['location.lng'], trending_venues_df['name']):
folium.CircleMarker(
[lat, lng],
radius=5,
            popup=label,
fill=True,
color='blue',
fill_color='blue',
fill_opacity=0.6
).add_to(venues_map)
# + button=false jupyter={"outputs_hidden": false} new_sheet=false run_control={"read_only": false}
# display map
venues_map
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# <a id="item6"></a>
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
#
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Thank you for completing this lab!
#
# This notebook was created by [Alex Aklson](https://www.linkedin.com/in/aklson?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork-21253531&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork-21253531&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ). I hope you found this lab interesting and educational. Feel free to contact me if you have any questions!
#
# This notebook modified by Nayef Abou Tayoun ([https://www.linkedin.com/in/nayefaboutayoun/](https://www.linkedin.com/in/nayefaboutayoun?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork-21253531&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ))
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# This notebook is part of a course on **Coursera** called _Applied Data Science Capstone_. If you accessed this notebook outside the course, you can take this course online by clicking [here](http://cocl.us/DP0701EN_Coursera_Week2_LAB1).
#
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ## Change Log
#
# | Date (YYYY-MM-DD) | Version | Changed By | Change Description |
# | ----------------- | ------- | ------------- | -------------------------------------------- |
# | 2021-03-17        | 2.1     | Lakshmi Holla | Changed the code for retrieving user profile |
# | 2020-11-26 | 2.0 | Lakshmi Holla | Updated the markdown cells |
# | | | | |
# | | | | |
#
# ## <h3 align="center"> © IBM Corporation 2020. All rights reserved. <h3/>
#
| 30,380 |
/TensorFlow - Linear regression example.ipynb
|
b7993ac8f384120757204f33118100d026b93f1f
|
[] |
no_license
|
Andriyluck/tf_examples
|
https://github.com/Andriyluck/tf_examples
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 33,549 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import datetime as dt
import matplotlib.pyplot as plt
from matplotlib import style
import pandas as pd
import pandas_datareader.data as web
style.use('ggplot')
start = dt.datetime(2000,1,1)
end= dt.datetime(2016,12,31)
df= web.DataReader('TSLA','yahoo',start,end)
# -
df.head()
#to put inside csv
df.to_csv('tsla.csv')
# To read csv into dataframe
df = pd.read_csv('tsla.csv',parse_dates=True,index_col=0)
df.head()
# %matplotlib inline
df.plot()
df['Adj Close'].plot()
df['100ma']=df['Adj Close'].rolling(window=100,min_periods=0).mean()
# df.dropna(inplace=True)
print(df.head())
ax1= plt.subplot2grid((6,1),(0,0),rowspan=5,colspan=1)
ax2= plt.subplot2grid((6,1),(5,0),rowspan=1,colspan=1,sharex=ax1)
ax1.plot(df.index,df['Adj Close'])
ax1.plot(df.index,df['100ma'])
ax2.bar(df.index,df['Volume'])
df_ohlc = df['Adj Close'].resample('10D').ohlc()
df_volume = df['Volume'].resample('10D').sum()
print(df_ohlc.head())
from matplotlib.finance import candlestick_ohlc
import matplotlib.dates as mdates
df_ohlc.reset_index(inplace=True)
df_ohlc['Date'] = df_ohlc['Date'].map(mdates.date2num)
# +
ax1= plt.subplot2grid((6,1),(0,0),rowspan=5,colspan=1)
ax2= plt.subplot2grid((6,1),(5,0),rowspan=1,colspan=1,sharex=ax1)
ax1.xaxis_date()
candlestick_ohlc(ax1,df_ohlc.values,width=2,colorup='g')
ax2.fill_between(df_volume.index.map(mdates.date2num),df_volume.values,0)
# -
# ## Getting S&P500 list
# +
import bs4 as bs
import pickle
import requests
def save_sp500_tickers():
resp=requests.get('http://en.wikipedia.org/wiki/List_of_S%26P_500_companies')
soup=bs.BeautifulSoup(resp.content,"lxml")
table = soup.find('table',{'class':'wikitable sortable'})
tickers=[]
for row in table.findAll('tr')[1:]:
ticker=row.findAll('td')[0].text
tickers.append(ticker)
with open('sp500tickers.pickle','wb') as f:
pickle.dump(tickers,f)
return tickers
# +
import datetime as dt
import os
import pandas as pd
import pandas_datareader.data as web
def get_data_from_yahoo(reload_sp500 = False):
if reload_sp500:
tickers=save_sp500_tickers()
else:
with open("sp500tickers.pickle","rb") as f:
tickers = pickle.load(f)
if not os.path.exists('stock_dfs'):
os.makedirs('stock_dfs')
start = dt.datetime(2000,1,1)
end = dt.datetime(2016,12,31)
for ticker in tickers[:10]:
print (ticker)
if not os.path.exists('stock_dfs/{}.csv'.format(ticker)):
df = web.DataReader(ticker,'yahoo',start,end)
df.to_csv('stock_dfs/{}.csv'.format(ticker))
else:
print('Already have {}'.format(ticker))
def compile_data():
with open("sp500tickers.pickle","rb") as f:
tickers = pickle.load(f)[:10]
main_df = pd.DataFrame()
for count,ticker in enumerate(tickers):
df = pd.read_csv('stock_dfs/{}.csv'.format(ticker))
df.set_index('Date',inplace=True)
df.rename(columns={'Adj Close':ticker},inplace=True)
df.drop(['Open','High','Low','Close','Volume'],1,inplace=True)
if main_df.empty:
main_df = df
else:
main_df = main_df.join(df,how='outer')
if count % 10 == 0 :
print(count)
print(main_df.head())
main_df.to_csv('sp500_joined_closes.csv')
compile_data()
# +
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
def visualize_data():
df = pd.read_csv('sp500_joined_closes.csv')
# df['AAPL'].plot()
# plt.show()
df_corr = df.corr()
visualize_data()
# -
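# `visualize_data()` above computes the correlation matrix but never displays it;
# a minimal sketch (an assumed continuation, not part of the original script) of
# rendering such a matrix as a heatmap:
# +
import numpy as np
import matplotlib.pyplot as plt

def plot_corr_heatmap(df_corr):
    fig, ax = plt.subplots()
    heatmap = ax.pcolor(df_corr.values, cmap=plt.cm.RdYlGn)
    fig.colorbar(heatmap)
    # center the tick labels on each cell
    ax.set_xticks(np.arange(df_corr.shape[1]) + 0.5, minor=False)
    ax.set_yticks(np.arange(df_corr.shape[0]) + 0.5, minor=False)
    ax.set_xticklabels(df_corr.columns, rotation=90)
    ax.set_yticklabels(df_corr.index)
    plt.tight_layout()
    plt.show()
# -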
| 3,912 |
/Siamese_mnist_contrastive.ipynb
|
5644cbcee4a0b71ed7f92203d4c7181c8c3b38d4
|
[] |
no_license
|
s-mrb/siamese-mnist
|
https://github.com/s-mrb/siamese-mnist
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 249,814 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="egSzu9Kk8kPr"
# Title: Siamese network with a contrastive loss
# Author: Mehdi
# Date created: 2021/05/06
# Last modified: 2021/05/06
# Description: Similarity learning using siamese network with contrastive loss
# + [markdown] id="eimAG0nw8ni6"
# ## Introduction
#
# A [Siamese Network](https://en.wikipedia.org/wiki/Siamese_neural_network)
# is any neural network that shares weights between two or more sister networks,
# each of which produces an embedding vector for its respective input. These embeddings
# are then passed through some
# [distance heuristic](https://developers.google.com/machine-learning/clustering/similarity/measuring-similarity)
# to measure the distance between them. A loss function then uses this distance to
# increase the contrast between embeddings of inputs from different classes and to
# decrease it for inputs of the same class, with the main objective
# of contrasting the [vector spaces](https://en.wikipedia.org/wiki/Vector_space)
# from which these sample inputs were taken.
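# As a minimal, self-contained illustration of the distance heuristic mentioned
# above (a sketch only; the actual Keras implementation appears later in this
# notebook), the Euclidean distance between two hypothetical embedding vectors:
# +
import numpy as np

emb_a = np.array([0.1, 0.9, -0.3])  # hypothetical embedding of input A
emb_b = np.array([0.2, 0.7, -0.1])  # hypothetical embedding of input B
print(np.sqrt(np.sum(np.square(emb_a - emb_b))))  # distance between the two embeddings
# -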
# + [markdown] id="QRdM8S1o8pgs"
# ## Setup
# + id="e52nzoc62Vag"
import random
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve,auc
# + [markdown] id="1ywIxDBDAsoK"
# Define some hyperparameters
# + id="edaiPnVdArIl"
epochs = 10
batch_size = 16
# margin for contrastive loss
margin = 1
# + [markdown] id="xjX1wJOg8tiD"
# ## Load the MNIST dataset
# + id="F7sNumMU8v_w" colab={"base_uri": "https://localhost:8080/"} outputId="a0f4ebc9-c8c8-4c9c-e9af-87ceb4f5ae32"
(x_train_val, y_train_val), (x_test, y_test) = keras.datasets.mnist.load_data()
# Change the data type to a floating point format
x_train_val = x_train_val.astype("float32")
x_test = x_test.astype("float32")
# + [markdown] id="HxZKGBFs8ykp"
# Use list slicing to split train_val data into `train` and `val`
# + id="S93LmzW689Dt"
# Keep 50% of train_val in validation set
x_train, x_val = x_train_val[0:30000], x_train_val[30000:]
y_train, y_val = y_train_val[0:30000], y_train_val[30000:]
del x_train_val
del y_train_val
# + [markdown] id="04Y0Map88_XA"
# ## Create pairs of images
#
# We will train the model to differentiate each digit from one another. For
# example, digit `0` needs to be differentiated from the rest of the
# digits (`1` through `9`), digit `1` - from `0` and `2` through `9`, and so on.
# To carry this out, we will select N random images from class A (for example,
# for digit `0`) and pair it with N random images from another class B
# (for example, for digit `1`). Then, we can repeat this process for all classes
# of digits (until digit `9`). Once we have paired digit `0` with other digits,
# we can repeat this process for the remaining classes for the rest of the digits
# (from `1` until `9`).
# + id="LuNBm4CH9DHJ"
def make_pairs(x, y):
"""Creates a tuple containing image pairs with corresponding label.
Arguments:
x: List containing images, each index in this list corresponds to
one image.
y: List containing labels, each label with datatype of `int`.
Returns:
Tuple containing two numpy arrays as (pairs_of_samples, labels),
where pairs_of_samples' shape is (2len(x), 2,n_features_dims) and
labels are a binary array of shape (2len(x)).
"""
num_classes = max(y) + 1
digit_indices = [np.where(y == i)[0] for i in range(num_classes)]
pairs = []
labels = []
for idx1 in range(len(x)):
# add a matching example
x1 = x[idx1]
label1 = y[idx1]
idx2 = random.choice(digit_indices[label1])
x2 = x[idx2]
pairs += [[x1, x2]]
labels += [1]
# add a non-matching example
label2 = random.randint(0, num_classes - 1)
while label2 == label1:
label2 = random.randint(0, num_classes - 1)
idx2 = random.choice(digit_indices[label2])
x2 = x[idx2]
pairs += [[x1, x2]]
labels += [0]
return np.array(pairs), np.array(labels).astype("float32")
# + id="X7_lJAKn9FLG"
# make train pairs
pairs_train, labels_train = make_pairs(x_train, y_train)
# make validation pairs
pairs_val, labels_val = make_pairs(x_val, y_val)
# make test pairs
pairs_test, labels_test = make_pairs(x_test, y_test)
# + [markdown] id="qxlLUSCJ9HqV"
# **pairs_train.shape = (60000, 2, 28, 28)**
#
# Imagine it as:
#
# **pairs_train.shape = (60000, pair.shape)**
#
#
# `pairs_train` contains 60K `pairs` in `axis 0`, shape of each pair
# is (2,28,28) hence `each pair` of `pairs_train` contains one image in its
# `axis 0` (do not confuse it with the `axis 0` of `pairs_train`) and the
# other one in the `axis 1`. We will slice `pairs_train` on its `axis 0`
# followed by desired axis of pair to obtain all images (60K) which belong
# either to the `axis 0` or the `axis 1` of all the pairs of `pairs_train`.
#
#
# **Note:** Do not confuse axes of `pairs_train` with those of
# `pair within pairs_train`, `pairs_train` has only one axis, `axis 0`, which
# contains 60K pairs, whereas each `pair within pairs_train` has two axes,
# each for one image of a pair.
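# A quick sanity check of the shapes described above (the values assume the MNIST
# splits built earlier in this notebook):
# +
print(pairs_train.shape)        # (60000, 2, 28, 28) - 60K pairs in total
print(pairs_train[:, 0].shape)  # (60000, 28, 28) - first image of every pair
print(pairs_train[:, 1].shape)  # (60000, 28, 28) - second image of every pair
# -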
# + [markdown] id="lYHJtZgl9KLH"
# Separate train pairs
# + id="NGf4Ljx99MUj"
x_train_1 = pairs_train[:, 0]
x_train_2 = pairs_train[:, 1]
# x_train_1.shape = (60000, 28, 28)
# + [markdown] id="22j8-Www9N7t"
# Separate validation pairs
# + id="PQyAllzD9PtR"
x_val_1 = pairs_val[:, 0]
x_val_2 = pairs_val[:, 1]
# x_val_1.shape = (60000, 28, 28)
# + [markdown] id="fqxqPhuk9Ris"
# Separate test pairs
# + id="yBe0cMRP9Tm7"
x_test_1 = pairs_test[:, 0]
x_test_2 = pairs_test[:, 1]
# x_test_1.shape = (20000, 28, 28)
# + [markdown] id="2eOAJA4G9VZY"
# ## Visualize
# + id="Dm7Xcpqq9Y8y"
def visualize(pairs, labels, to_show=6, num_col=3, predictions=None, test=False):
"""Creates a plot of pairs and labels, and prediction if it's test dataset.
Arguments:
pairs: Numpy Array, of pairs to visualize, having shape
(Number of pairs, 2, 28, 28).
to_show: Int, number of examples to visualize (default is 6)
`to_show` must be an integral multiple of `num_col`.
Otherwise it will be trimmed if it is greater than num_col,
and incremented if if it is less then num_col.
num_col: Int, number of images in one row - (default is 3)
For test and train respectively, it should not exceed 3 and 7.
predictions: Numpy Array of predictions with shape (to_show, 1) -
(default is None)
Must be passed when test=True.
test: Boolean telling whether the dataset being visualized is
train dataset or test dataset - (default False).
Returns:
None.
"""
# Define num_row
# If to_show % num_col != 0
# trim to_show,
# to trim to_show limit num_row to the point where
# to_show % num_col == 0
#
# If to_show//num_col == 0
# then it means num_col is greater then to_show
# increment to_show
# to increment to_show set num_row to 1
num_row = to_show // num_col if to_show // num_col != 0 else 1
# `to_show` must be an integral multiple of `num_col`
# we found num_row and we have num_col
# to increment or decrement to_show
# to make it integral multiple of `num_col`
# simply set it equal to num_row * num_col
to_show = num_row * num_col
# Plot the images
fig, axes = plt.subplots(num_row, num_col, figsize=(5, 5))
for i in range(to_show):
# If the number of rows is 1, the axes array is one-dimensional
if num_row == 1:
ax = axes[i % num_col]
else:
ax = axes[i // num_col, i % num_col]
ax.imshow(tf.concat([pairs[i][0], pairs[i][1]], axis=1), cmap="gray")
ax.set_axis_off()
if test:
ax.set_title("True: {} | Pred: {:.5f}".format(labels[i], predictions[i][0]))
else:
ax.set_title("Label: {}".format(labels[i]))
if test:
plt.tight_layout(rect=(0, 0, 1.9, 1.9), w_pad=0.0)
else:
plt.tight_layout(rect=(0, 0, 1.5, 1.5))
plt.show()
# + [markdown] id="4Pdjit8q9bBJ"
# Inspect train pairs
# + colab={"base_uri": "https://localhost:8080/", "height": 108} id="b9ulSSBC9ct3" outputId="88f90676-555b-4139-f760-453972bc8a8c"
visualize(pairs_train[0:-1], labels_train[0:-1], to_show=4, num_col=4)
# + [markdown] id="prPcxFus9eGf"
# Inspect validation pairs
# + colab={"base_uri": "https://localhost:8080/", "height": 108} id="0jUbkPut9fNs" outputId="e219a987-8571-4dee-b404-b402b28120d3"
visualize(pairs_val[0:-1], labels_val[0:-1], to_show=4, num_col=4)
# + [markdown] id="eJxhMx7m9goP"
# Inspect test pairs
# + colab={"base_uri": "https://localhost:8080/", "height": 108} id="dIurNeAe9iAp" outputId="ee01777c-8854-412e-a47a-9b394477fec1"
visualize(pairs_test[0:-1], labels_test[0:-1], to_show=4, num_col=4)
# + [markdown] id="BfvLTJio9lK3"
# ## Define the model
#
# There will be two input layers, each leading to its own network, which
# produces embeddings. Lambda layer will merge them using
# [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) and the
# merged layer will be fed to final network.
# + id="bLNXHjab9rqd"
# Provided two tensors t1 and t2
# Euclidean distance = sqrt(sum(square(t1-t2)))
def euclidean_distance(vects):
"""Find the Euclidean distance between two vectors.
Arguments:
vects: List containing two tensors of same length.
Returns:
Tensor containing euclidean distance
(as floating point value) between vectors.
"""
x, y = vects
sum_square = tf.math.reduce_sum(tf.math.square(x - y), axis=1, keepdims=True)
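    # clamp at epsilon so the sqrt never receives exactly zero (whose gradient is undefined)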
return tf.math.sqrt(tf.math.maximum(sum_square, tf.keras.backend.epsilon()))
# + id="rvJHiFXN9weP"
input = layers.Input((28, 28, 1))
x = tf.keras.layers.BatchNormalization()(input)
x = layers.Conv2D(4, (5, 5), activation="tanh")(x)
x = layers.AveragePooling2D(pool_size=(2, 2))(x)
x = layers.Conv2D(16, (5, 5), activation="tanh")(x)
x = layers.AveragePooling2D(pool_size=(2, 2))(x)
x = layers.Flatten()(x)
x = tf.keras.layers.BatchNormalization()(x)
x = layers.Dense(10, activation="tanh")(x)
embedding_network = keras.Model(input, x)
input_1 = layers.Input((28, 28, 1))
input_2 = layers.Input((28, 28, 1))
# As mentioned above, Siamese Network share weights between
# tower networks (sister networks). To allow this, we will use
# same embedding network for both tower networks.
tower_1 = embedding_network(input_1)
tower_2 = embedding_network(input_2)
merge_layer = layers.Lambda(euclidean_distance)([tower_1, tower_2])
normal_layer = tf.keras.layers.BatchNormalization()(merge_layer)
output_layer = layers.Dense(1, activation="sigmoid")(normal_layer)
siamese = keras.Model(inputs=[input_1, input_2], outputs=output_layer)
# + [markdown] id="MPf40lKp9yS_"
# Define Contrastive Loss
# + id="aa8t3bdF90UK"
def loss(margin=1):
"""Provides 'constrastive_loss' an enclosing scope with variable 'margin'.
Arguments:
margin: Integer, defines the baseline for distance for which pairs
should be classified as dissimilar. - (default is 1).
Returns:
'constrastive_loss' function with data ('margin') attached.
"""
# Contrastive loss = mean( (1-true_value) * square(prediction) +
# true_value * square( max(margin-prediction, 0) ))
def contrastive_loss(y_true, y_pred):
"""Calculate the constrastive loss.
Arguments:
y_true: List of labels, each label is of type float32.
y_pred: List of predictions of same length as of y_true,
each label is of type float32.
Returns:
            A tensor containing contrastive loss as floating point value.
"""
square_pred = tf.math.square(y_pred)
margin_square = tf.math.square(tf.math.maximum(margin - (y_pred), 0))
return tf.math.reduce_mean((1 - y_true) * square_pred + (y_true) * margin_square)
return contrastive_loss
# + [markdown] id="cxsXlH8l92Q6"
# Compile the model with contrastive loss
# + colab={"base_uri": "https://localhost:8080/"} id="hX9aUD9Z94Yi" outputId="c700860a-b085-411c-b2f1-b712d752fddc"
siamese.compile(loss=loss(margin=margin), optimizer="RMSprop", metrics=["accuracy",keras.metrics.Precision(), keras.metrics.Recall(),keras.metrics.BinaryCrossentropy()])
siamese.summary()
# + [markdown] id="zvWeTMkw97Rn"
# Train the model
# + colab={"base_uri": "https://localhost:8080/"} id="tKaMGNRV99oc" outputId="a8ddac85-f9ab-4cc6-8737-bc8129e17f2f"
history = siamese.fit(
[x_train_1, x_train_2],
labels_train,
validation_data=([x_val_1, x_val_2], labels_val),
batch_size=batch_size,
epochs=epochs,
)
# + [markdown] id="trTX4B3X9_5s"
# ## Visualize results
# + id="0GzOPlDs-COo"
def plt_metric(metric, title):
plt.plot(history.history[metric])
plt.plot(history.history['val_'+metric])
plt.title(title)
plt.ylabel(metric)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# + id="7-BlIuW1Cobt" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="d9f38973-e3b9-49d3-fedf-0adce966f57e"
plt_metric(metric='accuracy', title='Model accuracy')
# + id="dWbTYVtqCw6a" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="028d59c6-2407-43df-e545-58ac9755e753"
plt_metric(metric='loss', title='Contrastive Loss')
# + id="vrQJnCtoC1S0" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="082bcbac-9709-463c-da24-5b8a897ecdfa"
plt_metric(metric='precision', title='Model Precision')
# + id="MtoUcaXBC5Xv" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="94f86fa6-1e6c-4b75-9d26-6ab3ee806d04"
plt_metric(metric='recall', title='Model Recall')
# + id="KyaJhY3bC-xw" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="f9c5127a-fe5e-4385-b6d5-3879b7155c60"
plt_metric(metric='binary_crossentropy', title='Binary Cross Entropy Loss')
# + [markdown] id="Tla5z6tw-D7d"
# Evaluate the model
# + id="dGYhKhX6-FgL" colab={"base_uri": "https://localhost:8080/"} outputId="be81a16e-5649-41de-bbfb-d4766bfa0455"
results = siamese.evaluate([x_test_1, x_test_2], labels_test)
print("test loss, test acc:", results)
# + id="tBNzCWsc0SHn"
predictions = siamese.predict([x_test_1, x_test_2])
# + [markdown] id="WfasVRQczPFa"
# ROC curve
# + id="aIYYqShVzQBL"
fpr_keras, tpr_keras, thresholds_keras = roc_curve(labels_test, predictions)
# + [markdown] id="_Wo0kW17zfS8"
# AUC
# + id="N_8j4PwEzgnQ"
auc_keras = auc(fpr_keras, tpr_keras)
# + [markdown] id="ibHpUFHd1V6V"
# **Plot AUC**
#
# As a rule of thumb, an AUC above 0.85 indicates high classification accuracy, one between 0.75 and 0.85 moderate accuracy, and one below 0.75 low accuracy (D'Agostino, Rodgers, & Mauck, 2018); these thresholds are applied to `auc_keras` in a short check after the plots below.
# + id="J9M8rXrx0bCW" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="0578c135-1883-4914-8e4d-060e5ec72591"
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve (zoomed in at top left)')
plt.legend(loc='best')
plt.show()
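# Applying the rule of thumb quoted above to `auc_keras`; `auc_quality` is just an
# illustrative helper name, not part of any library API.

# +
def auc_quality(auc_value):
    """Bucket an AUC value using the thresholds from D'Agostino, Rodgers, & Mauck (2018)."""
    if auc_value > 0.85:
        return "high"
    elif auc_value >= 0.75:
        return "moderate"
    return "low"

print("AUC = {:.3f} -> {} classification accuracy".format(auc_keras, auc_quality(auc_keras)))
# -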
# + [markdown] id="UBySLLZJ-HCw"
# Visualize the predictions
# + id="wu5DVxZy-JFh" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="99f1e692-013f-4877-bda8-f5f45b9d84ba"
visualize(
pairs_test, labels_test, to_show=3, predictions=predictions, test=True
)
| 16,274 |
/09_applied_data_science_capstone.ipynb
|
a881c4dd72506c5d1e58bcd8c21e6f05db146bd5
|
[] |
no_license
|
MesonicInterference/Coursera_Capstone
|
https://github.com/MesonicInterference/Coursera_Capstone
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,359 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 09 - Applied Data Science Capstone
#
# This Jupyter notebook will be used for the completion of the Applied Data Science Capstone component of the requirements for the IBM Data Science Professional Certificate.
### required imports
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
import psycopg2
from sql_queries import create_table_queries, drop_table_queries


def create_database():
    """
    Creates the sparkify database and returns a cursor and connection to it.
    """
    # connect to the default database
    conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student")
    conn.set_session(autocommit=True)
    cur = conn.cursor()
# create sparkify database with UTF8 encoding
cur.execute("DROP DATABASE IF EXISTS sparkifydb")
cur.execute("CREATE DATABASE sparkifydb WITH ENCODING 'utf8' TEMPLATE template0")
# close connection to default database
conn.close()
# connect to sparkify database
conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")
cur = conn.cursor()
return cur, conn
def drop_tables(cur, conn):
"""
Drops each table using the queries in `drop_table_queries` list.
"""
for query in drop_table_queries:
cur.execute(query)
conn.commit()
def create_tables(cur, conn):
"""
Creates each table using the queries in `create_table_queries` list.
"""
for query in create_table_queries:
cur.execute(query)
conn.commit()
def main():
"""
- Drops (if exists) and Creates the sparkify database.
- Establishes connection with the sparkify database and gets
cursor to it.
- Drops all the tables.
- Creates all tables needed.
- Finally, closes the connection.
"""
cur, conn = create_database()
drop_tables(cur, conn)
create_tables(cur, conn)
conn.close()
if __name__ == "__main__":
main()
# + [markdown] editable=true
# # Purpose
# This database stores song, artist, user, time, and songplay log data for the Sparkify app.
#
# # Database schema design
# It includes the tables below (a small illustrative table sketch follows):
# * songplays
# * users
# * songs
# * artists
# * time
#
# # ETL pipeline
# Read the JSON files and extract song, artist, and log data into the tables above.
#
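# The actual DDL lives in `sql_queries.py`; purely as an illustration of the shape of one
# dimension table (column names taken from the insert logic below, types assumed):
song_table_create_example = """
CREATE TABLE IF NOT EXISTS songs (
    song_id   VARCHAR PRIMARY KEY,
    title     VARCHAR,
    artist_id VARCHAR,
    year      INT,
    duration  FLOAT
);
"""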
# + editable=true
import os
import glob
import psycopg2
import pandas as pd
from sql_queries import *
def process_song_file(cur, filepath):
"""
Description: This function can be used to read the file in the filepath (data/song_data)
to get the song and artist info and used to populate the song and artist dim tables.
Arguments:
cur: the cursor object.
filepath: song data file path.
Returns:
None
"""
df = pd.read_json(filepath, lines=True)
# insert song record
song_data = df[['song_id', 'title', 'artist_id', 'year', 'duration']]
song_data = [list(row) for row in song_data.itertuples(index=False)]
song_data = song_data[0]
print(song_data[0])
cur.execute(song_table_insert, (song_data[0],song_data[1],song_data[2],int(song_data[3]),song_data[4]))
# insert artist record
artist_data = df[['artist_id', 'artist_name', 'artist_location', 'artist_latitude', 'artist_longitude']]
artist_data= [list(row) for row in artist_data.itertuples(index=False)][0]
cur.execute(artist_table_insert, artist_data)
def process_log_file(cur, filepath):
"""
Description: This function can be used to read the file in the filepath (data/log_data)
to get the user and time info and used to populate the users and time dim tables.
Arguments:
cur: the cursor object.
filepath: log data file path.
Returns:
None
"""
df = pd.read_json(filepath, lines=True)
    # filter by NextSong action ('page' column name assumed from the standard log schema)
    # df = df[df['page'] == 'NextSong']
# convert timestamp column to datetime
df['ts']=pd.to_datetime(df['ts'], unit='ms')
t = df[['ts']]
t.columns=['start_time']
t['hour']=t['start_time'].dt.hour
t['day']=t['start_time'].dt.day
t['week']=t['start_time'].dt.week
t['month']=t['start_time'].dt.month
t['year']=t['start_time'].dt.year
t['weekday']=t['start_time'].dt.weekday
# insert time data records
time_data = [list(row) for row in t.itertuples(index=False)]
column_labels = ['start_time', 'hour', 'day', 'week', 'month', 'year', 'weekday']
time_df = t
for i, row in time_df.iterrows():
cur.execute(time_table_insert, list(row))
# load user table
user_df = df[['userId', 'firstName', 'lastName', 'gender', 'level']]
# insert user records
for i, row in user_df.iterrows():
if row[0]=='':
continue
cur.execute(user_table_insert, row)
# insert songplay records
for index, row in df.iterrows():
# get songid and artistid from song and artist tables
cur.execute(song_select, (row.song, row.artist))
results = cur.fetchone()
if results:
songid, artistid = results
else:
songid, artistid = None, None
if row.userId =='':
continue
# insert songplay record
songplay_data = (row.ts,row.userId,row.level, songid, artistid,row.sessionId,row.location,row.userAgent)
cur.execute(songplay_table_insert, songplay_data)
def process_data(cur, conn, filepath, func):
"""
Description: This function can be used to get all files matching extension from directory
Arguments:
cur: the cursor object.
filepath: data file path.
func: the function to process data
Returns:
None
"""
all_files = []
for root, dirs, files in os.walk(filepath):
files = glob.glob(os.path.join(root,'*.json'))
for f in files :
all_files.append(os.path.abspath(f))
# get total number of files found
num_files = len(all_files)
print('{} files found in {}'.format(num_files, filepath))
# iterate over files and process
for i, datafile in enumerate(all_files, 1):
func(cur, datafile)
conn.commit()
print('{}/{} files processed.'.format(i, num_files))
def main():
conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")
cur = conn.cursor()
process_data(cur, conn, filepath='data/song_data', func=process_song_file)
process_data(cur, conn, filepath='data/log_data', func=process_log_file)
conn.close()
if __name__ == "__main__":
main()
# + editable=true
# + editable=true
| 6,667 |
/ExerciseCode/Lesson1.ipynb
|
4f1f04afc471e0b98cd9162ce4508ebc7c037be8
|
[] |
no_license
|
feitenglee/PythonLearning
|
https://github.com/feitenglee/PythonLearning
| 0 | 2 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 32,381 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PEP8 Python Coding Style Guide
# https://www.python.org/dev/peps/pep-0008/
#
# - Code layout
#     - Indentation: 4 spaces per level (any editor can do this); do not use tabs, and never mix tabs and spaces.
#     - Limit lines to a maximum of 79 characters; wrap long lines with a backslash or, preferably, parentheses, and break the line after the operator.
#     - Surround class and top-level function definitions with two blank lines; methods inside a class with one blank line; separate logically unrelated blocks inside a function with one blank line; avoid extra blank lines elsewhere.
# - Imports
#     - Do not import several libraries in one statement; for example, `import os, sys` is not recommended.
# - Whitespace
#     - Put one space on each side of an operator; do not add spaces just to line things up.
#     - Do not put multiple statements on one line, even though ';' allows it.
# - Documentation
#     - Write docstrings for all public modules, functions, classes, and methods; non-public ones do not need them, but a comment on the line after `def` is fine.
# - Naming conventions
#     - Module names should be short and all lowercase; underscores are allowed.
#     - Package names should be short and all lowercase; underscores are not allowed.
#     - Function names should be all lowercase; underscores are allowed.
#     - Constant names should be all uppercase; underscores are allowed.
#     - The first parameter of an instance method must be self, and the first parameter of a class method must be cls.
# - Coding recommendations
#     - Prefer 'is' / 'is not' over '==' when testing for None; for example, `if x is not None` is better than `if x`.
#     - Test boolean values directly with `if boolvalue`.
#
# # Python 3
#
# - print('hello')
# - def func(x:str) -> bool
# - Renamed standard-library modules
#
# A small example following these conventions appears below.
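
# A minimal illustration of the conventions above (all names here are made up):

# +
MAX_RETRIES = 3  # constant: all uppercase, underscores allowed


def is_valid_key(key: str) -> bool:
    """Public function: lowercase name, docstring, spaces around operators."""
    return key is not None and len(key) > 0


class CacheClient:
    """Class with an instance method (self) and a class method (cls)."""

    def get(self, key):
        return is_valid_key(key)

    @classmethod
    def default(cls):
        return cls()
# -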
# # Core Objects
# ## String Definition
s = 'hello world'
print(s)
s = "helloworld"
print(s)
s = """helloworld
是是是
another helloworld""";
print(s)
# ## String Escape Characters
s = ""cat""
s = "\"cat\""; print(s)
# s = ''cat''      # SyntaxError: the inner quotes must be escaped
s = '\'cat\''; print(s)
s = '"cat"'; print(s)
s = '\t\tcat'; print(s)
s = 'cat\b'; print(s)
# 80 char per line
s = 'cat catch \
mouse'
print(s)
s = 'cat catch\nmouse'
print(s)
# ## String Formatting
# +
# %[(name)][flags][width].[precision]typecode
# -
s = "love"
t = "python"
print(s+t)
print(s + " " + t)
sp = "%s %s" % (s, t)
print(sp)
sp = "%-10s %s" % (s, t)
print(sp)
sp = "%+10s %s" % (s, t)
print(sp)
n = 10
print(s + " " + n)
sp = "%s %d" % (s, n)
#int
print(sp)
# escaping the % character
sp = '%%%s%%' % s
print(sp)
# floating-point numbers
pi = 3.1415
sp = '%.2f' % pi;
print(sp)
sp = '%.6f' % pi;
print(sp)
# a more complex case
sp = 'spend $%d to buy %d %s' % (1.25, 8, 'cars')
print(sp)
# an alternative way to write it (str.format)
sp = 'spend ${0} to buy {1} {2}'.format(1.25, 8, 'cars')
print(sp)
# ## String Operations
print(s*3)
# ## Indexing and Slicing
print(s[1:2])
print(s[1:4])
print(s[1:])
print(len(s))
s = ' car,Bat '
print(s.strip())
print(s.rstrip())
print(s.lstrip())
s.replace('a', 'h')
s.split(',')
s.upper()
s.lower()
s = 'car'
print(s.find('a'))
print(s.find('h'))
# ## Raw Strings
s = r'\n\n\n\t\"car'
print(s)
# ## Bytes and Str
s = u'中国'
t = '中国'
print(type(s))
print(type(t))
# +
# bytes literals can contain characters or ASCII escape values
z1 = b'abc'
print(type(z1), z1)
z2 = b'xyz\63'
print(type(z2), z2)
# -
s1 = s.encode('utf8')
print(type(s1), s1)
t1 = t.decode('utf8')
print(type(t1))
s2 = s1.decode('utf8')
print(type(s2), s2)
print(s)
print(t)
print(len(s))
print(len(t))
import sys
sys.getdefaultencoding()
# +
# reload(sys)
# print sys.getdefaultencoding()
# sys.setdefaultencoding('utf-8')
# -
x = '\u4e2d\u56fd'
print(type(x), x)
print(sys.maxunicode)
print(x.encode('utf8'))
# ## Numeric Types
print(1234, -100, 0, 99999999999999999999999999999999999999999999999999999999999999)
99999999999999999999999999999999999999999999999999999999999999 + 1
# ### Different Number Bases
print(0o777, 0x8ee, 0b100)
print(0xFFFFF100)
print(hex(100), oct(100), bin(100))
print(1.23, .23, 3.14e-10)
print(10e2)
# complex numbers
print(3+2j, complex(3, 2))
# ### Type Conversion
int(1.23)
float(1)
# ### Math Operations
print(pow(2,3))
print(abs(-2))
print(round(2.3))
print(round(2.7))
# ## Lists
# definition
a = []
print(a)
a = list()
print(a)
a = [1, 2, 3]
print(a)
a = [1, 2.0, 'str']
print(a)
# ### List Operations
len(a)
x = [1, 2, 3] + [8, 'no', 1.0, 2+3j, 2e3]
print(x)
x = [1, 2, 3] + 'a'
print(x)
x = [1, 2] * 4
print(x)
# +
### Slicing and Indexing
# -
a = [1 ,2, 3]
a[0]
a[-1]
a[4]
a[1:]
a[2:]
a[4:]
a[1:2]  # slicing returns a list, not an element
a[:-1]
a[:-2]
print(a[1:-1])
# +
# modification
# -
a = [1, 2, 3]
a[2] = 4
a
a[0] = 5
a
# ### List Methods
a = [1, 2, 3]
print(a.append(4))  # prints None: list.append modifies the list in place and returns None
a.pop(0); print(a)
a.pop(-1); print(a)
a.insert(0, 1); print(a)
a.insert(-1, 4); print(a)
a.insert(1, 1.5); print(a)
a.remove(1.5); print(a)
a.remove(8);
# sorting
a.sort(); print(a)
sorted(a)
b = ['cde','abc','ABD', 'acd']
b.sort()
b
b.sort(key=str.lower)
b
b.sort(reverse=True)
b
a = [1,2,3]
a.reverse()
a
# +
# extending a list
# -
a = [1, 23]
b = [2, 89]
a.extend(b)
a
# +
# finding an element's index
# -
a = [3,8,10]
a.index(8)
a.index(10)
a.index(-1)
a = [1, 2, 3]
del a[0]
a
del a[-1]
a
del a[0]
a
# +
# iterating over a list
# -
a = [1,2,3]
for e in a:
    print(e, end=' ')
# # Dictionaries
d = {}
d
d = dict()
d
d = {1:'beijing', 2:'hangzhou'}
d
d.keys()
d.values()
d.items()
# +
# indexing by key
# -
d[1]
d[2]
d[3]
# +
# modification
# -
d[1] = 'hebei'
d
d.pop(1)
d
d.pop(2)
d
d = {1:'beijing', 2:'hangzhou'}
del d[1]
d
# # Sets
a = set()
a
a = set([1,2,3])
a
a = set([1,1])
a
a = set([1,2,3])
b = set([2,3,4])
a.union(b)
a = set([1,2,3])
b = set([2,3,4])
a.intersection(b)
a = set([1,2,3])
b = set([2,3,4])
a.difference(b)
a = set([1,2,3])
a.add(4)
a
a = set([1,2,3])
a.add(3)
a
a = set([1,2,3])
a.clear()
a
a = set([1,2,3])
a.pop()
a
# # Tuples
a = ()
a
a = tuple([1,2])
a
a = (2,3,2)
a
a = (40)
a
a = (40,)
a
a = (1,2,3,4)
a[0:3]
a.remove(2)
a.count(2)
a.index(2)
a = (2,3,2)
b = (3,4,5)
a + b
a
b
# # Arithmetic Operations
0.2 + 0.1 + 0.3
1 - 0.9
import decimal
decimal.Decimal(1) - decimal.Decimal(0.9)
decimal.getcontext().prec = 4
decimal.Decimal(1) - decimal.Decimal(0.9)
# # Other Operations
3 ** 3
15//2
15/2
15./2
3<2<5
2==2==2
2==2==3
# # Logical Operations
True and False
True or False
not True
not False
def check():
print('done')
return False
True and check()
True or check()
not True and False or True
not True and False or False
((not True) and False) or True
((not True) and False) or False
0b11110000 & 0b00001111
~0b11110000
bin(~241)
0b11110000 | 0b00001111
2>>1 #10
# 10->1
2<<1 #10 -> 100
# # Shallow and Deep Copies
a = [1,2,3]
b = a
b[0] = 3
print(a, b)
import copy
b = copy.deepcopy(a)
b[0] = 5
print(a, b)
x = b.copy()
x[0] = 7
print(x, b)
a = [1,2,3]
b = a
id(a) == id(b)
id(a)
id(b)
| 5,899 |
/training/training.ipynb
|
df1204182b762a7eb65587f4c83941a84e21b00a
|
[] |
no_license
|
mhdsbq/MLtest
|
https://github.com/mhdsbq/MLtest
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 149,767 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#importing libraries
import pandas as pd
#read the data files
dtrain = pd.read_csv('loan_pred_OHE_train_data.csv')
dtest = pd.read_csv('loan_pred_OHE_test_data.csv')
# +
#drop the Loan_ID variable
test_original = pd.DataFrame()
train = dtrain.drop('Loan_ID',axis=1)
test_original['Loan_ID'] = dtest['Loan_ID']
test = dtest.drop('Loan_ID',axis=1)
# -
test_original
train.head()
#seperating the target dataset
X = train.drop('Loan_Status',axis=1)
y = train.Loan_Status
# ### Predictive Modelling
# +
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
#Splitting the train data in train and cross validation
#x_train,x_cv,y_train,y_cv = train_test_split(X,y,test_size=0.3)
# -
# ## Logistic Regression
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
i=1
kf = StratifiedKFold(n_splits=5,random_state=1,shuffle=True)
for train_index,test_index in kf.split(X,y):
print(' {} of kfold {}'.format(i,kf.n_splits))
xtr,xvl = X.iloc[train_index],X.iloc[test_index]
ytr,yvl = y.iloc[train_index],y.iloc[test_index]
model = LogisticRegression(random_state=1)
model.fit(xtr,ytr)
pred_test = model.predict(xvl)
score = accuracy_score(yvl,pred_test)
print("accuracy score: ",score)
i +=1
pred_test = model.predict(test)
#pred = model.predict_proba(xvl)[:,1]
# +
#Reading the sample submission file
submission = pd.read_csv('results/sample_submission_49d68Cx.csv')
submission['Loan_Status'] = pred_test
submission['Loan_ID'] = test_original['Loan_ID']
# -
#we need Loan Status in terms of Y and N
submission['Loan_Status'].replace(1,'Y',inplace=True)
submission['Loan_Status'].replace(0,'N',inplace=True)
#Converting it to csv format
pd.DataFrame(submission,columns=['Loan_ID','Loan_Status']).to_csv('results/log2.csv',index=False)
import numpy as np
unique,counts = np.unique(pred_test,return_counts=True)
print(np.asarray((unique,counts)).T)
# ## Decision Tree
from sklearn.model_selection import StratifiedKFold
from sklearn import tree
i=1
kf = StratifiedKFold(n_splits=5,random_state=1,shuffle=True)
for train_index,test_index in kf.split(X,y):
print(' {} of kfold {}'.format(i,kf.n_splits))
xtr,xvl = X.iloc[train_index],X.iloc[test_index]
ytr,yvl = y.iloc[train_index],y.iloc[test_index]
model = tree.DecisionTreeClassifier(random_state=1)
model.fit(xtr,ytr)
pred_test = model.predict(xvl)
score = accuracy_score(yvl,pred_test)
print("accuracy score: ",score)
i +=1
pred_test = model.predict(test)
#pred = model.predict_proba(xvl)[:,1]
import numpy as np
unique,counts = np.unique(pred_test,return_counts=True)
print(np.asarray((unique,counts)).T)
# +
#Reading the sample submission file
submission = pd.read_csv('results/sample_submission_49d68Cx.csv')
submission['Loan_Status'] = pred_test
submission['Loan_ID'] = test_original['Loan_ID']
# -
#we need Loan Status in terms of Y and N
submission['Loan_Status'].replace(1,'Y',inplace=True)
submission['Loan_Status'].replace(0,'N',inplace=True)
#Converting it to csv format
pd.DataFrame(submission,columns=['Loan_ID','Loan_Status']).to_csv('results/decision_tree.csv',index=False)
# ## Random Forest
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
i=1
kf = StratifiedKFold(n_splits=5,random_state=1,shuffle=True)
for train_index,test_index in kf.split(X,y):
print(' {} of kfold {}'.format(i,kf.n_splits))
xtr,xvl = X.iloc[train_index],X.iloc[test_index]
ytr,yvl = y.iloc[train_index],y.iloc[test_index]
model = RandomForestClassifier(random_state=1,max_depth=10)
model.fit(xtr,ytr)
pred_test = model.predict(xvl)
score = accuracy_score(yvl,pred_test)
print("accuracy score: ",score)
i +=1
pred_test = model.predict(test)
#pred = model.predict_proba(xvl)[:,1]
import numpy as np
unique,counts = np.unique(pred_test,return_counts=True)
print(np.asarray((unique,counts)).T)
# +
#Reading the sample submission file
submission = pd.read_csv('results/sample_submission_49d68Cx.csv')
submission['Loan_Status'] = pred_test
submission['Loan_ID'] = test_original['Loan_ID']
# -
#we need Loan Status in terms of Y and N
submission['Loan_Status'].replace(1,'Y',inplace=True)
submission['Loan_Status'].replace(0,'N',inplace=True)
#Converting it to csv format
pd.DataFrame(submission,columns=['Loan_ID','Loan_Status']).to_csv('results/random_forest.csv',index=False)
# ## Random Forest using GridSearch
# +
from sklearn.model_selection import GridSearchCV
#define parameters
param_grid = { 'max_depth' : list(range(1,21,2)),'n_estimators' : list(range(1,200,20))}
model = RandomForestClassifier(random_state=1)
grid_search = GridSearchCV(model,param_grid)
#fitting the model
grid_search.fit(xtr,ytr)
# -
#estimating the optimized value
grid_search.best_estimator_
# +
from sklearn.model_selection import StratifiedKFold
i=1
kf = StratifiedKFold(n_splits=5,random_state=1,shuffle=True)
for train_index,test_index in kf.split(X,y):
print(' {} of kfold {}'.format(i,kf.n_splits))
xtr,xvl = X.iloc[train_index],X.iloc[test_index]
ytr,yvl = y.iloc[train_index],y.iloc[test_index]
model = RandomForestClassifier(random_state=1,max_depth=5,n_estimators=61)
model.fit(xtr,ytr)
pred_test = model.predict(xvl)
score = accuracy_score(yvl,pred_test)
print("accuracy score: ",score)
i +=1
pred_test = model.predict(test)
pred2 = model.predict_proba(xvl)[:,1]
# -
import numpy as np
unique,counts = np.unique(pred_test,return_counts=True)
print(np.asarray((unique,counts)).T)
# +
#Reading the sample submission file
submission = pd.read_csv('results/sample_submission_49d68Cx.csv')
submission['Loan_Status'] = pred_test
submission['Loan_ID'] = test_original['Loan_ID']
# -
#we need Loan Status in terms of Y and N
submission['Loan_Status'].replace(1,'Y',inplace=True)
submission['Loan_Status'].replace(0,'N',inplace=True)
#Converting it to csv format
pd.DataFrame(submission,columns=['Loan_ID','Loan_Status']).to_csv('results/Grid_random_forest.csv',index=False)
#calculating importance of each feature
importances = pd.Series(model.feature_importances_,index= X.columns)
importances.plot(kind='barh',figsize=(12,8))
# ## XGBoost Classifier
# Parameters:
#
# - n_estimators: the number of trees in the model
# - max_depth: the maximum depth of each tree
# +
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier
i=1
kf = StratifiedKFold(n_splits=5,random_state=1,shuffle=True)
for train_index,test_index in kf.split(X,y):
print(' {} of kfold {}'.format(i,kf.n_splits))
xtr,xvl = X.iloc[train_index],X.iloc[test_index]
ytr,yvl = y.iloc[train_index],y.iloc[test_index]
model = XGBClassifier(random_state=1,max_depth=4,n_estimators=50)
model.fit(xtr,ytr)
pred_test = model.predict(xvl)
score = accuracy_score(yvl,pred_test)
print("accuracy score: ",score)
i +=1
pred_test = model.predict(test)
pred2 = model.predict_proba(xvl)[:,1]
# +
#Reading the sample submission file
submission = pd.read_csv('results/sample_submission_49d68Cx.csv')
submission['Loan_Status'] = pred_test
submission['Loan_ID'] = test_original['Loan_ID']
# -
#we need Loan Status in terms of Y and N
submission['Loan_Status'].replace(1,'Y',inplace=True)
submission['Loan_Status'].replace(0,'N',inplace=True)
#Converting it to csv format
pd.DataFrame(submission,columns=['Loan_ID','Loan_Status']).to_csv('results/XGBclassifier.csv',index=False)
| 7,957 |
/Big-O.ipynb
|
130ffaac987bc130f42ebb106809e30917292c16
|
[] |
no_license
|
BrunoPTeruya/Python
|
https://github.com/BrunoPTeruya/Python
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 70,308 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Implement the function fib(n), which returns the nth number in the Fibonacci sequence, using only O(1) space.
# This exercise requires understanding the concept of Big-O notation.
# Big-O describes how the time or space used by an algorithm grows as the input grows. The lower this growth, the better, since it allows the algorithm to scale.
# 
# In this exercise, the code is required to use O(1) space, meaning we cannot grow the memory used while processing the input. To satisfy this constraint, each newly generated value overwrites the older ones, keeping the space constant.
def fibonacci(n):
a = 0
b = 1
if n < 0:
print("Não existe")
elif n == 0:
return 0
elif n == 1:
return 1
else:
for i in range(n - 1):
c = a + b
a = b
b = c
return c
fibonacci(10)
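# As a usage example, a quick sanity check against the first eleven Fibonacci numbers:
assert [fibonacci(i) for i in range(11)] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]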
| 1,291 |
/notebooks/plot_periods.ipynb
|
aaa8f8b98ebf6ef2fd7165a062ebb732b372d58b
|
[] |
no_license
|
jbirky/tess_binaries
|
https://github.com/jbirky/tess_binaries
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 13,682 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import os, sys
import glob
from astropy import units as u
import random
import h5py
import itertools
import multiprocessing as mp
import time
import pickle
from astropy.timeseries import BoxLeastSquares, LombScargle
from scipy.signal import find_peaks, peak_widths
from sklearn.preprocessing import MinMaxScaler
import altair as alt
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator
from matplotlib import rc
from matplotlib import cm
import matplotlib.colors as colors
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
plt.style.use('classic')
rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
rc('text', usetex=True)
rc('figure', facecolor='w')
rc('xtick', labelsize=20)
rc('ytick', labelsize=20)
sys.path.append('/astro/users/jbirky/projects/tess_binaries')
os.environ['TESS_DATA'] = '/data/epyc/projects2/tess'
import tess_binaries as tb
# +
ff = h5py.File(f'{tb.cat_dir}/asassn_tess_inspected.hdf5', mode="r")
df = {}
for key in list(ff):
if key == 'type':
df[key] = np.array(ff[key].value, dtype='str')
else:
df[key] = ff[key].value
ff.close()
sample = pd.DataFrame(data=df)
# -
def computePowerSpectra(tic_id, **kwargs):
ps_dir = tb.ps_dir
data, end_times = tb.readSourceFiles(tic_id)
ts = data[~np.isnan(data['pdcsap_flux'])]
#Compute lomb-scargle power series
ls0 = time.time()
ls = LombScargle(ts.time.jd, ts['pdcsap_flux'], dy=ts['pdcsap_flux_err'], \
normalization='standard')
ls_freq, ls_power = ls.autopower(minimum_frequency=.001, \
maximum_frequency=100, samples_per_peak=10)
ls_time = time.time() - ls0
ls_pwr_spec = np.vstack([np.array(1/ls_freq), np.array(ls_power)])
print(f'Lomb-Scargle compute time: {ls_time}')
#find best lomb-scargle period
ls_max_pwr = np.argmax(ls_pwr_spec, axis=1)[1]
ls_best_period = ls_pwr_spec[0][ls_max_pwr]
#find best BLS period
bls0 = time.time()
psamp = 10**3
rng = [ls_best_period/4, ls_best_period*5]
model = BoxLeastSquares(ts.time.jd * u.day, ts['pdcsap_flux'], dy=ts['pdcsap_flux_err'])
periods = np.linspace(rng[0], rng[1], psamp) * u.day
periodogram = model.power(periods, rng[0]/2)
bls_time = time.time() - bls0
bls_pwr_spec = np.vstack([np.array(periodogram.period), np.array(periodogram.power)])
bls_max_pwr = np.argmax(bls_pwr_spec, axis=1)[1]
bls_best_period = bls_pwr_spec[0][bls_max_pwr]
ps_output = {'tic_id':tic_id, 'data': data, 'ls_pwr_spec': ls_pwr_spec, 'bls_pwr_spec': bls_pwr_spec, \
'ls_best_period': ls_best_period, 'bls_best_period': bls_best_period, \
'ls_time': ls_time, 'bls_time': bls_time, 'end_times': end_times}
fname = f'{ps_dir}/{tic_id}_ps.pkl'
print(f'Saving power spectra to {ps_dir}.')
output = open(fname, 'wb')
pickle.dump(ps_output, output)
return ps_output
a = computePowerSpectra(sample['tic_id'][0])
print(a['ls_best_period'])
print(a['bls_best_period'])
# +
pool = mp.Pool(mp.cpu_count())
t0 = time.time()
result = pool.map(computePowerSpectra, list(sample['tic_id']))
t1 = time.time() - t0
# -
print(t1/60)
ls_period, bls_period = [], []
for ID in list(sample['tic_id']):
infile = open(f'{tb.ps_dir}/{ID}_ps.pkl','rb')
ps_dict = pickle.load(infile)
infile.close()
ls_period.append(ps_dict['ls_best_period'])
bls_period.append(ps_dict['bls_best_period'])
sample['ls_period'] = np.array(ls_period)
sample['bls_period'] = np.array(bls_period)
# +
s1 = sample[sample['type'].isin(['EA', 'EB', 'EW'])]
s2 = sample[~sample['type'].isin(['EA', 'EB', 'EW'])]
x = np.arange(0,100)
plt.figure(figsize=[10,8])
plt.scatter(s1['period'], s1['ls_period'], edgecolor='none', facecolor='k', label='EB')
# plt.scatter(s2['period'], s2['ls_period'], edgecolor='none', facecolor='m', label='Non EB')
plt.plot(x,2*x, linewidth=.5, linestyle='--', label=r'$y=2x$')
plt.plot(x,x, linewidth=.5, linestyle='--', label=r'$y=x$')
plt.plot(x,x/2, linewidth=.5, linestyle='--', label=r'$y=x/2$')
plt.plot(x,x/4, linewidth=.5, linestyle='--', label=r'$y=x/4$')
plt.legend(loc='lower right', scatterpoints=1)
plt.xlabel('ASASSN period', fontsize=18)
plt.ylabel('Lomb-Scargle period', fontsize=18)
plt.xscale('log')
plt.yscale('log')
plt.xlim(.01,100)
plt.ylim(.01,100)
# plt.savefig('period_comparison.png')
plt.show()
x = np.arange(0,100)
plt.figure(figsize=[10,8])
plt.scatter(s1['period'], s1['bls_period'], edgecolor='none', facecolor='k', label='EB')
# plt.scatter(s2['period'], s2['bls_period'], edgecolor='none', facecolor='m', label='Non EB')
plt.plot(x,2*x, linewidth=.5, linestyle='--', label=r'$y=2x$')
plt.plot(x,x, linewidth=.5, linestyle='--', label=r'$y=x$')
plt.plot(x,x/2, linewidth=.5, linestyle='--', label=r'$y=x/2$')
plt.plot(x,x/4, linewidth=.5, linestyle='--', label=r'$y=x/4$')
plt.legend(loc='lower right', scatterpoints=1)
plt.xlabel('ASASSN period', fontsize=18)
plt.ylabel('BLS period', fontsize=18)
plt.xscale('log')
plt.yscale('log')
plt.xlim(.01,100)
plt.ylim(.01,100)
# plt.savefig('period_comparison.png')
plt.show()
# +
bad = sample.query('ls_period < (period/2 - .1*period)')
print(len(bad))
# parr = np.linspace(0,1,100)
# for i, fl in enumerate(list(bad['flux'])):
# print(list(bad['tic_id'])[i])
# print('A per:', list(bad['period'])[i])
# print('LS per:', list(bad['ls_period'])[i])
# print(list(bad['ls_period'])[i]/list(bad['period'])[i])
# plt.plot(parr, fl, label=list(bad['type'])[i])
# plt.legend(loc='upper left')
# plt.show()
# -
ls_flux_fold = ps_dict['data'].fold(period=ps_dict['ls_best_period']*u.day)
# +
from scipy import optimize
def test_func(x, a, T, p, h):
return a * np.sin(2*np.pi*x/T - p) + h
params, params_covariance = optimize.curve_fit(test_func, pharr, ls_flux_binned, p0=[2,.5,-.2,.1])
# +
plt.figure(figsize=(6, 4))
plt.scatter(pharr, ls_flux_binned, label='Data')
plt.plot(pharr, test_func(pharr, params[0], params[1], params[2], params[3]),label='Fitted function')
plt.legend(loc='best')
plt.show()
# +
s = sample[sample['type'] == 'EB'].reset_index(drop=True)
pharr = np.linspace(0,1,100)
scaler = MinMaxScaler()
for i in np.arange(5,10):
print(s['type'][i])
infile = open(f"{tb.ps_dir}/{s['tic_id'][i]}_ps.pkl",'rb')
ps_dict = pickle.load(infile)
infile.close()
peaks, properties = find_peaks(ps_dict['ls_pwr_spec'][1], height=.1, distance=100)
# widths = peak_widths(ps_dict['ls_pwr_spec'][1], peaks, rel_height=0.5)[0]
# ps_dict['ls_best_period'] *= 2
# ps_dict['bls_best_period'] *= 2
print('ASASSN:', s['period'][i])
print('LS:', ps_dict['ls_best_period'])
print('BLS:', ps_dict['bls_best_period'])
#Fold data
ls_flux_fold = ps_dict['data'].fold(period=ps_dict['ls_best_period']*u.day)
ls_flux_binned = tb.binData(ls_flux_fold, 100)
ls_flux_binned = np.roll(np.array(ls_flux_binned), 100-np.argmin(ls_flux_binned))
ls_dat = np.vstack([pharr, ls_flux_binned]).T
scaler.fit(ls_dat)
ls_flux_binned = scaler.transform(ls_dat).T[1]
bls_flux_fold = ps_dict['data'].fold(period=ps_dict['bls_best_period']*u.day)
bls_flux_binned = tb.binData(bls_flux_fold, 100)
bls_flux_binned = np.roll(np.array(bls_flux_binned), 100-np.argmin(bls_flux_binned))
bls_dat = np.vstack([pharr, bls_flux_binned]).T
scaler.fit(bls_dat)
bls_flux_binned = scaler.transform(bls_dat).T[1]
as_flux_fold = ps_dict['data'].fold(period=s['period'][i]*u.day)
fig, (ax1,ax2) = plt.subplots(1,2, figsize=[18,6])
ax1.plot(pharr, s['flux'][i], color='k')
ax1.plot(pharr, ls_flux_binned, color='g')
ax1.plot(pharr, bls_flux_binned, color='b')
ax2.plot(ps_dict['bls_pwr_spec'][0], ps_dict['bls_pwr_spec'][1]/max(ps_dict['bls_pwr_spec'][1]), color='b')
ax2.plot(ps_dict['ls_pwr_spec'][0], ps_dict['ls_pwr_spec'][1], color='g')
ax2.plot(ps_dict['ls_pwr_spec'][0][peaks], ps_dict['ls_pwr_spec'][1][peaks], "x", color='r')
ax2.axvline(s['period'][i], color='k', alpha=.5, linestyle='--')
# ax2.axvline(ps_dict['ls_pwr_spec'][0][peaks[np.argmax(properties['peak_heights'])]], color='r', alpha=.5)
ax2.axvline(2*s['period'][i], color='m', alpha=.5)
ax2.axvline(.5*s['period'][i], color='m', alpha=.5)
# ax2.axvline(.25*ps_dict['ls_pwr_spec'][0][peaks[np.argmax(properties['peak_heights'])]], color='m', alpha=.5)
ax2.set_xscale('log')
ax2.set_ylim(0,1.1)
ax2.set_xlim(.1, 100)
plt.show()
fig, (ax1,ax2,ax3) = plt.subplots(1,3, figsize=[18,6])
ax1.scatter(as_flux_fold.time.jd, as_flux_fold['pdcsap_flux'], s=5, edgecolor='none', facecolor='k')
ax2.scatter(ls_flux_fold.time.jd, ls_flux_fold['pdcsap_flux'], s=5, edgecolor='none', facecolor='g')
ax3.scatter(bls_flux_fold.time.jd, bls_flux_fold['pdcsap_flux'], s=5, edgecolor='none', facecolor='b')
ax1.set_title('ASASSN')
ax2.set_title('Lomb-Scargle')
ax3.set_title('Box Least Squares')
plt.show()
# -
alt.Chart(sample).mark_circle(size=60).encode(
x='period',
y='ls_period',
color='type',
tooltip=['tic_id', 'type']
).interactive()
| 9,556 |
/Gnod case study part 2- webscraping for top songs continued.ipynb
|
524db708926640208985bb8fe8904a23c6e94142
|
[] |
no_license
|
elizabeth-sames/Gnod_song_recommender
|
https://github.com/elizabeth-sames/Gnod_song_recommender
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 12,609 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import requests
from bs4 import BeautifulSoup
import pandas as pd
#top songs in UK
url = 'https://www.officialcharts.com/charts/singles-chart/'
response = requests.get(url)
response.status_code
soup = BeautifulSoup(response.content, 'html.parser')
# +
#main > article > div > div.grid__cell.unit-2-3--desktop > section > table > tbody > tr:nth-child(2) > td:nth-child(3) > div
# +
#soup.select('div.title a')
#soup.select('div.artist a')
# +
from string import capwords
songs = []
artists = []
for i in range(len(soup.select('div.title a'))):
songs.append(capwords(soup.select('div.title a')[i].get_text()))
artists.append(capwords(soup.select('div.artist a')[i].get_text()))
# -
chart_songs = pd.DataFrame({"title":songs, "artist":artists})
chart_songs
chart_songs.to_csv('top_UK_songs.csv')
#Billboard Global 200
url = 'https://www.billboard.com/charts/billboard-global-200'
response = requests.get(url)
response.status_code
soup = BeautifulSoup(response.content, 'html.parser')
# +
#charts > div > div.chart-list__wrapper > div > ol > li:nth-child(1) > button > span.chart-element__information
# +
#soup.select('span.chart-element__information span')
# +
title = []
artist = []
max_iter = len(soup.select('span.chart-element__information span'))
for i in range(0,max_iter,7):
title.append(soup.select('span.chart-element__information span')[i].get_text())
artist.append(soup.select('span.chart-element__information span')[i+1].get_text())
# -
top_songs = pd.DataFrame({"title":title, "artist":artist})
top_songs
top_songs.to_csv('international_top100.csv')
# The culmination of these two popular events led us to be curious about how much politics can impact finance, or the stock market in particular.
#
# A significant concern when discerning who will be the next President of the United States is how that politician will impact the U.S. stock market - a cornerstone piece of the US and global economy. We utilized the various articles and prior research conducted, which are listed below, regarding the influence of the U.S. president on the U.S. stock market. These resources all seem to agree on a similar sentiment: the political party of the president does not have significant influence on the U.S. stock market. Although this trend was common, we wanted to discover through our project exactly why this was the case as political and economic affairs are relevant elements that have real impacts on practically all United States citizens and many other individuals across the globe.
#
# - 1) Stop stressing about which party is better for the stock market: The data shows it doesn’t matter much. https://www.cnbc.com/2020/11/03/are-republicans-or-democrats-better-for-the-stock-market.html
#
# Generally, one United States political party does not have a dramatic influence over the United States stock market. A majority party in the senate and house of representatives along with party seats within these chambers causes no long-term impacts to the stock market. The most influence politics has on the market is observed when the parties within the government are split “[The] markets tend to like checks and balances”. When the party of the president is considered in the mix, the benefits on the stock market of the split governing party concept becomes a little more ambiguous as the powers are not evenly distributed. However, taking a detailed look at the market during different instances of the divisions of power within the government, we can see that the market performs the best when the powers are as evenly split as possible. Ultimately, a political party produces no momentous impact on the stock market in the long run, regardless of how balanced the government is in terms of political party.
#
# - 2) How Presidential Elections Affect the Stock Market https://www.kiplinger.com/investing/stocks/601629/how-presidential-elections-affect-the-stock-market
#
# It has been observed over the past century that the United States stock market is not affected in a long-term sense by a United States president and their term until an election year rolls around. The market seems to perform well during the first three years of a president’s term then slows down substantially during an election year. The political party of the president shows that there is some short-term impact on the market that breaks the stigma claiming republicans are better for the stock market. We can see that when a democratic president holds office, the market performs significantly better than when a republican president is in office. However, the market may also perform well during an election year if a republican is predicted to win the election. It is also claimed that when political parties split the powers within the US government, the market actually performs worse than when one party occupies the majority of multiple chambers. Albeit, this is very contingent on the political and global atmosphere of that given time. Ultimately, election outcomes and political parties won’t impact the market in the long run, but the market can actually give an indication of which party will hold the white house depending on its performance leading up to the election.
#
# - 3) How does expansionary economic policy impact the stock market? https://www.investopedia.com/ask/answers/042115/how-does-expansionary-economic-policy-impact-stock-market.asp
#
# When it comes to expansionary economic policies, it comes down to two concepts: expansionary fiscal policy and expansionary monetary policy. Expansionary fiscal policies help expand aggregate demand, the overall demand in an economy, and employment rates by sparking growth in economic activity and consumer spending. These policies can cause stock prices to rise and an increase in government spending on infrastructure developments or tax cuts. Expansionary monetary policies boost the money supply in a given economy by enhancing certain financial conditions such as decreasing interest rate payments and financing costs for larger purchases. With the increased money supply, consumer and corporate spending also increase, ultimately boosting the economy and rising stock prices.
#
# Based on our background research, it seems that the presidential party does not have much of an effect on the stock market. However, as the U.S. president has direct influence over fiscal and economic policies, it seems inconsistent that the political party of the president would have insignificant effect on the stock market prices. We wanted to investigate this further, as data analysis might prove that one party does tend to grow the stock market more than the other.
#
# # Hypothesis
#
# We hypothesize that a Republican president will improve the stock market performance significantly compared to a Democratic president. We believe that Republican fiscal policies tend to concentrate on developing businesses and that Republicans typically favor increased spending and free markets over the Democrats.
# # Dataset(s)
# - Dataset Name: sp(S&P500)
# - Link to the dataset: https://finance.yahoo.com/quote/%5EGSPC/
# - Number of observations: 23278 entries with 6 variables describing S&P 500 stock price data for different weekdays
#
# This is the S&P 500 stock market data obtained from Yahoo Finance, with the time data stored using the datetime package. We use pandas_datareader.data to extract the data from Yahoo.
#
# - Dataset Name: dow(DOW)
# - Link to the dataset: https://finance.yahoo.com/quote/%5EDJI/
# - Number of observations: 25028 entries with 6 variables describing Dow Jones stock price data for different weekdays
#
# This is the Dow Jones Industrial Average stock market data obtained from Yahoo Finance, with the time data stored using the datetime package. We use pandas_datareader.data to extract the data from Yahoo.
#
# - Dataset Name: nasdaq(NASDAQ)
# - Link to the dataset: https://finance.yahoo.com/quote/%5EIXIC/
# - Number of observations: 12504 entries with 7 variables describing NASDAQ stock price data for different weekdays
#
# This is the Nasdaq Composite stock market data obtained from Yahoo Finance, with the time data stored using the datetime package. We use pandas_datareader.data to extract the data from Yahoo.
#
# - Dataset Name: pres(Presidents)
# - Link to the dataset: https://gist.githubusercontent.com/namuol/2657233/raw/74135b2637e624848c163759be9cd14ae33f5153/presidents.csv
# - Number of observations: 45 entries with 9 variables
#
# This is the presidential data from Kaggle.
#
# Each entry in the S&P 500, DOW, and NASDAQ datasets represents a different weekday of stock market price data, and all three datasets have the same six variables, but we plan on using only two of them: Date (the date the data was recorded) and Close (the last stock price of the day).
#
# The Presidents dataset contains an entry for each U.S. President up to Barack Obama, with nine variables. However, we plan on using only five of those variables: Presidency (President number), President (name of President), Took Office (start date of term), Left Office (end date of term), and Party (political party). We have added a new entry for President Donald Trump to keep the dataset up to date with the stock data.
#
# We plan on comparing the political party of each president with the change in the closing stock prices for the days over the presidents' terms. From these datasets, we can analyze and calculate the percent change of the stock prices of each of three market indexes, S&P 500, DOW and NASDAQ, during each of the various presidential terms. After taking the percent change of stock prices in each index, we can take the average of all three percent changes of each index to get an average percent change of stock prices during each presidential term.
#
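# In formula form, for one index over one term: percent change = (Close_end - Close_start) / Close_start, and a term's overall score is the mean of that quantity across the S&P 500, DOW, and NASDAQ. A minimal sketch with made-up closing prices (the real values come from the Yahoo data below):

# +
example_closes = {'sp': (17.0, 44.0), 'dow': (616.0, 679.0), 'nasdaq': (100.0, 130.0)}  # (start, end), made-up values
example_pct = {k: (end - start) / start for k, (start, end) in example_closes.items()}
example_avg = sum(example_pct.values()) / len(example_pct)
print(example_pct, example_avg)
# -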
# # Setup
# +
# Need to first install pandas_datareader using terminal command: pip install pandas_datareader
import pandas_datareader.data as web
import datetime
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import numpy as np
import pandas as pd
from scipy import stats
from scipy.stats import pearsonr, norm, ttest_ind
# -
# # Data Cleaning
# +
# Set start and end dates for obtained stock data
# Use pandas_datareader to obtain S&P 500 and NASDAQ datasets from Yahoo Finance
start = datetime.datetime(1900,1,1)
end = datetime.datetime(2020,9,1)
# S&P 500, ticker = ^GSPC
sp = web.DataReader('^GSPC', 'yahoo', start, end)
# NASDAQ, ticker = ^IXIC
nasdaq = web.DataReader('^IXIC', 'yahoo', start, end)
# dowjones, ticker = ^DJIA
dow = web.DataReader('DJIA', 'yahoo', start, end)
# President dataset from Kaggle
pres = pd.read_csv('https://gist.githubusercontent.com/namuol/2657233/raw/74135b2637e624848c163759be9cd14ae33f5153/presidents.csv')
# +
# Cleaned Left office date for Barack Obama from Incumbent to 20/01/2017
pres.at[43, 'Left office '] = '20/01/2017'
# Added new entry for President Donald J. Trump
pres = pres.append({'Presidency ':45, 'President ':'Donald J. Trump', 'Took office ':'20/01/2017', 'Left office ':'20/01/2021', 'Party ':'Republican'}, ignore_index=True)
# Cleaning Presidents dataset to change date format so that it matches stock market date format
itere = 0
for i in pres['Took office ']:
pres.loc[itere, ('Wikipedia Entry')] = datetime.datetime.strptime(i,'%d/%m/%Y').date()
itere +=1
# +
# Set date as Presidents dataset index to match stock market index and
pres = pres.rename({"Wikipedia Entry":"Date"}, axis=1).set_index("Date").drop(['Portrait', 'Thumbnail', 'Home State'], axis=1)
#Rename columns
pres.columns = ['Presidency', 'President', 'Start', 'End', 'Party']
#Clean up Party column
pres.loc[pres['Party'].str.contains('Republican'), ('Party')] = 'Republican'
pres.loc[pres['Party'].str.contains('Democratic'), ('Party')] = 'Democratic'
# -
#Loop through each president to calculate change in S&P500 stock during term
for index, row in pres.iterrows():
#Get stock data during duration of term
spslice = sp[(sp.index > pd.to_datetime(row['Start'])) & (sp.index < pd.to_datetime(row['End']))]
if not spslice.empty:
#Convert date to integer
times = spslice.index.astype(int)/(10**9)/604800
#Use stats.lingress to get linear regression slope - change in stock/time
slope, intercept, rvalue, pvalue, stderr = stats.linregress(times, spslice.Close)
#Set slope in new column
pres.loc[index, ('splinregress')] = slope/spslice.Close.iloc[0]
#Calculate percent change using ((final - initial)/initial
slope3 = (spslice.Close.iloc[-1] - spslice.Close.iloc[0])
#Set slope in new column
pres.loc[index, ('sppercentchange')] = slope3/spslice.Close.iloc[0]
# # Data Analysis & Results
# +
#Stocks graph
sp['Close'].plot(label='Open Price', figsize=(15,7))
#Color for each background
for index, row in pres.iterrows():
if (row.Party == 'Republican'):
color = 'r'
else:
color = 'b'
plt.axvspan(pd.to_datetime(row['Start']), pd.to_datetime(row['End']), facecolor=color, alpha=0.2)
plt.title("Stock Name: %s" % 'S&P 500')
plt.ylabel('Stock Price')
rlabel = mpatches.Patch(color='red',alpha=0.2, label="Republican President")
dlabel = mpatches.Patch(color='blue',alpha=0.2, label="Democratic President")
plt.legend(handles=[rlabel,dlabel])
plt.show()
# -
# This graph is showing the stock price of S&P 500 across presidential terms. Republican terms are indicated by red and democratic terms are represented by blue. This visualization gives us a general idea of the overall price change during each term, however given it is based off of stock price, we will use points from this plot to help calculate the percentage change in each term.
#
# +
#Slopes bar graph
color = pres.dropna()['Party']
color = color.replace('Republican', 'red')
color = color.replace('Democratic', 'blue')
pres.dropna().plot.bar(x='President', y='sppercentchange', figsize=(15,7), color=color, alpha=0.2, rot=60, legend=False)
plt.title('S&P 500 % Change By President')
plt.ylabel('% Change in Stock Price During Term')
rlabel = mpatches.Patch(color='red',alpha=0.2, label="Republican President")
dlabel = mpatches.Patch(color='blue',alpha=0.2, label="Democratic President")
plt.legend(handles=[rlabel,dlabel])
plt.show()
# -
# This shows us the percent change in stock price during each presidential term, and whether the change was positive or negative as well as the magnitude of the change. This will help in our analysis to compare the percent change to see if we can determine correlation between significant changes in the stock prices and presidential terms.
#Slopes bar graph of each party
sum = pres.dropna().groupby(['Party']).sum()
sum['sppercentchange'].plot.bar(figsize=(15,7), color=['b', 'r'], alpha=0.2, rot=0)
plt.title('S&P 500 % Change By Presidential Party')
plt.ylabel('% Change in Stock Price')
plt.show()
# This graph is the summation of percent changes between Democratic and Republican presidential terms, and helps us to see any early correlations we might be able to conclude. From this graph, we can see that Democrats seem to have a higher overall stock market increase than Republicans, proving our initial hypothesis wrong. We can continue our analysis on the other stock market indexes to compare results.
# +
#Stocks graph
dow['Close'].plot(label='Open Price', figsize=(15,7))
#Color for each background
for index, row in pres.iterrows():
if (row.Party == 'Republican'):
color = 'r'
else:
color = 'b'
plt.axvspan(pd.to_datetime(row['Start']), pd.to_datetime(row['End']), facecolor=color, alpha=0.2)
plt.title("Stock Name: %s" % 'Dow Jones Industrial Average')
plt.ylabel('Stock Price')
rlabel = mpatches.Patch(color='red',alpha=0.2, label="Republican President")
dlabel = mpatches.Patch(color='blue',alpha=0.2, label="Democratic President")
plt.legend(handles=[rlabel,dlabel])
plt.show()
# -
# This graph displays Dow Jones stock price over time with the colored background separating presidential terms. We can see similar stock trends to the S&P 500 graph. We wanted to use multiple stock market indices to back our claims and avoid any bias that might occur when using just one measure for stock market growth.
#Loop through each president to calculate change in DOW stock during term
for index, row in pres.iterrows():
#Get stock data during duration of term
dowslice = dow[(dow.index > pd.to_datetime(row['Start'])) & (dow.index < pd.to_datetime(row['End']))]
if not dowslice.empty:
#Convert date to integer
dtimes = dowslice.index.astype(int)/(10**9)/604800
#Use stats.lingress to get linear regression slope - change in stock/time
dslope, dintercept, drvalue, dpvalue, dstderr = stats.linregress(dtimes, dowslice.Close)
#Set slope in new column
pres.loc[index, ('dowlinregress')] = dslope/dowslice.Close.iloc[0]
#Calculate percent change using ((final - initial)/initial
dslope3 = (dowslice.Close.iloc[-1] - dowslice.Close.iloc[0])
#Set slope in new column
pres.loc[index, ('dowpercentchange')] = dslope3/dowslice.Close.iloc[0]
# +
#Slopes bar graph
color = pres.dropna()['Party']
color = color.replace('Republican', 'red')
color = color.replace('Democratic', 'blue')
pres.dropna().plot.bar(x='President', y='dowpercentchange', figsize=(15,7), color=color, alpha=0.2, rot=60, legend=False)
plt.title('DOW % Change By President')
plt.ylabel('% Change in Stock Price During Term')
rlabel = mpatches.Patch(color='red',alpha=0.2, label="Republican President")
dlabel = mpatches.Patch(color='blue',alpha=0.2, label="Democratic President")
plt.legend(handles=[rlabel,dlabel])
plt.show()
# -
# This also shows us the percent change in stock price during each term, but instead of S&P 500, we used DOW stock market index. We can see slightly different values regarding specific stock price changes, but similar trends as the S&P 500 graphs.
#Slopes bar graph of each party
sum = pres.dropna().groupby(['Party']).sum()
sum['dowpercentchange'].plot.bar(figsize=(15,7), color=['b', 'r'], alpha=0.2, rot=0)
plt.title('DOW % Change By Presidential Party')
plt.ylabel('% Change in Stock Price')
plt.show()
# This is the same graph as the one before, just using the Dow Jones stock market index. From this early visualization, it seems that it follows the same pattern as S&P 500. The fact that the two are similar allows us to be more confident that our datasets are valid when cross-referencing the two stock market indices.
# +
#Stocks graph
nasdaq['Close'].plot(label='Open Price', figsize=(15,7))
#Color for each background
for index, row in pres.iterrows():
if (row.Party == 'Republican'):
color = 'r'
else:
color = 'b'
plt.axvspan(pd.to_datetime(row['Start']), pd.to_datetime(row['End']), facecolor=color, alpha=0.2)
plt.title("Stock Name: %s" % 'Dow Jones Industrial Average')
plt.ylabel('Stock Price')
rlabel = mpatches.Patch(color='red',alpha=0.2, label="Republican President")
dlabel = mpatches.Patch(color='blue',alpha=0.2, label="Democratic President")
plt.legend(handles=[rlabel,dlabel])
plt.show()
# -
# This graph shows the stock price of NASDAQ across presidential terms, similar to the graphs of S&P 500 and DOW. However, the NASDAQ stock data begins at a later year of 1971 compared to S&P 500 and DOW.
#Loop through each president to calculate change in NASDAQ stock during term
for index, row in pres.iterrows():
#Get stock data during duration of term
nasdaqslice = nasdaq[(nasdaq.index > pd.to_datetime(row['Start'])) & (nasdaq.index < pd.to_datetime(row['End']))]
if not nasdaqslice.empty:
#Convert date to integer
times = nasdaqslice.index.astype(int)/(10**9)/604800
#Use stats.lingress to get linear regression slope - change in stock/time
slope, intercept, rvalue, pvalue, stderr = stats.linregress(times, nasdaqslice.Close)
#Set slope in new column
pres.loc[index, ('nasdaqlinregress')] = slope/nasdaqslice.Close.iloc[0]
#Calculate percent change using ((final - initial)/initial
slope3 = (nasdaqslice.Close.iloc[-1] - nasdaqslice.Close.iloc[0])
#Set slope in new column
pres.loc[index, ('nasdaqpercentchange')] = slope3/nasdaqslice.Close.iloc[0]
# +
#Slopes bar graph
color = pres.dropna()['Party']
color = color.replace('Republican', 'red')
color = color.replace('Democratic', 'blue')
pres.dropna().plot.bar(x='President', y='nasdaqpercentchange', figsize=(15,7), color=color, alpha=0.2, rot=60, legend=False)
plt.title('NASDAQ % Change By President')
plt.ylabel('% Change in Stock Price During Term')
rlabel = mpatches.Patch(color='red',alpha=0.2, label="Republican President")
dlabel = mpatches.Patch(color='blue',alpha=0.2, label="Democratic President")
plt.legend(handles=[rlabel,dlabel])
plt.show()
# -
# Similar to the S&P 500 and DOW bar graphs, we can see the percent change in NASDAQ stock price from president to president. Given the later NASDAQ data start date, there are less presidential terms shown in the graph.
#Slopes bar graph of each party
sum = pres.dropna().groupby(['Party']).sum()
sum['nasdaqpercentchange'].plot.bar(figsize=(15,7), color=['b', 'r'], alpha=0.2, rot=0)
plt.title('NASDAQ % Change By Presidential Party')
plt.ylabel('% Change in Stock Price')
plt.show()
# We can observe a significant difference in total percent change between the Democratic and Republican party, with similar differences in the S&P 500 and DOW graphs. We can now see a clear pattern with stock prices having a more significant growth during Democratic presidential terms compared to Republican presidential terms for all three stock market indexes.
# +
#Average percent change across all three stock market indexes
pres['avgpercentchange'] = pres[['sppercentchange', 'dowpercentchange', 'nasdaqpercentchange']].mean(axis=1)
box=pres.dropna(subset=['avgpercentchange']).boxplot(by='Party', column='avgpercentchange', figsize=(15,7))
plt.title('Average % Change By Presidential Party')
plt.ylabel('Average % Change in Stock Price')
plt.suptitle('')
plt.show()
# -
# Given the three stock market indexes data, we can take the average of the percent change of stock prices for each president. We can then create a boxplot with the average percent change in stock prices shown above. We can observe that although the median average percent change of the Republican party is higher than the median of the Democratic party, the Democratic party has a higher average of average percent change in stock price. This observation supports the idea that the stock markets tends to be more successful during a Democratic presidential term, compared to a Republican presidential term.
# +
pres.groupby(['Party'])['avgpercentchange'].mean()
group1 = pres.where(pres.Party=='Republican').dropna()['avgpercentchange']
group2 = pres.where(pres.Party=='Democratic').dropna()['avgpercentchange']
ttest_ind(group1, group2)
# -
# To analyze the data that we have collected regarding the stock market indexes and Presidential parties over time, we can use a t-test to determine if there is a significant difference between the average percent changes of the stock prices and the two presidential parties. We can define our null hypothesis to be that the average percent change in the stock market is not significantly influenced by the political party of the U.S. president. Our alternative hypothesis would then be that the average percent change in the stock market would be significantly influenced by the political party of the U.S. president. After running the t-test, we calculated a p-value of 0.0557, which unfortunately means that although we were close, we cannot reject the null hypothesis as our p-value was not <= 0.05.
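# As a quick illustrative check (a minimal sketch reusing `group1`, `group2`, and `ttest_ind` from the cell above), the reject / fail-to-reject decision can be made explicit in code:
# +
#Explicit decision rule for the t-test above
alpha = 0.05
tstat, pvalue = ttest_ind(group1, group2)
if pvalue <= alpha:
    print("p = %.4f <= %.2f: reject the null hypothesis" % (pvalue, alpha))
else:
    print("p = %.4f > %.2f: fail to reject the null hypothesis" % (pvalue, alpha))
# -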
# # Ethics & Privacy
# Since our dataset consists of mostly public stock data and data on presidential policies, there should be no major concerns over people’s ethics and privacy. To ensure the accuracy of the data of each stock market index, we will gather and collect stock data from various sites and sources and cross-reference the data of each stock market index against one another to prevent any inaccuracies and bias made from the sources. We have also considered the issue of inflation when collecting the stock market data, but because we are analyzing and comparing the growth of the stock market during presidential terms, which are at most four years, we believe that inflation will be negligible for our research. Regarding the data of the start and end date of each presidential term, we will reduce inaccuracies and bias by also cross-referencing the data.
#
# Due to the fact that all of our data sets consist of public stock data and whether a president was part of the Republican or Democratic party, there are not many privacy issues with our data. Because the data comes from the stock market, or is about whether the U.S. president was of the Republican or Democratic party, which they displayed publicly, we do not have to worry about the privacy of the data, as it does not contain any information that could be considered confidential.
# However, given the nature of the datasets and how we are evaluating and analyzing them, there are questions when it comes to ethics:
#
# **1. How can we guarantee the legitimacy of the stock market data we are using?**
#
# One issue that has come up is proving the legitimacy of the stock market data we are using for S&P 500, Dow Jones, and NASDAQ data. In order to do so, we cross-referenced the particular dataset we are using in our analysis from Yahoo with other datasets, such as stock market data from Datahub. Since there are many datasets out there of stock market index data, we just were able to compare the line graphs of the data to ensure that they more or less line up, and there aren’t any huge anomalies between the visualizations.
#
# **2. How will you account for inflation when analyzing the stock market across long periods of time?**
#
# Given that the value of the U.S. dollar changes in value throughout history, this is a point of concern when comparing stock market values from the 1940s to values from the 2000s. However, the way that we are choosing to analyze and quantify the change in the stock market is by analyzing the stock values by a presidential term. Terms are 4-year spans, and it is safe to assume that the value of the U.S. dollar does not change significantly within a 4-year span. We also choose to calculate the percent change in the stock market from start to end date of a president’s term and will be comparing the percent change rather than a monetary value. By doing so, we can avoid unfair comparisons between inflated differences in stock prices.
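# As a small made-up numerical example (hypothetical index values, not taken from our datasets), comparing percent change rather than dollar change keeps terms from different eras comparable:
# +
#Hypothetical start/end index values for two terms in different eras
start_old, end_old = 20.0, 25.0
start_new, end_new = 2000.0, 2500.0
print((end_old - start_old) / start_old)   # 0.25
print((end_new - start_new) / start_new)   # 0.25 -> same relative growth despite very different price levels
# -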
#
# **3. What about economic events in U.S. history? How will these affect analysis?**
#
# It is true that certain events will affect the U.S. economy and change stock market values. However, the point of our question and hypothesis is to see how a particular political party affects the growth of the stock market. Our group has decided that ups and downs that affect the stock market are certain to happen, and that the president and their policies determine how each event is handled. For example, with the Coronavirus pandemic we saw a large drop in the stock market in March 2020; however, after many months the values returned to their previous levels. In this case, the president (Republican) was able to make changes and handle the event in a way that brought the stock market back to where it was before the pandemic.
# # Conclusion & Discussion
# Our research utilized nearly a century’s worth of data from multiple United States stock market indexes: the S&P 500, the Dow Jones, and the NASDAQ, in order to answer our research question - is there a significant change in the growth of the major U.S. stock market indexes caused by the political party of a United States president? After our data analysis and the visualizations produced from our data, we can initially interpret from the visualizations that Democratic presidents cause the stock market to perform better than Republican presidents. However, inferring conclusions solely from the visualizations is not enough to properly answer our research question. To come to a stronger conclusion, we performed a t-test, a method to determine if there is a significant difference between the means of two groups. When performing the t-test on the average percent changes of the stock market contingent on whether the presidential party is Republican or Democratic, we calculated a p-value of ~0.056. Given that we want to be 95% confident to declare a significant difference, we get an alpha value of 0.05. When comparing our p-value to our alpha, the p-value is just slightly higher than our alpha. With this, we fail to reject the null hypothesis. Although our visualizations indicate that Democratic presidents cause the stock market to perform better than when a Republican president is in office, we do not have enough statistical evidence and data to render this indication a significant conclusion in a long-term sense. However, based on the visualizations alone, it should be safe to claim that Democratic presidents do improve stock market performance in a short-term sense. Moreover, in the future, with a larger set of data, we believe it will be plausible to treat our visualized indication as a significant conclusion, as the p-value would likely become smaller.
#
# The fact that we are unable to prove statistical significance with our p-value is likely due to our sample size. Given that we only had stock market data for only roughly 20 presidential terms, this is a small sample size and explains the larger p-value. Because of the nature of our data and how our sample size is relatively small, we cannot confidently conclude that there is a significant difference in stock market growth due to the presidential party. The sample size of presidential terms proved to be a limitation in our analysis and did not serve as a large enough sample size to be confident in our conclusions.
#
# Researchers in the future can also expand on this by taking into account the political party of the House and Senate in relation to the president’s political party. With more elements and data to consider, it is tenable that researchers can discover a more precise measure of whether a political party impacts the stock market and, if so, to what extent. These findings have the potential to have significant implications for how future elections will play out and change the nature in which politics and economics interact.
#
# # Team Contributions
# - Armin - Initial visualizations, dataset cleaning, data visualizations, data analysis, Hypothesis, Research Question
# - Brendan - Data analysis, Final project write-up (Ethics & Privacy, Conclusion & Discussion, Hypothesis), Video recording
# - Jaikishan - Data analysis, Final project write-up (Background, Conclusion & Discussion, Hypothesis)
# - Jason - Data visualizations, statistical analysis calculations, data analysis, Research Question
# - Yescenia - Research question, Hypothesis, Powerpoint Presentation, Team Organization and planning
| 31,645 |
/HeroesOfPymoli/HeroesOfPymoli_Ganter.ipynb
|
6a765fc70ecf40b05da97f7f79aafa473bd9ea27
|
[] |
no_license
|
mikeganter/pandas-challenge
|
https://github.com/mikeganter/pandas-challenge
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 44,281 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Introduction
# +
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from utilidades.reducir_uso_memoria import reduce_mem_usage
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from xgboost import XGBRegressor
train = pd.read_csv("data/train.csv")
test = pd.read_csv("data/test.csv")
# -
pd.set_option('display.max_columns', 1000) # show all columns
pd.set_option('display.max_rows', 100) # show at most 100 rows
# # 2. Data Preprocessing
# ### Drop id, titulo, descripcion, direccion and idzona, as they are not considered especially relevant
# +
dfinicial = train
dftest = test
dfinicial = dfinicial.drop( columns = ['id', 'titulo', 'descripcion', 'direccion', 'idzona'] )
dftest = dftest.drop( columns = ['id', 'titulo', 'descripcion', 'direccion', 'idzona'] )
# -
# ### Drop lat and lng because of the large number of NaNs, handling location granularity only at the ciudad (city) and provincia (province) level
dfinicial = dfinicial.drop( columns = ['lat', 'lng'] )
dftest = dftest.drop( columns = ['lat', 'lng'] )
# ### Fill banos, garages and habitaciones with the mode
# +
def completarAlgunosNan2(df):
for column in ["banos", "garages", "habitaciones"]:
        # x.mode() returns a Series, so take its first value; guard against all-NaN groups that have no mode
        df[column] = df.groupby(['provincia', 'tipodepropiedad'])[column].apply(lambda x: x.fillna(x.mode()[0]) if not x.mode().empty else x)
        df[column] = df[column].fillna(df[column].mode()[0]) # In case a whole group was NaN and there was no mode
return df
dfinicial = completarAlgunosNan2(dfinicial)
dftest = completarAlgunosNan2(dftest)
# -
# ### Fill NaNs in antiguedad, metroscubiertos and metrostotales with the median, since these attributes have an (approximately) log-normal distribution
# +
def completarAlgunosNan(df):
for column in ['antiguedad', 'metroscubiertos', 'metrostotales']:
df[column] = df.groupby(['provincia', 'tipodepropiedad'])[column].apply(lambda x: x.fillna(x.median()))
df[column] = df[column].fillna(df[column].median())
return df
dfinicial = completarAlgunosNan(dfinicial)
dftest = completarAlgunosNan(dftest)
# -
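# A quick illustration (a minimal sketch on synthetic data) of why the median is used here: for an (approximately) log-normal sample, the mean is pulled upwards by the long right tail, while the median stays close to the typical value.
# +
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=3, sigma=1, size=10_000)
print(np.mean(sample), np.median(sample))  # the mean is noticeably larger than the median
# -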
# ### Apply one-hot encoding to get rid of provincia, ciudad and tipodepropiedad as raw columns
# +
poblaciones = pd.read_csv("data/poblacion_de_cada_ciudad.csv")
d_poblaciones = poblaciones.set_index("ciudad")["poblacion"].to_dict()
dfinicial["ciudad"].fillna(dfinicial['ciudad'].mode()[0], inplace=True)
dfinicial["pob_ciu"] = dfinicial.apply(lambda x: d_poblaciones[x["ciudad"]], axis=1)
# +
# Find the possible categories
provincias_posibles = train["provincia"].dropna().unique()
props_posibles = train["tipodepropiedad"].dropna().unique()
ciudades_posibles = train["ciudad"].dropna().unique()
# Convert tipodepropiedad, provincia and ciudad to categorical values
dfinicial["tipodepropiedad"] = pd.Categorical(train["tipodepropiedad"], categories = props_posibles)
dfinicial["provincia"] = pd.Categorical(train["provincia"], categories = provincias_posibles)
dfinicial["ciudad"] = pd.Categorical(train["ciudad"], categories = ciudades_posibles)
dftest["tipodepropiedad"] = pd.Categorical(test["tipodepropiedad"], categories=props_posibles)
dftest["provincia"] = pd.Categorical(test["provincia"], categories=provincias_posibles)
dftest["ciudad"] = pd.Categorical(test["ciudad"], categories = ciudades_posibles)
# Fill NaNs in provincia, tipodepropiedad and ciudad, then apply one-hot encoding
datos_categoricos = dfinicial[["tipodepropiedad", "provincia", "ciudad"]].copy()
datos_categoricos['tipodepropiedad'] = datos_categoricos.groupby(['provincia'])['tipodepropiedad'].apply(lambda x: x.fillna(x.mode()[0]) if not x.mode().empty else x)
datos_categoricos["provincia"] = datos_categoricos["provincia"].fillna(datos_categoricos["provincia"].mode()[0])
datos_categoricos["ciudad"] = datos_categoricos.groupby(['provincia'])['ciudad'].apply(lambda x: x.fillna(x.mode()[0]) if not x.mode().empty else x)
datos_categoricos = pd.get_dummies(datos_categoricos)
dfinicial = pd.concat([datos_categoricos, dfinicial], axis=1)
datos_categoricos = dftest[["tipodepropiedad", "provincia", "ciudad"]].copy()
datos_categoricos['tipodepropiedad'] = datos_categoricos.groupby(['provincia'])['tipodepropiedad'].apply(lambda x: x.fillna(x.mode()[0]) if not x.mode().empty else x)
datos_categoricos["provincia"] = datos_categoricos["provincia"].fillna(datos_categoricos["provincia"].mode()[0])
datos_categoricos["ciudad"] = datos_categoricos.groupby(['provincia'])['ciudad'].apply(lambda x: x.fillna(x.mode()[0]) if not x.mode().empty else x)
datos_categoricos = pd.get_dummies(datos_categoricos)
dftest = pd.concat([datos_categoricos, dftest], axis=1)
# Drop tipodepropiedad, provincia and ciudad
dfinicial = dfinicial.drop( columns = ['tipodepropiedad', 'provincia', 'ciudad'] )
dftest = dftest.drop( columns = ['tipodepropiedad', 'provincia', 'ciudad'] )
# -
# ### Convert fecha to datetime and add day, month and year
# +
dfinicial['fecha'] = dfinicial['fecha'].astype('datetime64')
dftest['fecha'] = dftest['fecha'].astype('datetime64')
dfinicial['dia'] = dfinicial['fecha'].dt.day
dfinicial['mes'] = dfinicial['fecha'].dt.month
dfinicial['anio'] = dfinicial['fecha'].dt.year
dfinicial = dfinicial.drop( columns = ['fecha'] )
dftest['dia'] = dftest['fecha'].dt.day
dftest['mes'] = dftest['fecha'].dt.month
dftest['anio'] = dftest['fecha'].dt.year
dftest = dftest.drop( columns = ['fecha'] )
# -
# ### Adjust data types now that there are no NaNs
# +
def parseoNan(df):
df['gimnasio'] = df['gimnasio'].astype(np.uint8)
df['usosmultiples'] = df['usosmultiples'].astype(np.uint8)
df['piscina'] = df['piscina'].astype(np.uint8)
df['escuelascercanas'] = df['escuelascercanas'].astype(np.uint8)
df['centroscomercialescercanos'] = df['centroscomercialescercanos'].astype(np.uint8)
df['garages'] = df['garages'].astype(np.uint8)
df['antiguedad'] = df['antiguedad'].astype(np.uint8)
df['banos'] = df['banos'].astype(np.uint8)
df['habitaciones'] = df['habitaciones'].astype(np.uint8)
df['metroscubiertos'] = df['metroscubiertos'].astype(np.uint16)
df['metrostotales'] = df['metrostotales'].astype(np.uint16)
df['dia'] = df['dia'].astype(np.uint8)
df['mes'] = df['mes'].astype(np.uint8)
df['anio'] = df['anio'].astype(np.uint16)
parseoNan(dfinicial)
parseoNan(dftest)
dfinicial['precio'] = dfinicial['precio'].astype(np.uint32)
# -
dfinicial = reduce_mem_usage(dfinicial)
dftest = reduce_mem_usage(dftest)
X, y = dfinicial.drop(["precio"], axis=1),dfinicial["precio"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
reg = XGBRegressor(max_depth=15, min_child_weight=0.01, n_jobs=-1, objective ='reg:squarederror')
y_train = np.log(y_train)
reg.fit(X_train, y_train)
pred = reg.predict(X_test)
pred = np.exp(pred)
mean_absolute_error(y_test, pred)
fig, ax = plt.subplots(figsize=(10,10))
ax.scatter(y_test, pred)
ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'k--', lw=4)
ax.set_xlabel("Valores reales", fontsize=16)
ax.set_ylabel("Valores predichos", fontsize=16);
#
# +
# GroupBy Screen Name
user_df = purchase_data_df.groupby(["SN"])
# purchase count by user
user_purchase_count = user_df["Item ID"].count()
# average purchase price by user
user_avg_price = user_df["Price"].mean()
# total purchase value by user
user_total_purchase = user_df["Price"].sum()
# summary DataFrame
user_spending_df = pd.DataFrame({"Purchase Count": user_purchase_count,
"Average Purchase Price": user_avg_price,
"Total Purchase Value": user_total_purchase})
# sort by purchase count
user_spending_df = user_spending_df.sort_values(["Total Purchase Value"], ascending=[False])
user_spending_df
# clean up output to add $ signs
user_spending_df["Average Purchase Price"] = user_spending_df["Average Purchase Price"].map("${:.2f}".format)
user_spending_df["Total Purchase Value"] = user_spending_df["Total Purchase Value"].map("${:.2f}".format)
# display top 5 spenders
user_spending_df.head(5)
# -
# ## Most Popular Items
# * Retrieve the Item ID, Item Name, and Item Price columns
#
#
# * Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value
#
#
# * Create a summary data frame to hold the results
#
#
# * Sort the purchase count column in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the summary data frame
#
#
# +
# pull out columns Item ID, Item Name, & Price
reduced_df = purchase_data_df.loc[:, ["Item ID", "Item Name", "Price"]]
# group by item
reduced_group_df = reduced_df.groupby(["Item ID", "Item Name"])
# purchase count by item
item_purchase = reduced_group_df["Item ID"].size()
# average price by item
item_avg_price = reduced_group_df["Price"].mean()
# total value by item
item_total_value = reduced_group_df["Price"].sum()
# summary DataFrame
item_summary_df = pd.DataFrame({"Purchase Count": item_purchase,
"Item Price": item_avg_price,
"Total Purchase Value": item_total_value})
# create copy of DataFrame for final section, otherwise we can't sort formatted values
item_summary_copy_df = item_summary_df
# sort by purchase count
item_summary_df = item_summary_df.sort_values(["Purchase Count"], ascending=[False])
# format to include dollar sign
item_summary_df["Item Price"] = item_summary_df["Item Price"].astype(float).map("${:,.2f}".format)
item_summary_df["Total Purchase Value"] = item_summary_df["Total Purchase Value"].astype(float).map("${:,.2f}".format)
# display top 5 items
item_summary_df.head(5)
# -
# ## Most Profitable Items
# * Sort the above table by total purchase value in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the data frame
#
#
# +
# sort previous summary dataframe by Total Purchase Value descending
item_summary_copy_df = item_summary_copy_df.sort_values(["Total Purchase Value"], ascending=False)
# formatting to include dollar signs
# format to include dollar sign
item_summary_copy_df["Item Price"] = item_summary_copy_df["Item Price"].astype(float).map("${:,.2f}".format)
item_summary_copy_df["Total Purchase Value"] = item_summary_copy_df["Total Purchase Value"].astype(float).map("${:,.2f}".format)
# display top 5 items
item_summary_copy_df.head(5)
# -
| 10,728 |
/CSV.ipynb
|
4d302f82205dd841e6e39af0b5d54102ee1a2f81
|
[] |
no_license
|
ranjanpandey984/pythondemo
|
https://github.com/ranjanpandey984/pythondemo
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 42,628 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
b=open('data.csv','x')
b.close()
b=open('data.csv','r')
print(b.read())
b.close()
#PANDAS use:
import pandas as pd
df = pd.read_csv('data.csv')
df
# Reading only a limited number of rows (here 2) with nrows
import pandas as pd
df = pd.read_csv('data.csv',nrows=2)
df
# .head() always prints the first 5 rows if no parameter is given
import pandas as pd
df = pd.read_csv('data.csv')
df.head()
# .head() prints the first 5 rows by default; pass a number to print that many rows
import pandas as pd
df = pd.read_csv('data.csv')
df.head(3)
# Making Sn the index
import pandas as pd
df = pd.read_csv('data.csv',index_col='Sn')
df.head()
# Last 5 rows with the .tail() function
import pandas as pd
df = pd.read_csv('data.csv',index_col='Sn')
df.tail()
# Last 3 rows with the .tail() function
import pandas as pd
df = pd.read_csv('data.csv',index_col='Sn')
df.tail(3)
# Only printing required columns
import pandas as pd
df = pd.read_csv('data.csv',index_col='Sn',usecols=['Sn','Name','Phone'])
df
# Filtering data in rows
import pandas as pd
df = pd.read_csv('data.csv')
df.iloc[2:5]
# Filtering data in rows and Columns
# using iloc
import pandas as pd
df = pd.read_csv('data.csv',index_col="Name")
df.iloc[2:5,0:3]
# Using loc
import pandas as pd
df = pd.read_csv('data.csv',index_col="Sn")
df.loc[2:4]
# Using loc
import pandas as pd
df = pd.read_csv('data.csv',index_col="Name")
df.loc["Ram":"Ranjan"]
import pandas as pd
df = pd.read_csv('data.csv')
df['Age']>25
import pandas as pd
df = pd.read_csv('data.csv')
df[df['Age']>25]
import pandas as pd
df = pd.read_csv('data.csv')
df[df['Address'] == "Gangabu"]
import pandas as pd
df = pd.read_csv('data.csv')
df[(df['Address'] == "Tokha") & (df['Age']>20)]
import pandas as pd
df = pd.read_csv('data.csv')
df[(df['Address'] == "Tokha") | (df['Age']>20)]
# +
import pandas as pd
df = pd.read_csv('data.csv',index_col='Sn',usecols=['Sn','Name','Age','Address','Phone'])
data = df[(df['Address'] == "Tokha") | (df['Age']>20)]
data.to_csv('data.csv')
# +
#
# To be continued
#
# -
# USING CSV package instead of pandas
import csv
fields=['Kamal','23','Baluwatar']
with open('books.csv','a') as f:
x = csv.writer(f)
x.writerow(fields)
import csv
fields=[['Kamal','23','Baluwatar'],['Ranjan','22','Tokha']]
with open('books.csv','a') as f:
x = csv.writer(f)
for i in fields:
x.writerow(i)
# using data frame
import pandas as pd
data=({'Names': ["Ram","Shyam","Hari"],
'Age':[23,13,12], 'Address': ['Tokha','Thamel','Pokhara']})
df=pd.DataFrame(data)
df.to_csv('data.csv')
# Dict Reader
import csv
with open ('data.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
print(dict(row))
# +
# DictWriter
# -
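# A minimal DictWriter sketch (assuming the same Name/Age/Address fields used earlier; 'people.csv' is just an illustrative output file name)
import csv
with open('people.csv', 'w', newline='') as f:
    fieldnames = ['Name', 'Age', 'Address']
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()  # writes the header row from fieldnames
    writer.writerow({'Name': 'Ram', 'Age': 23, 'Address': 'Tokha'})
    writer.writerow({'Name': 'Shyam', 'Age': 13, 'Address': 'Thamel'})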
| 2,967 |
/video1_2/linear-regression-2.ipynb
|
52ded40479053e7fd6380f666c7e4dce68ecc07f
|
[
"MIT"
] |
permissive
|
PacktPublishing/Mastering-Scikit-learn-with-Python-3.x
|
https://github.com/PacktPublishing/Mastering-Scikit-learn-with-Python-3.x
| 2 | 4 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 20,714 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error
# %matplotlib inline
# +
# Load the Boston House Pricing dataset
dataset = load_boston()
# Create a dataframe from the data
df = pd.DataFrame(dataset.data)
# Replace column names with feature names
df.columns = dataset.feature_names
# Creating another column in the dataframe for the target prices
df['PRICE'] = dataset.target
df.head()
# -
# Let's take the number of rooms as our X values
x = df.RM
y = df.PRICE
print(x.shape, y.shape)
# +
X = np.expand_dims(x, axis=1)
X_train = X[:-20]
y_train = y[:-20]
X_test = X[-20:]
y_test = y[-20:]
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
# +
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regr.predict(X_test)
# The mean squared error
print('Mean squared error: %.2f' % mean_squared_error(y_test, y_pred))
# -
# Plot outputs
plt.scatter(X_test, y_test, marker='x', color='red')
plt.plot(X_test, y_pred, color='blue')
plt.xlabel('avg number of rooms per dwelling')
plt.ylabel('price')
plt.show()
| 1,615 |
/python/machine-learning/.ipynb_checkpoints/tensorflow-checkpoint.ipynb
|
0766b366269ed336d32a508a930199e7ada35e91
|
[] |
no_license
|
lizhiyong2000/tutorials
|
https://github.com/lizhiyong2000/tutorials
| 1 | 0 | null | 2021-09-22T18:54:50 | 2021-05-02T09:36:35 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 109,949 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
import keras
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# + pycharm={"is_executing": false, "name": "#%%\n"}
from keras.datasets import mnist
digits = mnist.load_data()
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# + pycharm={"is_executing": false, "name": "#%%\n"}
np.set_printoptions(linewidth=90, formatter={'all': lambda x:'{0}'.format(x)})
for row in train_images[0]:
print(row)
# + pycharm={"is_executing": false, "name": "#%%\n"}
plt.imshow(train_images[1], cmap='binary')
# + pycharm={"is_executing": false, "name": "#%%\n"}
def sigmoid(x):
return 1/(1+ np.exp(-x))
def relu(x):
return np.maximum(x, 0)
x = np.arange(-5., 5., 0.2)
plt.subplot(121)
plt.title('sigmoid')
plt.plot(x, sigmoid(x))
plt.subplot(122)
plt.title('relu')
plt.plot(x, relu(x))
# + pycharm={"name": "#%%\n"}
| 1,187 |
/Week_2_Part_1_Assignment.ipynb
|
fad35234ce578203de00047f8b8488ccadadf7e1
|
[] |
no_license
|
aashik1998/Python_Assignments
|
https://github.com/aashik1998/Python_Assignments
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 12,184 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="p0m9LbIveuis" colab_type="text"
# # Week 2 Part 1 Assignments
# + [markdown] id="eCOgOSARdtsX" colab_type="text"
# 1. Understand number and answer the following question.
# 1. Assign to 10 and 2 on two variables 'a' and 'b' and perform addition, subtraction, multiplication, division and modulo operation. You should properly print answer during each operation.
# + id="jrg4iBQ-epaY" colab_type="code" outputId="4b65d92f-7831-4a0f-a924-d864fb216917" colab={"base_uri": "https://localhost:8080/", "height": 35}
# 1.1 Type your answer here.
a,b=10,2
sum,diff,mul,div,mod=a+b,a-b,a*b,a/b,a%b
print (sum,diff,mul,div,mod)
# + [markdown] id="Pu-nFIeQe08e" colab_type="text"
# 2. Write different rules for naming a variable/identifier.
# + id="EWlJFI2le_Ev" colab_type="code" colab={}
# 2 Type your answer here
# The first character of the identifier must be a letter of the alphabet (upper or lowercase) or an underscore ('_').
# The rest of the identifier name can consist of letters (upper or lowercase), underscores ('_') or digits (0-9).
# Identifier names are case-sensitive. For example, myname and myName are not the same. Note the lowercase n in the former and the uppercase N in the latter.
# Examples of valid identifier names are i, __my_name, name_23 and a1b2_c3.
# Examples of invalid identifier names are 2things, this is spaced out and my-name.
# + [markdown] id="RBzlvoyBfC0a" colab_type="text"
# 3. Understand string and answer the following question.
# 1. Assign 2 string on two variable and perform concatenation.
# 2. Assign a string in a variable and print the reverse of the string.
# 3. Assign 'lower' to a variable and convert it to upper case.
# 4. Assign 'school of ai' into a variable and split it into individual words
# 5. Assign 'school of ai' into a variable and find the starting index of word 'ai'
# + id="NKgITA5IgGf9" colab_type="code" outputId="dff4ebc7-0528-4d39-a78e-5021ce5572f5" colab={"base_uri": "https://localhost:8080/", "height": 35}
# 3.1 Type your answer here
a,b="hello ","guys"
s=a+b
print(s)
# + id="AiLVTn-jgJLx" colab_type="code" outputId="16788a79-e13a-4c55-f654-eb851ec5652f" colab={"base_uri": "https://localhost:8080/", "height": 35}
# 3.2 Type your answer here
a="MalaYALAM"
b=list(reversed(a))
print(b)
# + id="HgCqxoczgJUo" colab_type="code" outputId="fd80af41-e4d1-4a96-8208-60fb80506bc1" colab={"base_uri": "https://localhost:8080/", "height": 35}
# 3.3 Type your answer here
s="hello"
b= s.upper()
print(b)
# + id="xdbj1jW1gv_s" colab_type="code" outputId="ecc8fd6f-395d-42a7-c4aa-a96585a40d52" colab={"base_uri": "https://localhost:8080/", "height": 35}
# 3.4 Type your answer here
a='school of ai'
print(a.split())
# + id="etzffkuEhgTP" colab_type="code" colab={}
# 3.5 Type your answer here
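# A minimal sketch of one possible answer, using str.find to get the starting index of 'ai'
a = 'school of ai'
print(a.find('ai'))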
# + [markdown] colab_type="text" id="ruHpADeRxOzD"
# 4. Understand list and answer the following question.
# 1. Create a list with values 10,20,30,40 and find its length and sum
# 2. Create a list with values 11,22,55. Pop the last element and store it in a variable called 'x'
# 3. Create a list with values 10,20,40 and insert 30 in index 2.
# 4. Create a list with values 'a','b','c' and remove 'b' from list
# 5. Create a list with values 5,'a','b','c',1,5. select values from index 1 to 3 and store it in a variable called 'alpha_list' using slicing.
# + colab_type="code" id="uZp1uPLZxOzU" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="c4216243-2584-4b85-a392-0d0dce0fb726"
# 4.1 Type your answer here
nums = [10,20,30,40]   # avoid shadowing the built-in name list
length, total = 0, 0
for x in nums:
    total = total + x
    length = length + 1
print("the length and the sum is:", length, total)
# + colab_type="code" id="-2OQGCp2xOzl" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="73a1c6da-91ce-4565-935a-b9bd95072849"
# 4.2 Type your answer here
my_list = [11,22,55]
x = my_list.pop()   # pop removes and returns the last element
print(x)
# + colab_type="code" id="RwXCwmxdxOzy" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="fa3de8de-d8af-4b1f-bc3b-09be0200455b"
# 4.3 Type your answer here
my_list = [10,20,40]
my_list.insert(2, 30)
print(my_list)
# + colab_type="code" id="p-7wB7S-xOz6" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="fd14590d-b29f-495f-c168-f3c524df0487"
# 4.4 Type your answer here
list=['a','b','c']
list.remove("b")
print(list)
# + colab_type="code" id="1HelPyBhxO0E" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="11bd5d42-2793-44eb-d668-f6d2560c87f7"
# 4.5 Type your answer here
my_list = [5,'a','b','c',1,5]
alpha_list = my_list[1:4]   # slice covering indexes 1 to 3
print(alpha_list)
# + [markdown] colab_type="text" id="UOLpFCOZzROl"
# 5. Understand dictionary and answer the following question.
# 1. Create a dict with key '1' and value 'odd', key '2' and value 'even'
# 2. Create a dict with key '1' and value 'odd', key '2' and value 'even' .Then replace 'even' of key 2 with 'EVEN'
# + colab_type="code" id="mbIBaszTzRO-" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="bdf9dc25-1e15-4001-c55d-8c60195b0b06"
# 5.1 Type your answer here
dict = {
"1":'odd',
"2":'even'
}
print(dict)
# + colab_type="code" id="uxux8opvzRPP" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="ed8d6da9-7551-4737-9e08-97177da69562"
# 5.2 Type your answer here
dict = {
"1":'odd',
"2":'even'
}
del dict["2"]
dict["2"]="EVEN"
print(dict)
| 5,587 |
/.ipynb_checkpoints/Fig3_region_seasonal-checkpoint.ipynb
|
cfd55f3f2a65d5848cf9f8aa2e84fc5006ccaa7a
|
[] |
no_license
|
youtongzheng/Zheng_2021_GRL_Climatology_CTRC
|
https://github.com/youtongzheng/Zheng_2021_GRL_Climatology_CTRC
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 405,717 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import xarray as xr
import datetime
from Regions import *
from utils import getsnd
from utils import pltsnd
f_season = xr.open_dataset("All_2014_NCEP_1stprcesd_season_averaged.nc")
# -
ds_SEI = getsnd(f_season, "SEI")
ds_SEP = getsnd(f_season, "SEP")
ds_SO = getsnd(f_season, "SO")
ds_SEA = getsnd(f_season, "SEA")
ds_NP = getsnd(f_season, "NP")
ds_NEP = getsnd(f_season, "NEP")
ds_NA = getsnd(f_season, "NA")
ds_NEA = getsnd(f_season, "NEA")
# +
myfontsize = 11
fig = plt.figure(figsize=(20/2.54, (20)/2.54), dpi = 300)
plt.tight_layout()
mycolspan = 3
myrowspan = 4
span = 20
ax1 = plt.subplot2grid((span, span), (0, 0), colspan=mycolspan, rowspan = myrowspan)
ax2 = plt.subplot2grid((span, span), (0, 4), colspan=mycolspan, rowspan = myrowspan)
ax3 = plt.subplot2grid((span, span), (0, 8), colspan=mycolspan, rowspan = myrowspan)
ax4 = plt.subplot2grid((span, span), (0, 12), colspan=mycolspan, rowspan = myrowspan)
ax5 = plt.subplot2grid((span, span), (14, 0), colspan=mycolspan, rowspan = myrowspan)
ax6 = plt.subplot2grid((span, span), (14, 4), colspan=mycolspan, rowspan = myrowspan)
ax7 = plt.subplot2grid((span, span), (14, 8), colspan=mycolspan, rowspan = myrowspan)
ax8 = plt.subplot2grid((span, span), (14, 12), colspan=mycolspan, rowspan = myrowspan)
axs = [ax1, ax2, ax3, ax4, ax5, ax6, ax7, ax8]
ax = plt.subplot2grid((span, span), (6, 1), colspan=11, rowspan = 5)
for lct in region:
varplt = -(f_season[lct].sel(var_names = 'SW_heat_dmean') + f_season[lct].sel(var_names = 'LW_cool'))
if lct[0] == 'N':
ax.plot(f_season.season, varplt,'.-', label = lct + ':' +
'%d' % -(varplt.sel(season = 'JJA').values - varplt.sel(season = 'DJF').values))
else:
ax.plot(f_season.season, varplt ,'+--', label = lct + ':' +
'%d' % -(varplt.sel(season = 'DJF').values - varplt.sel(season = 'JJA').values))
ax.set_ylabel('$\mathrm{\Delta F} (W m^{-2}$)', fontsize=myfontsize)
ax.set_xlabel("Season", fontsize=myfontsize)
ax.legend(bbox_to_anchor=(1.,0.5), loc="center left", fontsize=0.7*myfontsize)
ax.text(0.02, 0.9, '(a)', transform=ax.transAxes,
fontsize=myfontsize)
pltsnd(fig, ax1, ds_SEP, '(b) SEP', notick = False, legend = True, ylabel = True, xlabel = True)
pltsnd(fig, ax2, ds_SEA, '(c) SEA', xlabel = True)
pltsnd(fig, ax3, ds_SEI, '(d) SEI', xlabel = True)
pltsnd(fig, ax4, ds_SO, '(e) SO', xlabel = True)
pltsnd(fig, ax5, ds_NEP, '(f) NEP', notick = False, legend = True, xlabel = True, ylabel = True)
pltsnd(fig, ax6, ds_NEA, '(g) NEA', xlabel = True)
pltsnd(fig, ax7, ds_NP, '(h) NP', xlabel = True)
pltsnd(fig, ax8, ds_NA, '(i) NA', xlabel = True)
fig.savefig('Fig3_region_seasonal.png', dpi=fig.dpi, bbox_inches='tight')
| 3,058 |
/.ipynb_checkpoints/视觉实时多物体快速分割-checkpoint.ipynb
|
c04be4ab806049853f2c390c5b46775b3a495f09
|
[] |
no_license
|
Carmanhui/CNN-CCD------
|
https://github.com/Carmanhui/CNN-CCD------
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,548 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stanford Stats 191
#
# ## Introduction
#
# This is a re-creation of the Stanford Stats 191 course, using Python eco-system tools, instead of R. This is lecture "Diagnostics for simple linear regression" ( see https://web.stanford.edu/class/stats191/notebooks/Simple_diagnostics.html)
#
# We look at how we can assert that a linear model is appropriate (or not, as the case may be)
#
# ## Initial Notebook Setup
#
# ```watermark``` documents the Python and package environment, ```black``` is my chosen Python formatter
# %load_ext watermark
# %load_ext lab_black
# %matplotlib inline
# All imports go here.
# +
import pandas as pd
import numpy as np
import seaborn as sn
import math
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.formula.api import rlm
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import (
wls_prediction_std,
)
from statsmodels.stats.stattools import jarque_bera
from statsmodels.stats.diagnostic import het_white
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.diagnostic import het_goldfeldquandt
import os
# -
# ----
# ## Create Data
#
# To illustrate the basic concepts of linear regression, we create a dataset, where y depends upon x, and has added Gaussian noise.
# +
x = np.linspace(0, 20, 21)
y = 0.5 * x + 1 + np.random.normal(0, 1, 21)
df_dict = {'x': x, 'y': y}
data = pd.DataFrame(df_dict)
# -
# ----
# ## Analyse Data
# We perform a Ordinary Least Squares regression to get the line of best fit
res = ols('y ~ x', data=data).fit()
# Plot the data. We choose to use high-level matplotlib methods, at the expense of some plotting elegance
_ = plt.plot(x, y, 'o')
# Plot the data and line of best fit
_ = plt.plot(x, y, 'o')
_ = plt.plot(x, res.predict(), 'r-')
_ = plt.axhline(y.mean(), color='k')
# The graphs below illustrate the definitions of the various sums of squares; ```SST``` is the Total Sum of Squares (deviation of the actual data from their mean), ```SSE``` is the Error Sum of Squares (deviation of the predictions from the actual values), and ```SSR``` is the Regression Sum of Squares (deviation of the predictions from the mean)
# $$ SST = \sum_{i=1}^{n}(Y_i-\bar{Y})^2
# \\
# SSE = \sum_{i=1}^{n} (Y_i -\hat{Y_i})^{2}
# \\
# SSR = \sum_{i=1}^n (\bar{Y}-\hat{Y_i})^2 $$
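# As a quick numerical check (a minimal sketch using the synthetic fit above), the identity SST = SSE + SSR should hold up to floating-point error:
# +
SSE_chk = np.sum(res.resid ** 2)
SST_chk = np.sum((y - y.mean()) ** 2)
SSR_chk = np.sum((res.predict() - y.mean()) ** 2)
print(SST_chk, SSE_chk + SSR_chk)
# -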
_ = plt.plot(x, y, 'o')
_ = plt.plot(x, res.predict(), 'r-')
_ = plt.axhline(y.mean(), color='k')
for i in range(len(x)):
plt.plot([x[i], x[i]], [y[i], y.mean()], 'y-')
# end for
plt.title('Total Sum of Squares')
_ = plt.plot(x, y, 'o')
_ = plt.plot(x, res.predict(), 'r-')
_ = plt.plot(x, res.predict(), 'r+')
_ = plt.axhline(y.mean(), color='k')
for i in range(len(x)):
plt.plot([x[i], x[i]], [y[i], res.predict()[i]], 'y-')
# end for
plt.title('Error Sum of Squares')
_ = plt.plot(x, y, 'o')
_ = plt.plot(x, res.predict(), 'r-')
_ = plt.plot(x, res.predict(), 'r+')
_ = plt.axhline(y.mean(), color='k')
for i in range(len(x)):
plt.plot(
[x[i], x[i]], [y.mean(), res.predict()[i]], 'y-'
)
# end for
plt.title('Regression Sum of Squares')
# ----
# ## Read Data
#
# We now read a more realistic dataset, relating wage level to education
data = pd.read_csv('../data/wage.csv')
# ### Explore Data
data.head()
_ = plt.plot(data['education'], data['logwage'], 'o')
# In order to more clearly see where the data points lie, we add "jitter" so they don't overlap as much. We also adjust ```alpha```
jitter = 0.1
_ = plt.plot(
data['education']
+ np.random.normal(0, 1, len(data['education']))
* jitter,
data['logwage'],
'o',
alpha=0.2,
)
# ### Perform OLS Linear Best Fit
res = ols('logwage ~ education', data=data).fit()
# Show the line of best fit
jitter = 0.1
_ = plt.plot(
data['education']
+ np.random.normal(0, 1, len(data['education']))
* jitter,
data['logwage'],
'o',
alpha=0.2,
)
data['predict'] = res.predict()
data_std = data.sort_values(by='education')
_ = plt.plot(
data_std['education'], data_std['predict'], 'r-'
)
# Compute the Sums of Squares defined above. We use ```np.ones()``` to create vectors of constants
# +
n_data = len(data['education'])
SSE = sum(res.resid * res.resid)
sst_array = (
data['logwage']
- np.ones(n_data) * data['logwage'].mean()
)
SST = sum(sst_array * sst_array)
ssr_array = (
    np.ones(n_data) * data['logwage'].mean() - res.predict()
)
SSR = sum(ssr_array * ssr_array)
# -
SST, SSE + SSR
# Compute the F statistic
F = (SSR / 1) / (SSE / res.df_resid)
F
# Check this manually calculated value with the OLS summary
res.summary()
# We get the interval that contains 90% of the F distribution, given the degrees of freedom we have
stats.f.interval(0.90, 1, res.df_resid)
# We clearly have an F statistic much larger than this,
# so we would reject the Null Hypothesis that there is no linear dependence of ```logwage``` on ```education```.
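# As a quick check (a minimal sketch), the p-value for this F statistic can be computed directly from the F distribution's survival function:
p_value = stats.f.sf(F, 1, res.df_resid)
p_value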
# ----
# ## When a Linear Model is Wrong
#
# The Anscombe quartet is a famous quartet of datasets, all having the same descriptive numerical values (mean, etc), but wildly different shapes when plotted. We will use the dataset that is a quadratic.
data = pd.read_csv('../data/anscombe.csv')
data.head()
# Add some Gaussian noise
data['y2'] = (
data['y2']
+ np.random.normal(0, 1, len(data['y2'])) * 0.45
)
# Display the dataset
_ = plt.plot(data['x2'], data['y2'], 'o')
# ### Fit a Linear Model
# We get the best OLS line, and display it
res = ols('y2 ~ x2', data=data).fit()
_ = plt.plot(data['x2'], data['y2'], 'bo')
_ = plt.plot(data['x2'], res.predict(), 'r-')
# When we plot the residuals, we can see that they are apparently not distributed at random (negative at the ends, positive in the middle)
_ = plt.plot(data['x2'], res.resid, 'ro')
_ = plt.axhline(0, color='black')
# ### Fit a Quadratic Model
#
# We add a term ```X^2``` to our linear model
# +
data['x2sq'] = data['x2'] * data['x2']
data_std = data.sort_values(by='x2')
# -
res2 = ols('y2 ~ x2 + x2sq', data=data_std).fit()
# The fit is clearly much better, and the residuals look to be more randomly distributed
_ = plt.plot(data['x2'], data['y2'], 'bo')
_ = plt.plot(data_std['x2'], res2.resid, 'ro')
_ = plt.plot(data_std['x2'], res2.predict(), 'g-')
_ = plt.axhline(0, color='black')
# Just plotting the residuals, we see they are smaller than those of the pure linear model
_ = plt.plot(data['x2'], res2.resid, 'ro')
_ = plt.axhline(0, color='black')
# ----
# ## QQ Plots
#
# One technique for finding a model that is bad is to look at the QQ Plots for the residuals of each OLS fit (linear and quadratic). This is a test for normality; in this case, the difference is very subtle, and could not be used to reject one model or the other
_ = sm.qqplot(res.resid, line='r')
_ = sm.qqplot(res2.resid, line='r')
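# As a complementary numeric check (a minimal sketch using the `jarque_bera` helper imported above), we can compare the normality-test p-values of the two sets of residuals directly:
# +
jb_lin = jarque_bera(res.resid)    # returns (JB statistic, p-value, skew, kurtosis)
jb_quad = jarque_bera(res2.resid)
print(jb_lin[1], jb_quad[1])       # p-values of the normality test for each fit's residuals
# -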
# ### OLS Summary Comparison
#
# A comparison of the RegressionResults summary() output shows that the quadratic model does a much better job of explaining the spread of Y2 values (R Squared of 0.962 versus 0.094). Further, the ```Jarque-Bera``` statistic (a test for normality of the residuals) indicates that the residuals are NOT normally distributed in the linear model, and are probably normally distributed in the quadratic model.
#
# The ```Omnibus``` statistic also indicates that the residuals are not normal in the pure linear case, and probably are normal in the quadratic case.
res.summary()
res2.summary()
# -----
# ## Heteroscedasticity Tests
#
# ### Read and Explore a Medical Dataset
data = pd.read_csv('../data/hiv.txt', sep=' ')
data.head()
_ = plt.plot(data['GSS'], data['VL'], 'bo')
# ### Data Cleaning
# There is clearly an outlier value, so we exclude it.
# +
data2 = data[data['VL'] < 200_000].copy()
len(data), len(data2)
# -
# ### Fit a Linear Model
res = ols('VL ~ GSS', data=data2).fit()
# Display the line of best fit
_ = plt.plot(data['GSS'], data['VL'], 'bo')
_ = plt.plot(data2['GSS'], res.predict(), 'g-')
# Plotting the residuals indicates that the error variance is NOT constant
_ = plt.plot(data2['GSS'], res.resid, 'ro')
_ = plt.axhline(0, color='black')
# Looking at the ```Jarque-Bera``` statistic in the ```summary()``` output, we see that there is a vanishingly small chance that the residuals are normally distributed
res.summary()
# ### Additional Heteroscedasticity Tests
#
# ```statsmodels``` has a number of tests for constant-variance residuals. These tests are not addressed in the lecture (but almost certainly R supports them). We show the results for two of them below. They confirm that the residuals are very unlikely to be distributed with constant variance.
x = sm.add_constant(data2['GSS'])
lm, lm_pvalue, fvalue, f_pvalue = het_white(res.resid, x)
print(
'White Test for Heteroscedasticity\n',
f'lagrange multiplier statistic. = {lm}, \n lagrange multiplier statistic = {lm_pvalue},\n',
f'f-statistic of the hypothesis that the error variance does not depend on x = {fvalue}, \n',
f'p-value for the f-statistic = {f_pvalue}',
)
lm, lm_pvalue, fvalue, f_pvalue = het_breuschpagan(
res.resid, x
)
print(
'Breusch-Pagan Test for Heteroscedasticity\n',
f'lagrange multiplier statistic. = {lm}, \n lagrange multiplier statistic = {lm_pvalue},\n',
f'f-statistic of the hypothesis that the error variance does not depend on x = {fvalue}, \n',
f'p-value for the f-statistic = {f_pvalue}',
)
#
# ---------
# ## Reproducibility
# %watermark -h -iv
# %watermark
# and help us to reach the global minimum in fewer epochs. By choosing large values for the learning rate, the training phase becomes unstable and the MLP class raises a floating-point overflow (see the sketch below).
# - Smaller batch sizes help us to reach the global minimum faster.
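# A tiny illustration of the learning-rate effect described above (a minimal sketch of plain gradient descent on f(w) = w**2, not the MLP class itself):
# +
def gd(eta, steps=100, w=1.0):
    for _ in range(steps):
        w -= eta * 2 * w   # gradient of w**2 is 2*w
    return w
print(gd(0.1))   # small learning rate: converges toward the minimum at 0
print(gd(1.1))   # too-large learning rate: the iterates grow without bound (eventually overflowing)
# -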
# ## Satlog dataset
# The database consists of the multi-spectral values of pixels in 3x3 neighbourhoods in a satellite image, and the classification associated with the central pixel in each neighbourhood. The aim is to predict this classification, given the multi-spectral values. In the sample database, the class of a pixel is coded as a number.
#
#
# [More information in UCI repository](https://archive.ics.uci.edu/ml/datasets/Statlog+(Landsat+Satellite))
hidden_layers = [(32, 16, 8), (10, 10, 10), (64,)]
activations = [Tanh(), Sigmoid(), ReLu()]
batch_sizes = [256]
epochs = [30]
mus = [0.95]
betas = [.2, .3]
etas = [.01]
alphas = [0.01, 0]
# +
best_models = analyze('satlog.csv',n=7)
best_models
# -
# Again we trained a network with this specification and reached a train accuracy of **83.74%** and a test accuracy of **81.45%** in 100 epochs.
#
# ```
# MLP([64], activation=ReLu(), batch_size=128, epochs=100, mu=0.95, beta=.3, eta=.02, alpha=.001, verbose=1, task='classification')
# ```
#
# <img src="img/satlog-result.png?" width='50%'>
# ## MNIST dataset
# <img src='img/mnist.png' width="500px">
# The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
#
# [More information in lecun.com](http://yann.lecun.com/exdb/mnist/)
hidden_layers = [(64, 64), (128, 64), (128, 32, 32),(128, 32, 32)]
activations = [ReLu(),LeakyReLu(.03)]
batch_sizes = [512]
epochs = [10]
mus = [0, .85]
betas = [.1,.2]
etas = [.001,.01]
alphas = [.001,0]
best_models = analyze('mnist.csv', n=6)
best_models
# Again we trained a network with this architecture and reached **87.91%** accuracy on the train set and **86.89%** on the test set in just 15 epochs.
#
# MLP([128, 64], activation=ReLu(), batch_size=64, epochs=15, mu=0.85, beta=.1, eta=.03, alpha=.001, verbose=1, task='classification')
# <img src="img/mnist-result.png?" width='50%'>
# To reach this accuracy with just 15 epochs we reduced the batch size from 512 to 64. This shows the important role of batch size in training a network. Another interesting fact we found is the effect of the number of epochs on the overfitting problem. In the figure above, you can see that from iteration 7-8 onwards the network's validation accuracy starts to decrease.
# ## Fashion mnist dataset
# <img src='img/fashion-mnist-sprite.png' width="400px" height="100px">
#
# Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.
#
# [More information in zalandoresearch's github repository](https://github.com/zalandoresearch/fashion-mnist)
hidden_layers = [(64, 64), (128, 64), (128, 32, 32),(128, 32, 32)]
activations = [ReLu(),LeakyReLu(.03)]
batch_sizes = [512]
epochs = [10]
mus = [0, .85]
betas = [.1,.2]
etas = [.001,.01]
alphas = [.001,0]
best_models = analyze('fashion.csv', n=6)
best_models
# Again we trained a network with this architecture and reached **76.78%** accuracy on the train set and **75.36%** on the test set in just 10 epochs.
#
# MLP([128, 64], activation=LeakyReLu(.03), batch_size=64, epochs=10, mu=0.85, beta=.2, eta=.01, alpha=.001, verbose=1, task='classification')
# <img src="img/fashion-result.png?" width='50%'>
# The figure above shows that, unlike the iris dataset, the validation accuracy is always lower than the training accuracy.
# # Concolusion
# In this exercise, we implemented a simple neural network from scratch with matrix-form formulations, which is very important when the number of neurons and layers is large. Our implementation does both regression and classification tasks by just setting an argument. All of the preprocessing requirements, such as scaling and encoding, are handled by the `MLP` class automatically. To find an appropriate network for a specific dataset, we implemented a `GridSearch` class which uses **parallelization** techniques and finds the optimal architecture among all possible candidates. To show the results, we tested 4 different datasets that are used by many researchers for classification tasks. The results show the important role of batch size, number of epochs and learning rate in training a network.
| 14,699 |
/vehicle-detection.ipynb
|
4ad2542b57c1fc9cfb1eea988ac26e04be3cd543
|
[] |
no_license
|
lijunsong/udacity-vehicle-detection-and-tracking
|
https://github.com/lijunsong/udacity-vehicle-detection-and-tracking
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,529,683 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import model_selection, preprocessing, linear_model, naive_bayes, metrics, svm, dummy, feature_selection
# For stackoverflow uncomment following lines
# csz = '18'
# ml_in_file = "/data/khodadaa/ml-exp/stack_feat/ml_in_{}.csv".format(csz)
# ml_out_file = "/data/khodadaa/ml-exp/stack_feat/ml_out_{}_".format(csz)
# For wikipedia uncomment following lines
csz = '2'
ml_in_file = "/data/khodadaa/ml-exp/wiki/ml_in_{}.csv".format(csz)
# ml_in_file = "/data/khodadaa/ml-exp/wiki/ml_in_{}_vahid_features.csv".format(csz)
ml_out_file = "/data/khodadaa/ml-exp/wiki/ml_out_{}_new_".format(csz)
#For inex uncomment following lines
# csz = '24'
# ml_in_file = "/data/khodadaa/ml-exp/inex/inex_ml_in_{}.csv".format(csz)
# ml_out_file = "/data/khodadaa/ml-exp/inex/inex_ml_out_{}_".format(csz)
# +
from scipy.stats import ttest_rel
def t_test(data1, data2, alpha=0.05):
# compare samples
stat, p = ttest_rel(data1, data2)
# print('Statistics=%.3f, p=%.5f' % (stat, p))
# interpret
if p > alpha:
# print('Same distributions (fail to reject H0)')
return True
else:
# print('Different distributions (reject H0)')
return False
def evaluation_results(y_true, y_pred, weight):
tn, fp, fn, tp = metrics.confusion_matrix(y_true, y_pred, sample_weight=weight).ravel()
# print("\t precision = %.3f" % (tp / (tp + fp)))
# print("\t recall = %.3f" % (tp / (tp + fn)))
# print('\t f1 score = %.3f' %(metrics.f1_score(y_true, y_pred)))
# print("\t negative predictive value= %.2f" % (tn / (tn + fn)))
# print("\t true negative rate= %.2f" % (tn / (tn + fp)))
# print("\t 1s percentage = %.2f" % (100 * np.sum(y_pred) / y_pred.shape[0]))
ev = {'acc': (tp+tn)/(tp+tn+fp+fn),
'prec': tp/(tp+fp),
'rec': tp/(tp+fn),
'f1': metrics.f1_score(y_true, y_pred),
'NPV': tn/(tn+fn),
'TNR': tn/(tn+fp),
'1-ratio': 100*np.sum(y_pred)/y_pred.shape[0]}
return ev
def rank_features(model, X, y):
    # create the RFE model that ranks all features by selecting down to a single attribute
    rfe = feature_selection.RFE(model, 1)
rfe = rfe.fit(X, y)
return rfe.ranking_
def get_mrr(y: pd.Series, rrgrps:dict, weights=None)-> float:
if weights is None:
weights = pd.Series(index=y.index, data=1.0)
(lb1, rrg1), (lb2, rrg2) = list(rrgrps.items())
y1inx, y2inx = y[y==lb1].index, y[y==lb2].index
tot_wg = weights.loc[y.index].sum(axis=0)
summed_rr = (rrg1.loc[y1inx] * weights.loc[y1inx]).sum(axis=0) + \
(rrg2.loc[y2inx] * weights.loc[y2inx]).sum(axis=0)
return summed_rr / tot_wg
# -
# # Train/Test
# +
# %%time
TEST_SIZE = 0.33
THREASHOLD = None
FEATURES = None
# FEATURES = ['ql_t_sub', 'ql_t_cmp']
results = pd.DataFrame(columns=['features', '#feat', '|test|', '%badQ','thres', 'acc-tr',
'acc', 'prec', 'rec', 'f1', 'TNR', 'NPV', '1-ratio',
'mrr', 'mrr-max', 'mrr-bad', 'mrr-good'])
def add_nonlinear_features(df):
cols_sub = [c for c in df.columns if c[-4:] == '_sub']
cols_cmp = [c for c in df.columns if c[-4:] == '_cmp']
for i, _ in enumerate(cols_sub):
df[cols_sub[i]+'/'+cols_cmp[i]] = df[cols_sub[i]]/(df[cols_cmp[i]]+0.00000001)
return df
def build_model(tr_X, tr_y, mod, cw=None, sw=None):
if mod in ['dum-u', 'dum-f', 'dum-s']:
strag = {'dum-u': 'uniform', 'dum-f': 'most_frequent', 'dum-s': 'stratified'}
clf = dummy.DummyClassifier(strategy=strag[mod], random_state=1)
if mod in ['dum-0', 'dum-1']:
clf = dummy.DummyClassifier(strategy='constant', constant=int(mod[-1]))
if mod == 'log':
clf = linear_model.LogisticRegression(class_weight=cw, random_state=1)
clf.fit(tr_X, tr_y, sample_weight=sw)
return clf
in_df = pd.read_csv(ml_in_file)
y = in_df['Y'].copy()
X = in_df[in_df.columns.difference(['Query', 'Y', 'rr_al', 'rr_sb', 'TestViewCount'])].copy()
bad_ix = in_df[in_df['rr_al'] > in_df['rr_sb']].index
good_ix = in_df[in_df['rr_al'] <= in_df['rr_sb']].index
le = preprocessing.LabelEncoder()
y = pd.Series(data=le.fit_transform(y), index=y.index)
X = add_nonlinear_features(X)
if FEATURES:
X = X.filter(FEATURES)
feat = list(X.columns)
print('Features:\n%s' % (feat))
train_x, test_x, train_y, test_y = model_selection.train_test_split(X, y, stratify=y, \
test_size=TEST_SIZE, random_state=5)
sc = preprocessing.MinMaxScaler()
train_x = sc.fit_transform(train_x)
test_x = sc.transform(test_x)
def get_sample_weights(mod):
w = in_df['TestViewCount'].copy()
if mod == 'swbc':
tr0inx, tr1inx = train_y[train_y==0].index, train_y[train_y==1].index
ts0inx, ts1inx = test_y[test_y==0].index, test_y[test_y==1].index
tr0s, tr1s = w[tr0inx].sum(axis=0), w[tr1inx].sum(axis=0)
ts0s, ts1s = w[ts0inx].sum(axis=0), w[ts1inx].sum(axis=0)
w[tr0inx] /= tr0s
w[tr1inx] /= tr1s
w[ts0inx] /= ts0s
w[ts1inx] /= ts1s
if mod == 'swAr':
tr0inx, tr1inx = train_y[train_y==0].index, train_y[train_y==1].index
tr0s, tr1s = w[tr0inx].sum(axis=0), w[tr1inx].sum(axis=0)
w[tr0inx] /= tr0s
w[tr1inx] /= tr1s
if mod == 'swnc':
s0, s1 = w[y==0].sum(axis=0), w[y==1].sum(axis=0)
w[y==0] /= s0
w[y==1] /= s1
return w.loc[train_y.index], w.loc[test_y.index]
pred_df = pd.DataFrame(index=test_y.index)
pred_df['Query'] = in_df.loc[test_y.index, 'Query']
pred_df['TestViewCount'] = in_df.loc[test_y.index, 'TestViewCount']
pred_df['rr_al'] = in_df.loc[test_y.index, 'rr_al']
pred_df['rr_sb'] = in_df.loc[test_y.index, 'rr_sb']
pred_df['true_y'] = test_y
for mde in ['dum-u', 'dum-f', 'dum-s', 'dum-0', 'dum-1', 'ql', 'log', 'log-bal',
'dum-u-swvc', 'dum-s-swvc', 'dum-0-swvc', 'dum-1-swvc', 'ql-swvc', 'log-swvc',
'dum-u-swbc', 'dum-s-swbc', 'dum-0-swbc', 'dum-1-swbc', 'ql-swbc', 'log-swbc',
'dum-u-swAr', 'dum-s-swAr', 'dum-0-swAr', 'dum-1-swAr', 'ql-swAr', 'log-swAr',
'dum-u-swnc', 'dum-s-swnc', 'dum-0-swnc', 'dum-1-swnc', 'ql-swnc', 'log-swnc',
'log-bal-swvc', 'log-bal-swbc', 'log-bal-swAr', 'log-bal-swnc']:
m = mde
clf = None
clweight = None
fea_cnt = 0
print('-------------------------------------------')
print(m + " classifier ..")
train_weights, test_weights = None, None
# set weight
if m[-5:] in ['-swvc', '-swnc', '-swbc', '-swAr']:
train_weights, test_weights = get_sample_weights(m[-4:])
# train without sample_weights, test with sample_weights
if m in ['log-bal-swvc', 'log-bal-swbc', 'log-bal-swAr', 'log-bal-swnc']:
train_weights = None
m = m[:-5]
# set balanced labels
if m[-4:] == '-bal':
clweight = 'balanced'
m = m[:-4]
if m[:2] == 'ql':
X_ql = X.loc[test_y.index, ['ql_t_sub', 'ql_t_cmp']]
pred_y = np.where(X_ql['ql_t_sub'] >= X_ql['ql_t_cmp'], 'sub', 'all')
pred_y = le.transform(pred_y)
fea_cnt = 2
else:
clf = build_model(train_x, train_y, mod=m, sw=train_weights, cw=clweight)
pred_y = clf.predict(test_x)
fea_cnt = len(feat)
pred_y = pd.Series(data=pred_y, index=test_y.index)
res = {'features': feat, '#feat': fea_cnt, '|test|': TEST_SIZE, 'thres': THREASHOLD,
'acc-tr': None if clf is None else clf.score(train_x, train_y, train_weights)}
res.update(evaluation_results(test_y, pred_y, test_weights))
subL, allL = le.transform(['sub', 'all'])
rrg = {subL: in_df['rr_sb'], allL: in_df['rr_al']}
mrr = get_mrr(pred_y, rrg, weights=test_weights)
mrr_mx = get_mrr(test_y, rrg, weights=test_weights)
res.update({'mrr': mrr, 'mrr-max': mrr_mx})
# bad queries
bad_test_ix = test_y.index.intersection(bad_ix)
res['%badQ'] = (bad_test_ix.shape[0] / test_y.shape[0]) * 100.0
bad_rrg = {subL: in_df.loc[bad_test_ix, 'rr_sb'], allL: in_df.loc[bad_test_ix, 'rr_al']}
bad_mrr = get_mrr(pred_y.loc[bad_test_ix], bad_rrg, weights=test_weights)
res['mrr-bad'] = bad_mrr
# good queries
good_test_ix = test_y.index.intersection(good_ix)
good_rrg = {subL: in_df.loc[good_test_ix, 'rr_sb'], allL: in_df.loc[good_test_ix, 'rr_al']}
good_mrr = get_mrr(pred_y.loc[good_test_ix], good_rrg, weights=test_weights)
res['mrr-good'] = good_mrr
    # rank features for logistic regression
if mde[:3] == 'log':
feat_rankings = rank_features(clf, train_x, train_y)
feat_rankings = list(zip(feat_rankings, feat))
feat_rankings.sort()
res['features'] = str(feat_rankings)
results.loc[mde, :] = res
pred_df[mde] = pred_y
results.to_csv(ml_out_file+'{}_evals.csv'.format(len(feat)))
pred_df.to_csv(ml_out_file+'{}_predicts.csv'.format(len(feat)), index=False)
# -
test_names = list(pred_df.columns.difference(['Query', 'TestViewCount', 'rr_al', 'rr_sb']))
for i, m1 in enumerate(test_names):
for m2 in test_names[i+1:]:
if t_test(pred_df[m1], pred_df[m2]):
print(m1, m2, 'Same distributions')
results.loc[['dum-0-swvc', 'dum-1-swvc', 'dum-u-swvc', 'ql-swvc', 'log-bal-swvc'], ['%badQ','mrr-good', 'mrr-bad', 'mrr']]
results.loc[results.index.str.startswith('log'), 'features'].to_csv(ml_out_file+'{}_feat_rankings.csv'.format(len(feat)))
# # Tips
# add a new indexed row to a Dataframe from a dictionary
df = pd.DataFrame(columns=['A', 'B', 'C'], dtype=int)
df.loc['log', :] = {'A':1, 'B':10}
df.loc['dum', :] = {'A':2, 'C':20}
df
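# Note: keys missing from the dict are left as NaN (here 'C' for row 'log' and 'B' for row 'dum'),
# so the requested int dtype is typically not preserved once NaN values appear:
df.dtypes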
# combine two dictionaries
d1 = {'A':1, 'C':3}
d2 = {'B':2, 'A':4}
d1.update(d2)
d1
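# Note: update() mutates d1 in place and overlapping keys take d2's value, so 'A' ends up as 4.
# A non-mutating alternative that gives the same merged result is dict unpacking:
merged = {**{'A':1, 'C':3}, **{'B':2, 'A':4}}
merged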
| 10,358 |
/your-code/main.ipynb
|
0704da707508658a4fc9766df2115629c14fcf2c
|
[] |
no_license
|
Sanka101/lab-confidence-intervals
|
https://github.com/Sanka101/lab-confidence-intervals
| 0 | 0 | null | 2020-07-21T12:16:20 | 2019-07-04T10:19:38 | null |
Jupyter Notebook
| false | false |
.py
| 10,852 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext bigdata
# %hive_start
# %timeout 300
# +
# %%hive
DROP TABLE IF EXISTS t0;
CREATE TABLE t0 (
c1 STRING,
c2 ARRAY<CHAR(1)>,
c3 MAP<STRING, INT>
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
COLLECTION ITEMS TERMINATED BY ','
MAP KEYS TERMINATED BY '#'
LINES TERMINATED BY '\n';
LOAD DATA LOCAL INPATH 'data.tsv' INTO TABLE t0;
# +
# %%hive
DROP TABLE IF EXISTS respuesta;
CREATE TABLE respuesta
AS
SELECT
letra,
key,
COUNT(value)
FROM
t0
LATERAL VIEW explode(c2) adTable AS letra
LATERAL VIEW explode(c3) adTable AS key,value
GROUP BY letra,key;
INSERT OVERWRITE LOCAL DIRECTORY 'output'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
SELECT * FROM respuesta;
# -
# %hive_quit
stats.norm.interval(0.8, loc=np.mean(heights), scale=stand_dev)
# ## Challenge 2
# In a sample of 105 shops selected randomly from an area, we note that 27 of them have had losses in this month. Get an interval for the proportion of businesses in the area with losses to a confidence level of 80% and a confidence level of 90%.
#
# **Hint**: function `stats.norm.interval` from `scipy` can help you get through this exercise.
# 105 shops , 27 losses
mean= 27/105
# +
# Basically, what we need to supply is our confidence level, the degrees of freedom
# (sample size - 1), the loc parameter (the mean), and the scale (the standard error, sem)
# -
105-27
losses = [1 for i in range(27)]
profit = [0 for i in range(78) ]
# Make new list with all 105 stores
stores= losses + profit
np.mean(stores)
confidence_80 = stats.t.interval(0.8, len(stores)-1, loc = np.mean(stores), scale = stats.sem(stores))
confidence_90 = stats.t.interval(0.9, len(stores)-1, loc = np.mean(stores), scale = stats.sem(stores))
print(f"Confidence of 80% procent = {confidence_80}, and with a confidence of 90% {confidence_90}")
# ## Challenge 3 - More practice
# For the same example in challenge 1, calculate a confidence interval for the variance at 90% level.
#
# **Hint**: function `stats.chi2.interval` from `scipy` can help you get through this exercise.
# +
# NOT THIS TIME
# -
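# A minimal sketch of one way to do it, assuming `heights` is the Challenge 1 sample used above:
# the chi-square quantiles bound the sample variance.
# +
n = len(heights)
sample_var = np.var(heights, ddof=1)
chi2_low, chi2_high = stats.chi2.interval(0.90, n - 1)
var_interval = ((n - 1) * sample_var / chi2_high, (n - 1) * sample_var / chi2_low)
var_interval
# -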
# ## Challenge 4 - More practice
# The sulfuric acid content of 7 similar containers is 9.8, 10.2, 10.4, 9.8, 10.0, 10.2 and 9.6 liters. Calculate a 95% confidence interval for the average content of all containers assuming an approximately normal distribution.
#
# ```
# acid = [9.8, 10.2, 10.4, 9.8, 10.0, 10.2, 9.6]
# ```
#
# **Hint**: function `stats.t.interval` from `scipy` can help you get through this exercise.
# +
# 7 containers
acid = [9.8, 10.2, 10.4, 9.8, 10.0, 10.2, 9.6]
np.mean(acid)
# degrees of freedom len()-1
# error stand dev / sqrtn
conf_95= stats.t.interval(0.95, len(acid)-1, loc= np.mean(acid), scale= stats.sem(acid))
# -
print(f"The average content of sulfuric acid is {conf_95}")
# ## Bonus Challenge
# The error level or sampling error for the first challenge is given by the following expression:
# $$Error = z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt n}$$
# Where z represents the value for N(0,1)
#
#
# Suppose that with the previous data of challenge 1, and with a confidence level of
# 99% (that is, almost certainly) we want to estimate the average population size, so that the error level committed is not greater than half a centimeter.
#
# #### 1.- Determine what size the selected sample of men should be.
print(stats.norm.interval(0.99, loc=np.mean(heights), scale= 0.5))
error = 0.5
sigma = 4
z = stats.norm.ppf(1 - (1 - 0.99) / 2)  # z value for a 99% confidence level
n = math.pow(z * sigma / error, 2)
n
# #### 2.- For the second challenge, we have the following error:
# $$ Error = z_{\frac{\alpha}{2}}\sqrt{\frac{p\times q}{n}} $$
# #### Determine the sample size required to not exceed an error of 1% with a confidence of 80%.
# +
# your code here
# -
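# A minimal sketch, assuming p is the loss proportion observed in Challenge 2 (27/105):
# +
p = 27 / 105
q = 1 - p
z = stats.norm.ppf(1 - (1 - 0.80) / 2)   # z value for an 80% two-sided interval
error = 0.01
n_required = (z ** 2 * p * q) / error ** 2
n_required
# -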
# ## Bonus Challenge
#
# Let's consider the following problem:
#
# Build a confidence interval of 94% for the real difference between the durations of two brands of spotlights, if a sample of 40 spotlights taken randomly from the first mark gave an average duration of 418 hours, and a sample of 50 bulbs of another brand gave a duration average of 402 hours. The standard deviations of the two
# populations are 26 hours and 22 hours, respectively.
#
# Sometimes, we will be interested in the difference of two different groups of random variables. We can also build a confidence interval for that! We have some different cases regarding the variance but for this specific case (the variance are different and known), we have that:
#
# $$\overline{X} - \overline{Y} \sim N(\mu_{X} - \mu_{Y} , \sqrt{\frac{\sigma_{X}^2}{n_X}+\frac{\sigma_{Y}^2}{n_Y}})$$
#
# Solve the problem with this information.
# +
# your code here
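# A minimal sketch following the normal-difference formula stated above:
n_x, n_y = 40, 50
mean_x, mean_y = 418, 402
sigma_x, sigma_y = 26, 22
se_diff = np.sqrt(sigma_x ** 2 / n_x + sigma_y ** 2 / n_y)
conf_94 = stats.norm.interval(0.94, loc=mean_x - mean_y, scale=se_diff)
conf_94
# -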
| 4,990 |
/Extract Entropy, MCR, and ngramscore.ipynb
|
617b5d58585a556dbdb7c263d1b02f20fa847ebe
|
[] |
no_license
|
ameer117/UCI-ML-HACKATHON
|
https://github.com/ameer117/UCI-ML-HACKATHON
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 568,646 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="jaZRNV27ojbM" colab_type="code" outputId="d01ac2bc-3046-4d18-f8fa-bc2a2f1e3985" executionInfo={"status": "ok", "timestamp": 1590292360844, "user_tz": 420, "elapsed": 12056, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 105}
import csv
from gensim.models import Word2Vec
import pandas as pd
import os
import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Input, Embedding, Conv1D, MaxPooling1D, Bidirectional, LSTM, Dense, Dropout, ThresholdedReLU, Flatten
from keras.models import Sequential
from tensorflow.keras import Model
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import nltk
from nltk.util import ngrams
import seaborn as sns
import re
import math
tf.test.gpu_device_name()
# + [markdown] id="0aDrPFKHCouN" colab_type="text"
#
# + id="cqoVi72yH0qz" colab_type="code" outputId="a5f3dcfd-8b90-4465-a044-692842e1ff82" executionInfo={"status": "ok", "timestamp": 1590292395738, "user_tz": 420, "elapsed": 46918, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 122}
from google.colab import drive
drive.mount('/content/drive')
# + id="f7le6BgWojbV" colab_type="code" colab={}
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# downloaded = drive.CreateFile({'id':'1LAXYuJapp6u6shheA3ciIy67GrOQdlKs'})
# downloaded.GetContentFile('q2_revised.csv')
df2 = pd.read_csv('/content/drive/Shared drives/Hackathon_new/q2_revised.csv', encoding='latin1')
# Dataset is now stored in a Pandas Dataframe
gt2 = pd.read_excel('/content/drive/Shared drives/Hackathon_new/q2_answer.xlsx')
alexa = pd.read_csv('/content/drive/Shared drives/Hackathon_new/alexa.csv', header=None, names=['dns'], usecols=[1])
words = pd.read_csv('/content/drive/Shared drives/Hackathon_new/google-10000-english.txt', sep=" ", header=None)
words.columns =["words"]
# + id="72KX2I9SyESc" colab_type="code" colab={}
alexa = pd.read_csv('/content/drive/Shared drives/Hackathon_new/alexa.csv', header=None, names=['dns'], usecols=[1])
# + id="Z4GCZkDoQGen" colab_type="code" outputId="5591acb8-bec4-49f9-f235-41f9ddadb030" executionInfo={"status": "ok", "timestamp": 1590292487500, "user_tz": 420, "elapsed": 138641, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
alexa
# + id="znhOTzSROhLC" colab_type="code" outputId="794bad4a-ac17-4c58-c035-707b37129c46" executionInfo={"status": "ok", "timestamp": 1590292487501, "user_tz": 420, "elapsed": 138618, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
words #top 10,000 meaningful words in domains from google
# + id="RjUaIl_zVQWJ" colab_type="code" colab={}
# + id="Oc2rjWSYojbZ" colab_type="code" outputId="3b62b272-3783-4938-f32b-7e4f42ea8a7b" executionInfo={"status": "ok", "timestamp": 1590292487503, "user_tz": 420, "elapsed": 138590, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 323}
df2.head()
# + id="UIJIFxIoHkGb" colab_type="code" colab={}
# + id="4PFzWG-ZAb3h" colab_type="code" outputId="b637afd2-a1e1-40f6-9c2e-e0a2ee31f22c" executionInfo={"status": "ok", "timestamp": 1590292487505, "user_tz": 420, "elapsed": 138562, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 80}
gt2[gt2.domain.str.contains("?", regex=False)] #found this random invalid domain in the answers
# + id="ywVv07ZYIffP" colab_type="code" colab={}
gt2.drop(10992, inplace=True)
# + id="t5ier10xojbd" colab_type="code" outputId="ef94edcb-7d04-40bc-9943-1e9bb95cb4f5" executionInfo={"status": "ok", "timestamp": 1590292487506, "user_tz": 420, "elapsed": 138534, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
gt2
# + id="zjpxuisJojbh" colab_type="code" colab={}
# + id="VYlN98zIojbk" colab_type="code" colab={}
def extractDNS(info):
info = info.split(' ')
for s in info:
if '.' in s:
return s
return ''
# + id="LO73sd53ojbn" colab_type="code" colab={}
df2['dns'] = df2.Info.apply(extractDNS)
# + [markdown] id="rA9Nh_vFojbr" colab_type="text"
#
# + [markdown] id="DgX5rWxXojbr" colab_type="text"
# ### prepare the data
# input is the DNS string
# output is 1 or 0. 1 means malicious and 0 means benign
# + id="3Az2IlM9ojbs" colab_type="code" colab={}
data_x = df2.dns.unique()
# + id="6TIKNzsdojbv" colab_type="code" colab={}
x_set = set(data_x)
for dns in gt2.domain:
if dns not in x_set:
print(dns)
# + id="SPhxQMMiojb8" colab_type="code" colab={}
# some 42 DNS names appear in both family 1 and family 2. I assume they belong to
# family 2, since they end with '.ws'.
gt_set = set()
repeat_dns = []
for dns in gt2.domain:
if dns not in gt_set:
gt_set.add(dns)
else:
repeat_dns.append(dns)
# + id="UdUOYNjKojb_" colab_type="code" outputId="554a3c62-be86-447a-bfed-9392f63016e9" executionInfo={"status": "ok", "timestamp": 1590292538449, "user_tz": 420, "elapsed": 2895, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
drop_idx = [] #drop index
for dns in repeat_dns:
drop_idx.append(gt2[(gt2.domain==dns) & (gt2.family == 1)].index[0])
ngt2 = gt2.drop(drop_idx) #new answers
ngt2
# + id="OHqB4USFojcC" colab_type="code" colab={}
gt_set = set(ngt2.domain)
data_y = []
for dns in data_x:
if dns in gt_set:
data_y.append(1)
else:
data_y.append(0) #0 if not DGA
#data_y
# + id="lgwg_2x0ojcF" colab_type="code" outputId="ab08f8d2-a8eb-4956-99ad-08335348a740" executionInfo={"status": "ok", "timestamp": 1590292538450, "user_tz": 420, "elapsed": 2869, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
p1_data = pd.DataFrame((zip(data_x, data_y)), columns =['dns', 'family'])
p1_data = p1_data.sample(frac=1, random_state=2020)
p1_data
# + id="6khGUfdTJeAC" colab_type="code" colab={}
# + id="rnO-WDoVojcI" colab_type="code" colab={}
p1_data['dnsLength'] = p1_data.dns.apply(lambda x: len(x))
# + [markdown] id="fxqpXSsA_ekW" colab_type="text"
# #We want to remove all domains with invalid characters
# + [markdown] id="xtf0WrZFD4TP" colab_type="text"
# #Entropy Calculation
# + id="ItoRjjoEDz_Q" colab_type="code" outputId="82aa5536-c70c-4a05-9155-20ef4cfd9638" executionInfo={"status": "ok", "timestamp": 1590292538453, "user_tz": 420, "elapsed": 2831, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 119}
chrs = [c for c in p1_data.dns[2]]
d = ngrams(chrs, 3)
for grams in d:
print(grams)
# + id="L0_HVZtewzWL" colab_type="code" colab={}
def ngramdict(dic, dns, i):
grams = ngrams(getchars(dns), i)
for gram in grams:
string = "".join(list(gram))
if not (string) in dic:
dic[string] = 1
else:
dic[string] += 1
return dic
# + id="Xgbf0WmiFf2U" colab_type="code" colab={}
def cleandomain(domain):
domain = re.sub('([\;!,?])','', domain) #remove random symbols
#\\\\\(\)\[\]\=\*
domain = domain.lower() #lowercase
if (domain.startswith('http://')):
        domain = domain[7:]  # strip the leading 'http://' ([7:0] would yield an empty string)
#print("t")
if (domain.startswith('www.')): #remove www.
domain = domain[4:]
domain = domain.split(".", 1)[0] #the ending .com or .ru etc.
return domain
# + id="JdpwXzSNKysp" colab_type="code" colab={}
def getchars(d):
z = []
for c in d:
if (c == '/' or c== '&' or c == '\\\\' or c == '' or c == '(' or c == '[' or c == '?'): #for debugging purposes
print(d)
if (c == '&'):
print(d)
z.append(c)
return z
# + id="YQkGK0qIW90W" colab_type="code" colab={}
# + id="pioxPDTJWoZF" colab_type="code" colab={}
for dns in gt_set:
if ('?' in dns):
print(dns)
# + id="1gNNS7SIqDBY" colab_type="code" outputId="03fab95b-5f1d-4180-ec2d-180dd86904bb" executionInfo={"status": "ok", "timestamp": 1590292540114, "user_tz": 420, "elapsed": 4421, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 680}
malngramdictlist = []
for i in range(1,6):
dic = {}
for dns in gt_set:
dic = ngramdict(dic, cleandomain(dns), i)
malngramdictlist.append(dic)
malngramdictlist[0]
# + id="yJw5EwUqC5QA" colab_type="code" outputId="45cf51bd-1f89-478a-941f-413a31abcc53" executionInfo={"status": "ok", "timestamp": 1590292540116, "user_tz": 420, "elapsed": 4413, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
cnt = 0
for k,v in malngramdictlist[0].items():
#print((k))
cnt += v
#print(cnt)
for k in malngramdictlist[0]:
malngramdictlist[0][k] = malngramdictlist[0].get(k)/float(cnt)
print(malngramdictlist[0]) #1 gram probabilities for malicious sites
# + id="vLxV1ivBaDwi" colab_type="code" colab={}
# + id="NNfguNANCN34" colab_type="code" outputId="1e572908-eaeb-478f-dab7-c27491cfd8a5" executionInfo={"status": "ok", "timestamp": 1590292540117, "user_tz": 420, "elapsed": 4393, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
mask = p1_data['family'].isin([0])
safedomain = p1_data[mask]
safedomain
# + id="WLLXECk6JJwW" colab_type="code" colab={}
# + [markdown] id="IedWjVqhJ9A6" colab_type="text"
# #BEGIN CLEANING SAFE DOMAIN DATA
# + [markdown] id="eaj3ac9DM0MX" colab_type="text"
# For entropy I will disregard the ending like .cc, .com, etc. and only focus on the domain name. Later I will use the ending as a feature. The purpose of this is so the entropy does not get corrupted: .com, .cc or .net are all common occurrences and would otherwise make a random domain name seem less random.
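# For example, `cleandomain` (defined above) reduces a raw hostname to just its first label:
# +
print(cleandomain('www.StackOverflow.com'))   # -> 'stackoverflow'
print(cleandomain('google.co.uk'))            # -> 'google'
# -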
# + id="sRYj3kp_KAJ7" colab_type="code" outputId="9a7bc42c-3d27-4677-fae6-9efb0635629a" executionInfo={"status": "ok", "timestamp": 1590292540118, "user_tz": 420, "elapsed": 4368, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
safedomain.dns = safedomain.dns.apply(lambda x: cleandomain(x))
print(safedomain[safedomain.dns.str.contains("?", regex=False)])
print(safedomain[safedomain.dns.str.contains(";", regex=False)])
print(safedomain[safedomain.dns.str.contains("&", regex=False)])
print(safedomain[safedomain.dns.str.contains("[", regex=False)])
print(safedomain[safedomain.dns.str.contains("]", regex=False)])
print(safedomain[safedomain.dns.str.contains("(", regex=False)])
print(safedomain[safedomain.dns.str.contains(")", regex=False)])
print(safedomain[safedomain.dns.str.contains("=", regex=False)])
print(safedomain[safedomain.dns.str.contains(":", regex=False)])
print(safedomain[safedomain.dns.str.contains("*", regex=False)])
print(safedomain[safedomain.dns.str.contains("!", regex=False)])
print(safedomain[safedomain.dns.str.contains("\\\\", regex=False)])
print(safedomain[safedomain.dns.str.contains("-", regex=False)])
print(safedomain[safedomain.dns.str.contains("_", regex=False)])
# + id="C0DxR289tBFu" colab_type="code" outputId="9bb823d7-dc6e-40da-a7cb-e4897d8d78b7" executionInfo={"status": "ok", "timestamp": 1590292540119, "user_tz": 420, "elapsed": 4358, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 918}
combined = list(set([*list(safedomain[safedomain.dns.str.contains("?", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains(";", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains("&", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains("[", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains("]", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains("(", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains(")", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains("=", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains(":", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains("*", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains("!", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains("_", regex=False)].index),
*list(safedomain[safedomain.dns.str.contains("\\", regex=False)].index)]))
#indexes of elements to remove
combined
# + id="aTqcyBUVv7t5" colab_type="code" outputId="1296d759-ae10-422c-c48b-1c94c9e25dd3" executionInfo={"status": "ok", "timestamp": 1590292540120, "user_tz": 420, "elapsed": 4349, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
#df = df[~df.datecolumn.isin(a)]
safedomain = safedomain[~safedomain.index.isin(combined)] #drop columns
safedomain = safedomain.drop_duplicates(subset='dns', keep="first") #remove duplicates
safedomain
# + id="zUgDTDF4yhdR" colab_type="code" colab={}
# + id="ZX_JBO_QypZ0" colab_type="code" outputId="67d6664a-ae87-4335-a956-bc7ccf7dcacc" executionInfo={"status": "ok", "timestamp": 1590292540121, "user_tz": 420, "elapsed": 4328, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 918}
print(safedomain[safedomain.dns.str.contains("?", regex=False)])
print(safedomain[safedomain.dns.str.contains(";", regex=False)])
print(safedomain[safedomain.dns.str.contains("&", regex=False)])
print(safedomain[safedomain.dns.str.contains("[", regex=False)])
print(safedomain[safedomain.dns.str.contains("]", regex=False)])
print(safedomain[safedomain.dns.str.contains("(", regex=False)])
print(safedomain[safedomain.dns.str.contains(")", regex=False)])
print(safedomain[safedomain.dns.str.contains("=", regex=False)])
print(safedomain[safedomain.dns.str.contains(":", regex=False)])
print(safedomain[safedomain.dns.str.contains("*", regex=False)])
print(safedomain[safedomain.dns.str.contains("!", regex=False)])
print(safedomain[safedomain.dns.str.contains("\\\\", regex=False)])
print(safedomain[safedomain.dns.str.contains("-", regex=False)])
print(safedomain[safedomain.dns.str.contains("_", regex=False)])
# + id="rS_637BnaICt" colab_type="code" outputId="e86a978d-c0db-47ca-ce64-d1bd993aeb0a" executionInfo={"status": "ok", "timestamp": 1590292544235, "user_tz": 420, "elapsed": 8432, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 680}
#1-6 grams
posgramdictlist = []
for i in range(1,6):
dic = {}
for dns in safedomain.dns:
dic = ngramdict(dic, cleandomain(dns), i)
posgramdictlist.append(dic)
posgramdictlist[0]
# + id="RYu073Nm24Mp" colab_type="code" outputId="168f134d-838f-49a7-d9bb-91623ca2969f" executionInfo={"status": "ok", "timestamp": 1590292544236, "user_tz": 420, "elapsed": 8422, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 71}
cnt = 0
for k,v in posgramdictlist[0].items():
#print((k))
cnt += v
print(cnt)
for k in posgramdictlist[0]:
posgramdictlist[0][k] = posgramdictlist[0].get(k)/float(cnt)
print(posgramdictlist[0]) #1 gram probabilities for safe sites
# + id="vw5-GPBMZLZj" colab_type="code" outputId="1f47a8c0-6723-45e0-e872-01cd0ad29553" executionInfo={"status": "ok", "timestamp": 1590292544237, "user_tz": 420, "elapsed": 8413, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 71}
cnt = 0
for k,v in posgramdictlist[1].items():
#print((k))
cnt += v
print(cnt)
for k in posgramdictlist[1]:
posgramdictlist[1][k] = posgramdictlist[1].get(k)/float(cnt)
print(posgramdictlist[1]) #2 gram probabilities for safe sites
# + [markdown] id="0tDO3w_89P41" colab_type="text"
# 
# + id="C_1_6onD3JVn" colab_type="code" colab={}
#https://dl.acm.org/doi/pdf/10.1145/3011077.3011112
def getngramscore(s, numgrams):
count = 0
d = ngrams([c for c in s], numgrams) #get numgrams grams and store in d
for g in d:
#print(g)
count += posgramdictlist[numgrams-1].get("".join(list(g)))
return count/max((float(len(s)-(numgrams-1))),1) #subtract numgrams minus 1 because this gives us how many ngrams there are. for 1 ngrams its just len(s) and 2 grams it's len(s)-1
# + id="VDvldQ1h6aFQ" colab_type="code" outputId="a9157661-e28d-421a-e7e4-65d305d181a2" executionInfo={"status": "ok", "timestamp": 1590292544240, "user_tz": 420, "elapsed": 8395, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 119}
print(getngramscore("test",1))
print(getngramscore("google",1))
print(getngramscore("zappos",1))
print(getngramscore("amazon",1))
print(getngramscore("zzyfhjzmm",1))
print(getngramscore("aaaaaaaaaa",1))
# + id="1DyxxlUfe44s" colab_type="code" outputId="0652d673-9201-4afd-ba24-6e8aa5463f2a" executionInfo={"status": "ok", "timestamp": 1590292544240, "user_tz": 420, "elapsed": 8383, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 136}
print(getngramscore("test",2))
print(getngramscore("google",2))
print(getngramscore("zappos",2))
print(getngramscore("amazon",2))
print(getngramscore("zzyfhjzmm",2))
print(getngramscore("aaaaaaaaaa",2))
print(getngramscore("testfdjslkf-nine",2))
# + [markdown] id="KvGdzDW6Anvh" colab_type="text"
# 
# + id="g5HNoBlk9I_C" colab_type="code" colab={}
#https://redcanary.com/blog/threat-hunting-entropy/
def getentropy(s):
tempdic = {}
count = 0
d = ngrams([c for c in s], 1) #get 1 grams and store in d
for g in d:
if not g in tempdic:
tempdic[g] = 1/len(s)
else:
tempdic[g] += 1/len(s)
for k, v in tempdic.items():
count += (v*math.log2(v/posgramdictlist[0].get(list(k)[0])))
return count
# + id="DpkqP3Vygp78" colab_type="code" colab={}
def getentropy2grams(s):
tempdic = {}
count = 0
d = ngrams([c for c in s], 2) #get 1 grams and store in d
for g in d:
if not g in tempdic:
tempdic[g] = 1/(len(s)-1) #subtract 1 because there are len(s) - 1 bigrams
else:
tempdic[g] += 1/(len(s)-1)
for k, v in tempdic.items():
count += (v*math.log2(v/posgramdictlist[1].get("".join(list(k)))))
return count
# + id="j80tquDM406j" colab_type="code" outputId="dbf22f69-94d3-4f22-9abb-d6b7defee41c" executionInfo={"status": "ok", "timestamp": 1590292544245, "user_tz": 420, "elapsed": 8352, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 119}
print(getentropy("test"))
print(getentropy("google"))
print(getentropy("zappos"))
print(getentropy("amazon"))
print(getentropy("zzyfhjzmm"))
print(getentropy("aaaaaaaaaa"))
# + id="Vez8oWEZinO4" colab_type="code" colab={}
posgramdictlist[1].get("".join(list('t')))
# + id="Dv-MhRagiMGX" colab_type="code" outputId="400c384c-4770-446a-8fc8-92f9d04d47d0" executionInfo={"status": "ok", "timestamp": 1590292544246, "user_tz": 420, "elapsed": 8326, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 119}
print(getentropy2grams("test"))
print(getentropy2grams("google"))
print(getentropy2grams("zappos"))
print(getentropy2grams("amazon"))
print(getentropy2grams("zzyfhjzmm"))
print(getentropy2grams("aaaaaaaaaa"))
# + id="zLd5hsJVdVoa" colab_type="code" outputId="b43da5be-de2a-4db3-a961-a26c25dba09e" executionInfo={"status": "ok", "timestamp": 1590292545418, "user_tz": 420, "elapsed": 9486, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
alexa['cleandns'] = alexa.dns.apply(lambda x: cleandomain(x))
alexa
# + id="ioLGaHnld8Hr" colab_type="code" outputId="fbf924bd-cc36-43c1-dfb1-bcfab117f0da" executionInfo={"status": "ok", "timestamp": 1590292618493, "user_tz": 420, "elapsed": 82550, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 697}
#1-6 grams for alexa
alexagramlist = []
for i in range(1,6):
dic = {}
for dns in alexa.cleandns:
dic = ngramdict(dic, cleandomain(dns), i)
alexagramlist.append(dic)
alexagramlist[0]
# + id="3o63l8TNfAV3" colab_type="code" outputId="5f12ec8f-c75d-4be7-92c4-d554a48d86af" executionInfo={"status": "ok", "timestamp": 1590292618494, "user_tz": 420, "elapsed": 82538, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 71}
cnt = 0
for k,v in alexagramlist[0].items():
#print((k))
cnt += v
print(cnt)
for k in alexagramlist[0]:
alexagramlist[0][k] = alexagramlist[0].get(k)/float(cnt)
print(alexagramlist[0]) #1 gram probabilities for alexa top million sites
# + id="OvHbo323u_zV" colab_type="code" outputId="30d524b1-6269-4cf7-c3da-c3afa917dae5" executionInfo={"status": "ok", "timestamp": 1590292618495, "user_tz": 420, "elapsed": 82527, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 71}
cnt = 0
for k,v in alexagramlist[1].items():
#print((k))
cnt += v
print(cnt)
for k in alexagramlist[1]:
alexagramlist[1][k] = alexagramlist[1].get(k)/float(cnt)
print(alexagramlist[1]) #2 gram probabilities for alexa top million sites
# + id="aA9M0-jznQv5" colab_type="code" outputId="7209bf16-3d38-40d6-d41e-813b0948d176" executionInfo={"status": "ok", "timestamp": 1590292618496, "user_tz": 420, "elapsed": 82516, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
cnt = 0
for k,v in alexagramlist[2].items():
#print((k))
cnt += v
print(cnt)
for k in alexagramlist[2]:
alexagramlist[2][k] = alexagramlist[2].get(k)/float(cnt)
#print(alexagramlist[2]) #3 gram probabilities for alexa top million sites
# + id="LA1x-wzGnZsf" colab_type="code" outputId="3698860e-2bd9-49f2-95d9-c360487fdea8" executionInfo={"status": "ok", "timestamp": 1590292618496, "user_tz": 420, "elapsed": 82504, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
cnt = 0
for k,v in alexagramlist[3].items():
#print((k))
cnt += v
print(cnt)
for k in alexagramlist[3]:
alexagramlist[3][k] = alexagramlist[3].get(k)/float(cnt)
#print(alexagramlist[3]) #4 gram probabilities for alexa top million sites
# + id="APYq2zq3nexl" colab_type="code" outputId="cbbe8e86-1672-441b-aa0d-090d15bdc00f" executionInfo={"status": "ok", "timestamp": 1590292665241, "user_tz": 420, "elapsed": 1949, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
cnt = 0
for k,v in alexagramlist[4].items():
#print((k))
cnt += v
print(cnt)
for k in alexagramlist[4]:
alexagramlist[4][k] = alexagramlist[4].get(k)/float(cnt)
#print(alexagramlist[4]) #5 gram probabilities for alexa top million sites
# + id="t-ZMT5eUuRzp" colab_type="code" colab={}
# + id="TiNGZ95FfgjH" colab_type="code" colab={}
def getalexangramscore(s, numgrams):
#print(s)
count = 0
d = ngrams([c for c in s], numgrams) #get numgrams grams and store in d
for g in d:
#print(g)
if ("".join(list(g)) in (alexagramlist[numgrams-1].keys())):
count += alexagramlist[numgrams-1].get("".join(list(g)))
else:
continue
#print(alexagramlist[numgrams-1].get("".join(list(g))))
return count/max((float(len(s)-(numgrams-1))),1) #subtract numgrams minus 1 because this gives us how many ngrams there are. for 1 ngrams its just len(s) and 2 grams it's len(s)-1
# + id="25LjFSbxgOcA" colab_type="code" outputId="dc229a9c-8b8f-49b5-d7c9-aab7589e9015" executionInfo={"status": "ok", "timestamp": 1590292665515, "user_tz": 420, "elapsed": 2162, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 102}
print(getalexangramscore("google", 1))
print(getalexangramscore("facebook", 1))
print(getalexangramscore("fjdksf", 1))
print(getalexangramscore("fjkdlvzzzzc", 1))
print(getalexangramscore("indeed679ds", 1))
# + id="K3hjMG2Etl0r" colab_type="code" outputId="fabd79a3-eca3-43fc-f788-8fe38d7e17d2" executionInfo={"status": "ok", "timestamp": 1590292665515, "user_tz": 420, "elapsed": 2149, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 102}
print(getalexangramscore("google", 2))
print(getalexangramscore("facebook", 2))
print(getalexangramscore("fjdksf", 2))
print(getalexangramscore("fjkdlvzzzzc", 2))
print(getalexangramscore("indeed679ds", 2))
# + id="yIQl3EHhr5Z3" colab_type="code" outputId="e2646828-21e2-497f-cafc-68c2adcf859a" executionInfo={"status": "ok", "timestamp": 1590292665516, "user_tz": 420, "elapsed": 2132, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
print(getalexangramscore("indeed679ds", 3))
# + id="hPonlueHfOem" colab_type="code" colab={}
def getalexaentropy(s):
tempdic = {}
count = 0
d = ngrams([c for c in s], 1) #get 1 grams and store in d
for g in d:
if not g in tempdic:
tempdic[g] = 1/len(s)
else:
tempdic[g] += 1/len(s)
for k, v in tempdic.items():
count += (v*math.log2(v/alexagramlist[0].get(list(k)[0])))
return count
# + id="6ftnc7AggJGY" colab_type="code" colab={}
def getalexaentropy2grams(s):
tempdic = {}
count = 0
d = ngrams([c for c in s], 2) #get 2 grams and store in d
for g in d:
if not g in tempdic:
tempdic[g] = 1/(len(s)-1) #subtract 1 because there are len(s) - 1 bigrams
else:
tempdic[g] += 1/(len(s)-1)
for k, v in tempdic.items():
count += (v*math.log2(v/alexagramlist[1].get("".join(list(k)))))
return count
# + id="Ou8VS-3bju0K" colab_type="code" outputId="f9a06b06-da07-46b3-a599-9a3b68428ed1" executionInfo={"status": "ok", "timestamp": 1590292665518, "user_tz": 420, "elapsed": 2095, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 85}
print(getalexaentropy('xndsjfls'))
print(getalexaentropy2grams('vddiuiygf'))
print(getalexaentropy('google'))
print(getalexaentropy2grams('google'))
# + [markdown] id="dJuAhe0uBPn0" colab_type="text"
#
# + [markdown] id="M8LK8pKKL-VM" colab_type="text"
# #Helper functions for calculating MCR (Meaningful Characters Ratio)
# + id="9Qh10xYzvbcf" colab_type="code" colab={}
#https://github.com/first20hours/google-10000-english n gram dictionary from google's analysis
def MeaningfulChars(dns):
dns = dns.split(".", 1)[0]
dnslength = len(dns)
dic = {}
for i in range(3,6):
dic = ngramdict(dic, dns, i)
l = list(dic.keys())
malwordlist = []
for i in l:
malwordlist.append("".join(i))
#malwordlist.sort(reverse = True, key=len)
return set(malwordlist).intersection(set(words.words)) #returns the set of meaningful words in the dns domain.
# + id="0M68cn7Bu-4y" colab_type="code" colab={}
def cleanwordlist(wlist, dom): # if a word like 'hi' is contained in a longer word like 'hit', remove 'hi' from the word list
wlist.sort(key=len) #sort so small words come first
#print(wlist)
idx = []
for i in range(len(wlist)):
for j in range(i+1,len(wlist)):
#print(i, j)
if (wlist[i] in wlist[j]):
#print(wlist[i], wlist[j])
idx.append(i)
#print("SCHEDULE INDEX ", i, "FOR DELETION")
break
for i in reversed(idx):
try:
wlist.pop(i)
except:
print("ERROR WITH WORDLIST", wlist, "at index", i, "with domain:", dom)
    return wlist # returns the filtered list (sorted shortest-first, with contained words removed)
# + id="fnZww6UNQ0Pk" colab_type="code" outputId="4245ce22-8bd2-484a-ddde-4eb5819e4f2d" executionInfo={"status": "ok", "timestamp": 1590292665799, "user_tz": 420, "elapsed": 2333, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
cleanwordlist(['stack', 'low', 'ver', 'over', 'flow'], "stackoverflow.com")
# + id="meOw-bzJ4piy" colab_type="code" colab={}
def findMeaningfulWords(wordlist, domain):
#wordlist is in order from largest to smallest. ex: [fluster, coat, mat]
domain = domain.split(".", 1)[0]
count = 0
flag = True
while(flag):
flag2 = True
for word in wordlist:
if domain.startswith(word):
domain = domain[len(word)-1:] #remove len(word) characters from start of domain
count += len(word)
wordlist.remove(word)
flag2 = False
if (flag2):
domain = domain[1:] #remove first character of domain
if len(domain) <= 3:
flag = False
return count
# + id="JK3sZ03Kb80e" colab_type="code" colab={}
def removeWWW(domain):
if domain.startswith("www."):
return domain[4:]
else:
return domain
# + id="bUiMpZs5DT3u" colab_type="code" outputId="5a534d01-4a2b-4adc-ce30-99b889043b9b" executionInfo={"status": "ok", "timestamp": 1590292665801, "user_tz": 420, "elapsed": 2295, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
wordlist = ["tele", "fone", "onep"]
findMeaningfulWords(wordlist, "jtele-fonef") #test function
# + id="UsFLOOwVscAC" colab_type="code" colab={}
# + id="9frxYGZszZNG" colab_type="code" outputId="357cab52-1258-4d9f-8794-480d3a31c13c" executionInfo={"status": "ok", "timestamp": 1590292665802, "user_tz": 420, "elapsed": 2271, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 68}
dom = "fdsfsffsdf.com"
l = list(MeaningfulChars(dom)) #get meaningingful strings in domain
print(l)
l = cleanwordlist(l, dom) #remove strings that can go inside another string in list. EX: hi and hit, we remove hi
print(l)
cnt = findMeaningfulWords(l, dom)
print(cnt)
# + [markdown] id="8OGAQd8YMFM5" colab_type="text"
# #MCR FUNCTION
# 
# + id="ifIYLQqhlzwY" colab_type="code" colab={}
# Tong-Van-Van_Nguyen-Linh-Giang-Detection-of-DGA-Botnet_En-V2.6-2016-Nov10
def MCR(domain): #meaningful characters ratio
domain = removeWWW(domain)
meaningfulwordsindomain = findMeaningfulWords(cleanwordlist(list(MeaningfulChars(domain)), domain), domain)
lengthofdomain = max(float(len(domain.split(".", 1)[0])), 1.0)
zz = meaningfulwordsindomain/lengthofdomain
return min(zz,1.0) #1 is max value
# + id="RDLxFphBk1XA" colab_type="code" outputId="9a971c0c-f42c-4ba5-ff8d-275b428339a4" executionInfo={"status": "ok", "timestamp": 1590292665803, "user_tz": 420, "elapsed": 2246, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 68}
#TESTS
print(MCR("egoogsedatexy-65.com"))
print(MCR("stackoverflow.com"))
print(MCR("sixabcd")) #it counts abc as valid ngram in google's ngram list
# + id="9bQni9xESrDn" colab_type="code" outputId="1f58b034-c96d-41ea-b634-2a761292d4d3" executionInfo={"status": "ok", "timestamp": 1590292740529, "user_tz": 420, "elapsed": 76957, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
p1_data['MCR'] = p1_data.dns.apply(lambda x: MCR(x))
# + id="tfPvT8Vpf-bH" colab_type="code" outputId="ee6ac7a6-97e8-4e72-bf84-5a6b8c625bb1" executionInfo={"status": "ok", "timestamp": 1590292740531, "user_tz": 420, "elapsed": 76946, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
p1_data
# + id="xTVWKP6auCdC" colab_type="code" outputId="65d5d96a-e1c9-49bd-8544-565e0be0116c" executionInfo={"status": "ok", "timestamp": 1590292740532, "user_tz": 420, "elapsed": 76936, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 102}
p1_data.loc[p1_data['MCR'].idxmax()]
# + id="Aa8SeEHYgQ2V" colab_type="code" outputId="14381a15-da88-4667-c8d7-548dd9ec9c96" executionInfo={"status": "ok", "timestamp": 1590292740534, "user_tz": 420, "elapsed": 76927, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
mask = p1_data['family'].isin([0])
df = p1_data[mask]
df
df2 = p1_data[~mask]
df2
# + id="JwXPg2s-q9UN" colab_type="code" outputId="764d64f0-7afa-42f3-9c7c-2ee2a0b1a37a" executionInfo={"status": "ok", "timestamp": 1590292757006, "user_tz": 420, "elapsed": 93388, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 51}
ngt2['MCR'] = ngt2.domain.apply(lambda x: MCR(x))
# + id="WoVPQmLLr45N" colab_type="code" colab={}
ngt2.rename(columns={'domain': 'dns'}, inplace=True)
df3 = pd.concat([df,ngt2])
# + id="b9f2Tob7quwl" colab_type="code" outputId="c5de9458-ec41-4d48-a573-d7cb8319282d" executionInfo={"status": "ok", "timestamp": 1590292757009, "user_tz": 420, "elapsed": 93369, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='MCR', data=df3) #family 0 is non malicious. families 1-10 are malicious
# + id="h93qJWF6xcrX" colab_type="code" colab={}
# df3 = df3.drop('dnsLength', 1)
# df3
#df3['dnsLength'] = df3.dns.apply(lambda x: len(x))
# + id="4sE2cnf1yUY5" colab_type="code" outputId="ef8af103-89fc-4f9c-890e-68964020b449" executionInfo={"status": "ok", "timestamp": 1590292757017, "user_tz": 420, "elapsed": 93353, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
#df3.dns
df3['dnsLength'] = df3.dns.apply(lambda x: len(str(x)))
df3
# + id="X8rXRvoErjNP" colab_type="code" outputId="d6de426f-c9fa-425d-b083-a5eac5045175" executionInfo={"status": "ok", "timestamp": 1590292757019, "user_tz": 420, "elapsed": 93339, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
df4 = ngt2.copy()
df4 = pd.concat([safedomain, df4])
df4
# + id="PCEw2SPGtM8F" colab_type="code" colab={}
# + id="LayKjDfy1qE2" colab_type="code" outputId="1c506a6b-ec29-4300-e897-9a1a952eb5d5" executionInfo={"status": "ok", "timestamp": 1590292757022, "user_tz": 420, "elapsed": 93317, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='dnsLength', data=df3) #family 0 is non malicious. families 1-10 are malicious.
#this plot shows that some of the families have high variance in their dns length; length seems very useful for family 7, however.
# + id="-sg5VX4QswtM" colab_type="code" colab={}
#ngt2 is dataframe with all the malicious families 1-10
#p1_data is dataframe with all of the domains
# + id="yPtzAgyika3I" colab_type="code" outputId="b4f3703d-5f38-459e-a236-e696bfeda7e2" executionInfo={"status": "ok", "timestamp": 1590292757025, "user_tz": 420, "elapsed": 93290, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
ngt2
# + id="mEpEufZ2hEHv" colab_type="code" colab={}
alexadataframe = ngt2.copy()
# + id="WagHV8isham9" colab_type="code" outputId="f43fecc7-cd36-47cd-e8c9-03aa7de62088" executionInfo={"status": "ok", "timestamp": 1590292757028, "user_tz": 420, "elapsed": 93264, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
alexadataframe
# + id="bROdFJVEkmMz" colab_type="code" colab={}
ngt2['1gramscore'] = ngt2.dns.apply(lambda x: getngramscore(cleandomain(x),1))
ngt2['2gramscore'] = ngt2.dns.apply(lambda x: getngramscore(cleandomain(x), 2))
ngt2['entropy1gram'] = ngt2.dns.apply(lambda x: getentropy(cleandomain(x)))
ngt2['entropy2grams'] = ngt2.dns.apply(lambda x: getentropy2grams(cleandomain(x)))
# + id="fOUjcq0ZlJ6K" colab_type="code" outputId="8abb27db-5cd2-4db1-c296-56a3adaa29f1" executionInfo={"status": "ok", "timestamp": 1590292757460, "user_tz": 420, "elapsed": 93668, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
ngt2
# + id="riuUjO63mniV" colab_type="code" outputId="d28a6fd2-ea91-48a1-c648-ba29ed7c6e1c" executionInfo={"status": "ok", "timestamp": 1590292757829, "user_tz": 420, "elapsed": 94024, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='1gramscore', data=ngt2)
# + id="prTVTXZYm94r" colab_type="code" outputId="bc64f66a-cf21-4030-ee95-360fc9a8f390" executionInfo={"status": "ok", "timestamp": 1590292757830, "user_tz": 420, "elapsed": 94011, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='2gramscore', data=ngt2)
# + id="DbdMnxVWnEG4" colab_type="code" outputId="f8bbe488-e542-42ef-8f2e-81fae1eaa5e8" executionInfo={"status": "ok", "timestamp": 1590292758217, "user_tz": 420, "elapsed": 94385, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='entropy1gram', data=ngt2)
# + id="9zezl9RynIZ_" colab_type="code" outputId="ea1799a2-2ce6-4f5c-e754-3c34979035b5" executionInfo={"status": "ok", "timestamp": 1590292759780, "user_tz": 420, "elapsed": 95936, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='entropy2grams', data=ngt2)
# + id="3cVdZ7GIyU-O" colab_type="code" outputId="c255c591-3f8f-4396-b813-906a898a2d4d" executionInfo={"status": "ok", "timestamp": 1590292759781, "user_tz": 420, "elapsed": 95925, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 419}
safedomain
# + [markdown] id="kZ4XADcZv7nF" colab_type="text"
# #These are poor results. We will now try with the Alexa top million. Perhaps the original data is biased.
# + id="aj8hk1bHvj1k" colab_type="code" outputId="516a2cef-1b54-47b9-959a-56ce46ac4707" executionInfo={"status": "ok", "timestamp": 1590292760141, "user_tz": 420, "elapsed": 96271, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 51}
alexadataframe['1gramscore'] = alexadataframe.dns.apply(lambda x: math.log(getalexangramscore(cleandomain(x),1)+1))
alexadataframe['2gramscore'] = alexadataframe.dns.apply(lambda x: math.log(getalexangramscore(cleandomain(x),2)+1))
alexadataframe['3gramscore'] = alexadataframe.dns.apply(lambda x: math.log((getalexangramscore(cleandomain(x),3)+1)))
# + id="uskxD2rqv2ar" colab_type="code" outputId="8fa46d21-8e7e-4508-e6ee-b1846d099b07" executionInfo={"status": "ok", "timestamp": 1590292760142, "user_tz": 420, "elapsed": 96249, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='1gramscore', data=alexadataframe)
# + id="aFc5maGFv8dk" colab_type="code" outputId="3d45eceb-cf5c-4b67-b6df-b0e820f27be5" executionInfo={"status": "ok", "timestamp": 1590292760143, "user_tz": 420, "elapsed": 96233, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='2gramscore', data=alexadataframe)
# + id="p8okKE4Gv-Hp" colab_type="code" outputId="c873bd4a-8810-4cc8-f616-c046a634ee30" executionInfo={"status": "ok", "timestamp": 1590292760471, "user_tz": 420, "elapsed": 96543, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='3gramscore', data=alexadataframe)
# + id="aiYskLDRvmT9" colab_type="code" outputId="942b3bb8-c068-4f3f-e1f3-736f569ee3d2" executionInfo={"status": "ok", "timestamp": 1590292760733, "user_tz": 420, "elapsed": 96789, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 51}
alexadataframe['4gramscore'] = alexadataframe.dns.apply(lambda x: math.log(100*getalexangramscore(cleandomain(x),4)+1))
# + id="EKsMYX6cv_f3" colab_type="code" outputId="728f531b-6fb0-4aa6-d631-c3e787c84e9c" executionInfo={"status": "ok", "timestamp": 1590292761037, "user_tz": 420, "elapsed": 97074, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='4gramscore', data=alexadataframe)
# + id="xDVQdNhnvp7o" colab_type="code" outputId="cf5ad1ad-e783-4411-f8ab-9b100420f095" executionInfo={"status": "ok", "timestamp": 1590292905374, "user_tz": 420, "elapsed": 591, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 51}
alexadataframe['5gramscore'] = alexadataframe.dns.apply(lambda x: math.log(100*getalexangramscore(cleandomain(x),5)+1))
# + id="sRzkDO3CwBn9" colab_type="code" outputId="959740d3-eadb-42a7-cc6a-e904d8cd333b" executionInfo={"status": "ok", "timestamp": 1590292907293, "user_tz": 420, "elapsed": 906, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 299}
sns.boxplot(x='family', y='5gramscore', data=alexadataframe)
# + id="7_q3a0mBvrm2" colab_type="code" colab={}
alexadataframe['entropy1gram'] = alexadataframe.dns.apply(lambda x: getalexaentropy(cleandomain(x)))
alexadataframe['entropy2grams'] = alexadataframe.dns.apply(lambda x: getalexaentropy2grams(cleandomain(x)))
# + id="ifOx2e9_wHCH" colab_type="code" outputId="85028bb5-dbad-4e13-ffa8-ad250ad8085b" executionInfo={"status": "ok", "timestamp": 1590292762439, "user_tz": 420, "elapsed": 98423, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='entropy1gram', data=alexadataframe)
# + id="BODAJh-AwJI2" colab_type="code" outputId="8d748113-953b-4f8b-d579-b55dc8bdf931" executionInfo={"status": "ok", "timestamp": 1590292763003, "user_tz": 420, "elapsed": 98974, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
sns.boxplot(x='family', y='entropy2grams', data=alexadataframe)
# + id="2udQxu6OvxM7" colab_type="code" colab={}
alexadataframe['dnsLength'] = alexadataframe.dns.apply(lambda x: len(str(x)))
# + id="KvDRABdOzvXF" colab_type="code" outputId="fd544e83-3606-47a3-c615-b3ad26153529" executionInfo={"status": "ok", "timestamp": 1590292763869, "user_tz": 420, "elapsed": 99813, "user": {"displayName": "Ameer Hussain", "photoUrl": "", "userId": "00842326033074723912"}} colab={"base_uri": "https://localhost:8080/", "height": 298}
sns.boxplot(x='family', y='dnsLength', data=alexadataframe)
# + [markdown] id="-5Y5UVRV2FDn" colab_type="text"
# #We achieve improved results from using Alexa top million domains
| 204,451 |
/Python first class Homework (PIAIC)..ipynb
|
f81261c9def05566869a41f9859892801677ea96
|
[] |
no_license
|
i-israr/First-class-homework-python-
|
https://github.com/i-israr/First-class-homework-python-
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,106 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
print("Israr ul haq")#simple print command
#string and integer data types
Reg_no = 1712202
name = "Israr ul haq"
father_name = "Muahmamnd Riaz"
university = "SZABIST"
#now the above variables can be used further in the program, e.g. we can initialize a variable with multiple lines
message ="""
PIAIC Islamabad
Registration no : {}
Name: {}
Father: {}
University: {}
""".format(Reg_no,name,father_name,university)
#.format is a predefined string method which inserts the values in sequence according to the {} placeholders
print(message)
# +
#input()......This function allows taking input from the user
print("Enter your name:")
x = input()
print("My name is" , x)
# +
#max()..... Returns the largest number in the list
#min()..... Returns the smallest number in the list
list= [5,3,2,7,8,1]
print("The largest no from the list is :" , max(list))
list= [7,4,3,5,8,2,3]
print("The smallest no from the list is :" , min(list))
# -
# abs()....This function returns the absolute value of a number
a = -10
print(abs(a))
| 1,284 |
/deep_learning_train_and_evaluate.ipynb
|
dbedd264bc9652d2f4a2e3511772babdf6516a1b
|
[] |
no_license
|
ValerianeSlge/Centaure
|
https://github.com/ValerianeSlge/Centaure
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 726,116 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def read_file(path):
with open(path,'r') as file:
return file.readlines()
# # Quick Find
# +
from Assignment1 import QuickFind
import time
paths = ['mediumUF']
for path in paths:
lines = read_file(path + '.txt')
n, *rest = lines
qf = QuickFind(int(n))
start = int(round(time.time()*1000))
for line in rest:
p, q = [int(x) for x in line.split()]
qf.union(p,q)
end = int(round(time.time()*1000))
print(path + ':', str(qf.count()) + ' trees,', end-start,'ms')
# -
# # Quick Union
# +
from Assignment1 import QuickUnion
import time
paths = ['mediumUF']
for path in paths:
lines = read_file(path + '.txt')
n, *rest = lines
qu = QuickUnion(int(n))
start = int(round(time.time()*1000))
for line in rest:
p, q = [int(x) for x in line.split()]
qu.union(p,q)
end = int(round(time.time()*1000))
print(path + ':', str(qu.count) + ' trees,', end-start,'ms')
# -
# # Weighted Quick Union
# +
from Assignment1 import WeightedQuickUnion
import time
paths = ['mediumUF','largeUF']
for path in paths:
lines = read_file(path + '.txt')
n, *rest = lines
wqu = WeightedQuickUnion(int(n))
start = int(round(time.time()*1000))
for line in rest:
p, q = [int(x) for x in line.split()]
wqu.union(p,q)
end = int(round(time.time()*1000))
print(path + ':', str(wqu.count) + ' trees,', end-start,'ms')
)
return images, labels
liste_photos = listdir("/content/circuit")
tab_img, tab_lab = shape(liste_photos)
print("lab", tab_lab)
print ("nb images", len(tab_img))
# + id="lm93syYOAPEK" colab_type="code" outputId="f4afbf5b-f815-4150-f330-5ffd76effa1e" colab={"base_uri": "https://localhost:8080/", "height": 269}
# Pre-process the images (scale pixel values to [0, 1])
tab_img=np.array(tab_img)
tab_lab=np.array(tab_lab)
tab_img=tab_img/255.0
#exemple
plt.figure()
plt.imshow(tab_img[0])
plt.colorbar()
plt.grid(False)
plt.show()
# + id="asTI1Rf6Gvtn" colab_type="code" outputId="2a819611-640b-4697-c507-8050bd72b7bf" colab={"base_uri": "https://localhost:8080/", "height": 589}
# Check the data
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(tab_img[i], cmap=plt.cm.binary)
plt.xlabel(tab_lab[i])
plt.show()
# + id="nAfA8EitSOdk" colab_type="code" outputId="90064dec-80a6-446b-a10f-c949093198bc" colab={"base_uri": "https://localhost:8080/", "height": 34}
tab_img.shape # Check the image dimensions
# + id="yiPLe2HQHhBk" colab_type="code" colab={}
# Build the model
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(1356,1017,3)), # reshapes the data (from a 3-dimensional array to a 1-dimensional array)
keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(128) # 5 output nodes? Since there are 5 directions
])
# + id="6FijwnOWIjew" colab_type="code" colab={}
# Compile the model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + id="Tkb-s53YLVyY" colab_type="code" outputId="c58bb0b2-aa60-409b-d36e-0a5df7a34581" colab={"base_uri": "https://localhost:8080/", "height": 370}
# Feed (train) the model
model.fit(tab_img, tab_lab, batch_size=64, epochs=10)
# + id="fiVFnGFYLpDG" colab_type="code" colab={}
# Evaluate accuracy
test_loss, test_acc = model.evaluate(tab_img, tab_lab, verbose=2)
print('\nTest accuracy:', test_acc)
# + id="KZhHS2ZL6NOe" colab_type="code" colab={}
model.save('circuit1.h5')
# + id="WlB6Vkk-2ml8" colab_type="code" colab={}
from google.colab import files
files.download('circuit1.h5')
# + id="yOSlrqz8QTPX" colab_type="code" colab={}
# Reload the saved model
from google.colab import files
files.upload()
# + id="e-b0JRSn7Pa1" colab_type="code" outputId="cff40e63-45fd-43c8-ad7c-e63e62e3ef73" colab={"base_uri": "https://localhost:8080/", "height": 269}
# Load the circuit corresponding to the requested destination
dossier='/content/'+ 'circuit1.h5'
print(dossier)
# Recreate the exact same model, including its weights and the optimizer
new_model = tf.keras.models.load_model('circuit1.h5')
# Show the model architecture
new_model.summary()
# + id="FTETrzr4Wahc" colab_type="code" colab={}
# Import the validation images and preprocess them
from google.colab import files
files.upload()
# !unzip /content/DP_Valid.zip
# + id="iQNWryz88ozC" colab_type="code" outputId="f3100d72-7f5d-4dc2-9831-a5bdd55d8a63" colab={"base_uri": "https://localhost:8080/", "height": 286}
def shape2(liste_photos):
images=[]
labels=[]
for i in liste_photos:
#print ("i1",i)
dossier="/content/DP_Valid/"+i
        img= load_img(dossier) # load the image in PIL format
        img = img_to_array(img) # convert the PIL image to a NumPy array
        images.append(img) # append the image
        lab= int ((i.split('_')[1]).split('+')[0]) # extract the x value corresponding to this image
labels.append(lab)
return images, labels
liste_photos2 = listdir("/content/DP_Valid")
tab_img_val, tab_lab_val = shape2(liste_photos2)
print ("nb images", len(tab_img_val))
tab_img_val=np.array(tab_img_val)
tab_lab_val=np.array(tab_lab_val)
tab_img_val=tab_img_val/255.0
plt.figure()
plt.imshow(tab_img_val[1])
plt.colorbar()
plt.grid(False)
plt.show()
# + id="3tfW9vevF-ey" colab_type="code" colab={}
# Predictions
probability_model = tf.keras.Sequential([new_model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(tab_img_val)
# + id="DjfGPzlRkpVS" colab_type="code" colab={}
direction={0:"gauche",31:"diagonale gauche",63:"tout droit", 95:"diagonale droite", 127:"droite"}
joy= {0:(0,0), 31:(31,31), 63:(63,127), 95:(95,95), 127:(127,127)}
# + id="I9OMU_4-GrCk" colab_type="code" outputId="3cacfe37-0872-4c5b-cd91-dac815e6b00f" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Evaluate the predictions
# Note: np.argmax over the whole (n_images, 128) prediction array returns an index into the
# flattened array, so this lookup is only meaningful when a single prediction is evaluated.
dir = joy[np.argmax(predictions)]
print(dir)
# + id="KJQmWUwCIIlS" colab_type="code" colab={}
# Visualize the predictions
def plot_image(i, predictions_array, true_label, img):
predictions_array,true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(direction[predicted_label],
100*np.max(predictions_array),
direction[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
# + id="2pL3c6t0IXlP" colab_type="code" outputId="c39b8d3f-9602-40d7-ccf1-374548cb173a" colab={"base_uri": "https://localhost:8080/", "height": 561}
# Display the image, the prediction and the true label
# Correct predictions are shown in blue, wrong predictions in red
num_rows = 4
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], tab_lab_val, tab_img_val)
plt.tight_layout()
plt.show()
| 7,980 |
/answer.ipynb
|
e9190484b6989c8ecb01534c1c070aa494d12f01
|
[] |
no_license
|
whfh1359/HRNet
|
https://github.com/whfh1359/HRNet
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 273,561 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tightends Yds Rank
import pandas as pd
tes = pd.read_csv('../Capstone_csv_file/tes_stats_rank_19-20', index_col = 'NAME')
tes.head()
tes = tes[['YDS', 'YDS_rank']]
tes = tes.sort_values('YDS_rank')
tes.to_csv('../Capstone_csv_file/tes_yds_rank_19-20')
me'])
df_answer
df_none = df_answer[(df_answer['real_letter_R']==0) &(df_answer['real_letter_L']==0)]
df_none
# +
from matplotlib import rcParams
# figure size in inches
rcParams['figure.figsize'] = 11.7,8.27
# -
fig, ax = plt.subplots(ncols=2)
#sns.boxplot("letter_R", data=df_sort, ax=ax[0])
sns.histplot(df['real_letter_R'], ax=ax[0])
sns.histplot(df['real_letter_L'], ax=ax[1])
print("df_none Count:", len(df_none))
# +
fig, ax = plt.subplots(1, 2, figsize=(15,7))
# subplot: draw several charts in one figure / plt.subplots(rows, cols, figsize=(width, height))
df_answer['real_letter_R'].value_counts().plot.pie(explode=[0,0.1], autopct='%1.1f%%',ax=ax[0],shadow=True,startangle=90)
# pd.Series.value_counts(): count occurrences of each unique value / .pie(explode = gap between slices, autopct = percentage shown for each category, shadow = drop shadow, startangle = starting angle of the pie)
ax[0].set_title('real_letter_R') # pie chart title
ax[0].set_ylabel('') # pie chart
df_answer['real_letter_L'].value_counts().plot.pie(explode=[0,0.1], autopct='%1.1f%%',ax=ax[1],shadow=True,startangle=90)
# pd.Series.value_counts(): count occurrences of each unique value / .pie(explode = gap between slices, autopct = percentage shown for each category, shadow = drop shadow, startangle = starting angle of the pie)
ax[1].set_title('real_letter_L') # pie chart title
ax[1].set_ylabel('') # pie chart
# 0 = death, 1 = survival
# 61.6% death, 38.4% survival
# -
pd.crosstab(df_answer['real_letter_R'], df_answer['real_letter_L'], margins=True).style.background_gradient(cmap='summer_r')
df_pred = pd.read_csv('predict.csv')
df_pred = df_pred.drop(['Unnamed: 0'], axis = 1)
df_pred[df_pred['file_name']==1408]
temp = list(map(int, df_pred['file_name']))
temp[:3]
#for i in range(len(temp)):
# #print(i)
# new = temp[i]-1400
# temp[i] = new
#df_pred['file_name'] = temp
df_pred = df_pred.sort_values(by = 'file_name', ascending = True)
df_pred[:10]
df_pred[df_pred['file_name'] == 1408]
# +
#print(df_pred.dtypes)
# -
name_pred = list(df_pred['file_name'])
df_pred['middle'] = (df_pred['right_medial_x'] + df_pred['left_medial_x'])/2
middle_pred = list(map(int, df_pred['middle']))
r_prob_pred = list(df_pred['letter_R_prob'])
l_prob_pred = list(df_pred['letter_L_prob'])
right_medial_pred= list(zip(df_pred['right_medial_x'], df_pred['right_medial_y']))
left_medial_pred = list(zip(df_pred['left_medial_x'], df_pred['left_medial_y']))
right_f = list(zip(df_pred['right_f_x'], df_pred['right_f_y']))
left_f = list(zip(df_pred['left_f_x'], df_pred['left_f_y']))
right_letter = list(zip(df_pred['letter_R_x'], df_pred['letter_R_y']))
left_letter = list(zip(df_pred['letter_L_x'], df_pred['letter_L_y']))
letter_R_x_pred = list(df_pred['letter_R_x'])
letter_L_x_pred = list(df_pred['letter_L_x'])
file_view_pred = list(zip(name, middle, right_medial_pred, left_medial_pred, right_f, left_f, r_prob_pred, l_prob_pred, right_letter, left_letter))
# +
#result = math.sqrt( math.pow(df_pred['letter_R_x']- x2, 2) + math.pow(y1 - y2, 2))
# -
df_pred.head(10)
answer_distance = pd.read_csv('answer_distance.csv')
answer_distance['filename'] = answer_distance['filename'].add(1400)
answer_distance.head(7)
right_x_dist_answer= list(answer_distance['right_medial_x'])
right_y_dist_answer= list(answer_distance['right_medial_y'])
right_f_x_dist_answer= list(answer_distance['right_f_x'])
right_f_y_dist_answer= list(answer_distance['right_f_y'])
benchmark = []
for i in range(len(right_x_dist)):
result = math.sqrt(math.pow(right_x_dist_answer[i] - right_f_x_dist_answer[i], 2) + math.pow(right_y_dist_answer[i] - right_f_y_dist_answer[i], 2))
    benchmark.append(int(result*0.1)) # scale the distance to 1/10
benchmark[:3] # this is the benchmark: 1/10 of the distance between right_medial and right_f
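# As an aside, the element-wise loops above can be vectorized with NumPy; the sketch below
# computes the same benchmark (1/10 of the right_medial-to-right_f Euclidean distance)
# directly from the answer_distance columns used above.
# +
import numpy as np

dx = answer_distance['right_medial_x'] - answer_distance['right_f_x']
dy = answer_distance['right_medial_y'] - answer_distance['right_f_y']
benchmark_vec = (0.1 * np.hypot(dx, dy)).astype(int)  # same scaling as the loop above
benchmark_vec[:3]
# -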
letter_R_x_dist_answer= list(answer_distance['letter_R_x'])
letter_R_y_dist_answer= list(answer_distance['letter_R_y'])
letter_L_x_dist_answer= list(answer_distance['letter_L_x'])
letter_L_y_dist_answer= list(answer_distance['letter_L_y'])
letter_R_x_dist_pred= list(df_pred['letter_R_x'])
letter_R_y_dist_pred= list(df_pred['letter_R_y'])
letter_L_x_dist_pred= list(df_pred['letter_L_x'])
letter_L_y_dist_pred= list(df_pred['letter_L_y'])
dist_R= []
for i in range(len(right_x_dist)):
result = math.sqrt(math.pow(letter_R_x_dist_answer[i] - letter_R_x_dist_pred[i], 2) + math.pow(letter_R_y_dist_answer[i] - letter_R_y_dist_pred[i], 2))
dist_R.append(int(result))
dist_R[:3]
dist_L= []
for i in range(len(right_x_dist)):
result = math.sqrt(math.pow(letter_L_x_dist_answer[i] - letter_L_x_dist_pred[i], 2) + math.pow(letter_L_y_dist_answer[i] - letter_L_y_dist_pred[i], 2))
dist_L.append(int(result))
dist_L[:3]
r_prob_pred[6]
# +
answer = []
answer.append(['filename','pred_letter_R', 'pred_letter_L'])
for i in range(len(name)):
temp = max(r_prob_pred[i], l_prob_pred[i])
if (temp < 0.7):
answer.append([name[i],0,0])
elif ((r_prob_pred[i] < 0.7) & (middle_pred[i] < letter_L_x_pred[i])):
answer.append([name[i],0,1])
elif ((l_prob_pred[i] < 0.7) & (middle_pred[i] > letter_R_x_pred[i])):
answer.append([name[i],1,0])
else:
answer.append([name[i],1,1])
# -
#answer
with open('predict_val_0.7.csv', 'w') as file:
write = csv.writer(file)
write.writerows(answer)
#answer
with open('predict_val.csv', 'w') as file:
write = csv.writer(file)
write.writerows(answer)
answer_val = pd.read_csv('answer_val.csv')
answer_val['filename'] = answer_val['filename'].add(1400)
answer_val[:10]
# +
fig, ax = plt.subplots(1, 2, figsize=(15,7))
# subplot: draw several charts in one figure / plt.subplots(rows, cols, figsize=(width, height))
answer_val['real_letter_R'].value_counts().plot.pie(explode=[0,0.1], autopct='%1.1f%%',ax=ax[0],shadow=True,startangle=90)
# pd.Series.value_counts(): count occurrences of each unique value / .pie(explode = gap between slices, autopct = percentage shown for each category, shadow = drop shadow, startangle = starting angle of the pie)
ax[0].set_title('answer_val_real_letter_R') # pie chart title
ax[0].set_ylabel('') # pie chart
answer_val['real_letter_L'].value_counts().plot.pie(explode=[0,0.1], autopct='%1.1f%%',ax=ax[1],shadow=True,startangle=90)
# pd.Series.value_counts(): count occurrences of each unique value / .pie(explode = gap between slices, autopct = percentage shown for each category, shadow = drop shadow, startangle = starting angle of the pie)
ax[1].set_title('answer_val_real_letter_L') # pie chart title
ax[1].set_ylabel('') # pie chart
# 0 = death, 1 = survival
# 61.6% death, 38.4% survival
# -
pd.crosstab(answer_val['real_letter_R'], answer_val['real_letter_L'], margins=True).style.background_gradient(cmap='summer_r')
answer_val_real_letter_R = list(answer_val['real_letter_R'])
answer_val_real_letter_L = list(answer_val['real_letter_L'])
predict_val =pd.read_csv('predict_val_0.8.csv')
predict_val.rename(columns = {'filename' : 'pred_filename'}, inplace = True)
predict_val[:10]
# +
fig, ax = plt.subplots(1, 2, figsize=(15,7))
# subplot: draw several charts in one figure / plt.subplots(rows, cols, figsize=(width, height))
predict_val['pred_letter_R'].value_counts().plot.pie(explode=[0,0.1], autopct='%1.1f%%',ax=ax[0],shadow=True,startangle=90)
# pd.Series.value_counts(): count occurrences of each unique value / .pie(explode = gap between slices, autopct = percentage shown for each category, shadow = drop shadow, startangle = starting angle of the pie)
ax[0].set_title('predict_val_real_letter_R') # pie chart title
ax[0].set_ylabel('') # pie chart
predict_val['pred_letter_L'].value_counts().plot.pie(explode=[0,0.1], autopct='%1.1f%%',ax=ax[1],shadow=True,startangle=90)
# pd.Series.value_counts(): count occurrences of each unique value / .pie(explode = gap between slices, autopct = percentage shown for each category, shadow = drop shadow, startangle = starting angle of the pie)
ax[1].set_title('predict_val_real_letter_L') # pie chart title
ax[1].set_ylabel('') # pie chart
# 0 = death, 1 = survival
# 61.6% death, 38.4% survival
# -
pd.crosstab(predict_val['pred_letter_R'], predict_val['pred_letter_L'], margins=True).style.background_gradient(cmap='summer_r')
predict_val_pred_letter_R = list(predict_val['pred_letter_R'])
predict_val_pred_letter_L = list(predict_val['pred_letter_L'])
total = pd.concat([answer_val, predict_val], axis = 1)
total[:10]
len(total[(total['real_letter_R'] == total['pred_letter_R']) & (total['real_letter_L'] == total['pred_letter_L'])])
len(total[(total['real_letter_R'] == total['pred_letter_R']) & (total['real_letter_L'] == total['pred_letter_L'])])/len(total)
len(total[(total['real_letter_R'] != total['pred_letter_R']) & (total['real_letter_L'] != total['pred_letter_L'])])
total[(total['real_letter_R'] != total['pred_letter_R']) & (total['real_letter_L'] != total['pred_letter_L'])]
len(total[(total['real_letter_R'] != total['pred_letter_R']) | (total['real_letter_L'] != total['pred_letter_L'])])
wrong = total[(total['real_letter_R'] != total['pred_letter_R']) | (total['real_letter_L'] != total['pred_letter_L'])]
wrong
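# The agreement counts above can also be summarized with scikit-learn (a sketch, assuming
# scikit-learn is installed; it is not used elsewhere in this notebook).
# +
from sklearn.metrics import accuracy_score, confusion_matrix

print('letter_R accuracy:', accuracy_score(total['real_letter_R'], total['pred_letter_R']))
print('letter_L accuracy:', accuracy_score(total['real_letter_L'], total['pred_letter_L']))
print(confusion_matrix(total['real_letter_R'], total['pred_letter_R']))
# -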
name_list = list(wrong['pred_filename'])
name_list[0]
file_view = list(file_view)
file_view[:1]
# +
temp_2 = []
for i in range(len(file_view)):
if int(file_view[i][0]) in name_list:
temp_2.append(file_view[i])
print(temp_2)
# -
r = (255,0,0)
g = (0,255,0)
b = (0,0,255)
w = (255,255,255)
# +
import cv2
font = cv2.FONT_HERSHEY_DUPLEX
# +
for i in temp_2:
filename ='00000000' + str(i[0])
print(filename)
img = cv2.imread(filename + '.jpg')
temp = max(i[6], i[7])
print(i[6], i[7])
draw_r = (int(i[1]/2),1700)
draw_l = (int(draw_r[0]) + i[1],1700)
print(draw_r,draw_l)
if (temp < 0.5):
print("I don't know")
cv2.putText(img, "I don't know", draw_r, font, 2, (0,0,155), 2, cv2.LINE_AA)
cv2.putText(img, "I don't know", draw_l, font, 2, (0,0,155), 2, cv2.LINE_AA)
elif (i[6]<i[7]):
print("R draw")
cv2.putText(img, "R", draw_r, font, 2, (0,0,155), 2, cv2.LINE_AA)
#cv2.putText(img, "L", draw_l, 1, 2, (255,255,0), 1, 8)
elif (i[6]>i[7]):
print("L draw")
cv2.putText(img, "L", draw_l, font, 2, (0,0,155), 2, cv2.LINE_AA)
p_img = cv2.line(img, i[2], i[2], r, 40)
p_img = cv2.line(img, i[3], i[3], g, 40)
p_img = cv2.line(img, i[4], i[4], b, 40)
p_img = cv2.line(img, i[5], i[5], w, 40)
p_img = cv2.line(img, i[8], i[8], r, 40)
p_img = cv2.line(img, i[9], i[9], b, 40)
cv2.imwrite('_test'+ filename + '.jpg', p_img)
# -
answer_distance = pd.read_csv('answer_distance.csv')
answer_distance['filename'] = answer_distance['filename'].add(1400)
answer_distance
| 10,750 |
/Nikitha_Analysis/AirbnbNYmerge.ipynb
|
511d8676973e37e6231ae56d1547c792ef6f585e
|
[] |
no_license
|
shwetaaaa/pnns_project_1
|
https://github.com/shwetaaaa/pnns_project_1
| 0 | 0 | null | 2018-07-28T17:49:46 | 2018-07-28T17:49:28 | null |
Jupyter Notebook
| false | false |
.py
| 4,657 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Airbnb NY cleaning data
import glob
import pandas as pd
# Import all csv files for NY BNB
ny_df = pd.concat([pd.read_csv(f) for f in glob.glob('tomslee_airbnb_new_york*.csv')], ignore_index = True)
#Extracting relevant columns
ny_df=ny_df[['room_id','host_id','room_type','neighborhood','reviews','overall_satisfaction','accommodates','bedrooms','price','minstay','latitude','longitude','last_modified']]
#Check for NA's
ny_df.count()
#Drop NA
ny_dropna=ny_df.dropna()
#Review count
ny_dropna.count()
# Select room_id's with occurrences greater than 5
room_id_count=ny_dropna['room_id'].value_counts().reset_index(name='counts')
filt_room_id=room_id_count[room_id_count['counts'] > 5]
# New dataframe with filtered room_id's
abnb_ny=ny_dropna.loc[ny_dropna['room_id'].isin(filt_room_id['index'])]
abnb_ny.count()
abnb_ny.to_csv("ABNB_NY.csv")
ple to tell you if a word is positive or negative. The difficult part is figuring out how to access such an audience that will also accept the remuneration you are able to provide.
#
# Crowdsourcing is generally viewed as the answer to constructing a `training` dataset such as this. There are a number of such platforms, but the oldest and most often used is still Amazon's Mechanical Turk. Given that, we will work through how to use this service programmatically.
# # The process of getting set up
#
# Having an account with Amazon where you can buy shampoo and beef jerky is insufficient to use Mechanical Turk. MT resides (more or less) as a part of its enterprise services (which are labelled AWS-Amazon Web Services). To be able to be a requester on MT you must:
#
# 1. Sign up for an AWS account (aws.amazon.com)
# 2. Sign up for an MTurk requester account (requester.mturk.com)
# 3. Link Your MTurk account to your aws account (https://requester.mturk.com/developer)
# 4. Sign up for MTurk Sandbox, which is where you can test your forms without paying actual people (requestersandbox.mturk.com), and link your sandbox to the aws account (requestersandbox.mturk.com/developer)
# 5. Set up the IAM (Identity and Access Management) User
# # Installation and basic AWS Access
#
# Fortunately, there is a python package to manage access to AWS (boto3). First you will need to install this package with pip
# !pip install boto3
# Then you will need to configure a credentials file that goes in your user directory (`~/.aws/credentials`) with your IAM account credentials. The `credentials` file should be structured as:
#
# <pre>
# [default]
# aws_access_key_id = YOUR_KEY
# aws_secret_access_key = YOUR_SECRET
# </pre>
# Then set up a configuration file (`~/.aws/config`) to tell Amazon which region you want your services started in.
#
# <pre>
# [default]
# region=us-east-1
# </pre>
# And then you can check your sandbox balance
# +
import boto3
MTURK_SANDBOX = 'https://mturk-requester-sandbox.us-east-1.amazonaws.com'
mturk = boto3.client('mturk', endpoint_url = MTURK_SANDBOX)
# -
#Should have 10,000 available
print("Available sandbox balance: {0}".format(mturk.get_account_balance()['AvailableBalance']))
# If you instead want to connect to your actual MTurk account and marketplace, you can just leave out the endpoint url
real_mturk = boto3.client('mturk')
print("My real money: {0}".format(real_mturk.get_account_balance()['AvailableBalance']))
# But I don't want to actually pay money yet
del real_mturk
# # Terminology
#
# **Worker**: Anyone on the other side of the MTurk marketplace. Workers can view all open assignments and choose which ones to work on.
#
# **HIT**: Human Intelligence Task - the single unit of work that a Turker would accept. This HIT could be a single task (i.e. "What is in this image") or a series of tasks (although that will increase length of time to complete and pay should scale with that factor). For the sake of further discussion we will say that labelling 1 word is 1 HIT and you have 100 words you want to label.
#
# **Assignment**: Number of workers that should complete each HIT. If you set Assignment to 2 for 100 word HITs, then you would have 200 assignments. You will want to have an assignment of 3 or more when labelling words to increase confidence in the assigned score.
# # Hit coding
#
# Hits are ::drumroll please::.....HTML templates :( (Technically it is a HTML page that will be wrapped inside XML, so that's why we save it as `xml`)
#
# That's right, you'll need to create an HTML page for your HIT that will be submitted. In its most basic form, it is relatively simple.
#
# To make life easier, I separate this into 3 parts: `turk_hit_frontmatter.xml`, `turk_question.html`,`turk_hit_backmatter.xml`. The reason is that you can open the `html` page in a browser and see the result directly.
#
# Then to make the final document to submit to AWS, it's just concatenating frontmatter, question, and backmatter to a new file (`backmatter` and `frontmatter` are constant). This front and back matter to the document is pretty simple too.
print(open('turk_hit_frontmatter.xml').read())
print('----00000 Not in File 00000-------')
print(open('turk_hit_backmatter.xml').read())
# And I've coded the simplest turk question possible to pair it with.
# !open turk_question.html
# Constructing the final, submittable question is then relatively simple - it's just putting the three files together into one.
# +
def construct_turk_xml(turk_html):
fulltext = open('turk_hit_frontmatter.xml').read() + open(turk_html).read() + \
open('turk_hit_backmatter.xml').read()
return fulltext
fulltext = construct_turk_xml('turk_question.html')
# -
# Now that the task creation is done, we can move to submitting the task.
# +
new_hit = mturk.create_hit(
Title = 'Is the following word positive, neutral, or negative in emotion?',
Description = 'Read the passage and click the button for the emotion that is attached to the bolded word',
Keywords = 'text, quick, labeling',
Reward = '0.01',
MaxAssignments = 1,
LifetimeInSeconds = 172800,
AssignmentDurationInSeconds = 600,
AutoApprovalDelayInSeconds = 10,
Question = fulltext,
)
print( "https://workersandbox.mturk.com/mturk/preview?groupId=" + new_hit['HIT']['HITGroupId'] )
print( "HITID = " + new_hit['HIT']['HITId'] + " (Use to Get Results)" )
# -
# The fields mostly speak for themselves at the start.
#
# The reward is how much you will pay in US dollars (here '0.01', i.e. one cent per assignment).
#
# MaxAssignments is the number of turkers you want to complete the HIT
#
# LifetimeInSeconds - how long the HIT should be available on the MTurk marketplace
#
# AssignmentDurationInSeconds - how long the turker has to complete the HIT once they start the task
#
# AutoApprovalDelayInSeconds - You have the ability to manually approve/deny a turker's work (which determines if the worker gets paid). This threshold sets when the system will move from manual to automatic approval (so that if you forget, the turker still gets paid). Note - requesters are rated on a separate forum for turkers and promptness of paying is one attribute that they track. Don't forget about paying in a reasonable amount of time, especially for low cost/risk tasks.
#
# Question - what you want them to answer.
#
# You can go check out the HIT at the sandbox link (need to register as a turker)
# **Excellent**
#
# Now (that I have most likely completed my own HIT), we should be able to pull the data.
#
# All we will need is the client connection to MTurk and the HITID for our task.
worker_results = mturk.list_assignments_for_hit(HITId=new_hit['HIT']['HITId'])
worker_results
# And my answer is inside the assignments list:
worker_results['Assignments'][0]
# And now we see something ugly - the answer is in the `Answer` field, but it's in XML!
#
# Fortunately we can just install the `xmltodict` package which will convert the data out of xml and into something that's friendlier for our purposes.
# !pip install xmltodict
# +
import xmltodict
xml_doc = xmltodict.parse(worker_results['Assignments'][0]['Answer'])
xml_doc['QuestionFormAnswers']['Answer']
# -
# And there we go! We have our answer - the turker thinks that **unkindly** is *negative*
#
# I will leave as an exercise for the reader to figure out how to automatically fill the html template with the passage and word of interest (Hint: manipulate it as a string in python)
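# One possible way to do that (a sketch, not the author's solution): put placeholders such as
# `{passage}` and `{word}` inside `turk_question.html` and fill them with `str.format`. The
# placeholder names are assumptions here, since the template's contents are not shown above
# (and any literal braces in the HTML would need escaping).
# +
def construct_filled_turk_xml(turk_html, passage, word):
    # Read the question template, substitute the passage and target word, then wrap it
    # with the constant front and back matter as before.
    filled = open(turk_html).read().format(passage=passage, word=word)
    return open('turk_hit_frontmatter.xml').read() + filled + \
        open('turk_hit_backmatter.xml').read()

# Example usage (assuming the template contains the placeholders above):
# fulltext = construct_filled_turk_xml('turk_question.html', 'He spoke unkindly of her.', 'unkindly')
# -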
# # Accounting for error in our estimates
#
# You will now have multiple values for the valence of each word. There are a number of ways to handle and process this data.
#
# For extremely small or large *n* I am confident that you are well-versed in how to reduce this to a single value (take the mean, check for outliers, etc. - up to you).
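# For instance, a majority vote or a mean over a simple numeric coding is often all you need
# for a handful of responses per word. A minimal sketch (the -1/0/+1 coding is an assumption,
# not part of the HIT above):
# +
from collections import Counter

coding = {'negative': -1, 'neutral': 0, 'positive': 1}

def aggregate(responses):
    votes = [coding[r] for r in responses]
    majority = Counter(votes).most_common(1)[0][0]
    mean_score = sum(votes) / len(votes)
    return majority, mean_score

print(aggregate(['negative', 'negative', 'neutral']))  # -> (-1, -0.666...)
# -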
# # Estimating quantities the Bayesian way
#
# We can assume that there is a true value of the sentiment for a specific word in a single context. We know that the responses, and the spread in them, inform our approximation of the real value and account for the uncertainty we have in stating that it is the true value.
# When using a Bayesian approach, we are trying to estimate the probability distribution function for the real value (it inherently incorporates uncertainty - which is a good approach when considering something like quantifying the amount of sentiment a word encodes).
# The basic idea is that we start with some prior knowledge/distribution of 'truth' for a value and then update it as we receive additional information (i.e. mturk responses).
#
# <img src='../images/bayes_learn.png'>
# Mathematically, we just need to follow bayes rule
#
# $P(A\mbox{ | }B) = \frac{P(B\mbox{ | }A)P(A)}{P(B)}$
#
# or stated in a data-centric way
#
# $P(Model\mbox{ | }Data) = \frac{P(Data\mbox{ | }Model)P(Model)}{P(Data)}$
#
# where the $P(Model)$ is prior probability for our model and $P(Model\mbox{ | }Data)$ is our posterior probability after we have incorporated the data. $P(Data\mbox{ | }Model)$ is simply the probability of observing the data given our current model and $P(Data)$ is the marginal likelihood (which is the same for all models under consideration).
# # Sounds complicated?
#
# Fortunately, it's not that hard in practice. There are two ways to go about this - the first that I want you to explore is by hand with scipy.
# +
import scipy.stats as st
import numpy as np
import matplotlib.pyplot as plt  # needed for the posterior plot at the end of this section
#Set the likelihood
likelihood = np.array([])
#Set our supports
params = np.linspace(-6, 6, 1201)
#And initialize the posterior
posterior = np.array([])
#Construct the prior
prior_sample = np.random.normal(0, 0.2)
prior = np.array([np.product(st.norm.pdf(prior_sample, p)) for p in params])
prior = prior / np.sum(prior)
# +
def update_probability(datapoint, likelihood, prior, posterior, params):
likelihood = np.array([np.product(st.norm.pdf([datapoint], p)) for p in params])
#Construct the posterior
tposterior = [prior[i] * likelihood[i] for i in range(prior.shape[0])]
posterior = tposterior / np.sum(tposterior)
#Reset the prior to the new posterior
prior = np.copy(posterior)
return likelihood, prior, posterior
for i in range(100):
likelihood, prior, posterior = update_probability(0.4, likelihood, prior, posterior, params)
plt.plot(params, posterior)
# -
# Alternatively, you could use a package that implements Bayesian data fitting (such as pymc or emcee), instead of coding it yourself to estimate the parameters.
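# For this particular setup (normal likelihood with known spread and a normal prior on the mean)
# the posterior also has a standard closed form, which makes a handy sanity check on the grid
# update above. The sketch below takes the prior centre as 0 for illustration (the code above
# draws it at random near 0) and uses the unit standard deviations implied by st.norm.pdf's defaults.
# +
# Conjugate normal-normal update:
#   tau_n^2 = 1 / (1/tau0^2 + n/sigma^2)
#   mu_n    = tau_n^2 * (mu0/tau0^2 + n*xbar/sigma^2)
mu0, tau0, sigma = 0.0, 1.0, 1.0
n_obs, xbar = 100, 0.4            # 100 identical observations of 0.4, as in the loop above
tau_n2 = 1.0 / (1.0 / tau0**2 + n_obs / sigma**2)
mu_n = tau_n2 * (mu0 / tau0**2 + n_obs * xbar / sigma**2)
print(mu_n, tau_n2 ** 0.5)        # posterior mean ~0.396, posterior std ~0.0995
# -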
| 11,656 |
/HDSC_21_stage_B_assessment.ipynb
|
e58b08e3a0b6384628224cf0400c344e3f590c2d
|
[] |
no_license
|
damiiete/Hamoye
|
https://github.com/damiiete/Hamoye
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 47,929 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/dhruval-p/Image-Segmentation/blob/main/Image_Segmentation_TF.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="38I9u5PDid_l" outputId="cbff0241-c0cb-4038-d046-5e048040148d"
# !pip install -q git+https://github.com/tensorflow/examples.git
# !pip install -q -U tfds-nightly
# + id="06yDBdClgY4B"
import tensorflow as tf
# + id="el_2qRKBhbfW"
from tensorflow_examples.models.pix2pix import pix2pix
import tensorflow_datasets as tfds
from IPython.display import clear_output
import matplotlib.pyplot as plt
# + id="DyS_TNA0hucu" colab={"base_uri": "https://localhost:8080/", "height": 379, "referenced_widgets": ["0e90ba7549d8482fa97a9fe36d93b1e9", "75e9071753d14ecc8e82f4429b99b276", "fc97604a497e4ca79e38f10ae86bc72f", "b93bfb46f1f24e5d8c406b3453a0aa96", "1c48240088ca4418beae98028054e6a6", "5bd14f9516a44045984aeecfc63088dc", "1a45975d8ab6467c8c71b917451e1fa0", "eb1a32c91afe4c81aa5985e400f5d929", "daa6bc2c6fa74920af28cdb0aca1cb4b", "7c19aa57256f41aeb112a59a904b2d50", "388e0bf591a446e58ec7c32f1c8fb71f", "5a34798aa8d245abb3437548a85e94b6", "f820079c25334f8181be70d429531541", "20fe1d78509c49249069e9cb32f9bde1", "7744da2df2c943c19f555e0adf6a8ea5", "6be1d18277d14837b695de215cef0499", "cb953117061c49bd850bddbc0ef1b9af", "c3b95beed7dc44f78487f1289344634b", "5a31294712b94ee3bfa2531e19cfb93b", "f98f038d49934542a5231db1a9588146", "614613d8707f40868fd115241f27761d", "401c993fce1f45499de3ea485877980a", "2dd2da6207e94e869436983d253b3f78", "74dec7eb8e914f139bfbf897662a5a3b", "1fdcadc51e4d40cca751c25d7441c252", "c465378a71744278b21c3cb6a482c63d", "18a4d25267ed41f7b7da8e3dd545b21e", "80d492dc836f4fab9513d791273ca8e0", "4a85cdb907d7459f85aebb5c5b62b9df", "55e3247eac334ba29682d55a106b8ca0", "838cd75cdebf4b4b9a59f2f4cb31c2ba", "ff38dd026f4843be8919f6229f6a21f6", "f21317891cd24f518140f771c18ae040", "b86ed4323c6d47e090d25f8488615869", "b91155c9148d4d629a694984fc654148", "9992beb8e69a4c4eb75a74ebba954788", "2cf540470f5945a492a2443eb88d31c8", "e49b2091f66a4d1a9cda5146bbd11d11", "31ac74706c4941009729662ee804de22", "89c4d97453d64bb0a3bad4c084861027", "de3ea44cc70a4e22a472e5db3c698792", "489562c8576c4949993c9c786b7dee0d", "5314b93f7a554c178be31e327b0d9783", "fad22dac353b4714ab018b3e35650367", "614b87f5c45b49d79de77ce87c7a4fa4", "816432b5d6504c588218b8d0f64e8d5c", "400fc6ed69ee477dac714666f80e41b1", "5ea382b597724901a22152c505ad5291", "1f6fce352f5d461c970b520463eea4ae", "2ce9892bb48248cb912beb8c291ae916", "ff5179babd0743fdb3d4965a5c89c8fe", "d40f2db9fe5349dca1dff199dc51bd70", "c70281a22e144c848fbe1769922e20ce", "9f2915b68c3046d09b7dc08cc7bc17af", "08848b5f506d4a15af28b96946583caf", "fab6e67e18474806b603a5586fa9367c", "7afed07c529148a484b0ad474f2f048a", "3672fb0036fd45b3b8662b6a5cfceb36", "97261f51715f4b3aa7d96e14c19d1770", "75077f6c9dd947958e347fb060a63dea", "15154267ed564cd8907f81aaad8bce90", "03797c3eb5314982963c0d2d456e46af", "de9b6c319535439f83f3a746a32b9cd0", "a6b44295238246988fe0253b6bfbb4e3"]} outputId="f49b5a9e-e991-443e-9810-8a9304b23e75"
dataset, info =tfds.load('oxford_iiit_pet:3.*.*',with_info=True)
# + id="GhYBscfok4VA"
def normalize(input_image,input_mask):
input_image=tf.cast(input_image, tf.float32)/255.0
input_mask-=1
return input_image, input_mask
# + id="EpMjBUuNoUkx"
@tf.function
def load_image_train(datapoint):
input_image=tf.image.resize(datapoint['image'],(128,128))
input_mask=tf.image.resize(datapoint['segmentation_mask'],(128,128))
if tf.random.uniform(())>0.5:
input_image=tf.image.flip_left_right(input_image)
input_mask=tf.image.flip_left_right(input_mask)
input_image,input_mask=normalize(input_image,input_mask)
return input_image,input_mask
# + id="AZDCr3ZVp7R-"
def load_image_test(datapoint):
input_image=tf.image.resize(datapoint['image'],(128,128))
input_mask=tf.image.resize(datapoint['segmentation_mask'],(128,128))
input_image,input_mask=normalize(input_image,input_mask)
return input_image,input_mask
# + id="oJ98h5iPqvm3"
TRAIN_LENGTH=info.splits['train'].num_examples
BATCH_SIZE=64
BUFFER_SIZE=1000
STEPS_PER_EPOCH=TRAIN_LENGTH//BATCH_SIZE
# + id="S_funitHrbWq"
train=dataset['train'].map(load_image_train,num_parallel_calls=tf.data.experimental.AUTOTUNE)
test=dataset['test'].map(load_image_test)
# + id="M0YRAkyTsaqm"
train_dataset=train.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
train_dataset=train_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
test_dataset=test.batch(BATCH_SIZE)
# + id="RV44JJOY6T-G"
def display(display_list):
plt.figure(figsize=(15,15))
title=['Input Image','True Mask','Predicted Mask']
for i in range(len(display_list)):
plt.subplot(1,len(display_list),i+1)
plt.title(title[i])
plt.imshow(tf.keras.preprocessing.image.array_to_img(display_list[i]))
plt.axis('off')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 427} id="_IONrPqo8rla" outputId="637bde82-0ffa-4987-853e-77f3561ab88f"
for image,mask in train.take(1):
sample_image, sample_mask=image,mask
display([sample_image,sample_mask])
# + id="cgOwYuVA9FaU"
OUTPUT_CHANNELS=3
# + id="00rVhaRQ-86-" colab={"base_uri": "https://localhost:8080/"} outputId="a86f1b9c-4784-4ada-9a88-55108f47164f"
base_model=tf.keras.applications.MobileNetV2(input_shape=[128,128,3],include_top=False)
layer_names=[
'block_1_expand_relu',
'block_3_expand_relu',
'block_6_expand_relu',
'block_13_expand_relu',
'block_16_project',
]
layers=[base_model.get_layer(name).output for name in layer_names]
down_stack=tf.keras.Model(inputs=base_model.input, outputs=layers)
down_stack.trainable=False
# + id="0_LsAaSOAzmN"
up_stack=[
pix2pix.upsample(512,3),
pix2pix.upsample(256,3),
pix2pix.upsample(128,3),
pix2pix.upsample(64,3),
]
# + id="dn5L3JBXCK0z"
def unet_model(output_channels):
inputs=tf.keras.layers.Input(shape=[128,128,3])
x=inputs
skips=down_stack(x)
x=skips[-1]
skips=reversed(skips[:-1])
for up,skip in zip(up_stack,skips):
x=up(x)
concat=tf.keras.layers.Concatenate()
x=concat([x,skip])
last=tf.keras.layers.Conv2DTranspose(
output_channels,3,strides=2,
padding='same'
)
x=last(x)
return tf.keras.Model(inputs=inputs,outputs=x)
# + id="_oZVrGIlEtMx"
model=unet_model(OUTPUT_CHANNELS)
model.compile(optimizer='adam',loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="byjuysiyFVQ6" outputId="1e56ce62-8f86-441a-8efa-daf0b1c1038b"
tf.keras.utils.plot_model(model,show_shapes=True)
# + id="fuDZZte4FcqQ"
def create_mask(pred_mask):
pred_mask=tf.argmax(pred_mask,axis=-1)
pred_mask=pred_mask[...,tf.newaxis]
return pred_mask[0]
# + id="vPNPOQ41GaQf"
def show_predictions(dataset=None,num=1):
if dataset:
for image,mask in dataset.take(num):
pred_mask=model.predict(image)
display([image[0],mask[0],create_mask(pred_mask)])
else:
display([sample_image,sample_mask,create_mask(model.predict(sample_image[tf.newaxis,...]))])
# + colab={"base_uri": "https://localhost:8080/", "height": 293} id="JwBr5qXZIL1l" outputId="28949fb6-728b-4dbe-ed7e-5d58785ae406"
show_predictions()
# + id="QD9jlNYnINh2"
class DisplayCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self,epoch,logs=None):
clear_output(wait=True)
show_predictions()
print('\nSample Prediction after epoch {}\n'.format(epoch+1))
# + colab={"base_uri": "https://localhost:8080/", "height": 366} id="l2VTTjiQJPcD" outputId="bd9ef369-5c9b-41ec-dc09-d88c80378af5"
EPOCHS=20
VAL_SUBSPLITS=5
VALIDATION_STEPS = info.splits['test'].num_examples//BATCH_SIZE//VAL_SUBSPLITS
model_history = model.fit(train_dataset, epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
validation_steps=VALIDATION_STEPS,
validation_data=test_dataset,
callbacks=[DisplayCallback()])
# + id="ASANjef6Lajb" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="a00f5364-eba1-4195-ed8a-4c183d7d188c"
loss = model_history.history['loss']
val_loss = model_history.history['val_loss']
epochs = range(EPOCHS)
plt.figure()
plt.plot(epochs, loss, 'r', label='Training loss')
plt.plot(epochs, val_loss, 'bo', label='Validation loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss Value')
plt.ylim([0, 1])
plt.legend()
plt.show()
# + id="2BrJGnD6Ldk1" colab={"base_uri": "https://localhost:8080/", "height": 845} outputId="9c836d8e-d1ae-438a-e122-cb79e15787c0"
show_predictions(test_dataset, 3)
# + id="OICoM3PcLqCv"
# Question 20
y_pred_lasso = lasso_reg.predict(X_test)
rmse = np.sqrt(mean_squared_error(Y_test, y_pred_lasso))
rmse.round(3)
| 9,293 |
/deeplearning/Regularization.ipynb
|
edfbbafbc82d7c7be16bbea5ee72e5dae58afef3
|
[] |
no_license
|
luluenen/Moocs
|
https://github.com/luluenen/Moocs
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 273,623 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Notes for Colab users
#
# This notebook is optimized for a **local** environment. If you are working locally, you can proceed with the exercise right away.
#
# If you are using Colab, uncomment the cell below and run it once at the start!
#
# * How to uncomment: select the region and press `Ctrl + /` to toggle the comments in that region.
# +
# from google.colab import auth
# auth.authenticate_user()
# from google.colab import drive
# drive.mount('/content/gdrive', force_remount=False)
# -
# If you are using Colab, uncomment the cell below and set the `folder` variable to the location where you saved the project directory! For example, if `02_cnn_tf` is located in "My Drive > colab_notebook > tensorflow", write "colab_notebook/tensorflow".
#
# ```python
# folder = "colab_notebook/tensorflow"
# ```
# +
# import os
# from pathlib import Path
# # Set the folder variable to the directory in your Google Drive where the project is saved!
# folder = ""
# project_dir = "02_cnn_tf"
# base_path = Path("/content/gdrive/My Drive/")
# project_path = base_path / folder / project_dir
# os.chdir(project_path)
# for x in list(project_path.glob("*")):
# if x.is_dir():
# dir_name = str(x.relative_to(project_path))
# os.rename(dir_name, dir_name.split(" ", 1)[0])
# print(f"현재 디렉토리 위치: {os.getcwd()}")
# -
# TensorFlow version `1.14.0` is assumed. If you are using Colab, run the first code block, and if the version differs, uncomment the second block and run it.
## First code block
import tensorflow as tf
print('tensorflow version: {}'.format(tf.__version__))
# +
## Second code block
# # !pip install tensorflow==1.14.0
# -
# # Convolutional Neural Network: Cat-Dog Classifier
#
# <img src="http://drive.google.com/uc?export=view&id=16c7SWB6wboraKe4OFRmSU6RywZhWB9Xc" width="600px" height="400px" />
#
# * Image source: ImageNet Classification with Deep Convolutional Neural Networks (A. Krizhevsky, 2012)
#
# The ImageNet classification challenge asks computers to classify images into a total of 1000 object categories. The challenge provides well over a million training images. The competition ran for several years; in the early years no algorithm handled this visual recognition problem as well as humans, but with the arrival of CNNs in 2012 an enormous jump in performance began.
#
# <img src="http://drive.google.com/uc?export=view&id=1twuxI3IlZbgF2glLKyGoD6yGh5LQZLys" width="800px" height="400px" /><caption><center><With the appearance of AlexNet in 2012, CNNs began to attract a great deal of attention></center></caption>
#
# * Image source: [here](https://www.researchgate.net/figure/Winner-results-of-the-ImageNet-large-scale-visual-recognition-challenge-LSVRC-of-the_fig7_324476862)
#
# The winning algorithm in 2011 had an error rate of 26%; in 2012 the CNN-based AlexNet won after cutting the error rate by a full 10 percentage points in a single year. In 2015 the CNN-based ResNet reached a 3.6% error rate, better than the roughly 5% average error rate of humans. In other words, a computer algorithm can now recognize objects more accurately than people, and this change happened in a very short time once CNNs appeared. In fact, the recent popularity of deep learning is closely tied to the emergence of CNNs and the enormous progress in computer vision. In this project we will design a CNN ourselves and solve the problem of classifying cats and dogs. With a small additional constraint, we will also learn how to train a high-performing model very quickly using a rather small amount of data.
#
# The goals of this exercise are:
# - Design a CNN and train an image classifier.
# - Apply data augmentation during training.
# - Save and reload a trained model.
# - Implement transfer learning.
#
# The final result of this exercise looks like the figure below.
#
# <img src="http://drive.google.com/uc?export=view&id=12IySBKqWiWdBR-IjkKWW41czlAjEqm5U" width="800px" height="300px" /><caption><center><Classifying dogs and cats with a CNN></center></caption>
#
# ### Now let's get started on the project.
#
# In each **"[TODO] code implementation"** part, write the required code between **"## 코드 시작 ##"** (code start) and **"## 코드 종료 ##"** (code end). **Do not modify any part that is not explicitly marked for you to fill in!**
#
# The exercise code was written for Python 3.6 and TensorFlow 1.14.0.
#
# **Throughout the text there are links to the [TensorFlow API documentation](https://www.tensorflow.org/api_docs/python/tf) for the functions we use. Getting comfortable with reading the API documentation yourself will be a great help later when you have to implement a model from scratch.**
# ## 1. Package load
#
# Load the required packages.
# +
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from tensorflow.keras import layers
import tensorflow as tf
import check_util.checker as checker
from PIL import Image
from IPython.display import clear_output
import os
import time
import re
import glob
import shutil
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
tf.enable_eager_execution()
print('tensorflow version: {}'.format(tf.__version__))
print('GPU available: {}'.format(tf.test.is_gpu_available()))
# -
# ## 2. Downloading the dataset and splitting it into train, validation, and test sets
#
# In this exercise we will train a network that classifies cat and dog images.
# The dataset we will use was created for a computer vision competition held in late 2013 on the data science platform Kaggle. For reference, a participant using a CNN won that competition with 95% accuracy.
#
# The original dog-vs-cat image dataset used in this example comes from [here](https://www.kaggle.com/c/dogs-vs-cats/data). For convenience it has been downloaded and compressed for you in advance.
#
# If you uncomment and run the cell below, the archive will be extracted for you, without unzipping it manually (the same works locally and on Colab).
#
#
# +
# import zipfile
# from pathlib import Path
# current_path = Path().absolute()
# data_path = current_path / "data"
# print("현재 디렉토리 위치: {}".format(current_path))
# if (data_path / "my_cat_dog").exists():
# print("이미 'data/my_cat_dog' 폴더에 압축이 풀려있습니다. 확인해보세요!")
# else:
# with zipfile.ZipFile(str(data_path / "my_cat_dog.zip"), "r") as zip_ref:
# zip_ref.extractall(str(data_path / "my_cat_dog"))
# print("Done!")
# -
# The original dataset consists of 12,500 cat images and 12,500 dog images, i.e. 25,000 training images in total.
# That is a reasonably sufficient amount for learning to tell dogs from cats. In this exercise we will use far less data, and learn how to train a model that is not far behind one trained on a large dataset even with a comparatively tiny amount of data.
#
# The dataset provided to you contains 1000 training images per class, 500 validation images per class, and 1000 test images per class.
data_dir = './data/my_cat_dog' # directory path of the extracted dataset
# Running the code below checks whether the dataset is organized correctly.
#
# If you see the following output, the dataset has been set up properly.
#
# If you do not get the output below, manually extract `./data/my_cat_dog.zip` into the same 'data' folder. (The checker prints in Korean: 훈련용 = training, 검증용 = validation, 테스트용 = test, 고양이 = cat, 강아지 = dog, 이미지 개수 = number of images.)
#
# ```
# 훈련용 고양이 이미지 개수: 1000
# 훈련용 강아지 이미지 개수: 1000
# 검증용 강아지 이미지 개수: 500
# 검증용 강아지 이미지 개수: 500
# 테스트용 강아지 이미지 개수: 1000
# 테스트용 강아지 이미지 개수: 1000
# ```
checker.dataset_check(data_dir)
# ## 3. Hyperparameter settings
#
# Initialize the hyperparameter values needed for training. Hyperparameters are values that are not learned by the neural network but chosen by the designer, such as the learning rate and the number of layers.
#
# The mini-batch size (`batch_size`), the number of training epochs (`max_epochs`), the learning rate (`learning_rate`), and so on are set as follows.
batch_size = 20
max_epochs = 20
learning_rate = 1e-4
IMG_SIZE = 150
# ## 4. Building an input pipeline with `tf.data.Dataset`
#
# * For a detailed description of `tf.data.Dataset`, see the [Importing Data](https://www.tensorflow.org/guide/datasets) guide.
# * In the previous exercise we loaded the Fashion-MNIST dataset already provided in `tf.keras.datasets`.
# * In this exercise we will use `tf.data.Dataset` to read the list of files in a folder and build a data augmentation pipeline.
# * We create a `tf.data.Dataset` from the file list in a folder with `tf.data.Dataset.list_files`.
# * We apply data augmentation with the `tf.data.Dataset.map` function.
# ### Data augmentation functions
#
# We are training the model on a relatively small dataset, and a model trained on so little data can overfit badly. Data augmentation is a good way to partially overcome the limits of a small dataset: during training we feed the model slightly transformed versions of the existing data. The model therefore effectively sees different data every epoch. We are not actually adding data, but because the same image enters with a different transformation each time, the model is exposed to a wider variety of patterns.
#
# The training process with data augmentation can be illustrated as follows.
#
# <img src="http://drive.google.com/uc?export=view&id=1b0o9nVH8sDQyv_jSvRjjm5WT0jQ6XYxH" width="600px" height="400px" />
# <caption><center><Training with data augmentation></center></caption>
#
# There are many augmentation techniques. We will apply several of them at random during training to get the effect of an enlarged training set.
#
# The data augmentation techniques applied in this exercise are:
# * `random_crop`: the raw images come in different sizes, so they must be brought to a fixed size before being fed to the model. Our model fixes the input image size to 150 $\times$ 150.
# * `random_rotation`: rotates the image by a random angle (-0.3 radian to 0.3 radian).
# * `flip_left_right`: flips the image horizontally with probability 1/2.
#
# For validation and testing we do not apply augmentation, so that the results are consistent from run to run; we still fix the image size to 150 $\times$ 150 via `resize`.
#
# Examples of each technique:
#
# <img src="http://drive.google.com/uc?export=view&id=1wvJplIH2Ky04-75m8I6g5lmCO6fcx16I" width="800px" height="200px" />
# ### <font color='red'>[TODO] Code implementation</font>
#
# You will implement the augmentation-related code yourself. For TensorFlow's various image-processing APIs, see [`tf.image`](https://www.tensorflow.org/api_docs/python/tf/image). There is also [`tf.keras.preprocessing.image.ImageDataGenerator`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator), an API that augments image data automatically, but in this exercise you will build your own augmentation pipeline (a brief sketch of the generator-based alternative is shown below for reference).
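# For reference only, the generator-based alternative could look roughly like this sketch
# (the exact argument values are illustrative assumptions and this cell is not used in the
# rest of the exercise):
# +
# Sketch: automatic augmentation with ImageDataGenerator instead of the hand-written pipeline below.
datagen_sketch = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=17,                                  # roughly +/-0.3 radian, expressed in degrees
    horizontal_flip=True,                               # random left-right flip
    preprocessing_function=lambda x: x / 127.5 - 1.0)   # [0, 255] -> [-1, 1], like normalize below
# Example usage (not executed here):
# train_gen = datagen_sketch.flow_from_directory(os.path.join(data_dir, 'train'),
#                                                target_size=(IMG_SIZE, IMG_SIZE),
#                                                batch_size=batch_size)
# -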
# #### Implementing the `load` function
#
# * Use the `tf.io.read_file(filename)` function to read the file at `filename`.
# * `filename` is a file path given as a `string`.
# * `tf.io.read_file()` returns the file contents as raw bytes, which `tf.image.decode_jpeg()` decodes into image data of type `tf.uint8`.
# * Then the `tf.uint8` image must be cast to `tf.float32` with `tf.cast`.
def load(image_file, label):
    # Read the file at the given path and convert it to a float image.
image = tf.io.read_file(image_file)
image = tf.image.decode_jpeg(image)
image = tf.cast(image, tf.float32)
return image, label
# The code block below tests the implemented image loading function.
image, label = load(os.path.join(data_dir, 'train/cat/cat.100.jpg'), 0)
# casting to int for matplotlib to show the image
print("label: {}".format(label))
plt.figure()
plt.imshow(image/255.0)
# #### Implementing the `resize` function
#
# * Converts the image to the desired size (height `height` and width `width`).
# * Takes the `image` and the target `height` and `width` as arguments and resizes the image.
# * Use [`tf.image.resize_images`](https://www.tensorflow.org/api_docs/python/tf/image/resize_images).
#
# **Now write the resize function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" (code start) and "<font color='45A07A'>## 코드 종료 ##</font>" (code end).**
def resize(input_image, height, width):
    # Resize the image to the desired height and width.
## 코드 시작 ##
input_image = tf.image.resize_images(input_image, (height, width))
## 코드 종료 ##
return input_image
checker.resize_fn_check(resize)
# #### Implementing the `random_rotation` function
#
# * Rotates the image by a random angle.
# * Draw the random rotation angle with [`tf.random.uniform`](https://www.tensorflow.org/api_docs/python/tf/random/uniform), sampling one number from the interval (-0.3, 0.3).
# * We do not allow arbitrary rotations; the range is deliberately restricted to (-0.3 radian, 0.3 radian).
# * Use [`tf.contrib.image.rotate`](https://www.tensorflow.org/api_docs/python/tf/contrib/image/rotate), passing the input image and the rotation angle. (*Note: tf.contrib may no longer be available from TensorFlow 2.0 on.*)
#
# **Now write the random_rotation function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" (code start) and "<font color='45A07A'>## 코드 종료 ##</font>" (code end).**
def random_rotation(input_image):
    # Rotate the image by a random angle (in radians).
    ## 코드 시작 ##
    angles = tf.random.uniform([1], minval=-0.3, maxval=0.3)
rotated_image = tf.contrib.image.rotate(input_image, angles)
## 코드 종료 ##
return rotated_image
# #### Implementing the `random_crop` function
#
# * Crops the image to the desired size (extracts a region).
# * It is called `random_crop` because the crop location is chosen at random rather than from a fixed region.
# * Note that a `central_crop` implementation follows later, so take care not to confuse the two.
# * Use [`tf.image.random_crop`](https://www.tensorflow.org/api_docs/python/tf/image/random_crop)
# * Pass the image and the crop size (height, width, channels) as arguments to `tf.image.random_crop(image, size)`.
#
# **Now write the random_crop function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" (code start) and "<font color='45A07A'>## 코드 종료 ##</font>" (code end).**
def random_crop(input_image, crop_size):
    # Randomly crop the image to the desired crop_size.
    # Here crop_size is an integer giving the image height (or width; in this exercise they are equal).
    assert isinstance(crop_size, int), "crop_size must be an integer."
## 코드 시작 ##
cropped_image = tf.image.random_crop(input_image, [crop_size, crop_size, 3])
## 코드 종료 ##
return cropped_image
checker.random_crop_fn_check(random_crop)
# #### Implementing the `normalize` function
#
# * The image consists of pixel values in the range [0, 255].
# * Mapping pixel values from [0, 255] to [-1, 1] is what we call normalization here.
# * Build the `normalize` function using the steps below; the whole procedure fits on one line (plain division and subtraction, no TensorFlow functions needed).
# * Divide the pixel values by 127.5: [0, 255] -> [0, 2]
# * Then subtract 1: [0, 2] -> [-1, 1]
#
# **Now write the normalize function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" (code start) and "<font color='45A07A'>## 코드 종료 ##</font>" (code end).**
# normalizing the images to [-1, 1]
def normalize(input_image):
    # Normalize the range of the pixel values: [0, 255] -> [-1, 1]
# 코드 시작
input_image = (input_image / 127.5) - 1
# 코드 종료
return input_image
checker.normalize_fn_check(normalize)
# #### Implementing the `random_jitter` function
#
# * The `random_jitter` function uses the `resize`, `random_rotation`, and `random_crop` functions implemented above to augment images during training.
# * When implementing `random_jitter`, additionally apply a random horizontal flip.
# * Use [`tf.image.random_flip_left_right`](https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right)
#
# * The full pipeline:
# 1. `resize`: resize to 176 x 176
# 2. `random_crop`: random crop to 150 x 150
# 3. `random_rotation`: rotate the image by a random angle (as set in the random_rotation function above)
# 4. `random_flip_left_right`: flip horizontally with probability 1/2
#
# **Now write the random_jitter function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" (code start) and "<font color='45A07A'>## 코드 종료 ##</font>" (code end).**
def random_jitter(input_image):
    # Augment the image using the resize, random_crop, random_rotation, and random_flip functions.
## 코드 시작 ##
# resizing to 176 x 176 x 3
input_image = resize(input_image, 176, 176)
# randomly cropping to 150 x 150 x 3
input_image = random_crop(input_image, 150)
# randomly rotation
input_image = random_rotation(input_image)
# randomly mirroring
input_image = tf.image.random_flip_left_right(input_image)
## 코드 종료 ##
return input_image
# #### Implementing the `central_crop` function
#
# * During training we use the `random_jitter` function for image augmentation.
# * However, if we cropped the validation or test data at random, the results would differ on every run, which would make evaluating the classifier difficult.
# * So at test time we do not apply augmentation; instead we crop the center of the image at the same size used for training. For this we implement the `central_crop` function.
# * Use [`tf.image.central_crop`](https://www.tensorflow.org/api_docs/python/tf/image/central_crop)
# * `central_crop` has a `central_fraction` argument that sets what fraction of the whole image the central crop keeps.
# * After resizing to (176 x 176), choose `central_fraction` so that the crop is (150 x 150).
# * The fraction is computed as: fraction = image size after cropping / image size before cropping.
#
# **Now write the central_crop function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" (code start) and "<font color='45A07A'>## 코드 종료 ##</font>" (code end).**
def central_crop(input_image):
    # Resize to 176 x 176, then center-crop to 150 x 150.
## 코드 시작 ##
# resizing to 176 x 176 x 3
input_image = resize(input_image, 176, 176)
# central cropping to 150 x 150 x 3
input_image = tf.image.central_crop(input_image, 150/176)
## 코드 종료 ##
return input_image
checker.central_crop_fn_check(central_crop)
# #### Implementing the `load_image_train` function
#
# * The training image loading pipeline is as follows. Build it from the three functions implemented earlier.
# * `load` -> `random_jitter` -> `normalize`
# 1. load the image
# 2. augment the data with random_jitter
# 3. normalize the image
#
# **Now write the load_image_train function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" (code start) and "<font color='45A07A'>## 코드 종료 ##</font>" (code end).**
def load_image_train(image_file, label):
## 코드 시작 ##
    input_image, label = load(image_file, label)
input_image = random_jitter(input_image)
input_image = normalize(input_image)
## 코드 종료 ##
return input_image, label
# #### Implementing the `load_image_val_and_test` function
#
# * The validation and test image loading pipeline is as follows. As above, build it from the functions we implemented.
# * `load` -> `central_crop` -> `normalize`
# 1. load the image
# 2. only apply central_crop, which keeps distortion to a minimum
# 3. normalize the input data
#
# **Now write the load_image_val_and_test function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" (code start) and "<font color='45A07A'>## 코드 종료 ##</font>" (code end).**
def load_image_val_and_test(image_file, label):
## 코드 시작 ##
    input_image, label = load(image_file, label)
    input_image = central_crop(input_image)
    input_image = normalize(input_image)
## 코드 종료 ##
return input_image, label
# ### Input pipeline
#
# * Build the input data pipeline using the augmentation functions defined above.
# +
def add_label(image_file, label):
return image_file, label
def get_dataset(mode, data_dir='./data/my_cat_dog'):
    # mode is one of train, val, test.
folder_list = [f for f in os.listdir(
os.path.join(data_dir, mode)) if not f.startswith('.')]
    # The code above lists the names of the folders inside the directory for the given mode,
    # i.e. the names of the categories used for training.
    dataset = tf.data.Dataset.list_files( # step 1
        os.path.join(data_dir, mode, folder_list[0], '*.jpg'))
    dataset = dataset.map(lambda x: add_label(x, 0)) # step 2
    for label, category_name in enumerate(folder_list[1:], 1): # step 3
        temp_dataset = tf.data.Dataset.list_files( # step 4
            os.path.join(data_dir, mode, category_name, '*.jpg'))
        temp_dataset = temp_dataset.map(lambda x: add_label(x, label)) # step 5
        dataset = dataset.concatenate(temp_dataset) # step 6
return dataset
# -
# The `get_dataset` function returns a `DatasetV1Adapter`, which acts as a kind of container holding the data.
#
# 1. Use `tf.data.Dataset.list_files` to collect the `*.jpg` filenames in the first folder ('cat') into a dataset.
# 2. Add the label (0) to that `dataset` using the `tf.data.Dataset.map` function.
# 3. Loop over the folders in `folder_list`, starting from the second one.
# 4. Use `tf.data.Dataset.list_files` to collect the `*.jpg` filenames in the second folder ('dog') into `temp_dataset`.
# 5. Add the label (1) to that dataset using the `tf.data.Dataset.map` function.
# 6. Merge the 'cat' dataset and the 'dog' `temp_dataset` with `tf.data.Dataset.concatenate`.
# 7. Repeat the loop (the data and labels for each category are merged into the dataset in turn).
# Now we create `train_dataset`, `valid_dataset`, and `test_dataset` from the "train", "val", and "test" paths inside the "my_cat_dog" folder.
#
# * `tf.data.Dataset.shuffle`: shuffles the dataset.
# * `tf.data.Dataset.map`: applies preprocessing functions, including the augmentation functions defined earlier, to the dataset.
# * `tf.data.Dataset.batch`: sets the batch size of the dataset.
# train_dataset
train_dataset = get_dataset(mode="train", data_dir=data_dir)
# shuffle takes a buffer_size argument; it is best to set it to the total number of examples.
N = BUFFER_SIZE = len(list(train_dataset))
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.map(load_image_train,
num_parallel_calls=16)
train_dataset = train_dataset.batch(batch_size)
# The `val` and `test` datasets do not need to be shuffled.
# * The purpose of shuffling during training is to draw random mini-batches for mini-batch gradient descent.
# * The purpose of the `val` and `test` datasets is to evaluate performance over the entire set, so there is no need to shuffle them.
# +
# val_dataset
val_dataset = get_dataset(mode="val", data_dir=data_dir)
val_dataset = val_dataset.map(load_image_val_and_test)
val_dataset = val_dataset.batch(batch_size)
# test_dataset
test_dataset = get_dataset(mode="test", data_dir=data_dir)
test_dataset = test_dataset.map(load_image_val_and_test)
test_dataset = test_dataset.batch(batch_size)
# -
# Run the code below to check whether you completed the code successfully.
#
# If there are no problems, continue on.
checker.customized_dataset_check(train_dataset)
# ### Displaying augmented images
#
# Display training images with data augmentation applied.
for images, labels in train_dataset.take(1):
break
fig, axes = plt.subplots(1, 4, figsize=(12, 20))
for i, ax in enumerate(axes):
ax.imshow(images[i].numpy()*0.5+0.5)
ax.axis("off")
plt.show()
# ## 5. Network design
#
# Now we design the neural network to be trained. A typical CNN-based classifier first extracts features from the image through a series of convolutional layers, and then classifies the extracted features with dense layers at the end.
# ### Building a Conv layer (Conv - BatchNorm - ReLU - Pool)
#
# ### <font color='red'>[TODO] Implementation</font>
#
# * A convolution is usually followed by batch normalization, ReLU, and max pooling in that order, so wrapping the frequently used (Conv - BN - ReLU - MaxPooling) block in a `class` makes it easy to reuse later.
# * In TensorFlow a convolutional layer is created with [`tf.keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D). The required arguments are `num_filters`, the number of filters (output channels), and `kernel_size`, the filter (kernel) size.
# * Every convolutional layer is followed by batch normalization ([`tf.keras.layers.BatchNormalization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization)) and the ReLU non-linearity ([`tf.keras.layers.ReLU`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU)).
# * After ReLU we apply 2x2 max pooling ([`tf.keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D)); the pooling filter size is controlled by `pool_size`.
# * Because this `class` uses `BatchNormalization`, its internal `call` method must take a `training` argument.
#
# **Now write the Conv layer code! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" and "<font color='45A07A'>## 코드 종료 ##</font>".**
class Conv(tf.keras.Model):
def __init__(self, num_filters, kernel_size):
super(Conv, self).__init__()
## 코드 시작 ##
self.conv = None
self.bn = None
self.relu = None
self.pool = None
## 코드 종료 ##
def call(self, inputs, training=True):
## 코드 시작 ##
x = None # self.conv forward
x = None # self.bn forward
x = None # self.relu forward
x = None # self.pool forward
## 코드 종료 ##
return x
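# A reference sketch of one possible completion, named ConvExample so it does not clash with your Conv class. The layer choices follow the bullet points above; padding='same' is an assumption, since the exact padding is not pinned down here and only matters for reproducing the 5x5x128 feature size mentioned later.
# +
class ConvExample(tf.keras.Model):
    def __init__(self, num_filters, kernel_size):
        super(ConvExample, self).__init__()
        self.conv = tf.keras.layers.Conv2D(num_filters, kernel_size, padding='same')
        self.bn = tf.keras.layers.BatchNormalization()
        self.relu = tf.keras.layers.ReLU()
        self.pool = tf.keras.layers.MaxPool2D(pool_size=(2, 2))

    def call(self, inputs, training=True):
        x = self.conv(inputs)
        x = self.bn(x, training=training)  # BatchNormalization needs the training flag
        x = self.relu(x)
        x = self.pool(x)
        return x
# -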
# Check that the Conv class you just wrote works correctly with the code below.
filters = 32
kernel_size = 3
conv = Conv(num_filters=filters, kernel_size=kernel_size)
for images, labels in train_dataset.take(1):
outputs = conv(images)
checker.conv_layer_check(conv, filters, kernel_size)
# ### <font color='red'>[TODO] Implementation: network design</font>
#
# We will extract features from the input image with a total of four `Conv` blocks. Read the following and complete the code.
# - Each `self.conv` attribute holds one of the `Conv` blocks defined above.
# - Every `Conv` block uses a 3x3 filter (kernel_size).
# - The first `Conv` block takes 3 input channels, because the input image is RGB with 3 channels. The number of output channels is up to us; here we use 32.
# - The second `Conv` block outputs 64 channels.
# - The third `Conv` block outputs 128 channels.
# - The last `Conv` block outputs 128 channels.
#
# Next we define the dense layers that act as the classifier. Read the following and complete the code.
# - We will stack two dense layers.
# - The input size of the first dense layer (`self.dense1`) is determined by the size of the features extracted by the last `Conv` block. If you stacked the convolutional layers as described above, the final feature map is 5x5x128 (height x width x channels).
# - The second dense layer (`self.dense2`) has 2 output units, because we are distinguishing dogs from cats.
#     - Since this is really a *binary classification* problem with only two classes, you could also use a single output unit with a `sigmoid`.
#     - In that case you would have to use `BinaryCrossentropy` instead of `CategoricalCrossentropy`.
#     - **In this project we give the last dense layer 2 output units and use `CategoricalCrossentropy`.**
# - Use ReLU as the activation of the first dense layer and Softmax for the second.
#
# **Now write the model code! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" and "<font color='45A07A'>## 코드 종료 ##</font>".**
class SimpleCNN(tf.keras.Model):
def __init__(self):
super(SimpleCNN, self).__init__()
## 코드 시작 ##
self.conv1 = None
self.conv2 = None
self.conv3 = None
self.conv4 = None
self.flatten = None
self.dense1 = None
self.dense2 = None
## 코드 종료 ##
def call(self, inputs, training=True):
## 코드 시작 ##
x = None # self.conv1 forward
x = None # self.conv2 forward
x = None # self.conv3 forward
x = None # self.conv4 forward
x = None # flatten
x = None # self.dense1 forward
x = None # self.dense2 forward
## 코드 종료 ##
return x
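# A reference sketch of one possible completion, using the ConvExample block added above (or your own Conv). The 256-unit width of dense1 is an assumption; the notebook only fixes the final layer at 2 softmax units.
# +
class SimpleCNNExample(tf.keras.Model):
    def __init__(self):
        super(SimpleCNNExample, self).__init__()
        self.conv1 = ConvExample(32, 3)    # RGB input -> 32 channels
        self.conv2 = ConvExample(64, 3)
        self.conv3 = ConvExample(128, 3)
        self.conv4 = ConvExample(128, 3)   # final feature map, e.g. 5x5x128
        self.flatten = tf.keras.layers.Flatten()
        self.dense1 = tf.keras.layers.Dense(256, activation='relu')
        self.dense2 = tf.keras.layers.Dense(2, activation='softmax')

    def call(self, inputs, training=True):
        x = self.conv1(inputs, training=training)
        x = self.conv2(x, training=training)
        x = self.conv3(x, training=training)
        x = self.conv4(x, training=training)
        x = self.flatten(x)
        x = self.dense1(x)
        x = self.dense2(x)
        return x
# -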
# Run the code below to check that you completed the code successfully.
#
# If there is no problem, you can keep going.
model = SimpleCNN()
for images, labels in train_dataset.take(1):
outputs = model(images, training=False)
model.summary()
checker.model_check(model)
# ## 6. Defining the loss function and optimizer
#
# To train the model we need a loss function. The neural network updates its parameters with gradient descent in the direction that reduces the loss, and we will use an optimizer to apply gradient descent effectively.
#
# ### <font color='red'>[TODO] Implementation</font>
#
# Read the following and complete the code.
#
# * Classification problems use the cross-entropy loss. Create a [`tf.keras.losses.CategoricalCrossentropy()`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/CategoricalCrossentropy) and store it in the `loss_object` variable.
# * In this exercise we use the Adam optimizer. Store a [`tf.train.AdamOptimizer()`](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) in the `optimizer` variable, using the `learning_rate` defined in **<3. Setting the hyperparameters>**.
#
#
# **Now write the loss function and optimizer code! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" and "<font color='45A07A'>## 코드 종료 ##</font>".**
## 코드 시작 ##
loss_object = None
optimizer = None
## 코드 종료 ##
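# One possible answer, kept under separate names so the exercise variables above stay untouched. It assumes TF2 naming (tf.keras.optimizers.Adam, equivalent to the tf.train.AdamOptimizer linked above) and the learning_rate defined in section 3.
loss_object_example = tf.keras.losses.CategoricalCrossentropy()
optimizer_example = tf.keras.optimizers.Adam(learning_rate=learning_rate)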
# To check how accurately the model predicts during training we define a [tf.keras.metrics.Accuracy()](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/Accuracy), and to average the loss values we define a [tf.keras.metrics.Mean()](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/Mean).
mean_loss = tf.keras.metrics.Mean("loss")
mean_accuracy = tf.keras.metrics.Accuracy("accuracy")
# Run the code below to check that you completed the code successfully.
#
# If there is no problem, you can keep going.
checker.loss_function_check(loss_object)
checker.optimizer_check(optimizer)
# ## 7. Defining the train, validation, and test functions
#
# Next we define the functions that run training, validation, and testing.
# ### <font color='red'>[TODO] Implementation: training function</font>
#
# First, the training function. Read the following and complete the code.
#
# The `train_step` function, which runs one training step within an epoch, takes `model`, `images`, and `labels` as arguments.
# 1. `predictions`: feed the data to the `model` and store the result in `predictions`.
# 2. Compute the loss with the `loss_object` defined in **<6. Defining the loss function and optimizer>**. (Matching the shapes of `labels` and `predictions` is important, i.e. `labels` must be converted to a **one-hot vector**.)
# 3. Use the `gradient` method of the [`tf.GradientTape()`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) object named `tape` to differentiate `loss_value` with respect to `model.trainable_variables`, and store the result in `gradients`.
# 4. Update the parameters with [`optimizer.apply_gradients()`](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer#methods), passing `gradients` and `model.trainable_variables` `zip`ped together.
#
# ### Define training one step function
#
# **Write the train_step function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" and "<font color='45A07A'>## 코드 종료 ##</font>".**
def train_step(model, images, labels):
## 코드 시작 ##
with tf.GradientTape() as tape:
        predictions = None               # fill in None, see step 1 above
        loss_value = None                # fill in None, see step 2 above
    gradients = None                     # fill in None, see step 3 above
    optimizer.apply_gradients(None)      # fill in None, see step 4 above
## 코드 종료 ##
mean_accuracy(labels, tf.argmax(predictions, axis=1))
return loss_value
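# A reference sketch of one possible train_step, assuming loss_object, optimizer, and mean_accuracy have been filled in above; labels are one-hot encoded with depth 2 (cat/dog) to match the 2-unit softmax output.
def train_step_example(model, images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)
        loss_value = loss_object(tf.one_hot(labels, depth=2), predictions)
    gradients = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    mean_accuracy(labels, tf.argmax(predictions, axis=1))
    return loss_value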
# ### Define validation and test functions
#
# ### <font color='red'>[TODO] Implementation: validation function</font>
#
# The validation function. Read the following and complete the code.
#
# During validation we do not update any parameters, so there is no need to compute gradients. Every few epochs we run validation on the `val_dataset`. If the average validation loss is the lowest seen so far, we treat the model as the best trained so far and save it; the saving is done by the `save_model` function implemented below.
#
# The `validation` function takes `model`, `val_dataset`, and `epoch` as arguments.
#
# 1. `predictions`: feed the data to the `model` and store the result in `predictions`.
#     * Remember that during validation the model must run in evaluation mode, because `Batch normalization` and `Dropout` behave differently during training and evaluation. Whether the `model` is in `training mode` or `validation mode` is controlled by the `training` argument of `model.call(..., training=True)`: `True` means `training mode`, `False` means `validation mode` or `test mode`.
# 2. As before, compute the loss with the `loss_object` defined in **<6. Defining the loss function and optimizer>**. (Matching the shapes of `labels` and `predictions` is important.)
#     * That is, `labels` must be converted to a **one-hot vector**.
#
# **Write the validation function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" and "<font color='45A07A'>## 코드 종료 ##</font>".**
def validation(model, val_dataset, epoch):
print('Start validation..')
val_mean_loss = tf.keras.metrics.Mean("val_loss")
val_mean_accuracy = tf.keras.metrics.Accuracy("val_acc")
for step, (images, labels) in enumerate(val_dataset):
## 코드 시작 ##
        predictions = None     # fill in None, see step 1 above
        val_loss_value = None  # fill in None, see step 2 above
## 코드 종료 ##
val_mean_loss(val_loss_value)
val_mean_accuracy(labels, tf.argmax(predictions, axis=1))
print('Validation #{} epoch Average Loss: {:.4g} Accuracy: {:.4g}%\n'.format(
epoch, val_mean_loss.result(), val_mean_accuracy.result() * 100))
return val_mean_loss.result(), val_mean_accuracy.result()
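# A reference sketch of one possible validation implementation (same signature as above); it assumes the loss_object from section 6 and uses one-hot labels with depth 2.
def validation_example(model, val_dataset, epoch):
    val_mean_loss = tf.keras.metrics.Mean("val_loss")
    val_mean_accuracy = tf.keras.metrics.Accuracy("val_acc")
    for step, (images, labels) in enumerate(val_dataset):
        predictions = model(images, training=False)   # evaluation mode for BatchNorm
        val_loss_value = loss_object(tf.one_hot(labels, depth=2), predictions)
        val_mean_loss(val_loss_value)
        val_mean_accuracy(labels, tf.argmax(predictions, axis=1))
    print('Validation #{} epoch Average Loss: {:.4g} Accuracy: {:.4g}%\n'.format(
        epoch, val_mean_loss.result(), val_mean_accuracy.result() * 100))
    return val_mean_loss.result(), val_mean_accuracy.result()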
# ### <font color='red'>[TODO] Implementation: test function</font>
#
# The test function. Read the following and complete the code.
#
# * Feed the input images to the `model` and store the result in `predictions`.
# * During testing there is no need to compute a loss; we just check the model's performance through the overall accuracy.
#
# **Write the test function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" and "<font color='45A07A'>## 코드 종료 ##</font>".**
def test(model, test_dataset):
print('Start test..')
test_mean_accuracy = tf.keras.metrics.Accuracy("test_acc")
for step, (images, labels) in enumerate(test_dataset):
## 코드 시작 ##
predictions = None
## 코드 종료 ##
test_mean_accuracy(labels, tf.argmax(predictions, axis=1))
print('Test accuracy: {:.4g}%'.format(test_mean_accuracy.result() * 100))
return test_mean_accuracy.result()
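# For the None above, one possible answer is predictions = model(images, training=False); a minimal runnable sketch:
def test_example(model, test_dataset):
    test_mean_accuracy = tf.keras.metrics.Accuracy("test_acc")
    for images, labels in test_dataset:
        predictions = model(images, training=False)
        test_mean_accuracy(labels, tf.argmax(predictions, axis=1))
    return test_mean_accuracy.result()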
# ### Defining the model-saving function
def save_model(model, epoch, train_dir):
model_name = 'my_model_' + str(epoch)
model.save_weights(os.path.join(train_dir, model_name))
# ## 8. Training
#
# Training is run through the `train_step` function. The network is not particularly large, but without a GPU it trains on the CPU and takes some time, roughly 20~30 minutes depending on your machine. If you are short on time, it is fine to confirm that training is progressing reasonably and move on to the next step.
#
# If the training accuracy does not increase even after waiting a while, there may be a problem in your implementation; in that case, re-check your `train_step` function.
#
# Also, if the model-saving code is implemented correctly, a `my_model_{}.data` file should appear under the `train_dir` path after the first training epoch. If the file is missing, re-check the model-saving code.
#
# `val_epoch` sets how often (in epochs) validation runs. `print_steps` controls how often training progress is printed; the smaller it is, the more often progress is printed. Adjust it as you like.
train_dir = os.path.join('./train/exp1')
print_steps = 25
val_epoch = 1
# ### <font color='red'>[TODO] Implementation: main function</font>
#
# Using the `train_step` and `validation` functions defined earlier, write the `main` function that runs the whole training. The overall process is:
#
# * Train for `max_epochs`, repeating the steps below.
#     * Take a mini-batch of data and labels from `train_dataset`.
#     * Train with the `train_step` function.
#     * Validate the model with the `validation` function; if the classification performance is better than before, the model is saved.
#
# **Write the main function! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" and "<font color='45A07A'>## 코드 종료 ##</font>".**
def main(model, train_dataset, val_dataset, val_epoch, print_steps, save_dir):
print('Start training..')
num_batches_per_epoch = len(list(train_dataset))
global_step = 0
best_acc = 0.
for epoch in range(max_epochs):
for step, (images, labels) in enumerate(train_dataset):
start_time = time.time()
            # compute the loss with the train_step function
## 코드 시작 ##
loss_value = None
## 코드 종료 ##
mean_loss(loss_value)
global_step += 1
if global_step % print_steps == 0:
duration = time.time() - start_time
examples_per_sec = batch_size / float(duration)
print("Epochs: [{}/{}] step: [{}/{}] loss: {:.4g} acc: {:.4g}% ({:.2f} examples/sec; {:.3f} sec/batch)".format(
epoch+1, max_epochs, step+1, num_batches_per_epoch,
mean_loss.result(), mean_accuracy.result() * 100, examples_per_sec, duration))
# clear the history
mean_loss.reset_states()
mean_accuracy.reset_states()
if (epoch + 1) % val_epoch == 0:
            # run validation with the validation function
            # note: epoch starts from 0 here, so pass epoch + 1
## 코드 시작 ##
val_mean_loss, val_mean_accuracy = None
## 코드 종료 ##
if val_mean_accuracy > best_acc:
print('Best performance at epoch: {}'.format(epoch + 1))
print('Save in {}\n'.format(save_dir))
best_acc = val_mean_accuracy
save_model(model, epoch+1, save_dir)
print('training done..')
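# For the two gaps in main above, one possible answer (relying on the train_step and validation functions you implemented) is:
#     loss_value = train_step(model, images, labels)
# and
#     val_mean_loss, val_mean_accuracy = validation(model, val_dataset, epoch + 1)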
# Start training the model with the following code!
main(model, train_dataset, val_dataset, val_epoch, print_steps, save_dir=train_dir)
# ## 9. Loading the saved model and testing
#
# We now test the performance of the trained model. The saved model file can be loaded with `model.load_weights()`. For more detail on saving and loading models, see the [Save and restore](https://www.tensorflow.org/guide/keras#save_and_restore) guide. If you did not finish training above, you can uncomment the first commented-out line to load the pre-trained model provided for you.
# +
# train_dir = './train/pretrained/SimpleCNN' # use this if you did not train the model to the end
# To verify that the model loading below is implemented correctly, declare a fresh model so it is reset to its untrained state
model = SimpleCNN()
# Build the model by feeding it inputs
for images, labels in train_dataset.take(1):
    outputs = model(images, training=False)
# Load the model parameters
model.load_weights(tf.train.latest_checkpoint(train_dir))
# -
# Finally, we test the model's performance. If you get an accuracy of around 76%, training and model loading were successful.
test_acc_value = test(model, test_dataset)
# The trained model's predictions are visualized below. A label shown in <font color='blue'>blue</font> means the model predicted correctly, and <font color='red'>red</font> means it predicted incorrectly. For wrong cases, the prediction and the true label are both shown (e.g. predicted/actual).
# +
test_batch_size = 25
for images, labels in test_dataset.take(1):
predictions = model(images)
images = images[:test_batch_size]
labels = labels[:test_batch_size]
predictions = predictions[:test_batch_size]
labels_map = {0: 'cat', 1: 'dog'}
# Visualization
fig = plt.figure(figsize=(10, 10))
for i, (px, py, y_pred) in enumerate(zip(images, labels, predictions)):
p = fig.add_subplot(5, 5, i+1)
if np.argmax(y_pred.numpy()) == py.numpy():
p.set_title("{}".format(labels_map[py.numpy()]), color='blue')
else:
p.set_title("{}/{}".format(labels_map[np.argmax(y_pred.numpy())],
labels_map[py.numpy()]), color='red')
p.imshow(px.numpy()*0.5+0.5)
p.axis('off')
# -
# ## 10. Transfer Learning
#
# Before finishing the exercise, we will implement transfer learning and check its performance.
# Transfer learning means initializing your model's parameters with the parameters of a model pre-trained for a similar purpose and then continuing training from there. So what advantages make transfer learning worth using?
#
# When you apply deep learning in practice you commonly run into practical constraints: lack of data, lack of computing resources, and lack of time are typical examples. If there is a model that was pre-trained, on a very large amount of data, for a problem that is not identical to ours but reasonably related, that model can learn our problem much faster and better than a model starting from a blank slate.
# The implementation is not hard at all. First we need to load the pre-trained model.
# The model we load here is VGG16, trained for image classification on the large-scale ImageNet dataset.
# We load VGG16 with `tf.keras.applications`; downloading the trained parameters may take a few minutes.
conv_base = tf.keras.applications.VGG16(weights='imagenet',
include_top=False,
input_shape=(IMG_SIZE, IMG_SIZE, 3))
# The VGG16 we loaded, trained on ImageNet, is a network that classifies 1000 classes. Broadly, it consists of the following layers.
#
# >```
# conv_block1
# conv_block2
# conv_block3
# conv_block4
# conv_block5
# dense1 (4096)
# dense2 (4096)
# dense3 (1000)
# >```
#
# Calling `tf.keras.applications.VGG16()` downloads and stores all of these layers. The important argument here is `include_top`. With `include_top=False`, the `dense1` ~ `dense3` layers of VGG16 are not loaded and only the `conv` layers are. In deep networks, layers close to the input tend to learn general-purpose features, while layers further from the input learn features that depend more on the dataset. So when you want to train on your own data via transfer learning, you usually keep only the `conv` layers that extract general features, drop the `dense` layers, and rebuild the `dense` layers to fit your dataset. `include_top=False` is the argument used to build such a conv-only model.
#
# Now we will attach new `dense layers` to the `conv_base` we just loaded to build our model, and train only the newly added `dense layers` and the last `conv block` (officially named `block5_conv1` ~ `block5_conv3`).
#
# Of course, depending on the situation you may continue training the whole network rather than only the last dense layers; this is called fine tuning the parameters. ImageNet includes many animal classes, so the VGG16 we loaded already extracts features of animals such as dogs and cats quite well, and fine tuning is not strictly necessary. Moreover, when training on a small dataset as in this exercise, fine tuning increases the risk of overfitting and can actually hurt performance.
#
# ### <font color='red'>[TODO] Implementation: dense layers</font>
#
# Read the following and complete the code.
#
# - Add a dense layer with hidden_size = 256 and a final dense layer with hidden_size = num_class.
# - The first layer uses `ReLU` and the second uses `softmax` as its activation function.
#
# **Add the dense layer code for transfer learning! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" and "<font color='45A07A'>## 코드 종료 ##</font>".**
def get_transfer_learning_model(conv_base):
model = tf.keras.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
## 코드 시작 ##
model.add(None)
model.add(None)
## 코드 종료 ##
return model
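# A reference sketch of one possible completion (alternate name so it does not overwrite the exercise function); 256 hidden units and 2 softmax outputs follow the bullet points above, and `layers` is the same tf.keras.layers alias already used in this cell.
def get_transfer_learning_model_example(conv_base):
    model = tf.keras.Sequential()
    model.add(conv_base)
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation='relu'))
    model.add(layers.Dense(2, activation='softmax'))
    return model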
model = get_transfer_learning_model(conv_base)
# check the trainable variables
for var in model.trainable_variables:
print(var.name)
# The model now consists of the following layers.
#
# >```
# conv_block1
# conv_block2
# conv_block3
# conv_block4
# conv_block5
# new_dense1
# new_dense2
# >```
#
# We will fine-tune from `conv_block5` through the newly attached `new_dense` layers. To do that, the code below separates the layers that will be trained from those that will not.
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
if layer.name == 'block5_conv1':
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
# check the trainable variables
for var in model.trainable_variables:
print(var.name)
# By toggling the trainable flag (True/False) we made only the layers from `conv_block5` onward trainable.
# Run the code below to check that you completed the code successfully.
#
# If there is no problem, you can keep going.
checker.final_dense_check(model)
# Now we train the loaded model. The network we built ourselves ran for about 20 epochs, but transfer learning starts from already-trained parameters, so even a short run gives good performance. We will run only 5 epochs.
# ### <font color='red'>[TODO] Implementation: loss function and optimizer for re-training</font>
#
# We define the loss function and optimizer again to train the loaded model.
#
# **Write a new loss function and optimizer for transfer learning! Fill in the <font color='075D37'>None</font> parts between "<font color='45A07A'>## 코드 시작 ##</font>" and "<font color='45A07A'>## 코드 종료 ##</font>".**
# +
## 코드 시작 ##
loss_object = None
optimizer = None
## 코드 종료 ##
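# One possible answer mirrors section 6: CategoricalCrossentropy for the loss and Adam for the optimizer. (A smaller learning rate than before is a common, but optional, choice when fine-tuning a pre-trained network; here the same learning_rate from section 3 is assumed.)
loss_object_example = tf.keras.losses.CategoricalCrossentropy()
optimizer_example = tf.keras.optimizers.Adam(learning_rate=learning_rate)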
train_dir = os.path.join('./train/exp1/resnet')
# -
# ### Training for fine tuning
#
# Run the following code to train the newly defined VGG network!
main(model, train_dataset, val_dataset, val_epoch, print_steps, save_dir=train_dir)
# After training finishes, we create a fresh model, load the best-performing weights, and run the test. If you did not finish training above, you can uncomment the first commented-out line to load the pre-trained model provided for you.
# +
# train_dir = './train/pretrained/resnet' # use this if you did not train the model to the end
model = get_transfer_learning_model(conv_base)
model.load_weights(tf.train.latest_checkpoint(train_dir))
# -
# Run the test. If the accuracy is around 95%, everything went well. Even though we trained only the last few layers, the performance is far better than our SimpleCNN.
test_acc_value = test(model, test_dataset)
# ## 11. Summary
#
# With this, another project is complete. Well done!
#
# Through this exercise we learned the following:
# - We can define a customized Dataset class to fit our own dataset.
# - We can design a CNN and train an image classifier.
# - We can save and load a trained model.
# - We can use transfer learning to overcome a shortage of data, resources, or time.
# ---
# # Self-Review
#
# Follow the submission method that matches your environment!
#
# ### Running locally
#
# 1. After finishing every exercise, save the Jupyter Notebook with `Ctrl+S` or `File > Save and checkpoint`.
# 2. Run the code at the very bottom. Note that you must not rename the notebook file; if you renamed it, change it back to "tensorflow-cnn-project". If every criterion passes, running the function creates a "submit" directory in the project and a compressed "submit.zip". Open "cnn_submission.tsv" and check that everything passed!
#     * "cnn_submission.tsv": a Pass/Fail file for each item of the grading rubric
#     * "cnn_submission.html": your Jupyter Notebook converted to HTML
# 3. Following the instructions printed by the code, check the `submit.zip` file and submit it.
#
# ### Running on Colab
#
# 1. After finishing every exercise, save the Jupyter Notebook with `Ctrl+S`.
# 2. Run the code at the very bottom and follow its instructions to revise your work or move on. If every criterion passes, running the function creates only the project "submit" directory and "cnn_submission.tsv". Open "cnn_submission.tsv" and check that everything passed!
#     * "cnn_submission.tsv": a Pass/Fail file for each item of the grading rubric
# 3. Download `cnn_submission.tsv` from the `submit` folder of the drive where the project is saved.
# 4. Download the notebook from Colab via `File > Download .ipynb`.
# 5. Start Jupyter Notebook locally.
# 6. Open the notebook downloaded in step 4 and download it again via `File > Download as > HTML(.html)`.
# 7. Put the files from steps 3 and 6 in one folder, compress it as `submit.zip`, and submit it.
import check_util.submit as submit
submit.process_submit()
| 43,364 |
/LinearRegression/Intro.ipynb
|
a1aab6de78d5041c08a06557e4d67876dcc2971a
|
[] |
no_license
|
RayDragon/ML
|
https://github.com/RayDragon/ML
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,730 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="G3V3uxxIi_Mi"
# 1. What exactly is [ ]?
# + [markdown] id="5Qo0ZIXnjHHc"
# **[ ], square brackets** are used to create the list
# + [markdown] id="qzLzesSpjyI7"
# 2. In a list of values stored in a variable called spam, how would you assign the value 'hello' as the third value? (Assume [2, 4, 6, 8, 10] are in spam.)
# + colab={"base_uri": "https://localhost:8080/"} id="97-jDu_8ibvc" outputId="a0ea7de2-0bb7-4658-d3be-b2b684d4dd9f"
spam = [2, 4, 6, 8, 10]
spam[2] = 'hello' #Lists are changeable data types
spam
# + [markdown] id="GeQ2iVX8kxg3"
# Let's pretend the spam includes the list ['a', 'b', 'c', 'd'] for the next three queries.
# 3. What is the value of spam[int(int('3' * 2) / 11)]?
#
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="ApvnUvGzkR0n" outputId="6e4bd634-0a77-4b13-8431-f17add278c0a"
spam = ['a', 'b', 'c', 'd']
spam[int(int('3' * 2) / 11)] # int(33/11)
# + [markdown] id="ZbTNbJdympRr"
# 4. What is the value of spam[-1]?
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="Acz08Bi8m7xv" outputId="718c115e-9ea4-40b2-bbda-96aabeabf0f0"
spam[-1]
# + [markdown] id="NLZ1mdGenBPt"
# 5. What is the value of spam[:2]?
# + colab={"base_uri": "https://localhost:8080/"} id="MZwPW0IZnFFX" outputId="63ec81af-bffd-4a17-db31-a0966f2ddd03"
spam[:2]
# + [markdown] id="Q6g5AlGVnLXq"
# Let's pretend bacon has the list [3.14, 'cat', 11, 'cat', True] for the next three questions.
# 6. What is the value of bacon.index('cat')?
#
# + colab={"base_uri": "https://localhost:8080/"} id="eIexJhm9ni2_" outputId="e2318a5f-e29c-4c38-cbfd-b1ccece5f51b"
bacon = [3.14, 'cat', 11, 'cat', True]
bacon.index('cat') # index() returns the position of the first occurrence when a value appears more than once
# + [markdown] id="b9XzTwfao6uc"
# 7. How does bacon.append(99) change the look of the list value in bacon?
# + colab={"base_uri": "https://localhost:8080/"} id="wtXTXmX_pV0I" outputId="f29dbf29-987b-4eb3-e971-16b557a82914"
bacon.append(99)
bacon # append(99) adds the value 99 to the end of the list; re-running this cell appends another 99 each time (it was run three times here)
# + [markdown] id="Z9fW5nHCpqsJ"
# 8. How does bacon.remove('cat') change the look of the list in bacon?
# + colab={"base_uri": "https://localhost:8080/"} id="9AEdOff2pX9S" outputId="0d7e0f12-0363-462a-eaac-07896b2acc6a"
bacon.remove('cat') # remove() deletes the first (lowest-index) occurrence, since the list is scanned from the left
bacon
# + [markdown] id="gn9R27KgqTtm"
# 9. What are the list concatenation and list replication operators?
# + [markdown] id="ZkF5w-MWqYXq"
# The operator for list concatenation is **+** and the operator for list replication is `*`.
# + [markdown] id="H3dMHacIq22t"
# 10. What is difference between the list methods append() and insert()?
# + [markdown] id="JwGjxGzmq_W6"
# The **append()** method adds a value at the end of the list, while the **insert()** method can add a value at any position in the list.
# + [markdown] id="2wwqRjhKrTJp"
# 11. What are the two methods for removing items from a list?
# + [markdown] id="raHkPHiprW-P"
# The **del** statement and the **remove()** method are the two ways to remove values from a list.
# + [markdown] id="RpYoZlKvrpGY"
# 12. Describe how list values and string values are identical.
# + [markdown] id="WVkdfrdQrtjv"
# Both lists and strings can be passed to len(), have indexes and slices, be used in for loops, be concatenated or replicated, and be used with the in and not in operators.
# + [markdown] id="dC8WzMGVsDRu"
# 13. What's the difference between tuples and lists?
# + [markdown] id="gJKQIFZBsIjD"
# **Lists are mutable and tuples are immutable. We define tuples with ( ) and lists with [ ]**
# + [markdown] id="o67RxXpxshaP"
# 14. How do you type a tuple value that only contains the integer 42?
# + [markdown] id="4sum97Nas3Bs"
# (42,) **trailing comma is mandatory**
# + [markdown] id="904RlXrLtEhv"
# 15. How do you get a list value's tuple form? How do you get a tuple value's list form?
# + [markdown] id="7KAPF2fMtlvZ"
# We can use the **tuple()** and **list()** functions respectively.
# + [markdown] id="TIzaM0AktssX"
# 16. Variables that "contain" list values are not necessarily lists themselves. Instead, what do they contain?
# + [markdown] id="z3_3Pnfxt3Xt"
# They contain **references** to the list values.
# + [markdown] id="_ZVZp2fQuHxU"
# 17. How do you distinguish between copy.copy() and copy.deepcopy()?
# + [markdown] id="4nxKe0CwuLTU"
# The copy.copy() function will do a shallow copy of a list, while the copy.deepcopy() function will do a deep copy of a list. That is, only copy.deepcopy() will duplicate any lists inside the list.
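# A quick hypothetical illustration of the difference (the values are made up): copy.copy() shares nested lists, copy.deepcopy() duplicates them.
import copy
spam = [[1, 2], [3, 4]]
shallow = copy.copy(spam)
deep = copy.deepcopy(spam)
spam[0].append(99)
print(shallow[0])  # [1, 2, 99] -> the inner list is shared with the original
print(deep[0])     # [1, 2]     -> the inner list was fully duplicated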
# So far, we can see that a simple model and a straightforward training procedure already give fairly good results on both the training and test sets, which suggests this dataset is not actually very hard.
model_with_augment = get_model()
model_with_augment.compile(loss='mean_squared_error', optimizer='adam')
# Your code
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(horizontal_flip = True)
train_image = imgs_train.reshape(-1, 96, 96, 1)
train_generator = datagen.flow(train_image, points_train,batch_size = 8)
hist_model=model_with_augment.fit_generator(train_generator, steps_per_epoch=len(train_image)//8,epochs=50,
callbacks=[checkpoint, hist],shuffle=True,verbose=1,
validation_data=(imgs_test.reshape(imgs_test.shape[0], 96, 96, 1), points_test))
# +
# Plot the loss values
plt.title('Optimizer : Adam', fontsize=10)
plt.ylabel('Loss', fontsize=16)
plt.plot(hist_model.history['loss'], color='b', label='Training Loss')
plt.plot(hist_model.history['val_loss'], color='r', label='Validation Loss')
plt.legend(loc='upper right')
# Function to draw the keypoints on a grayscale image
def plot_keypoints(img, points):
plt.imshow(img, cmap='gray')
for i in range(0,30,2):
plt.scatter((points[i] + 0.5)*96, (points[i+1]+0.5)*96, color='red')
fig = plt.figure(figsize=(15,15))
# Predict keypoints on the test-set images with the model we just trained
# (use model_with_augment here, and keep the predictions separate from the points_test labels)
points_pred = model_with_augment.predict(imgs_test.reshape(imgs_test.shape[0], 96, 96, 1))
for i in range(25):
    ax = fig.add_subplot(5, 5, i + 1, xticks=[], yticks=[])
    plot_keypoints(imgs_test[i], np.squeeze(points_pred[i]))
# -
# Your code
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(horizontal_flip = True,
zoom_range = [0.8 , 1.2 ],
rotation_range = 30 ,
width_shift_range = 10.0 ,
height_shift_range = 10.0 )
train_image = imgs_train.reshape(-1, 96, 96, 1)
train_generator = datagen.flow(train_image, points_train,batch_size = 8)
hist_model=model_with_augment.fit_generator(train_generator, steps_per_epoch=len(train_image)//8,epochs=50,
callbacks=[checkpoint, hist],shuffle=True,verbose=1,
validation_data=(imgs_test.reshape(imgs_test.shape[0], 96, 96, 1), points_test))
# +
# Plot the loss values
plt.title('Optimizer : Adam', fontsize=10)
plt.ylabel('Loss', fontsize=16)
plt.plot(hist_model.history['loss'], color='b', label='Training Loss')
plt.plot(hist_model.history['val_loss'], color='r', label='Validation Loss')
plt.legend(loc='upper right')
# Function to draw the keypoints on a grayscale image
def plot_keypoints(img, points):
plt.imshow(img, cmap='gray')
for i in range(0,30,2):
plt.scatter((points[i] + 0.5)*96, (points[i+1]+0.5)*96, color='red')
fig = plt.figure(figsize=(15,15))
# Predict keypoints on the test-set images with the model we just trained
points_pred = model_with_augment.predict(imgs_test.reshape(imgs_test.shape[0], 96, 96, 1))
for i in range(25):
    ax = fig.add_subplot(5, 5, i + 1, xticks=[], yticks=[])
    plot_keypoints(imgs_test[i], np.squeeze(points_pred[i]))
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import array_ops
from tensorflow.python.framework import dtypes
# %matplotlib inline
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras_lookahead import Lookahead
from keras_radam import RAdam
Ranger=Lookahead(RAdam(total_steps=10000, warmup_proportion=0.1, min_lr=1e-5))
def pairwise_distance(feature, squared=False):
"""Computes the pairwise distance matrix with numerical stability.
output[i, j] = || feature[i, :] - feature[j, :] ||_2
Args:
feature: 2-D Tensor of size [number of data, feature dimension].
squared: Boolean, whether or not to square the pairwise distances.
Returns:
pairwise_distances: 2-D Tensor of size [number of data, number of data].
"""
pairwise_distances_squared = math_ops.add(
math_ops.reduce_sum(math_ops.square(feature), axis=[1], keepdims=True),
math_ops.reduce_sum(
math_ops.square(array_ops.transpose(feature)),
axis=[0],
keepdims=True)) - 2.0 * math_ops.matmul(feature,
array_ops.transpose(feature))
# Deal with numerical inaccuracies. Set small negatives to zero.
pairwise_distances_squared = math_ops.maximum(pairwise_distances_squared, 0.0)
# Get the mask where the zero distances are at.
error_mask = math_ops.less_equal(pairwise_distances_squared, 0.0)
# Optionally take the sqrt.
if squared:
pairwise_distances = pairwise_distances_squared
else:
pairwise_distances = math_ops.sqrt(
pairwise_distances_squared + math_ops.to_float(error_mask) * 1e-16)
# Undo conditionally adding 1e-16.
pairwise_distances = math_ops.multiply(
pairwise_distances, math_ops.to_float(math_ops.logical_not(error_mask)))
num_data = array_ops.shape(feature)[0]
# Explicitly set diagonals to zero.
mask_offdiagonals = array_ops.ones_like(pairwise_distances) - array_ops.diag(
array_ops.ones([num_data]))
pairwise_distances = math_ops.multiply(pairwise_distances, mask_offdiagonals)
return pairwise_distances
def masked_maximum(data, mask, dim=1):
"""Computes the axis wise maximum over chosen elements.
Args:
data: 2-D float `Tensor` of size [n, m].
mask: 2-D Boolean `Tensor` of size [n, m].
dim: The dimension over which to compute the maximum.
Returns:
masked_maximums: N-D `Tensor`.
The maximized dimension is of size 1 after the operation.
"""
axis_minimums = math_ops.reduce_min(data, dim, keepdims=True)
masked_maximums = math_ops.reduce_max(
math_ops.multiply(data - axis_minimums, mask), dim,
keepdims=True) + axis_minimums
return masked_maximums
def masked_minimum(data, mask, dim=1):
"""Computes the axis wise minimum over chosen elements.
Args:
data: 2-D float `Tensor` of size [n, m].
mask: 2-D Boolean `Tensor` of size [n, m].
dim: The dimension over which to compute the minimum.
Returns:
masked_minimums: N-D `Tensor`.
The minimized dimension is of size 1 after the operation.
"""
axis_maximums = math_ops.reduce_max(data, dim, keepdims=True)
masked_minimums = math_ops.reduce_min(
math_ops.multiply(data - axis_maximums, mask), dim,
keepdims=True) + axis_maximums
return masked_minimums
def triplet_loss_adapted_from_tf(y_true, y_pred):
del y_true
margin = 1.
labels = y_pred[:, :1]
labels = tf.cast(labels, dtype='int32')
embeddings = y_pred[:, 1:]
### Code from Tensorflow function [tf.contrib.losses.metric_learning.triplet_semihard_loss] starts here:
# Reshape [batch_size] label tensor to a [batch_size, 1] label tensor.
# lshape=array_ops.shape(labels)
# assert lshape.shape == 1
# labels = array_ops.reshape(labels, [lshape[0], 1])
# Build pairwise squared distance matrix.
pdist_matrix = pairwise_distance(embeddings, squared=True)
# Build pairwise binary adjacency matrix.
adjacency = math_ops.equal(labels, array_ops.transpose(labels))
# Invert so we can select negatives only.
adjacency_not = math_ops.logical_not(adjacency)
# global batch_size
batch_size = array_ops.size(labels) # was 'array_ops.size(labels)'
# Compute the mask.
pdist_matrix_tile = array_ops.tile(pdist_matrix, [batch_size, 1])
mask = math_ops.logical_and(
array_ops.tile(adjacency_not, [batch_size, 1]),
math_ops.greater(
pdist_matrix_tile, array_ops.reshape(
array_ops.transpose(pdist_matrix), [-1, 1])))
mask_final = array_ops.reshape(
math_ops.greater(
math_ops.reduce_sum(
math_ops.cast(mask, dtype=dtypes.float32), 1, keepdims=True),
0.0), [batch_size, batch_size])
mask_final = array_ops.transpose(mask_final)
adjacency_not = math_ops.cast(adjacency_not, dtype=dtypes.float32)
mask = math_ops.cast(mask, dtype=dtypes.float32)
# negatives_outside: smallest D_an where D_an > D_ap.
negatives_outside = array_ops.reshape(
masked_minimum(pdist_matrix_tile, mask), [batch_size, batch_size])
negatives_outside = array_ops.transpose(negatives_outside)
# negatives_inside: largest D_an.
negatives_inside = array_ops.tile(
masked_maximum(pdist_matrix, adjacency_not), [1, batch_size])
semi_hard_negatives = array_ops.where(
mask_final, negatives_outside, negatives_inside)
loss_mat = math_ops.add(margin, pdist_matrix - semi_hard_negatives)
mask_positives = math_ops.cast(
adjacency, dtype=dtypes.float32) - array_ops.diag(
array_ops.ones([batch_size]))
# In lifted-struct, the authors multiply 0.5 for upper triangular
# in semihard, they take all positive pairs except the diagonal.
num_positives = math_ops.reduce_sum(mask_positives)
semi_hard_triplet_loss_distance = math_ops.truediv(
math_ops.reduce_sum(
math_ops.maximum(
math_ops.multiply(loss_mat, mask_positives), 0.0)),
num_positives,
name='triplet_semihard_loss')
### Code from Tensorflow function semi-hard triplet loss ENDS here.
return semi_hard_triplet_loss_distance
# -
model_with_augment = get_model()
model_with_augment.compile(loss=triplet_loss_adapted_from_tf, optimizer=Ranger)
# Your code
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(horizontal_flip = True,
zoom_range = [0.8 , 1.2 ],
rotation_range = 30 ,
width_shift_range = 10.0 ,
height_shift_range = 10.0 )
train_image = imgs_train.reshape(-1, 96, 96, 1)
train_generator = datagen.flow(train_image, points_train,batch_size = 8)
hist_model=model_with_augment.fit_generator(train_generator, steps_per_epoch=len(train_image)//8,epochs=10,
callbacks=[checkpoint, hist],shuffle=True,verbose=1,
validation_data=(imgs_test.reshape(imgs_test.shape[0], 96, 96, 1), points_test))
# +
# Plot the loss values
plt.title('Optimizer : Ranger (Lookahead + RAdam)', fontsize=10)
plt.ylabel('Loss', fontsize=16)
plt.plot(hist_model.history['loss'], color='b', label='Training Loss')
plt.plot(hist_model.history['val_loss'], color='r', label='Validation Loss')
plt.legend(loc='upper right')
# Function to draw the keypoints on a grayscale image
def plot_keypoints(img, points):
plt.imshow(img, cmap='gray')
for i in range(0,30,2):
plt.scatter((points[i] + 0.5)*96, (points[i+1]+0.5)*96, color='red')
fig = plt.figure(figsize=(15,15))
# Predict keypoints on the test-set images with the model we just trained
points_pred = model_with_augment.predict(imgs_test.reshape(imgs_test.shape[0], 96, 96, 1))
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_keypoints(imgs_test[i], np.squeeze(points_pred[i]))
# +
# Function that horizontally flips an image together with its keypoint labels
def augment_data(img, points):
rows, cols = img.shape
new_img = np.copy(img)
    # Flip the image
for i in range(96):
for j in range(48):
temp = img[i][j]
new_img[i][j] = img[i][cols-j-1]
new_img[i][cols-j-1] = temp
    # Flip the keypoint labels
new_points = np.copy(points)
for i in range(0,30,2):
new_points[i] = -points[i]
    # Swap the symmetric (left/right) labels
new_points_temp = np.copy(new_points)
new_points[0:2] = new_points_temp[2:4]
new_points[2:4] = new_points_temp[0:2]
new_points[4:6] = new_points_temp[8:10]
new_points[6:8] = new_points_temp[10:12]
new_points[8:10] = new_points_temp[4:6]
new_points[10:12] = new_points_temp[6:8]
new_points[12:14] = new_points_temp[16:18]
new_points[14:16] = new_points_temp[18:20]
new_points[16:18] = new_points_temp[12:14]
new_points[18:20] = new_points_temp[14:16]
new_points[22:24] = new_points_temp[24:26]
new_points[24:26] = new_points_temp[22:24]
return new_img, new_points
flip_img, flip_points = augment_data(imgs_train[0], points_train[0])
fig = plt.figure()
ax = fig.add_subplot(1, 2, 1, xticks=[], yticks=[])
plot_keypoints(imgs_train[0], points_train[0]) # original image
ax = fig.add_subplot(1, 2, 2, xticks=[], yticks=[])
plot_keypoints(flip_img, flip_points) # flipped image
# +
# Create lists
aug_imgs_train = []
aug_points_train = []
# Apply augmentation to every original sample
for i in range(imgs_train.shape[0]):
    # Horizontal flip
    aug_img, aug_point = augment_data(imgs_train[i], points_train[i])
    # Append the original sample
    aug_imgs_train.append(imgs_train[i])
    aug_points_train.append(points_train[i])
    # Append the augmented sample
    aug_imgs_train.append(aug_img)
    aug_points_train.append(aug_point)
# convert to numpy
aug_imgs_train = np.array(aug_imgs_train)
aug_points_train = np.copy(aug_points_train)
print(aug_imgs_train.shape)
print(aug_points_train.shape)
# -
model_with_augment = get_model()
model_with_augment.compile(loss=triplet_loss_adapted_from_tf, optimizer=Ranger)
# Your code
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(zoom_range = [0.8 , 1.2 ] )
train_image = aug_imgs_train.reshape(-1, 96, 96, 1)
train_generator = datagen.flow(train_image, aug_points_train,batch_size = 8)
hist_model=model_with_augment.fit_generator(train_generator, steps_per_epoch=len(train_image)//8,epochs=10,
callbacks=[checkpoint, hist],shuffle=True,verbose=1,
validation_data=(imgs_test.reshape(imgs_test.shape[0], 96, 96, 1), points_test))
# +
# Plot the loss values
plt.title('Optimizer : Ranger (Lookahead + RAdam)', fontsize=10)
plt.ylabel('Loss', fontsize=16)
plt.plot(hist_model.history['loss'], color='b', label='Training Loss')
plt.plot(hist_model.history['val_loss'], color='r', label='Validation Loss')
plt.legend(loc='upper right')
# Function to draw the keypoints on a grayscale image
def plot_keypoints(img, points):
plt.imshow(img, cmap='gray')
for i in range(0,30,2):
plt.scatter((points[i] + 0.5)*96, (points[i+1]+0.5)*96, color='red')
fig = plt.figure(figsize=(15,15))
# Predict keypoints on the test-set images with the model we just trained
points_pred = model_with_augment.predict(imgs_test.reshape(imgs_test.shape[0], 96, 96, 1))
for i in range(25):
    ax = fig.add_subplot(5, 5, i + 1, xticks=[], yticks=[])
    plot_keypoints(imgs_test[i], np.squeeze(points_pred[i]))
# -
shape[0])
print("Number of nodes in the test data graph without edges", X_test_neg.shape[0],"=",y_test_neg.shape[0])
#removing header and saving
X_train_pos.to_csv('/content/drive/My Drive/train_pos_after_eda.csv',header=False, index=False)
X_test_pos.to_csv('/content/drive/My Drive/test_pos_after_eda.csv',header=False, index=False)
X_train_neg.to_csv('/content/drive/My Drive/train_neg_after_eda.csv',header=False, index=False)
X_test_neg.to_csv('/content/drive/My Drive/test_neg_after_eda.csv',header=Fals
| 19,890 |
/Machine Learning (CS60050)/Assignments/Assignment 4/.ipynb_checkpoints/k-nearest means (1)-checkpoint.ipynb
|
2cabffeb651565e89b52ba3165d419149df79f79
|
[] |
no_license
|
Anshul718/CSE-Semester-5-IITKGP
|
https://github.com/Anshul718/CSE-Semester-5-IITKGP
| 1 | 5 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 8,142 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import csv
def read_input(fileAddress):
data = [];
with open(fileAddress) as csvFile:
csvReader = csv.reader(csvFile)
for row in csvReader:
data.append(row[0:5])
data = np.asarray(data);
return data
def unique_list(d):
tmp =[]
for x in d:
tmp.append(x)
return np.unique(tmp)
def get_index_list(unique, data):
list_ = []
for x in unique:
index = np.where(data==x)
list_.append(index)
return list_
# initialisation
def init(data,k):
index = np.random.choice(len(data),k,replace=False)
return data[index]
def distance(a,b):
x = ((float(a[0])-float(b[0]))**2 + ((float(a[1])-float(b[1]))**2))
x+= ((float(a[2])-float(b[2]))**2 + ((float(a[3])-float(b[3]))**2))
return x
def closest(means, x):
nearest = -1;
minDist = float("inf")
count=0
for a in means:
d = distance(a,x)
if(d<=minDist):
minDist = d
nearest = count
count+=1
return nearest
def get_buckets_modified(means):
bucket = []
for j in range(0,3):
temp = []
temp = np.asarray(temp,dtype=np.int16)
bucket.append(temp)
# print(data.shape)
for x in range(data.shape[0]):
i = closest(means,data[x])
bucket[i] = np.append(bucket[i],x)
bucket = np.asarray(bucket)
return bucket
def calc_new_mean_modified(bucket):
means = []
for i in range(bucket.shape[0]):
temp = [0.0,0.0,0.0,0.0]
# print(bucket[i].shape)
for j in bucket[i]:
for k in range(len(temp)):
temp[k] += float(data[j][k])
for k in range(len(temp)):
temp[k] /= (bucket[i].shape[0])
means.append(temp)
# print('\n')
return means
def k_means_iteration_modified(iteration,means):
for i in range(iterations):
bucket = get_buckets_modified(means)
means = calc_new_mean_modified(bucket)
return means, bucket
data = read_input('data4_19.csv')
unique = unique_list(data[:,4])
index_list = get_index_list(unique,data[:,4])
k=3
means = init(data,k)
def calc_jd(bucket,index_list):
min_ind = -1
min_d = 2
for i in range(len(index_list)):
d = 1.0
intersection = np.intersect1d(bucket,index_list[i]).shape[0]
union = np.union1d(bucket,index_list[i]).shape[0]
d -= intersection/union
if(d<min_d):
min_d=d
min_ind=i
return min_d, min_ind
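# A quick hypothetical sanity check of calc_jd (the values are made up, not taken from data4_19.csv):
# the index sets {0,1,2,3} and {2,3,4} share 2 of 5 distinct indices, so the Jaccard distance is 1 - 2/5 = 0.6.
example_bucket = np.array([0, 1, 2, 3])
example_index_list = [np.array([2, 3, 4])]
print(calc_jd(example_bucket, example_index_list))  # expected output: (0.6, 0)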
def print_jaccard_dist(bucket, mean, index_list, unique_list):
for i in range(len(bucket)):
jd, jind = calc_jd(bucket[i],index_list)
print("Jaccard dist for cluster ",i,"\n\t mean : ", mean[i], "\n\t with ", unique_list[jind], "\n\t is ", jd,"\n")
iterations = 10
bucket = get_buckets_modified(means)
means_f, bucket_f = k_means_iteration_modified(iterations,means)
print_jaccard_dist(bucket_f,means_f,index_list,unique)
# print("Jaccard dist for cluster %d is %.2f" % 1 ,calc_jd(bucket_f[0],index_list))
| 3,334 |
/COVID19_API_Calls.ipynb
|
a267e350670e78e416266872ead68a6c39dce812
|
[] |
no_license
|
warnerm06/COVID19_TravelStats_Project
|
https://github.com/warnerm06/COVID19_TravelStats_Project
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 37,970 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # OVERVIEW #
#
# This notebook will import United States COVID case data through an API, convert the data into useful datatypes, and then export a Pandas DataFrame to .csv file to your local machine. The COVID19 Data has more than 4,480,000 rows; the exported DataFrame will only have 255 rows.
#
# *After the first run, do not re-run this notebook unless you want to re-call/update the COVID data*
#import Pandas, Numpy, and MatPlotLib
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
#import requests in order to call the COVID data API data
#import time in order to pause between API calls and not freeze in the process
#import pprint in order to pretty-print the data.
import requests
import time
from pprint import pprint
# # SECTION 1: Execute an API call to the CDC Public Use Data Website #
# +
#This is a variable which calls the API link from the COVID data website
#it orders the data by CDC Report Date
#and it has a limit larger than the max number of rows in the CDC data
r = requests.get('https://data.cdc.gov/resource/vbim-akqf.json?$order=cdc_report_dt&$limit=5500000')
#This is the API call with a 1 second pause between requests
covid_json = r.json()
time.sleep(1)
# -
#pretty-print the first 2 entries to preview the data.
pprint(covid_json[:2])
# **I want to compare the following COVID19 data points:**
#
# **case report date** = 'cdc_report_dt' - number of confirmed cases
#
# **death status** = 'death_yn' - number of deaths
#
# **hospitalization status** = 'hosp_yn' - number of people hospitalized
#
# These are the key stats reported re: COVID19 counts in the news. So these are the data columns used to create a Pandas DataFrame.
#
# *Note: technically, I would pull positive result confirmation date, but this is missing from some cases reported*
# +
#Right now, the API data is a list with nested dictionaries
print(type(covid_json))
# +
#Loop through the list and call each desired datapoint, by its key, into its own list.
#Here is a sample
case_date, death, hospital = [],[],[]
for data in covid_json:
case_date.append(data['cdc_report_dt'])
death.append(data['death_yn'])
hospital.append(data['hosp_yn'])
# +
#Now that each datapoint is separated, add them back into a dictionary
#This will make sure that each datapoint column is labelled
covid_values = {'cdc_report_dt': case_date, 'death_yn': death, 'hosp_yn': hospital}
#Convert the dictionary into a DataFrame
covid_df = pd.DataFrame(covid_values)
# +
#Check the COVID DataFrame with a summary
covid_df.info()
# -
# # SECTION 2: Data Conversion #
# **Here is a list of next steps:**
#
# 1. convert the COVID dates into standard date format
#
# 2. group the covid data by days (cdc_report_dt) - each day will have total number of cases, deaths, hospitalizations
#Preview the COVID DataFrame created in Section 1
covid_df.head()
# #### Convert the COVID dates into standard date format ####
# +
#Import the datetime module and use it to extract date details
import datetime
rpt_date = pd.to_datetime(covid_df["cdc_report_dt"])
report_date = pd.DataFrame(rpt_date)
# -
#Check the datatype - it has changed from object to datetime64
report_date.info()
# #### Create a count of cases reported each day ####
# +
#Count the number of cases each day by adding a column with '1' for each case
#The numpy 'where' function will add the column based on the condition I define
#np.where(condition, value if condition is true, value if condition is false)
#my code asks for '1' to be listed in a new column if the report date is not blank (list '0' if blank)
report_date['case_reported'] = np.where(report_date['cdc_report_dt']!= '[]', 1, 0)
#here is a preview of the new DataFrame
report_date.head()
# -
# #### Re-Create the COVID DataFrame with the new date format ####
# +
#Create a separate dataframe with each column from the original
deceased = pd.DataFrame(covid_df["death_yn"])
hospital = pd.DataFrame(covid_df["hosp_yn"])
#Concatenate the report_date, deceased, and hospital DataFrames
covid_data = pd.concat([report_date, deceased, hospital], axis=1)
covid_data.sort_values('death_yn')
# -
# #### Create a count of number of deaths and hospitalizations reported each day ####
# +
#Count the number of deaths each day by adding a column with '1' for each death or '0' if no death
#death = Yes ('1'), No ('0'), Missing ('0'), or Unknown ('0')
#This code is using a for loop to create the column
num_deaths = []
for value in covid_data["death_yn"]:
if value == 'Yes':
num_deaths.append(1)
else:
num_deaths.append(0)
covid_data["deaths"] = num_deaths
#print(covid_data)
covid_data_r = pd.DataFrame(covid_data)
covid_data_r.sort_values('deaths')
# +
#Count the number of hospitalizations each day by adding a column for each hosp_yn value
#hospitalization = Yes ('1'), No ('0'), Missing ('0'), or Unknown ('0')
#This code is using a for loop to create the column
num_hospital = []
for value in covid_data_r["hosp_yn"]:
if value == 'Yes':
num_hospital.append(1)
else:
num_hospital.append(0)
covid_data_r["hospitalizations"] = num_hospital
#print(covid_data)
covid_data_r2 = pd.DataFrame(covid_data_r)
covid_data_r2.sort_values('hospitalizations')
# -
# #### Create a new DataFrame which only shows counts of each datapoint ####
# +
#filter the previous DataFrame by the new counts columns
covid_counts = covid_data_r2[["cdc_report_dt", "case_reported", "deaths", "hospitalizations"]]
covid_counts.tail()
# -
# # SECTION 3: Group the COVID data and export as .csv #
#
# Cases reported, deaths, and hospitalizations will be grouped by the total number reported each day
# +
#Group the COVID data by report date, give it a new variable name
covid_by_day = covid_counts.groupby(['cdc_report_dt'])
#Create total count of for each value, by day
daily_cases = covid_by_day['case_reported'].sum()
daily_deaths = covid_by_day['deaths'].sum()
daily_hospitalizations = covid_by_day['hospitalizations'].sum()
#Create a new DataFrame with these values
covid_daily_counts = pd.concat([daily_cases, daily_deaths, daily_hospitalizations], axis=1)
covid_daily_counts.columns = ["Cases", "Deaths", "Hospitalizations"]
covid_daily_counts
# -
#Now, there are only 255 rows of data and this can be saved as a .csv file
covid_daily_counts.to_csv('covid_data.csv')
| 6,681 |
/2017_E65_MP-clean/5_remove_cellcycle_effect.ipynb
|
50aec0609ee746f60632cfffb452029eff0764fe
|
[] |
no_license
|
mtekman/scRNA_2017_SA
|
https://github.com/mtekman/scRNA_2017_SA
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.r
| 76,163 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
matrix_destination = "matrix.cellQC.geneQC.normalised.hk_controls.rds"
source("load_matrix.R")
library(data.table)
library(ccRemover)
# https://cran.r-project.org/web/packages/ccRemover/vignettes/ccRemover_tutorial.html
# ### Removing the Cell-Cycle Effect
#
# Here we will remove the cell-cycle effect on our data to help remove variability that is not related to cell-type differentiation.
#
# To use this tool, we must first re-normalise our gene data once more so that it is zero-centred (each gene has mean zero).
# +
mean_gene_exp <- rowMeans(logcounts(sce))
t_cell_data_cen <- logcounts(sce) - mean_gene_exp
# summarise gene means before and after centring (the un-centred matrix is logcounts(sce))
gret <- rbind(
    summary(apply(logcounts(sce), 1, mean)),
    summary(apply(t_cell_data_cen, 1, mean))
)
rownames(gret) <- c("Before ZeroNorm:", "After ZeroNorm:")
signif(gret,6)
# -
# We can see now that our data is correctly normalised with a mean value that is practically zero.
#
#
# ## Identifying the cell-cycle genes
#
# Now we need to find sets of genes that are related to the cell-cycle so that ccRemove can correct for them. Luckily it provides an inbuilt tool that checks against a variety of different sources (in our case, ensembl).
gene_names <- rownames(t_cell_data_cen)
cut_gene_names <- sub("(.+)-[^-]+", "\\1", gene_names)
#cut_gene_names2 <- unlist(lapply(gene_names, function(x){return(strsplit(x, "-|\\s")[[1]][1])}))
#setdiff(cut_gene_names, cut_gene_names2)
head(gene_names)
# +
cell_cycle_gene_indices <- gene_indexer(cut_gene_names, species = "mouse", name_type = "ensemble")
if_cc <- rep(FALSE,nrow(t_cell_data_cen))
if_cc[cell_cycle_gene_indices] <- TRUE
summary(if_cc)
dat <- list(x=t_cell_data_cen, if_cc=if_cc)
# -
# So now we have 678 genes implicated in cell-cycle variation. We can now correct for them:
# run ccRemover; capture.output() just hides its progress output (~ 5 mins)
capture.output(xhat <- ccRemover(dat, bar=FALSE))
head(xhat)
# Add back our mean expr
xhat2 <- xhat + mean_gene_exp
dim(sce)
#dim()
# Now that we have finished cleaning for the cell-cyle variation, let us take a look to see how it affects a PCA:
# +
# clone of the original
original <- copy(sce)
# Apply xhat2 to our logcounts matrix
logcounts(sce) <- xhat2
original <- calculateQCMetrics(original)
sce <- calculateQCMetrics(sce)
# Compare PCAs
multiplot(
plotPCA(original, exprs_values = "logcounts"),
plotPCA(sce, exprs_values = "logcounts")
)
# -
# The top plot is the original logcount matrix, and the bottom plot is the cell-cycle corrected logcount matrix.
#
# As we can see, the cell-cycle variation was not significant enough to create a measurable change in biological signal.
saveRDS(sce, "matrix.cellQC.geneQC.normalised.hk_controls.ccClean.rds")
| 2,924 |
/anpr_ocr/src/image_ocr.ipynb
|
eb5eedcf6536b783dd42e991666323e0c137e317
|
[
"MIT"
] |
permissive
|
BEugen/supervisely-tutorials
|
https://github.com/BEugen/supervisely-tutorials
| 0 | 0 |
MIT
| 2018-02-28T18:49:50 | 2018-02-28T17:01:18 | null |
Jupyter Notebook
| false | false |
.py
| 263,791 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import keras
import tensorflow as tf
print('TensorFlow version:', tf.__version__)
print('Keras version:', keras.__version__)
import os
from os.path import join
import json
import random
import itertools
import re
import datetime
import cairocffi as cairo
import editdistance
import numpy as np
from scipy import ndimage
import pylab
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from keras import backend as K
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.layers import Input, Dense, Activation
from keras.layers import Reshape, Lambda
from keras.layers.merge import add, concatenate
from keras.models import Model, load_model
from keras.layers.recurrent import GRU
from keras.optimizers import SGD
from keras.utils.data_utils import get_file
from keras.preprocessing import image
import keras.callbacks
import cv2
sess = tf.Session()
K.set_session(sess)
# # Get alphabet
from collections import Counter
def get_counter(dirpath):
dirname = os.path.basename(dirpath)
ann_dirpath = join(dirpath, 'ann')
letters = ''
lens = []
for filename in os.listdir(ann_dirpath):
json_filepath = join(ann_dirpath, filename)
description = json.load(open(json_filepath, 'r'))['description']
lens.append(len(description))
letters += description
print('Max plate length in "%s":' % dirname, max(Counter(lens).keys()))
return Counter(letters)
c_val = get_counter('/data/val/anpr_ocr/train/')
c_train = get_counter('/data/train/anpr_ocr/train/')
letters_train = set(c_train.keys())
letters_val = set(c_val.keys())
if letters_train == letters_val:
print('Letters in train and val do match')
else:
raise Exception()
# print(len(letters_train), len(letters_val), len(letters_val | letters_train))
letters = sorted(list(letters_train))
print('Letters:', ' '.join(letters))
# # Input data generator
# +
def labels_to_text(labels):
return ''.join(list(map(lambda x: letters[int(x)], labels)))
def text_to_labels(text):
return list(map(lambda x: letters.index(x), text))
def is_valid_str(s):
for ch in s:
if not ch in letters:
return False
return True
class TextImageGenerator:
def __init__(self,
dirpath,
img_w, img_h,
batch_size,
downsample_factor,
max_text_len=8):
self.img_h = img_h
self.img_w = img_w
self.batch_size = batch_size
self.max_text_len = max_text_len
self.downsample_factor = downsample_factor
img_dirpath = join(dirpath, 'img')
ann_dirpath = join(dirpath, 'ann')
self.samples = []
for filename in os.listdir(img_dirpath):
name, ext = os.path.splitext(filename)
if ext == '.png':
img_filepath = join(img_dirpath, filename)
json_filepath = join(ann_dirpath, name + '.json')
description = json.load(open(json_filepath, 'r'))['description']
if is_valid_str(description):
self.samples.append([img_filepath, description])
self.n = len(self.samples)
self.indexes = list(range(self.n))
self.cur_index = 0
def build_data(self):
self.imgs = np.zeros((self.n, self.img_h, self.img_w))
self.texts = []
for i, (img_filepath, text) in enumerate(self.samples):
img = cv2.imread(img_filepath)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = cv2.resize(img, (self.img_w, self.img_h))
img = img.astype(np.float32)
img /= 255
# width and height are backwards from typical Keras convention
# because width is the time dimension when it gets fed into the RNN
self.imgs[i, :, :] = img
self.texts.append(text)
def get_output_size(self):
return len(letters) + 1
def next_sample(self):
self.cur_index += 1
if self.cur_index >= self.n:
self.cur_index = 0
random.shuffle(self.indexes)
return self.imgs[self.indexes[self.cur_index]], self.texts[self.indexes[self.cur_index]]
def next_batch(self):
while True:
# width and height are backwards from typical Keras convention
# because width is the time dimension when it gets fed into the RNN
if K.image_data_format() == 'channels_first':
X_data = np.ones([self.batch_size, 1, self.img_w, self.img_h])
else:
X_data = np.ones([self.batch_size, self.img_w, self.img_h, 1])
Y_data = np.ones([self.batch_size, self.max_text_len])
input_length = np.ones((self.batch_size, 1)) * (self.img_w // self.downsample_factor - 2)
label_length = np.zeros((self.batch_size, 1))
source_str = []
for i in range(self.batch_size):
img, text = self.next_sample()
img = img.T
if K.image_data_format() == 'channels_first':
img = np.expand_dims(img, 0)
else:
img = np.expand_dims(img, -1)
X_data[i] = img
Y_data[i] = text_to_labels(text)
source_str.append(text)
label_length[i] = len(text)
inputs = {
'the_input': X_data,
'the_labels': Y_data,
'input_length': input_length,
'label_length': label_length,
#'source_str': source_str
}
outputs = {'ctc': np.zeros([self.batch_size])}
yield (inputs, outputs)
# -
tiger = TextImageGenerator('/data/val/anpr_ocr/train/', 128, 64, 8, 4)
tiger.build_data()
for inp, out in tiger.next_batch():
    print('Text generator output (data which will be fed into the neural network):')
print('1) the_input (image)')
if K.image_data_format() == 'channels_first':
img = inp['the_input'][0, 0, :, :]
else:
img = inp['the_input'][0, :, :, 0]
plt.imshow(img.T, cmap='gray')
plt.show()
print('2) the_labels (plate number): %s is encoded as %s' %
(labels_to_text(inp['the_labels'][0]), list(map(int, inp['the_labels'][0]))))
print('3) input_length (width of image that is fed to the loss function): %d == %d / 4 - 2' %
(inp['input_length'][0], tiger.img_w))
print('4) label_length (length of plate number): %d' % inp['label_length'][0])
break
# # Loss and train functions, network architecture
# +
def ctc_lambda_func(args):
y_pred, labels, input_length, label_length = args
# the 2 is critical here since the first couple outputs of the RNN
# tend to be garbage:
y_pred = y_pred[:, 2:, :]
return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
def train(img_w, load=False):
# Input Parameters
img_h = 64
# Network parameters
conv_filters = 16
kernel_size = (3, 3)
pool_size = 2
time_dense_size = 32
rnn_size = 512
if K.image_data_format() == 'channels_first':
input_shape = (1, img_w, img_h)
else:
input_shape = (img_w, img_h, 1)
batch_size = 32
downsample_factor = pool_size ** 2
tiger_train = TextImageGenerator('/data/train/anpr_ocr/train/', img_w, img_h, batch_size, downsample_factor)
tiger_train.build_data()
tiger_val = TextImageGenerator('/data/val/anpr_ocr/train/', img_w, img_h, batch_size, downsample_factor)
tiger_val.build_data()
act = 'relu'
input_data = Input(name='the_input', shape=input_shape, dtype='float32')
inner = Conv2D(conv_filters, kernel_size, padding='same',
activation=act, kernel_initializer='he_normal',
name='conv1')(input_data)
inner = MaxPooling2D(pool_size=(pool_size, pool_size), name='max1')(inner)
inner = Conv2D(conv_filters, kernel_size, padding='same',
activation=act, kernel_initializer='he_normal',
name='conv2')(inner)
inner = MaxPooling2D(pool_size=(pool_size, pool_size), name='max2')(inner)
conv_to_rnn_dims = (img_w // (pool_size ** 2), (img_h // (pool_size ** 2)) * conv_filters)
inner = Reshape(target_shape=conv_to_rnn_dims, name='reshape')(inner)
# cuts down input size going into RNN:
inner = Dense(time_dense_size, activation=act, name='dense1')(inner)
# Two layers of bidirecitonal GRUs
# GRU seems to work as well, if not better than LSTM:
gru_1 = GRU(rnn_size, return_sequences=True, kernel_initializer='he_normal', name='gru1')(inner)
gru_1b = GRU(rnn_size, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru1_b')(inner)
gru1_merged = add([gru_1, gru_1b])
gru_2 = GRU(rnn_size, return_sequences=True, kernel_initializer='he_normal', name='gru2')(gru1_merged)
gru_2b = GRU(rnn_size, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru2_b')(gru1_merged)
# transforms RNN output to character activations:
inner = Dense(tiger_train.get_output_size(), kernel_initializer='he_normal',
name='dense2')(concatenate([gru_2, gru_2b]))
y_pred = Activation('softmax', name='softmax')(inner)
Model(inputs=input_data, outputs=y_pred).summary()
labels = Input(name='the_labels', shape=[tiger_train.max_text_len], dtype='float32')
input_length = Input(name='input_length', shape=[1], dtype='int64')
label_length = Input(name='label_length', shape=[1], dtype='int64')
# Keras doesn't currently support loss funcs with extra parameters
# so CTC loss is implemented in a lambda layer
loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([y_pred, labels, input_length, label_length])
# clipnorm seems to speeds up convergence
sgd = SGD(lr=0.02, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
if load:
model = load_model('./tmp_model.h5', compile=False)
else:
model = Model(inputs=[input_data, labels, input_length, label_length], outputs=loss_out)
# the loss calc occurs elsewhere, so use a dummy lambda func for the loss
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=sgd)
if not load:
# captures output of softmax so we can decode the output during visualization
test_func = K.function([input_data], [y_pred])
model.fit_generator(generator=tiger_train.next_batch(),
steps_per_epoch=tiger_train.n,
epochs=1,
validation_data=tiger_val.next_batch(),
validation_steps=tiger_val.n)
return model
# -
# # Model description and training
# Next block will take about 30 minutes.
model = train(128, load=False)
# # Function to decode neural network output
# +
# For a real OCR application, this should be beam search with a dictionary
# and language model. For this example, best path is sufficient.
def decode_batch(out):
ret = []
for j in range(out.shape[0]):
out_best = list(np.argmax(out[j, 2:], 1))
out_best = [k for k, g in itertools.groupby(out_best)]
outstr = ''
for c in out_best:
if c < len(letters):
outstr += letters[c]
ret.append(outstr)
return ret
# -
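# The collapse step used in decode_batch can be sanity-checked on a hand-made sequence. This toy
# check is illustrative only: `toy_letters` and the index sequence are made up, and the blank
# index is assumed to be len(letters), matching get_output_size() above.
# +
import itertools
toy_letters = ['A', 'B', 'C']
blank = len(toy_letters)
toy_best_path = [0, 0, blank, 1, 1, blank, blank, 2]                 # best-path indices per time step
collapsed = [k for k, g in itertools.groupby(toy_best_path)]         # merge repeats: [0, blank, 1, blank, 2]
decoded = ''.join(toy_letters[c] for c in collapsed if c < blank)    # drop blanks
print(decoded)  # 'ABC'
# -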
# # Test on validation images
# +
tiger_test = TextImageGenerator('/data/test/anpr_ocr/test/', 128, 64, 8, 4)
tiger_test.build_data()
net_inp = model.get_layer(name='the_input').input
net_out = model.get_layer(name='softmax').output
for inp_value, _ in tiger_test.next_batch():
bs = inp_value['the_input'].shape[0]
X_data = inp_value['the_input']
net_out_value = sess.run(net_out, feed_dict={net_inp:X_data})
pred_texts = decode_batch(net_out_value)
labels = inp_value['the_labels']
texts = []
for label in labels:
text = ''.join(list(map(lambda x: letters[int(x)], label)))
texts.append(text)
for i in range(bs):
fig = plt.figure(figsize=(10, 10))
outer = gridspec.GridSpec(2, 1, wspace=10, hspace=0.1)
ax1 = plt.Subplot(fig, outer[0])
fig.add_subplot(ax1)
ax2 = plt.Subplot(fig, outer[1])
fig.add_subplot(ax2)
print('Predicted: %s\nTrue: %s' % (pred_texts[i], texts[i]))
img = X_data[i][:, :, 0].T
ax1.set_title('Input img')
ax1.imshow(img, cmap='gray')
ax1.set_xticks([])
ax1.set_yticks([])
        ax2.set_title('Activations')
ax2.imshow(net_out_value[i].T, cmap='binary', interpolation='nearest')
ax2.set_yticks(list(range(len(letters) + 1)))
ax2.set_yticklabels(letters + ['blank'])
ax2.grid(False)
for h in np.arange(-0.5, len(letters) + 1 + 0.5, 1):
ax2.axhline(h, linestyle='-', color='k', alpha=0.5, linewidth=1)
#ax.axvline(x, linestyle='--', color='k')
plt.show()
break
# -
roway"="aerodrome"]["aerodrome"="international"];
out geom;
>;
out skel qt;
"""
url = "http://overpass-api.de/api/interpreter"
r = requests.get(url, params={'data': query})
tmp['airports'] = osmtogeojson.process_osm_json(r.json())
if (region == 'Санкт-Петербург'):
region = 'Ленинградская область'
query = """
[out:json][timeout:100];
(area[name~"Санкт-Петербург"];)->.a;
nwr(area.a)["aeroway"="aerodrome"]["aerodrome"="international"];
nwr(area.a)["aeroway"="aerodrome"]["aerodrome:type"="international"];
out geom;
>;
out skel qt;
"""
url = "http://overpass-api.de/api/interpreter"
r = requests.get(url, params={'data': query})
tmp['airports'] = osmtogeojson.process_osm_json(r.json())
if (region == 'Башкортостан'):
query = """
[out:json][timeout:100];
(area[name="Башкортостан"];)->.a;
nwr(area.a)["aeroway"="aerodrome"]["aerodrome:type"="international"];
out geom;
>;
out skel qt;
"""
url = "http://overpass-api.de/api/interpreter"
r = requests.get(url, params={'data': query})
tmp['airports'] = osmtogeojson.process_osm_json(r.json())
if (region == 'Нижегородская область'):
query = """
[out:json][timeout:100];
(area[name="Нижегородская область"];)->.a;
nwr(area.a)["aeroway"="aerodrome"]["aerodrome"="international"];
out geom;
>;
out skel qt;
"""
url = "http://overpass-api.de/api/interpreter"
r = requests.get(url, params={'data': query})
tmp['airports'] = osmtogeojson.process_osm_json(r.json())
if (region == 'Новосибирская область'):
query = """
[out:json][timeout:100];
(area[name="Новосибирская область"];)->.a;
nwr(area.a)["aeroway"="aerodrome"]["aerodrome"="international"];
out geom;
>;
out skel qt;
"""
url = "http://overpass-api.de/api/interpreter"
r = requests.get(url, params={'data': query})
tmp['airports'] = osmtogeojson.process_osm_json(r.json())
if (region == 'Красноярский край'):
query = """
[out:json][timeout:100];
(area[name="Красноярский край"];)->.a;
nwr(area.a)["aeroway"="aerodrome"]["name"~"Аэропорт Красноярск"];
out geom;
>;
out skel qt;
"""
url = "http://overpass-api.de/api/interpreter"
r = requests.get(url, params={'data': query})
tmp['airports'] = osmtogeojson.process_osm_json(r.json())
if (region != 'Башкортостан' and region != 'Московская область' and region != 'Ленинградская область' and region != 'Новосибирская область' and region != 'Красноярский край' and region != 'Нижегородская область'):
api = overpass.API(endpoint="https://overpass.kumi.systems/api/interpreter", timeout=100)
tmp['airports'] = api.get(f'(area[name="{region}"];)->.a;nwr(area.a)["aeroway"="aerodrome"]["aerodrome"="international"];nwr(area.a)["aeroway"="aerodrome"]["aerodrome:type"="international"];', verbosity='geom')
api = overpass.API(endpoint="https://overpass.kumi.systems/api/interpreter", timeout=100)
tmp['hotels'] = api.get(f'(area[name="{name}"];)->.a;node(area.a)["tourism"="hotel"];', verbosity='geom')
api = overpass.API(endpoint="https://overpass.kumi.systems/api/interpreter", timeout=100)
tmp['metro'] = api.get(f'(area[name="{name}"];)->.a;node(area.a)["railway"="station"]["station"="subway"];', verbosity='geom')
responses.append(tmp)
return responses
responses = getData(train)
# +
# test_responses = getData(test)
# -
responses_df = pd.DataFrame(responses)
# test_responses_df = pd.DataFrame(test_responses)
columns = ['train_station', 'airports', 'hotels', 'metro']
for col in columns:
exec(f'{col}_df = gpd.GeoDataFrame()')
for reg in responses_df.region.values:
tmp = gpd.GeoDataFrame(responses_df[responses_df['region'] == reg][col].values[0]['features'][::])
tmp['region'] = reg
exec(f'{col}_df = {col}_df.append(tmp, ignore_index=True)')
metro_df
for i in airports_df['geometry'].index:
if type(airports_df['geometry'][i]) != LineString and type(airports_df['geometry'][i]) != Polygon:
lst = airports_df['geometry'][i]['coordinates'][0]
tmp = []
for l in lst:
l = tuple(l)
tmp.append(l)
# print(airports_df['geometry'][i])
airports_df['geometry'][i] = Polygon(tmp)
# Draw a map of Moscow showing all of the collected geographic objects - train stations, metro stations, hotels and airports
msk_airports = airports_df[airports_df['region'] == 'Москва']
msk_airports.set_crs(epsg=4326, inplace=True)
msk_airports = msk_airports.to_crs(epsg=3857)
msk_metro = metro_df[metro_df['region'] == 'Москва']
msk_metro.set_crs(epsg=4326, inplace=True)
msk_metro = msk_metro.to_crs(epsg=3857)
msk_hotels = hotels_df[hotels_df['region'] == 'Москва']
msk_hotels.set_crs(epsg=4326, inplace=True)
msk_hotels = msk_hotels.to_crs(epsg=3857)
msk_train_station = train_station_df[train_station_df['region'] == 'Москва']
msk_train_station.set_crs(epsg=4326, inplace=True)
msk_train_station = msk_train_station.to_crs(epsg=3857)
plot = msk_airports.plot(color='blue', figsize=(20, 19))
msk_metro.plot(color='yellow', ax=plot)
msk_hotels.plot(color='violet', ax=plot)
msk_train_station.plot(color='green', ax=plot)
geo_train[geo_train.region=='Москва'].plot(color='red', ax=plot)
ctx.add_basemap(plot)
# +
from functools import partial
import pyproj
from shapely.ops import transform
proj_wgs84 = pyproj.Proj('+proj=longlat +datum=WGS84')
def geodesic_point_buffer(lon, lat, m):
# Azimuthal equidistant projection
aeqd_proj = '+proj=aeqd +lat_0={lat} +lon_0={lon} +x_0=0 +y_0=0'
project = partial(
pyproj.transform,
pyproj.Proj(aeqd_proj.format(lon=lon, lat=lat)),
proj_wgs84)
buf = Point(0, 0).buffer(m)
return transform(project, buf)
# -
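# A minimal usage check of geodesic_point_buffer (the lon/lat pair below is an arbitrary point
# near central Moscow, chosen only for illustration): the result should be a shapely Polygon in
# WGS84 whose centroid comes back at roughly the input coordinates.
# +
buf = geodesic_point_buffer(37.62, 55.75, 500)               # 500 m buffer
print(type(buf))                                             # shapely Polygon (lon/lat)
print(round(buf.centroid.x, 2), round(buf.centroid.y, 2))    # ~ (37.62, 55.75)
# -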
geo_train[['lon', 'lat']]
# print(x)
# +
# msk_geodesic_buffers = gpd.GeoDataFrame()
# msk_geodesic_buffers['geometry'] = gpd.GeoDataFrame(
# geometry=Polygon([Point(
p = [geodesic_point_buffer(geo_train['lon'][x], geo_train['lat'][x], 500) for x in geo_train[geo_train.region == 'Москва'].index]
# -
msk_geodesic_buffers = gpd.GeoDataFrame(geometry = p)
msk_geodesic_buffers.set_crs(epsg=4326, inplace=True)
msk_geodesic_buffers = msk_geodesic_buffers.to_crs(epsg=3857)
# Draw an interactive map of Moscow with the same objects plotted on it, but with the markers corresponding to the salons replaced by buffer zones - this lets us visually inspect how many objects fall inside each buffer zone. A 500 m buffer was chosen because that is a distance one can easily cover on foot without planning a trip.
# +
m = folium.Map(
location=[55.75370903771494, 37.61981338262558],
tiles="cartodbpositron",
zoom_start=12,
)
folium.GeoJson(msk_geodesic_buffers, name="geojson").add_to(m)
folium.GeoJson(msk_airports).add_to(m)
folium.GeoJson(msk_train_station, marker=folium.Marker(icon=folium.Icon(icon='train', prefix='fa', color='green', icon_color='#fff'))).add_to(m)
folium.GeoJson(msk_metro, marker=folium.Marker(icon=folium.Icon(icon='subway', prefix='fa', color='red', icon_color='#fff'))).add_to(m)
folium.GeoJson(msk_hotels, marker=folium.Marker(icon=folium.Icon(icon='hotel', prefix='fa', color='black', icon_color='#fff'))).add_to(m)
folium.LayerControl().add_to(m)
m
# -
airports_center = [[airports_df['geometry'][i].centroid, airports_df['region'][i]] for i in airports_df.index]
airports_center_df = gpd.GeoDataFrame(airports_center, columns=['geometry', 'region'])
airports_center_df
metro_df = metro_df.set_crs(epsg=4326, inplace=True)
hotels_df = hotels_df.set_crs(epsg=4326, inplace=True)
train_station_df = train_station_df.set_crs(epsg=4326, inplace=True)
airports_center_df = airports_center_df.set_crs(epsg=4326, inplace=True)
train_station_df
def calc_dist(df):
dist = {}
airports_min_dist = []
metro_min_dist = []
metro_count = []
hotels_count = []
train_min_dist = []
for i in df.index:
x = df.loc[i]
point = (x['lat'], x['lon'])
dists_metro = []
dists_airport = []
dists_train = []
dists_hotels = []
for j in metro_df.index:
# if x['region'] == metro_df['region'][j]:
metro = metro_df['geometry'][j]
dists_metro.append(distance.geodesic((metro.y, metro.x), point).m)
cnt_metro = len(list(filter(lambda x: x < 500, dists_metro)))
min_dist = min(dists_metro) if dists_metro else None
metro_min_dist.append(min_dist)
metro_count.append(cnt_metro)
for k in airports_center_df.index:
# if x['region'] == airports_center_df['region'][k]:
airport = airports_center_df['geometry'][k]
dists_airport.append(distance.geodesic((airport.y, airport.x), point).m)
min_dist_airport = min(dists_airport) if dists_airport else None
airports_min_dist.append(min_dist_airport)
for t in train_station_df.index:
# if x['region'] == train_station_df['region'][t]:
train = train_station_df['geometry'][t]
dists_train.append(distance.geodesic((train.y, train.x), point).m)
min_dist_train = min(dists_train) if dists_train else None
train_min_dist.append(min_dist_train)
for h in hotels_df.index:
# if x['region'] == hotels_df['region'][h]:
hotels = hotels_df['geometry'][h]
dists_hotels.append(distance.geodesic((hotels.y, hotels.x), point).m)
cnt_hotels = len(list(filter(lambda x: x < 500, dists_hotels)))
hotels_count.append(cnt_hotels)
dist['metro_min_dist'] = metro_min_dist
dist['metro_count'] = metro_count
dist['airports_min_dist'] = airports_min_dist
dist['hotels_count'] = hotels_count
dist['train_min_dist'] = train_min_dist
return dist
d = calc_dist(geo_train)
d_test = calc_dist(geo_test)
ttrain = geo_train.copy()
ttest = geo_test.copy()
geo_train
# Computed the distance features - distance to the centre of the nearest airport, distance to the nearest metro station, how many metro stations fall inside the buffer zone, how many hotels fall inside the buffer zone, and distance to the nearest train station
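# The distances above rely on geopy's `distance.geodesic`, which takes (lat, lon) tuples and
# exposes the result in metres via `.m`. A tiny illustration with two hand-picked points
# (coordinates are arbitrary and only for demonstration):
# +
from geopy import distance   # presumably the same import used by calc_dist above
p1 = (55.7539, 37.6208)
p2 = (55.7520, 37.6175)
print(round(distance.geodesic(p1, p2).m, 1), 'm')
# -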
ttrain['airports_min_dist'] = d['airports_min_dist']
ttrain['metro_min_dist'] = d['metro_min_dist']
ttrain['metro_count'] = d['metro_count']
ttrain['airports_min_dist'] = d['airports_min_dist']
ttrain['hotels_count'] = d['hotels_count']
ttrain['train_min_dist'] = d['train_min_dist']
ttest['airports_min_dist'] = d_test['airports_min_dist']
ttest['metro_min_dist'] = d_test['metro_min_dist']
ttest['metro_count'] = d_test['metro_count']
ttest['airports_min_dist'] = d_test['airports_min_dist']
ttest['hotels_count'] = d_test['hotels_count']
ttest['train_min_dist'] = d_test['train_min_dist']
ttrain
# ### Fit model
# +
from sklearn import preprocessing
from sklearn.preprocessing import StandardScaler
le = preprocessing.LabelEncoder()
scaler = StandardScaler()
ttrain['region'] = le.fit_transform(ttrain['region'])
ttrain['airports_min_dist'] = scaler.fit_transform(ttrain['airports_min_dist'].values.reshape(-1,1))
ttrain['metro_min_dist'] = scaler.fit_transform(ttrain['metro_min_dist'].values.reshape(-1,1))
ttrain['train_min_dist'] = scaler.fit_transform(ttrain['train_min_dist'].values.reshape(-1,1))
ttest['region'] = le.fit_transform(ttest['region'])
ttest['airports_min_dist'] = scaler.fit_transform(ttest['airports_min_dist'].values.reshape(-1,1))
ttest['metro_min_dist'] = scaler.fit_transform(ttest['metro_min_dist'].values.reshape(-1,1))
ttest['train_min_dist'] = scaler.fit_transform(ttest['train_min_dist'].values.reshape(-1,1))
# -
ttrain
X_train, X_valid, y_train, y_valid = train_test_split(ttrain.drop(columns=['target', 'geometry', 'lon', 'lat'], axis=1), ttrain[['target']])
model = LinearRegression().fit(X_train.drop('point_id', axis=1), y_train)
ttest = ttest.drop(columns=['target', 'city', 'geometry', 'lon', 'lat'], axis=1)
mean_absolute_error(y_valid, model.predict(X_valid.drop('point_id', axis=1)))
# ### Make submission
ttest
submission = pd.read_csv('data/sample_submission.csv')
submission['target'] = model.predict(ttest.drop('point_id', axis=1))
submission.to_csv('data/my_submission_01.csv', index=False)
| 26,357 |
/Semana_7/main.ipynb
|
a7b9cf88a02422ffb410217351bf4b60e4f92359
|
[] |
no_license
|
rdurbano/Codenation_Projects
|
https://github.com/rdurbano/Codenation_Projects
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 21,336 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import os
from datetime import datetime, timedelta
# %matplotlib inline
import matplotlib.pyplot as plt
import weatherdataprocesstool as wt
# Raw data
df=pd.read_csv('/Users/benbilly3/Desktop/資策會專題/rawMaterialPricePrediction/cornsMacroIndex/Macro_corn.csv')
s1=df
s1['Year']=s1['Year'].astype(str)
s1['Month']=s1['Month'].astype(str)
s1['date2']=s1['Year']+'-'+s1['Month']
s1['date2']=s1['date2'].astype('datetime64[ns]')
s1
# +
df=pd.read_pickle('/Users/benbilly3/Desktop/資策會專題/rawMaterialPricePrediction/RM_Price/rawMaterialPrice.pickle')
df2=df.reset_index()
# extract the month
df2['date2']=df2['Date'].apply(lambda s:s.strftime('%Y%m'))
df2['M']=df2['date2'].apply(lambda s:s[4:])
df2=df2.set_index('stock_id')
dataset=[]
i = 'cmeCorn'#'cmeCorn'
df3=df2.loc[i]
df3['date2']=df3['date2'].apply(lambda s:pd.to_numeric(s, errors='coerce'))
df3['YM-2']=df3['date2'].shift(1)
df3['newMonth']=(df3['date2']-df3['YM-2']).fillna(1)  # handle the December-to-January month rollover
upAndDownS=df3[df3['newMonth']!=0]  # keep only the price on the first day of each month
upAndDownS['nextSeasonClose']=upAndDownS['Close'].shift(-3)
upAndDownS['return']=upAndDownS['nextSeasonClose']/upAndDownS['Close']
upAndDownS=upAndDownS[~upAndDownS['return'].isna()]
upAndDownS['win']=upAndDownS['return'].apply(lambda s:1 if s>1 else 0)
dataset.append(upAndDownS)
upAndDownS=pd.concat(dataset)
upAndDownS
# set the date of each month's first price to the 1st of the month
upAndDownS['date2']=upAndDownS['date2'].astype(str).apply(lambda s:s[:4]+'-'+s[4:]).astype('datetime64[ns]')
upAndDownS=upAndDownS.reset_index()
upAndDownS=upAndDownS.set_index('date2')
upAndDownS
# -
M1=s1.set_index('date2').dropna()
M1['return']=upAndDownS['return']
M1['win']=upAndDownS['win']
M1=M1.iloc[:,2:]
M1.dropna().to_pickle('/Users/benbilly3/Desktop/資策會專題/rawMaterialPricePrediction/US_Weather/mecroEcnomicsCorn_traindata.pickle')
# You can use the `str.strip()` method to remove those spaces.
# ## Start your analysis from here
# Your analysis starts here.
countries.columns
variaveis_numericas = ["Pop_density","Coastline_ratio","Net_migration","Infant_mortality","Literacy",
"Phones_per_1000","Arable","Crops","Other","Climate","Birthrate","Deathrate",
"Agriculture","Agriculture","Industry","Service"]
countries[variaveis_numericas] = countries[variaveis_numericas].replace(',', '.', regex=True)
countries[variaveis_numericas] = countries[variaveis_numericas].astype(float)
countries.dtypes
# ## Question 1
#
# Which regions (variable `Region`) are present in the data set? Return a list of the unique regions in the data set, with leading and trailing spaces stripped (but keep punctuation: periods, hyphens etc.), sorted alphabetically.
def q1():
lstRegion = list(countries['Region'].unique())
stripList = list(map(lambda x: x.strip(), lstRegion))
stripList.sort()
return stripList
pass
q1()
# ## Question 2
#
# Discretizing the `Pop_density` variable into 10 intervals with `KBinsDiscretizer`, using the `ordinal` encoding and the `quantile` strategy, how many countries fall above the 90th percentile? Answer with a single integer scalar.
#
#
# Here, you could simply check which values of "pop_density_discretizer" equal 9. Binning (discretization) places each observation into a "bucket"; since the values were split into 10 bins, counting how many land in the last bin tells you how many lie above the 90th percentile.
def q2():
discretizer = KBinsDiscretizer(n_bins=10, encode='ordinal', strategy='quantile')
discretizer.fit(countries['Pop_density'].values.reshape(-1, 1))
popT = discretizer.transform(countries['Pop_density'].values.reshape(-1, 1))
return int(sum(popT == 9)[0])
pass
q2()
# # Question 3
#
# If we encode the `Region` and `Climate` variables using one-hot encoding, how many new attributes are created? Answer with a single scalar.
def q3():
one_hot_encoder = OneHotEncoder(sparse=False, dtype=np.int)
countries["Climate"] = countries[['Climate']].fillna(countries['Climate'].mean())
region_climate_encoded = one_hot_encoder.fit_transform(countries[["Region","Climate"]])
return int(region_climate_encoded.shape[1])
pass
q3()
# ## Question 4
#
# Apply the following pipeline:
#
# 1. Fill the `int64` and `float64` variables with their respective medians.
# 2. Standardize those variables.
#
# After applying the pipeline described above to the data (only to the variables of the specified types), apply the same pipeline (or `ColumnTransformer`) to the data point below. What is the value of the `Arable` variable after the pipeline? Answer with a single float rounded to three decimal places.
test_country = [
'Test Country', 'NEAR EAST', -0.19032480757326514,
-0.3232636124824411, -0.04421734470810142, -0.27528113360605316,
0.13255850810281325, -0.8054845935643491, 1.0119784924248225,
0.6189182532646624, 1.0074863283776458, 0.20239896852403538,
-0.043678728558593366, -0.13929748680369286, 1.3163604645710438,
-0.3699637766938669, -0.6149300604558857, -0.854369594993175,
0.263445277972641, 0.5712416961268142
]
def q4():
    # Return the result of question 4 here.
cols = countries.columns[2:len(countries.columns)]
num_pipeline = Pipeline(steps = [('imputer', SimpleImputer(strategy = 'median')),('scaler', StandardScaler())])
num_pipeline.fit(countries[cols])
pipeline = num_pipeline.transform([test_country[2:]])
return float(pipeline[0][9].round(3))
pass
q4()
# ## Question 5
#
# Find the number of outliers of the `Net_migration` variable according to the boxplot method, i.e. using the rule:
#
# $$x \notin [Q1 - 1.5 \times \text{IQR}, Q3 + 1.5 \times \text{IQR}] \Rightarrow x \text{ is an outlier}$$
#
# counting those in the lower group and in the upper group.
#
# Should the observations considered outliers by this method be removed from the analysis? Answer as a tuple of three elements `(outliers_below, outliers_above, remove?)` ((int, int, bool)).
def q5():
q1 = countries['Net_migration'].quantile(0.25)
q3 = countries['Net_migration'].quantile(0.75)
iqr = q3 - q1
outliers_abaixo = (countries['Net_migration'] < (q1 - 1.5 * iqr)).sum()
outliers_acima = (countries['Net_migration'] > q3 + 1.5 * iqr).sum()
return int(outliers_abaixo),int(outliers_acima),bool(False)
pass
q5()
# ## Question 6
# For questions 6 and 7, use the `fetch_20newsgroups` test-dataset loader from `sklearn`
#
# Load the following categories and the `newsgroups` dataset:
#
# ```
# categories = ['sci.electronics', 'comp.graphics', 'rec.motorcycles']
# newsgroup = fetch_20newsgroups(subset="train", categories=categories, shuffle=True, random_state=42)
# ```
#
#
# Apply `CountVectorizer` to the `newsgroups` data set and find how many times the word _phone_ appears in the corpus. Answer with a single scalar.
categories = ['sci.electronics', 'comp.graphics', 'rec.motorcycles']
newsgroup = fetch_20newsgroups(subset="train", categories=categories, shuffle=True, random_state=42)
def q6():
    # Return the result of question 6 here.
count_vectorizer = CountVectorizer()
newsgroups_counts = count_vectorizer.fit_transform(newsgroup.data)
count = count_vectorizer.get_feature_names().index('phone')
return int(newsgroups_counts[:, count].sum())
pass
q6()
# ## Question 7
#
# Apply `TfidfVectorizer` to the `newsgroups` data set and find the TF-IDF of the word _phone_. Answer with a single scalar rounded to three decimal places.
def q7():
    # Return the result of question 7 here.
tfidf_vectorizer = TfidfVectorizer()
tfidf_vectorizer.fit(newsgroup.data)
newsgroups_tfidf_vectorized = tfidf_vectorizer.transform(newsgroup.data)
count = tfidf_vectorizer.get_feature_names().index('phone')
return float(newsgroups_tfidf_vectorized[:, count].sum().round(3))
pass
q7()
.imshow(V, extent=[-1,1,-1,1], cmap=plt.get_cmap('bwr'), animated=True, origin='lower', interpolation='None')])
n+= 1
# animate the final result
anim = animation.ArtistAnimation(fig, ims, interval=10, blit=False, repeat_delay=1, repeat=True)
print("ΔV = %8.5E in %d steps" % ((ΔV/N**2),n))
# -
# ### This is better but still slow. There must be a better way!
| 8,628 |
/Modelos.ipynb
|
2afd92bf7b03fc4bf46859a08f30705a39c89756
|
[] |
no_license
|
samuelsoaress/Predict-Future-Sales
|
https://github.com/samuelsoaress/Predict-Future-Sales
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 64,738 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
import datetime
from statsmodels.tsa.arima_model import ARIMA
ts = pd.read_csv('dados/ts.csv')
dateparse = lambda dates: datetime.datetime.strptime(dates, '%Y-%m-%d')  # pd.datetime was removed from pandas; use the datetime module imported above
ts = pd.read_csv('dados/ts.csv', parse_dates = ['mes'], index_col = 'mes', date_parser = dateparse)
# -
# !conda activate pmdarima
from pyramid.arima import auto_arima
ts.head()
ts.dtypes
ts.drop('aceleracao' ,axis=1, inplace=True)
ts.drop('Unnamed: 0' ,axis=1, inplace=True)
ts.drop('aumento' ,axis=1, inplace=True)
for linha in ts.index:
ts.loc[linha, 'vendas'] = int(ts.loc[linha, 'vendas'])
ts_no_constants = ts.loc[:, (ts != ts.iloc[0]).any()]
ts.shape
ts_no_constants.dropna()
ori_data = (ts_no_constants - ts_no_constants.mean()) / ts_no_constants.std()
ori_data
modelo = ARIMA(ori_data, order=(2,1,2))
modelo_train = modelo.fit()
modelo_train.summary()
previsoes = modelo_train.forecast(steps = 12) # forecast the next 12 months
previsoes[0] # forecast for the next 12 months
eixo = ori_data.plot()
modelo_train.plot_predict('2015-09-01','2015-11-01', ax = eixo, plot_insample = True) # sales forecast plot
import pickle
pickle.dump(modelo_train, open('arima_model.sav','wb'))
ori_data.to_csv('data_scaled.csv')
go_data);
# +
#community with PCI > 60k
# %sql SELECT community_area_name FROM chicago_data WHERE per_capita_income_ > 60000
# +
# plot between PCI and H_index
# Plot = %sql SELECT per_capita_income_ , hardship_index FROM chicago_data
# -
import matplotlib.pyplot as plt
import pandas as pd
# +
plot = Plot.DataFrame()
Y = plot['per_capita_income_']
X = plot['hardship_index']
# -
plt.scatter(x=X, y=Y, marker='o');
plt.xlabel('hardship_index')
plt.ylabel('per_capita_income_')
# As per capita income increases, the hardship index decreases.
| 2,099 |
/pythonFormatting.ipynb
|
703100af8a578ec945dcb0102a8beb5a44190cd8
|
[] |
no_license
|
Sooryajagadeesan/Python-output-Formatting-Practice
|
https://github.com/Sooryajagadeesan/Python-output-Formatting-Practice
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 21,672 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="mMOA-ZhSJFOB"
from typing import List  # List is required by the type annotation below
class Solution:
def minimumSwaps(self, nums: List[int]) -> int:
max_v, min_v = max(nums), min(nums)
max_idx, min_idx = -1, -1
n = len(nums)
# Time Complexity will be O(n)
# Space Complexity will be O(1)
for idx, curnum in enumerate(nums):
if curnum == min_v and min_idx == -1:
min_idx = idx
if curnum == max_v:
max_idx = idx
rightmost = n - max_idx - 1 # max num distance to rightmost
leftmost = min_idx # min num distance to leftmost
ans = rightmost + leftmost
return ans if min_idx <= max_idx else ans - 1
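# Quick illustrative check of the method above; the expected value follows from tracing the code
# on this input (minimum moved to the front, maximum to the back, one swap saved when they cross):
print(Solution().minimumSwaps([3, 4, 5, 5, 3, 1]))  # 6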
day_period) !=8:
day_period.extend(["NIL"]*(8-len(day_period)))
period.append(day_period)
print("-"*90)
print("{:^90}".format("KITE"))
print("{:^90}".format("TIME TABLE"))
print("-"*90)
# period = [["DE","SS","DC","EC1","DS","BEIE","ED","CA"],["CA","EG","Py","C","DS","OOPS","C","Py"],["SS","DC","EC1","DS","BEIE","ED","CA","ED"],["DS","OOPS","C","Py","SS","DC","EC1","DC"],["SS","DC","EC1","Py","DS","OOPS","C","Py"],["SS","DC","EC1","DS","OOPS","OOPS","C","Py"]]
print(" "*10,end = "")
for i in range(1,9):
print("{:^10}".format(i),end = "")
print()
print("-"*90)
for day in range(len(days)):
print("{:^10}".format(days[day]),end = "")
for pe in range(len(period[day])):
print("{:^10}".format(period[day][pe]),end = "")
print()
print("-"*90)
# + [markdown] id="CzsJuUjFm5CX"
# A simple HARD CODED Time Table
# + colab={"base_uri": "https://localhost:8080/"} id="TRhgcoIZiH46" outputId="0affe538-3d9a-49b0-9d7e-6e3027aa7c07"
print('''
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| Monday | Python | C | C++ | OS | DBMS | Python | OOPS | Java |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| Tuesday | Java | Python | DBMS | C | Data Structures in C Laboratory |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
|Wednesday| OS | Python | C++ | Java | OOPS | Business English |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| Thurday | Python | Java | C | OOPS | DBMS | C++ | Java |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| Friday | C++ | Python | OS | Java | C | OOPS |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| Saturday| Data Structures in C Laboratory | OOPS | C++ | DBMS | Python |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| Sunday | H | O | L | I | D | A | Y | !!!! |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
''')
# + [markdown] id="yBUKxNe6mvml"
# Bank Cheque generator
#
# + colab={"base_uri": "https://localhost:8080/"} id="7ieylNBgjYRc" outputId="217dd442-ec98-4996-a87f-905f5a6b1d32"
bank = input("Enter Bank name with Branch : ")
date = input("Enter date in (DD-MM-YY) format : ")
name = input("Enter your name : ")
amount = int(input("Enter the amount (in Rs.) : "))
amount = str(amount)+"/-"
amountInWords = input("Enter amount in words (end with 'only'): ")
accountNumber= int(input("Enter Account Number : "))
signature = input("Enter your e-Signature : ")
i = 5
print("-"*100)
print("| ","Bank & Branch: ",bank," "*(86-len(bank)-len(date)-7-7),"Date : ",date," |",sep="")
print("| "," "*94," |",sep="")
#print("|", " Pay"," "*5,"_"*5, "Soorya","_"*5 ," "*75, "|",sep="")
#print("| ",pay," "*(i-2),"-"*(i*i),name,"-"*(i*i)," "*(94-len(pay)- len(name)-i-2 -i*i -i*i)," |",sep="")
print("| ","PAY"," "*(i-2),"-"*i,name,"-"*(i*i*2)," "*(94-3 - (i-2) - i - len(name)- (i*i*2)-8),"OR ORDER"," |",sep = "")
print("| "," "*(94)," |",sep = "") #gap
print("| ","RUPEES ","-"*i,amountInWords,"-"*(i*i*2)," "*(94-7-i - len(amountInWords)-(i*i*2))," |",sep ="")
print("| "," "*(94)," |",sep = "") #gap
print("| ","-"*(94-len(str(amount))-4 -3)," ","Rs. ",amount," |",sep = "")
print("| "," "*(94)," |",sep = "")#gap
print("| ","ACCOUNT NO. ",accountNumber," "*(94-12 - len(str(accountNumber)))," |",sep = "")
print("| "," "*(94)," |",sep = "") #gap
print("| "," "*(94)," |",sep = "")#gap
print("| "," "*(94-14-len(signature)),"e-Signature : ",signature," |",sep = "")
#print("| "," "*(94)," |",sep = "") #gap
print("-"*100)
# + [markdown] id="0jIyCHb-scFs"
# BILL Generator
#
# + colab={"base_uri": "https://localhost:8080/"} id="AP_fyOmruYGd" outputId="4d14e816-c2b8-493b-ca40-866e6c4c0b5c"
print("-"*100)
print('''
TODAY's SALES
123 - Hamam Soap
456 - Parle G Biscuit
789 - Colgate Tooth Paste 100g
246 - Axe Perfume 100ml
135 - Amrutanjan Pain Balm 8ml
''')
print("-"*100)
storeName = "Kannan Departmental Stores"
branch = "Ram Nagar, Coimbatore."
billno = "123456"
date = input("Enter date in DD-MM-YY format : ")
customerName = input("Enter Customer Name : ")
customerMobileNumber = "+91 " + input("Enter Customer Mobile Number : ")
fields = ["SL NO","Product","Quantity","Amount"]
total_amount = 0
productDB = {
123:["Hamam Soap",35.0],
456:["Parle G Biscuit",20.0],
789:["Colgate Tooth Paste 100g",50.0],
246:["Axe Perfume 100ml",105.0],
135:["Amrutanjan Pain Balm 8ml",38.0]
}
productCount = int(input("Enter number of Products(Don't include the individual pieces of same product) : "))
productSerials = []
for i in range(productCount):
serial = int(input("Enter product({}) Serial Number : ".format(i+1)))
while serial not in productDB.keys():
print("OOPS ! Please Enter a valid Serial Number")
serial = int(input("Re-enter product({}) Serial Number : ".format(i+1)))
quantity = int(input("Enter product({}) Quantity : ".format(i+1)))
productSerials.append([serial,quantity])
print("-"*100)
print("{:^100}".format(storeName))
print("{:^100}".format(branch))
print("-"*100)
print("Bill no : {:<67} Date : {:>10}".format(billno,date))
print("-"*100)
print("Customer Name : {:<100}".format(customerName))
print("Customer Mobile : {:<100}".format(customerMobileNumber))
print("-"*100)
for f in fields:
print("{:^25}".format(f),end = "")
print();print("-"*100)
for i in range(productCount):
serialNo = productSerials[i][0]
quantity = productSerials[i][1]
price = productDB[serialNo][1]
amount = quantity*price
total_amount += amount
print("{:^25}".format(serialNo),end="")
print("{:^25}".format(productDB[serialNo][0]),end = "")
print("{:^25}".format(quantity),end = "")
print("{:^25}".format(amount))
print("-"*100)
print("{:^25}".format(""),end = "")
print("{:^25}".format(""),end = "")1
print("{:^25}".format("TOTAL"),end = "")
print("{:^25}".format(total_amount))
print("-"*100)
print()
print()
print("{:^100}".format("THANK YOU !!"))
print("{:^100}".format("HAVE A NICE DAY !!!!"))
| 7,663 |
/Web_Scraping/Day 2/14_doctor_decoder.ipynb
|
bd52f01fb7f5e8947c7ff79eadcade1610ed91fb
|
[] |
no_license
|
Flaw1331/Class
|
https://github.com/Flaw1331/Class
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 4,724 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Start Python Imports
import math, time, random, datetime
# Data Manipulation
import numpy as np
import pandas as pd
# Visualization
import matplotlib.pyplot as plt
import missingno
import seaborn as sns
plt.style.use('seaborn-whitegrid')
import warnings
warnings.filterwarnings('ignore')
# Import train & test data
train = pd.read_csv('./Data/train.csv')
test = pd.read_csv('./Data/test.csv')
# +
all_is_null = zip(train.isnull(), train.isnull().sum())
for is_null in all_is_null:
if is_null[1] > 0:
print(f'{is_null[0]}: {is_null[1]}')
print()
print(f'Total: {len(train)}')
missingno.matrix(train)
# +
all_is_null = zip(test.isnull(), test.isnull().sum())
for is_null in all_is_null:
if is_null[1] > 0:
print(f'{is_null[0]}: {is_null[1]}')
print()
print(f'Total: {len(train)}')
missingno.matrix(test)
# +
total_stroke = train[train['stroke_in_2018'] == '1']['stroke_in_2018'].count()
total_samples = train['stroke_in_2018'].count()
print('Training data')
print(f'Total people who had strokes: {total_stroke}')
print(f'Fraction of people who had strokes: {1 - (total_stroke / total_samples)}')
print()
print('Test data')
print(f'Fraction of people who had strokes: 0.98480 (approx. from submission)')
print()
fig = plt.figure(figsize=(20,5))
sns.countplot(y='stroke_in_2018', data=train);
print(train.stroke_in_2018.value_counts())
# +
from word2number import w2n
def getGender(x):
return 1 if x.lower() == 'm' else (0 if x.lower() == 'f' else (2 if x.lower() == 'other' else ''))
def splitAgeGender(x, ret_age=True):
if not x or str(x) == 'nan':
return ''
parts = str(x).replace(',', ' ').split()
age = gender = ''
if ret_age:
try:
age = w2n.word_to_num(parts[0])
except ValueError:
try:
age = float(parts[0])
except ValueError:
pass
if not age and len(parts) > 1:
try:
age = w2n.word_to_num(parts[1])
except ValueError:
try:
age = float(parts[1])
except ValueError:
pass
return age
else:
gender = getGender(parts[0])
if gender == '' and len(parts) > 1:
gender = getGender(parts[1])
return gender
def discreteAge(x):
if not x:
return 0
return int(x // 5) + 1
# Added - Job and location
def split_job_livivng(line, job = True):
if not line or str(line) == 'nan':
return 'AAA'
part_1, part_2 = line.split('?')[:2:]
if job:
if 'gov' in part_1.lower() or 'gov' in part_2.lower():
part_1 = 'GOVERNMENT'
elif 'pri' in part_1.lower() or 'pri' in part_2.lower():
part_1 = 'PRIVATE'
elif 'bus' in part_1.lower() or 'bus' in part_2.lower() or 'biz' in part_1.lower() or 'biz' in part_2.lower():
part_1 = 'BUSINESS'
elif 'parent' in part_1.lower() or 'parent' in part_2.lower():
part_1 = 'PARENTAL_LEAVE'
elif 'unemp' in part_1.lower() or 'unemp' in part_2.lower():
part_1 = 'UNEMPLOYED'
else:
part_1 = 'AAA'
return part_1
else:
if 'city' in part_1.lower() or 'city' in part_2.lower():
part_2 = 'CITY'
elif 'remo' in part_1.lower() or 'remo' in part_2.lower():
part_2 = 'REMOTE'
elif part_1 == 'c' or part_2 == 'c':
part_2 = 'CITY'
elif part_1 == 'r' or part_2 == 'r':
part_2 = 'REMOTE'
else:
part_2 = 'AAA'
return part_2
def getSmokeStatus(x):
x = str(x).lower()
x = ''.join([i for i in x if i.isalpha()])
return 1 if 'non' in x else (2 if 'quit' in x else (3 if 'active' in x else 0))
def fixBmi(x):
x = str(x)
if x == 'nan' or x == '?' or x == '.':
x = 0
return float(x)
def discreteBmi(x):
if x < 0.5:
return 0
elif x < 18.5:
return 1
elif x < 25:
return 2
elif x < 30:
return 3
elif x < 35:
return 4
elif x < 40:
return 5
return 6
def discreteBloodSugar(x):
if x < 70:
return 1
elif x < 120:
return 2
elif x < 200:
return 3
elif x < 280:
return 3
return 4
def cleanBinary(x, flip=False):
val = x
try:
val = int(x)
if flip:
val = 1 if val == 1 else 0
else:
val = 0 if val == 0 else 1
except ValueError:
val = ''
return val
def checkTreated(x):
if str(x['TreatmentA']) == 'nan':
return 0
return 1 if (x['TreatmentA'] == 1 or x['TreatmentB'] == 1 or x['TreatmentC'] == 1 or x['TreatmentD_2'] == 1) else 0
def bmiMean(x, m):
if x > 0.5:
return x
return m
# -
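# Quick sanity checks for the bucketing helpers above; the expected values follow directly from
# the thresholds written in the functions.
assert discreteBmi(22.0) == 2          # 18.5 <= BMI < 25
assert discreteBmi(0.0) == 0           # missing/zero BMI bucket
assert discreteBloodSugar(95) == 2     # 70 <= blood sugar < 120
assert getSmokeStatus('non-smoker') == 1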
def createCleanedData(test_type):
if test_type == 'test':
old_df = test
else:
old_df = train
new_df = pd.DataFrame(old_df)
new_df['sex'] = old_df['sex and age'].apply(lambda x: splitAgeGender(x, False))
new_df['age'] = old_df['sex and age'].apply(lambda x: splitAgeGender(x, True))
new_df['age_2'] = old_df['age'].apply(lambda x: discreteAge(x))
new_df['job'] = old_df['job_status and living_area'].apply(lambda x: split_job_livivng(x, True))
new_df['location'] = old_df['job_status and living_area'].apply(lambda x: split_job_livivng(x, False))
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
new_df['job_encoded'] = encoder.fit_transform(new_df['job'])
new_df['location_encoded'] = encoder.fit_transform(new_df['location'])
new_df['smoker_status_2'] = old_df['smoker_status'].apply(getSmokeStatus)
new_df['BMI'] = old_df['BMI'].apply(fixBmi)
new_df['BMI_2'] = new_df['BMI'].apply(discreteBmi)
if 'stroke_in_2018' in new_df:
new_df_2 = new_df[new_df['stroke_in_2018'].isin(['1', '0'])]
else:
new_df_2 = new_df
new_df_2['average_blood_sugar_2'] = new_df_2['average_blood_sugar'].apply(discreteBloodSugar)
new_df_2['high_BP_2'] = new_df_2['high_BP'].apply(cleanBinary)
new_df_2['heart_condition_detected_2017_2'] = new_df_2['heart_condition_detected_2017'].apply(cleanBinary)
new_df_2['married_2'] = new_df_2['married'].apply(cleanBinary)
new_df_2['TreatmentD_2'] = new_df_2['TreatmentD'].apply(cleanBinary)
new_df_2['treated'] = new_df_2.apply(lambda row: checkTreated(row), axis=1)
new_df_3 = new_df_2.replace('', np.nan, regex=True)
BMI_mean = new_df_3['BMI'].mean()
new_df_3['BMI_3'] = new_df_3['BMI'].apply(lambda x: bmiMean(x, BMI_mean))
new_df_3['sex'].fillna(0, inplace=True)
new_df_3['age'].fillna(new_df_3['age'].mean(), inplace=True)
new_df_3['high_BP_2'].fillna(0, inplace=True)
new_df_3['heart_condition_detected_2017_2'].fillna(0, inplace=True)
new_df_3['average_blood_sugar'].fillna(new_df_3['average_blood_sugar'].mean(), inplace=True)
new_df_3['married_2'].fillna(1, inplace=True)
new_df_3['TreatmentA'].fillna(0, inplace=True)
new_df_3['TreatmentB'].fillna(0, inplace=True)
new_df_3['TreatmentC'].fillna(0, inplace=True)
new_df_3['TreatmentD_2'].fillna(0, inplace=True)
new_df_3.to_csv(f'./Data/{test_type}_cleaned.csv')
return new_df_3
train_clean = createCleanedData('train')
test_clean = createCleanedData('test')
train_clean.BMI_3.plot.hist()
test_clean.BMI_3.plot.hist()
train_clean.average_blood_sugar.plot.hist()
test_clean.average_blood_sugar.plot.hist()
train_clean.age.plot.hist()
test_clean.age.plot.hist()
missingno.matrix(train_clean)
# +
from sklearn import svm, datasets
from sklearn.linear_model import LinearRegression, LogisticRegressionCV
from sklearn.metrics import roc_curve, auc
from scipy import interp
from sklearn.model_selection import StratifiedKFold
train_clean_2 = pd.read_csv('./Data/train_cleaned.csv')
x_train = train_clean_2[['sex', 'age', 'high_BP_2', 'heart_condition_detected_2017_2', 'married_2', 'average_blood_sugar_2', 'BMI_3', 'smoker_status_2', 'treated']].copy()
y_train = train_clean_2[['stroke_in_2018']].copy()
# Run classifier with cross-validation and plot ROC curves
cv = StratifiedKFold(n_splits=3)
classifier = LogisticRegressionCV()
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
i = 0
for train, test in cv.split(x_train, y_train):
probas_ = classifier.fit(x_train.iloc[train], y_train.iloc[train]).predict_proba(x_train.iloc[test])
# Compute ROC curve and area the curve
fpr, tpr, thresholds = roc_curve(y_train.iloc[test], probas_[:, 1])
tprs.append(interp(mean_fpr, fpr, tpr))
tprs[-1][0] = 0.0
roc_auc = auc(fpr, tpr)
aucs.append(roc_auc)
plt.plot(fpr, tpr, lw=1, alpha=0.3,
label='ROC fold %d (AUC = %0.2f)' % (i, roc_auc))
i += 1
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
plt.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
# +
# Learning Curve for Bias analysis
import numpy as np
import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.neighbors import KNeighborsClassifier
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=None, train_sizes=np.linspace(.1, 1.0, 5)):
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
X, y = x_train, y_train
# Logistic
title = "Learning Curves (Logistic Regression)"
# Cross validation with 100 iterations to get smoother mean test and train
# score curves, each time with 20% data randomly selected as a validation set.
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = LogisticRegressionCV(cv=5, random_state=0, multi_class='ovr')
plot_learning_curve(estimator, title, X, y, ylim=(0.7, 1.01), cv=cv, n_jobs=4)
print('Logistic Done...')
# SVM
title = r"Learning Curves (SVM, RBF kernel, $\gamma=0.001$)"
# SVC is more expensive so we do a lower number of CV iterations:
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
estimator = SVC(gamma=0.001)
plot_learning_curve(estimator, title, X, y, (0.7, 1.01), cv=cv, n_jobs=4)
print('SVM Done...')
# Naive Bayse
title = "Learning Curves (Naive Bayes)"
# Cross validation with 100 iterations to get smoother mean test and train
# score curves, each time with 20% data randomly selected as a validation set.
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = GaussianNB()
plot_learning_curve(estimator, title, X, y, ylim=(0.7, 1.01), cv=cv, n_jobs=4)
print('Naive Bayes Done...')
# Decision Tree
title = "Learning Curves (Decision Tree)"
# Cross validation with 100 iterations to get smoother mean test and train
# score curves, each time with 20% data randomly selected as a validation set.
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = tree.DecisionTreeRegressor()
plot_learning_curve(estimator, title, X, y, ylim=(0.7, 1.01), cv=cv, n_jobs=4)
print('DT Done...')
# KNN
title = "Learning Curves (KNN)"
# Cross validation with 100 iterations to get smoother mean test and train
# score curves, each time with 20% data randomly selected as a validation set.
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = KNeighborsClassifier(n_neighbors=10)
plot_learning_curve(estimator, title, X, y, ylim=(0.7, 1.01), cv=cv, n_jobs=4)
print('KNN Done...')
# +
# Now resample the data
import imblearn
data = pd.read_csv('./Data/train_cleaned.csv')
np.random.seed(seed=1)
mask = np.random.rand(data.shape[0]) < 0.7
train = data[mask]
test = data[~mask]
train.to_csv('./Data/train_clean_split.csv')
test.to_csv('./Data/test_clean_split.csv')
used_features = ['age', 'high_BP_2', 'average_blood_sugar', 'BMI_3', 'TreatmentA', 'TreatmentB', 'TreatmentC', 'TreatmentD_2']
#used_features = ['sex', 'age', 'high_BP_2', 'heart_condition_detected_2017_2', 'married_2', 'average_blood_sugar_2', 'BMI_3', 'smoker_status_2', 'TreatmentA', 'TreatmentB', 'TreatmentC', 'TreatmentD_2']
x_train = train[used_features].copy()
y_train = train[['stroke_in_2018']].copy()
train.describe()
# -
sns.countplot('stroke_in_2018', data = train)
plt.title('No Stroke (0) vs. Stroke (1)')
# +
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
# random oversampling
ros = RandomUnderSampler(random_state=0)
#x_resampled, y_resampled = ros.fit_resample(x_train, y_train)
# applying SMOTE to our data and checking the class counts
x_resampled, y_resampled = SMOTE().fit_resample(x_train, y_train)
print(f'Total size of resampled data: {len(x_resampled)}')
# -
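# A minimal look at the class balance before and after resampling (y_resampled is assumed to be
# the usual 1-D 0/1 label output of imblearn; np.asarray/ravel keeps this robust to Series output).
vals, counts = np.unique(np.asarray(y_resampled).ravel(), return_counts=True)
print('before resampling:', y_train['stroke_in_2018'].value_counts().to_dict())
print('after resampling: ', dict(zip(vals.tolist(), counts.tolist())))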
from ModelsTest import runTests
runTests(x_resampled, y_resampled, test_type='cv', data_used=1)
# +
from sklearn.linear_model import LogisticRegressionCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm
from sklearn import metrics
x_test = test[used_features].copy()
y_test = test[['stroke_in_2018']].copy()
def run_model(algo_name):
algos = {'lr': LogisticRegressionCV(class_weight='balanced', scoring='roc_auc', max_iter=1000), 'knn': KNeighborsClassifier(n_neighbors=15),
'svm': svm.SVC(), 'dt': DecisionTreeClassifier(max_depth=5, min_samples_split=3, class_weight={0:9, 1:7})}
algo = algos[algo_name]
algo.fit(x_resampled, y_resampled)
pred_train_lr = algo.predict(x_resampled)
print('Accuracy for train: ' + str(round(metrics.accuracy_score(y_resampled, pred_train_lr) * 100, 2)))
print('Confusion matrix for train:')
print(metrics.confusion_matrix(y_resampled, pred_train_lr))
print()
pred_test_lr = algo.predict(x_test)
print('Accuracy for test: ' + str(round(metrics.accuracy_score(y_test, pred_test_lr) * 100, 2)))
print('Confusion matrix for test:')
print(metrics.confusion_matrix(y_test, pred_test_lr))
print('F1 score for test: ' + str(metrics.f1_score(y_test, pred_test_lr, average='binary')))
fpr, tpr, thresholds = metrics.roc_curve(y_test, pred_test_lr)
print('AUC score for test: ' + str(metrics.auc(fpr, tpr)))
print()
return algo
print('Logistic regression')
lr_fit = run_model('lr')
#print('K-Nearest Neighbors')
#run_model('knn')
#print('SVM')
#run_model('svm')
print('Decisition Tree')
dt_fit = run_model('dt')
# +
import xgboost as xgb
xgb2 = xgb.XGBClassifier(
learning_rate =0.5,
n_estimators=100,
max_depth=8,
min_child_weight=3,
gamma=5,
subsample=0.8,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=2,
seed=27)
train_model7 = xgb2.fit(x_resampled, y_resampled)
pred7 = train_model7.predict(x_test.to_numpy())
print("Accuracy for model XGBoost train: %.2f" % (metrics.accuracy_score(y_test, pred7) * 100))
print(metrics.confusion_matrix(y_test, pred7))
print('F1 Score for XGBoost test: ' + str(metrics.f1_score(y_test, pred7, average='binary')))
fpr, tpr, thresholds = metrics.roc_curve(y_test, pred7)
print('AUC score for XGBoost test: ' + str(metrics.auc(fpr, tpr)))
# +
test_data = pd.read_csv('./Data/test_cleaned.csv')
x_test_final = test_data[used_features].copy()
pred_f = lr_fit.predict(x_test_final.to_numpy())
print(pred_f.mean())
# -
output = test_data[['id']]
output['stroke_in_2018'] = pd.DataFrame(pred_f)
print(output)
output.to_csv('./Data/final.csv')
les:**
# + id="e6jeDGEpXQ76" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="f901beda-bc73-4a7a-8c8b-d1e7a9eb050a"
v = np.array([9,10])
w = np.array([11, 12])
# Inner product of vectors
print(v.dot(w))
print(np.dot(v, w))
##Assert the value
assert(np.dot(v,w) == (v[0] * w[0] + v[1] * w[1]) )
# + id="OleS9R2XXQ78" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="0da800d0-0fc4-42cc-dec5-03caa170e74a"
x = np.array([[1,2],[3,4]])
# Matrix / vector product; both produce the rank 1 array
print(x.dot(v))
print(np.dot(x, v))
xformed = np.dot(x,v)
##Retrieve the original vector
print(np.dot( np.linalg.inv(x), xformed ) )
# + id="W6E5lkHVXQ7-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="ca67a254-5c99-4e28-b078-566aa1f11eb6"
y = np.array([[5,6],[7,8]])
# Matrix / matrix product; both produce the rank 2 array
print(x.dot(y))
print(np.dot(x, y))
xformed = np.dot(x,y)
##Retrieve the original matrix
print(np.dot( np.linalg.inv(x), xformed ))
# + [markdown] id="6vmKfrImXQ8A" colab_type="text"
# Numpy provides many useful functions for performing computations on arrays; one of the most useful is `sum`:
# + id="XSTvjf3YXQ8A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="5aeb18e8-4cf4-4a33-97d0-b8c369b8959a"
import numpy as np
x = np.array([[1,2],[3,4]])
print('x =')
print(x)
print('sum =')
print(np.sum(x)) # Compute sum of all elements
print('sum along axis 0 =')
print(np.sum(x, axis=0)) # Compute sum of each column
print('sum along axis 1 =')
print(np.sum(x, axis=1)) # Compute sum of each row
# + [markdown] id="olOcsERol2lN" colab_type="text"
# To transpose a matrix, simply use the T attribute of an array object:
# + id="d-jS9i5iXQ8C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="1fa87fec-3343-4456-bad2-47cd9233e2f9"
print(x)
print(x.T)
##Assert that tranposes were applied
assert( x[1,0] == x.T[0,1] )
assert( x[0,0] == x.T[0,0] ) # Diagonal element unchanged
# + [markdown] id="vLvszOw_l_nS" colab_type="text"
# Numpy arrays can also be reshaped using `.reshape()`
# + id="b69dRW1bl-N1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="5d8343f8-1cfd-4a21-ed28-d8b4251da4bb"
x = np.array([[1,2],[3,4]])
print(x.shape)
y = np.reshape(x, (1, 4))
print(y.shape)
print(y)
# + [markdown] id="8pF1yyhiXQ8B" colab_type="text"
# Mathematical functions [documentation](http://docs.scipy.org/doc/numpy/reference/routines.math.html).
# + [markdown] id="6Y70yMgAXQ8E" colab_type="text"
# ##➛Introduction to Broadcasting (intuition)
# + [markdown] id="kZakfoZeXQ8F" colab_type="text"
# * Broadcasting is a powerful mechanism that **allows numpy to work with arrays of different shapes when performing arithmetic operations**
# * Frequently we have **a smaller array and a larger array**, and we want to **use the smaller array multiple times to perform some operation on the larger array**
#
# Suppose that we want to add a constant vector to each row of a matrix:
# + id="0C6PHInsXQ8F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="52eb98c3-93ff-4656-bc83-ebb2ae96aaee"
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = np.empty_like(x) # Create an empty matrix with the same shape as x
print('x: ', x)
print('v: ', v)
# + [markdown] id="8HssZd8mnh0p" colab_type="text"
# We would like to add `v` to `x`. Here is a naive way of doing so (**Method 1**):
# + id="4Z_KEwltnC7g" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="4da2855d-990e-4119-9cdc-b199ee1fc95f"
# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
x[i, :] = x[i, :] + v
print(x)
# + [markdown] id="5-I9Nf-sXQ8G" colab_type="text"
# **This works**; however when the matrix `x` is very large, computing an explicit loop in Python **could be slow**.
# + [markdown] id="U53xdShpoVSH" colab_type="text"
# Note that adding the vector v to each row of the matrix `x` is equivalent to forming a matrix `vv` by stacking multiple copies of `v` vertically, then performing elementwise summation of `x` and `vv`. We could implement this approach like this (**Method 2**):
# + id="vCRexPyNXQ8G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="f88c421f-cd18-48c3-ebd7-ac08d8b5ef76"
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
assert( vv.shape == (4,3))
print(vv)
# + id="nnWsd_9uXQ8I" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="b5f5dfcd-99c0-49ec-9834-ac0a386fe57c"
y = x + vv # Add x and vv elementwise
print(y)
# + [markdown] id="cckaw_HxXQ8K" colab_type="text"
# Numpy broadcasting allows us to perform this computation **without actually creating multiple copies of v**. Consider this version, **using broadcasting** (**Method 3**):
# + id="MUP7ja6RXQ8K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="84187a4e-bdc3-4157-d916-d5dbec58c9d6"
import numpy as np
# We will add the vector v to each row of the matrix x,
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v # Add v to each row of x using broadcasting
print(x)
print('')
print(v)
print('')
print(y)
# + [markdown] id="vnsTScwIXQ8M" colab_type="text"
# ##➛Broadcasting
# * The term "broadcasting" describes how numpy treats arrays with different shapes during arithmetic operations
# * The smaller array is "broadcast" across the larger array so that they have compatible shapes
#
# The simplest broadcasting example occurs when an array and a scalar value are combined in an operation:
# + id="NxnpBm9wjvFR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="55c9d811-6f54-4e17-fc57-5d615c68a153"
import numpy as np
a = np.array([1.0, 2.0, 3.0])
b = 2.0
print(a * b)
# + [markdown] id="6HY5XCzZkEqJ" colab_type="text"
# **Explanation:**
# * We can think of the **scalar b being stretched during the arithmetic operation** into an array with the same shape as a
# * The new elements in b are simply copies of the original scalar
# * The **stretching analogy is only conceptual**. NumPy is smart enough to use the original scalar value **without actually making copies**, so that broadcasting operations are **as memory and computationally efficient** as possible.
# + [markdown] id="kn8J2clskoNA" colab_type="text"
# ##➛Broadcasting Rules
#
# When operating on two arrays, NumPy **compares their shapes element-wise**. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible for broadcasting when
#
# 1. they are equal, or
# 1. one of them is 1
#
# If these conditions are not met, a `ValueError: operands could not be broadcast together` exception is thrown, indicating that the arrays have incompatible shapes. The **size of the resulting array is the maximum size along each dimension of the input arrays**.
#
# Arrays do not need to have the same number of dimensions. For example, if you have a 256x256x3 array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible:
# ```
# Image (3d array): 256 x 256 x 3
# Scale (1d array): 3
# Result (3d array): 256 x 256 x 3
# ```
# 
#
# When either of the dimensions compared is one, the other is used. In other words, dimensions with size 1 are stretched or “copied” to match the other. Here, the scaling vector can be thought of having the dimensions `1 x 1 x 3`.
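# + [markdown]
# The sketch below (added, using a hypothetical random image as a stand-in) checks the image-scaling case above: a `256 x 256 x 3` array times a length-3 vector broadcasts channel-wise.
# +
# Hypothetical example: scale each RGB channel by a different factor
image = np.random.rand(256, 256, 3)   # stand-in for a real 256x256 RGB image
scale = np.array([0.5, 1.0, 2.0])     # one factor per color channel
scaled = image * scale                # (256,256,3) * (3,) -> (256,256,3)
print(scaled.shape)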
# + [markdown] id="z999K1wEpjyL" colab_type="text"
# ##➛Broadcasting Examples
#
# Examples where broadcasting works:
#
# ```
# A (4d array): 8 x 1 x 6 x 1
# B (3d array): 7 x 1 x 5
# Result (4d array): 8 x 7 x 6 x 5
# ```
#
# ```
# A (2d array): 5 x 4
# B (1d array): 1
# Result (2d array): 5 x 4
# ```
#
# ```
# A (2d array): 5 x 4
# B (1d array): 4
# Result (2d array): 5 x 4
# ```
#
# ```
# A (3d array): 15 x 3 x 5
# B (3d array): 15 x 1 x 5
# Result (3d array): 15 x 3 x 5
# ```
#
# ```
# A (3d array): 15 x 3 x 5
# B (2d array): 3 x 5
# Result (3d array): 15 x 3 x 5
# ```
#
# ```
# A (3d array): 15 x 3 x 5
# B (2d array): 3 x 1
# Result (3d array): 15 x 3 x 5
# ```
#
# Examples of shapes that do not broadcast:
# ```
# A (1d array): 3
# B (1d array): 4 # trailing dimensions do not match
# ```
# ```
# A (2d array): 2 x 1
# B (3d array): 8 x 4 x 3 # second from last dimensions mismatched
# ```
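# + [markdown]
# As an added check of the incompatible cases above, NumPy raises a `ValueError` when the rules are violated; the sketch below catches it so the cell still runs.
# +
# Trailing dimensions 3 and 4 are neither equal nor 1, so this cannot broadcast
try:
    np.zeros(3) + np.zeros(4)
except ValueError as e:
    print('Broadcasting failed:', e)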
# + [markdown] id="TTdi93LpqfkQ" colab_type="text"
# **Practice Example 1:**
# + id="NtbSUK-oqRSU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="92be748f-5455-4015-8aff-479ca0653df7"
x = np.arange(4)
y = np.ones(5)
print(x, x.shape)
print(y, y.shape)
# + id="6n10YHyCrUCU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 180} outputId="bbe25f0f-dc17-4bdc-e10b-9d4a4b38ae49"
# What will be the output?
print((x + y).shape)
# + [markdown] id="F7io4Bwarino" colab_type="text"
# **Practice Example 2:**
# + id="8ZZFz5uqrvJ2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="08101088-7886-47b3-a581-4351e14d2c9d"
xx = x.reshape(4,1)
print(xx, xx.shape)
print(y, y.shape)
# + id="x-LqF-2mr3o3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e56943c6-e0ee-4d70-fb94-6ad0c06315ec"
# What will be the output?
print((xx + y).shape)
assert( (xx + y).shape == (4,5))
# + [markdown] id="szrxcNXYsX9I" colab_type="text"
# ##➛Outer Product using Broadcasting
# + [markdown] id="34d7mdavjvb7" colab_type="text"
# Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The following **example shows an outer product operation** of two 1-d arrays:
# + id="GfOKBe-QXQ8N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="a1b68496-ca6f-4964-da87-130512c27de0"
import numpy as np
a = np.array([0.0, 10.0, 20.0, 30.0])
b = np.array([1.0, 2.0, 3.0])
print(a, a.shape)
print(b, b.shape)
# + id="pEuI7UBiYSAM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 163} outputId="fb481f8f-b11b-432a-f23f-b095a84dfbc6"
print(a * b)
# + [markdown] id="DAerJHzEdRyG" colab_type="text"
# 
#
# + id="Y0MZIe29cAll" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="67375f05-7a43-4ca9-e25e-60310fcf0e34"
# Reshaping b into a 3x1 column vector broadcasts it against a (shape (4,)), producing a 3x4 outer-product matrix
print( a * b.reshape(3,1) )
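# + [markdown]
# Added variant: the same outer product can be written with `np.newaxis`, which inserts a length-1 axis so that `a` becomes a 4x1 column and broadcasts against `b`, giving a 4x3 result.
# +
# a[:, np.newaxis] has shape (4, 1); broadcasting against b (shape (3,)) yields a 4x3 matrix
print(a[:, np.newaxis] * b)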
# + [markdown] id="L1wMP325XQ8U" colab_type="text"
# Functions that support broadcasting are known as universal functions. Here is the [list](http://docs.scipy.org/doc/numpy/reference/ufuncs.html#available-ufuncs).
#
# Broadcasting [documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
#
# + [markdown] id="qRkss7ZLXiow" colab_type="text"
# ##➛Matplotlib
#
#
# + id="xeI5ajLiXoDK" colab_type="code" colab={}
import matplotlib.pyplot as plt
import numpy as np
# + id="iz2aNUoBX_ZG" colab_type="code" colab={}
# %matplotlib inline
# + id="SjLfWmfsYFD7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="b17817df-905a-4e3b-ba02-fae60d1a422a"
# Compute x and y coordinates for a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
print(x[:10], y[:10])
# + id="FX-iH3NHYZcn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="a7d63d29-aace-4451-9c09-50fce683a196"
## Plot the points using matplotlib
plt.plot(x,y)
plt.show()
# + id="bPv2e9mLYudL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="d9d2e6a9-1adc-4305-8858-c78f753c0a32"
## Display multiple plots
y_sin = np.sin(x)
y_cos = np.cos(x)
## Plot points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel( 'x' )
plt.ylabel( 'y' )
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
plt.show()
# + [markdown] id="cne6tO59Zl7q" colab_type="text"
# Plot Parabola
# + id="eTvErMMvZn5l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="6daf5ef8-9267-4670-b704-7faed75c332d"
## Create the x and y values for a parabola
x_parabola = np.arange(-2, 2, 0.1)
y_pos = x_parabola * x_parabola
y_neg = -1 * y_pos + 1
## Plot the points
plt.plot(x_parabola, y_pos)
plt.plot(x_parabola, y_neg)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# + [markdown] id="vaAvhyqhaz27" colab_type="text"
# Multiple Plots
# + id="VnMBjdG2a3V_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="d9bf80bc-bb82-43ac-f631-d63d4b2cabc3"
## Setup a subplot grid with height 2 and width 2
plt.subplot(2, 2, 1)
## Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
## Set second subplot as active to make second plot
plt.subplot(2, 2, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
## Set the third subplot as active to make third plot
plt.subplot(2, 2, 3)
plt.plot(x_parabola, y_pos)
plt.title('Parabola Up')
## Set the fourth subplot as active to make the fourth plot
plt.subplot(2, 2, 4)
plt.plot(x_parabola, y_neg)
plt.title('Parabola Down')
## Show the figure
plt.show()
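# + [markdown]
# Added note: with several subplots the titles and tick labels can overlap; calling `plt.tight_layout()` before `plt.show()` adjusts the spacing automatically. A minimal sketch reusing the data above:
# +
plt.subplot(2, 2, 1); plt.plot(x, y_sin); plt.title('Sine')
plt.subplot(2, 2, 2); plt.plot(x, y_cos); plt.title('Cosine')
plt.subplot(2, 2, 3); plt.plot(x_parabola, y_pos); plt.title('Parabola Up')
plt.subplot(2, 2, 4); plt.plot(x_parabola, y_neg); plt.title('Parabola Down')
plt.tight_layout()   # prevent subplot titles/labels from overlapping
plt.show()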
# + [markdown] id="I7Yy6KYvcKr3" colab_type="text"
# ##➛Plotly
# + id="juYbOymzcPD7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="75c8bea5-3dae-46fa-f8f3-3e4125a90c2d"
# !pip3 install plotly_express
import plotly.express as px
# + id="5ugdsERgcaBE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="ec60ccb7-b9c1-4fba-e250-6e1ccf82962d"
iris = px.data.iris()
print(iris.shape)
iris.head()
# + id="HfbFhKKYcjyY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="f33f40eb-c407-47a9-fe39-44b1c6448b93"
fig = px.scatter_3d(iris,
x = 'sepal_length',
y = 'sepal_width',
z = 'petal_width',
color = 'species')
fig.show()
| 32,391 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="o5d4t7HfVRw8" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1605134644718, "user_tz": 180, "elapsed": 1777, "user": {"displayName": "Jean Firmino", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFJV5ZmLshBCa0RNoJDjlS6LiSQzvU83ieVt7v9w=s64", "userId": "00886621537172662755"}} outputId="a4add085-9c10-436c-b99f-dfcb184b52d6"
# 1.A) MÉTODO DE VANDERMONDE
import numpy as np
from numpy import linalg
import matplotlib.pyplot as plt
from sympy import *
from sympy.plotting import plot
# VALORES A SER INTERPOLADOS, O POLINOMIO SERÁ DE GRAU N(números de pares) - 1
# DADOS DA QUESTÃO
xi = np.array([ 2, 4, 6, 8, 10, 12, 14 ], dtype='double')
yi = np.array([ 195.4, 190.2, 197.1, 215.2, 199.2, 193.7, 190.9 ], dtype='double')
# MATRIZ DE VANDERMONDE
A = np.array([ xi**0, xi**1, xi**2, xi**3, xi**4, xi**5, xi**6 ]).transpose()
# A.a = yi -> a = (A^-1).yi
a = (np.linalg.inv(A)).dot(yi)
x = Symbol('x')
yv = a[0]*(x**0) + a[1]*(x**1) + a[2]*(x**2) + a[3]*(x**3) + a[4]*(x**4) + a[5]*(x**5) + a[6]*(x**6) #Polinomial
print(f'O polinômio é \n{yv}\n\n Os coeficientes são \n{a}\n em ordem crescente (x**0, x**1, x**2, ...)')
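# + [markdown]
# Observação adicionada (esboço, assumindo os mesmos `A` e `yi` definidos acima): inverter a matriz de Vandermonde pode ser numericamente instável para muitos pontos; resolver o sistema linear com `np.linalg.solve` dá os mesmos coeficientes de forma mais estável.
# +
# Alternativa: resolver A.a = yi sem calcular a inversa explicitamente
a_solve = np.linalg.solve(A, yi)
print(a_solve)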
# + id="MjyI79A74AW0" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1605134644721, "user_tz": 180, "elapsed": 1671, "user": {"displayName": "Jean Firmino", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFJV5ZmLshBCa0RNoJDjlS6LiSQzvU83ieVt7v9w=s64", "userId": "00886621537172662755"}} outputId="b7cf0879-ad28-479b-b0f7-2da5a58467e8"
# 1.B) MÉTODO DE LAGRANGE
# quantidade de pares x e y
n = 7
# dados da questão
xi = np.array([ 2, 4, 6, 8, 10, 12, 14 ], dtype='double')
yi = np.array([ 195.4, 190.2, 197.1, 215.2, 199.2, 193.7, 190.9 ], dtype='double')
# ponto desejado
x = Symbol( 'x')
# para iniciar a variavel
yl = 0
# interpolação de lagrange
for i in range(n):
c = 1
for j in range(n):
if i != j:
c = c * (x - xi[j])/(xi[i] - xi[j])
yl = yl + c * yi[i]
print(f'Para o método de Lagrange temos como resultante o polinomio \n{simplify(yl)}\n')
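# + [markdown]
# Verificação adicionada (esboço): o polinômio interpolador deve reproduzir os pontos da tabela; abaixo substituímos x = 4 e comparamos com y = 190.2 (a menos de erros de arredondamento).
# +
# O valor em um nó da tabela deve coincidir com o dado original
print(yl.subs(x, 4))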
# + id="RVrY-t8r6uOv" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1605134644723, "user_tz": 180, "elapsed": 1624, "user": {"displayName": "Jean Firmino", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFJV5ZmLshBCa0RNoJDjlS6LiSQzvU83ieVt7v9w=s64", "userId": "00886621537172662755"}} outputId="95aa4189-1493-45bf-c166-ec901feaa414"
# 1.C) DIFERENÇAS DIVIDIDAS (Newton)
# para encontrar o produto dos termos
def produto(i, x, xi):
prod = 1
for j in range(i):
prod = prod * (x - xi[j])
return prod
# Para calcular a tabela das diferenças divididas
def calculandotab(xi, yi, n):
for i in range(1, n):
for j in range(n - i):
yi[j][i] = ((yi[j][i - 1] - yi[j + 1][i - 1]) / (xi[j] - xi[i + j]))
return yi
# aplicando Newton, método das diferenças divididas
def formula(x, xi, yi, n):
soma = yi[0][0]
for i in range(1, n):
soma = soma + (produto(i, x, xi) * yi[0][i])
return soma
# apresentando a tabela
def apresentatab(yi, n):
for i in range(n):
for j in range(n - i):
print(round(yi[i][j], 4), "\t", end = " ")
print("")
#Dados da questão
#n numero de pares x e y
n = 7
xi = [ 2, 4, 6, 8, 10, 12, 14 ]
yi = [[0 for i in range(n)] #criando a matriz y de zeros
for j in range(n)]
#y[][0] usado para iniciar a tabela das diferenças divididas
yi[0][0] = 195.4
yi[1][0] = 190.2
yi[2][0] = 197.1
yi[3][0] = 215.2
yi[4][0] = 199.2
yi[5][0] = 193.7
yi[6][0] = 190.9
# calculando a tabela
yi=calculandotab(xi, yi, n);
# Apresentando a tabela
apresentatab(yi, n)
# Valor para interpolar
x = Symbol('x') # dado na questão
yn = simplify(formula(x, xi, yi, n))
print(f'\n Interpolando pelo método de Newton temos \n {yn}')
# + id="4Y4qMiTJ_kur" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1605134645865, "user_tz": 180, "elapsed": 2744, "user": {"displayName": "Jean Firmino", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFJV5ZmLshBCa0RNoJDjlS6LiSQzvU83ieVt7v9w=s64", "userId": "00886621537172662755"}} outputId="3e8501c6-b568-46ef-dabf-44151a3d464d"
# 1.d
#para vandemonde
xi = np.array([ 2, 4, 6, 8, 10, 12, 14 ], dtype='double')
yi = np.array([ 195.4, 190.2, 197.1, 215.2, 199.2, 193.7, 190.9 ], dtype='double')
A = np.array([ xi**0, xi**1, xi**2, xi**3, xi**4, xi**5, xi**6 ]).transpose()
a = (np.linalg.inv(A)).dot(yi)
x = 7.6
yv2 = a[0]*(x**0) + a[1]*(x**1) + a[2]*(x**2) + a[3]*(x**3) + a[4]*(x**4) + a[5]*(x**5) + a[6]*(x**6)
print(f'Para Vandermonde x = 7.6 , y = {yv2}\n')
# para lagrange
n = 7
xi = np.array([ 2, 4, 6, 8, 10, 12, 14 ], dtype='double')
yi = np.array([ 195.4, 190.2, 197.1, 215.2, 199.2, 193.7, 190.9 ], dtype='double')
x = 7.6
yl2 = 0
for i in range(n):
c = 1
for j in range(n):
if i != j:
c = c * (x - xi[j])/(xi[i] - xi[j])
yl2 = yl2 + c * yi[i]
print(f'Para Lagrange x = 7.6 y= {yl2}\n')
#para newton
def produto(i, x, xi):
prod = 1
for j in range(i):
prod = prod * (x - xi[j])
return prod
def calculandotab(xi, yi, n):
for i in range(1, n):
for j in range(n - i):
yi[j][i] = ((yi[j][i - 1] - yi[j + 1][i - 1]) / (xi[j] - xi[i + j]))
return yi
def formula(x, xi, yi, n):
soma = yi[0][0]
for i in range(1, n):
soma = soma + (produto(i, x, xi) * yi[0][i])
return soma
n = 7
xi = [ 2, 4, 6, 8, 10, 12, 14 ]
yi = [[0 for i in range(n)]
for j in range(n)]
yi[0][0] = 195.4
yi[1][0] = 190.2
yi[2][0] = 197.1
yi[3][0] = 215.2
yi[4][0] = 199.2
yi[5][0] = 193.7
yi[6][0] = 190.9
yi=calculandotab(xi, yi, n);
x = 7.6
yn2 = simplify(formula(x, xi, yi, n))
print(f'Para Newton x = 7.6 y = {yn2}')
# + id="v31jJIHyPXLX" colab={"base_uri": "https://localhost:8080/", "height": 470} executionInfo={"status": "ok", "timestamp": 1605134659082, "user_tz": 180, "elapsed": 15937, "user": {"displayName": "Jean Firmino", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFJV5ZmLshBCa0RNoJDjlS6LiSQzvU83ieVt7v9w=s64", "userId": "00886621537172662755"}} outputId="f50266f1-c7d4-4975-ccbb-fdc499e46adf"
#1. E
def f(x):
return -0.0060026041666699*x**6 + 0.287083333333481*x**5 - 5.36666666666932*x**4 + 49.4802083333573*x**3 - 233.374791666776*x**2 + 525.910833333579*x - 241.700000000207
def g(x):
return -0.00600260416666668*x**6 + 0.287083333333333*x**5 - 5.36666666666667*x**4 + 49.4802083333334*x**3 - 233.374791666667*x**2 + 525.910833333333*x - 241.700000000004
def h(x):
return -0.00600260416666666*x**6 + 0.287083333333333*x**5 - 5.36666666666666*x**4 + 49.4802083333333*x**3 - 233.374791666667*x**2 + 525.910833333333*x - 241.7
xi = np.array([ 2, 4, 6, 8, 10, 12, 14 ], dtype='double')
yi = np.array([ 195.4, 190.2, 197.1, 215.2, 199.2, 193.7, 190.9 ], dtype='double')
import matplotlib.pyplot as plt
x = np.arange(2,14,0.000001)
plt.figure(figsize=(6, 6))
plt.plot(x,f(x),'b-', color='green', label = 'Vandermonde')
plt.plot(x,g(x),'b-', color='black', label = 'Lagrange')
plt.plot(x,h(x),'b-',color='orange', label = 'Newton')
plt.scatter(xi,yi,s=70,c='red',label = 'Pontos da questão')
plt.style.use('classic')
plt.title('Funções analisadas')
plt.xlabel('X')
plt.ylabel('Y')
plt.ylim(189,216)
plt.xlim(1.9,14.1)
plt.legend()
plt.grid('True')
plt.show()
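# + [markdown]
# Checagem adicionada (esboço): como os três métodos interpolam os mesmos pontos, os polinômios f, g e h devem ser praticamente idênticos; abaixo medimos a maior diferença absoluta entre eles no intervalo plotado.
# +
xs = np.linspace(2, 14, 1000)
print('max |f - g| =', np.max(np.abs(f(xs) - g(xs))))
print('max |f - h| =', np.max(np.abs(f(xs) - h(xs))))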
# + id="v-xv5zl8HpVx" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1605134659147, "user_tz": 180, "elapsed": 15974, "user": {"displayName": "Jean Firmino", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFJV5ZmLshBCa0RNoJDjlS6LiSQzvU83ieVt7v9w=s64", "userId": "00886621537172662755"}} outputId="f872b321-d6e1-4835-c397-d3b2c55f66bd"
# 1.E
from scipy.interpolate import lagrange
x = np.array([ 2, 4, 6, 8, 10, 12, 14 ], dtype='double')
y = np.array([ 195.4, 190.2, 197.1, 215.2, 199.2, 193.7, 190.9 ], dtype='double')
poly = lagrange(x, y)
print(f'\nLagrange =\n {poly}\n')
print(f'\nPara x= 7.6 temos y={poly(7.6)}\n')
#p2 = np.polynomial.polynomial.polyfit(x,y,deg=6)
#print(f'\nPolinomial =\n {p2}\n')
#print(f'Para x=7.6 temos y={p2(7.6)}')
#Renomeando as colunas
fans_idioma_df.rename(columns={'index':'Idioma', 0: 'Quantidade'}, inplace=True)
fans_idioma_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="-hZjYjwzhgEL" outputId="14f41495-5f71-4fe8-c416-1b99734a92c2"
#gerando um gráfico
grafico = px.bar(fans_idioma_df, x ='Idioma', y='Quantidade', title = 'Fãs por idioma')
grafico.show()
# + [markdown] id="C8gWnBElfos2"
# ## Fãs por cidade
# + colab={"base_uri": "https://localhost:8080/"} id="iPmOEBzNjPxv" outputId="2ba6e8b7-a63f-4eb3-eb72-a636d19508f2"
fans_cidade = graph.get_connections(id=page_id, connection_name='insights', metric='page_fans_city',
since = '2020-12-01', until = '2021-03-01')
fans_cidade
# + colab={"base_uri": "https://localhost:8080/"} id="icgXE2lDjdv3" outputId="36ae3bc1-e453-415f-9422-584c878feb15"
fans_cidade['data'][0]['values'][0]['value']
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="C3cQnu4Sjrjn" outputId="535870c1-bf4a-493f-ed4a-68cd9fd73053"
#Criando o DataFrame
fans_cidade_df = pd.DataFrame.from_dict(fans_cidade['data'][0]['values'][0]['value'],
orient='index')
#Resetando o index
fans_cidade_df.reset_index(inplace=True)
#Renomeando as colunas
fans_cidade_df.rename(columns={'index':'Cidade', 0: 'Quantidade'}, inplace=True)
fans_cidade_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="8z3XVVEGkped" outputId="73f24791-3a28-4399-90f5-13c8e86ea91b"
grafico = px.bar(fans_cidade_df, x ='Cidade', y='Quantidade', title = 'Fãs por cidade')
grafico.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="F48AhqVZlQq7" outputId="e813f864-541f-4dfa-db5c-d2f876406007"
grafico = px.treemap(fans_cidade_df, path= ['Cidade', 'Quantidade'])
grafico.show()
# + [markdown] id="MtfJTzbifsyO"
# ## Fãs por país
# + colab={"base_uri": "https://localhost:8080/"} id="c-JOU3UFmPih" outputId="94863629-8c0f-487e-8243-e0798256262a"
fans_pais = graph.get_connections(id=page_id, connection_name='insights', metric='page_fans_country',
since = '2020-12-01', until = '2021-03-01')
fans_pais
# + colab={"base_uri": "https://localhost:8080/"} id="MixVU_6rmXyI" outputId="15b27f39-61fe-46f6-dfd3-702274bc3cf3"
fans_pais['data'][0]['values'][0]['value']
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="EF72I57Oma24" outputId="43ddcd4e-ff5a-40fa-f409-fc6241eefce2"
#Criando o DataFrame
fans_pais_df = pd.DataFrame.from_dict(fans_pais['data'][0]['values'][0]['value'],
orient='index')
#Resetando o index
fans_pais_df.reset_index(inplace=True)
#Renomeando as colunas
fans_pais_df.rename(columns={'index':'Pais', 0: 'Quantidade'}, inplace=True)
fans_pais_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="2BdjpBVImiaH" outputId="da9507e2-597f-42cf-fa9b-5382151c9cf9"
grafico = px.bar(fans_pais_df, x="Pais", y="Quantidade", title = 'Fãs por país')
grafico.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="MrWh6e6Imqb4" outputId="71ffd6f5-e8c7-45e3-81e0-25ef3088f31c"
grafico = px.treemap(fans_pais_df, path=['Pais', 'Quantidade'])
grafico
# + [markdown] id="JUPsBWvfm4eX"
# ## Fãs por idade e gênero
# + colab={"base_uri": "https://localhost:8080/"} id="BFwdH2ZRnUwe" outputId="5c149971-2da1-4320-f498-8a5475339502"
fans_idade_genero = graph.get_connections(id=page_id, connection_name='insights', metric='page_fans_gender_age',
since = '2020-12-01', until = '2021-03-01')
fans_idade_genero
# + colab={"base_uri": "https://localhost:8080/"} id="KdJSbz1endo2" outputId="74b20fda-7a5f-415c-f6c7-43ddeb8a44b0"
fans_idade_genero['data'][0]['values'][0]['value']
# + colab={"base_uri": "https://localhost:8080/", "height": 483} id="LfDpICWfng0G" outputId="e0604aed-ca45-44a9-b383-07a0f3032be7"
#Criando o DataFrame
idade_genero_df = pd.DataFrame.from_dict(fans_idade_genero['data'][0]['values'][0]['value'],
orient='index')
#Resetando o index
idade_genero_df.reset_index(inplace=True)
#Renomeando as colunas
idade_genero_df.rename(columns={'index':'Idade genero', 0: 'Quantidade'}, inplace=True)
idade_genero_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="ZJ6hgn6mni_e" outputId="012bf43f-6e95-42c0-85ec-b69c69f63001"
grafico = px.bar(idade_genero_df, x="Idade genero", y="Quantidade", title = 'Fãs por idade e gênero')
grafico.show()
# + [markdown] id="7YuhOZhCoeqN"
# ## Relação entre curtidas e descurtidas
# + colab={"base_uri": "https://localhost:8080/"} id="iZk7Zgj-ofwk" outputId="72741d20-47d2-499b-a78d-e465957266f5"
curtidas = graph.get_connections(id=page_id, connection_name='insights', metric='page_fan_adds',
since = '2020-12-01', until = '2021-03-01')
curtidas
# + colab={"base_uri": "https://localhost:8080/"} id="igGKkMSspCVz" outputId="68190ac4-3b92-4ca4-dfa0-2b399dead4a1"
#Criando um for para percorrer e criar um vetor com numero de curtidas
numero_curtidas = []
for i in curtidas['data'][0]['values']:
numero_curtidas.append(i['value'])
numero_curtidas = np.array(numero_curtidas)
numero_curtidas
# + id="pJGQaQ0JpuWA"
descurtidas = graph.get_connections(id=page_id, connection_name='insights', metric='page_fan_removes',
since = '2020-12-01', until = '2021-03-01')
# + colab={"base_uri": "https://localhost:8080/"} id="bJFkjGMop_B4" outputId="5bee79fa-9899-425a-cb45-6f09b2d8641a"
#Criando um for para percorrer e criar um vetor com numero de curtidas
numero_descurtidas = []
for i in descurtidas['data'][0]['values']:
numero_descurtidas.append(i['value'])
numero_descurtidas = np.array(numero_descurtidas)
numero_descurtidas
# + colab={"base_uri": "https://localhost:8080/"} id="grGsrxT9qKhw" outputId="b5e33f27-a6a6-49dd-ff56-0da514add472"
len(dataframe_facebook), len(numero_curtidas), len(numero_descurtidas)
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="s458S2_pqXgP" outputId="6d8fc35e-910b-41a2-c9f3-5d4c35ad03de"
#Criando novas colunas
dataframe_facebook['Numero de curtidas'] = numero_curtidas
dataframe_facebook['Numero de descurtidas'] = numero_descurtidas
dataframe_facebook
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="UxKXgz6xq1Rm" outputId="78e31611-5c68-46e0-d8ba-2d454e4713f4"
#Visualizando algumas estatística do DataFrame
dataframe_facebook.describe()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="Z-myptHyrJUm" outputId="92691d51-6966-4188-de5c-1683bed435c1"
grafico = px.line(title = 'Curtidas x descurtidas')
grafico.add_scatter(x = dataframe_facebook['Data'], y = dataframe_facebook['Numero de curtidas'], name = 'Curtidas')
grafico.add_scatter(x = dataframe_facebook['Data'], y = dataframe_facebook['Numero de descurtidas'], name = 'Descurtidas')
grafico.show()
# + id="PS7EOyBMrvFd" colab={"base_uri": "https://localhost:8080/"} outputId="c2a25361-c841-4639-95e4-f7965c4d57f0"
dataframe_facebook['Numero de curtidas'].sum(), dataframe_facebook['Numero de descurtidas'].sum()
# + colab={"base_uri": "https://localhost:8080/"} id="R_eB3sV9lgC6" outputId="9fa98470-e951-4585-ab8c-e28fda7a9ed7"
24 + 4 , (4 / 28) * 100
# + colab={"base_uri": "https://localhost:8080/"} id="jo69_ntwluDh" outputId="1838fd31-feda-444e-ceae-267a02de9a89"
#Criando descurtidas tipo
descurtidas_tipo = graph.get_connections(id=page_id, connection_name = 'insights', metric = 'page_fans_by_unlike_source_unique',
since = '2020-12-01', until = '2021-03-01')
descurtidas_tipo
# + colab={"base_uri": "https://localhost:8080/"} id="0saY1Sqcn13U" outputId="13a06efd-d8d7-4976-b73f-eccfd6ec4aa5"
#Criando um for para percorrer os dados e identificar o motivo das descurtidas
motivos = {}
for i in descurtidas_tipo['data'][0]['values']:
if i['value']:
for k, v in i['value'].items():
if k in motivos:
motivos[k] += v
else:
motivos[k] = v
motivos
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="2WnJLYM_ptpB" outputId="5241528c-6c95-455b-b39d-f74b48a1099e"
#Criando o DataFrame motivos
motivos_df = pd.DataFrame.from_dict(motivos, orient='index')
#Resetando o index
motivos_df.reset_index(inplace=True)
#Renomeando as colunas
motivos_df.rename(columns={'index':'Motivo', 0: 'Quantidade'}, inplace=True)
motivos_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="3EibTvKNqeMo" outputId="25a5300e-72da-407e-d36d-0c959acba218"
#Criando um gráfico motivos
grafico = px.bar(motivos_df, x = 'Motivo', y = 'Quantidade', title = 'Motivos das descurtidas na página')
grafico.show()
# + [markdown] id="gqgGFSqfrq_s"
#
# + [markdown] id="2YNnIaVBru3G"
# # Visualizações, cliques, engajamento e impressões
# + [markdown] id="YOiQEWfxr42T"
# ## Visualizações de abas
# + colab={"base_uri": "https://localhost:8080/"} id="4u9zyFz7sCBr" outputId="b78bbf6b-8c74-4023-f475-30e80b2ffac8"
visualizacao_abas = graph.get_connections(id=page_id, connection_name='insights', metric = 'page_tab_views_login_top_unique',
since = '2020-12-01', until = '2021-03-01')
visualizacao_abas
# + id="QnDmbOJmxp3P"
#Criando um for para percorrer os dados
fonte_visualizacao = {}
for i in visualizacao_abas['data'][0]['values']:
#print(i)
if i['value']:
for k, v in i['value'].items():
#print(k,v)
if k in fonte_visualizacao:
fonte_visualizacao[k] += v
else:
fonte_visualizacao[k] = v
# + colab={"base_uri": "https://localhost:8080/"} id="4GQ5h11jxtz3" outputId="c9b40603-cb00-4355-908e-164e72dca103"
#Visualizando os dados
fonte_visualizacao
# + colab={"base_uri": "https://localhost:8080/", "height": 390} id="fS0SlKDXzXc7" outputId="fae00603-7e8b-4871-c45e-41e55c9e7703"
#Criando um Dataframe
fonte_visualizacao_df = pd.DataFrame.from_dict(fonte_visualizacao, orient='index')
fonte_visualizacao_df.reset_index(inplace=True)
fonte_visualizacao_df.rename(columns={"index": "fonte", 0: "quantidade"}, inplace=True)
fonte_visualizacao_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="NUBty8V5zb7E" outputId="203a98be-8f17-4a9f-fc61-5b9af6e00777"
#Criando um gráfico
grafico = px.bar(fonte_visualizacao_df, x='fonte', y="quantidade", title = 'Visualização por abas')
grafico.show()
# + [markdown] id="NL1e9hzxsCvD"
# ## Cliques nas informações de contato e chamada para ação
# + colab={"base_uri": "https://localhost:8080/"} id="gDI0edE7sE-b" outputId="fdf33e11-2c0e-495d-d465-dff11e819786"
cliques_contato_acao = graph.get_connections(id=page_id, connection_name='insights', metric = 'page_total_actions',
since = '2020-12-01', until = '2021-03-01')
cliques_contato_acao
# + id="hq5IaXTe0cWy"
#Criando um vetor
numero_cliques_contato_acao = []
#Criando um for para percorrer os valores
for i in cliques_contato_acao['data'][0]['values']:
#print(i)
numero_cliques_contato_acao.append(i['value'])
#Tranformando o vetor em um array com Numpy
numero_cliques_contato_acao = np.array(numero_cliques_contato_acao)
# + colab={"base_uri": "https://localhost:8080/"} id="mvEzfAJp0pJ4" outputId="ff4023c5-3bbd-48cd-ed82-aa2111a4e87d"
#Visualizando o array
numero_cliques_contato_acao
# + colab={"base_uri": "https://localhost:8080/"} id="JV8gY2Qu0uqw" outputId="a8fa4dea-3cbb-44ed-a288-11133e82b837"
#vendo o tamanho do array
len(numero_cliques_contato_acao)
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="Hf-0YvMp03OQ" outputId="00f9cde2-88ec-43d9-83f3-f0ae0dcdff7d"
#Adicionando nova coluna ao DataFrame (Número de clique no contato ação)
dataframe_facebook['numero cliques contato acao'] = numero_cliques_contato_acao
dataframe_facebook
# + colab={"base_uri": "https://localhost:8080/"} id="pCuYMUg51HIB" outputId="ef8fc426-6790-419e-a62d-efef8012e326"
#Analisando as estatísticas da nova coluna
dataframe_facebook['numero cliques contato acao'].describe()
# + colab={"base_uri": "https://localhost:8080/"} id="03pXkdqI1k2O" outputId="94bbaf57-4e66-4335-9b44-bddbfd92aa56"
#Média é 0.02
quantidade = 1/0.02
quantidade
#Ou seja, em média 1 clique a cada "quantidade" dias (aproximadamente 50 dias)
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="NfHrDhkd160f" outputId="a7bce0a5-011c-4d13-8854-6177ba5221d4"
#Criando um gráfico
grafico = px.line(dataframe_facebook, x="Data", y="numero cliques contato acao", title = 'Número de cliques em contato e link para ação')
grafico.show()
# + [markdown] id="tV_cavgLsGRr"
# ## Cliques em como chegar
# + colab={"base_uri": "https://localhost:8080/"} id="HK-BnXAQsIl7" outputId="9191d7ac-fd0a-4069-e915-14b1956afc2b"
graph.get_connections(id=page_id, connection_name='insights', metric='page_get_directions_clicks_logged_in_unique')
# + id="5QVhq7802tru"
#Como a página não tem endereço físico, não teremos muitos dados a explorar nessa opção.
#Caso a empresa tenha endereço físico, fazer o mesmo procedimento dos demais.
# + [markdown] id="DBftxtn8sJQa"
# ## Engajamento na página - número de cliques
# + colab={"base_uri": "https://localhost:8080/"} id="FbU6o1EzsONb" outputId="cecd8249-e808-4a3c-edfc-5b3321e9061c"
engajamento = graph.get_connections(id=page_id, connection_name='insights', metric='page_engaged_users',
since = '2020-12-01', until = '2021-03-01')
engajamento
# + id="CS-cTceo2-ZE"
#criando um vetor
engajamento_cliques = []
#Criando um for para percorrer o vetor
for i in engajamento['data'][0]['values']:
engajamento_cliques.append(i['value'])
#Convertendo o vetor em um array com o Numpy
engajamento_cliques = np.array(engajamento_cliques)
# + colab={"base_uri": "https://localhost:8080/"} id="FdOqaFQm3BLL" outputId="6370be76-abc1-486a-e64c-ee73b41525f2"
#visualizando
engajamento_cliques
# + colab={"base_uri": "https://localhost:8080/"} id="L30hoy1A3Em7" outputId="8c017cb1-711a-470e-f0bd-ede293f61430"
#Observando o tamanho do array
len(engajamento_cliques)
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="beJbDe3e3GAE" outputId="144e6f46-c5e9-4870-a6fd-4403ea5efa79"
#adicionando nova coluna ao DataFrame (numero cliques engajamento)
dataframe_facebook['numero cliques engajamento'] = engajamento_cliques
dataframe_facebook
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="wdTtoldB3Hl3" outputId="a4ea15ea-0075-4861-c938-dd0020ea458c"
#Criando um gráfico
grafico = px.line(dataframe_facebook, x="Data", y="numero cliques engajamento", title = 'Engajamento - número de cliques')
grafico.show()
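# + [markdown]
# Esboço adicionado (suposição): como a série diária de engajamento é ruidosa, uma média móvel de 7 dias ajuda a visualizar a tendência, sem alterar o DataFrame original.
# +
#Média móvel de 7 dias do engajamento (apenas ilustrativo)
media_movel_engajamento = dataframe_facebook['numero cliques engajamento'].rolling(7).mean()
grafico = px.line(x = dataframe_facebook['Data'], y = media_movel_engajamento, title = 'Engajamento - média móvel de 7 dias')
grafico.show()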
# + [markdown] id="04QuyLxisOny"
# ## Engajamento por tipo
# + colab={"base_uri": "https://localhost:8080/"} id="Ae49_nmAsQg7" outputId="927ecee8-18a1-4449-c30d-b31bbc995a2b"
engajamento_tipo = graph.get_connections(id=page_id, connection_name='insights', metric='page_consumptions_by_consumption_type',
since = '2020-12-01', until = '2021-03-01')
engajamento_tipo
# + id="P6es32gx4zoA"
#Criando um vetor
fonte_engajamento = {}
#Criando um for para percorrer os dados
for i in engajamento_tipo['data'][0]['values']:
if i['value']:
for k, v in i['value'].items():
if k in fonte_engajamento:
fonte_engajamento[k] += v
else:
fonte_engajamento[k] = v
# + colab={"base_uri": "https://localhost:8080/"} id="s-_xMz0d47nQ" outputId="1a045c24-cd3d-4a06-a66e-2444c5c11ee4"
#Visualizando os dados
fonte_engajamento
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="5wwzagaC4--v" outputId="802ee65d-dfcd-4745-dd86-7816d3a04b21"
#Criando o dataframe com orientação pelo index
fonte_engajamento_df = pd.DataFrame.from_dict(fonte_engajamento, orient='index')
#Resetando o index
fonte_engajamento_df.reset_index(inplace=True)
#Renomeando as colunas
fonte_engajamento_df.rename(columns={"index": "fonte", 0: "quantidade"}, inplace=True)
#Visualizando
fonte_engajamento_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="dd1rQMaC5R7-" outputId="d328e579-4036-454b-fc98-3be6f3317549"
#Criando o gráfico
grafico = px.bar(fonte_engajamento_df, x='fonte', y="quantidade", title = 'Fonte do engajamento')
grafico.show()
# + [markdown] id="Nbz9QsJJsQyb"
# ## Número de check-ins na página
# + colab={"base_uri": "https://localhost:8080/"} id="QP3Tt2kJsUfD" outputId="2f8ab2d3-a836-4802-e9ab-b033d8a2c90c"
graph.get_connections(id=page_id, connection_name='insights', metric='page_places_checkin_total')
# + [markdown] id="TzQ81tDLsU0b"
# ## Ações positivas por tipo
# + colab={"base_uri": "https://localhost:8080/"} id="qz1f3Z2asXfT" outputId="bb2a7064-6f80-4f02-b22d-bb3353ab6773"
acoes_positivas = graph.get_connections(id=page_id, connection_name='insights', metric='page_positive_feedback_by_type',
since = '2020-12-01', until = '2021-03-01')
acoes_positivas
# + id="IUHU5ZY96Pwk"
#Criando um dicionário
fonte_positivas = {}
#Criando um for para percorrer o dicionário
for i in acoes_positivas['data'][0]['values']:
if i['value']:
for k, v in i['value'].items():
if k in fonte_positivas:
fonte_positivas[k] += v
else:
fonte_positivas[k] = v
# + colab={"base_uri": "https://localhost:8080/"} id="Gq0X7buM6WCN" outputId="655b3e77-60cb-40e6-d51e-964eda740675"
#Visualizando
fonte_positivas
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="TM8Fx5ca6dY0" outputId="fbfe8b7b-d0cf-4201-bb46-702b0bd836de"
#Criando o Dataframe
fonte_positivas_df = pd.DataFrame.from_dict(fonte_positivas, orient='index')
fonte_positivas_df.reset_index(inplace=True)
fonte_positivas_df.rename(columns={"index": "fonte", 0: "quantidade"}, inplace=True)
fonte_positivas_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="jDIw7bTd6h9d" outputId="1f0d20ae-c361-47d6-de03-23b16ec04395"
#Criando o gráfico
grafico = px.bar(fonte_positivas_df, x='fonte', y="quantidade", title = 'Fonte de ações positivas')
grafico.show()
# + [markdown] id="dWt3gnO2sX96"
# ## Ações negativas por tipo
# + colab={"base_uri": "https://localhost:8080/"} id="5K5dOG9Qsafb" outputId="b57bfce6-21cf-4fed-d583-0afe7301c672"
acoes_negativas = graph.get_connections(id=page_id, connection_name='insights', metric='page_negative_feedback_by_type',
since = '2020-12-01', until = '2021-03-01')
acoes_negativas
# + id="zjl_gd_Q6s2E"
#Criando um dicionário
fonte_negativas = {}
#Criando um for para percorrer o dicionário
for i in acoes_negativas['data'][0]['values']:
if i['value']:
for k, v in i['value'].items():
if k in fonte_negativas:
fonte_negativas[k] += v
else:
fonte_negativas[k] = v
# + colab={"base_uri": "https://localhost:8080/"} id="RAE2bfZz6xzM" outputId="5a3b9d9a-3b95-42df-b0c8-8f75db5f8480"
#Visualizando
fonte_negativas
# + [markdown] id="DXeGsFHdss8Z"
# ## Visualizações pela hora do dia
# + colab={"base_uri": "https://localhost:8080/"} id="bQmQ6LSXswBK" outputId="a4f8e706-5b4a-4053-d365-5cf4a9082db4"
hora_dia = graph.get_connections(id=page_id, connection_name='insights', metric='page_fans_online',
since = '2020-12-01', until = '2021-03-01')
hora_dia
# + id="bmf753lw7D2C"
#Criando um dicionário
visualizacao_hora = {}
#criando um for para percorrer o dicionário
for i in hora_dia['data'][0]['values']:
if i['value']:
for k, v in i['value'].items():
if k in visualizacao_hora:
visualizacao_hora[k] += v
else:
visualizacao_hora[k] = v
# + colab={"base_uri": "https://localhost:8080/"} id="rle7sBqo7KKi" outputId="6fc90079-c6f7-4894-d685-93c370395fba"
#Visualizando
visualizacao_hora
# + colab={"base_uri": "https://localhost:8080/", "height": 793} id="jqaAQb-x7OIS" outputId="2cd5d639-139a-4808-e006-34cecf0b6dee"
#criando o DataFrame
hora_dia_df = pd.DataFrame.from_dict(visualizacao_hora, orient='index')
hora_dia_df.reset_index(inplace=True)
hora_dia_df.rename(columns={"index": "hora", 0: "quantidade"}, inplace=True)
hora_dia_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="kBUYIuBp7TWa" outputId="9af090a3-b4a9-488a-9225-348af43999ad"
#Criando o gráfico
grafico = px.bar(hora_dia_df, x='hora', y="quantidade", title = 'Visualizações por hora do dia')
grafico.show()
# + [markdown] id="7SYrOJHesxMd"
# ## Curtidas pagas e não pagas
# + colab={"base_uri": "https://localhost:8080/"} id="ZU4cuSVSsz5K" outputId="7ecb2779-8773-4447-b4db-f961ba881ec5"
curtidas_pagas_nao_pagas = graph.get_connections(id=page_id, connection_name='insights',
metric='page_fan_adds_by_paid_non_paid_unique',
since = '2020-12-01', until = '2021-03-01')
curtidas_pagas_nao_pagas
# + id="t70KwM5X9-nM"
#Criando vetores
curtidas_pagas = []
curtidas_nao_pagas = []
#Criando um for para percorrer os dados
for i in curtidas_pagas_nao_pagas['data'][0]['values']:
#print(i, i['value']['paid'], i['value']['unpaid'])
curtidas_pagas.append(i['value']['paid'])
curtidas_nao_pagas.append(i['value']['unpaid'])
#Convertendo um vetor para array com Numpy
curtidas_pagas = np.array(curtidas_pagas)
curtidas_nao_pagas = np.array(curtidas_nao_pagas)
# + colab={"base_uri": "https://localhost:8080/"} id="9CsAGtOR-ApE" outputId="6c023105-9e2a-449d-f640-f0ea65cecffd"
#Visualizando as curtidas pagas
curtidas_pagas
# + colab={"base_uri": "https://localhost:8080/"} id="P93UvsNU-B5l" outputId="25c51a3d-14bc-4645-ffcc-7e277d7d9b02"
#Visualizando as curtidas não pagas
curtidas_nao_pagas
# + colab={"base_uri": "https://localhost:8080/", "height": 660} id="9Ptlb4Wf-Dms" outputId="247e2e90-ce5b-47c5-9668-4c94d8cd0408"
#Criando um data frame e adicionando colunas
dataframe_facebook['curtidas pagas'] = curtidas_pagas
dataframe_facebook['curtidas nao pagas'] = curtidas_nao_pagas
dataframe_facebook
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="8yInp9tc-Fy0" outputId="3e6565b4-b3b0-402d-8499-d5c82ac7bfa7"
#Criando um gráfico
grafico = px.line(title = 'Curtidas pagas x curtidas não pagas')
grafico.add_scatter(x = dataframe_facebook['Data'], y = dataframe_facebook['curtidas pagas'], name = 'Curtidas pagas')
grafico.add_scatter(x = dataframe_facebook['Data'], y = dataframe_facebook['curtidas nao pagas'], name = 'Curtidas não pagas')
grafico.show()
# + [markdown] id="ftZwWIcbs0JJ"
# ## Distribuição de conteúdo pago, orgânico e viral
# + id="X_VZPsbBs5by" colab={"base_uri": "https://localhost:8080/"} outputId="8765d663-38a4-41c0-ff0a-c64002ae3aae"
distribuicao_paga = graph.get_connections(id=page_id, connection_name='insights', metric='page_impressions_paid',
since = '2020-12-01', until = '2021-03-01')
distribuicao_paga
# + id="8p8EQLjX-M1U"
#Criando um vetor
numero_distribuicao_paga = []
#Criando um for para percorrer os dados
for i in distribuicao_paga['data'][0]['values']:
numero_distribuicao_paga.append(i['value'])
#Convertendo um vetor em array
numero_distribuicao_paga = np.array(numero_distribuicao_paga)
# + id="DxOG0WJL-QYc" colab={"base_uri": "https://localhost:8080/"} outputId="e31d3060-9a5d-4325-8450-03ccf6a840b2"
numero_distribuicao_paga
# + colab={"base_uri": "https://localhost:8080/"} id="qAzmy0SsdIg9" outputId="1369d302-398c-448a-f8ef-12ccb5c79f28"
len(numero_distribuicao_paga)
# + id="if2Bm0jC-SwM" colab={"base_uri": "https://localhost:8080/"} outputId="b7be9de6-f020-448c-8038-6333e909993e"
distribuicao_organica = graph.get_connections(id=page_id, connection_name='insights',
metric='page_impressions_organic',
since = '2020-12-01', until = '2021-03-01')
distribuicao_organica
# + id="6r2CXEvl-VbX"
numero_distribuicao_organica = []
for i in distribuicao_organica['data'][0]['values']:
numero_distribuicao_organica.append(i['value'])
numero_distribuicao_organica = np.array(numero_distribuicao_organica)
# + id="YTYKUtIN-Wzc" colab={"base_uri": "https://localhost:8080/"} outputId="f4b36a5d-de46-4ca4-83f2-36cde6971033"
numero_distribuicao_organica
# + colab={"base_uri": "https://localhost:8080/"} id="8NRBrcwYdPBs" outputId="89cbe09d-da43-403e-be21-161d172bfb8c"
len(numero_distribuicao_organica)
# + id="MVYD_VHF-Ylr" colab={"base_uri": "https://localhost:8080/"} outputId="3f53793a-49c3-453d-e63e-7ef8cbfdbf8d"
distribuicao_viral = graph.get_connections(id=page_id, connection_name='insights',
metric='page_impressions_viral',
since = '2020-12-01', until = '2021-03-01')
distribuicao_viral
# + id="uXoM2HGl-brk"
numero_distribuicao_viral = []
for i in distribuicao_viral['data'][0]['values']:
numero_distribuicao_viral.append(i['value'])
numero_distribuicao_viral = np.array(numero_distribuicao_viral)
# + id="96-FjLiY-c2k" colab={"base_uri": "https://localhost:8080/"} outputId="ae3cd5b5-eea8-43b9-d6cb-f65c4290c8b6"
numero_distribuicao_viral
# + colab={"base_uri": "https://localhost:8080/"} id="BaPe2zCmdUiM" outputId="a3d21511-fd7a-43ee-d993-e485f478cb03"
len(numero_distribuicao_viral)
# + id="v4ts6Ed7-eMr" colab={"base_uri": "https://localhost:8080/", "height": 640} outputId="e806be02-88b8-4472-88f0-cfbcc8c44bdd"
dataframe_facebook['numero distribuicao paga'] = numero_distribuicao_paga
dataframe_facebook['numero distribuicao organica'] = numero_distribuicao_organica
dataframe_facebook['numero distribuicao viral'] = numero_distribuicao_viral
dataframe_facebook
# + id="t6fq9Uuw-gHT" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="86194408-613e-4308-a6d1-85bed322b92d"
grafico = px.line(title = 'Distribuição paga x orgânica x viral')
grafico.add_scatter(x = dataframe_facebook['Data'], y = dataframe_facebook['numero distribuicao paga'], name = 'Paga')
grafico.add_scatter(x = dataframe_facebook['Data'], y = dataframe_facebook['numero distribuicao organica'], name = 'Orgânica')
grafico.add_scatter(x = dataframe_facebook['Data'], y = dataframe_facebook['numero distribuicao viral'], name = 'Viral')
grafico.show()
# + id="AF5DaDDl-h1L" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="5f26fb61-0be8-4dee-a56a-25217f9a5df4"
grafico = px.line(title = 'Distribuição orgânica x viral')
grafico.add_scatter(x = dataframe_facebook['Data'], y = dataframe_facebook['numero distribuicao organica'], name = 'Orgânica')
grafico.add_scatter(x = dataframe_facebook['Data'], y = dataframe_facebook['numero distribuicao viral'], name = 'Viral')
grafico.show()
# + [markdown] id="Jci6Yv-Pe0zy"
# Caso seus números sejam altos ou estejam em forma não normalizada, é indicado fazer o próximo passo, em que os números serão normalizados e todas as séries começarão em 1. No caso abaixo os números não são relevantes e, por isso, essa etapa não trará muito retorno.
# + id="reWYiEuB-k90" colab={"base_uri": "https://localhost:8080/"} outputId="4595d55c-8fd8-4622-a8ae-f1b03c77a43c"
#Visualizando todas as colunas do DataFrame
dataframe_facebook.columns
# + id="jFSQxsCi-mrE" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="39bef269-b496-4979-d419-1278af97923f"
#Imprimindo somente as colunas selecionadas pelo index
#(1) - Data | (8) - numero distribuicao paga | (9) - numero distribuicao organica | (10) - numero distribuição viral|
dataframe_distribuicao = dataframe_facebook.iloc[:, [1,8,9,10]]
dataframe_distribuicao
# + id="zkySe8Eq-o2b"
dataframe_distribuicao_normalizado = dataframe_distribuicao.copy()
for i in dataframe_distribuicao_normalizado.columns[1:]:
#print(i)
dataframe_distribuicao_normalizado[i] = dataframe_distribuicao_normalizado[i] / dataframe_distribuicao_normalizado[i][0]
# + id="51DhVEYw-qcj"
30 / 7, 98 / 7
# + id="AhJODfDU-sPT" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="74e5e156-d3a5-4604-b03e-7441c64772e2"
dataframe_distribuicao_normalizado
# + id="DwT9Cx4B-t5z" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="db83eca8-7cdc-48d0-e7a6-c4b6f3055f31"
grafico = px.line(title = 'Distribuição paga x orgânica x viral - normalizado')
grafico.add_scatter(x = dataframe_distribuicao_normalizado['Data'], y = dataframe_distribuicao_normalizado['numero distribuicao paga'], name = 'Paga')
grafico.add_scatter(x = dataframe_distribuicao_normalizado['Data'], y = dataframe_distribuicao_normalizado['numero distribuicao organica'], name = 'Orgânica')
grafico.add_scatter(x = dataframe_distribuicao_normalizado['Data'], y = dataframe_distribuicao_normalizado['numero distribuicao viral'], name = 'Viral')
grafico.show()
# + [markdown] id="39nEmhROs6CJ"
# ## Séries temporais para previsões
# + id="eaSl8ts-s92h" colab={"base_uri": "https://localhost:8080/"} outputId="ceb1e101-551c-488a-df5c-0b9fb736772b"
#Instalando nova biblioteca - Um dos algoritmos mais utilizado para séries temporais
# !pip install pmdarima
# + id="l37APYTGfp2o"
#pacote para visualizar séries temporais
from statsmodels.tsa.seasonal import seasonal_decompose
from pmdarima.arima import auto_arima
# + colab={"base_uri": "https://localhost:8080/", "height": 640} id="eNNzqS7JfrrP" outputId="687d0910-c50d-49ac-d732-3fd0b91fc402"
#Visualizando o Dataframe
dataframe_facebook
# + id="5LKAUR06ftuA"
#Salvando o dataframe em um arquivo CSV
dataframe_facebook.to_csv('dataframe_facebook.csv')
# + colab={"base_uri": "https://localhost:8080/", "height": 671} id="wihE7YURfvPP" outputId="a1003639-5112-447c-ead4-f724e2d306d9"
#Importando nova biblioteca
import datetime
dateparse = lambda dates: datetime.datetime.strptime(dates, '%Y-%m-%d')
dataframe_facebook = pd.read_csv('/content/dataframe_facebook.csv', parse_dates=['Data'], index_col = 'Data', date_parser = dateparse)
dataframe_facebook
# + [markdown] id="e5DaSEyVg1JV"
# #### Número de fãs
# + colab={"base_uri": "https://localhost:8080/", "height": 450} id="3YPD0bghg1wX" outputId="9217e51b-b12a-4345-939d-f901ca2d086c"
#Criando a variável ts_fas (série temporal do número de fãs)
ts_fas = dataframe_facebook.iloc[:, [1]]
ts_fas
# + id="2VxzHoFjg5n1"
#Decomposição da série temporal
decomposicao = seasonal_decompose(ts_fas)
tendencia = decomposicao.trend
sazonal = decomposicao.seasonal
residuo = decomposicao.resid
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="a9zbt21dg64V" outputId="9e2cfd5e-f03d-468a-c2e7-88d03ac56740"
#Criando o gráfico
grafico = px.line(ts_fas, y = 'Números de fãns', title = 'Número de fãs por dia')
grafico.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="JVoViro6g97E" outputId="e275b33d-494f-4c57-8913-5c609efbb99d"
#Criando o gráfico Tendência
grafico = px.line(tendencia, title = 'Tendência')
grafico.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="BkGC8N7zg_9U" outputId="316deb3a-2788-4830-d1e6-1c4475ba8b93"
#Criando o gráfico Sazonalidade
grafico = px.line(sazonal, title = 'Sazonalidade')
grafico.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="lEkneEaFhCJk" outputId="1a6ab68d-305f-47cd-fcb5-bf0186819c58"
#Criando o gráfico Resíduos
grafico = px.line(residuo, title = 'Resíduos')
grafico.show()
# + id="Pq2qPThohDz1"
#Criando um objeto
arima = auto_arima(ts_fas)
# + colab={"base_uri": "https://localhost:8080/"} id="9ve6EfjFhFUE" outputId="37aacd76-db51-4f3a-88aa-e6dafb7be16a"
# Parâmetros P, Q e D
arima.order
# + colab={"base_uri": "https://localhost:8080/", "height": 979} id="qAFlcIpKhGhM" outputId="6addcf68-574e-4131-ea98-3ac881f66047"
#Criando previsões
previsoes = pd.DataFrame(arima.predict(n_periods=30), columns=['previsao'])
#Transformando um número float para int
previsoes['previsao'] = np.round(previsoes['previsao']).astype(int)
previsoes
# + colab={"base_uri": "https://localhost:8080/"} id="LriohLbZhIvr" outputId="2a9681b8-4cbb-4cad-ae8a-96cabb670e1e"
#Criando um Dataframe de datas
dias = pd.date_range(start = '2021-03-02', end = '2021-03-31')
dias
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="CUeHFvm6hKT1" outputId="cb424d21-6978-412f-e270-9a30ef265e7a"
#Visualizando o DataFrame
previsoes['Data'] = dias
previsoes.set_index('Data', inplace=True)
previsoes
# + colab={"base_uri": "https://localhost:8080/"} id="c-RXK_LRhNCb" outputId="7e8f4c99-91c1-4f48-d70c-ea36bbdd7374"
fas_teste = graph.get_connections(id=page_id, connection_name='insights', metric = 'page_fans',
since = '2021-03-02', until = '2021-03-07')
fas_teste
# + colab={"base_uri": "https://localhost:8080/"} id="3KY_Gn2BhO0U" outputId="78a46e61-028a-4d69-a7b1-e075128993e9"
fas_teste['data'][0]['values']
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="QkXckzFrhQO7" outputId="fc3e8783-bb5d-47b5-d656-92ebf664277f"
fas_teste_df = pd.DataFrame(fas_teste['data'][0]['values'])
fas_teste_df.rename(columns={'value': 'numero fas', 'end_time': 'data'}, inplace=True)
fas_teste_df['data'] = pd.to_datetime(fas_teste_df['data'])
fas_teste_df['data'] = fas_teste_df['data'].dt.strftime('%Y-%m-%d')
fas_teste_df.set_index('data', inplace=True)
fas_teste_df
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="Jv9SteTrhSiL" outputId="05943dc4-c643-4915-ccd3-44d33ada56de"
previsoes['2021-03-02':'2021-03-07']
# + colab={"base_uri": "https://localhost:8080/"} id="KAbbv9CxhUd2" outputId="67eeb142-66a3-4851-c3ba-354cc8b4b923"
from sklearn.metrics import mean_absolute_error
mean_absolute_error(fas_teste_df['numero fas'], previsoes['2021-03-03':'2021-03-06']['previsao'])
# + colab={"base_uri": "https://localhost:8080/"} id="pDFPtG5MhWNr" outputId="a448dbb3-fdbe-44aa-9a5e-531fef538260"
abs(previsoes['2021-03-03':'2021-03-06']['previsao'] - fas_teste_df['numero fas']) / len(fas_teste_df)
# + colab={"base_uri": "https://localhost:8080/"} id="X0Wrrzczn-WG" outputId="e6df7288-a950-42d9-e867-72ea44b6b4ce"
sum(abs(previsoes['2021-03-03':'2021-03-06']['previsao'] - fas_teste_df['numero fas']) / len(fas_teste_df))
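# + [markdown]
# Esboço adicionado (suposição): além do erro absoluto médio, o erro percentual absoluto médio (MAPE) dá uma leitura relativa da qualidade da previsão do ARIMA no período de teste.
# +
#Erro percentual absoluto médio (MAPE) da previsão ARIMA no período de teste
reais = fas_teste_df['numero fas'].values
previstos = previsoes['2021-03-03':'2021-03-06']['previsao'].values
mape = np.mean(np.abs(reais - previstos) / reais) * 100
print(f'MAPE: {mape:.2f}%')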
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="cgg0En_lhXcD" outputId="8b1a7301-a4b2-4b9f-f0f1-7dd0b833ed14"
grafico = px.line(title = 'Previsões - ARIMA')
grafico.add_scatter(x = ts_fas.index, y = ts_fas['Números de fãns'], name = 'Real')
grafico.add_scatter(x = previsoes.index, y = previsoes['previsao'], name = 'Previsões')
grafico.add_scatter(x = fas_teste_df.index, y = fas_teste_df['numero fas'], name = 'Teste')
grafico.show()
# + [markdown] id="LvP6IwvloJun"
# #### Engajamento
# + colab={"base_uri": "https://localhost:8080/"} id="ZcYW-YxkoKc3" outputId="2b7024bc-27f6-4f6d-be1f-fc03550c9a46"
# !pip install fbprophet
# + id="Qra58sEopr0S"
from fbprophet import Prophet
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="gtNRFzeJoOGQ" outputId="d68aaca1-a54b-45ad-f8cf-504ce588e3eb"
ts_engajamento = dataframe_facebook.iloc[:, [5]]
ts_engajamento.reset_index(inplace=True)
ts_engajamento = ts_engajamento[['Data', 'numero cliques engajamento']].rename(columns = {'Data': 'ds', 'numero cliques engajamento': 'y'})
ts_engajamento
# + colab={"base_uri": "https://localhost:8080/"} id="bvLTPWiloQqv" outputId="0e5d1117-604b-4490-a525-ad759a48f5e5"
prophet = Prophet()
prophet.fit(ts_engajamento)
# + id="LanF8FYmoR-X"
futuro = prophet.make_future_dataframe(periods=30)
previsoes = prophet.predict(futuro)
# + colab={"base_uri": "https://localhost:8080/", "height": 309} id="RnEdsyZOoTn3" outputId="c3f9e312-353f-4789-9add-0ee275c04478"
previsoes.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Z4wvrQmUoU60" outputId="b7ebd3fc-73df-44d0-fa3f-f0abdf10d25c"
previsoes.tail(30)
# + colab={"base_uri": "https://localhost:8080/"} id="TcN9yROuoXDs" outputId="f125d77b-81f2-4552-8dab-4be4c5de48e8"
len(ts_engajamento), len(previsoes), len(previsoes) - len(ts_engajamento)
# + id="REHxZjMuoYkf"
from fbprophet.plot import plot_plotly, plot_components_plotly
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="wJrYvoInob0M" outputId="f8e0d988-01b4-45e3-aea4-bd61b32db7a0"
plot_plotly(prophet, previsoes)
# + colab={"base_uri": "https://localhost:8080/", "height": 417} id="DWWWuF0podRc" outputId="3dc9f7ce-5b12-49bf-bb1d-5e0cff5eeab8"
plot_components_plotly(prophet, previsoes)
# + colab={"base_uri": "https://localhost:8080/"} id="rNlfF3tcoe7U" outputId="3be78db6-f7b5-4b4a-9189-625063f6854c"
engajamento_teste = graph.get_connections(id=page_id, connection_name='insights', metric = 'page_engaged_users',
since = '2021-03-02', until = '2021-03-07')
engajamento_teste
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="SuWHRrhbogxc" outputId="a9871410-0fc9-4821-da07-67e6d183fd54"
engajamento_teste_df = pd.DataFrame(engajamento_teste['data'][0]['values'])
engajamento_teste_df.rename(columns={"value": "engajamento", "end_time": "data"}, inplace=True)
engajamento_teste_df['data'] = pd.to_datetime(engajamento_teste_df['data'])
engajamento_teste_df['data'] = engajamento_teste_df['data'].dt.strftime('%Y-%m-%d')
engajamento_teste_df
# + colab={"base_uri": "https://localhost:8080/"} id="DkY46HGvoiK0" outputId="57470f9b-6a63-4478-a00d-d34c4441367e"
previsoes[92:96]['yhat']
# + colab={"base_uri": "https://localhost:8080/"} id="8KxSWuqjojzE" outputId="6132c504-fdb9-4adb-fe03-4efd0b5f76b2"
from sklearn.metrics import mean_absolute_error
mean_absolute_error(engajamento_teste_df['engajamento'], previsoes[92:96]['yhat'])
# + [markdown] id="mT09P9wBs-Io"
# # Análise dos posts
# + [markdown] id="URJVjFWAs743"
# ## Reações nas publicações
# + colab={"base_uri": "https://localhost:8080/"} id="EaxBS5PSs_Fi" outputId="a58717f5-b412-4a70-b6f9-20a9b069b14e"
reacoes_posts = graph.get_connections(id=page_id, connection_name='insights', metric = 'page_actions_post_reactions_total',
since = '2020-12-01', until = '2021-03-01')
reacoes_posts
# + colab={"base_uri": "https://localhost:8080/"} id="qhNpktt0tzjh" outputId="27d8ee8d-8a13-4027-f9b7-805875819b42"
reacoes_posts['data'][1]['values']
# + id="e1qLcoP-t3jg"
fonte_reacao = {}
for i in reacoes_posts['data'][1]['values']:
if i['value']:
for k, v in i['value'].items():
if k in fonte_reacao:
fonte_reacao[k] += v
else:
fonte_reacao[k] = v
# + colab={"base_uri": "https://localhost:8080/"} id="mEe9S6htt4zQ" outputId="94232cb1-97dc-451a-8147-ef8ab4c7dd43"
fonte_reacao
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="WmI3BIhSt6Q4" outputId="07a7b1b4-4c68-4075-bb15-38009820e309"
fonte_reacao_df = pd.DataFrame.from_dict(fonte_reacao, orient='index')
fonte_reacao_df.reset_index(inplace=True)
fonte_reacao_df.rename(columns={"index": "fonte", 0: "quantidade"}, inplace=True)
fonte_reacao_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="JhefQMeat8Do" outputId="86edf469-6dea-4451-9255-0cbd02cd4735"
grafico = px.bar(fonte_reacao_df, x='fonte', y="quantidade", title = 'Tipos de reações nos posts')
grafico.show()
# + [markdown] id="1Fa3sNjVs_ei"
# ## Reprodução de vídeos por 3 e 30 segundos
# + colab={"base_uri": "https://localhost:8080/"} id="YM7sYSYIvNAm" outputId="2752a35c-e842-415f-f3d0-e1f10519caf4"
video_3s = graph.get_connections(id=page_id, connection_name='insights', metric = 'page_video_views',
since = '2020-12-01', until = '2021-03-01')
video_3s
# + id="IRTYKFy-tFPK"
numero_video_3s = []
for i in video_3s['data'][0]['values']:
numero_video_3s.append(i['value'])
numero_video_3s = np.array(numero_video_3s)
# + colab={"base_uri": "https://localhost:8080/"} id="ddODNxigvSHu" outputId="483e96ab-3da0-4a2a-90ef-77eca27fe251"
numero_video_3s
# + colab={"base_uri": "https://localhost:8080/"} id="xt1gbVzsvUgQ" outputId="a98090b4-6ebe-4c27-ff25-b56c6c5ff5e1"
video_30s = graph.get_connections(id=page_id, connection_name='insights', metric='page_video_complete_views_30s',
since = '2020-12-01', until = '2021-03-01')
video_30s
# + id="YGWoLol_vZMF"
numero_video_30s = []
for i in video_30s['data'][0]['values']:
numero_video_30s.append(i['value'])
numero_video_30s = np.array(numero_video_30s)
# + colab={"base_uri": "https://localhost:8080/"} id="mLeAPQ7yvaRu" outputId="0cfe7d8c-ba82-4d84-b5dc-62a0603580d3"
numero_video_30s
# + colab={"base_uri": "https://localhost:8080/", "height": 671} id="H1jp6tfivcK8" outputId="2a18d289-f911-4d1f-fab4-11e9dcbed785"
dataframe_facebook['video 3s'] = numero_video_3s
dataframe_facebook['video 30s'] = numero_video_30s
dataframe_facebook
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="-P3fNUhvveRt" outputId="366fd3d2-2564-49ef-cf9c-590f4ee841f7"
grafico = px.line(title = '3 segundos x 30 segundos')
grafico.add_scatter(x = dataframe_facebook.index, y = dataframe_facebook['video 3s'], name = '3 segundos')
grafico.add_scatter(x = dataframe_facebook.index, y = dataframe_facebook['video 30s'], name = '30 segundos')
grafico.show()
# + id="F9Ig5Xoovgal"
dataframe_facebook.to_csv('dataframe_facebook.csv')
# + colab={"base_uri": "https://localhost:8080/"} id="orRREjMavitt" outputId="949def8f-727e-458f-e727-3ad5ec9923ec"
graph.get_connections(id=page_id, connection_name='insights', metric = 'post_video_views_organic')
# + [markdown] id="w-cqRpmatHh5"
# ## Stories by type
# + colab={"base_uri": "https://localhost:8080/"} id="RPD4Zg1StJr3" outputId="3e12c243-b501-42cb-81e9-0cf58af9aeb6"
stories_tipo = graph.get_connections(id=page_id, connection_name='insights',
metric = 'page_content_activity_by_action_type_unique',
since = '2020-12-01', until = '2021-03-01')
stories_tipo
# + id="8gFhh8M2v7oF"
fonte_stories = {}
for i in stories_tipo['data'][0]['values']:
if i['value']:
for k, v in i['value'].items():
if k in fonte_stories:
fonte_stories[k] += v
else:
fonte_stories[k] = v
# + colab={"base_uri": "https://localhost:8080/"} id="d53GItkhv9nk" outputId="0ce2cb9f-4d22-4dda-e249-56f102419202"
fonte_stories
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="4Rkwno7dwAt8" outputId="77186880-6f08-43c1-90e6-14a4ef6cdf05"
fonte_stories_df = pd.DataFrame.from_dict(fonte_stories, orient='index')
fonte_stories_df.reset_index(inplace=True)
fonte_stories_df.rename(columns={"index": "fonte", 0: "quantidade"}, inplace=True)
fonte_stories_df
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="Xc85Zl5gwDdl" outputId="736cdf24-cbfc-41ea-d1ae-aab06e7c6a7c"
grafico = px.bar(fonte_stories_df, x='fonte', y="quantidade", title = 'Stories')
grafico.show()
# + [markdown] id="m13mqi-8tKJx"
# ## Extracting the text of the posts
# + colab={"base_uri": "https://localhost:8080/"} id="yaKiOmEwwS-7" outputId="84dc0b3a-7b96-40da-fe01-1ff7b304d576"
graph.get_connections(id=page_id, connection_name='posts')
# + colab={"base_uri": "https://localhost:8080/"} id="iJPabD5-tMCp" outputId="924d0c0d-9332-4438-9931-1482340ba40c"
todos_posts = graph.get_all_connections(id=page_id, connection_name='posts',
since = '2012-12-01', until = '2021-03-01',
fields='message,created_time,id,permalink_url')
todos_posts
# + id="5YYnsr9OwbkT"
lista_posts = []
for post in todos_posts:
#print(post)
lista_posts.append(post)
# + colab={"base_uri": "https://localhost:8080/", "height": 759} id="7Sur6x6awdYz" outputId="9806dc36-83db-4394-9853-ba1edd1ca761"
dataframe_posts = pd.DataFrame(lista_posts)
dataframe_posts
# + colab={"base_uri": "https://localhost:8080/", "height": 425} id="SL8D3UiTwjk0" outputId="2534a7af-57fe-4482-d38b-04cbc0e764f1"
dataframe_posts['likes'] = None
dataframe_posts.head()
# + id="9tQF-u-Lwo8T"
for index, post in dataframe_posts.iterrows():
likes = graph.get_connections(id=post['id'], connection_name='likes', summary = 'total_count')
    dataframe_posts.loc[index, 'likes'] = likes['summary']['total_count']  # .loc avoids pandas chained-assignment issues
# + colab={"base_uri": "https://localhost:8080/", "height": 425} id="21L6mnfGwrsr" outputId="c2bfe4d0-48f0-4ff1-d02e-de728a9e4703"
dataframe_posts.head()
# + id="orowKDGswwrr"
pd.options.display.max_colwidth = 100
# + colab={"base_uri": "https://localhost:8080/", "height": 745} id="hE5NQO3owzET" outputId="c24942ca-b890-43e1-c08b-94a894bfa820"
dataframe_posts.sort_values(by = ['likes'], ascending=False, inplace = True)
dataframe_posts
| 52,871 |
/0f0基础1-5网课作业.ipynb
|
1b7e1c3c2b048535744bc9c2eaa7d4d9c5f02fdc
|
[] |
no_license
|
Big-big-orange/OFO_Assignment
|
https://github.com/Big-big-orange/OFO_Assignment
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,230,901 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# %cd ..
import torch
import torch.distributions as dist
import matplotlib.pyplot as plt
from rec.utils import kl_estimate_with_mc, plot_running_sum_1d, plot_1d_distribution
from tqdm.notebook import trange
import seaborn as sns; sns.set(); sns.set_style('whitegrid')
import math
# %matplotlib inline
# %config InlineBackend.figure_format='retina'
# +
torch.manual_seed(100)
q_loc = 15.
q_var = 0.25
p_loc = 0.
p_var = 1.
q = dist.normal.Normal(loc=q_loc, scale = math.pow(q_var, 0.5))
p = dist.normal.Normal(loc=p_loc, scale = math.pow(p_var, 0.5))
z_sample = q.sample()
total_kl = dist.kl_divergence(q, p)
print(f"The KL between q and p is {total_kl:.3f} nats.")
# ==============================
# PLOT DISTRIBUTIONS
# ==============================
xs = torch.linspace(-2., q.loc+5., 500)
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(xs, torch.exp(q.log_prob(xs)), label='Target')
ax.plot(xs, torch.exp(p.log_prob(xs)), label='Coding')
ax.axvline(z_sample, c='k', ls='--', label='Target Sample')
ax.legend()
plt.show()
# -
# ## Using Empirical Samples
# The aim is to encode a sample from $q$ using coding distribution $p$. To make things easier we approximate $q$ with a convenient form. Instead of using a factored Gaussian variational posterior, we can use an empirical posterior by sampling $z_1, ..., z_D \sim q(z \mid \mathcal{D})$. Then we can set our approximate distribution to
# $$
# \begin{aligned}
# \hat{q}(z) = \frac{1}{D} \sum_{i=1}^{D} \delta_{z_i} (z).
# \end{aligned}
# $$
#
# For the REC scheme we need to compute,
#
# $$
# \begin{aligned}
# \hat{q}(a_k \mid a_{1:k-1})
# &= \int \hat{q}(a_k \mid a_{1:k-1}, z) \hat{q}(z\mid a_{1:k-1}) \text{ d}z \\
# &= \int p(a_k \mid a_{1:k-1}, z) \hat{q}(z\mid a_{1:k-1}) \text{ d}z
# \end{aligned}
# $$
#
# The final equality is due to our target distribution satisfying the condition that the KL between $q$ and $p$ is the same whether measured on $z$ or on the auxiliary variables, namely that both
# $$
# \hat{q}\left(a_{k} \mid a_{1: k-1}, z\right)=p\left(a_{k} \mid a_{1: k-1}, z\right) \quad \text { and } \quad \hat{q}\left(a_{1: k} \mid z\right)=p\left(a_{1: k} \mid z\right)
# $$
#
# To compute the posterior above, note that both
# $$
# \begin{aligned}
# \hat{q}(z \mid a_{1:k-1})
# &= \frac{\hat{q}(a_{1:k-1} \mid z)\hat{q}(z)}{\hat{q}(a_{1:k-1})}
# \end{aligned}
# $$
#
#
# which follows from Bayes' rule, and
# $$
# \begin{aligned}
# \hat{q}(a_{1:k-1})
# &= \int \hat{q}(a_{1:k-1} \mid z) \hat{q}(z) \text{ d}z \\
# &= \int p(a_{1:k-1} \mid z) \hat{q}(z) \text{ d}z
# \end{aligned}
# $$
# which again follows from assurances on the KL.
#
# Substituting the form of $\hat{q}$ defined earlier we find that,
# $$
# \begin{aligned}
# \hat{q}(a_{1:k-1})
# &= \int p(a_{1:k-1} \mid z) \frac{1}{D} \sum_{i=1}^{D} \delta_{z_i} (z) \text{ d}z \\
# &= \frac{1}{D} \sum_{i=1}^{D} p(a_{1:k-1} \mid z_i).
# \end{aligned}
# $$
#
# Therefore, substituting this back into the posterior on $z$,
# $$
# \begin{aligned}
# \hat{q}(z \mid a_{1:k-1})
# &= \frac{\hat{q}(a_{1:k-1} \mid z)\hat{q}(z)}{\hat{q}(a_{1:k-1})} \\
# &= \frac{p(a_{1:k-1} \mid z)\hat{q}(z)}{\hat{q}(a_{1:k-1})} \\
# & = \frac{p(a_{1:k-1} \mid z)\frac{1}{D} \sum_{i=1}^{D} \delta_{z_i} (z)}{\frac{1}{D} \sum_{i=1}^{D} p(a_{1:k-1} \mid z_i)} \\
# &= \sum_{i=1}^{D} \delta_{z_i}(z) \frac{p(a_{1:k-1} \mid z_i)}{\sum_{j=1}^{D} p(a_{1:k-1} \mid z_j)}.
# \end{aligned}
# $$
#
# Altogether, we can substitute back into the integrals defined earlier to find the posterior on the auxiliary variable $a_k$ as,
#
# $$
# \begin{aligned}
# \hat{q}(a_k \mid a_{1:k-1})
# &= \int p(a_k \mid a_{1:k-1}, z) \hat{q}(z\mid a_{1:k-1}) \text{ d}z \\
# &= \int p(a_k \mid a_{1:k-1}, z) \sum_{i=1}^{D} \delta_{z_i}(z) \frac{p(a_{1:k-1} \mid z_i)}{\sum_{j=1}^{D} p(a_{1:k-1} \mid z_j)} \text{ d}z \\
# &= \sum_{i=1}^{D} \frac{p(a_{1:k-1} \mid z_i)}{\sum_{j=1}^{D} p(a_{1:k-1} \mid z_j)} p(a_k \mid a_{1:k-1}, z_i).
# \end{aligned}
# $$
#
# This defines a mixture of Gaussians with mixing weights $\frac{p(a_{1:k-1} \mid z_i)}{\sum_{j=1}^{D} p(a_{1:k-1} \mid z_j)}$ and components distributed as $p(a_k \mid a_{1:k-1}, z_i)$.
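#
# As a quick sanity check (not part of the derivation above): for $D = 1$, which is the setting used below with a single empirical sample, the mixing weight is identically $1$ and the mixture collapses to the single Gaussian $\hat{q}(a_k \mid a_{1:k-1}) = p(a_k \mid a_{1:k-1}, z_1)$.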
# # Implementing the scheme
omega = 8.
epsilon = 0.
num_empirical_samples = 1
num_aux = math.ceil(total_kl / omega)
num_samples_per_aux = math.ceil(math.exp(omega * (1. + epsilon)))
print(f"The KL is {total_kl:.3f}, Omega: {omega:.0f} and epsilon is: {epsilon}."
f"\nSo num auxiliaries: {num_aux} and num samples per aux: {num_samples_per_aux}.")
# ## Can we guarantee the scheme will work?
#
# For 1D the problem is simple: we need to reach the mode. What does this entail?
#
# To reach the mode, located at $D$, when sampling from auxiliary priors with zero mean and variance $\sigma_k^2 = 1/K$, where $K = \lceil\operatorname{KL}(q\,\|\,p) / \Omega\rceil$ is the number of auxiliary variables, each auxiliary variable must on average take the value $\pm D / K$ (the sign depending on which side of the origin the mode lies).
#
#
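# As a back-of-the-envelope check of the numbers used here, the closed-form KL between two Gaussians is
# $$
# \operatorname{KL}(q \,\|\, p) = \log\frac{\sigma_p}{\sigma_q} + \frac{\sigma_q^2 + (\mu_q - \mu_p)^2}{2\sigma_p^2} - \frac{1}{2},
# $$
# so with $\mu_q = 15$, $\sigma_q^2 = 0.25$, $\mu_p = 0$, $\sigma_p^2 = 1$ we get $\log 2 + \tfrac{0.25 + 225}{2} - \tfrac{1}{2} \approx 112.8$ nats, i.e. $K = \lceil 112.8 / 8 \rceil = 15$ auxiliary variables at $\Omega = 8$, each needing to contribute roughly $D / K \approx 1$ on average.
#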
def rudimentary_coding_check(kl, mode, omega):
"""
    Checks whether the scheme will work. If it doesn't work, compute the necessary epsilon to make things work.
"""
    # number of candidate samples per auxiliary variable implied by omega
    num_samples = math.ceil(torch.exp(torch.tensor(omega)))
    print(f"{num_samples}")
    # the per-step target D/K in units of the per-step standard deviation 1/sqrt(K) gives a z-score of D/sqrt(K)
    average_target_deviation = math.pow(math.ceil(kl / omega), 0.5)
    # expected number of prior draws needed before one exceeds that z-score
    necessary_samples = 1. / dist.normal.Normal(loc=0., scale=1).cdf(-mode / average_target_deviation)
    print(f"{necessary_samples}")
if num_samples > necessary_samples:
print(f"The coding scheme will always work.")
else:
epsilon = torch.log(torch.ceil(necessary_samples)) / omega - 1
print(f"Coding scheme should always work when epsilon is: {epsilon:.3f}")
rudimentary_coding_check(total_kl, z_sample, omega)
def p_ak_given_traj_and_z(aux_traj, z_samples, auxiliary_vars, total_var):
b_k = torch.sum(aux_traj)
s_k_minus_one = total_var - torch.sum(auxiliary_vars[:-1])
s_k = s_k_minus_one - auxiliary_vars[-1]
mean_scalar = auxiliary_vars[-1] / s_k_minus_one
variance = auxiliary_vars[-1] * s_k / s_k_minus_one
mean = (z_samples - b_k) * mean_scalar
return mean, variance
def gmm_mixing_weights(aux_traj_log_probs):
"""Takes p(a_k | a_{1:k-1}, z_i) for each z_i and computes the GMM mixing weights"""
return torch.softmax(aux_traj_log_probs, dim=0)
def q_ak_given_traj(component_means, component_vars, mixing_weights):
mixing_categorical = dist.categorical.Categorical(probs=mixing_weights)
component_gaussians = dist.normal.Normal(loc=component_means, scale=torch.sqrt(component_vars))
gmm = dist.mixture_same_family.MixtureSameFamily(mixing_categorical, component_gaussians)
return gmm
# +
# For omega = 8 we have 'optimised variances', otherwise uniform does okay
# if omega == 8:
# auxiliary_vars = torch.tensor([0.0750, 0.0722, 0.0712, 0.0709, 0.0710, 0.0712, 0.0715, 0.0713, 0.0714,
# 0.0718, 0.0717, 0.0700, 0.0647, 0.0556, 0.0204])
# else:
auxiliary_vars = p_var * torch.ones(num_aux) / num_aux
total_var = p_var
#
z_samples = q.sample((num_empirical_samples,))
aux_traj = torch.zeros([0])
aux_traj_mixing_weights = torch.ones((num_empirical_samples,))
aux_traj_log_q = torch.zeros(1)
aux_traj_log_p = torch.zeros(1)
biggest_sample = torch.zeros(1)
pbar = trange(1, num_aux + 1)
for i in pbar:
#
if i < num_aux:
# first compute the prior
aux_prior = dist.normal.Normal(loc=0., scale=math.pow(auxiliary_vars[i-1], 0.5))
# compute the posterior
mixing_weights = gmm_mixing_weights(aux_traj_mixing_weights)
means, variances = p_ak_given_traj_and_z(aux_traj, z_samples, auxiliary_vars[:i], total_var)
aux_post = q_ak_given_traj(means, variances, mixing_weights)
# sample trial aks
trial_samples = aux_prior.sample((num_samples_per_aux,))
biggest_sample += torch.max(trial_samples)
# compute log probabilities under prior and posterior
log_p = aux_prior.log_prob(trial_samples)
log_q = aux_post.log_prob(trial_samples)
# compute the joint probability of the full trajectories
log_p_joint = log_p + aux_traj_log_p
log_q_joint = log_q + aux_traj_log_q
# compute log importance sampling weight log(q/p)
log_importance_sampling_weight = log_q_joint - log_p_joint
# greedily choose the best weight
best_sample_idx = torch.argmax(log_importance_sampling_weight)
# compute the corresponding sample
best_sample = trial_samples[best_sample_idx]
# append to trajectory
aux_traj = torch.cat([aux_traj, best_sample[None]])
aux_traj_log_p += log_p[best_sample_idx]
aux_traj_log_q += log_q[best_sample_idx]
# compute the new mixing weights by adding new component densities to old joint component densities
aux_traj_mixing_weights += aux_post.component_distribution.log_prob(best_sample)
else:
# last sample we complete the sum and choose best sample under q
# first compute the prior
aux_prior = dist.normal.Normal(loc=0., scale=math.pow(auxiliary_vars[-1], 0.5))
# sample trial aKs
trial_samples = aux_prior.sample((num_samples_per_aux,))
biggest_sample += torch.max(trial_samples)
# complete the sum
final_trial_zs = torch.sum(aux_traj, dim=0) + trial_samples
# compute log q
log_q = q.log_prob(final_trial_zs)
# choose best sample
best_sample_idx = torch.argmax(log_q)
best_sample = trial_samples[best_sample_idx]
aux_traj = torch.cat([aux_traj, best_sample[None]])
#print(f"Best sample for aux {i} was {best_sample}")
pbar.set_description(f"Encoded Auxiliary Variable {i}")
# +
final_z_sum = torch.sum(aux_traj)
# ==============================
# PLOT DISTRIBUTIONS
# ==============================
xs = torch.linspace(-2., q.loc+5., 500)
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(xs, torch.exp(q.log_prob(xs)), label='Target')
ax.plot(xs, torch.exp(p.log_prob(xs)), label='Coding')
ax.axvline(z_sample, c='k', ls='--', label='Target Sample')
ax.axvline(final_z_sum, c='r', ls='--', label='Coded Sample')
ax.legend()
plt.show()
# -
final_z_sum
biggest_sample
# ## Potential Solutions
# Is there a way to make a beam-search such that the random seeds depend on the trajectories, but we can take samples from one trajectory and combine it with another trajectory, without needing to transmit more information?
#
# Can we use a different prior on the auxiliaries, for example an exponential distribution? The downside is that this only works well for unimodal targets.
#
# We know that varying the prior variance on z will lower the KL and thus improve the chances of the scheme working. Can we somehow transmit the prior variance on z too without increasing the overhead drastically? (In my mind the cost would be equivalent to using an extra auxiliary variable, but maybe I am misunderstanding.)
#
#
场
                                    period=30,  # holding period
                                    benchmark_price=dv.data_benchmark,  # benchmark price; if omitted, holding-period returns are computed as absolute returns
                                    commission = 0.0008,  # commission; alphalens does not allow setting a commission
)
signal_data2 = obj.signal_data
# signal_data.head()
result2 = analysis(signal_data2, is_event=False, period=30)  # is_event=False: ordinary factor analysis is not treated as an event
print("——period=30——")
obj.create_full_report()
plt.show()
obj = SignalDigger(output_folder='./output',
output_format='pdf')
# Process the factor: compute the holding-period return of every stock in the target universe and the quantile bucket of the corresponding factor value
obj.process_signal_before_analysis(signal=dv.get_ts("pe"),
price=dv.get_ts("open_adj"),
                                    high=dv.get_ts("high_adj"),  # optional (may be None)
                                    low=dv.get_ts("low_adj"),  # optional (may be None)
                                    group=dv.get_ts("sw1"),  # optional (may be None)
                                    n_quantiles=5,  # number of quantile buckets
                                    mask=mask,  # filter condition
                                    can_enter=can_enter,  # whether entry is allowed
                                    can_exit=can_exit,  # whether exit is allowed
                                    period=60,  # holding period
                                    benchmark_price=dv.data_benchmark,  # benchmark price; if omitted, holding-period returns are computed as absolute returns
                                    commission = 0.0008,  # commission; alphalens does not allow setting a commission
)
signal_data3 = obj.signal_data
# signal_data.head()
result3 = analysis(signal_data3, is_event=False, period=30)  # is_event=False: ordinary factor analysis is not treated as an event
print("——period=60——")
obj.create_full_report()
plt.show()
# +
print("——--------——")
print("——period=5——")
print("——ic分析——")
print(result1["ic"])
print("——选股收益分析——")
print(result1["ret"])
print("——最大潜在盈利/亏损分析——")
print(result1["space"])
print("——--------——")
print("——period=30——")
print("——ic分析——")
print(result2["ic"])
print("——选股收益分析——")
print(result2["ret"])
print("——最大潜在盈利/亏损分析——")
print(result2["space"])
print("——--------——")
print("——period=60——")
print("——ic分析——")
print(result3["ic"])
print("——选股收益分析——")
print(result3["ret"])
print("——最大潜在盈利/亏损分析——")
print(result3["space"])
# -
#
#
# 5.3 Use the add_formula method to define a reversal factor:
# * Divert: the correlation coefficient between the last 20 days of adjusted close (close_adj) and volume
# Full documentation
dv.func_doc().doc
dv.add_formula("Diver", "Correlation(close_adj,volume,20)", is_quarterly=False, add_data=True)
dv.get_ts("Diver").head()
# Here is_quarterly=False means a daily factor, while is_quarterly=True means a quarterly factor
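# For comparison, a hypothetical quarterly factor (sketch only: "roe" stands in for whatever quarterly field is actually available in this DataView) would be registered with is_quarterly=True:
# dv.add_formula("ROE_factor", "roe", is_quarterly=True, add_data=True)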
#
# 5.4 Use the append_df method to define CCI (hints: a. the K-line data during trading suspensions must be handled properly; b. the built-in signal_function_mod.ta method can be used to call the talib library to compute CCI)
# +
from jaqs.research.signaldigger import process
from jaqs.data import signal_function_mod as sfm
from talib import abstract
# help(sfm.ta(ta_method='MA'))
Open = dv.get_ts("open_adj")
High = dv.get_ts("high_adj")
Low = dv.get_ts("low_adj")
Close = dv.get_ts("close_adj")
trade_status = dv.get_ts('trade_status')
mask_sus = trade_status == 0
# Remove data from suspension periods before computing the indicator
open_masked = process._mask_df(Open,mask=mask_sus).dropna(axis=1, how='all')
high_masked = process._mask_df(High,mask=mask_sus).dropna(axis=1, how='all')
low_masked = process._mask_df(Low,mask=mask_sus).dropna(axis=1, how='all')
close_masked = process._mask_df(Close,mask=mask_sus).dropna(axis=1, how='all')
# -
CCI = sfm.ta(ta_method='CCI',
ta_column=0,
Open=open_masked,
High=high_masked,
Low=low_masked,
Close=close_masked,
Volume=None,
timeperiod=14)
dv.append_df(CCI,'CCI')
dv.get_ts("CCI").tail()
| 15,077 |
/jupyter/.ipynb_checkpoints/AGE_YEARLY-checkpoint.ipynb
|
2534d0dc4d8b3f40db491cc4566c40a66ed1be95
|
[] |
no_license
|
HyunJungChoe/stress-and-civil-complaint
|
https://github.com/HyunJungChoe/stress-and-civil-complaint
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 56,608 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="V7FpZOga0G_d"
#
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 106} colab_type="code" executionInfo={"elapsed": 77581, "status": "ok", "timestamp": 1532943565473, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="IDtS2MMLw3Qj" outputId="3a663c9f-9b6b-4c5e-e0c9-4eb557c05abf"
# !apt-get install -y -qq software-properties-common python-software-properties module-init-tools
# !add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
# !apt-get update -qq 2>&1 > /dev/null
# !apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
# !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
# !echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="puEeSLRF00s0"
# !mkdir -p drive
# !google-drive-ocamlfuse drive
# !apt-get -qq install -y libsm6 libxext6 && pip3 install -q -U opencv-python
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 69} colab_type="code" executionInfo={"elapsed": 4569, "status": "ok", "timestamp": 1532943587072, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="1-q5cSg-51Bj" outputId="a57cec27-380a-4b20-d7d8-9ea68f0d9c60"
import os
os.chdir("drive/Collab/opencv")
# !ls
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1441, "status": "ok", "timestamp": 1532943590576, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="QVzTPGWp1NTN" outputId="2c551477-6a05-4bc4-b129-3284db4a6271"
import cv2
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# %pylab inline
from IPython.display import clear_output
# + [markdown] colab_type="text" id="jWbJM-d02Mhg"
# -1 == cv2.IMREAD_UNCHANGED
#
# 0 == cv2.IMREAD_GRAYSCALE
#
# 1 == cv2.IMREAD_COLOR
#
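# A minimal check of the three flags (a sketch; it assumes the same lenna.png used below is present):
# +
for flag in (cv2.IMREAD_UNCHANGED, cv2.IMREAD_GRAYSCALE, cv2.IMREAD_COLOR):
    im = cv2.imread('lenna.png', flag)
    # -1 keeps the file as stored (including alpha, if any), 0 loads a single channel, 1 loads 3 BGR channels
    print(flag, im.shape)
# -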
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 286} colab_type="code" executionInfo={"elapsed": 5083, "status": "ok", "timestamp": 1532530298580, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="s3lSuxxn1eTR" outputId="5241fd11-3d61-4437-b2d8-241d9ebaeddd"
img = cv2.imread('lenna.png', 0)
plt.imshow(img, cmap='gray', interpolation='bicubic')
plt.show()
cv2.imwrite('granna.png', img)
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 26, "status": "ok", "timestamp": 1532550487501, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="tRmagfp6twHC" outputId="bb172e09-f33e-44bf-fd4e-6b69ec1206fc"
feed = cv2.VideoCapture('heavy.mkv')
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('gravy.avi', fourcc, 20.0, (360,360))
counter = 0
try:
while(True):
ret, frame = feed.read()
        if ret == False or counter == 100:
            feed.release()   # release() must be called, not just referenced
            out.release()    # finalise the output file on normal exit as well
            break
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        out.write(gray)  # note: this VideoWriter was opened for colour frames; single-channel frames may be dropped unless isColor=False is used
axis('off')
print(counter)
imshow(frame)
imshow(gray)
show()
clear_output(wait=True)
counter+=1
except KeyboardInterrupt:
feed.release()
out.release()
print('released video source')
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 269} colab_type="code" executionInfo={"elapsed": 2241, "status": "ok", "timestamp": 1532553237770, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="j2AiyBiBEqJH" outputId="a229fe92-f64a-4b62-889d-8d38d2397e05"
img = cv2.imread('lenna.png', 1)
cv2.line(img, (0,0),(200,200), (255,0,255), 3)
cv2.rectangle(img, (40,40), (100,100),(125,255,100),5)
cv2.circle(img, (200,200), 185, (0,0,0), 3, cv2.LINE_AA)
pts = np.array([[1,2],[23,34],[56,378],[124,56]],np.int32)
cv2.polylines(img,[pts], True, (255,255,255), 6)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(img, '||-//', (100,200), font, 3,(0,0,0), 5, cv2.LINE_AA)
axis('off')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 269} colab_type="code" executionInfo={"elapsed": 1726, "status": "ok", "timestamp": 1532592820856, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="5MzQz4adSxfg" outputId="fcd93807-9369-4fde-957c-d0a3be1bb2cd"
img = cv2.imread('lenna.png', 1)
px = [0,255,255]
img[100,100] = px
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
cv2.rectangle(img,(200,200),(350,390), (0,0,0),4)
plt.imshow(img)
plt.show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 269} colab_type="code" executionInfo={"elapsed": 1450, "status": "ok", "timestamp": 1532592829025, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="203Z0qL6pbqA" outputId="35e2152a-e7da-4117-97ee-ebac2958719a"
axis('off')
lenna_face = img[200:390,200:350]
img[0:190,0:150] = lenna_face
plt.imshow(img)
plt.show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 286} colab_type="code" executionInfo={"elapsed": 9352, "status": "ok", "timestamp": 1532598344085, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="pzGWK_Spp08Y" outputId="78e46494-b436-4a85-8153-09a5be1690b0"
img1 = cv2.imread('me.jpg')
img2 = cv2.imread('dd_me.jpg')
shape = img2.shape
print(shape)
img1 = cv2.resize(img1, (shape[1], shape[0]))  # cv2.resize takes (width, height), not shape[:2]
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
imshow(img1)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 286} colab_type="code" executionInfo={"elapsed": 16, "status": "ok", "timestamp": 1532596767613, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="E79mW7OBvzD6" outputId="ebe13a44-a4bc-4cde-fac3-12f504284029"
img = cv2.subtract(img2, img1)
imshow(img)
show()
cv2.imwrite('filter.jpg', img)
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 286} colab_type="code" executionInfo={"elapsed": 13754, "status": "ok", "timestamp": 1532596844007, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="Om4UyF5f4vN5" outputId="377c93ad-2558-4e01-c7f8-d3dc523cdb8f"
img = img2-img1
imshow(img)
show()
cv2.imwrite('filter2.jpg', img)
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 286} colab_type="code" executionInfo={"elapsed": 11301, "status": "ok", "timestamp": 1532596992497, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="LpPxTK3c5WXR" outputId="9e3d6064-3d94-4e5d-d902-b376a7907f2e"
img = img2-img1
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imshow(img)
show()
cv2.imwrite('filter3.jpg', img)
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 269} colab_type="code" executionInfo={"elapsed": 3078, "status": "ok", "timestamp": 1532598406781, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="2dyvt2rR7T1A" outputId="f3ad56e9-0de5-438a-fc1f-9c748fdc306e"
cv2.rectangle(img1, (275,100),(730,650),(0,0,0),(4))
my_face = img1[100:650, 275:730]
imshow(img1)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 269} colab_type="code" executionInfo={"elapsed": 2249, "status": "ok", "timestamp": 1532601642851, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="VJRQ0Vyg--Dh" outputId="00ccf0f6-74ac-46cb-ef15-354aa84e78cf"
h,w = lenna_face.shape[:2]
my_face = cv2.resize(my_face, (w,h))
imshow(my_face)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 304} colab_type="code" executionInfo={"elapsed": 1851, "status": "ok", "timestamp": 1532601702748, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="fb7cGs5n_b8a" outputId="ac6b1b2a-efc7-4549-a516-a0fa89c3253b"
print(my_face.shape)
print(lenna_face.shape)
creepy_baby = cv2.addWeighted(my_face, 0.3, lenna_face,0.7,0)
imshow(creepy_baby)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 269} colab_type="code" executionInfo={"elapsed": 3142, "status": "ok", "timestamp": 1532602341400, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="329HvkxiLlcf" outputId="d931e8c8-81b0-412e-857d-dde29ebbb47c"
granna_face = cv2.cvtColor(lenna_face, cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(granna_face, 100, 255, cv2.THRESH_BINARY_INV)
imshow(mask)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 269} colab_type="code" executionInfo={"elapsed": 3439, "status": "ok", "timestamp": 1532612456180, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="Ftqq5dXwy3vm" outputId="2c6b2244-eb8e-4765-808d-37e5bbe3a346"
mask_inv = cv2.bitwise_not(mask)
img_fg = cv2.bitwise_and(lenna_face,lenna_face, mask = mask_inv)
axis('off')
imshow(img_fg)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 286} colab_type="code" executionInfo={"elapsed": 4269, "status": "ok", "timestamp": 1532618422939, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="aUSyassP1K4P" outputId="13cbe27c-e154-4080-94cd-7777095855fc"
grace = cv2.cvtColor(my_face, cv2.COLOR_BGR2GRAY)
ret, me_mask = cv2.threshold(grace,80 , 255, cv2.THRESH_BINARY_INV)
axis('off')
imshow(me_mask)
show()
print(ret)
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 269} colab_type="code" executionInfo={"elapsed": 1863, "status": "ok", "timestamp": 1532618482662, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="05VEHbCoKiBB" outputId="bafe8e02-f5d3-443c-9287-03ee91107204"
me_inv = cv2.bitwise_not(me_mask)
me_fwd = cv2.bitwise_and(my_face,my_face,mask = me_inv)
imshow(me_fwd)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 278} colab_type="code" executionInfo={"elapsed": 12091, "status": "ok", "timestamp": 1532619754417, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="0MYr0hgbLomi" outputId="46e1f97b-8a07-41a2-e1bc-d1bae4bae02d"
# !wget https://wallpapercave.com/wp/YPR58OX.jpg
# !ls
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 275} colab_type="code" executionInfo={"elapsed": 5282, "status": "ok", "timestamp": 1532620518678, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="L6hCmYQlQWzY" outputId="a91fd767-d0cd-42ec-d077-9ce3b90da854"
bg = cv2.imread('YPR58OX.jpg')
bg = cv2.cvtColor(bg, cv2.COLOR_BGR2RGB)
y,x = lenna_face.shape[:2]
aster = bg[0:y,0:x]
lenna_bg = cv2.bitwise_and(aster, aster, mask= mask)
print(lenna_bg.shape)
print(img_fg.shape)
total = cv2.add(lenna_bg, img_fg)
bg[0:y,0:x] = total
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(bg,'This bitch is crazy sexy! Vote for her!!',(0,300), font,2,(0,0,0),3)
imshow(bg)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 240} colab_type="code" executionInfo={"elapsed": 4048, "status": "ok", "timestamp": 1532943608863, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="ZkyJGO6vJ9Q0" outputId="714426ee-ac21-4b25-e3da-b983b24635ef"
img = cv2.imread('bookpage.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
imshow(img)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 240} colab_type="code" executionInfo={"elapsed": 3636, "status": "ok", "timestamp": 1532943620894, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="8_F3OdawKh-z" outputId="4d5f6ea6-4a98-46a6-cee7-3dbc80dfa2a8"
retVal, thres = cv2.threshold(img, 11, 255, cv2.THRESH_BINARY)
imshow(thres)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 362} colab_type="code" executionInfo={"elapsed": 2222, "status": "ok", "timestamp": 1532945146109, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="KdagKTplLHMv" outputId="8c20f52a-93e6-4c28-a499-5457e9ac7e0c"
graybefore = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
graybefore = 255 - graybefore
imshow(graybefore)
show()
print(graybefore)
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 240} colab_type="code" executionInfo={"elapsed": 1555, "status": "ok", "timestamp": 1532945173666, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="sfjCW6jPoZAd" outputId="e46d0ea2-c737-4e33-d8b5-ef87bb7e4b56"
retVal, binary = cv2.threshold(graybefore, 255-11, 255, cv2.THRESH_BINARY)
imshow(binary)
show()
# + [markdown] colab_type="text" id="Bvf-_HRygn-J"
# retVal only carries new information for the Otsu filter; otherwise it simply equals the threshold value we passed in.
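# A quick way to see this (a small sketch reusing the graybefore image from above):
# +
rv_fixed, _ = cv2.threshold(graybefore, 255-11, 255, cv2.THRESH_BINARY)
rv_otsu, _ = cv2.threshold(graybefore, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# rv_fixed simply echoes 244.0, while rv_otsu is the threshold Otsu's method estimated from the image
print(rv_fixed, rv_otsu)
# -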
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 240} colab_type="code" executionInfo={"elapsed": 3963, "status": "ok", "timestamp": 1532945191729, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="uj2tuQ7eNyXG" outputId="4a0d0807-7cc6-461b-d898-3cf2c2dcad45"
retVal, trunc = cv2.threshold(graybefore, 255-11, 255, cv2.THRESH_TRUNC)
imshow(trunc)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 240} colab_type="code" executionInfo={"elapsed": 1482, "status": "ok", "timestamp": 1532945215507, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="3_LCkwbem1ou" outputId="5502d0f0-5dc4-40b6-e790-5f994eee447a"
retVal, zero = cv2.threshold(graybefore, 255-11,255, cv2.THRESH_TOZERO)
imshow(zero)
show()
# + [markdown] colab_type="text" id="W78PpPrqDJ9U"
# Time for Adaptive thresholding: Gaussian and Otsu
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 240} colab_type="code" executionInfo={"elapsed": 2487, "status": "ok", "timestamp": 1532945390550, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="oF5633V-DYA7" outputId="4b3609d7-fafe-4576-8c03-5fa8fc95a895"
gaus = cv2.adaptiveThreshold(graybefore, 255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY_INV, 155 ,1)
gaus = 255 - gaus
imshow(gaus)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 257} colab_type="code" executionInfo={"elapsed": 1595, "status": "ok", "timestamp": 1532945419281, "user": {"displayName": "Saransh Karira", "photoUrl": "//lh3.googleusercontent.com/-lSly4lTCZUY/AAAAAAAAAAI/AAAAAAAABGQ/e3RVDuZsvcQ/s50-c-k-no/photo.jpg", "userId": "108532283212038371787"}, "user_tz": -330} id="3KbzsG7AlNJd" outputId="cabe5a08-410b-4921-b6f3-675dff6fb5fb"
print(graybefore.shape)
retVal, otsu = cv2.threshold(graybefore, 0, 255, cv2.THRESH_OTSU)
imshow(otsu)
show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="4EQNd5NnmiUA"
| 20,120 |
/1주차/[20190519]파이썬기초_함수[강의용].ipynb
|
9eac0a51633db22168001ee6b3d5b2215369ad50
|
[] |
no_license
|
lovemin1003/lovemin34
|
https://github.com/lovemin1003/lovemin34
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 7,490 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import numpy as np
from sklearn.datasets import fetch_california_housing
# +
housing = fetch_california_housing()
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]
X = tf.constant(housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
XT = tf.transpose(X)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)
with tf.Session() as sess:
theta_val = theta.eval()
theta_val
# -
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]
# +
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name='X')
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name='y')
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0), name='theta')
y_pred = tf.matmul(X, theta, name='y_pred')
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name='mse')
gradients = 2/m * tf.matmul(tf.transpose(X), error)
training_op = tf.assign(theta, theta - gradients * learning_rate)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("best theta: ", best_theta)
# -
g(1,3,5,7,9)
cal_avg(10,20)
# A parameter that can receive a variable number of arguments
# *parameter_name
# stored as a tuple
def test(*x):
return x
test(1,2,3,4)
def cal_avg(*args):
total_sum = 0
for num in args:
total_sum += num
avg = total_sum / len(args)
return avg
cal_avg(1,2,3,5,6,7,7)
# ### Write a function that filters the numbers received as a list and returns only the positive ones.
#
# - Condition 1. The input is passed as a list argument
#
# - Condition 2. The result is a new list consisting only of positive numbers
positive_list([-10,-20,10,30,0])
# [10,30]
def positive_list(x):
new_list = []
for i in x:
if i > 0:
new_list.append(i)
return new_list
positive_list([-10,-20,10,30,0])
# +
# Keyword parameters
# **kwargs
# dictionary type
# -
def test(**kwargs):
return kwargs
test(name='홍길동', age=16)
print('test', 'test', end=' ')
Conv2D layer
# + colab={} colab_type="code" id="FlQRPfFzaEJx"
x_train = trainX04.reshape(trainX04.shape[0], 28, 28, 1).astype('float32')
x_test = testX04.reshape(testX04.shape[0], 28, 28, 1).astype('float32')
# + [markdown] colab_type="text" id="jLQr-b3F-hw8"
# ## 5. Normalize x_train and x_test by dividing it by 255
# + colab={} colab_type="code" id="PlEZIAG5-g2I"
x_train /= 255
x_test /= 255
# + [markdown] colab_type="text" id="pytVBaw4-vMi"
# ## 6. Use One-hot encoding to divide y_train and y_test into required no of output classes
# + colab={} colab_type="code" id="V48xiua4-uUi"
y_train = np_utils.to_categorical(trainY04, 10)
y_test = np_utils.to_categorical(testY04, 10)
print('--- THE DATA ---')
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# + [markdown] colab_type="text" id="elPkI44g_C2b"
# ## 7. Build a sequential model with 2 Convolutional layers with 32 kernels of size (3,3) followed by a Max pooling layer of size (2,2) followed by a drop out layer to be trained for classification of digits 0-4
# + colab={} colab_type="code" id="MU09mm9F89gO"
BATCH_SIZE = 32
EPOCHS = 10
# Define Model
model3 = Sequential()
# # 1st Conv Layer
# model3.add(Convolution2D(32, 3, 3, input_shape=(28, 28, 1),name='Conv_1'))
# model3.add(Activation('relu'))
# # 2nd Conv Layer
# model3.add(Convolution2D(32, 3, 3,name='Conv_2'))
# model3.add(Activation('relu'))
# # Max Pooling
# model3.add(MaxPooling2D(pool_size=(2,2),name='MaxPool_1'))
# # Dropout
# model3.add(Dropout(0.25,name='Dropout_1'))
# 1st Conv Layer
model3.add(Convolution2D(32, 3, 3, input_shape=(28, 28, 1),name='Conv_1'))
model3.add(Activation('relu'))
# 2nd Conv Layer
model3.add(Convolution2D(32, 3, 3,name='Conv_2'))
model3.add(Activation('relu'))
# Max Pooling
model3.add(MaxPooling2D(pool_size=(2,2),name='MaxPool_1'))
# Dropout
model3.add(Dropout(0.25))
# + [markdown] colab_type="text" id="sJQaycRO_3Au"
# ## 8. Post that flatten the data and add 2 Dense layers with 128 neurons and neurons = output classes with activation = 'relu' and 'softmax' respectively. Add dropout layer inbetween if necessary
# + colab={} colab_type="code" id="vOZeRbK7t9AT"
# Fully Connected Layer
model3.add(Flatten())
model3.add(Dense(128,name='Dense_1'))
model3.add(Activation('relu'))
# More Dropout
model3.add(Dropout(0.25,name='dropout_2'))
# Prediction Layer
model3.add(Dense(10,name='Dense_2'))
model3.add(Activation('softmax'))
# Loss and Optimizer
model3.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Store Training Results
early_stopping = keras.callbacks.EarlyStopping(monitor='val_acc', patience=7, verbose=1, mode='auto')
callback_list = [early_stopping]
# -
# Train the model
model3.fit(x_train, y_train, batch_size=BATCH_SIZE, nb_epoch=EPOCHS,
validation_data=(x_test, y_test), callbacks=callback_list)
# + [markdown] colab_type="text" id="my1P09bxAv8H"
# ## 9. Print the training and test accuracy
# -
loss_and_metrics = model3.evaluate(x_test, y_test)
print(loss_and_metrics)
# + colab={} colab_type="code" id="yf7F8Gdutbf0"
# + [markdown] colab_type="text" id="z78o3WIjaEJ3"
# ## 10. Make only the dense layers to be trainable and convolutional layers to be non-trainable
# + colab={} colab_type="code" id="brN7VZHFaEJ4"
# Freeze every layer in the model whose name does not contain 'Dense'
for layer in model3.layers:
    if('Dense' not in layer.name):  # substring check: only the named Dense layers stay trainable
        # Freezing a layer
        layer.trainable = False
# + [markdown] colab_type="text" id="4opnW7o0BJ8P"
# ## 11. Use the model trained on 0 to 4 digit classification and train it on the dataset which has digits 5 to 9 (Using Transfer learning keeping only the dense layers to be trainable)
# + colab={} colab_type="code" id="lCFcYHTm6-cE"
x_train59 = trainX59.reshape(trainX59.shape[0], 28, 28, 1).astype('float32')
x_test59 = testX59.reshape(testX59.shape[0], 28, 28, 1).astype('float32')
x_train59 /= 255
x_test59 /= 255
y_train59 = np_utils.to_categorical(trainY59, 10)
y_test59 = np_utils.to_categorical(testY59, 10)
# Train the model
model3.fit(x_train59, y_train59, batch_size=BATCH_SIZE, nb_epoch=EPOCHS,
validation_data=(x_test59, y_test59), callbacks=callback_list)
# -
# + colab={} colab_type="code" id="DITyAt3t7Tto"
# + [markdown] colab_type="text" id="SoDozqghCJZ4"
# ## 12. Print the accuracy for classification of digits 5 to 9
# + colab={} colab_type="code" id="9fCxgb5s49Cj"
loss_and_metrics = model3.evaluate(x_test59, y_test59)
print(loss_and_metrics)
# + colab={} colab_type="code" id="LRWizZIpCUKg"
# + [markdown] colab_type="text" id="FU-HwvIdH0M-"
# ## Sentiment analysis <br>
#
# The objective of the second problem is to perform Sentiment analysis from the tweets data collected from the users targeted at various mobile devices.
# Based on the tweet posted by a user (text), we will classify if the sentiment of the user targeted at a particular mobile device is positive or not.
# + [markdown] colab_type="text" id="nAQDiZHRH0M_"
# ### 13. Read the dataset (tweets.csv) and drop the NA's while reading the dataset
# + colab={} colab_type="code" id="3eXGIe-SH0NA"
import pandas as pd
import numpy as np
data = pd.read_csv('tweets.csv',encoding = "ISO-8859-1")
data.head(10)
# + colab={} colab_type="code" id="CWeWe1eJH0NF"
# + [markdown] colab_type="text" id="jPJvTjefH0NI"
# ### 14. Preprocess the text and add the preprocessed text in a column with name `text` in the dataframe.
# + colab={} colab_type="code" id="5iec5s9gH0NI"
def preprocess(text):
try:
return text.decode('ascii')
except Exception as e:
print(e)
# + colab={} colab_type="code" id="EQSmqA-vH0NT"
# data['text'] = [preprocess(text) for text in data.tweet_text]
# str has no .decode() in Python 3, so the raw tweet text is used directly
data['text'] = data.tweet_text
# + colab={} colab_type="code" id="7kX-WoJDH0NV"
# + [markdown] colab_type="text" id="OGWB3P2WH0NY"
# ### 15. Consider only rows having Positive emotion and Negative emotion and remove other rows from the dataframe.
# + colab={} colab_type="code" id="bdgA_8N2H0NY"
data=data.loc[data['is_there_an_emotion_directed_at_a_brand_or_product'].isin(['Negative emotion','Positive emotion'])]
# + colab={} colab_type="code" id="_Jlu-reIH0Na"
data.head(10)
# + [markdown] colab_type="text" id="SotCRvkDH0Nf"
# ### 16. Represent text as numerical data using `CountVectorizer` and get the document term frequency matrix
#
# #### Use `vect` as the variable name for initialising CountVectorizer.
# + colab={} colab_type="code" id="YcbkY4sgH0Ng"
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
# use CountVectorizer to create document-term matrices from X_train and X_test
vect = CountVectorizer()
# + colab={} colab_type="code" id="KyXtZGr-H0Nl"
X_train_dtm = vect.fit_transform(data['text'])
# + colab={} colab_type="code" id="Z4LUM-XPH0Nn"
# + colab={} colab_type="code" id="aIdZYxJtH0Nq"
# + [markdown] colab_type="text" id="5pxd5fSHH0Nt"
# ### 17. Find number of different words in vocabulary
# + colab={} colab_type="code" id="p1DQ2LdNH0Nu"
len(vect.get_feature_names())
# + [markdown] colab_type="text" id="dwtgjTBeH0Ny"
# #### Tip: To see all available functions for an Object use dir
# + colab={} colab_type="code" id="2n_iCcTNH0N0"
dir(vect)
# + [markdown] colab_type="text" id="ShA6D8jKH0N5"
# ### 18. Find out how many Positive and Negative emotions are there.
#
# Hint: Use value_counts on that column
# + colab={} colab_type="code" id="q7LAl5pzH0N6"
data.is_there_an_emotion_directed_at_a_brand_or_product.value_counts()
# + [markdown] colab_type="text" id="IUvgj0FoH0N9"
# ### 19. Change the labels for Positive and Negative emotions as 1 and 0 respectively and store in a different column in the same dataframe named 'Label'
#
# Hint: use map on that column and give labels
# + colab={} colab_type="code" id="YftKwFv7H0N9"
# mapping from emotion label to integer class
emotion = {'Positive emotion': 1,'Negative emotion': 0}
# traverse the emotion column of the dataframe and write
# the mapped value where the key matches
data['emotion'] = [emotion[item] for item in data.is_there_an_emotion_directed_at_a_brand_or_product]
# -
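# Equivalently, following the hint in step 19, pandas `.map()` with the same emotion dict gives identical labels in one line:
# +
data['emotion'] = data['is_there_an_emotion_directed_at_a_brand_or_product'].map(emotion)
# -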
data['emotion']
# + [markdown] colab_type="text" id="3YErwYLCH0N_"
# ### 20. Define the feature set (independent variable or X) to be `text` column and `labels` as target (or dependent variable) and divide into train and test datasets
# + colab={} colab_type="code" id="lNkwrGgEH0OA"
# split X and y into training and testing sets
from sklearn.model_selection import train_test_split
# how to define X and y (from the SMS data) for use with COUNTVECTORIZER
X = data.text
y = data.emotion
print(X.shape)
print(y.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
# + [markdown] colab_type="text" id="Q5nlCuaaH0OD"
# ## 21. **Predicting the sentiment:**
#
#
# ### Use Naive Bayes and Logistic Regression and their accuracy scores for predicting the sentiment of the given text
# + colab={} colab_type="code" id="2AbVYssaH0OE"
# import and instantiate a Multinomial Naive Bayes model
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
# -
X_train_dtm = vect.fit_transform(X_train)
X_test_dtm = vect.transform(X_test)
# + colab={} colab_type="code" id="ktXrLhmOH0Of"
# train the model using X_train_dtm
nb.fit(X_train_dtm, y_train)
# + colab={} colab_type="code" id="clv2X0kKH0Ok"
# make class predictions for X_test_dtm
y_pred_class = nb.predict(X_test_dtm)
# + colab={} colab_type="code" id="K86LRMfdH0Ou"
# calculate accuracy of class predictions
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred_class)
# -
# import and instantiate a logistic regression model
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
# train the model using X_train_dtm
logreg.fit(X_train_dtm, y_train)
# make class predictions for X_test_dtm
y_pred_class = logreg.predict(X_test_dtm)
metrics.accuracy_score(y_test, y_pred_class)
# + [markdown] colab_type="text" id="sw-0B33tH0Ox"
# ## 22. Create a function called `tokenize_predict` which can take count vectorizer object as input and prints the accuracy for x (text) and y (labels)
# -
# + colab={} colab_type="code" id="okCTOs1TH0Oy"
def tokenize_test(vect):
x_train_dtm = vect.fit_transform(X_train)
print('Features: ', x_train_dtm.shape[1])
x_test_dtm = vect.transform(X_test)
nb = MultinomialNB()
nb.fit(x_train_dtm, y_train)
y_pred_class = nb.predict(x_test_dtm)
print('Accuracy: ', metrics.accuracy_score(y_test, y_pred_class))
# + [markdown] colab_type="text" id="JxZ8jfPEH0O0"
# ### Create a count vectorizer function which includes n_grams = 1,2 and pass it to tokenize_predict function to print the accuracy score
# + colab={} colab_type="code" id="kdCyAN_IH0O0"
vect = CountVectorizer(ngram_range=(1, 2))
tokenize_test(vect)
# + [markdown] colab_type="text" id="axepytmgH0O4"
# ### Create a count vectorizer function with stopwords = 'english' and pass it to tokenize_predict function to print the accuracy score
# + colab={} colab_type="code" id="HToGkq7vH0O4"
# remove English stop words
vect = CountVectorizer(stop_words='english')
tokenize_test(vect)
# + [markdown] colab_type="text" id="iOIlJRxoH0O7"
# ### Create a count vectorizer function with stopwords = 'english' and max_features =300 and pass it to tokenize_predict function to print the accuracy score
# + colab={} colab_type="code" id="6fUhff-oH0O8"
# remove English stop words and only keep 300 features
vect = CountVectorizer(stop_words='english', max_features=300)
tokenize_test(vect)
# + [markdown] colab_type="text" id="S2KZNWVkH0PA"
# ### Create a count vectorizer function with n_grams = 1,2 and max_features = 15000 and pass it to tokenize_predict function to print the accuracy score
# + colab={} colab_type="code" id="3v9XD082H0PB"
vect = CountVectorizer(ngram_range=(1, 2), max_features=100)
tokenize_test(vect)
# + [markdown] colab_type="text" id="We3JK_SRH0PO"
# ### Create a count vectorizer function with n_grams = 1,2 and include terms that appear at least 2 times (min_df = 2) and pass it to tokenize_predict function to print the accuracy score
# + colab={} colab_type="code" id="fUHrfDCyH0PP"
vect = CountVectorizer(ngram_range=(1, 2), min_df=2)
tokenize_test(vect)
# + colab={} colab_type="code" id="3H4k_lVZH0PS"
| 15,126 |
/.ipynb_checkpoints/Modeles_Arboréscent_Louis_Cross-checkpoint.ipynb
|
dceb3616a4c8e7c3fec3f140f0062cae47eadd4c
|
[] |
no_license
|
Frouby/QRT_Challenge
|
https://github.com/Frouby/QRT_Challenge
| 0 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 496,234 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# General Dependencies
import os
import numpy as np
# Denoising dependencies
from trefide.pmd import batch_decompose,\
batch_recompose,\
overlapping_batch_decompose,\
overlapping_batch_recompose,\
determine_thresholds
from trefide.reformat import overlapping_component_reformat
# Plotting & Video Rendering Dependencies
import funimag
import matplotlib.pyplot as plt
from trefide.plot import pixelwise_ranks
from trefide.video import play_cv2
# Set Demo Dataset Location
ext = os.path.join("..", "example_movies")
filename = os.path.join(ext, "demoMovie.tif")
# %load_ext autoreload
# %autoreload 2
# -
# # Load Data
# +
from skimage import io
mov = io.imread(filename).transpose([1,2,0])[:60,:60,:]
mov = np.asarray(mov,order='C',dtype=np.float64)
print(mov.shape)
fov_height, fov_width, num_frames = mov.shape
# -
# # Set Params
# +
# Maximum of rank 50 blocks (safeguard to terminate early if this is hit)
max_components = 50
# Enable Decimation
max_iters_main = 10
max_iters_init = 40
d_sub=2
t_sub=2
# Defaults
consec_failures = 3
tol = 0.0005
# Set Blocksize Parameters
block_height = 20
block_width = 20
overlapping = True
# -
# # Compress Video
# ## Simulate Critical Region with Noise
spatial_thresh, temporal_thresh = determine_thresholds((fov_height, fov_width, num_frames),
(block_height, block_width),
consec_failures, max_iters_main,
max_iters_init, tol,
d_sub, t_sub, 5, True)
# ## Decompose Each Block Into Spatial & Temporal Components
# Blockwise Parallel, Single Tiling
if not overlapping:
spatial_components,\
temporal_components,\
block_ranks,\
block_indices = batch_decompose(fov_height, fov_width, num_frames,
mov, block_height, block_width,
max_components, consec_failures,
max_iters_main, max_iters_init, tol,
d_sub=d_sub, t_sub=t_sub)
# Blockwise Parallel, 4x Overlapping Tiling
else:
spatial_components,\
temporal_components,\
block_ranks,\
block_indices,\
block_weights = overlapping_batch_decompose(fov_height, fov_width, num_frames,
mov, block_height, block_width,
spatial_thresh, temporal_thresh,
max_components, consec_failures,
max_iters_main, max_iters_init, tol,
d_sub=d_sub, t_sub=t_sub)
# # Reconstruct Denoised Video
# Single Tiling (no need for reweighting)
if not overlapping:
mov_denoised = np.asarray(batch_recompose(spatial_components,
temporal_components,
block_ranks,
block_indices))
# Overlapping Tilings With Reweighting
else:
mov_denoised = np.asarray(overlapping_batch_recompose(fov_height, fov_width, num_frames,
block_height, block_width,
spatial_components,
temporal_components,
block_ranks,
block_indices,
block_weights))
# # Produce Diagnostics
# ### Single Tiling Pixel-Wise Ranks
if overlapping:
pixelwise_ranks(block_ranks['no_skew']['full'], fov_height, fov_width, num_frames, block_height, block_width)
else:
pixelwise_ranks(block_ranks, fov_height, fov_width, num_frames, block_height, block_width)
# ### Correlation Images
from funimag.plots import util_plot
util_plot.comparison_plot([mov, mov_denoised + np.random.randn(np.prod(mov.shape)).reshape(mov.shape)*.01],
plot_orientation="vertical")
# ## Render Videos & Residual
play_cv2(np.vstack([mov, mov_denoised, mov-mov_denoised]), magnification=2)
# # Save Results
U, V = overlapping_component_reformat(fov_height, fov_width, num_frames,
block_height, block_width,
spatial_components,
temporal_components,
block_ranks,
block_indices,
block_weights)
np.savez(os.path.join(ext, "demo_results.npz"), U, V,block_ranks,block_height,block_width)
.n_components_, pca.explained_variance_ratio_)
# train = pca.transform(train)
# test = pca.transform(testData)
# -
test = testData
# ## XGBoost
import xgboost as xgb
xgb_model = xgb.XGBClassifier(objective='multi:softmax',
eval_metric=['map@2', 'merror'],
n_estimators=700,
num_class=4,
silent=1,
max_depth=6,
nthread=4,
learning_rate=0.1,
gamma=0.5,
min_child_weight=0.6,
max_delta_step=0.1,
subsample=0.6,
colsample_bytree=0.7,
reg_lambda=0.4,
reg_alpha=0.8,
num_leaves=250,
early_stopping_rounds=20,
num_boost_round=8000,
scale_pos_weight=1)
xgb_model.fit(train, y)
pb = xgb_model.predict_proba(train)
pb = np.array(pb)
submit = pd.DataFrame()
submit['y1'] = pb.argsort()[np.arange(len(pb)), -1]
submit['y2'] = pb.argsort()[np.arange(len(pb)), -2]
print(test_score(submit['y1'].values, submit['y2'].values, y))
pb = xgb_model.predict_proba(test)
pb = np.array(pb)
xgb_submit = pd.DataFrame()
xgb_submit['y1'] = pb.argsort()[np.arange(len(pb)), -1]
xgb_submit['y2'] = pb.argsort()[np.arange(len(pb)), -2]
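# The two-line "top-2 classes" extraction above is repeated for every model below; a small helper (illustrative, not part of the original code) makes the intent explicit.
# +
def top2_from_proba(prob_matrix):
    """Return (best, second-best) class indices for each row of a class-probability matrix."""
    prob_matrix = np.asarray(prob_matrix)
    order = prob_matrix.argsort(axis=1)  # ascending sort of class probabilities per row
    return order[:, -1], order[:, -2]    # last column = argmax, second-to-last = runner-up

# e.g. top2_from_proba([[0.1, 0.7, 0.2, 0.0]]) -> (array([1]), array([2]))
# -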
# ## LightGBM
# +
import lightgbm as lgb
# create dataset for lightgbm
lgb_train = lgb.Dataset(train[:600000], y[:600000])
lgb_eval = lgb.Dataset(train[600000:], y[600000:], reference=lgb_train)
# specify your configurations as a dict
params = {
'boosting_type': 'gbdt',
'objective': 'multiclass',
'num_class': 4,
'metric': ['multi_error', 'map@2'], # 'map@2',
'num_leaves': 250, # 4
'min_data_in_leaf': 100,
'learning_rate': 0.1,
# 'feature_fraction': 0.3,
'bagging_fraction': 0.8,
'bagging_freq': 5,
'lambda_l1': 0.4,
'lambda_l2': 0.6,
'max_depth':6,
# 'min_gain_to_split': 0.2,
'verbose': 5,
'is_unbalance': True
}
print('Start training...')
gbm = lgb.train(params,
lgb_train,
num_boost_round=8000,
valid_sets=lgb_eval,
early_stopping_rounds=500)
# -
print('Start predicting...')
pb = gbm.predict(train, num_iteration=gbm.best_iteration)
pb = np.array(pb)
submit = pd.DataFrame()
submit['y1'] = pb.argsort()[np.arange(len(pb)), -1]
submit['y2'] = pb.argsort()[np.arange(len(pb)), -2]
print(test_score(submit['y1'].values, submit['y2'].values, y))
pb = gbm.predict(test, num_iteration=gbm.best_iteration)
pb = np.array(pb)
lgb_submit = pd.DataFrame()
lgb_submit['y1'] = pb.argsort()[np.arange(len(pb)), -1]
lgb_submit['y2'] = pb.argsort()[np.arange(len(pb)), -2]
lgb_submit.to_csv('lgb_submit.csv', index=False)
# ## NN
# +
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder()
enc.fit(y.reshape(-1, 1))
y_hot = enc.transform(y.reshape(-1, 1)).toarray()  # densify so Keras accepts it as targets
# Build the LM neural network model
from keras.models import Sequential  # neural network container
from keras.layers.core import Dense, Activation  # fully-connected layers and activation functions
from keras.layers import Dropout
from keras.metrics import top_k_categorical_accuracy
from keras.callbacks import EarlyStopping
netfile = './net.model'  # path where the trained network weights are stored
def acc_top2(y_true, y_pred):
return top_k_categorical_accuracy(y_true, y_pred, k=2)
net = Sequential()
net.add(Dense(input_dim = 38, output_dim = 128))
net.add(Activation('relu'))
net.add(Dense(input_dim = 128, output_dim = 256))
net.add(Activation('relu'))
net.add(Dense(input_dim = 256, output_dim = 256))
net.add(Activation('relu'))
net.add(Dropout(0.3))
net.add(Dense(input_dim = 256, output_dim = 512))
net.add(Activation('relu'))
net.add(Dense(input_dim = 512, output_dim = 4))
net.add(Activation('softmax'))
net.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics=['accuracy']) # accuracy
early_stopping = EarlyStopping(monitor='val_loss', patience=50, verbose=2)
net.fit(train, y_hot, epochs=150, batch_size=4096, validation_data=(train[600000:], y_hot[600000:]), callbacks=[early_stopping])
net.save_weights(netfile)  # save the model weights
# -
predict_prob = net.predict_proba(train[600000:])
pb = np.array(predict_prob)
submit = pd.DataFrame()
submit['y1'] = pb.argsort()[np.arange(len(pb)), -1]
submit['y2'] = pb.argsort()[np.arange(len(pb)), -2]
print(test_score(submit['y1'].values, submit['y2'].values, y[600000:]))
predict_prob = net.predict_proba(test)
pb = np.array(predict_prob)
nn_submit = pd.DataFrame()
nn_submit['y1'] = pb.argsort()[np.arange(len(pb)), -1]
nn_submit['y2'] = pb.argsort()[np.arange(len(pb)), -2]
xgb_submit.to_csv('xgb_submit.csv', index=False)
lgb_submit.to_csv('lgb_submit.csv', index=False)
nn_submit.to_csv('nn_submit.csv', index=False)
def wsubmit(xg, lg, nn):
xg_y1 = xg['y1'].values
lg_y1 = lg['y1'].values
lg_y2 = lg['y2'].values
    nn_y1 = nn['y1'].values
submitData = pd.DataFrame()
y1 = []
y2 = []
for i in range(len(xg)):
row_y1 = [xg_y1[i], lg_y1[i], nn_y1[i]]
y1.append(max(row_y1, key=row_y1.count))
if max(row_y1, key=row_y1.count) != lg_y1[i]:
y2.append(lg_y1[i])
else:
y2.append(lg_y2[i])
submitData['y1'] = y1
submitData['y2'] = y2
submitData.to_csv('submit_voting.csv', index=False)
wsubmit(xgb_submit, lgb_submit, nn_submit)
# ## Blending
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(train, y, test_size=0.2, random_state=0)  # test_size: fraction of the data held out for testing
## X_train_1 is used to fit the base models; X_train_2 is combined with the new leaf features to form the new training set
X_train_1, X_train_2, y_train_1, y_train_2 = train_test_split(X_train, y_train, test_size=0.7, random_state=0)
def mergeToOne(X,X2):
return np.hstack((X, X2))
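# Quick shape check for the helper (dimensions here are made up): stacking 38 raw features with 150 leaf-index features yields 188 columns per row.
print(mergeToOne(np.zeros((5, 38)), np.zeros((5, 150))).shape)  # (5, 188)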
# +
import lightgbm as lgb
from xgboost.sklearn import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
xgb = XGBClassifier(booster='gbtree',
learning_rate =0.1,
objective='multi:softmax',
num_class=4,
gamma=0.05,
subsample=0.4,
reg_alpha=1e-05,
n_estimators=50,
metric='multi_logloss',
colsample_bytree=0.7,
silent=1,
nthread=4)
gbm = lgb.LGBMClassifier(learning_rate=0.1,
boosting_type='gbdt',
objective='multiclass',
n_estimators=50,
metric='multi_logloss',
max_depth=7,
bagging_fraction=0.7,
is_unbalance=True)
rf = RandomForestClassifier(n_estimators=50,
min_samples_split=90,
min_samples_leaf=15,
max_depth=8,
oob_score=True)
# +
xgb.fit(X_train_1, y_train_1)
new_feature= xgb.apply(X_train_2)
X_train_new2 = mergeToOne(X_train_2,new_feature)
new_feature_test = xgb.apply(X_test)
X_test_new = mergeToOne(X_test,new_feature_test)
# real_test_xgb = xgb.apply(test)
# real_test = mergeToOne(test, real_test_xgb)
gbm.fit(X_train_1, y_train_1)
new_feature = gbm.apply(X_train_2)
X_train_new2 = mergeToOne(X_train_new2,new_feature)
new_feature_test = gbm.apply(X_test)
X_test_new = mergeToOne(X_test_new,new_feature_test)
# real_test_lgb = gbm.apply(test)
# real_test = mergeToOne(real_test, real_test_lgb)
rf.fit(X_train_1, y_train_1)
new_feature = rf.apply(X_train_2)
X_train_new2 = mergeToOne(X_train_new2, new_feature)
new_feature_test = rf.apply(X_test)
X_test_new = mergeToOne(X_test_new, new_feature_test)
# real_test_lgb = rf.apply(test)
# real_test = mergeToOne(real_test, real_test_lgb)
# -
X_train_new2.shape
X_train_new2.shape
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder()
# enc.fit(y.reshape(-1, 1))
# y = enc.transform(y.reshape(-1, 1))
enc.fit(y_train_2.reshape(-1, 1))
y_train_2 = enc.transform(y_train_2.reshape(-1, 1)).toarray()
# +
# Build the LM neural network model
from keras.models import Sequential  # neural network container
from keras.layers.core import Dense, Activation  # fully-connected layers and activation functions
from keras.layers import Dropout
from keras.callbacks import EarlyStopping
netfile = './net.model'  # path where the trained network weights are stored
net = Sequential()
net.add(Dense(input_dim = 472, output_dim = 128))
net.add(Activation('relu'))
net.add(Dense(input_dim = 128, output_dim = 256))
net.add(Activation('relu'))
net.add(Dense(input_dim = 256, output_dim = 256))
net.add(Activation('relu'))
net.add(Dropout(0.3))
net.add(Dense(input_dim = 256, output_dim = 512))
net.add(Activation('relu'))
net.add(Dense(input_dim = 512, output_dim = 4))
net.add(Activation('softmax'))
net.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics=['accuracy'])
early_stopping = EarlyStopping(monitor='val_loss', patience=50, verbose=2)
y_test_hot = enc.transform(y_test.reshape(-1, 1)).toarray()  # validation targets must be one-hot encoded as well
net.fit(X_train_new2, y_train_2, epochs=100, batch_size=4096, validation_data=(X_test_new, y_test_hot), callbacks=[early_stopping])
# net.fit(train, y, epochs=100, batch_size=4096)
net.save_weights(netfile)  # save the model weights
# -
predict_prob = net.predict_proba(X_test_new)
pb = np.array(predict_prob)
submit = pd.DataFrame()
submit['y1'] = pb.argsort()[np.arange(len(pb)), -1]
submit['y2'] = pb.argsort()[np.arange(len(pb)), -2]
print(test_score(submit['y1'].values, submit['y2'].values, y_test))
temp = manhattan_grouped[manhattan_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
# -
# #### Let's put that into a *pandas* dataframe
# First, let's write a function to sort the venues in descending order.
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
# Now let's create the new dataframe and display the top 10 venues for each neighborhood.
# +
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = manhattan_grouped['Neighborhood']
for ind in np.arange(manhattan_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(manhattan_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
# -
# <a id='item4'></a>
# ## 4. Cluster Neighborhoods
# Run *k*-means to cluster the neighborhood into 5 clusters.
# +
# set number of clusters
kclusters = 5
manhattan_grouped_clustering = manhattan_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(manhattan_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
# -
# Let's create a new dataframe that includes the cluster as well as the top 10 venues for each neighborhood.
# +
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
manhattan_merged = manhattan_data
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
manhattan_merged = manhattan_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
manhattan_merged.head() # check the last columns!
# -
# Finally, let's visualize the resulting clusters
# +
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(manhattan_merged['Latitude'], manhattan_merged['Longitude'], manhattan_merged['Neighborhood'], manhattan_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
# -
# <a id='item5'></a>
# ## 5. Examine Clusters
# Now, you can examine each cluster and determine the discriminating venue categories that distinguish each cluster. Based on the defining categories, you can then assign a name to each cluster. I will leave this exercise to you.
# #### Cluster 1
manhattan_merged.loc[manhattan_merged['Cluster Labels'] == 0, manhattan_merged.columns[[1] + list(range(5, manhattan_merged.shape[1]))]]
# #### Cluster 2
manhattan_merged.loc[manhattan_merged['Cluster Labels'] == 1, manhattan_merged.columns[[1] + list(range(5, manhattan_merged.shape[1]))]]
# #### Cluster 3
manhattan_merged.loc[manhattan_merged['Cluster Labels'] == 2, manhattan_merged.columns[[1] + list(range(5, manhattan_merged.shape[1]))]]
# #### Cluster 4
manhattan_merged.loc[manhattan_merged['Cluster Labels'] == 3, manhattan_merged.columns[[1] + list(range(5, manhattan_merged.shape[1]))]]
# #### Cluster 5
manhattan_merged.loc[manhattan_merged['Cluster Labels'] == 4, manhattan_merged.columns[[1] + list(range(5, manhattan_merged.shape[1]))]]
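# As a starting point for naming the clusters, one possible (purely illustrative) summary counts how often each venue category appears among a cluster's top-10 columns.
# +
def summarize_cluster(df, cluster_label, top_n=5):
    venue_cols = [c for c in df.columns if 'Most Common Venue' in c]
    # stack the top-venue columns into one Series and count category frequencies within the cluster
    counts = df.loc[df['Cluster Labels'] == cluster_label, venue_cols].stack().value_counts()
    return counts.head(top_n)

summarize_cluster(manhattan_merged, 0)
# -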
# ### Thank you for completing this lab!
#
# This notebook was created by [Alex Aklson](https://www.linkedin.com/in/aklson/) and [Polong Lin](https://www.linkedin.com/in/polonglin/). I hope you found this lab interesting and educational. Feel free to contact us if you have any questions!
# This notebook is part of a course on **Coursera** called *Applied Data Science Capstone*. If you accessed this notebook outside the course, you can take this course online by clicking [here](http://cocl.us/DP0701EN_Coursera_Week3_LAB2).
# <hr>
#
# Copyright © 2018 [Cognitive Class](https://cognitiveclass.ai/?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
| 20,041 |
/Neural-Style/neural_style.ipynb
|
e738c5c3996d3509e677f645a1143d320ab0f248
|
[] |
no_license
|
FahadSahli/AI-generated-art
|
https://github.com/FahadSahli/AI-generated-art
| 2 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,842,013 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="p8l3cUmJiADO" colab_type="text"
# # Imports
# + id="mOR7uBxRXiOw" colab_type="code" colab={}
# %matplotlib inline
# + id="6TTPZM8oXiO4" colab_type="code" colab={}
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from PIL import Image
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
import torchvision.models as models
import copy
# + [markdown] id="SakPwGmOiIxP" colab_type="text"
# # Device Setup
# + id="2Z8sMLxMXiO9" colab_type="code" colab={}
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# + id="VAbIDwQlXiPA" colab_type="code" outputId="3e2dda8a-4387-4435-9529-f1f187f92505" colab={"base_uri": "https://localhost:8080/", "height": 34}
torch.cuda.is_available()
# + [markdown] id="5Hjd3rTy7802" colab_type="text"
# # Image Processing
# + [markdown] id="N2U_empxXiPF" colab_type="text"
# ## Loading Images
#
#
# + id="KGiw6h7VjGnK" colab_type="code" colab={}
# A method to load and transform an image
def image_loader(image_path):
image = Image.open(image_path)
# Add a fake batch dimension to fit network's input dimensions
image = loader(image).unsqueeze(0)
return image.to(device, torch.float)
# + id="TIDowu_GkvKD" colab_type="code" colab={}
# Build an image transformation object
loader = transforms.Compose([
transforms.Resize((982,256)), # Scale an image to (982,256)
transforms.ToTensor()]) # Transform the image into a torch tensor
# + id="lrwbI9a3nPe7" colab_type="code" colab={}
# The number of styles (or content images) to be used
num_of_styles = 4
# Lists to hold file names of style and content images
style_names = []
content_names = []
image_format = '.jpeg'
# Add file names to the lists
for i in range(num_of_styles):
style_names.append('style' + str(i+1) + image_format)
content_names.append('content' + str(i+1) + image_format)
# + id="GIqIyh9Jnfk0" colab_type="code" colab={}
# Lists to hold style and content images
style_images = []
content_images = []
images_directory = 'images/'
# Load images into the lists
for style_name, content_name in zip(style_names, content_names):
style_images.append(image_loader(images_directory + style_name))
content_images.append(image_loader(images_directory + content_name))
# + [markdown] id="p_f2Hz_N8F7A" colab_type="text"
# ## Display Images
# + id="OwBn3urv8v4a" colab_type="code" colab={}
# A method to display an image
def imshow(tensor, unloader, title=None):
# Clone the tensor to preserve it
image = tensor.cpu().clone()
# Remove the fake batch dimension
image = image.squeeze(0)
image = unloader(image)
plt.imshow(image)
if title is not None:
plt.title(title)
# Pause a bit so that plots are updated
plt.pause(0.001)
# + id="bez12ioCXiPK" colab_type="code" colab={}
# An image transformation object to convert a tensor into PIL object
unloader = transforms.ToPILImage()
# + id="CGqw-_Cw-Hu-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="78825089-d432-4e19-e86c-cfe427fb5c84"
# Display pairs of style and content images
for style_image, content_image in zip(style_images, content_images):
print("A pair of style and content images:")
imshow(style_image, unloader, title='Style Image')
imshow(content_image, unloader, title='Content Image')
print("")
# + [markdown] id="GHqKXSXzXiPa" colab_type="text"
# # Loss Functions
#
#
# + id="W5nb2d6YXiPb" colab_type="code" colab={}
class ContentLoss(nn.Module):
def __init__(self, target,):
super(ContentLoss, self).__init__()
# we 'detach' the target content from the tree used
# to dynamically compute the gradient: this is a stated value,
# not a variable. Otherwise the forward method of the criterion
# will throw an error.
self.target = target.detach()
def forward(self, input):
self.loss = F.mse_loss(input, self.target)
return input
# + [markdown] id="1NMtiHZUXiPe" colab_type="text"
# ## Style Loss
#
#
#
# + id="AwICNv9EXiPf" colab_type="code" colab={}
def gram_matrix(input):
a, b, c, d = input.size() # a=batch size(=1)
# b=number of feature maps
# (c,d)=dimensions of a f. map (N=c*d)
    features = input.view(a * b, c * d)  # reshape F_XL into \hat F_XL
    G = torch.mm(features, features.t())  # compute the gram product
    # we 'normalize' the values of the gram matrix
    # by dividing by the number of elements in each feature map.
    return G.div(a * b * c * d)
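# A quick shape check (illustrative): for a feature map of size (1, 64, 32, 32), gram_matrix returns a (64, 64) matrix of normalized channel correlations.
print(gram_matrix(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([64, 64])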
# + id="uhA222LnXiPj" colab_type="code" colab={}
class StyleLoss(nn.Module):
def __init__(self, target_feature):
super(StyleLoss, self).__init__()
self.target = gram_matrix(target_feature).detach()
def forward(self, input):
G = gram_matrix(input)
self.loss = F.mse_loss(G, self.target)
return input
# + [markdown] id="g7vEL6qTXiPm" colab_type="text"
# # Import and Setup a Pre-trained VGG16 Model
# + id="4LwUUo37XiPn" colab_type="code" colab={}
cnn = models.vgg16(pretrained=True).features.to(device).eval()
# + id="unsfA0TkXiPr" colab_type="code" colab={}
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device)
# create a module to normalize input image so we can easily put it in a
# nn.Sequential
class Normalization(nn.Module):
def __init__(self, mean, std):
super(Normalization, self).__init__()
# .view the mean and std to make them [C x 1 x 1] so that they can
# directly work with image Tensor of shape [B x C x H x W].
# B is batch size. C is number of channels. H is height and W is width.
self.mean = torch.tensor(mean).view(-1, 1, 1)
self.std = torch.tensor(std).view(-1, 1, 1)
def forward(self, img):
# normalize img
return (img - self.mean) / self.std
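# Illustrative check (not in the original notebook): normalizing an all-ones batch yields (1 - mean) / std per channel.
norm_check = Normalization(cnn_normalization_mean, cnn_normalization_std).to(device)
print(norm_check(torch.ones(1, 3, 8, 8, device=device))[0, :, 0, 0])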
# + id="vVtXBd4LXiPu" colab_type="code" colab={}
# desired depth layers to compute style/content losses :
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']
def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
style_img, content_img,
content_layers=content_layers_default,
style_layers=style_layers_default):
cnn = copy.deepcopy(cnn)
# normalization module
normalization = Normalization(normalization_mean, normalization_std).to(device)
# just in order to have an iterable access to or list of content/syle
# losses
content_losses = []
style_losses = []
# assuming that cnn is a nn.Sequential, so we make a new nn.Sequential
# to put in modules that are supposed to be activated sequentially
model = nn.Sequential(normalization)
i = 0 # increment every time we see a conv
for layer in cnn.children():
if isinstance(layer, nn.Conv2d):
i += 1
name = 'conv_{}'.format(i)
elif isinstance(layer, nn.ReLU):
name = 'relu_{}'.format(i)
# The in-place version doesn't play very nicely with the ContentLoss
# and StyleLoss we insert below. So we replace with out-of-place
# ones here.
layer = nn.ReLU(inplace=False)
elif isinstance(layer, nn.MaxPool2d):
name = 'pool_{}'.format(i)
elif isinstance(layer, nn.BatchNorm2d):
name = 'bn_{}'.format(i)
else:
raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
model.add_module(name, layer)
if name in content_layers:
# add content loss:
target = model(content_img).detach()
content_loss = ContentLoss(target)
model.add_module("content_loss_{}".format(i), content_loss)
content_losses.append(content_loss)
if name in style_layers:
# add style loss:
target_feature = model(style_img).detach()
style_loss = StyleLoss(target_feature)
model.add_module("style_loss_{}".format(i), style_loss)
style_losses.append(style_loss)
# now we trim off the layers after the last content and style losses
for i in range(len(model) - 1, -1, -1):
if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss):
break
model = model[:(i + 1)]
return model, style_losses, content_losses
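# Illustrative usage of the builder above for the first style/content pair (just to inspect what it produces; the returned model is trimmed right after the last loss module).
# +
model_check, style_losses_check, content_losses_check = get_style_model_and_losses(
    cnn, cnn_normalization_mean, cnn_normalization_std, style_images[0], content_images[0])
print(len(style_losses_check), "style losses and", len(content_losses_check), "content losses inserted")
print(model_check)
# -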
# + [markdown] id="BphahmXFXiPx" colab_type="text"
# # Input Images
# + id="-noJia49XiPy" colab_type="code" colab={}
input_images = []
for image in content_images:
input_images.append(image.clone())
# + [markdown] id="XJWIW4jyXiP_" colab_type="text"
# # Training Setup
#
#
# + id="mhw6v7f_XiP_" colab_type="code" colab={}
def get_input_optimizer(input_img):
# this line to show that input is a parameter that requires a gradient
optimizer = optim.LBFGS([input_img.requires_grad_()])
return optimizer
# + id="kmdgtn_TXiQD" colab_type="code" colab={}
def run_style_transfer(cnn, normalization_mean, normalization_std,
content_img, style_img, input_img, num_steps=300,
style_weight=100000, content_weight=1):
"""Run the style transfer."""
print('Building the style transfer model..')
model, style_losses, content_losses = get_style_model_and_losses(cnn,
normalization_mean, normalization_std, style_img, content_img)
optimizer = get_input_optimizer(input_img)
print('Optimizing..')
run = [0]
while run[0] <= num_steps:
def closure():
# correct the values of updated input image
input_img.data.clamp_(0, 1)
optimizer.zero_grad()
model(input_img)
style_score = 0
content_score = 0
for sl in style_losses:
style_score += sl.loss
for cl in content_losses:
content_score += cl.loss
style_score *= style_weight
content_score *= content_weight
loss = style_score + content_score
loss.backward()
run[0] += 1
if run[0] % 10 == 0:
print("run {}:".format(run))
print('Style Loss : {:4f} Content Loss: {:4f}'.format(
style_score.item(), content_score.item()))
print()
return style_score + content_score
optimizer.step(closure)
# a last correction...
input_img.data.clamp_(0, 1)
return input_img
# + [markdown] id="p3OzOrx_XiQF" colab_type="text"
# # Training
#
#
# + id="9IwV3MjPXiQG" colab_type="code" colab={}
output_images = []
"""
'style_images', 'content_images', and 'input_images' have the same length.
'style_images[i]' corresponds to 'content_images[i]' and 'content_images[i]'
corresponds to 'input_images[i]'.
"""
# Train the model 4 different times, and record the 4 output images
for i in range(len(style_images)):
output_images.append(run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std,
content_images[i], style_images[i], input_images[i]))
# + [markdown] id="yQ0GEpWlEkx3" colab_type="text"
# # Results
# + [markdown] id="VgmovWLOEyMh" colab_type="text"
# ## Individual Output Image
# + id="ozGCsZZoE3JN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="4e354213-1c94-4ce5-9e82-2f82ae8dbeaf"
# Display Individual output image
for output_image in output_images:
imshow(output_image, unloader, title='Output Image')
print("")
# + [markdown] id="XAsVdEOAFnvh" colab_type="text"
# ## Combined Results
# + id="WQLZLyCDGCaJ" colab_type="code" colab={}
# A method to concatenate several images. It is retrieved from https://note.nkmk.me/en/python-pillow-concat-images/.
def get_concat_h_multi_resize(im_list, resample=Image.BICUBIC):
min_height = min(im.height for im in im_list)
im_list_resize = [im.resize((int(im.width * min_height / im.height), min_height),resample=resample)
for im in im_list]
total_width = sum(im.width for im in im_list_resize)
dst = Image.new('RGB', (total_width, min_height))
pos_x = 0
for im in im_list_resize:
dst.paste(im, (pos_x, 0))
pos_x += im.width
return dst
# + id="-KCg-kjoHMk1" colab_type="code" colab={}
# Convert output images to PIL objects
PIL_images = []
for output_image in output_images:
# Copy the tensor to host memory
output_image = output_image.cpu().clone()
# Remove the fake batch dimension
output_image = output_image.squeeze(0)
# Convert it to PIL object, and add it to 'PIL_images'
PIL_images.append(unloader(output_image))
# + id="YfghgaoJXiQk" colab_type="code" colab={}
# Get the combined output
full_image = get_concat_h_multi_resize(PIL_images)
# + [markdown] id="PXVZd-kfK7V-" colab_type="text"
# # Final Result
# + id="2p9lhGsQXiQo" colab_type="code" outputId="ff8e2c5d-f9b2-40a4-daa7-fc6297263ada" colab={"base_uri": "https://localhost:8080/", "height": 999}
full_image
| 13,516 |
/Network-based-Intrusion-Detection-Systems-master/LCFS implementation.ipynb
|
9b48e0c3ab5bed46012e244601a1e4c6394f0cdd
|
[] |
no_license
|
wzp21142/NIDS_Reproduction
|
https://github.com/wzp21142/NIDS_Reproduction
| 20 | 1 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 35,562 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
from time import time
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
col_names = ["duration", "protocol_type", "service", "flag", "src_bytes", "dst_bytes", "land", "wrong_fragment", "urgent", "hot",
"num_failed_logins", "logged_in", "num_compromised", "root_shell", "su_attempted", "num_root", "num_file_creations",
"num_shells", "num_access_files", "num_outbound_cmds", "is_host_login", "is_guest_login", "count", "srv_count",
"serror_rate", "srv_serror_rate", "rerror_rate", "srv_rerror_rate", "same_srv_rate", "diff_srv_rate",
"srv_diff_host_rate", "dst_host_count","dst_host_srv_count", "dst_host_same_srv_rate", "dst_host_diff_srv_rate",
"dst_host_same_src_port_rate", "dst_host_srv_diff_host_rate", "dst_host_serror_rate", "dst_host_srv_serror_rate",
"dst_host_rerror_rate", "dst_host_srv_rerror_rate", "label"]
df26 = pd.read_csv("kddcup.data_10_percent_corrected", names = col_names)
df26.describe()
# +
cleanup_nums = {'protocol_type' : {'tcp' : 0, 'udp' : 1, 'icmp' : 2},
'service' : {'http' : 0, 'smtp' : 1, 'finger' : 2, 'domain_u' : 3, 'auth' : 4, 'telnet' : 5, 'ftp' : 6,
'eco_i' : 7, 'ntp_u' : 8, 'ecr_i' : 9, 'other' : 10, 'private' : 11, 'pop_3' : 12,
'ftp_data' : 13, 'rje' : 14, 'time' : 15, 'mtp' : 16, 'link' : 17, 'remote_job' : 18,
'gopher' : 19, 'ssh' : 20, 'name' : 21, 'whois' : 22, 'domain' : 23, 'login' : 24, 'imap4' : 25,
'daytime' : 26, 'ctf' : 27, 'nntp' : 28, 'shell' : 29, 'IRC' : 30, 'nnsp' : 31, 'http_443' : 32,
'exec' : 33, 'printer' : 34, 'efs' : 35, 'courier' : 36, 'uucp' : 37, 'klogin' : 38,
'kshell' : 39, 'echo' : 40, 'discard' : 41, 'systat' : 42, 'supdup' : 43, 'iso_tsap' : 44,
'hostnames' : 45, 'csnet_ns' : 46, 'pop_2' : 47, 'sunrpc' : 48, 'uucp_path' : 49,
'netbios_ns' : 50, 'netbios_ssn' : 51, 'netbios_dgm' : 52, 'sql_net' : 53, 'vmnet' : 54,
'bgp' : 55, 'Z39_50' : 56, 'ldap' : 57, 'netstat' : 58, 'urh_i' : 59, 'X11' : 60, 'urp_i' : 61,
                             'pm_dump' : 62, 'tftp_u' : 63, 'tim_i' : 64, 'red_i' : 65},
'flag' : {'SF' : 0, 'S1' : 1, 'REJ' : 2, 'S2' : 3, 'S0' : 4, 'S3' : 5, 'RSTO' : 6, 'RSTR' : 7, 'RSTOS0' : 8,
'OTH' : 9, 'SH' : 10},
'label' : {'normal.' : 1, 'buffer_overflow.' : 0, 'loadmodule.' : 0, 'perl.' : 0, 'neptune.' : 0,
'smurf.' : 0, 'guess_passwd.' : 0, 'pod.' : 0, 'teardrop.' : 0, 'portsweep.' : 0, 'ipsweep.' : 0,
'land.' : 0, 'ftp_write.' : 0, 'back.' : 0, 'imap.' : 0, 'satan.' : 0, 'phf.' : 0, 'nmap.' : 0,
'multihop.' : 0, 'warezmaster.' : 0, 'warezclient.' : 0, 'spy.' : 0, 'rootkit.' : 0}}
df26.replace(cleanup_nums, inplace = True)
df26.head()
# +
df1 = df26
from sklearn import preprocessing
minmax_scale = preprocessing.MinMaxScaler().fit(df1[col_names])
df1[col_names] = minmax_scale.transform(df1[col_names])
# -
corr = df1.corr()
solution = df1['logged_in']
df1.drop('logged_in', axis = 1, inplace = True)
# +
solution = pd.DataFrame()
while df1.shape[1] > 1:
maxval = -1000
maxind = -1
corr = df1.corr()
ind = -1
rpind = -1
name = 'Rahul'
for rp in df1:
rpind += 1
if rp == 'label':
break
        ind = -1  # reset the column index for each rp
        for i in df1:
            ind += 1
if i == 'label':
continue
val1 = corr.iloc[ind, rpind]
            temp1 = pd.concat([df1[[i]], solution], axis=1)  # candidate feature alongside the already-selected features
corrtemp1 = temp1.corr()
n = solution.shape[1]
            # redundancy penalty: subtract 0.5 times the average correlation with each already-selected feature
            for tempind in range(1, n + 1):
                val1 -= 0.5 / n * corrtemp1.iloc[0, tempind]
if val1 > maxval:
maxval = val1
maxind = ind
name = i
temp1 = temp1.iloc[0 : 0]
corrtemp1 = corrtemp1.iloc[0 : 0]
    print(name)
    # move the best-scoring feature from df1 into the selected set
    solution = pd.concat([solution, df1[[name]]], axis=1)
    df1.drop(name, axis=1, inplace=True)
# -
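# For reference, a compact sketch of a linear-correlation feature-selection score in the spirit of the loop above: a candidate's correlation with the target minus 0.5 times its average absolute correlation with the already-selected features. Treating the 'label' column as the target and using absolute correlations are illustrative readings, not a claim about the original code.
# +
def lcfs_score(df, candidate, selected, target='label', penalty=0.5):
    relevance = df[candidate].corr(df[target])
    if not selected:
        return relevance
    redundancy = sum(abs(df[candidate].corr(df[s])) for s in selected) / len(selected)
    return relevance - penalty * redundancy
# -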
| 4,904 |
/Portfolio/Recent_Projects/Sector_Returns/Project_csv/StdDev.ipynb
|
2db21c58f990c3828241ae12829abc464bc6d27f
|
[] |
no_license
|
scottandersen23/Data_Science
|
https://github.com/scottandersen23/Data_Science
| 0 | 6 | null | 2020-10-10T15:56:37 | 2020-09-22T13:46:39 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 426,573 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df1
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'E': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df2
result = pd.concat([df1,df2])
result
# + [markdown] colab_type="text" id="4iICR_E0jKcC"
# ### Setup
# + colab_type="code" id="X1TYSGJVjKcD" colab={}
import sys
import re
import numpy as np
import torch.nn as nn
import torch
import torch.optim as optim
# + colab_type="code" id="rNQBqiH3jKcH" outputId="1bcef26f-6c33-42e1-f8ce-a49c028e2b4e" colab={"base_uri": "https://localhost:8080/", "height": 34}
# if this cell prints "Running on cpu", you must switch runtime environments
# go to Runtime > Change runtime type > Hardware accelerator > GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Running on {}".format(device))
# + [markdown] colab_type="text" id="o0t2Z1vRjKcL"
# ### Download pretrained word embeddings
#
# In this assignment, we will still be using [GloVe](https://nlp.stanford.edu/projects/glove/) pretrained word embeddings.
#
# **Note**: this section will take *several minutes*, since the embedding files are large. Files in Colab may be cached between sessions, so you may or may not need to redownload the files each time you reconnect.
# + colab_type="code" id="0u26MFsgjKcM" colab={"base_uri": "https://localhost:8080/", "height": 442} outputId="44708888-54d5-43ed-ee4b-8801aa045746"
# !wget http://nlp.stanford.edu/data/glove.6B.zip
# !unzip glove*.zip
# + [markdown] colab_type="text" id="njKMNfHmjKcR"
# ### Question 1. Checking for Projectivity
# In this question, you are supposed to implement the `is_projective` function below.
# * A tree structure is said to be [projective](https://en.wikipedia.org/wiki/Discontinuity_(linguistics)) if there are no crossing dependency edges and/or projection lines.
# * The function should take a sentence as input and returns True if and only if the tree is projective.
# + colab_type="code" id="fME7S_whjKcT" colab={}
def is_projective(toks):
"""
params: toks is a list of (idd, tok, pos, head, lab) for a sentence
return True if and only if the sentence has a projective dependency tree
"""
# Implement your code below
##################
# YOUR CODE HERE
##################
    #need to loop through all toks and check that each arc between them is projective
for i in range(len(toks)):
tok_1 = toks[i]
head = tok_1[3]
dependent = tok_1[0]
#if the tok's head is the root
if head == 0:
continue
#if dep is found at higher index than head
if dependent > head:
start = head + 1
finish = dependent
#if head is found at higher index than dep
else:
start = dependent + 1
finish = head
#go through range of arcs from start to finish (large arc looking at)
#mini_head is the head of the arc in range that we are looking at
range_arcs = [toks[i-1] for i in np.arange(start, finish)]
for i in range_arcs:
mini_head = i[3]
while True:
if mini_head == 0:
return False
if mini_head == head:
break
temp = toks[mini_head - 1]
mini_head = temp[3]
return True
# + colab_type="code" id="ccPr3IevJFd8" outputId="ebf900cf-cb12-440d-881c-b8f9d6453ec2" colab={"base_uri": "https://localhost:8080/", "height": 34}
def sanity_check_is_projective():
"""
Sanity check for the function is_projective()
"""
# "From the AP comes this story:" should be projective
proj_toks = [(1, 'From', 'IN', 3, 'case'),
(2, 'the', 'DT', 3, 'det'),
(3, 'AP', 'NNP', 4, 'obl'),
(4, 'comes', 'VBZ', 0, 'root'),
(5, 'this', 'DT', 6, 'det'),
(6, 'story', 'NN', 4, 'nsubj'),
(7, ':', ':', 4, 'punct')]
assert is_projective(proj_toks) == True
# "I saw a man today who is tall" should not be projective
non_proj_toks = [(1, 'I', 'PRP', 2, 'nsubj'),
(2, 'saw', 'VBD', 0, 'root'),
(3, 'a', 'DT', 4, 'det'),
(4, "man", 'NN', 2, 'obj'),
(5, 'today', 'NN', 2, 'nmod'),
(6, 'who', 'WP', 8, 'nsubj'),
(7, 'is', 'VBZ', 8, 'cop'),
(8, 'tall', 'JJ', 4, 'acl:relcl')]
assert is_projective(non_proj_toks) == False
print("Congrats! You have passed the basic sanity check of is_projective().")
sanity_check_is_projective()
# + [markdown] colab_type="text" id="eCZUcO60jKcW"
# ### Question 2.a.
# Implement the first helper function `perform_shift` to achieve the SHIFT operation.
# * The SHIFT Operation removes the word from the front of the input buffer and pushes it onto stack.
# + colab_type="code" id="CFzJksVqjKcX" colab={}
def perform_shift(wbuffer, stack, arcs,
configurations, gold_transitions):
"""
perform the SHIFT operation
"""
# Implement your code below
# your code should:
# 1. append the latest configuration to configurations
# 2. append the latest action to gold_transitions
# 3. update wbuffer, stack and arcs accordingly
# hint: note that the order of operations matters
# as we want to capture the configurations and transition rules
# before making changes to the stack, wbuffer and arcs
##################
# YOUR CODE HERE
##################
#1-configurations update
temp_config = wbuffer.copy(), stack.copy(), arcs.copy()
configurations.append(temp_config)
#2-gold_transitions update
gold_transitions.append('SHIFT')
#3-buff,stack,arcs update
stack.append(wbuffer.pop())
# + colab_type="code" id="Nt_K4zkCJRcV" outputId="e186dd71-2a82-41bb-aeb6-a07bada7da97" colab={"base_uri": "https://localhost:8080/", "height": 34}
def sanity_check_perform_shift():
"""
Sanity check for the function perform_shift()
"""
# Before perform SHIFT
wbuffer = [3, 2, 1]
stack = [0]
arcs = []
configurations = []
gold_transitions = []
# Perform SHIFT
perform_shift(wbuffer, stack, arcs, configurations, gold_transitions)
# After perform SHIFT
assert wbuffer == [3, 2], "The result for wbuffer is not correct"
assert stack == [0, 1], "The result for stack is not correct"
assert arcs == [], "The result for arcs is not correct"
assert configurations == [([3, 2, 1], [0], [])], "The result for configurations is not correct"
assert gold_transitions == ['SHIFT'], "The result for gold_transitions is not correct"
print("Cool! You have passed the basic sanity check of perform_shift().")
sanity_check_perform_shift()
# + [markdown] colab_type="text" id="JxqIrNnzjKcb"
# ### Question 2.b.
# Implement the second helper function `perform_arc` to achieve the ARC operation.
#
# * LEFT-ARC (label): assert relation between head at $stack_1$ and dependent at $stack_2$: remove $stack_2$
# * RIGHT-ARC (label): assert relation between head at $stack_2$ and dependent at $stack_1$; remove $stack_1$
# + colab_type="code" id="NW9_Y2fyjKcc" colab={}
def perform_arc(direction, dep_label,
wbuffer, stack, arcs,
configurations, gold_transitions):
"""
params:
- direction: {"LEFT", "RIGHT"}
- dep_label: label for the dependency relations
Perform LEFTARC_ and RIGHTARC_ operations
"""
# Implement your code below
# your code should:
# 1. append the latest configuration to configurations
# 2. append the latest action to gold_transitions
# 3. update wbuffer, stack and arcs accordingly
# hint: note that the order of operations matters
# as we want to capture the configurations and transition rules
# before making changes to the stack, wbuffer and arcs
##################
# YOUR CODE HERE
##################
#1-configurations update
temp_config = wbuffer.copy(), stack.copy(), arcs.copy()
configurations.append(temp_config)
#2-gold_transitions update
gold_transitions.append(direction+'ARC_'+dep_label)
#3-update stack when direction of arc = LEFT
if direction == 'LEFT':
head = stack[len(stack) - 1]
dependent = stack[len(stack) - 2]
stack.pop(len(stack) - 2)
#3-update stack when direction of arc = RIGHT
if direction == 'RIGHT':
dependent = stack.pop()
head = stack[len(stack) - 1]
#4-update arcs with appropriate dep_label, head, dependent
arcs.append((dep_label,head,dependent))
# + colab_type="code" id="YbvwB7kkJV95" outputId="f91bd64d-2158-4565-c053-f5ae8eeec5f5" colab={"base_uri": "https://localhost:8080/", "height": 34}
def sanity_check_perform_arc():
"""
Sanity check for the function perform_arc()
"""
# Before perform ARC
direction = 'RIGHT'
dep_label = 'punct'
wbuffer = [5, 4, 3]
stack = [0, 1, 2]
arcs = []
configurations = [([5, 4, 3, 2, 1], [0], []),
([5, 4, 3, 2], [0, 1], [])]
gold_transitions = ['SHIFT', 'SHIFT']
# Perform ARC
perform_arc(direction, dep_label, wbuffer, stack, arcs, configurations, gold_transitions)
# After perform ARC
assert wbuffer == [5, 4, 3], "The result for wbuffer is not correct"
assert stack == [0, 1], "The result for stack is not correct"
assert arcs == [('punct', 1, 2)], "The result for arcs is not correct"
assert configurations == [([5, 4, 3, 2, 1], [0], []),
([5, 4, 3, 2], [0, 1], []),
([5, 4, 3], [0, 1, 2], [])], \
"The result for configurations is not correct"
assert gold_transitions == ['SHIFT', 'SHIFT', 'RIGHTARC_punct'], "The result for gold_transitions is not correct"
print("You have passed the basic sanity check of perform_arc().")
sanity_check_perform_arc()
# + [markdown] colab_type="text" id="mmDSyQpqjKcg"
# ### Question 2.c.
# Now, since we have implemented the helper functions, let's use them to complete `tree_to_actions`.
#
# `tree_to_actions` takes wbuffer, stack, arcs and deps as input, returns configuration of the parser and action for the parser.
# + id="iEymjHMfC0kj" colab_type="code" colab={}
def tree_to_actions(wbuffer, stack, arcs, deps):
"""
params:
wbuffer: a list of word indices; the top of buffer is at the end of the list
stack: a list of word indices; the top of buffer is at the end of the list
arcs: a list of (label, head, dependent) tuples
Given wbuffer, stack, arcs and deps
Return configurations and gold_transitions (actions)
"""
# configurations:
# A list of tuples of lists
# [(wbuffer1, stack1, arcs1), (wbuffer2, stack2, arcs2), ...]
# Keeps tracks of the states at each step
configurations=[]
# gold_transitions:
# A list of action strings, e.g ["SHIFT", "LEFTARC_nsubj"]
# Keeps tracks of the actions at each step
gold_transitions=[]
# Implement your code below
# hint:
# 1. configurations[i] and gold_transitions[i] should
# correspond to the states of the wbuffer, stack, arcs
# (before the action was taken) and action to take at step i
# 2. you should call perform_shift and perform_arc in your code
##################
# YOUR CODE HERE
##################
import copy
copy_of_deps = copy.deepcopy(deps)
#perform_shift to start, since stack = [0]
perform_shift(wbuffer, stack, arcs, configurations, gold_transitions)
while stack != [0]:
#if the root is reached, create a RIGHTARC
if len(stack) == 2 and len(wbuffer) == 0:
key = stack[len(stack) - 2]
tuple_key = (stack[len(stack) - 2], stack[len(stack) - 1])
perform_arc('RIGHT', deps[key][tuple_key], wbuffer, stack, arcs, configurations, gold_transitions)
#otherwise, either create a LEFTARC or RIGHTARC
else:
head = stack[len(stack) - 1]
dependent = stack[len(stack) - 2]
#create LEFTARC if last item (head) in stack is a key in deps and (head,dep) is found at that key in deps.
if head in deps and ((head, dependent) in deps[head]):
#get the label at that tuple
dep_label = deps[head][(head, dependent)]
perform_arc('LEFT', dep_label, wbuffer, stack, arcs, configurations, gold_transitions)
#delete this arc from the copy of deps to keep track of what arcs are left to use
del copy_of_deps[head][(head, dependent)]
#Create RIGHTARC if all deps of last item in stack are used up AND if second to last item in stack is key in deps
#and (dep,head) is found at this key in deps
elif ((head not in copy_of_deps) or (head in copy_of_deps and not copy_of_deps[head])) and (dependent in copy_of_deps and (dependent, head) in copy_of_deps[dependent]):
#get the label at that tuple
dep_label = deps[dependent][(dependent, head)]
perform_arc('RIGHT', dep_label, wbuffer, stack, arcs, configurations, gold_transitions)
#delete this arc from the copy of deps to keep track of what arcs are left to use
del copy_of_deps[dependent][(dependent, head)]
elif len(wbuffer) != 0:
#SHIFT, if RIGHTARC and LEFTARC don't exist--but only when the buffer is not empty!
perform_shift(wbuffer, stack, arcs, configurations, gold_transitions)
else:
break
return configurations, gold_transitions
# + colab_type="code" id="fYpLgArZJb1X" outputId="e464c951-e522-4cfd-b340-47e75cb37381" colab={"base_uri": "https://localhost:8080/", "height": 34}
def sanity_check_tree_to_actions():
"""
Sanity check for the function tree_to_actions()
"""
# Before tree_to_actions
wbuffer = [9, 8, 7, 6, 5, 4, 3, 2, 1]
stack = [0]
arcs = []
deps = {5: {(5, 9): 'punct', (5, 8): 'obl', (5, 4): 'advmod', (5, 3): 'aux:pass', (5, 2): 'nsubj:pass'},
8: {(8, 7): 'det', (8, 6): 'case'}, 0: {(0, 5): 'root'}, 2: {(2, 1): 'nmod:poss'}}
tree_to_actions(wbuffer, stack, arcs, deps)
# After tree_to_actions
assert wbuffer == [], "The result for wbuffer is not correct"
assert stack == [0], "The result for stack is not correct"
assert arcs == [('nmod:poss', 2, 1), ('advmod', 5, 4), ('aux:pass', 5, 3), ('nsubj:pass', 5, 2),
('det', 8, 7), ('case', 8, 6), ('obl', 5, 8), ('punct', 5, 9), ('root', 0, 5)], \
"The result for arcs is not correct"
assert deps == {5: {(5, 9): 'punct', (5, 8): 'obl', (5, 4): 'advmod', (5, 3): 'aux:pass', (5, 2): 'nsubj:pass'},
8: {(8, 7): 'det', (8, 6): 'case'}, 0: {(0, 5): 'root'}, 2: {(2, 1): 'nmod:poss'}}, \
"The result for deps is not correct"
print("You have passed the basic sanity check of tree_to_actions()! One more function to go.")
sanity_check_tree_to_actions()
# + [markdown] colab_type="text" id="IlLh-Q_sjKcl"
# ### Question 3. Tree Parsing with Predictions
# Implement action_to_tree, which will update the dependency tree based on the action predictions.
# * Don't forget to use `isvalid` to check the validity of the possible actions!
# + colab_type="code" id="pFTfAwLWjKcm" colab={}
def isvalid(stack, wbuffer, action):
"""
Helper function that returns True only if an action is
legal given the current states of the stack and wbuffer
"""
if action == "SHIFT" and len(wbuffer) > 0:
return True
if action.startswith("RIGHTARC") and len(stack) > 1 and stack[-1] != 0:
return True
if action.startswith("LEFTARC") and len(stack) > 1 and stack[-2] != 0:
return True
return False
# + colab_type="code" id="RH_vzMzRjKcp" colab={}
def action_to_tree(tree, predictions, wbuffer, stack, arcs, reverse_labels):
"""
params:
tree:
a dictionary of dependency relations (head, dep_label)
{
child1: (head1, dep_lebel1),
child2: (head2, dep_label2), ...
}
predictions:
a numpy column vector of probabilities for different dependency labels
as ordered by the variable reverse_labels
predictions.shape = (1, total number of dependency labels)
wbuffer: a list of word indices; top of buffer is at the end of the list
stack: a list of word indices; top of stack is at the end of the list
arcs: a list of (label, head, dependent) tuples
"""
# Implement your code below
# hint:
# 1. the predictions contains the probability distribution for all
# possible actions for a single step, and you should choose one
# and update the tree only once
# 2. some actions predicted are not going to be valid
# (e.g., shifting if nothing is on the buffer)
# so sort probs and keep going until you find one that is valid.
##################
# YOUR CODE HERE
##################
#convert predictions to a list
list_of_predictions = predictions.tolist()
list_of_predictions = list_of_predictions[0]
#zip together predictions and reverse_labels so correct labels correspond to predictions
list_of_tuples = list(zip(list_of_predictions, reverse_labels))
#sort the actions in descending order
sorted_action_tuples = sorted(list_of_tuples, reverse = True)
for i in range(len(sorted_action_tuples)):
action = sorted_action_tuples[i][1]
if isvalid(stack, wbuffer, action):
#if the action is valid and is SHIFT, then SHIFT
if action == 'SHIFT':
stack.append(wbuffer.pop())
#otherwise check the direction of the arc after checking it's valid
else:
#get the direction and dep_label of the action
direction_of_arc, dep_label = action.split('_')
#if valid arc is RIGHT, properly assign head and dependent
#update arcs with arc
#update the stack by popping off the last element in it aka dependent
if direction_of_arc == 'RIGHTARC':
head = stack[len(stack) - 2]
dep = stack[len(stack) - 1]
arc = dep_label, head, dep
arcs.append(arc)
stack.pop()
#if valid arc is LEFT, properly assign head and dependent
#update arcs with arc
#update stack by popping off the second to last element of it aka head
if direction_of_arc == 'LEFTARC':
head = stack[len(stack) - 1]
dep = stack[len(stack) - 2]
arc = dep_label, head, dep
arcs.append(arc)
stack.pop(len(stack) - 2)
#update the tree with {dependent: (head,dep_label)} as instances
tree[dep] = (head, dep_label)
else:
continue
return
# + colab_type="code" id="W7ZoAad9JiPF" outputId="404f9f00-0c7f-4f54-d716-661b1110f0d6" colab={"base_uri": "https://localhost:8080/", "height": 34}
def sanity_check_action_to_tree():
"""
Sanity check for the function action_to_tree()
"""
# Before action
tree = {}
predictions = np.array([[ 8.904456 , 2.1306312 , -0.6716528 , -0.37662476, -0.01239625,-3.3660867 , -2.1345713 , 1.4581618 ,
-0.1688145 , -0.61321 , 0.40860286, -2.7569351 , -0.69548404, -0.7809651 , 0.7595304 ,-2.770731 ,
-0.97373027, -2.70085 , -0.26645675, -1.2353135 ,-1.4289687 , -1.3272284 , -2.4956157 , -1.0178847 ,
-1.7484616 , 1.7610879 , 0.301237 , -0.71727145, -1.9370077 , -1.3722429 , 0.9516849 , -2.6749346 ,
-1.4604743 , -1.6903474 , -2.5261753 ,-0.88417345, -0.50328434, -0.21296862, -3.4296887 , -3.3282495 ,
-4.300956 , -2.12365 , -3.3637137 , -5.570282 , -3.8983932 ,-3.0985348 , -5.818429 , -1.5155774 ,
-3.4247532 , -2.7098398 ,-4.799152 , -4.020282 , -3.5505116 , -2.7114115 , -4.1488724 ,-4.7484784 ,
-4.0955606 , -2.994336 , -4.9744525 , -4.3390574 ,-2.782462 , -4.615161 , -4.6250424 , -4.4105268 ,
-4.856515 ,-3.5684056 , -4.6808653 , -4.882898 , -4.3673973 , -5.379696 ]])
reverse_labels = ['SHIFT', 'RIGHTARC_punct', 'RIGHTARC_flat', 'LEFTARC_amod', 'LEFTARC_nsubj', 'LEFTARC_det', 'RIGHTARC_appos', 'RIGHTARC_obj', 'LEFTARC_case', 'RIGHTARC_nmod', 'RIGHTARC_obl', 'RIGHTARC_parataxis', 'RIGHTARC_root', 'LEFTARC_aux', 'LEFTARC_punct', 'RIGHTARC_iobj', 'LEFTARC_mark', 'RIGHTARC_acl', 'RIGHTARC_compound:prt', 'LEFTARC_nummod', 'RIGHTARC_ccomp', 'LEFTARC_aux:pass', 'LEFTARC_nsubj:pass', 'LEFTARC_compound', 'LEFTARC_nmod:poss', 'LEFTARC_cc', 'RIGHTARC_conj', 'LEFTARC_advmod', 'RIGHTARC_xcomp', 'LEFTARC_advcl', 'RIGHTARC_advmod', 'RIGHTARC_acl:relcl', 'RIGHTARC_advcl', 'LEFTARC_expl', 'RIGHTARC_nsubj', 'LEFTARC_obl', 'LEFTARC_cop', 'RIGHTARC_fixed', 'RIGHTARC_nummod', 'LEFTARC_det:predet', 'RIGHTARC_obl:npmod', 'RIGHTARC_obl:tmod', 'LEFTARC_obl:tmod', 'RIGHTARC_nmod:tmod', 'RIGHTARC_amod', 'LEFTARC_csubj', 'LEFTARC_csubj:pass', 'RIGHTARC_case', 'RIGHTARC_det', 'LEFTARC_obj', 'LEFTARC_nmod:tmod', 'LEFTARC_nmod', 'RIGHTARC_cop', 'RIGHTARC_expl', 'RIGHTARC_aux', 'RIGHTARC_vocative', 'RIGHTARC_csubj', 'LEFTARC_obl:npmod', 'RIGHTARC_nmod:npmod', 'RIGHTARC_list', 'LEFTARC_ccomp', 'LEFTARC_discourse', 'LEFTARC_parataxis', 'LEFTARC_xcomp', 'RIGHTARC_csubj:pass', 'LEFTARC_cc:preconj', 'RIGHTARC_flat:foreign', 'RIGHTARC_compound', 'LEFTARC_acl:relcl', 'RIGHTARC_discourse']
wbuffer = [4,3,2,1]
stack = [0]
arcs = []
# Perform action
action_to_tree(tree, predictions, wbuffer, stack, arcs, reverse_labels)
# After action (the action is SHIFT for this step)
assert not tree, "The tree should be {} after the SHIFT"
assert wbuffer == [4,3,2], "wbuffer should be [4,3,2] after the SHIFT"
assert stack == [0, 1], "stack should be [0, 1] after the SHIFT"
assert arcs == [], "arcs should be [] after the SHIFT"
print("You have passed the basic sanity check of action_to_tree()!")
sanity_check_action_to_tree()
# + [markdown] colab_type="text" id="pJsWiPY5jKcu"
# ### Implemented for you
# Now that you have the configurations $x$ and actions $y$, we can train a supervised model to predict an action $y$ given a configuration $x$. We are using a simplified version of the model from [A Fast and Accurate Dependency Parser using Neural Networks](https://nlp.stanford.edu/pubs/emnlp2014-depparser.pdf).
#
# * This model is already implemented for you; please `train` the model and report the evaluation and test results by calling the functions `evaluate` and `test`.
# + colab_type="code" id="vlx_IRrJjKcv" colab={}
# ============================================================
# THE FOLLOWING CODE IS PROVIDED
# ============================================================
def get_oracle(toks):
"""
Return pairs of configurations + gold transitions (actions)
from training data
configuration = a list of tuple of:
- buffer (top of buffer is at the end of the list)
- stack (top of buffer is at the end of the list)
- arcs (a list of (label, head, dependent) tuples)
gold transitions = a list of actions, e.g. SHIFT
"""
stack = [] # stack
arcs = [] # existing list of arcs
wbuffer = [] # input buffer
# deps is a dictionary of head: dependency relations, where
# dependency relations is a dictionary of the (head, child): label
# deps = {head1:{
# (head1, child1):dependency_label1,
# (head1, child2):dependency_label2
# }
# head2:{
# (head2, child3):dependency_label3,
# (head2, child4):dependency_label4
# }
# }
deps = {}
# ROOT
stack.append(0)
# initialize variables
for position in reversed(toks):
(idd, _, _, head, lab) = position
dep = (head, idd)
if head not in deps:
deps[head] = {}
deps[head][dep] = lab
wbuffer.append(idd)
# configurations:
# A list of (wbuffer, stack, arcs)
# Keeps tracks of the states at each step
# gold_transitions:
# A list of action strings ["SHIFT", "LEFTARC_nsubj"]
# Keeps tracks of the actions at each step
configurations, gold_transitions = tree_to_actions(wbuffer, stack, arcs, deps)
return configurations, gold_transitions
def featurize_configuration(configuration, tokens, postags, vocab, pos_vocab):
def get_id(word, vocab):
word=word.lower()
if word in vocab:
return vocab[word]
return vocab["<unk>"]
"""
Given configurations of the stack, input buffer and arcs,
words of the sentence and POS tags of the words,
return some features
The current features are the word ID and postag ID at the
first three positions of the stack and buffer.
"""
wbuffer, stack, arcs = configuration
word_features=[]
pos_features=[]
if len(stack) > 0:
word_features.append(get_id(tokens[stack[-1]], vocab))
pos_features.append(get_id(postags[stack[-1]], pos_vocab))
else:
word_features.append(get_id("<NONE>", vocab))
pos_features.append(get_id("<NONE>", pos_vocab))
if len(stack) > 1:
word_features.append(get_id(tokens[stack[-2]], vocab))
pos_features.append(get_id(postags[stack[-2]], pos_vocab))
else:
word_features.append(get_id("<NONE>", vocab))
pos_features.append(get_id("<NONE>", pos_vocab))
if len(stack) > 2:
word_features.append(get_id(tokens[stack[-3]], vocab))
pos_features.append(get_id(postags[stack[-3]], pos_vocab))
else:
word_features.append(get_id("<NONE>", vocab))
pos_features.append(get_id("<NONE>", pos_vocab))
if len(wbuffer) > 0:
word_features.append(get_id(tokens[wbuffer[-1]], vocab))
pos_features.append(get_id(postags[wbuffer[-1]], pos_vocab))
else:
word_features.append(get_id("<NONE>", vocab))
pos_features.append(get_id("<NONE>", pos_vocab))
if len(wbuffer) > 1:
word_features.append(get_id(tokens[wbuffer[-2]], vocab))
pos_features.append(get_id(postags[wbuffer[-2]], pos_vocab))
else:
word_features.append(get_id("<NONE>", vocab))
pos_features.append(get_id("<NONE>", pos_vocab))
if len(wbuffer) > 2:
word_features.append(get_id(tokens[wbuffer[-3]], vocab))
pos_features.append(get_id(postags[wbuffer[-3]], pos_vocab))
else:
word_features.append(get_id("<NONE>", vocab))
pos_features.append(get_id("<NONE>", pos_vocab))
return word_features, pos_features
def get_oracles(filename, vocab, tag_vocab):
"""
Get configurations, gold_transitions from all sentences
"""
with open(filename) as f:
toks, tokens, postags = [], {}, {}
tokens[0] = "<ROOT>"
postags[0] = "<ROOT>"
# a list of all features for each transition step
word_feats = []
pos_feats = []
# a list of labels, e.g. SHIFT, LEFTARC_DEP_LABEL, RIGHTARC_DEP_LABEL
labels = []
for line in f:
cols = line.rstrip().split("\t")
if len(cols) < 2: # at the end of each sentence
if len(toks) > 0:
if is_projective(toks): # only use projective trees
# get all configurations and gold standard transitions
configurations, gold_transitions = get_oracle(toks)
for i in range(len(configurations)):
word_feat, pos_feat = featurize_configuration(configurations[i], tokens, postags, vocab, tag_vocab)
label = gold_transitions[i]
word_feats.append(word_feat)
pos_feats.append(pos_feat)
labels.append(label)
# reset vars for the next sentence
toks, tokens, postags = [], {}, {}
tokens[0] = "<ROOT>"
postags[0] = "<ROOT>"
continue
if cols[0].startswith("#"):
continue
# construct the tuple for each word in the sentence
# for each word in the sentence
# idd: index of a word in a sentence, starting from 1
# tok: the word itself
# pos: pos tag for that word
# head: parent of the dependency
# lab: dependency relation label
idd, tok, pos, head, lab = int(cols[0]), cols[1], cols[4], int(cols[6]), cols[7]
toks.append((idd, tok, pos, head, lab))
# feature for training to predict the gold transition
tokens[idd], postags[idd] = tok, pos
return word_feats, pos_feats, labels
def load_embeddings(filename):
# 0 idx is for padding
# 1 idx is for <UNK>
# 2 idx is for <NONE>
# 3 idx is for <ROOT>
# get the embedding size from the first embedding
vocab_size=4
with open(filename, encoding="utf-8") as file:
for idx, line in enumerate(file):
if idx == 0:
word_embedding_dim=len(line.rstrip().split(" "))-1
vocab_size+=1
vocab={"<pad>":0, "<unk>":1, "<none>":2, "<root>":3}
print("word_embedding_dim: %s, vocab size: %s" % (word_embedding_dim, vocab_size))
embeddings=np.zeros((vocab_size, word_embedding_dim))
with open(filename, encoding="utf-8") as file:
for idx,line in enumerate(file):
if idx + 4 >= vocab_size:
break
cols=line.rstrip().split(" ")
val=np.array(cols[1:])
word=cols[0]
embeddings[idx+4]=val
vocab[word]=idx+4
return torch.FloatTensor(embeddings), vocab
class ShiftReduceParser(nn.Module):
def __init__(self, embeddings, hidden_dim, tagset_size, num_pos_tags, pos_embedding_dim):
super(ShiftReduceParser, self).__init__()
self.hidden_dim = hidden_dim
self.num_labels=tagset_size
_, embedding_dim = embeddings.shape
self.input_size=embedding_dim*6 + pos_embedding_dim*6
self.dropout_layer = nn.Dropout(p=0.25)
self.word_embeddings = nn.Embedding.from_pretrained(embeddings)
self.pos_embeddings = nn.Embedding(num_pos_tags, pos_embedding_dim)
self.tanh = nn.Tanh()
self.W1 = nn.Linear(self.input_size, self.hidden_dim)
self.W2 = nn.Linear(self.hidden_dim, self.num_labels)
def forward(self, words, pos_tags, Y=None):
words=words.to(device)
pos_tags=pos_tags.to(device)
if Y is not None:
Y=Y.to(device)
word_embeds = self.word_embeddings(words)
postag_embeds = self.pos_embeddings(pos_tags)
embeds=torch.cat((word_embeds, postag_embeds), 2)
embeds=embeds.view(-1, self.input_size)
embeds=self.dropout_layer(embeds)
hidden = self.W1(embeds)
hidden = self.tanh(hidden)
logits = self.W2(hidden)
if Y is not None:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), Y.view(-1))
return loss
else:
return logits
def get_batches(W, P, Y, batch_size):
batch_W=[]
batch_P=[]
batch_Y=[]
i=0
while i < len(W):
batch_W.append(torch.LongTensor(W[i:i+batch_size]))
batch_P.append(torch.LongTensor(P[i:i+batch_size]))
batch_Y.append(torch.LongTensor(Y[i:i+batch_size]))
i+=batch_size
return batch_W, batch_P, batch_Y
def train(word_feats, pos_feats, labels, embeddings, vocab, postag_vocab, label_vocab):
"""
Train transition-based parser to predict next action (labels)
given current configuration (featurized by word_feats and pos_feats)
Return the classifier trained using Chen and Manning (2014), "A Fast
and Accurate Dependency Parser using Neural Networks"
"""
# dimensionality of linear layer
HIDDEN_DIM=100
# dimensionality of POS embeddings
POS_EMBEDDING_SIZE=50
# batch size for training
BATCH_SIZE=32
# number of epochs to train for
NUM_EPOCHS=10
# learning rate for Adam optimizer
LEARNING_RATE=0.001
num_labels=[]
for i, y in enumerate(labels):
num_labels.append(label_vocab[y])
batch_W, batch_P, batch_Y = get_batches(word_feats, pos_feats, num_labels, BATCH_SIZE)
model = ShiftReduceParser(embeddings, HIDDEN_DIM, len(label_vocab), len(postag_vocab), POS_EMBEDDING_SIZE)
model.to(device)
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
for epoch in range(NUM_EPOCHS):
model.train()
bigloss=0.
for b in range(len(batch_W)):
model.zero_grad()
loss = model.forward(batch_W[b], batch_P[b], Y=batch_Y[b])
bigloss+=loss.item()
loss.backward()
optimizer.step()
print("loss: ", bigloss)
return model
def parse(toks, model, vocab, tag_vocab, reverse_labels):
"""
    Parse a sentence with the trained model and return the predicted dependency tree (a dict mapping each child index to its (head, label))
"""
tokens, postags = {}, {}
tokens[0] = "<ROOT>"
postags[0] = "<ROOT>"
wbuffer, stack, arcs = [], [], []
stack.append(0)
for position in reversed(toks):
(idd, tok, pos, head, lab) = position
tokens[idd] = tok
postags[idd] = pos
# update buffer
wbuffer.append(idd)
tree = {}
while len(wbuffer) >= 0:
if len(wbuffer) == 0 and len(stack) == 0: break
if len(wbuffer) == 0 and len(stack) == 1 and stack[0] == 0: break
word_feats, pos_feats = (featurize_configuration((wbuffer, stack, arcs), tokens, postags, vocab, tag_vocab))
predictions=model.forward(torch.LongTensor([word_feats]), torch.LongTensor([pos_feats]))
predictions=predictions.detach().cpu().numpy()
# your function will be called here
action_to_tree(tree, predictions, wbuffer, stack, arcs, reverse_labels)
return tree
def parse_and_evaluate(toks, model, vocab, tag_vocab, reverse_labels):
"""
parse sentence with trained model and return correctness measure
"""
heads, labels = {}, {}
for position in reversed(toks):
(idd, tok, pos, head, lab) = position
# keep track of gold standards for performance evaluation
heads[idd], labels[idd] = head, lab
tree = parse(toks, model, vocab, tag_vocab, reverse_labels)
# correct_unlabeled: total number of correct (head, child) dependencies
# correct_labeled: total number of correctly *labeled* dependencies
correct_unlabeled, correct_labeled, total = 0, 0, 0
for child in tree:
(head, label) = tree[child]
if head == heads[child]:
correct_unlabeled += 1
if label == labels[child]: correct_labeled += 1
total += 1
return [correct_unlabeled, correct_labeled, total]
def get_label_vocab(labels):
tag_vocab={}
num_labels=[]
for i, y in enumerate(labels):
if y not in tag_vocab:
tag_vocab[y]=len(tag_vocab)
num_labels.append(tag_vocab[y])
reverse_labels=[None]*len(tag_vocab)
for y in tag_vocab:
reverse_labels[tag_vocab[y]]=y
return tag_vocab, reverse_labels
def get_pos_tag_vocab(filename):
tag_vocab={"<none>":0, "<unk>":1}
with open(filename) as file:
for line in file:
cols=line.rstrip().split("\t")
if len(cols) < 3:
continue
pos=cols[4].lower()
if pos not in tag_vocab:
tag_vocab[pos]=len(tag_vocab)
return tag_vocab
def test(model, vocab, tag_vocab, reverse_labels):
"""
    Parse a small example sentence with the trained model and print the predicted dependency tree
"""
model.eval()
toks=["I", "bought", "a", "book"]
pos=["NNP", "VBD", "DT", "NN"]
data=[]
# put it in format parser expects
for i, tok in enumerate(toks):
data.append((i+1, tok, pos[i], "_", "_"))
tree=parse(data, model, vocab, tag_vocab, reverse_labels)
for child in sorted(tree.keys()):
(head, label) = tree[child]
headStr="<ROOT>"
if head > 0: # child and head indexes start at 1; 0 denotes the <ROOT>
headStr=toks[head-1]
print("(%s %s) -> (%s %s) %s" % (child, toks[child-1], head, headStr, label))
def evaluate(filename, model, vocab, tag_vocab, reverse_labels):
"""
Evaluate the performance of a parser against gold standard
"""
model.eval()
with open(filename) as f:
toks=[]
totals = np.zeros(3)
for line in f:
cols=line.rstrip().split("\t")
if len(cols) < 2: # end of a sentence
if len(toks) > 0:
if is_projective(toks):
tots = np.array(parse_and_evaluate(toks, model, vocab, tag_vocab, reverse_labels))
totals += tots
toks = []
continue
if cols[0].startswith("#"):
continue
idd, tok, pos, head, lab = int(cols[0]), cols[1], cols[4], int(cols[6]), cols[7]
toks.append((idd, tok, pos, head, lab))
print ("UAS: %.3f, LAS:%.3f" % (totals[0]/totals[2], totals[1]/totals[2]))
# + [markdown] colab_type="text" id="Ax1VrekKjKcy"
# ### Train and evaluate the model
#
# - NOTICE: Because you are not implementing the model or the training process, you will **NOT** be graded based on the performance of the model!
#
# - You are only graded based on the correctness of each of the implemented functions.
#
# - If all the required functions are implemented correctly, you should expect a UAS in a range of [0.64, 0.67], a LAS in a range of [0.56, 0.59] without changing the parameters of the neural model or the whole training process.
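#
# - For reference, the two scores printed by `evaluate` are computed as: UAS (unlabeled attachment score) = the fraction of tokens whose predicted head matches the gold head, and LAS (labeled attachment score) = the fraction of tokens whose predicted head *and* dependency label both match the gold standard.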
# + colab_type="code" id="9PweOgvvjKc0" outputId="e6ae3633-9740-4376-c4ff-929e8c2cad1a" colab={"base_uri": "https://localhost:8080/", "height": 34}
embeddingsFile = "glove.6B.50d.txt"
trainFile = "train.projective.short.conll"
devFile = "dev.projective.conll"
embeddings, vocab=load_embeddings(embeddingsFile)
pos_tag_vocab=get_pos_tag_vocab(trainFile)
word_feats, pos_feats, labels = get_oracles(trainFile, vocab, pos_tag_vocab)
label_vocab, reverse_labels=get_label_vocab(labels)
# + colab_type="code" id="lFN4U_1ojKc5" outputId="215748cb-d8fe-4a1c-efb4-1a3676254083" colab={"base_uri": "https://localhost:8080/", "height": 272}
model = train(word_feats, pos_feats, labels, embeddings, vocab, pos_tag_vocab, label_vocab)
evaluate(devFile, model, vocab, pos_tag_vocab, reverse_labels)
test(model, vocab, pos_tag_vocab, reverse_labels)
# + colab_type="code" id="kzas-zZNjKc_" colab={}
# + colab_type="code" id="i9TZeiLLjKdF" colab={}
# + colab_type="code" id="SyLUTj-wjKdI" colab={}
# + colab_type="code" id="CRzGFyjCjKdK" colab={}
# + colab_type="code" id="wosrQJaAjKdO" colab={}
# + colab_type="code" id="b6PdWLqGjKdR" colab={}
# + colab_type="code" id="-90WLll5jKdV" colab={}
| 40,281 |
/ml/Beginners_Guide_to PySpark.ipynb
|
162bf9b97c400f9af4d94ab13b6a2b8ff6fb5e38
|
[] |
no_license
|
arunpoy/datasciencecoursera
|
https://github.com/arunpoy/datasciencecoursera
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 934,708 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Arazsh/TensorFlow-in-Practice/blob/master/NaturalLanguageProcessing_LSTM_Tokenizing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="6WoBr9EeU56Q" colab_type="text"
# This is an example project of Natural Language Processing (NLP) using TensorFlow. In this code, the sentences (sequences of words) in a body of text, which are in fact parts of a poem, are tokenized and used to train an RNN that is equipped with LSTM layers. Then, a sentence is used as an input to the trained network in order to create new words whose sequence resembles a poem itself. Since the dataset is fairly small and the model is simple, we cannot expect the model to create a meaningful sequence. But creating new words based on sequences of words is one interesting application of NLP.
#
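# As a small illustration of the preprocessing done below (the token ids are hypothetical): a line such as "from fairest creatures we desire increase" is first mapped by the tokenizer to an id sequence like [34, 417, 877, 166, 213, 517], which is then expanded into the n-gram prefixes [34, 417], [34, 417, 877], ..., [34, 417, 877, 166, 213, 517]; each prefix is left-padded to a fixed length, its last id becomes the label, and the remaining ids are the predictors.
#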
# + cellView="form" colab_type="code" id="BZSlp3DAjdYf" colab={}
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + id="59aTRnMKWd0_" colab_type="code" colab={}
#loading necessary libraries
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, Bidirectional
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import regularizers
import tensorflow.keras.utils as ku
import numpy as np
# + id="UV7ay4cpXnuu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="3cdfd4d0-9cf9-424f-fdce-e4a90c4e5d16"
#Calling the Tokenizer
tokenizer = Tokenizer()
#Downloading the training sentences
# !wget --no-check-certificate \
# https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sonnets.txt \
# -O /tmp/sonnets.txt
#Reading the sentences and tokenizing the words
data = open('/tmp/sonnets.txt').read()
corpus = data.lower().split("\n")
tokenizer.fit_on_texts(corpus)
total_words = len(tokenizer.word_index) + 1
# create input sequences using list of tokens
input_sequences = []
for line in corpus:
token_list = tokenizer.texts_to_sequences([line])[0]
for i in range(1, len(token_list)):
n_gram_sequence = token_list[:i+1]
input_sequences.append(n_gram_sequence)
# pad sequences
max_sequence_len = max([len(x) for x in input_sequences])
input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre'))
# create predictors and label
predictors, label = input_sequences[:,:-1],input_sequences[:,-1]
label = ku.to_categorical(label, num_classes=total_words)
# + id="JYxI9ICCaODl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 370} outputId="17e7a1ed-23c1-49b1-bba0-27ce02e2ae2c"
#Building the model with two LSTM layers. The l2 regularization is used to reduce the over-fitting issue.
model = Sequential()
model.add(Embedding(total_words, 100, input_length=max_sequence_len-1))
model.add(Bidirectional(LSTM(150, return_sequences = True)))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dense(total_words//2, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(total_words, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
# + id="khSib4R2cXId" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="db6a0de2-dbf9-490c-b50c-6e9b3ac61f5d"
history = model.fit(predictors, label, epochs=100, verbose=1)
# + id="FYmUaPCtc_Ii" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 545} outputId="3ae3205c-5167-49ff-ada2-e15ede68fa22"
#Plotting loss and accuracy VS epochs
import matplotlib.pyplot as plt
acc = history.history['accuracy']
loss = history.history['loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'b', label='Training accuracy')
plt.title('Training accuracy')
plt.figure()
plt.plot(epochs, loss, 'b', label='Training Loss')
plt.title('Training loss')
plt.legend()
plt.show()
# + id="kJ_WlS79dUv5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="6d3b3fe1-4f34-493b-f8fa-61728cf6d165"
#Creating a poem by using the seed sentence as the input sequence of words
#100 new words are going to be created
seed_text = "Look in thy glass, and tell the face thou viewest"
next_words = 100
for _ in range(next_words):
token_list = tokenizer.texts_to_sequences([seed_text])[0]
token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
predicted = model.predict_classes(token_list, verbose=0)
output_word = ""
for word, index in tokenizer.word_index.items():
if index == predicted:
output_word = word
break
seed_text += " " + output_word
print(seed_text)
# + colab={} id="LEQ2dFuUtuGv" outputId="b05196a5-d71c-415f-b573-7dca406b316d"
from google.colab import files
## Upload your kaggle json file (API Token)
files.upload()
# !mkdir ~/.kaggle
# !cp kaggle.json ~/.kaggle/
# !chmod 600 ~/.kaggle/kaggle.json
# + colab={"base_uri": "https://localhost:8080/", "height": 67} id="vQ1dWtvtp8ve" outputId="55379eed-87a5-4d89-8114-acc0f22e4441"
# !kaggle datasets download -d dinnymathew/usstockprices
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="ptCSjxkrs9Ol" outputId="94855533-9099-4c2e-c74e-d4c4f7a6294c"
# !ls
# + colab={"base_uri": "https://localhost:8080/", "height": 50} id="4HDyH7xuuWoO" outputId="a246769d-cfcc-46f8-cfc3-3b0b495fbae2"
# !mkdir data
# !unzip usstockprices -d data
# + colab={"base_uri": "https://localhost:8080/", "height": 50} id="SKkKznovum2U" outputId="c48715c6-75a5-47fe-b576-9a070db84843"
# !ls -l data/
# + [markdown] id="QZMQsAItvF9I"
# ## Import Modules
# + id="z5elOd3_urwh"
from pyspark.sql import functions as f
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
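# +
# NOTE (added): the cells below use a `spark` session that is never created anywhere in
# this notebook as shown. A minimal, assumed setup for a local or Colab run (the app name
# is arbitrary, and on Colab `pyspark` may first need to be installed with pip) would be:
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("StockAnalysis").getOrCreate()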
# + [markdown] id="6nlaWsM6veh_"
# ## Read Data
# + colab={"base_uri": "https://localhost:8080/", "height": 269} id="J7AR4YTjeR-6" outputId="a5dab4f8-606a-4ab4-fbf0-fb88c846aaa9"
# Before changing schema
b_data = spark.read.csv(
'data/stocks_price_final.csv',
sep = ',',
header = True,
)
b_data.printSchema()
# + id="dGT7jy3DDuLz"
from pyspark.sql.types import *
data_schema = [
StructField('_c0', IntegerType(), True),
StructField('symbol', StringType(), True),
StructField('data', DateType(), True),
StructField('open', DoubleType(), True),
StructField('high', DoubleType(), True),
StructField('low', DoubleType(), True),
StructField('close', DoubleType(), True),
StructField('volume', IntegerType(), True),
StructField('adjusted', DoubleType(), True),
StructField('market.cap', StringType(), True),
StructField('sector', StringType(), True),
StructField('industry', StringType(), True),
StructField('exchange', StringType(), True),
]
final_struc = StructType(fields=data_schema)
# + id="WAThgJoIvcKI"
data = spark.read.csv(
'data/stocks_price_final.csv',
sep = ',',
header = True,
schema = final_struc
)
# + colab={"base_uri": "https://localhost:8080/", "height": 269} id="NCFGHCsUyamR" outputId="d74d286c-fe98-40b3-8576-64a71630c0c9"
data.printSchema()
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="bUgocSgd_gqy" outputId="0ef2e836-f2f2-41c2-e671-0ae8c87c9fd2"
data.show(5)
# + id="us8Ks-FEHTYe"
data = data.withColumnRenamed('market.cap', 'market_cap')
# + [markdown] id="a0aFk3S-mN50"
# ## Inspect the data
# + colab={"base_uri": "https://localhost:8080/", "height": 54} id="heGhvrD_mSZZ" outputId="5b629e83-b8e2-49a1-b7d2-dfc3f8c568d7"
# prints the schema of the data
data.schema
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="11e0YeMXmhUs" outputId="87939907-5797-4994-9b29-897f885ce848"
data.dtypes
# + colab={"base_uri": "https://localhost:8080/", "height": 87} id="94m1hXVPprqU" outputId="630d15a5-8b20-4b22-ca96-64542fdd0c99"
data.head(3)
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="IYZ46WYNHc01" outputId="6d4fa243-b3b4-4b5d-9f45-8d30accf1d01"
data.show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 54} id="W9AJoO7JqxWU" outputId="f1c1d7fe-aa74-4c5f-bdf9-72a04ae009b7"
data.first()
# + colab={"base_uri": "https://localhost:8080/", "height": 205} id="ea-TUnvamQvg" outputId="106304e3-98fa-4b96-8abe-015c94f2d6c0"
data.describe().show()
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="WEfAsBkRtn9W" outputId="2cb67156-45c4-4a84-8cae-f46e064d1b77"
data.columns
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="IMZZZMo_tppK" outputId="ac3c1bfa-71a9-4e98-94ff-31666476b086"
data.count()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="JVkU70BotpxY" outputId="7e9ccad7-c103-450a-ddb3-ad6dda8bb0c0"
data.distinct().count()
# + colab={"base_uri": "https://localhost:8080/", "height": 269} id="RBq6GXzSuZIe" outputId="c726aa15-3c22-465c-a43b-8d72c6daeec3"
data.printSchema()
# + [markdown] id="Ww3XqquewkvB"
# ## Column Operations/Manipulations
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="b7NNmtMYwka1" outputId="3ed5dac0-8ebe-4b95-88bb-114a1ce212be"
data = data.withColumn('date', data.data)
data.show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="gIpRmPSNxILh" outputId="665da690-8d48-4609-894c-d2220bc50605"
data = data.withColumnRenamed('date', 'data_changed')
data.show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="ZfMRq0AwxH3R" outputId="d70a40df-dc81-4568-ed80-e82511ec9423"
data = data.drop('data_changed')
data.show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 185} id="tEGNIZJUvsbQ" outputId="26c70865-f0bb-4725-e4a8-9227917d8917"
data.select(['open', 'high', 'low', 'close', 'volume', 'adjusted']).describe().show()
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="HCnsfWXP3_6J" outputId="456b0ff7-9891-4fe3-a1f2-fac60068e8e0"
data.groupBy('sector').count().show()
# + id="CkyeKRraxIV3"
sec_x = data.select(['sector', 'open', 'close', 'adjusted']).groupBy('sector').mean().collect()
# + [markdown] id="UeASC7qHK9nh"
# Convert the data into a **list**
# + colab={"base_uri": "https://localhost:8080/", "height": 218} id="ujw8raEv2sJv" outputId="3c53cbd7-3f4c-421a-b996-da288e586e10"
for row in sec_x:
print(list(row), end='\n')
# + [markdown] id="_9auYKi9LE3V"
# Convert the data into a **dictionary**
# + colab={"base_uri": "https://localhost:8080/", "height": 218} id="aK9mZF24JpM9" outputId="c8677a64-3e13-4e71-f432-fd359b643eb7"
for row in sec_x:
print(row.asDict(), end='\n')
# + [markdown] id="BKU8_RHHLdSh"
# Convert the data into a pandas **dataframe**
# + id="YsCABx96Ljt7"
sec_df = data.select(['sector', 'open', 'close', 'adjusted']).groupBy('sector').mean().toPandas()
# + colab={"base_uri": "https://localhost:8080/", "height": 402} id="NGFoG1DLLpyi" outputId="a6277432-bff2-42ba-ac2a-dd352e58e954"
sec_df
# + colab={"base_uri": "https://localhost:8080/", "height": 517} id="gru1FywLL3Ih" outputId="52c1ac20-af6d-4357-a179-9d41ea85acc2"
sec_df.plot(kind = 'bar', x='sector', y = sec_df.columns.tolist()[1:], figsize=(12, 6))
# + [markdown] id="BA_lWkHsNKGc"
# Remove **basic industries** from the plot and view it again...
# + colab={"base_uri": "https://localhost:8080/", "height": 500} id="dT6IoKDTML4l" outputId="4fbe465e-33e1-4400-fb31-7354f8d98f83"
ind = list(range(12))
ind.pop(6)
sec_df.iloc[ind ,:].plot(kind = 'bar', x='sector', y = sec_df.columns.tolist()[1:], figsize=(12, 6), ylabel = 'Stock Price', xlabel = 'Sector')
plt.show()
# + [markdown] id="TEwN2D1lKbi7"
#
# + colab={"base_uri": "https://localhost:8080/", "height": 195} id="lXbvY4Ry6M7O" outputId="9b81227e-7b05-4d2e-e424-731b7ff017e9"
industries_x = data.select(['industry', 'open', 'close', 'adjusted']).groupBy('industry').mean().toPandas()
industries_x.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="oOc73jyIFEFl" outputId="9912cab7-8acb-464f-937f-22af810d1440"
industries_x.plot(kind = 'barh', x='industry', y = industries_x.columns.tolist()[1:], figsize=(10, 50))
# + [markdown] id="NV_3Pz2yaYLf"
# Remove **major chemicals** and **building products** to view the rest of the data clearly
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="mYKkpXL_FRa3" outputId="06885893-165a-4866-f5ca-28bc7009b93a"
q = industries_x[(industries_x.industry != 'Major Chemicals') & (industries_x.industry != 'Building Products')]
q.plot(kind = 'barh', x='industry', y = q.columns.tolist()[1:], figsize=(10, 50), xlabel='Stock Price', ylabel = 'Industry')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 454} id="HgEJ2d2CHo1-" outputId="f9dc199e-cf13-40bb-d1c4-3cf151d2a1d4"
import pyspark.sql.functions as f
health = data.filter(f.col('sector') == 'Health Care')
health.show()
# + [markdown] id="EfxSK3OSbifQ"
# ### How to use Aggregation
# + colab={"base_uri": "https://localhost:8080/", "height": 322} id="hYcg9S7eH1DG" outputId="02a8da30-bb32-45fa-d952-8956670168c8"
from pyspark.sql.functions import col, min, max, avg, lit
data.groupBy("sector") \
.agg(min("data").alias("From"),
max("data").alias("To"),
min("open").alias("Minimum Opening"),
max("open").alias("Maximum Opening"),
avg("open").alias("Average Opening"),
min("close").alias("Minimum Closing"),
max("close").alias("Maximum Closing"),
avg("close").alias("Average Closing"),
min("adjusted").alias("Minimum Adjusted Closing"),
max("adjusted").alias("Maximum Adjusted Closing"),
avg("adjusted").alias("Average Adjusted Closing"),
).show(truncate=False)
# + [markdown] id="wnEqe6C5ivLN"
# Get the min, max, avg data w.r.t sectors from **Jan 2019** to **Jan 2020**
# + colab={"base_uri": "https://localhost:8080/", "height": 322} id="texK2jlleAgi" outputId="794cc28a-d427-45a4-f1b7-2790cbcf86d2"
data.filter( (col('data') >= lit('2019-01-02')) & (col('data') <= lit('2020-01-31')) )\
.groupBy("sector") \
.agg(min("data").alias("From"),
max("data").alias("To"),
min("open").alias("Minimum Opening"),
max("open").alias("Maximum Opening"),
avg("open").alias("Average Opening"),
min("close").alias("Minimum Closing"),
max("close").alias("Maximum Closing"),
avg("close").alias("Average Closing"),
min("adjusted").alias("Minimum Adjusted Closing"),
max("adjusted").alias("Maximum Adjusted Closing"),
avg("adjusted").alias("Average Adjusted Closing"),
).show(truncate=False)
# + [markdown] id="b9DRUiPLk2IT"
# Plot the time series data of the **technology** sector stock trades
# + colab={"base_uri": "https://localhost:8080/", "height": 454} id="61klpU8Lh0YO" outputId="e3c7fcd8-1437-4138-87b3-0c6bce9d13c6"
tech = data.where(col('sector') == 'Technology').select('data', 'open', 'close', 'adjusted')
tech.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 713} id="Wj5p8ixQkjuT" outputId="3368021d-add1-4735-c996-b9626cc61d00"
fig, axes = plt.subplots(nrows=3, ncols=1, figsize =(60, 30))
tech.toPandas().plot(kind = 'line', x = 'data', y='open', xlabel = 'Date Range', ylabel = 'Stock Opening Price', ax = axes[0], color = 'mediumspringgreen')
tech.toPandas().plot(kind = 'line', x = 'data', y='close', xlabel = 'Date Range', ylabel = 'Stock Closing Price', ax = axes[1], color = 'tomato')
tech.toPandas().plot(kind = 'line', x = 'data', y='adjusted', xlabel = 'Date Range', ylabel = 'Stock Adjusted Price', ax = axes[2], color = 'orange')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="GRpy_6Fn29rJ" outputId="4870da3e-6c78-4dc3-aa12-1ce4cb1c543f"
data.select('sector').show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="FTlxRGkw6jCu" outputId="43028000-9185-4b58-ed0d-30f9e5bd91dd"
data.select(['open', 'close', 'adjusted']).show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="EsZfCrIZ6qkd" outputId="f1a63443-ed6c-432c-b220-81283a574ee5"
data.filter(data.adjusted.between(100.0, 500.0)).show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="IYWm7iDy8dHC" outputId="df39d2c1-b00c-4b47-af0a-4afb44b1382d"
from pyspark.sql.functions import col, lit
data.filter( (col('data') >= lit('2020-01-01')) & (col('data') <= lit('2020-01-31')) ).show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="i5LKNfTGAEl0" outputId="1c9066c5-e8ab-4447-e236-0582bcad6eb1"
data.select('open', 'close', f.when(data.adjusted >= 200.0, 1).otherwise(0)).show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="exczz-BFBd9N" outputId="43641b12-f454-4d4c-b64f-f945117faf01"
data.select('sector',
data.sector.rlike('^[B,C]').alias('Sector Starting with B or C')
).distinct().show()
# + colab={"base_uri": "https://localhost:8080/", "height": 454} id="3MBAUrelDvBO" outputId="1e0e48c0-fdb1-4f57-de76-8db6350a17eb"
data.select(['industry', 'open', 'close', 'adjusted']).groupBy('industry').mean().show()
# + id="zkSdYUdRGkkk"
| 21,349 |
/Tokenizers/Tokenizers.ipynb
|
53e4d55336f7000e37f8e56e0ddf5c4cdfa4ad1c
|
[] |
no_license
|
tuvu247/NLP
|
https://github.com/tuvu247/NLP
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 3,225,971 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # q1
# +
result=1
for i in range (1,6):
result = result*i
print(result)
# -
# # q2
# +
result=0
for i in range(1,6):
result=result+i
print(result)
# -
# # q3
# +
result = 1
for i in range(3,10):
result=result * i
print (result)
# -
# # q4
# +
result=1
for i in range(4,9):
result=result*i
print(result)
# -
# # q5
# +
i=6
while (i>0):
i=i-1
if i==3:
continue
print(i)
# -
# # q6
# +
count=0
for i in 'this is my fourth string'.split():
count=count+1
print(count)
# -
# # q7
# +
num_list= [12,32,43,35]
while num_list:
num_list.remove(num_list[0])
print(num_list)
# -
# # q8
# +
string = 'this is my fourth string'.split()
count = 0
while string:
count= count+1
string.remove(string[0])
print(count)
# -
# # q9
my_tweet ={
'favorite_count':1138,
'lang':'ch',
'coordinates':(-75.14310264, 40.05701649),
'entities':{'hashtag':['preds','pens','SingintoSpring']}
}
my_tweet['entities']['hashtag']
# # q 9.1
# +
count=0
for i in my_tweet['entities']['hashtag']:
count=count+1
print(count)
# -
# # q 9.2
# +
string = my_tweet['entities']['hashtag']
count = 0
while string:
count= count+1
string.remove(string[0])
print(count)
# -
# # q 10
# +
num_list = [12,32,43,35]
max_item=num_list[0]
for current_item in num_list:
if max_item<current_item:
max_item=current_item
print(max_item)
| 1,726 |
/notebooks/Particle Filter CRP.ipynb
|
a1310ad77d8223502556997b2afbcfd187fe2a99
|
[] |
no_license
|
arne7morgen/MasterThesis
|
https://github.com/arne7morgen/MasterThesis
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 77,124 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import math
import scipy.stats
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib as mpl
import copy
import os
import pickle
# %matplotlib inline
sns.set_style("whitegrid")
colors = ["#1f77b4", "#ff7f0e", "#e377c2", "#bcbd22", "#7f7f7f", "#d62728", "#98df8a"]
mpl.rcParams['axes.prop_cycle'] = mpl.cycler(color=colors)
d, tcp = GaussianTestData(l=10)
print tcp
cpd = BootstrapFilterCRP()
z2,m2,v2, cp2 = cpd.run(d)
plotMultiStateParticleSub(d, z2, m2, v2, cp2, tcp)
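# BootstrapFilterCRP: a bootstrap particle filter in which every particle carries a discrete
# state index plus one mean parameter (theta) per state. The per-particle state transition is
# drawn from a Chinese Restaurant Process: an existing state is chosen with probability
# proportional to the number of particles currently assigned to it, and a brand-new state with
# probability proportional to alpha (actually opening the new state is left as a TODO below).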
class BootstrapFilterCRP(object):
def __init__(self, likelihood=(0,1), prior=(1,-1), N=10, \
noise = 0.1, alpha=0.01):
self.noise = noise
self.likelihood = likelihood
self.numStates = len(prior)
self.prior = prior
self.cp = []
self.prevMax = -1
self.t=1
self.alpha = alpha
self.weights = np.ones(N)
state = np.random.randint(0, high=len(prior), size=N)
p = [state]
for a in prior:
p.append(np.random.normal(loc=a, size=N))
self.particles = np.stack(p, axis=-1)
unique, counts = np.unique(self.particles[:,0], return_counts=True)
self.d = dict(zip(unique, counts))
print self.d
def update(self, datum):
for i, p in enumerate(self.particles):
self.particles[i] = self.markovTransition(p)
#weightParticle
self.weights[i] = scipy.stats.norm(datum, \
1).pdf(p[p[0]+1])
#normalise weights
weightSum = np.sum(self.weights)
self.weights = self.weights/weightSum
if 1. / np.sum(np.square(self.weights)) < len(self.weights) / 2:
self.multinomialResampling()
self.mean = []
var = []
z = np.zeros(self.numStates)
unique, counts = np.unique(self.particles[:,0], return_counts=True)
self.d = dict(zip(unique, counts))
for i in range(self.numStates):
p = [x[1+i] for x in self.particles]
self.mean.append(np.mean(p))
var.append(np.var(p))
            if i in self.d:
z[i] = self.d[i]
z = z/np.sum(z)
if not self.prevMax == z.argmax():
self.cp.append(self.t)
self.prevMax = z.argmax()
self.t += 1
return z, self.mean, var
def markovTransition(self, p):
observations = len(self.weights)#self.t
stateProbList = np.zeros(self.numStates+1)
#probs for existing states
for state in self.d:
#print state
stateProbList[state] = self.d[state]/(observations+self.alpha)
#prob for new state
stateProbList[-1] = self.alpha/(observations+self.alpha)
#draw a state
cumsum = np.cumsum(stateProbList)
cumsum[-1] = 1
state = np.searchsorted(cumsum, np.random.uniform())
print stateProbList
print cumsum
print '----'
        if state >= self.numStates:
            print 'new state'
            # TODO: opening a new CRP state (i.e. adding a new theta to every particle)
            # is not implemented yet; keep the particle's current state so that the
            # index into its theta values stays valid.
            return p
        p[0]=state
return p
def multinomialResampling(self):
N = len(self.weights)
cumsum = np.cumsum(self.weights)
cumsum[-1] = 1
uniform = np.random.uniform(size=N)
indexes = np.searchsorted(cumsum, uniform)
# resample according to indexes
self.particles = self.particles[indexes]
for i,t in enumerate(self.particles[0,1:]):
self.particles[:,i+1] += np.random.normal(scale=1,size=N)
self.weights.fill(1.0 / N)
def run(self, data):
particles = []
z = []
mean = []
var = []
for i, d in enumerate(data):
z_t, m, v = self.update(d)
z.append(list(z_t))
mean.append(list(m))
var.append(list(v))
return z, mean, var, self.cp
def getBCPinfo(self):
maxZ = self.prevMax
return self.mean[maxZ]
class BootstrapFilterOpperMultiState(object):
def __init__(self, d=1, likelihood=(0,1), prior=(1,-1), N=2000, \
noise = 0.1, particles=None,):
self.noise = noise
self.likelihood = likelihood
self.numStates = len(prior)
self.prior = prior
self.cp = []
self.prevMax = -1
self.t=1
if particles is not None:
self.particles = particles
self.weights = np.ones(len(self.particles))
else:
self.weights = np.ones(N)
state = np.random.randint(0, high=len(prior), size=N)
p = [state]
for a in prior:
p.append(np.random.normal(loc=a, size=N))
self.particles = np.stack(p, axis=-1)
def update(self, datum):
for i, p in enumerate(self.particles):
self.particles[i] = self.markovTransition(p)
#weightParticle
self.weights[i] = scipy.stats.norm(datum, \
1).pdf(p[p[0]+1])
#normalise weights
weightSum = np.sum(self.weights)
self.weights = self.weights/weightSum
if 1. / np.sum(np.square(self.weights)) < len(self.weights) / 2:
self.multinomialResampling()
self.mean = []
var = []
z = np.zeros(len(self.prior))
unique, counts = np.unique(self.particles[:,0], return_counts=True)
d = dict(zip(unique, counts))
for i in range(len(self.prior)):
p = [x[1+i] for x in self.particles]
self.mean.append(np.mean(p))
var.append(np.var(p))
if i in d:
z[i] = d[i]
z = z/np.sum(z)
if not self.prevMax == z.argmax():
self.cp.append(self.t)
self.prevMax = z.argmax()
self.t += 1
return z, self.mean, var
def markovTransition(self, p):
if np.random.rand() < 0.9 or self.numStates == 1:
return p
else:
newState = np.random.randint(0, high=self.numStates)
while newState == p[0]:
newState = np.random.randint(0, high=self.numStates)
p[0] = newState
return p
def multinomialResampling(self):
N = len(self.weights)
cumsum = np.cumsum(self.weights)
cumsum[-1] = 1
uniform = np.random.uniform(size=N)
indexes = np.searchsorted(cumsum, uniform)
# resample according to indexes
self.particles = self.particles[indexes]
for i,t in enumerate(self.particles[0,1:]):
self.particles[:,i+1] += np.random.normal(scale=1,size=N)
self.weights.fill(1.0 / N)
def run(self, data):
particles = []
z = []
mean = []
var = []
for i, d in enumerate(data):
z_t, m, v = self.update(d)
z.append(list(z_t))
mean.append(list(m))
var.append(list(v))
return z, mean, var, self.cp
def getBCPinfo(self):
maxZ = self.prevMax
return self.mean[maxZ]
# +
def plotMultiStateParticleSub(data, z, m, v, cp, trueCP):
f, axarr = plt.subplots(4, sharex=True, figsize=(9, 6))
axarr[0].set_xlim([-10,len(data)+10])
################ PREPARE DATA ################
states = len(z[0])
zData = []
meanData = []
varData = []
for i in range(states):
zData.append(([],range(len(data))))
meanData.append(([],range(len(data))))
varData.append(([],range(len(data))))
for t,z_t in enumerate(z):
if len(z_t)>states:
states += 1
zData.append(([],range(t,len(data))))
meanData.append(([],range(t,len(data))))
varData.append(([],range(t,len(data))))
for i,z_t_i in enumerate(z_t):
zData[i][0].append(z_t_i)
meanData[i][0].append(m[t][i])
varData[i][0].append(v[t][i])
################ PLOT DATA ################
axarr[0].plot(range(0,len(data)), data, 'r.')
axarr[0].set_title("Data")
################ PLOT Z ################
for z in zData:
axarr[1].plot(z[1][:len(z[0])], z[0])
axarr[1].set_ylim([-0.2,1.2])
axarr[1].set_title("State Probability")
axarr[1].set_yticks((0,0.5,1))
################ PLOT MEAN AND VAR################
for m, v in zip(meanData, varData):
lowV = [mi-vi for (mi, vi) in zip(m[0],v[0])]
highV = [mi+vi for (mi, vi) in zip(m[0],v[0])]
axarr[2].plot(m[1][:len(m[0])], m[0])
axarr[2].fill_between(m[1][:len(m[0])], lowV, highV, alpha=0.2)
axarr[2].set_title("Estimated Means")
    axarr[2].set_ylim(min(data)-2, max(data)+2)
axarr[2].scatter(range(0,len(data)), data, marker='x', color='red')
############### CHANGEPOINT ANALYSIS ##################
for p in cp:
axarr[3].axvline(x=p-1, ymin=0, ymax = 0.5, color='red', alpha=1.0)
for p in trueCP:
axarr[3].axvline(x=p-1, ymin=0.5, ymax = 1, color='green', alpha=1.0)
axarr[3].axvline(x=cp[0]-1, ymin=0, ymax = 0.5, color='red', alpha=1.0, label='Prediction')
axarr[3].axvline(x=trueCP[0]-1, ymin=0.5, ymax = 1, color='green', alpha=1.0, label='Truth')
axarr[3].grid(axis='y')
axarr[3].set_yticks(())
axarr[3].set_title("Predicted Change Points")
axarr[3].legend(frameon=True, framealpha=0.6)
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)
# -
def markovTransition(p,n):
if np.random.rand() < 0.95 or n==1:
return p, False
else:
newState = np.random.randint(0, high=n)
while newState == p:
newState = np.random.randint(0, high=n)
p = newState
return p, True
def GaussianTestData(mean=[-5,5], var=[1,1], l=500):
if len(mean) != len(var):
raise Exception("mean and var need to have the same amount of values")
n = len(mean)
currState = np.random.randint(n)
states = []
data = []
trueCP = []
for i in range(l):
currState, cp = markovTransition(currState,n)
states.append(currState)
x = np.random.normal(loc=mean[currState] , scale=var[currState])
data.append(x)
if cp:
trueCP.append(i+1)
return data, trueCP
| 10,762 |
/python/examples/example_splines/ExampleSplines.ipynb
|
593c84e3b9b80d7ea3dbc74f47120d0e0ca6e3f9
|
[
"CC0-1.0",
"MIT",
"BSD-3-Clause",
"BSD-2-Clause"
] |
permissive
|
AMICI-dev/AMICI
|
https://github.com/AMICI-dev/AMICI
| 71 | 20 |
NOASSERTION
| 2023-08-31T10:56:38 | 2023-08-29T14:14:22 |
Jupyter Notebook
|
Jupyter Notebook
| false | false |
.py
| 562,040 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import time
plt.rcParams['figure.figsize'] = (20.0, 15.0)
img_01 = cv2.imread('Scripts/images/scene1.row3.col1.ppm')
img_01 = cv2.cvtColor(img_01,cv2.COLOR_BGR2RGB)
img_02 = cv2.imread('Scripts/images/scene1.row3.col2.ppm')
img_02 = cv2.cvtColor(img_02,cv2.COLOR_BGR2RGB)
img_03 = cv2.imread('Scripts/images/scene1.row3.col3.ppm')
img_03 = cv2.cvtColor(img_03,cv2.COLOR_BGR2RGB)
img_04 = cv2.imread('Scripts/images/scene1.row3.col4.ppm')
img_04 = cv2.cvtColor(img_04,cv2.COLOR_BGR2RGB)
img_05 = cv2.imread('Scripts/images/scene1.row3.col5.ppm')
img_05 = cv2.cvtColor(img_05,cv2.COLOR_BGR2RGB)
images = [img_01, img_02, img_03, img_04, img_05]
truedisp = cv2.imread('tsukuba/truedisp.row3.col3.pgm', 0)
# +
# OpenCV's own implementation
def gaussian_pyramid_images_cv(img):
p_imgs = [img]
for i in np.arange(3):
img = cv2.pyrDown(img)
p_imgs.append(img)
return p_imgs
# Selfimplemented Gaussian Pyramid function.
# Convolves with a Gaussian kernel and then removes all even rows and columns.
# Output is four images at a higher pyramid level and therefore lower resolution.
def gaussian_pyramid_images(img):
p_imgs = [img]
for i in np.arange(3):
img = cv2.GaussianBlur(img, (5,5), 2)
img_new = []
for i in np.arange(1,img.shape[0],2):
img_row = []
for j in np.arange(1,img.shape[1],2):
img_row.append(img[i][j])
img_new.append(img_row)
img = np.array(img_new)
p_imgs.append(img)
return p_imgs
# -
def plot_images(imgs):
i, j = len(imgs)//2, len(imgs)//2
fig, ax = plt.subplots(i, j, figsize=(15,15))
ii, jj = np.meshgrid(np.arange(i), np.arange(j), indexing='ij')
axes = [axis for axis in zip(ii.ravel(), jj.ravel())]
for idx, axis in enumerate(axes):
ax[axis].imshow(imgs[idx], "gray")
plt.show()
pyramid_images_img1 = gaussian_pyramid_images(img_01)
plot_images(pyramid_images_img1)
pyramid_images_img5 = gaussian_pyramid_images(img_05)
plot_images(pyramid_images_img5)
# +
def create_patching_kernel(size):
ii, jj = np.meshgrid(np.arange(-(size//2),size//2+1), np.arange(-(size//2),size//2+1), indexing='ij')
return np.array([xy for xy in zip(jj.ravel(), ii.ravel())])
def create_image_grid(image):
ii, jj = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]), indexing='ij')
image_grid = []
for i in np.arange(image.shape[0]):
image_grid.append([ij for ij in zip(ii[i].ravel(), jj[i].ravel())])
return np.array(image_grid)
def create_patch_grid(img1, img2, size, M):
index, column = img1.shape[0], img2.shape[1]
patch_grid_coords = create_patching_kernel(size)
final_patch_grid = []
for i in np.arange(index):
patch_space = [row for row in np.arange(i-(size//2),i+(size//2+1)) if row >= 0 and row <= index-1]
patched_pixel_row = []
for j in np.arange(column):
patch_coords = np.array([np.array([i,j])+ij for ij in patch_grid_coords])
patch_coords_adj = patch_coords[((patch_coords >= 0).all(axis=1)) \
& ((patch_coords < [index,column]).all(axis=1))]
patched_pixel_row.append(patch_coords_adj)
final_patch_grid.append(patched_pixel_row)
return np.array(final_patch_grid)
# +
def pextract(img, y):
patch = np.empty_like(patch0)
patches = np.empty_like(patches0)
for x in range(0, ish[1]):
a = img[y:y+windowsize,x:x+windowsize,:]
#normalized cross-correlation prep
a=((a[:,:,0]+a[:,:,1]+a[:,:,2])/3).reshape(length)
#a =(a[:,:,0]+a[:,:,1]+a[:,:,2])
#a = a.reshape(length)
#patch = a / np.std(a)
patch = (a - np.mean(a)) / np.std(a)
patches[x] = patch
return patches
def hpatches(img1, img2, y):
padded1= cv2.copyMakeBorder(img1,pad,pad,pad,pad,cv2.BORDER_CONSTANT)
padded2= cv2.copyMakeBorder(img2,pad,pad,pad,pad,cv2.BORDER_CONSTANT)
patches1 = pextract(padded1, y)
patches2 = pextract(padded2, y)
return patches1, patches2
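# correspondence: dense block matching along horizontal scanlines. For every pixel it compares
# a mean-subtracted, std-normalised grayscale patch against candidate patches of the other image
# within a limited horizontal search range (roughly +/- width//limit pixels) and keeps the offset
# with the highest cross-correlation; it returns both the squared and the absolute disparity maps.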
def correspondence(img1,img2, wsize = 5, limit = 25):
    start = time.perf_counter()
global windowsize
windowsize = wsize
global pad
pad = (windowsize-1)//2
global length
length = windowsize**2
global ish
ish = img1.shape
global psh
psh = (img1.shape[0]+2*pad,img1.shape[1]+2*pad)
global patch0
patch0 = np.empty((windowsize,windowsize))
global patches0
patches0 = np.empty((psh[1]-windowsize+1, length))
result = np.zeros((ish[0], ish[1]))
result_nonsqrd = np.zeros((ish[0], ish[1]))
for y in range(ish[0]):
#print(y)
patches1, patches2 = hpatches(img1, img2, y)
for x in range(ish[1]):
#normalized cross-correlation
best = -999
#pa = patches1[x]
#a = pa / (np.std(pa) * len(pa))
r = [0, patches2.shape[0]]
rr = ish[1]//limit #limits search range
if x - rr > 0:
r[0] = x - rr
if x + rr < ish[1]:
r[1] = x + rr
for h in range(r[0],r[1]):
#b = patches2[h] / patches2[h]
ncc = np.correlate(patches1[x]/length, patches2[h])
if ncc > best:
best = ncc
besth =h
result[y,x]=(x-besth)**2
result_nonsqrd[y,x]=abs(x-besth)
#print(best,ncc, x,y,h, besth)
    elapsed = time.perf_counter()
elapsed = elapsed - start
print ("Done. Time spent executing correspondence: ", elapsed)
return result, result_nonsqrd
# -
def correspondence_upscaled(image1, image2, disparity, wsize, limit=25):
global windowsize
windowsize = wsize
global pad
pad = (windowsize-1)//2
global length
length = windowsize**2
global ish
ish = image1.shape
global psh
psh = (image1.shape[0]+2*pad,image1.shape[1]+2*pad)
global patch0
patch0 = np.empty((windowsize,windowsize))
global patches0
patches0 = np.empty((psh[1]-windowsize+1, length))
result = np.zeros((ish[0], ish[1]))
result_nonsqrd = np.zeros((ish[0], ish[1]))
results = []
results_nonsqrd = []
disp_map = disparity
    start = time.perf_counter()
for y in range(ish[0]):
#print(ish)
patches1, patches2 = hpatches(image1, image2, y)
for x in range(ish[1]):
#normalized cross-correlation
best = -999
besth = -999
r = [0, patches2.shape[0]]
#rr = int(abs(disp_map[y,x]))+1 #limits search range
M = limit
dispr = int(abs(disp_map[y,x]))
new_i = x + dispr
if new_i - M > 0:
r[0] = new_i - M
if new_i + M < ish[1]:
r[1] = new_i + M
if r == [0, patches2.shape[0]]:
print('No change')
#if x - rr > 0:
# r[0] = x - rr
#if x + rr < ish[1]:
# r[1] = x + rr
for h in range(r[0],r[1]):
ncc = np.correlate(patches1[x]/length, patches2[h])
if ncc > best:
best = ncc
besth =h
result[y,x]=(x-besth)**2
result_nonsqrd[y,x]=abs(x-besth)
results.append(result)
results_nonsqrd.append(result_nonsqrd)
disp_map = result
    elapsed = time.perf_counter()
elapsed = elapsed - start
print ("Done. Time spent executing correspondence: ", elapsed)
return result, result_nonsqrd
# +
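# run_stereo_matching: coarse-to-fine matching over the Gaussian pyramid. The disparity map is
# first estimated at the coarsest level, then repeatedly upsampled with cv2.pyrUp, multiplied by 2
# to account for the change of scale, and used to centre the search window of
# correspondence_upscaled at the next finer pyramid level.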
def run_stereo_matching(img1, img2, windowsize = 11):
pyramid_images_img1 = gaussian_pyramid_images(img1)
pyramid_images_img2 = gaussian_pyramid_images(img2)
disparity_map = correspondence(pyramid_images_img1[3], pyramid_images_img2[3], windowsize, limit = 8)[1]
#plt.imshow(disparity_map, "gray")
#plt.show()
imgs = []
for i in np.arange(3):
disparity_map = cv2.pyrUp(disparity_map)
disparity_map = np.multiply(disparity_map, 2)
#disparity_map = np.square(disparity_map)
disparity_map = correspondence_upscaled(pyramid_images_img1[2-i], pyramid_images_img2[2-i], disparity_map, windowsize)[1]
imgs.append(disparity_map)
#plt.imshow(disparity_map, "gray")
#plt.show()
#print(np.max(disparity_map))
plt.imshow(disparity_map, "gray")
plt.show()
#print(tester(disparity_map, truedisp))
return np.array(imgs)
disparity_run = run_stereo_matching(img_01,img_02)
# +
pyramid_images_imgcv1 = gaussian_pyramid_images_cv(img_01)
pyramid_images_imgcv2 = gaussian_pyramid_images_cv(img_02)
pyramid_images_img1 = gaussian_pyramid_images(img_01)
pyramid_images_img2 = gaussian_pyramid_images(img_02)
imgs_cv = []
for i in np.arange(4):
imgs_cv.append(correspondence(pyramid_images_imgcv1[i], pyramid_images_imgcv2[i], 7)[0])
imgs = []
for i in np.arange(4):
imgs.append(correspondence(pyramid_images_img1[i], pyramid_images_img2[i], 7)[0])
# -
#disparity_all_levels = run_stereo_matching(img_01, img_02, 7)
dispr_levels = run_stereo_matching(img_01, img_02, 7)
#np.unique(disparity_all_levels[2])
np.max(dispr_levels[2])
# +
def gaussian_pyramid_images_cv_up(images):
p_imgs = []
for i in np.arange(4):
img = images[i]
for j in np.arange(i):
img = cv2.pyrUp(img)
print(img.shape)
p_imgs.append(img)
return np.array(p_imgs)
disparity_images_upscaled = gaussian_pyramid_images_cv_up(disparity_all_levels)
image1_upscaled = gaussian_pyramid_images_cv_up(pyramid_images_img1)
image2_upscaled = gaussian_pyramid_images_cv_up(pyramid_images_img2)
plot_images(disparity_images_upscaled)
# +
pyramid_images_img1 = gaussian_pyramid_images(img_01)
pyramid_images_img2 = gaussian_pyramid_images(img_02)
#print(pyramid_images_img1[0].shape, pyramid_images_img2[0].shape, disparity_images_upscaled.shape)
level2 = correspondence_upscaled(image1_upscaled[3], image2_upscaled[3], disparity_images_upscaled[3], 7)
# +
plt.imshow(level2[0])
#level2[0].shape
print('Uniques in disp map 1')
print(np.unique(disparity_images_upscaled[0]))
print('Uniques in disp map 2')
print(np.unique(disparity_images_upscaled[1]))
print('Uniques in disp map 3')
print(np.unique(disparity_images_upscaled[2]))
print('Uniques in disp map 4')
print(np.unique(disparity_images_upscaled[3]))
print('Max in disp map 1')
print(np.max(disparity_images_upscaled[0]))
print('Max in disp map 2')
print(np.max(disparity_images_upscaled[1]))
print('Max in disp map 3')
print(np.max(disparity_images_upscaled[2]))
print('Max in disp map 4')
print(np.max(disparity_images_upscaled[3]))
plot_images(image1_upscaled)
# +
mean_img = np.mean(disparity_run, 0)
print(mean_img.shape)
smooth = cv2.GaussianBlur(mean_img, (7,7), 0)
plt.imshow(mean_img, 'gray')
plt.show()
plt.imshow(smooth, 'gray')
plt.show()
# -
def tester(test_this, true_disp):
# expected input shapes: (288, 384)
old_test_this = test_this
test_this = test_this/np.max(test_this)
true_disp = true_disp/np.max(true_disp)
test = ((test_this - true_disp) * np.max(old_test_this)).reshape(-1)
mde = test.mean()
testsq = ((test_this - true_disp) ** 2).reshape(-1)
mse = testsq.mean()
print("Mean disparity error:", mde, "\n", "Mean sqared error:", mse)
stdev = np.std(test)
print("Standard deviation of disparity error:", stdev)
i=0
for x in test:
if abs(x)>=3:
i += 1
p = i/len(test)
print("Number and fraction of large errors (error ≥ 3 pixels):", i, p)
print(test_this - true_disp)
# +
truedisp = cv2.imread('tsukuba/truedisp.row3.col3.pgm', 0)
#plt.imshow(truedisp, "gray")
#plt.show()
#plt.imshow(mean_img, 'gray')
#plt.show()
plt.imshow(disparity_run[0])
#tester(mean_img, truedisp)
#tester(disparity_run[0], truedisp)
# And here we have a periodic spline.
spline = amici.splines.CubicHermiteSpline(
sbml_id='f',
evaluate_at=amici.sbml_utils.amici_time_symbol,
nodes=amici.splines.UniformGrid(0, 1, number_of_nodes=3),
    values_at_nodes=[-2, 1, -2],  # for periodic extrapolation the first and last values must coincide
extrapolate='periodic',
)
spline.plot(xlabel='time', xlim=(0, 3));
sbml_doc = libsbml.SBMLReader().readSBML('example_splines.xml')
sbml_model = sbml_doc.getModel()
spline.add_to_sbml_model(sbml_model)
simulate(sbml_model, T=3);
# The spline annotation in this case is
# ```xml
# <amici:spline xmlns:amici="https://github.com/AMICI-dev/AMICI" amici:spline_method="cubic_hermite" amici:spline_bc="periodic" amici:spline_extrapolate="periodic">
# <amici:spline_evaluation_point> ... </amici:spline_evaluation_point>
# <amici:spline_uniform_grid> ... </amici:spline_uniform_grid>
# <amici:spline_values> ... </amici:spline_values>
# </amici:spline>
# ```
# ---
# We can modify the spline's boundary conditions, for example requiring that the derivative is zero.
spline = amici.splines.CubicHermiteSpline(
sbml_id='f',
evaluate_at=amici.sbml_utils.amici_time_symbol,
nodes=amici.splines.UniformGrid(0, 1, number_of_nodes=4),
values_at_nodes=[-1, 2, 4, 2],
bc='zeroderivative',
)
spline.plot(xlabel='time');
# ```xml
# <amici:spline xmlns:amici="https://github.com/AMICI-dev/AMICI" amici:spline_method="cubic_hermite" amici:spline_bc="zeroderivative">
# <amici:spline_evaluation_point> ... </amici:spline_evaluation_point>
# <amici:spline_uniform_grid> ... </amici:spline_uniform_grid>
# <amici:spline_values> ... </amici:spline_values>
# </amici:spline>
# ```
# Or we can impose natural boundary conditions.
spline = amici.splines.CubicHermiteSpline(
sbml_id='f',
evaluate_at=amici.sbml_utils.amici_time_symbol,
nodes=amici.splines.UniformGrid(0, 1, number_of_nodes=4),
values_at_nodes=[-1, 2, 4, 2],
bc='natural',
)
spline.plot(xlabel='time');
# ```xml
# <amici:spline xmlns:amici="https://github.com/AMICI-dev/AMICI" amici:spline_method="cubic_hermite" amici:spline_bc="natural">
# <amici:spline_evaluation_point> ... </amici:spline_evaluation_point>
# <amici:spline_uniform_grid> ... </amici:spline_uniform_grid>
# <amici:spline_values> ... </amici:spline_values>
# </amici:spline>
# ```
# ---
# Even if all node values are positive, due to under-shooting a cubic Hermite spline can assume negative values. In certain settings (e.g., when the spline represents a chemical reaction rate) this should be avoided. A possible solution is to carry out the interpolation in log-space (the resulting function is no longer a spline, but it is still a smooth interpolant).
spline = amici.splines.CubicHermiteSpline(
sbml_id='f',
evaluate_at=amici.sbml_utils.amici_time_symbol,
nodes=amici.splines.UniformGrid(0, 1, number_of_nodes=5),
values_at_nodes=[2, 0.05, 0.1, 2, 1],
)
# This spline assumes negative values!
spline.plot(xlabel='time');
spline = amici.splines.CubicHermiteSpline(
sbml_id='f',
evaluate_at=amici.sbml_utils.amici_time_symbol,
nodes=amici.splines.UniformGrid(0, 1, number_of_nodes=5),
values_at_nodes=[2, 0.05, 0.1, 2, 1],
logarithmic_parametrization=True,
)
# Instead of under-shooting we now have over-shooting,
# but at least the "spline" is always positive
spline.plot(xlabel='time');
# The spline annotation in this case is
# ```xml
# <amici:spline xmlns:amici="https://github.com/AMICI-dev/AMICI" amici:spline_method="cubic_hermite" amici:spline_logarithmic_parametrization="true">
# <amici:spline_evaluation_point> ... </amici:spline_evaluation_point>
# <amici:spline_uniform_grid> ... </amici:spline_uniform_grid>
# <amici:spline_values> ... </amici:spline_values>
# </amici:spline>
# ```
# ### Comparing model import time for the SBML-native piecewise implementation and the AMICI spline implementation
import pandas as pd
import seaborn as sns
import tempfile
import time
nruns = 6 # number of replicates
num_nodes = [5, 10, 15, 20, 25, 30, 40] # benchmark model import for these node numbers
amici_only_nodes = [50, 75, 100, 125, 150, 175, 200, 225, 250] # for these node numbers, only benchmark the annotation-based implementation
# If running as a Github action, just do the minimal amount of work required to check whether the code is working
if os.getenv('GITHUB_ACTIONS') is not None:
nruns = 1
num_nodes = [4]
amici_only_nodes = [5]
df = None
for n in num_nodes + amici_only_nodes:
# Create model
spline = amici.splines.CubicHermiteSpline(
sbml_id='f',
evaluate_at=amici.sbml_utils.amici_time_symbol,
nodes=amici.splines.UniformGrid(0, 1, number_of_nodes=n),
values_at_nodes=np.random.rand(n),
)
sbml_doc = libsbml.SBMLReader().readSBML('example_splines.xml')
sbml_model = sbml_doc.getModel()
spline.add_to_sbml_model(sbml_model)
# Benchmark model creation
timings_amici = []
timings_piecewise = []
for _ in range(nruns):
with tempfile.TemporaryDirectory() as tmpdir:
t0 = time.perf_counter_ns()
amici.SbmlImporter(sbml_model).sbml2amici('benchmark', tmpdir)
dt = time.perf_counter_ns() - t0
timings_amici.append(dt / 1e9)
if n in num_nodes:
with tempfile.TemporaryDirectory() as tmpdir:
t0 = time.perf_counter_ns()
amici.SbmlImporter(sbml_model, discard_annotations=True).sbml2amici('benchmark', tmpdir)
dt = time.perf_counter_ns() - t0
timings_piecewise.append(dt / 1e9)
# Append benchmark data to dataframe
df_amici = pd.DataFrame(dict(num_nodes=n, time=timings_amici, use_annotations=True))
df_piecewise = pd.DataFrame(dict(num_nodes=n, time=timings_piecewise, use_annotations=False))
if df is None:
df = pd.concat([df_amici, df_piecewise], ignore_index=True, verify_integrity=True)
else:
df = pd.concat([df, df_amici, df_piecewise], ignore_index=True, verify_integrity=True)
kwargs = dict(markersize=7.5)
df_avg = df.groupby(['use_annotations', 'num_nodes']).mean().reset_index()
fig, ax = plt.subplots(1, 1, figsize=(6.5, 3.5))
ax.plot(df_avg[np.logical_not(df_avg['use_annotations'])]['num_nodes'], df_avg[np.logical_not(df_avg['use_annotations'])]['time'], '.', label='MathML piecewise', **kwargs)
ax.plot(df_avg[df_avg['use_annotations']]['num_nodes'], df_avg[df_avg['use_annotations']]['time'], '.', label='AMICI annotations', **kwargs)
ax.set_ylabel('model import time (s)')
ax.set_xlabel('number of spline nodes')
ax.set_yscale('log')
ax.yaxis.set_major_formatter(mpl.ticker.FuncFormatter(lambda x, pos: f"{x:.0f}"))
ax.xaxis.set_ticks([10, 20, 30, 40, 60, 70, 80, 90, 110, 120, 130, 140, 160, 170, 180, 190, 210, 220, 230, 240, 260], minor=True)
ax.yaxis.set_ticks([20, 30, 40, 50, 60, 70, 80, 90, 200, 300, 400], ['20', '30', '40', '50', None, None, None, None, '200', '300', '400'], minor=True)
ax.legend()
ax.figure.tight_layout()
#ax.figure.savefig('benchmark_import.pdf')
| 19,476 |
/assignment_4.ipynb
|
7dfebb2f211587ce68295c3e452fbd81ddad2ec1
|
[] |
no_license
|
mustufainahmed/python_assignment
|
https://github.com/mustufainahmed/python_assignment
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 7,684 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
info = {
"first_name":"Mustufain",
"last_name":"Ahmed",
"age":"22",
"city":"Karachi"
}
info.update({"qualification":"Intermediate"}) #After add qualification key:
print(info)
info.update({"qualification":"BSCS"}) #After update qualification key:
print(info)
del info["qualification"] #After delete qualification key:
print(info)
# +
print("Wellcome to Online ticket Booking System")
pre = int(input("How many tickets you want?: "))
cost = 0
for c in range(1,pre+1):
age = int(input("Enter age: "))
if age <= 3 and pre == 1:
print("You are not allow")
elif age <= 12 and age >= 3:
print("Your ticket price will be 10$: ")
cost += 10
elif age >= 12:
print("Your ticket price is will be 15$: ")
cost += 15
print("Total price: "+str(cost))
# +
cities = {
    "jcd_city":{
        "information":{
            "country":"Pakistan",
            "population":"200,845",
            "fact":"Jacobabad is the hottest place in Pakistan"
        }
    },
    "jeddah_city":{
        "information":{
            "country":"Saudi Arabia",
            "population":"3.431 million",
            "fact":"Largest city in Makkah Province"
        }
    },
    "oman_city":{
        "information":{
            "country":"Oman",
            "population":"4.636 million",
            "fact":"Muscat is the capital of Oman"
        }
    },
    "dubai_city":{
        "information":{
            "country":"UAE",
            "population":"3.137 million",
            "fact":"Dubai is a beautiful city in the UAE"
        }
    }
}
print("We have four cities in the dictionary\n")
jacobabad = cities["jcd_city"]
print(jacobabad,"\n\n\t--------------------------------------------------------------------------\n")
jeddah = cities["jeddah_city"]
print(jeddah,"\n\n\t--------------------------------------------------------------------------\n")
oman = cities["oman_city"]
print(oman,"\n\n\t--------------------------------------------------------------------------\n")
dubai = cities["dubai_city"]
print(dubai,"\n\n\t--------------------------------------------------------------------------\n")
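# A single nested value can also be reached directly by chaining keys (illustrative):
print(cities["jcd_city"]["information"]["fact"])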
# +
def fav_book(title):
    print("One of my favorite books is", title)
fav_book(title="Web Development using Python Django")
# +
import random
print("Guess the number (game)")
number = random.randint(1,30)
print("You have three chances to win this game.")
for attempt in range(3):
    user_number = int(input("Enter any number between 1 and 30\n"))
    if user_number == number:
        print("You win!")
        break
    if user_number > number:
        print("The number is less than", user_number)
    else:
        print("The number is greater than", user_number)
else:
    print("You lose.")
    print("The randomly generated number was:", number)
# -
| 3,105 |
/Day15/.ipynb_checkpoints/Jan28b-checkpoint.ipynb
|
3351d88c7ddae5245ab0ea961c67d770d8522bde
|
[] |
no_license
|
missbelinda/Modul-3-Purwadhika
|
https://github.com/missbelinda/Modul-3-Purwadhika
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 6,794 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3.8.13 ('Monai')
# language: python
# name: python3
# ---
# # 3D Multi-organ Segmentation with Swin UNETR (BTCV Challenge)
#
#
# This tutorial uses a Swin UNETR [1] model for the task of multi-organ segmentation using the BTCV challenge dataset. The architecture of Swin UNETR is illustrated below:
# 
#
# The following features are included in this tutorial:
# 1. Transforms for dictionary format data.
# 1. Define a new transform according to MONAI transform API.
# 1. Load Nifti image with metadata, load a list of images and stack them.
# 1. Randomly adjust intensity for data augmentation.
# 1. Cache IO and transforms to accelerate training and validation.
# 1. Swin UNETR model, DiceCE loss function, Mean Dice metric for multi-organ segmentation task.
#
# For this tutorial, the dataset needs to be downloaded from: https://www.synapse.org/#!Synapse:syn3193805/wiki/217752.
#
# In addition, the json file for data splits needs to be downloaded from this [link](https://drive.google.com/file/d/1t4fIQQkONv7ArTSZe4Nucwkk1KfdUDvW/view?usp=sharing). Once downloaded, place the json file in the same folder as the dataset.
#
# For the BTCV dataset, under Institutional Review Board (IRB) supervision, 50 abdominal CT scans were randomly selected from a combination of an ongoing colorectal cancer chemotherapy trial and a retrospective ventral hernia study. The 50 scans were captured during the portal venous contrast phase with variable volume sizes (512 x 512 x 85 - 512 x 512 x 198) and fields of view (approx. 280 x 280 x 280 mm3 - 500 x 500 x 650 mm3). The in-plane resolution varies from 0.54 x 0.54 mm2 to 0.98 x 0.98 mm2, while the slice thickness ranges from 2.5 mm to 5.0 mm.
#
# Target: 13 abdominal organs including 1. Spleen 2. Right Kidney 3. Left Kidney 4. Gallbladder 5. Esophagus 6. Liver 7. Stomach 8. Aorta 9. IVC 10. Portal and Splenic Veins 11. Pancreas 12. Right adrenal gland 13. Left adrenal gland.
#
# Modality: CT
# Size: 30 3D volumes (24 Training + 6 Testing)
# Challenge: BTCV MICCAI Challenge
#
# The following figure shows image patches with the organ sub-regions that are annotated in the CT (top left) and the final labels for the whole dataset (right).
#
# Data, figures and resources are taken from:
#
#
# 1. [Self-Supervised Pre-Training of Swin Transformers
# for 3D Medical Image Analysis](https://arxiv.org/abs/2111.14791)
#
# 2. [Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images](https://arxiv.org/abs/2201.01266)
#
# 3. [High-resolution 3D abdominal segmentation with random patch network fusion (MIA)](https://www.sciencedirect.com/science/article/abs/pii/S1361841520302589)
#
# 4. [Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning (MIA)](https://www.sciencedirect.com/science/article/abs/pii/S1361841515000766?via%3Dihub)
#
#
# 
#
#
#
# The image patches show anatomies of a subject, including:
# 1. large organs: spleen, liver, stomach.
# 2. Smaller organs: gallbladder, esophagus, kidneys, pancreas.
# 3. Vascular tissues: aorta, IVC, P&S Veins.
# 4. Glands: left and right adrenal gland
#
# If you find this tutorial helpful, please consider citing [1] and [2]:
#
# [1]: Tang, Y., Yang, D., Li, W., Roth, H.R., Landman, B., Xu, D., Nath, V. and Hatamizadeh, A., 2022. Self-supervised pre-training of swin transformers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20730-20740).
#
# [2]: Hatamizadeh, A., Nath, V., Tang, Y., Yang, D., Roth, H. and Xu, D., 2022. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. arXiv preprint arXiv:2201.01266.
#
# [](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/3d_segmentation/swin_unetr_btcv_segmentation_3d.ipynb)
#
# # Pre-trained Swin UNETR Encoder
#
# We use weights from self-supervised pre-training of the Swin UNETR encoder (3D Swin Transformer) on a cohort of 5050 CT scans from publicly available datasets. The encoder is pre-trained using reconstruction, rotation prediction and contrastive learning pre-text tasks as shown below. For more details, please refer to [1] (CVPR paper) and see this [repository](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/Pretrain).
#
# 
#
# Please download the pre-trained weights from this [link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) and place it in the root directory of this tutorial.
#
# If training from scratch is desired, please skip the step for initializing from pre-trained weights.
# ## Setup environment
# !pip install git+https://github.com/Project-MONAI/MONAI#[email protected]+271.g07de215c
# !pip install nibabel==3.1.1
# !pip install tqdm==4.63.0
# !pip install einops
# !python -c "import matplotlib" || pip install -q matplotlib
# %matplotlib inline
# +
import os
import shutil
import tempfile
import matplotlib.pyplot as plt
import numpy as np
from tqdm import tqdm
from monai.losses import DiceCELoss
from monai.inferers import sliding_window_inference
from monai.transforms import (
AsDiscrete,
AddChanneld,
Compose,
CropForegroundd,
LoadImaged,
Orientationd,
RandFlipd,
RandCropByPosNegLabeld,
RandShiftIntensityd,
ScaleIntensityRanged,
Spacingd,
RandRotate90d,
ToTensord,
)
from monai.config import print_config
from monai.metrics import DiceMetric
from monai.networks.nets import SwinUNETR
from monai.data import (
DataLoader,
CacheDataset,
load_decathlon_datalist,
decollate_batch,
)
import torch
print_config()
# -
# ## Setup transforms for training and validation
# To save on GPU memory utilization, the num_samples can be reduced to 2.
# +
num_samples = 4
train_transforms = Compose(
[
LoadImaged(keys=["image", "label"]),
AddChanneld(keys=["image", "label"]),
Orientationd(keys=["image", "label"], axcodes="RAS"),
Spacingd(
keys=["image", "label"],
pixdim=(1.5, 1.5, 2.0),
mode=("bilinear", "nearest"),
),
ScaleIntensityRanged(
keys=["image"],
a_min=-175,
a_max=250,
b_min=0.0,
b_max=1.0,
clip=True,
),
CropForegroundd(keys=["image", "label"], source_key="image"),
RandCropByPosNegLabeld(
keys=["image", "label"],
label_key="label",
spatial_size=(96, 96, 96),
pos=1,
neg=1,
num_samples=num_samples,
image_key="image",
image_threshold=0,
),
RandFlipd(
keys=["image", "label"],
spatial_axis=[0],
prob=0.10,
),
RandFlipd(
keys=["image", "label"],
spatial_axis=[1],
prob=0.10,
),
RandFlipd(
keys=["image", "label"],
spatial_axis=[2],
prob=0.10,
),
RandRotate90d(
keys=["image", "label"],
prob=0.10,
max_k=3,
),
RandShiftIntensityd(
keys=["image"],
offsets=0.10,
prob=0.50,
),
ToTensord(keys=["image", "label"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["image", "label"]),
AddChanneld(keys=["image", "label"]),
Orientationd(keys=["image", "label"], axcodes="RAS"),
Spacingd(
keys=["image", "label"],
pixdim=(1.5, 1.5, 2.0),
mode=("bilinear", "nearest"),
),
ScaleIntensityRanged(
keys=["image"], a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True
),
CropForegroundd(keys=["image", "label"], source_key="image"),
ToTensord(keys=["image", "label"]),
]
)
# -
# ## Download the dataset and format it in the folder
# 1. Download the dataset from here: https://www.synapse.org/#!Synapse:syn3193805/wiki/89480
# 2. Put the images in ./data/imagesTr
# 3. Put the labels in ./data/labelsTr
# 4. Make the JSON file accordingly: ./data/dataset_0.json
# Example of JSON file:
# {
# "description": "btcv yucheng",
# "labels": {
# "0": "background",
# "1": "spleen",
# "2": "rkid",
# "3": "lkid",
# "4": "gall",
# "5": "eso",
# "6": "liver",
# "7": "sto",
# "8": "aorta",
# "9": "IVC",
# "10": "veins",
# "11": "pancreas",
# "12": "rad",
# "13": "lad"
# },
# "licence": "yt",
# "modality": {
# "0": "CT"
# },
# "name": "btcv",
# "numTest": 20,
# "numTraining": 80,
# "reference": "Vanderbilt University",
# "release": "1.0 06/08/2015",
# "tensorImageSize": "3D",
# "test": [
# "imagesTs/img0061.nii.gz",
# "imagesTs/img0062.nii.gz",
# "imagesTs/img0063.nii.gz",
# "imagesTs/img0064.nii.gz",
# "imagesTs/img0065.nii.gz",
# "imagesTs/img0066.nii.gz",
# "imagesTs/img0067.nii.gz",
# "imagesTs/img0068.nii.gz",
# "imagesTs/img0069.nii.gz",
# "imagesTs/img0070.nii.gz",
# "imagesTs/img0071.nii.gz",
# "imagesTs/img0072.nii.gz",
# "imagesTs/img0073.nii.gz",
# "imagesTs/img0074.nii.gz",
# "imagesTs/img0075.nii.gz",
# "imagesTs/img0076.nii.gz",
# "imagesTs/img0077.nii.gz",
# "imagesTs/img0078.nii.gz",
# "imagesTs/img0079.nii.gz",
# "imagesTs/img0080.nii.gz"
# ],
# "training": [
# {
# "image": "imagesTr/img0001.nii.gz",
# "label": "labelsTr/label0001.nii.gz"
# },
# {
# "image": "imagesTr/img0002.nii.gz",
# "label": "labelsTr/label0002.nii.gz"
# },
# {
# "image": "imagesTr/img0003.nii.gz",
# "label": "labelsTr/label0003.nii.gz"
# },
# {
# "image": "imagesTr/img0004.nii.gz",
# "label": "labelsTr/label0004.nii.gz"
# },
# {
# "image": "imagesTr/img0005.nii.gz",
# "label": "labelsTr/label0005.nii.gz"
# },
# {
# "image": "imagesTr/img0006.nii.gz",
# "label": "labelsTr/label0006.nii.gz"
# },
# {
# "image": "imagesTr/img0007.nii.gz",
# "label": "labelsTr/label0007.nii.gz"
# },
# {
# "image": "imagesTr/img0008.nii.gz",
# "label": "labelsTr/label0008.nii.gz"
# },
# {
# "image": "imagesTr/img0009.nii.gz",
# "label": "labelsTr/label0009.nii.gz"
# },
# {
# "image": "imagesTr/img0010.nii.gz",
# "label": "labelsTr/label0010.nii.gz"
# },
# {
# "image": "imagesTr/img0021.nii.gz",
# "label": "labelsTr/label0021.nii.gz"
# },
# {
# "image": "imagesTr/img0022.nii.gz",
# "label": "labelsTr/label0022.nii.gz"
# },
# {
# "image": "imagesTr/img0023.nii.gz",
# "label": "labelsTr/label0023.nii.gz"
# },
# {
# "image": "imagesTr/img0024.nii.gz",
# "label": "labelsTr/label0024.nii.gz"
# },
# {
# "image": "imagesTr/img0025.nii.gz",
# "label": "labelsTr/label0025.nii.gz"
# },
# {
# "image": "imagesTr/img0026.nii.gz",
# "label": "labelsTr/label0026.nii.gz"
# },
# {
# "image": "imagesTr/img0027.nii.gz",
# "label": "labelsTr/label0027.nii.gz"
# },
# {
# "image": "imagesTr/img0028.nii.gz",
# "label": "labelsTr/label0028.nii.gz"
# },
# {
# "image": "imagesTr/img0029.nii.gz",
# "label": "labelsTr/label0029.nii.gz"
# },
# {
# "image": "imagesTr/img0030.nii.gz",
# "label": "labelsTr/label0030.nii.gz"
# },
# {
# "image": "imagesTr/img0031.nii.gz",
# "label": "labelsTr/label0031.nii.gz"
# },
# {
# "image": "imagesTr/img0032.nii.gz",
# "label": "labelsTr/label0032.nii.gz"
# },
# {
# "image": "imagesTr/img0033.nii.gz",
# "label": "labelsTr/label0033.nii.gz"
# },
# {
# "image": "imagesTr/img0034.nii.gz",
# "label": "labelsTr/label0034.nii.gz"
# }
# ],
# "validation": [
# {
# "image": "imagesTr/img0035.nii.gz",
# "label": "labelsTr/label0035.nii.gz"
# },
# {
# "image": "imagesTr/img0036.nii.gz",
# "label": "labelsTr/label0036.nii.gz"
# },
# {
# "image": "imagesTr/img0037.nii.gz",
# "label": "labelsTr/label0037.nii.gz"
# },
# {
# "image": "imagesTr/img0038.nii.gz",
# "label": "labelsTr/label0038.nii.gz"
# },
# {
# "image": "imagesTr/img0039.nii.gz",
# "label": "labelsTr/label0039.nii.gz"
# },
# {
# "image": "imagesTr/img0040.nii.gz",
# "label": "labelsTr/label0040.nii.gz"
# }
# ]
# }
#
# +
root_dir = "./"
data_dir = "./data/"
split_JSON = "dataset_1.json"
datasets = data_dir + split_JSON
datalist = load_decathlon_datalist(datasets, True, "training")
val_files = load_decathlon_datalist(datasets, True, "validation")
train_ds = CacheDataset(
data=datalist,
transform=train_transforms,
#cache_num=24,
#cache_rate=1.0,
#num_workers=8,
)
train_loader = DataLoader(
train_ds, batch_size=1, shuffle=True,
# num_workers=8,
# pin_memory=True
)
val_ds = CacheDataset(
data=val_files,
transform=val_transforms,
#cache_num=6, cache_rate=1.0,
#num_workers=4
)
val_loader = DataLoader(
val_ds, batch_size=1, shuffle=False,
#num_workers=4,
#pin_memory=True
)
# -
# ## Check data shape and visualize
slice_map = {
"img0035.nii.gz": 170,
"img0036.nii.gz": 230,
"img0037.nii.gz": 204,
"img0038.nii.gz": 204,
"img0039.nii.gz": 204,
"img0040.nii.gz": 180,
}
case_num = 1
img_name = os.path.split(val_ds[case_num]['image_meta_dict']["filename_or_obj"])[1]
img = val_ds[case_num]["image"]
label = val_ds[case_num]["label"]
img_shape = img.shape
label_shape = label.shape
print(f"image shape: {img_shape}, label shape: {label_shape}")
plt.figure("image", (18, 6))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(img[0, :, :, slice_map[img_name]].detach().cpu(), cmap="gray")
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[0, :, :, slice_map[img_name]].detach().cpu())
plt.show()
# ### Create Swin UNETR model
#
# In this section, we create the Swin UNETR model for 14-class multi-organ segmentation. We use a feature size of 48, which is compatible with the self-supervised pre-trained weights. We also use gradient checkpointing (use_checkpoint) for more memory-efficient training.
# +
device = "cuda" if torch.cuda.is_available() else "cpu"
#device = "cpu"
if device == "cuda":
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
model = SwinUNETR(
img_size=(96, 96, 96),
in_channels=1,
out_channels=14,
feature_size=48,
use_checkpoint=True,
).to(device)
# -
# ### Initialize Swin UNETR encoder from self-supervised pre-trained weights
#
# In this section, we initialize the Swin UNETR encoder from weights downloaded from this [link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt). If training from scratch is desired, please skip this section.
weight = torch.load("./weights/model_swinvit.pt", map_location=device)
model.load_from(weights=weight)
print("Using pretrained self-supervised Swin UNETR backbone weights!")
# ### Optimizer and loss function
if device == "cuda":
torch.backends.cudnn.benchmark = True
loss_function = DiceCELoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-5)
# ### Execute a typical PyTorch training process
use_amp = True
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
# +
def validation(epoch_iterator_val):
model.eval()
with torch.no_grad():
for step, batch in enumerate(epoch_iterator_val):
val_inputs, val_labels = (batch["image"].to(device), batch["label"].to(device))
#val_inputs, val_labels = (batch["image"].cuda(), batch["label"].cuda())
val_outputs = sliding_window_inference(val_inputs, (96, 96, 96), 4, model)
val_labels_list = decollate_batch(val_labels)
val_labels_convert = [
post_label(val_label_tensor) for val_label_tensor in val_labels_list
]
val_outputs_list = decollate_batch(val_outputs)
val_output_convert = [
post_pred(val_pred_tensor) for val_pred_tensor in val_outputs_list
]
dice_metric(y_pred=val_output_convert, y=val_labels_convert)
epoch_iterator_val.set_description(
"Validate (%d / %d Steps)" % (global_step, 10.0)
)
mean_dice_val = dice_metric.aggregate().item()
dice_metric.reset()
return mean_dice_val
def train(global_step, train_loader, dice_val_best, global_step_best):
model.train()
epoch_loss = 0
step = 0
epoch_iterator = tqdm(
train_loader, desc="Training (X / X Steps) (loss=X.X)", dynamic_ncols=True
)
for step, batch in enumerate(epoch_iterator):
step += 1
x, y = (batch["image"].to(device), batch["label"].to(device))
#x, y = (batch["image"].cuda(), batch["label"].cuda())
#with torch.cuda.amp.autocast():
with torch.autocast(
device_type=device,
dtype=torch.float16 if device == 'cuda' else torch.bfloat16
):
logit_map = model(x)
loss = loss_function(logit_map, y)
# Scales loss. Calls backward() on scaled loss to create scaled gradients.
scaler.scale(loss).backward()
epoch_loss += loss.item()
# scaler.step() first unscales the gradients of the optimizer's assigned params.
# If these gradients do not contain infs or NaNs, optimizer.step() is then called,
# otherwise, optimizer.step() is skipped.
scaler.step(optimizer)
# Updates the scale for next iteration.
scaler.update()
optimizer.zero_grad(set_to_none=True)
epoch_iterator.set_description(
"Training (%d / %d Steps) (loss=%2.5f)"
% (global_step, max_iterations, loss)
)
'''
if (
global_step % eval_num == 0 and global_step != 0
) or global_step == max_iterations:
epoch_iterator_val = tqdm(
val_loader, desc="Validate (X / X Steps) (dice=X.X)", dynamic_ncols=True
)
dice_val = validation(epoch_iterator_val)
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
metric_values.append(dice_val)
if dice_val > dice_val_best:
dice_val_best = dice_val
global_step_best = global_step
torch.save(
model.state_dict(), os.path.join(root_dir, "weights", "best_metric_model.pth")
)
print(
"Model Was Saved ! Current Best Avg. Dice: {} Current Avg. Dice: {}".format(
dice_val_best, dice_val
)
)
else:
print(
"Model Was Not Saved ! Current Best Avg. Dice: {} Current Avg. Dice: {}".format(
dice_val_best, dice_val
)
)
'''
global_step += 1
return global_step, dice_val_best, global_step_best
max_iterations = 30000
eval_num = 500
post_label = AsDiscrete(to_onehot=14)
post_pred = AsDiscrete(argmax=True, to_onehot=14)
dice_metric = DiceMetric(include_background=True, reduction="mean", get_not_nans=False)
global_step = 0
dice_val_best = 0.0
global_step_best = 0
epoch_loss_values = []
metric_values = []
while global_step < max_iterations:
global_step, dice_val_best, global_step_best = train(
global_step, train_loader, dice_val_best, global_step_best
)
model.load_state_dict(torch.load(os.path.join(root_dir, "weights", "best_metric_model.pth")))
# -
torch.cuda.empty_cache()
print(
f"train completed, best_metric: {dice_val_best:.4f} "
f"at iteration: {global_step_best}"
)
# ### Plot the loss and metric
plt.figure("train", (12, 6))
plt.subplot(1, 2, 1)
plt.title("Iteration Average Loss")
x = [eval_num * (i + 1) for i in range(len(epoch_loss_values))]
y = epoch_loss_values
plt.xlabel("Iteration")
plt.plot(x, y)
plt.subplot(1, 2, 2)
plt.title("Val Mean Dice")
x = [eval_num * (i + 1) for i in range(len(metric_values))]
y = metric_values
plt.xlabel("Iteration")
plt.plot(x, y)
plt.show()
# ### Check best model output with the input image and label
case_num = 0
#model.load_state_dict(torch.load(os.path.join(root_dir, "weights", "best_metric_model.pth")))
model.eval()
with torch.no_grad():
img_name = os.path.split(val_ds[case_num]['image_meta_dict']["filename_or_obj"])[1]
img = val_ds[case_num]["image"].to(device)
label = val_ds[case_num]["label"].to(device)
val_inputs = torch.unsqueeze(img, 1)
val_labels = torch.unsqueeze(label, 1)
val_outputs = sliding_window_inference(
val_inputs, (96, 96, 96), 4, model, overlap=0.8
)
plt.figure("check", (18, 6))
plt.subplot(1, 3, 1)
plt.title("image")
plt.imshow(val_inputs.cpu().numpy()[0, 0, :, :, slice_map[img_name]], cmap="gray")
plt.subplot(1, 3, 2)
plt.title("label")
plt.imshow(val_labels.cpu().numpy()[0, 0, :, :, slice_map[img_name]])
plt.subplot(1, 3, 3)
plt.title("output")
plt.imshow(
torch.argmax(val_outputs, dim=1).detach().cpu()[0, :, :, slice_map[img_name]]
)
plt.show()
# ### Cleanup data directory
#
# Remove the data directory only if a temporary one was used. In the original MONAI tutorial, `directory` is read from the MONAI_DATA_DIRECTORY environment variable; here root_dir is hardcoded to "./", so guard against deleting the working directory.
directory = os.environ.get("MONAI_DATA_DIRECTORY")  # assumption: same environment variable as the original tutorial
if directory is None and root_dir != "./":
    shutil.rmtree(root_dir)
| 23,503 |
/chapter-10/.ipynb_checkpoints/chapter-10-checkpoint.ipynb
|
5031bf400e1c7d962430463664c704ac899129aa
|
[] |
no_license
|
Mumujane/book-code
|
https://github.com/Mumujane/book-code
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 5,116 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
import torch
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
# %matplotlib inline
# +
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize(mean=[0.5,0.5,0.5],std=[0.5,0.5,0.5])])
dataset_train = datasets.MNIST(root = "./data",
transform = transform,
train = True,
download = True)
dataset_test = datasets.MNIST(root = "./data",
transform = transform,
train = False)
# +
train_load = torch.utils.data.DataLoader(dataset = dataset_train,
batch_size = 64,
shuffle = True)
test_load = torch.utils.data.DataLoader(dataset = dataset_test,
batch_size = 64,
shuffle = True)
images, label = next(iter(train_load))
images_example = torchvision.utils.make_grid(images)
images_example = images_example.numpy().transpose(1,2,0)
mean = [0.5,0.5,0.5]
std = [0.5,0.5,0.5]
images_example = images_example*std + mean
plt.imshow(images_example)
plt.show()
# -
class RNN(torch.nn.Module):
def __init__(self):
super(RNN, self).__init__()
self.rnn = torch.nn.RNN(input_size = 28,
hidden_size = 128,
num_layers = 1,
batch_first = True)
self.output = torch.nn.Linear(128,10)
def forward(self, input):
output,_ = self.rnn(input, None)
output = self.output(output[:,-1,:])
return output
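# A quick shape check (illustrative): each MNIST image is treated as a sequence of 28 time
# steps with 28 features, so a batch of 64 images should map to 64 x 10 class scores.
_check_out = RNN()(torch.randn(64, 28, 28))
print(_check_out.shape)  # expected: torch.Size([64, 10])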
model = RNN()
optimizer = torch.optim.Adam(model.parameters())
loss_f = torch.nn.CrossEntropyLoss()
epoch_n =10
for epoch in range(epoch_n):
running_loss = 0.0
running_correct = 0
testing_correct = 0
print("Epoch {}/{}".format(epoch, epoch_n))
print("-"*10)
for data in train_load:
X_train,y_train = data
X_train = X_train.view(-1,28,28)
X_train,y_train = Variable(X_train),Variable(y_train)
y_pred = model(X_train)
loss = loss_f(y_pred, y_train)
_,pred = torch.max(y_pred.data,1)
optimizer.zero_grad()
loss.backward()
optimizer.step()
        running_loss += loss.item()  # .item() replaces the deprecated loss.data[0] indexing of a 0-dim tensor
running_correct += torch.sum(pred == y_train.data)
for data in test_load:
X_test, y_test = data
X_test = X_test.view(-1,28,28)
X_test, y_test = Variable(X_test), Variable(y_test)
outputs = model(X_test)
_, pred = torch.max(outputs.data, 1)
testing_correct += torch.sum(pred == y_test.data)
print("Loss is:{:.4f}, Train Accuracy is:{:.4f}%, Test Accuracy is:{:.4f}".format(running_loss/len(dataset_train),100*running_correct/len(dataset_train),100*testing_correct/len(dataset_test)))
| 3,342 |
/2018-10-20/numpy_pandas.ipynb
|
f0201c10789adf6c7cf6cc06434ef6e0397efce8
|
[] |
no_license
|
solidjerryc/NMX-learn-python
|
https://github.com/solidjerryc/NMX-learn-python
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 90,188 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### numpy
# Create an array
# +
import numpy as np
a=np.array([2,3,4])
print(a)
# -
# Create an array of all zeros
a=np.zeros( (3,4) )
print(a)
# Create an array of all ones
a=np.ones( (3,4) )
print(a)
# Generate random data
# +
# Integers uniformly distributed over a specified range
a=np.random.randint(0,10,(5,5))
print(a)
# Uniform distribution on [0, 1)
a=np.random.rand(10)
print(a)
a=np.random.rand(5,5)
print(a)
# Standard normal distribution
a=np.random.randn(10)
print(a)
# +
# Integers uniformly distributed over a specified range
a=np.random.randint(0,10,(5,5))
print(a)
a.mean(axis=0)
# -
# Describing the 3D array above along its three dimensions; axis 0, axis 1 and axis 2 are labelled in the figure:
# 
b=np.array([[[1,2],[3,4]],[[5,6],[7,8]],[[5,4],[7,8]]])
print(b)
b.mean(axis=0)
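# For comparison (illustrative), averaging over a different axis collapses a different dimension:
print(b.mean(axis=0).shape, b.mean(axis=1).shape, b.mean(axis=2).shape)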
# ### pandas
# pandas has two key data structures: Series and DataFrame
#
# ### Series
# A Series is sequential data with an index; the ndarray shown earlier has no index.
# +
import pandas as pd
s = pd.Series([10,24,55,33])
s
# -
# The index can also be defined by the user
s.index=(['a','b','c','d'])
s
# It can even be a time series
s = pd.Series([10,24,55,33],index=pd.date_range('1/1/2012', periods=4, freq='D'))
s
# ### DataFrame
# A DataFrame is analogous to a two-dimensional Excel table
# 
# It has several key attributes:
# * index: the row labels
# * columns: the column names
# * rows
# * columns
# * data types
#
# +
data=np.array([[1,2],[3,4],[5,6],[7,8]])
df=pd.DataFrame(data,columns=['C1','C2'])
df
# -
# Add a column
# +
df['C3']=[1,2,3,4]
df
# -
# Selecting data
#
# * By position or slice, as in NumPy
# * By conditional (boolean) filtering
df.iloc[2:4,1:3]
df[df['C3']>2] # single condition; multiple conditions can be combined (see below)
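# Multiple conditions are combined with & (and) / | (or), each wrapped in parentheses (illustrative):
df[(df['C3'] > 2) & (df['C2'] < 8)]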
# ---- Another way to define a DataFrame ----
#
# Define a DataFrame from a dictionary
# +
name=['Jerry','Tom','Jessica']
grade=[82,89,85]
df=pd.DataFrame({'name':name,'grade':grade})
df
# -
# Operations
#
# Operations are usually applied column-wise; note that a column must have a numeric (computable) dtype
#
df['grade']+10
# A new column can be defined to store the result
# +
df['new_grade']=df['grade']+10
df
# -
# Compute the mean, maximum and minimum
#
# Built-in aggregation functions skip columns that cannot be computed
df.mean()
# ### CSV files
# Comma-Separated Values (CSV, sometimes called character-separated values, since the delimiter need not be a comma) files store tabular data (numbers and text) as plain text. Plain text means the file is a sequence of characters, containing no data that must be interpreted as binary numbers.
#
# Copy the following text into a plain-text file and save it with a .csv extension. The columns are 学号 (student ID), 姓名 (name), 语文成绩 (Chinese score), 数学成绩 (math score) and 英语成绩 (English score).
#
# `
# 学号,姓名,语文成绩,数学成绩,英语成绩
# 100001,A,86,75,94
# 100002,B,76,66,93
# 100003,C,87,99,99
# 100004,D,81,85,94
# 100005,E,95,86,81
# 100006,F,93,64,80
# 100007,G,65,83,99
# 100008,H,74,60,96
# 100009,I,74,71,61
# 100010,J,66,76,80
# `
# ### Reading the dataset with pandas
# +
import pandas as pd
df=pd.read_csv(r"C:\Users\JerryC\Desktop\student.csv",encoding='gbk')
# -
# Once read in, the data is stored as a pandas DataFrame
#
# 
# View the first few rows of the data
print(df.head())
# Select a single column
df['学号']
# Drop duplicates, drop missing values, and fill missing values
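# For example (a minimal sketch; the student data above has no duplicates or missing values, so a small illustrative frame is used instead):
# +
demo = pd.DataFrame({'学号': [100001, 100001, 100002], '语文成绩': [86, 86, None]})
demo = demo.drop_duplicates()                # drop duplicated rows
print(demo.dropna())                         # drop rows that contain missing values
print(demo.fillna(demo['语文成绩'].mean()))  # fill missing values, e.g. with the column mean
# -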
| 2,476 |
/scikit_learn_metrics.ipynb
|
8f4b5a7704ffddb830cb6e9add527a8034dff439
|
[] |
no_license
|
aman31kmr/code-data-science
|
https://github.com/aman31kmr/code-data-science
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 10,123 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Setup
import pandas as pd
import numpy as np
import sklearn.datasets
import sklearn.metrics
import sklearn.dummy
import sklearn.linear_model
import sklearn.preprocessing
import sklearn.svm  # needed for sklearn.svm.SVC used below
# # logloss
# ### Binary
iris = sklearn.datasets.load_iris()
ind_bin = iris.target < 2
X = iris.data[ind_bin]
y = iris.target[ind_bin]
sklearn.metrics.log_loss(y, y)
sklearn.metrics.log_loss(y, np.clip(y, 0.05, 0.95))
sklearn.metrics.log_loss(y, np.ones_like(y)*y.mean())
# +
y_duplicated = np.concatenate([y]*5)
sklearn.metrics.log_loss(y_duplicated, np.ones_like(y_duplicated)*y.mean())
# -
sklearn.metrics.log_loss(y, np.random.uniform(low=0.0, high=1.0, size=len(y)))
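# As a sanity check, binary log loss can also be computed by hand from its definition
# (a minimal sketch; like sklearn, probabilities are clipped before taking logs):
def manual_log_loss(y_true, y_prob, eps=1e-15):
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    return -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
manual_log_loss(y, np.clip(y, 0.05, 0.95))  # should match sklearn.metrics.log_loss above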
y
def flip(y):
return -(y-0.5) + 0.5
y_flip = flip(y)
sklearn.metrics.log_loss(y, y_flip)
y_duplicated_flip = flip(y_duplicated)
sklearn.metrics.log_loss(y_duplicated, y_duplicated_flip)
mdl = sklearn.dummy.DummyClassifier()
mdl = sklearn.linear_model.LogisticRegression()
mdl = sklearn.svm.SVC(probability=True)
mdl.fit(X, y)
pred = mdl.predict(X)
pred_prob = mdl.predict_proba(X)
sklearn.metrics.log_loss(y, pred)
sklearn.metrics.log_loss(y, pred_prob)
# ### Multiple class
X = iris.data
y = iris.target
y_encoded = sklearn.preprocessing.LabelBinarizer().fit_transform(y)
sklearn.metrics.log_loss(y, y_encoded)
mdl = sklearn.dummy.DummyClassifier()
# mdl = sklearn.linear_model.LogisticRegression()
# mdl = sklearn.svm.SVC(probability=True)
mdl.fit(X, y)
# +
pred = mdl.predict(X)
pred_encoded = sklearn.preprocessing.LabelBinarizer().fit_transform(pred)
pred_prob = mdl.predict_proba(X)
# -
sklearn.metrics.log_loss(y, pred_encoded)
sklearn.metrics.log_loss(y, pred_prob)
| 1,928 |
/ASSIGNMENT- SIMILARITY_WORD2VEC.ipynb
|
d082732c110b67978ae4ddec4ca00b4eab5b28c4
|
[] |
no_license
|
PraveerSaxena/Natural-Language-Processing
|
https://github.com/PraveerSaxena/Natural-Language-Processing
| 1 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 6,627,984 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div style="background-color:rgb(255, 250, 210); padding:5px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
# <img src="images/logo_text.png" width="250px" align="right">
#
# # Binder Jupyter Tutorial: Metabolomics Data Analysis Workflow
#
# #### <font size="3" color='red'>To begin: Click anywhere in this cell and press 'Run' on the menu bar.<br> The next cell in the notebook is then automatically highlighted. Press 'Run' again.<br>Repeat this process until the end of the notebook.<br> Alternatively click 'Kernel' followed by 'Restart and Run All'.<br> NOTE: Some code cells may take several seconds to execute, please be patient.<br></font>
#
# <div style="text-align: justify">
# <b>Workflow:</b> This is a typical metabolomics data analysis workflow for a study with a binary classification outcome. The main steps included are: Import metabolite & meta data from an Excel sheet; QC-based data cleaning; Principal Component Analysis visualisation to check QC precision; simple 2-class univariate statistics; multivariate analysis using Partial Least Squares Discriminant Analysis (PLS-DA) including model optimisation (R<sup>2</sup> vs Q<sup>2</sup>), permutation testing, model prediction metrics, feature importance; associated data visualisations; and data export to Excel sheets.<br>
# </div>
# <br>
# <div style="text-align: justify">
# <b>Dataset:</b> The study used in this tutorial has been previously published as an open access article, <a href="https://www.nature.com/articles/bjc2015414">Chan et al. (2016)</a>, in the British Journal of Cancer, and the deconvolved and annotated data file deposited at the Metabolomics Workbench data repository (<a href="http://www.metabolomicsworkbench.org">http://www.metabolomicsworkbench.org</a> Project ID PR000699). The data can be accessed directly via its project <a href="http://dx.doi.org/DOI:10.21228/M8B10B">DOI:10.21228/M8B10B</a>. <sup>1</sup>H-NMR spectra were acquired at Canada’s National High Field Nuclear Magnetic Resonance Centre (NANUC) using a 600 MHz Varian Inova spectrometer. Spectral deconvolution and metabolite annotation were performed using the <a href="https://www.chenomx.com/software/">Chenomx NMR Suite v7.6</a>. Unfortunately, the raw NMR data is unavailable.
# </div>
# <br>
# <div style="text-align: justify">
# <b>Note for uploading datasets</b>: The current implementation of this workflow requires data to be uploaded as a Microsoft Excel file, using the <a href="https://en.wikipedia.org/wiki/Tidy_data">Tidy Data</a> framework as illustrated in the <a href="GastricCancer_NMR.xlsx">dataset</a> provided. As such, the Excel file should contain a <i>Data Sheet</i> and <i>Peak Sheet</i>. The <i>Data Sheet</i> requires the inclusion of the columns: <i>Idx</i>, <i>SampleID</i>, and <i>Class</i>. The <i>Peak Sheet</i> requires the inclusion of the columns: <i>Idx</i>, <i>Name</i>, and <i>Label</i>.
# </div>
# <br>
# <div style="background-color:rgb(210,250,255); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/bulb.png">
# <div style="padding-left:80px; text-align: justify">
# Before starting this tutorial, it is important that the reader familiarises themselves with the fundamental operation of the Jupyter Notebook environment. <b>Please click on the "Help" dropdown menu then select User Interface Tour</b>. All the code embedded in this example notebook is written using the Python programming language (<a href="http://www.python.org">python.org</a>) and is based upon extensions of popular open source packages with high levels of support. Note, a tutorial on the Python programming language in itself is beyond the scope of this publication. For more information on using Python and Jupyter Notebooks please refer to the excellent:
# <a href="https://mybinder.org/v2/gh/jakevdp/PythonDataScienceHandbook/master?filepath=notebooks%2FIndex.ipynb">Python Data Science Handbook (Jake VanderPlas, 2016)</a>, which is in itself a Jupyter Notebook deployed via <a href="https://mybinder.org">Binder</a>.
# <br>
# </div> </div></div>
# + [markdown] toc-hr-collapsed=false
# <div style="background-color:rgb(255, 250, 210); padding:5px 0; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ## 1. Import Packages/Modules
# The first code cell of this tutorial (below this text box) imports packages and modules into the Jupyter environment to extend our capability beyond the basic functionality of Python. These packages are:
# - **NumPy**: a fundamental package for mathematical calculations (http://www.numpy.org)
# - **Pandas**: a fundamental package for importing and manipulating tables (https://pandas.pydata.org)
# - **BeakerX**: a package used specifically in this workflow to display pandas tables more interactively (http://beakerx.com)
# - **Scikit-learn**: a fundamental package containing tools for data mining and analysis that is used directly in this workflow (with the train_test_split module to split data into train and test subsets) (https://scikit-learn.org)
# - **Cimcb_lite**: a core package developed by the authors for this tutorial that integrates the functionality of the above packages into a unified set of basic methods specific to metabolomics (https://github.com/cimcb/cimcb_lite)
#
# **Run the cell by clicking anywhere in the cell and then clicking "Run" in the Menu.** <br>
# When successfully executed, the cell will print "All packages successfully loaded" in the notebook below the cell.
# -
import numpy as np
import pandas as pd
from beakerx.object import beakerx
from sklearn.model_selection import train_test_split
import cimcb_lite as cb
beakerx.pandas_display_table() # by default display pandas tables as BeakerX interactive tables
print('All packages successfully loaded')
# <div style="background-color:rgb(255, 250, 210); padding:5px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ## 2. Load Data and Peak sheet
# The next code cell loads the *Data* and *Peak* sheets from an Excel file:
#
# 1. Firstly, we set the <i>filename</i> and folder location (<i>home directory</i>) of the source Excel spreadsheet
# 2. The <b>cimcb_lite</b> function <span style="font-family: monaco; font-size: 14px; background-color:white;">cimcb_lite.utils.load_dataXL</span> is then used to import the requisite <span style="font-family: monaco; font-size: 14px; background-color:white;">DataTable</span> and <span style="font-family: monaco; font-size: 14px; background-color:white;">PeakTable</span>. This function incorporates some basic integrity checks to make sure the metabolite names (M<sub>1</sub> ... M<sub>n</sub>) in the Data table match exactly the names in the Peak table, and also checks that the mandatory columns are specified.
# 3. Upon completion the function <span style="font-family: monaco; font-size: 14px; background-color:white;">cimcb_lite.utils.load_dataXL</span> prints the details of the imported data to the screen.
#
# <br>
# <div style="background-color:rgb(255,210,210); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <b>Red boxes (cog icon) indicate optional exercises in which the user can change the code and thus change the functionality of the cell. Change and rerun the cell only after you have run the cell a first time to observe the default functionality.</b>
# <ul>
# <li>Try replacing the metabolomics data set: <span style="font-family: monaco; font-size: 14px; background-color:white;">file = 'GastricCancer_NMR.xlsx'</span> with another data set of your choosing (e.g. <span style="font-family: monaco; font-size: 14px; background-color:white;">file = 'MyData.xlsx'</span> ).</li>
# <li>Remember that data set must be a similarly formatted Excel file (with 'Data' and 'Peak' Sheets) and it must be copied into the home directory (or change <span style="font-family: monaco; font-size: 14px; background-color:white;">home = ''</span> to point to the correct directory).</li>
# <li>Or if running on Binder you must upload the file to the virtual environment using the 'upload' button on the Binder homepage tab in your browser.</li>
# </ul>
# </div></div></div>
#
# +
home = '' # '' Use a blank home folder location when running in Binder
# home = '/Users/davidbroadhurst/Documents/DATA/JupyterTutorial/' #OSX home example
# home = '\Users\davidbroadhurst\Documents\DATA\JupyterTutorial\' #Miscrosoft Windows home example
file = 'GastricCancer_NMR.xlsx' # expects an excel spreadsheet
DataTable,PeakTable = cb.utils.load_dataXL(home + file, DataSheet='Data', PeakSheet='Peak')
# -
# <div style="background-color:rgb(255, 250, 210); padding:5px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ### Display the Data sheet
#
# Using the <b>BeakerX</b> package we can interactively view and check the imported Data table simply by calling the function <span style="font-family: monaco; font-size: 14px; background-color:white;">display(DataTable)</span><br>
# For this example the imported data consists of 140 samples and 149 metabolites. Note that each row describes a single urine sample.<br>
# - Columns **M1** ... **M149** describe metabolite concentrations.<br>
# - Column **SampleType** indicates whether the sample was a pooled QC or an individual's sample.<br>
# - Column **Class** indicates the outcome observed for that individual if not a *QC*. *GC* = Gastric Cancer, *BN* = Benign Tumor, *HE* = Healthy Control.<br>
#
# <br>
# <div style="background-color:rgb(210,250,210); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="75" src="images/mouse.png">
# <div style="padding-left:80px">
# <b>Green boxes (mouse icon) indicate opportunities in which the user can interact with the outputs of the code cell.</b>
# <ul>
# <li>Scroll up/down & left/right using the scroll bars</li>
# <li> Click on the column header to sort by that column (sort alternates between ascending and descending order)</li>
# <li> Click on the left side of a header column for further options (e.g. for column **Class** click on *'color by unique'*)</li>
# </ul>
# </div> </div></div>
#
display(DataTable) # View and check the DataTable
# <div style="background-color:rgb(255, 250, 210); padding:5px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ### Display the Peak sheet
# Using the <b>BeakerX</b> package we can interactively view and check the imported Peak table simply by calling the function <span style="font-family: monaco; font-size: 14px; background-color:white;">display(PeakTable)</span><br>
# Here we display the imported Peak table. It consists of 149 metabolites. Note that each row describes a single metabolite.<br>
# - Column **Name** maps to the metabolite names in the Data sheet.<br>
# - Column **Label** lists the metabolite annotation (e.g. definitive or putative chemical name).<br>
# - Column **Perc_missing** indicates the percentage of missing values for that metabolite peak.<br>
# - Column **QC_RSD** indicates the relative standard deviation calculated using only the QC samples.<br>
#
# <br>
# <div style="background-color:rgb(210,250,210); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="75" src="images/mouse.png">
# <div style="padding-left:80px">
# <ul>
# <li>Scroll up/down using the scroll bars</li>
# <li> Click on the column header to sort by that column (sort alternates between ascending and descending order)</li>
# <li> Click on the left side of a header column for further options (e.g. for column 'QC_RSD' click on 'Heatmap')</li>
# </ul>
# </div> </div></div>
display(PeakTable) # View and check PeakTable
# <div style="background-color:rgb(255, 250, 210); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ## 3. Data Cleaning
#
# It is good practice to assess the quality of your data, and remove (clean out) any poorly measured metabolites, before performing any statistical or machine learning modelling. <a href="https://link.springer.com/article/10.1007/s11306-018-1367-3">(Broadhurst *et al.* 2018).</a> For the Gastric Cancer NMR data set used in this example we have already calculated some basic statistics for each metabolite and stored them in the Peak table (see table above). We have decided to keep only metabolites with a QC-RSD less than 20% and with a percentage of missing values less than 10%. To do this we:
#
# 1. Copy the **QC_RSD** values from the table **PeakTable** into a variable named **RSD** ... <span style="font-family: monaco; font-size: 14px; background-color:white;">RSD = PeakTable['QC_RSD']</span>
# 2. Copy the **Perc_missing** values from table **PeakTable** into a variable named **PercMiss** ... <span style="font-family: monaco; font-size: 14px; background-color:white;">PercMiss = PeakTable['Perc_missing']</span>
# 3. Then create a new Peak table named **PeakTableClean** that only contains those peaks with both (RSD < 20) & (PercMiss < 10) ... <span style="font-family: monaco; font-size: 14px; background-color:white;">PeakTableClean = PeakTable[(RSD < 20) & (PercMiss < 10)]</span>. In this function call the term<span style="font-family: monaco; font-size: 14px; background-color:white;"> & </span>represents the logical operator AND.
# 4. This reduces the number of metabolites from 149 measured to 52 of suitable quality for modeling.
#
# <br>
# <div style="background-color:rgb(255,210,210); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <ul>
# <li>Replace the code: <span style="font-family: monaco; font-size: 14px; background-color:white;"> PeakTableClean = PeakTable[(RSD < 20) & (PercMiss < 10)]</span> with: <span style="font-family: monaco; font-size: 14px; background-color:white;">PeakTableClean = PeakTable[(RSD < 10) & (PercMiss == 0)]</span>. In doing this you will see the effect of tightening the data cleaning criteria. This will change the number of 'clean' metabolites.</li>
# <li><b>Note: Changing the number of clean metabolites will significantly affect all subsequent code cells.</b> So be sure to click on <b>'Cell' → 'Run All Below'</b> . Then scroll down the notebook to see how changing this setting has changed all the cell outputs.</li>
# <li><b>It is probably best to come back to this excercise after finishing an initial walk-through of the complete tutorial using the default settings.</b></li>
# </ul>
# </div></div>
# </div>
# +
# Create a clean peak table
RSD = PeakTable['QC_RSD']
PercMiss = PeakTable['Perc_missing']
PeakTableClean = PeakTable[(RSD < 20) & (PercMiss < 10)]
print("Number of peaks remaining: {}".format(len(PeakTableClean)))
# -
# <a id='4'></a>
# <div style="background-color:rgb(255, 250, 210); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ## 4. PCA - Quality Assesment
#
# We now have a cleaned peak table, **PeakTableClean**, that can be used to select only the clean data for statistical and machine learning modelling.<br><br>
# To provide a multivariate assessment of the quality of the cleaned data set it is good practice to perform a simple principal components analysis (PCA; after suitable transforming & scaling) labelled by quality control (QC) or biological sample (Sample). Typically, data of high quality have QCs that cluster tightly compared to the biological samples in the PCA score plot. To perform the PCA analysis in Python we:
#
# 1. Create a new variable named **peaklist** which contains the names (*Mi,Mj...Mn*) of the metabolites you wish to extract from the data table. In this example we simple copy all the names from the new peak table **PeakTableClean** ... <span style="font-family: monaco; font-size: 14px; background-color:white;">peaklist = PeakTableClean['Name']</span>
# 2. Extract from **DataTable** the **X** matrix ... <span style="font-family: monaco; font-size: 14px; background-color:white;">X = DataTable[peaklist]</span>
# 3. Log transform **X**, scale to unit variance, and impute any missing values using the *K-nearest neighbor* algorithm ... <span style="font-family: monaco; font-size: 14px; background-color:white;">Xlog = np.log10(X); XScale = cb.utils.scale(Xlog, method='auto'); Xknn = cb.utils.knnimpute(XScale, k=3) </span>
# 4. Perform the PCA and plot the resulting PC scores and PC loadings using the <b>cimcb_lite</b> function <span style="font-family: monaco; font-size: 14px; background-color:white;">cb.plot.pca(X,pcx,pcy,group_label,sample_label,peak_label)</span>
#
#
# <b>Notice how the output variable name of one function call is the input variable for the next function call</b>
#
#
# <br>
# <div style="background-color:rgb(210,250,210); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="75" src="images/mouse.png">
# <div style="padding-left:80px">
# <ul>
# <li> Hover over points in the PCA Score Plot to reveal corresponding sample information ('IDX' and 'SampleType'). </li>
# <li> Hover over points in the PCA Loading Plot to reveal corresponding metabolite information ('Name','Label', and 'QC_RSD'). </li>
# <li> To save the figures ...
# </ul>
#
# </div></div>
# <br>
# <div style="background-color:rgb(255,210,210); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <ul>
# <li>Replace the code: <span style="font-family: monaco; font-size: 14px; background-color:white;"> XScale = cb.utils.scale(Xlog, method='auto')</span> with: <span style="font-family: monaco; font-size: 14px; background-color:white;"> XScale = cb.utils.scale(Xlog, method='pareto')</span>. In doing this you will see the effect of changing the type of X column scaling on the PCA plot. </li>
# <li>In the PCA function call <span style="font-family: monaco; font-size: 14px; background-color:white;">cb.plot.pca</span> replace the code: <span style="font-family: monaco; font-size: 14px; background-color:white;"> pcy=2</span> with: <span style="font-family: monaco; font-size: 14px; background-color:white;"> pcy=3</span> to change the plot from (PC1 vs. PC2) to (PC1 vs. PC3) </li>
# <li>Replace the code: <span style="font-family: monaco; font-size: 14px; background-color:white;"> peak_label=PeakTableClean[['Name','Label']]</span> with: <span style="font-family: monaco; font-size: 14px; background-color:white;"> peak_label=PeakTableClean[['Label','QC_RSD']]</span>, now hover over the points in the loadings plot</li>
# </ul>
# </div></div>
# </div>
# +
# Extract and scale the metabolite data from the DataTable
peaklist = PeakTableClean['Name'] # Set peaklist to the metabolite names in the DataTableClean
X = DataTable[peaklist] # Extract X matrix from DataTable using peaklist
Xlog = np.log10(X) # Log scale (base-10)
Xscale = cb.utils.scale(Xlog, method='auto') # methods include auto, pareto, vast, and level
Xknn = cb.utils.knnimpute(Xscale, k=3) # missing value imputation (knn - 3 nearest neighbors)
# Perform PCA analysis
cb.plot.pca(Xknn,
pcx=1, # pc for x-axis
pcy=2, # pc for y-axis
group_label=DataTable['SampleType'], # colour in PCA score plot
sample_label=DataTable[['Idx','SampleType']], # labels for Hover in PCA score plot
peak_label=PeakTableClean[['Name','Label']]) # labels for Hover in PCA loadings plot
# -
# <div style="background-color:rgb(255, 250, 210); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ## 5. Univariate Statistics for the comparison of Gastric Cancer (GC) vs Healthy Controls (HE)
#
# Here we create a simple statistical comparison table comparing the means of the GC vs HE patient groups.
# 1. First create a new Data table containing only the GC and HE samples (and thus ignoring the QC and BN samples)... <span style="font-family: monaco; font-size: 14px; background-color:white;">DataTable2 = DataTable[(DataTable.Class == "GC") | (DataTable.Class == "HE")]</span><br>
# 2. We assign GC to be the positive class for the statistical tests ... <span style="font-family: monaco; font-size: 14px; background-color:white;">pos_outcome = "GC"</span>
# 3. We then run a generic two-class univariate statistics function from the <b>cimcb_lite</b> package: <span style="font-family: monaco; font-size: 14px; background-color:white;"> cb.utils.univariate_2class(DataTable,PeakTable,group,posclass)</span>, which calculates various basic statistics, including the Student's T-test, for each metabolite. Correction for multiple comparisons is then performed using the Benjamini-Hochberg procedure, and q-values reported.
# 4. Finally, we create an Excel sheet ... <span style="font-family: monaco; font-size: 14px; background-color:white;"> writer = pd.ExcelWriter("Stats.xlsx") </span> ... and copy the stats table into a Sheet named 'StatsTable' ... <span style="font-family: monaco; font-size: 14px; background-color:white;">StatsTable.to_excel(writer, sheet_name='StatsTable', index=False)</span> ... and then save the file ... <span style="font-family: monaco; font-size: 14px; background-color:white;">writer.save()</span>
#
# <br>
# <div style="background-color:rgb(210,250,210); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="75" src="images/mouse.png">
# <div style="padding-left:80px">
# <ul>
# <li> Scroll up/down using the scroll bars. </li>
# <li> Click on the column header to sort by that column (sort alternates between ascending and descending order). </li>
# <li> Click on the left side of a header column for further options (e.g. for column 'TtestStat' click on 'Data Bars'). </li>
# </ul>
# </div></div>
# <br>
# <div style="background-color:rgb(255,210,210); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <ul>
# <li>Replace the code: <span style="font-family: monaco; font-size: 14px; background-color:white;"> DataTable[(DataTable.Class == "GC") | (DataTable.Class == "HE")]</span> with: <span style="font-family: monaco; font-size: 14px; background-color:white;">DataTable[(DataTable.Class == "BN") | (DataTable.Class == "HE")]</span> and replace <span style="font-family: monaco; font-size: 14px; background-color:white;"> pos_outcome = "GC"</span> with: <span style="font-family: monaco; font-size: 14px; background-color:white;"> pos_outcome = "BN"</span>. Now you are performing a 2-class statistical comparison between the patients with benign tumors and healthy controls</li>
# <li>In the statistical function call <span style="font-family: monaco; font-size: 14px; background-color:white;">cb.utils.univariate_2class</span> replace the code: <span style="font-family: monaco; font-size: 14px; background-color:white;"> parametric=True</span> with: <span style="font-family: monaco; font-size: 14px; background-color:white;">parametric=False</span> to change the statistical test to a non-parametric Wilcoxon rank-sum test.</li>
# </ul>
# </div></div>
# <br>
# <div style="background-color:rgb(210,250,255); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/bulb.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# </div>
# +
# Select subset of Data for statistical comparison
DataTable2 = DataTable[(DataTable.Class == "GC") | (DataTable.Class == "HE")]
pos_outcome = "GC"
# Calculate basic statistics and create a statistics table.
StatsTable = cb.utils.univariate_2class(DataTable2,
PeakTableClean,
group='Class', # Column used to determine the groups
posclass=pos_outcome, # Value of posclass in the group column
parametric=True) # Set parametric = True or False
# View and check StatsTable
display(StatsTable)
# Save StatsTable to Excel
writer = pd.ExcelWriter("Stats.xlsx")
StatsTable.to_excel(writer, sheet_name='StatsTable', index=False)
writer.save()
# -
# <div style="background-color:rgb(255, 250, 210); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ## 6. Machine Learning
#
# The remainder of the tutorial will focus on implementing a simple 2-class Partial Least Squares Discriminant Analysis (PLS-DA) model. This will be split into 6 parts:
# 1. <a href='#6.1'>Splitting the Data table into a **Training** table and **Test** table.</a>
# 2. <a href='#6.2'>Determining the optimal number of *Latent Vectors* (or *Components*) for a PLS-DA model using cross validation of the training data.</a>
# 3. <a href='#6.3'>Train a model with optimal structure, and evaluate it both graphically and statistically using the training data.</a>
# 4. <a href='#6.4'>Perform *Permutation Testing* to verify the model structure.</a>
# 5. <a href='#6.5'>Visualise the *Projection to Latent Space*.</a>
# 6. <a href='#6.6'>Determine the metabolites of importance using *Bootstrap Resampling*.</a>
# 7. <a href='#6.7'>Test the model by projecting through the test data, and evaluate the resulting test predictions.</a>
# 8. <a href='#6.8'>Save the model coefficients and the training/test predictions.</a>
# <a id='6.1'></a>
#
# </div>
# <a id='6.1'></a>
# <div style="background-color:rgb(255, 250, 210); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ### 6.1 Splitting the metabolomics data into Training and Test sets.
#
# <p style="text-align: justify"> Multivariate predictive models are prone to <a href="https://en.wikipedia.org/wiki/Overfitting">overfitting</a>. In order to provide some level of independent evaluation it is common practice to split the source data set into two parts: <b>training set</b> and <b>test set</b>. The model is then optimised using the training data and independently evaluated using the test data. The true effectiveness of a model can only be assessed using the test data (<a href= https://link.springer.com/article/10.1007/s11306-007-0099-6>Westerhuis <i>et al.</i> 2008</a>, <a href= https://link.springer.com/article/10.1007/s11306-012-0482-9>Xia <i>et al.</i> 2012</a>). It is vitally important that both the training and test data are equally representative of the sample population (in our example the urine phenotype of <i>Gastric Cancer</i> and the urine phenotype of <i>Healthy Control</i>). It is typical to split the data using a ratio of 2:1 (⅔ training, ⅓ test) using <a href= https://en.wikipedia.org/wiki/Stratified_sampling>stratified random selection</a> (by outcome). If the purpose of the model building is exploratory, or sample numbers are small, this step is often ignored; however, care must be taken in interpreting a model that has not been independently tested. <b>NOTE: Cross-validation is not independent testing</b>. Cross-validation is covered in <a href='#6.2'>Section 6.2</a>. </p>
#
# <p style="text-align: justify"> The code cell below first selects a subset of data for a 2-class comparison (GC vs HE), and then splits the resulting Data table into DataTrain and DataTest tables, such that the number of DataTest samples is 25% of the total samples. In order to ensure that the random sample-split is stratified we need to supply a binary vector indicating stratification group membership. To do this we:</p>
#
#
# 1. Create a <i>list</i> variable named <b>Outcomes</b> and then assign to it the contents of the <b>DataTable2</b> column <b>'Class'</b> ... <span style="font-family: monaco; font-size: 14px; background-color:white;"> Outcomes = DataTable2['Class'] </span>
# 2. Convert the entries in <b>Outcomes</b> ('GC'/'HE') into binary values ... <span style="font-family: monaco; font-size: 14px; background-color:white;"> [1 if outcome == 'GC' else 0 for outcome in Outcomes] </span> <br>This is quite a complex line of code called a *list comprehension*. For each entry (outcome) in the <b>Outcomes</b> list perform the logical comparison <span style="font-family: monaco; font-size: 14px; background-color:white;"> outcome == 'GC' </span>. If true then the corresponding value in <b>Y</b> is set to '1' else it is set to '0'.
# 3. Finally split the Data table and Y into training and test sets using the <span style="font-family: monaco; font-size: 14px; background-color:white;">sklearn.model_selection.train_test_split()</span> algorithm.
#
#
# <br>
# <div style="background-color:rgb(255,220,220); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <ul>
# <li>Replace the code: <span style="font-family: monaco; font-size: 14px; background-color:white;"> train_test_split(DataTable2, Y, test_size=0.25, stratify=Y)</span> with: <span style="font-family: monaco; font-size: 14px; background-color:white;">train_test_split(DataTable2, Y, test_size=0.1, stratify=Y)</span>. This will decrease the number of samples in the test set. How does this affect the results?</li>
# <li>Replace code: <span style="font-family: monaco; font-size: 14px; background-color:white;"> pos_outcome = "GC"</span> with: <span style="font-family: monaco; font-size: 14px; background-color:white;"> pos_outcome = "BN"</span>. You will now be constructing a PLS-DA model to discriminate between patients with benign tumors and healthy controls</li>
# <li><b>Note: Every time you rerun this cell you will randomly assign data proportionally to the DataTrain and DataTest tables. So every model you produce in later cells will be slightly different.</b></li>
# </ul>
# </div></div>
# <br>
# <div style="background-color:rgb(230,250,255); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/bulb.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# </div>
# +
# Select subset of Data for the PLS-DA model
DataTable2 = DataTable[(DataTable.Class == "GC") | (DataTable.Class == "HE")]
# Create a Binary Y vector for stratifiying the samples
Outcomes = DataTable2['Class'] # Column that corresponds to Y class (should be 2 groups)
Y = [1 if outcome == 'GC' else 0 for outcome in Outcomes] # Change Y into binary (GC = 1, HE = 0)
Y = np.array(Y) # convert the binary list into a numpy array
# Split DataTable2 and Y into train and test (with stratification)
DataTrain, DataTest, Ytrain, Ytest = train_test_split(DataTable2, Y, test_size=0.25, stratify=Y)
print("DataTrain = {} samples with {} postive cases.".format(len(Ytrain),sum(Ytrain)))
print("DataTest = {} samples with {} postive cases.".format(len(Ytest),sum(Ytest)))
# -
# <a id='6.2'></a>
# <div style="background-color:rgb(255, 250, 210); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ### 6.2. Determine number of components for PLS-DA model
# <img align="right" width="300" src="images/R2Q2.png">
#
# <p style="text-align: justify; padding-right:320px">The most common method to determine the optimal number of components for a PLS-DA model without overfitting is to use <a href="https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html"><i>k-fold cross-validation</i></a>. First, we decide on the range of models to evaluate. Typically this will be a linear search of <i>1 to n</i> latent variables (components). We then train each PLS-DA model using all the training data (<b>X</b> = metabolite matrix and <b>Y</b> = observed outcome vector), and evaluate each model's performance using all the training data. This will generate <i>n</i> evaluation scores (typically for PLS-DA we calculate the <a href=https://en.wikipedia.org/wiki/Coefficient_of_determination>coefficient of determination</a> - R<sup>2</sup>). We then split the training data into <i>k</i> equally sized subsets (folds) and build <i>k</i> models. Each model is trained using <i>k-1</i> <i>folds</i>, with one fold left out to be used as a test set for model evaluation. After <i>k</i> models, each fold will have been used once as a test set. All the test-set evaluations can then be combined and a single cross-validated evaluation score calculated (e.g. the cross-validated coefficient of determination - Q<sup>2</sup>). If the values for R<sup>2</sup> and Q<sup>2</sup> are plotted against model complexity (number of latent variables), typically you will see the Q<sup>2</sup> rise and then fall. When Q<sup>2</sup> is at its apex it is generally considered to indicate that the optimal number of components has been reached without overfitting*.</p>
#
# In the code cell below, we first extract and scale the X data in the same way as <a href='#4'>Section 4</a>. We then:
#
# 1. Create a list of latent variables to compare ... <span style="font-family: monaco; font-size: 14px; background-color:white;">param_dict = {'n_components': [1, 2, 3, 4, 5, 6]}</span>
# 2. Initialise a cross-validation object by passing the PLS-DA model class <span style="font-family: monaco; font-size: 14px; background-color:white;">cimcb.model.PLS_SIMPLS</span> to the <span style="font-family: monaco; font-size: 14px; background-color:white;">cimcb.cross_val.kfold()</span> algorithm, along with the training data, <span style="font-family: monaco; font-size: 14px; background-color:white;">XTknn</span> and <span style="font-family: monaco; font-size: 14px; background-color:white;">Ytrain</span>, and the <span style="font-family: monaco; font-size: 14px; background-color:white;">param_dict</span>, setting the number of folds to <span style="font-family: monaco; font-size: 14px; background-color:white;">folds=5</span>.
# 3. Run the cross-validation ... <span style="font-family: monaco; font-size: 14px; background-color:white;">cv.run()</span>
# 4. Plot the training and cross-validation evaluation scores against the number of components ... <span style="font-family: monaco; font-size: 14px; background-color:white;">cv.plot()</span>
#
# We then choose the optimal model structure by examining the output plot. In this example the <b>optimal model uses 2 components</b>.
# <br>
# <div style="background-color:rgb(230,248,230); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="75" src="images/mouse.png">
# <div style="padding-left:80px">
# <ul>
# <li> Hover over the data points in each of the plots </li>
# <li> Click on a point in one of the plots. Notice that the two plots are linked</li>
# <br>
# </ul>
# </div></div>
# <br>
# <div style="background-color:rgb(255,220,220); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <ul>
# <li>Change the number of folds from <span style="font-family: monaco; font-size: 14px; background-color:white;">folds=5</span> to <span style="font-family: monaco; font-size: 14px; background-color:white;">folds=10</span>. How does that affect the cross-validation, and the subsequent model predictions?</li>
# <li> Change the code <span style="font-family: monaco; font-size: 14px; background-color:white;">param_dict = {'n_components': [1, 2, 3, 4, 5, 6]}</span> to <span style="font-family: monaco; font-size: 14px; background-color:white;">param_dict = {'n_components': [1,2,3,4,5,6,7,8,9,10,11,12]}</span>. How does this change the performance of the cross-validation and the resulting plot?</li>
# <li>Replace the code: <span style="font-family: monaco; font-size: 14px; background-color:white;"> XScale = cb.utils.scale(Xlog, method='auto')</span> with: <span style="font-family: monaco; font-size: 14px; background-color:white;"> XScale = cb.utils.scale(Xlog, method='pareto')</span>. What is the effect of changing the type of X column scaling?</li>
# </ul>
# </div></div>
# <br>
# <div style="background-color:rgb(230,250,255); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/bulb.png">
# <img align="right" width="150" src="images/R2Q2_ab.png">
# <div style="padding-left:80px">
# <ul>
# <li style="text-align: justify; padding-right:200px">For more information on the PLS SIMPLS algorithm refer to: De Jong, S., 1993. <a href= "https://www.sciencedirect.com/science/article/abs/pii/016974399385002X">SIMPLS: an alternative approach to partial least squares regression. Chemometrics and Intelligent Laboratory Systems, 18: 251–263</a></li>
# <li style="text-align: justify; padding-right:200px">*Although it is common practice to assume the optimal number of components for the PLS-DA model is chosen when Q<sup>2</sup> is at its apex (A), this is incorrect. Overtraining starts as soon as Q<sup>2</sup> deviates from the parallel R<sup>2</sup> trajectory. If the distance between R<sup>2</sup> and Q<sup>2</sup> gets large (>0.2 or the 95% CIs stop overlapping) then one has to assume that the model is already overtrained. As such the optimal model actually occurs when R<sup>2</sup> is the greatest given that the difference (R<sup>2</sup> - Q<sup>2</sup>) is within a tolerance (B) - i.e. the optimisation is <a href=https://en.wikipedia.org/wiki/Multi-objective_optimization>multi-objective</a>. The <i>R<sup>2</sup> vs. (R<sup>2</sup> - Q<sup>2</sup>)</i> plot is provided to aid decision making.</li>
# </ul>
# </div></div>
#
#
# +
# Extract and scale the metabolite data from the DataTable
peaklist = PeakTableClean['Name'] # Set peaklist to the metabolite names in PeakTableClean
XT = DataTrain[peaklist] # Extract X matrix from DataTrain using peaklist
XTlog = np.log(XT) # Log transform (natural log)
XTscale = cb.utils.scale(XTlog, method='auto') # methods include auto, pareto, vast, and level
XTknn = cb.utils.knnimpute(XTscale, k=3) # missing value imputation (knn - 3 nearest neighbors)
# Set the number of latent variables to search
param_dict = {'n_components': [1,2,3,4,5,6]}
# initialise cross_val kfold (stratified)
cv = cb.cross_val.kfold(model=cb.model.PLS_SIMPLS, # model; we are using the PLS_SIMPLS model
X=XTknn,
Y=Ytrain,
param_dict=param_dict,
folds=5, # folds; for the number of splits (k-fold)
bootnum=100) # num bootstraps for the Confidence Intervals
# run the cross validation
cv.run()
# plot cross validation statistics
cv.plot()
# -
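# For intuition, the cell below sketches the same R<sup>2</sup> vs. Q<sup>2</sup> comparison using plain scikit-learn on synthetic data. This is a minimal illustration only: it does not use the cimcb API, the bootstrapped confidence intervals, or the urine data set of this tutorial, and all variable names are illustrative.

# +
# Generic R2/Q2 sketch with scikit-learn (synthetic data; not part of the cimcb workflow)
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score

rng_demo = np.random.default_rng(1)
X_demo = rng_demo.normal(size=(60, 20))                   # 60 samples, 20 "metabolites"
y_demo = (X_demo[:, 0] + X_demo[:, 1] > 0).astype(float)  # binary outcome driven by 2 variables

for n_comp in range(1, 7):
    # R2: train and evaluate on all of the training data
    model_demo = PLSRegression(n_components=n_comp).fit(X_demo, y_demo)
    r2 = r2_score(y_demo, model_demo.predict(X_demo).ravel())
    # Q2: combine 5-fold cross-validated predictions into one score
    y_cv = np.zeros_like(y_demo)
    for tr, te in KFold(n_splits=5, shuffle=True, random_state=1).split(X_demo):
        fold_model = PLSRegression(n_components=n_comp).fit(X_demo[tr], y_demo[tr])
        y_cv[te] = fold_model.predict(X_demo[te]).ravel()
    print("components =", n_comp, " R2 = %.3f" % r2, " Q2 = %.3f" % r2_score(y_demo, y_cv))
# -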
# <a id='6.3'></a>
# <div style="background-color:rgb(255, 250, 210); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ### 6.3 Train and evaluate PLS-DA model
#
# In <a href='#6.2'>Section 6.2</a>, we determined that the optimal number of components for this example data set is 2. So we create a PLS-DA model with 2 components and evaluate its predictive ability. To do this we:
#
# 1. Create a <span style="font-family: monaco; font-size: 14px; background-color:white;">cimcb.model.PLS_SIMPLS</span> model with 2 components and give it a name ... <span style="font-family: monaco; font-size: 14px; background-color:white;">modelPLS = cb.model.PLS_SIMPLS(n_components=2)</span>
# 2. Train <b>modelPLS</b> with <b>X=XTknn</b>, <b>Y=Ytrain</b> (defined in the last code cell) ... <span style="font-family: monaco; font-size: 14px; background-color:white;">modelPLS.train(XTknn,Ytrain)</span>
# 3. Evaluate <b>modelPLS</b> graphically, and calculate classification statistics based on a fixed specificity of 0.9 ... <span style="font-family: monaco; font-size: 14px; background-color:white;">modelPLS.evaluate(specificity=0.9)</span>
# <br>
# <br>
# <div style="background-color:rgb(255,220,220); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <ul>
# <li> Change the code <span style="font-family: monaco; font-size: 14px; background-color:white;">modelPLS.evaluate(specificity=0.9)</span> to <span style="font-family: monaco; font-size: 14px; background-color:white;">modelPLS.evaluate(specificity=0.7)</span>. How does this change both the plots and the summary statistics?</li>
# <li> Change the code <span style="font-family: monaco; font-size: 14px; background-color:white;">modelPLS.evaluate(specificity=0.9)</span> to <span style="font-family: monaco; font-size: 14px; background-color:white;">modelPLS.evaluate(cutoffscore=0.5)</span>. How does changing the variable name passed to the function to <span style="font-family: monaco; font-size: 14px; background-color:white;">cutoffscore</span> change its operation?</li>
# </ul>
# </div></div>
# <br>
# <div style="background-color:rgb(230,250,255); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/bulb.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# </div>
# +
# Initialise the model with n_components = 2
modelPLS = cb.model.PLS_SIMPLS(n_components=2)
# Train the model
modelPLS.train(XTknn,Ytrain)
# Evaluate the model
modelPLS.evaluate(specificity=0.9)
# -
# <a id='6.4'></a>
# <div style="background-color:rgb(255, 250, 210); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ### 6.4 Perform a permutation test for the PLS-DA model
#
# <p style="text-align: justify">Although cross-validation can be effectively used to optimize a model's structure (e.g. choose the number of components in a PLS-DA model) and provides a reasonable estimate of the predictability of the parameterised model (<a href="https://doi.org/10.1007/s11306-006-0022-6">Rubingh <i>et al.</i> Metabolomics 2006)</a>, a second level of model validation can be performed using a technique known as permutation testing (<a href="https://doi.org/10.1002/9780470937273.biblio">Good 2011</a>). In the context of metabolomics this has been discussed in detail by <a href="https://doi.org/10.1007/s11306-012-0482-9">Xia <i>et al.</i> Metabolomics (2013)</a>. In its most basic form, <i>permutation testing</i> is used to assess the significance of a classification. The null hypothesis is that the optimal model's classification ability could also have been found if each patient sample had been randomly assigned a clinical outcome (positive or negative) in the same proportion as the true assignment. In this test, the model structure is fixed, and multiple <i>randomly permuted</i> models are evaluated (e.g. n = 1,000). This results in the creation of a non-parametric reference distribution of the null hypothesis. The original model's performance is then statistically compared to this reference distribution and a p-value calculated. Permutation testing indicates whether a given model is significantly different from a null model (random guessing) for the sample population, while CV gives an indication of how well a given model might work in predicting new samples. Permutation testing can be extended to also encompass cross-validation. In the example shown here the null hypothesis can be tested for both a given model structure's R<sup>2</sup> and Q<sup>2</sup>.</p>
# <br>
# <br>
# <div style="background-color:rgb(255,220,220); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# <br>
# <div style="background-color:rgb(230,250,255); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/bulb.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# </div>
modelPLS.permutation_test(nperm=100) #nperm refers to the number of permutations
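# For intuition, the cell below sketches the permutation-test idea with plain scikit-learn on synthetic data: the same model is refitted on randomly permuted outcome labels many times, and the real score is compared against this null distribution. It is a minimal illustration only and does not reproduce the cimcb implementation (which also extends the test to the cross-validated Q<sup>2</sup>).

# +
# Generic permutation-test sketch (synthetic data; not the cimcb implementation)
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng_perm = np.random.default_rng(0)
X_perm = rng_perm.normal(size=(60, 20))
y_perm = (X_perm[:, 0] - X_perm[:, 2] > 0).astype(float)

def fit_r2(X, y):
    # fit a fixed-structure PLS model and return its training R2
    model = PLSRegression(n_components=2).fit(X, y)
    return r2_score(y, model.predict(X).ravel())

observed = fit_r2(X_perm, y_perm)
null_scores = [fit_r2(X_perm, rng_perm.permutation(y_perm)) for _ in range(100)]
p_value = (1 + sum(s >= observed for s in null_scores)) / (1 + len(null_scores))
print("observed R2 = %.3f, permutation p-value = %.3f" % (observed, p_value))
# -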
# <a id='6.5'></a>
# <div style="background-color:rgb(255, 250, 240); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ### 6.5 Plot latent variable projections for the PLS-DA model
# This grid contains 3 types of plots:
# - **Scatterplot**: LVx vs. LVy with the line indicating the direction of maximum discrimination
# - **ROC plot**: LVx / LVy with the maximum discrimination
# - **Distribution plot**: Each LV (with group 0 and group 1)
#
# <br>
# <div style="background-color:rgb(230,248,230); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="75" src="images/mouse.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <ul>
# </ul>
# </div></div>
# <br>
# <div style="background-color:rgb(255,220,220); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# <br>
# <div style="background-color:rgb(230,250,255); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/bulb.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# </div>
#
modelPLS.plot_projections(label=DataTrain[['Idx','SampleID']], size=12) # size changes circle size
# <a id='6.6'></a>
# <div style="background-color:rgb(255, 250, 240); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ### 6.6 Plot feature importance (Coefficient and VIP plots) for the PLS-DA model
# This plots the Coefficient and VIP plots (with bootstrapped confidence intervals), and then adds those metrics to a Peaksheet.
#
# 1. Calculate the bootstrapped confidence intervals
# 2. Plot the feature importance plots, and return a new Peaksheet
#
# <br>
# <div style="background-color:rgb(230,248,230); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="75" src="images/mouse.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <ul>
# </ul>
# </div></div>
# <br>
# <div style="background-color:rgb(255,220,220); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# <br>
# <div style="background-color:rgb(230,250,255); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/bulb.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# </div>
# +
# Calculate the bootstrapped confidence intervals
modelPLS.calc_bootci(type='bca', bootnum=200) # decrease bootnum if it is taking too long
# Plot the feature importance plots, and return a new Peaksheet
Peaksheet = modelPLS.plot_featureimportance(PeakTableClean,
peaklist,
ylabel='Label', # change ylabel to 'Name'
sort=False) # change sort to False
# -
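# For intuition, the cell below sketches a simple percentile bootstrap for regression coefficients with scikit-learn on synthetic data. It is a minimal illustration of the resampling idea only; the cimcb call above uses the bias-corrected and accelerated ('bca') variant and its own PLS implementation.

# +
# Generic percentile-bootstrap sketch for coefficient confidence intervals
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng_boot = np.random.default_rng(2)
X_boot = rng_boot.normal(size=(60, 5))
y_boot = X_boot[:, 0] + 0.1 * rng_boot.normal(size=60)

boot_coefs = []
for _ in range(200):
    idx = rng_boot.integers(0, len(y_boot), len(y_boot))    # resample samples with replacement
    pls_boot = PLSRegression(n_components=2).fit(X_boot[idx], y_boot[idx])
    boot_coefs.append(np.asarray(pls_boot.coef_).ravel())
boot_coefs = np.array(boot_coefs)
lower, upper = np.percentile(boot_coefs, [2.5, 97.5], axis=0)  # 95% percentile interval
print("95% CI per variable:")
print(np.round(np.column_stack([lower, upper]), 2))
# -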
# <a id='6.7'></a>
# <div style="background-color:rgb(255, 250, 240); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ### 6.7 Test the model with new data (using the test set from Section 6.1)
# Now let's test the previously trained model with a new dataset. In this example, we will use the test set (DataTest, Ytest) from the train_test_split in <a href='#6.1'>Section 6.1</a>. Alternatively, a new dataset could be loaded in and used.
#
# 1. Get mu and sigma from the training dataset to use for the Xtest scaling
# 2. Pull out Xtest from DataTest using peaklist ('Name' column in PeakTable)
# 3. Log transform, unit-scale and knn-impute missing values for Xtest
# 4. Calculate Ypredicted score using modelPLS.test
# 5. Evaluate Ypred against Ytest
#
# <br>
# <div style="background-color:rgb(230,248,230); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="75" src="images/mouse.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <ul>
# </ul>
# </div></div>
# <br>
# <div style="background-color:rgb(255,220,220); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# <br>
# <div style="background-color:rgb(230,250,255); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/bulb.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# </div>
# +
# Get mu and sigma from the training dataset to use for the Xtest scaling
mu, sigma = cb.utils.scale(XTlog, return_mu_sigma=True)
# Pull out Xtest from DataTest using peaklist ('Name' column in PeakTable)
peaklist = PeakTableClean.Name
XV = DataTest[peaklist].values
# Log transform, unit-scale and knn-impute missing values for Xtest
XVlog = np.log(XV)
XVscale = cb.utils.scale(XVlog, method='auto', mu=mu, sigma=sigma)
XVknn = cb.utils.knnimpute(XVscale, k=3)
# Calculate Ypredicted score using modelPLS.test
YVpred = modelPLS.test(XVknn)
# Evaluate Ypred against Ytest
evals = [Ytest, YVpred] # alternative formats: (Ytest, Ypred) or np.array([Ytest, Ypred])
#modelPLS.evaluate(evals, specificity=0.9)
modelPLS.evaluate(evals, cutoffscore=0.5)
# -
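# The key point in the cell above is that mu and sigma come from the training data only. The cell below illustrates the same idea with scikit-learn's StandardScaler on synthetic data (a minimal sketch, not part of the tutorial workflow): the scaler is fitted on the training split and then reused to transform the test split.

# +
# Generic illustration of scaling test data with the training-set mean and SD
import numpy as np
from sklearn.preprocessing import StandardScaler

rng_sc = np.random.default_rng(3)
train_demo = rng_sc.normal(5, 2, size=(40, 3))
test_demo = rng_sc.normal(5, 2, size=(10, 3))

scaler = StandardScaler().fit(train_demo)   # mean/SD estimated on the training split only
test_scaled = scaler.transform(test_demo)   # test split scaled with the *training* mean/SD
print(test_scaled.mean(axis=0).round(2))    # close to, but not exactly, zero
# -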
# <a id='6.8'></a>
# <div style="background-color:rgb(255, 250, 240); padding:10px; border: 1px solid lightgrey; padding-left: 1em; padding-right: 1em;">
#
# ### 6.8 Export results to Excel
# Finally, we will save a Datasheet for the test data (with Ypred), and export the StatsTable, Datasheet, and Peaksheet as an excel file ("modelPLS.xlsx"):
# 1. Save DataSheet as 'Idx', 'SampleID', and 'Class' from DataTest
# 2. Add 'Ypred' to Datasheet
# 3. Create an empty excel file
# 4. Add each table to the excel file (StatsTable, Datasheet, and Peaksheet)
# 5. Close the excel writer and output the excel file
#
# <font color='red'>**Note:** To download the Excel file: click File, then Open, tick the checkbox next to the file, and click Download.</font>
# <br>
# <div style="background-color:rgb(230,248,230); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="75" src="images/mouse.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <ul>
# </ul>
# </div></div>
# <br>
# <div style="background-color:rgb(255,220,220); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/cog2.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# <br>
# <div style="background-color:rgb(230,250,255); padding:2px; border: 1px solid lightgrey; padding-right: 1em;">
# <img align="left" width="80" src="images/bulb.png">
# <div style="padding-left:80px">
# <br>
# <br>
# <br>
# <br>
# </div></div>
# </div>
# +
# Save DataSheet as 'Idx', 'SampleID', and 'Class' from DataTest
Datasheet = DataTest[["Idx", "SampleID", "Class"]].copy()
# Add 'Ypred' to Datasheet
Datasheet['Ypred'] = YVpred
Datasheet # View and check the DataTable
# +
# Create an empty excel file
writer = pd.ExcelWriter("modelPLS.xlsx") # name of the excel spreadsheet
Datasheet.to_excel(writer, sheet_name='Datasheet', index=False)
Peaksheet.to_excel(writer, sheet_name='Peaksheet', index=False)
# Close the excel writer and output the excel file
writer.save()
print("Done!")
# -
| 52,482 |
/week2/未命名.ipynb
|
303ce9d91e672aab503d3f701137a53d81611f55
|
[] |
no_license
|
liuyh73/NeuralNetwork
|
https://github.com/liuyh73/NeuralNetwork
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 16,805 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import numpy as np
import matplotlib.pyplot as plt
xs = torch.load('data/xs.pt').numpy()
ys = torch.load('data/ys.pt').numpy()
for i in range(len(xs)):
print('(', xs[i],',', ys[i], ')', end=' ')
coefficients = np.polyfit(xs, ys, 3)
func = np.poly1d(coefficients)
x = np.linspace(-2,6,1000)
y = func(x)
plt.xlim(-2.5,6.5)
plt.ylim(-2.0,2.0)
plt.xticks(np.linspace(-2,6,9))
plt.yticks(np.linspace(-2.0,2.0,9))
plt.plot(x,y,color='orange')
plt.scatter(xs,ys)
plt.show()
import torch
tsr = torch.Tensor([i for i in range(10,50)])
print(tsr, '\n', torch.max(tsr), '\n', torch.min(tsr))
# +
import torch
import numpy as np
def Convolution(matrix1, matrix2):
    # Slide a 3x3 kernel over the 7x7 zero-padded input and sum the element-wise
    # products at each position, producing a 5x5 output.
    res = np.zeros((5,5))
    for i in range(5):
        for j in range(5):
            matrix1_sub = matrix1[i:i+3,j:j+3]  # 3x3 window anchored at (i, j)
            res[i][j] = np.sum(matrix1_sub * matrix2)
    return res
def main():
mtx1 = np.array([[0,0,0,0,0,0,0],[0,1,0,1,2,1,0],[0,0,2,1,0,1,0],[0,1,1,0,2,0,0],[0,2,2,1,1,0,0],[0,2,0,1,2,0,0],[0,0,0,0,0,0,0]])
mtx2 = np.array([[0,0,0,0,0,0,0],[0,2,0,2,1,1,0],[0,0,1,0,0,2,0],[0,1,0,0,2,1,0],[0,1,1,2,1,0,0],[0,1,0,1,1,1,0],[0,0,0,0,0,0,0]])
mtx1_mul = np.array([[1,0,1],[-1,1,0],[0,-1,0]])
mtx2_mul = np.array([[-1,0,1],[0,0,1],[1,1,1]])
res = Convolution(mtx1, mtx1_mul) + Convolution(mtx2, mtx2_mul)
print(res)
if __name__ == '__main__':
main()
# -
| 1,644 |
/week1/example code/Affine Transform.ipynb
|
edccaf23c49f09138e8d146c7e23cf16fb88f441
|
[] |
no_license
|
yao790536267/AI-for-CV
|
https://github.com/yao790536267/AI-for-CV
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 1,368 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
import matplotlib.pyplot as plt
import numpy as np
img = cv2.imread('/Users/yaozeming/Downloads/man.png')
rows, cols, chs = img.shape
# three source points and their mapped destinations define the affine transform
pts1 = np.float32([[0, 0], [cols - 1, 0], [0, rows - 1]])
pts2 = np.float32([[cols * 0.2, rows * 0.3], [cols * 0.7, rows * 0.5], [cols * 0.4, rows * 0.1]])
M = cv2.getAffineTransform(pts1, pts2)
img_Affine_Transform = cv2.warpAffine(img, M, (cols, rows))
cv2.imshow('img_Affine_Transform', img_Affine_Transform)
key = cv2.waitKey(0)
if key == 27:
cv2.destroyAllWindows()
key = cv2.waitKey(1)
# -
| 824 |
/5_POS.ipynb
|
01f002bcf69cf753a0cce21b3eb587f2d3acf8b7
|
[] |
no_license
|
rargerique/TCC2
|
https://github.com/rargerique/TCC2
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 7,383 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
print(__doc__)
# Code source: Jaques Grobler
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Load the diabetes dataset
diabetes = datasets.load_diabetes()
# Use only one feature
diabetes_X = diabetes.data[:, np.newaxis, 2]
# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
# Split the targets into training/testing sets
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test = diabetes.target[-20:]
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
# Make predictions using the testing set
diabetes_y_pred = regr.predict(diabetes_X_test)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% mean_squared_error(diabetes_y_test, diabetes_y_pred))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(diabetes_y_test, diabetes_y_pred))
# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, diabetes_y_pred, color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
# -
# NOTE: the start of this cell was truncated in the source; the line below is an assumed
# reconstruction that POS-tags each tokenised text with NLTK's pos_tag_sents.
tagged_texts = nltk.pos_tag_sents(map(word_tokenize, textList))
data['POS'] = tagged_texts
# +
tags = []
for pos in data.POS:
tag_list = [x[1] for x in pos]
tag_str = " ".join(tag_list)
tags.append(tag_str)
# -
pos_vectorizer = TfidfVectorizer(max_features=2000)
pos = pos_vectorizer.fit_transform(pd.Series(tags)).toarray()
vectorizer = TfidfVectorizer(max_features=2000)
texts = vectorizer.fit_transform(pd.Series(data.text)).toarray()
X = np.concatenate([texts,pos],axis=1)
# +
# separate train and test datasets
y = data['class'];
X_train_final, X_test_final, y_train, y_test = train_test_split(X, y, test_size=0.2)
# creating dictionary
X_train_final = np.array(X_train_final)
X_test_final = np.array(X_test_final)
X_train_final = np.reshape(X_train_final, (X_train_final.shape[0], 1, X_train_final.shape[1]))
X_test_final = np.reshape(X_test_final, (X_test_final.shape[0], 1, X_test_final.shape[1]))
# -
# define network architecture and compile
model = Sequential()
model.add(CuDNNLSTM(100, input_shape=(1,2034)))
model.add(Dense(250, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# train model
history = model.fit(X_train_final, y_train, epochs=5, verbose=2, validation_split=0.1)
# check score
score = model.evaluate(X_test_final, y_test)
print(score)
import matplotlib.pyplot as plt
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model train vs validation loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()
# +
from sklearn.metrics import classification_report
y_pred = model.predict(X_test_final)
print(classification_report(y_test, y_pred.round()))
| 3,406 |
/.ipynb_checkpoints/Mall_Customers-checkpoint.ipynb
|
d1c476df3b482e858f8fb1d229980971aac33722
|
[] |
no_license
|
aurellio18/Machine-Learning
|
https://github.com/aurellio18/Machine-Learning
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 67,993 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mall Customers Clustering
# #### Import the required libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# #### Load the dataset
data = pd.read_csv('Mall_Customers.csv')
# #### Using the Hierarchical Clustering Algorithm
#
# ##### Create a dendrogram to determine the number of clusters
X = data.iloc[:, [3,4]].values
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X, method = "ward"))
plt.title ('Dendrogram')
plt.xlabel ('Customers')
plt.ylabel ('Euclidean distances')
plt.show()
# ##### Fit the clustering to the dataset using the hierarchical algorithm
# +
from sklearn.cluster import AgglomerativeClustering
hc = AgglomerativeClustering(n_clusters = 5, affinity = 'euclidean', linkage ='ward')
# Fit the hierarchical clustering algorithm to dataset X, creating the cluster vector that tells, for each customer, which cluster the customer belongs to.
y_hc=hc.fit_predict(X)
# -
# #### Visualise the hierarchical clustering results
plt.scatter(X[y_hc==0, 0], X[y_hc==0, 1], s=100, c='red', label ='Cluster 1')
plt.scatter(X[y_hc==1, 0], X[y_hc==1, 1], s=100, c='blue', label ='Cluster 2')
plt.scatter(X[y_hc==2, 0], X[y_hc==2, 1], s=100, c='green', label ='Cluster 3')
plt.scatter(X[y_hc==3, 0], X[y_hc==3, 1], s=100, c='cyan', label ='Cluster 4')
plt.scatter(X[y_hc==4, 0], X[y_hc==4, 1], s=100, c='magenta', label ='Cluster 5')
plt.title('Clusters of Customers (Hierarchical Clustering Model)')
plt.xlabel('Annual Income(k$)')
plt.ylabel('Spending Score (1-100)')
plt.show()
# #### Using the DBSCAN Algorithm
# ##### Prepare the data for the DBSCAN algorithm using the sklearn library
x = data.iloc[:, [3, 4]].values  # Annual Income and Spending Score, matching the axis labels below
from sklearn.cluster import DBSCAN
db=DBSCAN(eps=3,min_samples=4,metric='euclidean')
model=db.fit(x)
label=model.labels_
# ##### Process the data to determine the number of clusters
# +
from sklearn import metrics
#identifying the points which makes up our core points
sample_cores=np.zeros_like(label,dtype=bool)
sample_cores[db.core_sample_indices_]=True
#Calculating the number of clusters
n_clusters=len(set(label))- (1 if -1 in label else 0)
print('No of clusters:',n_clusters)
# -
# ##### Visualise the data using a plot
y_means = db.fit_predict(x)
plt.figure(figsize=(7,5))
plt.scatter(x[y_means == 0, 0], x[y_means == 0, 1], s = 50, c = 'pink')
plt.scatter(x[y_means == 1, 0], x[y_means == 1, 1], s = 50, c = 'yellow')
plt.scatter(x[y_means == 2, 0], x[y_means == 2, 1], s = 50, c = 'cyan')
plt.scatter(x[y_means == 3, 0], x[y_means == 3, 1], s = 50, c = 'magenta')
plt.scatter(x[y_means == 4, 0], x[y_means == 4, 1], s = 50, c = 'orange')
plt.scatter(x[y_means == 5, 0], x[y_means == 5, 1], s = 50, c = 'blue')
plt.scatter(x[y_means == 6, 0], x[y_means == 6, 1], s = 50, c = 'red')
plt.scatter(x[y_means == 7, 0], x[y_means == 7, 1], s = 50, c = 'black')
plt.scatter(x[y_means == 8, 0], x[y_means == 8, 1], s = 50, c = 'violet')
plt.scatter(x[y_means == 9, 0], x[y_means == 9, 1], s = 50, c = 'green')
plt.scatter(x[y_means == 10, 0], x[y_means == 10, 1], s = 50, c = 'ivory')
plt.scatter(x[y_means == 11, 0], x[y_means == 11, 1], s = 50, c = 'brown')
plt.scatter(x[y_means == 12, 0], x[y_means == 12, 1], s = 50, c = 'crimson')
plt.scatter(x[y_means == 13, 0], x[y_means == 13, 1], s = 50, c = 'indigo')
plt.xlabel('Annual Income in (1k)')
plt.ylabel('Spending Score from 1-100')
plt.title('Clusters of data')
plt.show()
| 3,786 |
/Find Motif.ipynb
|
41dacc2f065bbfebcd82e6bcaf2667ecd7819251
|
[] |
no_license
|
miyazakizachary/Rosalind-Computational-Biology-Python
|
https://github.com/miyazakizachary/Rosalind-Computational-Biology-Python
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 2,334 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
s = """TCTAAAAGTACAAAAGTAGAAAAGTAAAAAGTAGAAAAGTAGACGCAAAAGTACACAAAAGTAAAAAGTATAAAAGTACGTCCTCAAAAGTACGAAAAGTAAAAAGTAGGAAAAGTACACCACAAAAGTAGGCCGAAAAGTAAAAAGTAAAAGCGACAAAAGTACAAAAGTACTGAAAAAAGTATTAAAAGTATTCCAAAAGTACAAAAGTAATCGAAAAGTACAAAAGTAGGAAAAGTACCGAAAAGTAGTCAAAAAGTAAAAAGTAGCTGTAAAAGTACCACAGTAAAAAGTAATAAAAGTATGCAGGCTAAACAAAAGTACACAAAAGTAATAAAAGTAACACAAAAGTATCCCCGTAAAAGTAAAAAGTAGAAAAGTAGTAAAAGTAACAAAAGTAAAAAAGTAAGAAAAGTATAAAAGTACGATTTAAAAGTAAAAAGTAAAAAGTAAAAAAGTATAAAAGTAAAAAAGTATCAAAAGTAAAAAGTAAAAAAGTATTCAAAAAAAGTAAAAAGTAAAAAGTAAAAAAGTAAAAAGTAGTCGAAAAGTAAAAAGTAAAAAGTAAAAAGTAAAAAGTAGACATTTGCAAAAGTACACAAAAGTAAAAAGTACAAATAAAAGTAAAAAGTAAAAAGTATTCCAGAGAGATCGACCTGATGAAAAGTATCGGAAAAGTAAAAAGTATAAAAAGTAAAAAGTAAAAAGTAAGTAAAAGTAGGCACTGAAAAGTAGAAAAGTATTTAAAAGTAAAAAGTATGGCGACAAAAGTATAAAAGTAGAAAAGTATAAAAGTAGAAAAGTACCAGAAAAAGTATTAAAAGTATTTAAAAGTATAAAAGTACAAAAGTATCTACATAAAAGTAAGAAAAAGTACCAGAAAAGTAATAAAAGTAGCTAAAAGTAAAAAGTATGTCAAAAGTAGGCAAAAGTACAAAAGTA"""
t = "AAAAGTAAA"
def findMotif(a, b):
    # generator yielding the 1-based start positions of every (overlapping) occurrence of b in a
    start = 0
    while True:
        start = a.find(b, start)
        if start == -1: return
        yield start + 1
        start += 1
x = findMotif(s,t)
answer = ''
for y in x:
answer = answer + str(y) + " "
answer
# -
and to analyse a graph of the network and determine the minimum number of edges to remove, in order to split the graph corresponding to the metro network into two independent parts of approximately equal size while preserving the interconnection inside each resulting graph. To this end it was necessary to implement a graph abstract data type (ADT) with query, insertion, removal and search methods, with practical application at this stage of the problem; using open data on the London Underground network it was possible to determine the stations (graph vertices) and the connections between stations (graph edges). In our case study, when splitting the network in two, the division must guarantee that the sub-networks keep a significant number of interconnections.
#
# To achieve this goal the Spectral Bisection algorithm was used, and with it all the objectives set for the first phase were met. The result of this algorithm provided a proportional division into two robust graphs, as well as the minimum edge cut required for this purpose. These results were verified through the minimisation of the _ratio cut partition_, which corresponds to the ratio between the number of cut edges and the product of the cardinalities of the two graphs.
#
# In the second phase of the project, the objective was to determine the optimal path between two stations (vertices) of the metro network (graph). The optimal path is the one with the lowest cost (edge weight, here in minutes) between stations, without repeating stations. The travel time is given for three periods of the day: "Off Peak Running Time (mins)", "AM peak (0700 - 1000) Running Time (mins)" and "Inter peak (1000 - 1600) Running time (mins)". To compute the total travel time it was also necessary to account for line changes, assuming 10 minutes per change.
# To solve this problem we used _Dijkstra's_ algorithm, one of the most popular algorithms in computer science and also widely used in operations research. It is generally presented as a greedy algorithm [1].
# ## Theoretical background
# The graph ADT is a structure that can faithfully represent a metro network such as London's. To that end an abstraction is built in which the undirected graph **G** corresponds to the London Underground network. **G** has two kinds of constituents: stations **V**, which are the vertices of the graph, and connections between stations **E**, the edges of **G**. The adjacency structure of **G** is filled row by row, placing a value of 1 in the column of each vertex that is connected to the row's vertex and 0 otherwise.
# Once this structure is in place we have the graph **G( V, E )**, on which a method for partitioning it into two substructures is applied.
# The criterion for partitioning the graph in two aims at a subdivision into two graphs ( $ G_1 $ and $ G_2 $ ) of similar cardinality with the smallest number of cut edges (**c**) for that solution.
# $$ f = \frac{c}{|G_1|*|G_2|} $$
# Minimising the value of **_f_** corresponds to cutting the graph according to what is known as the _ratio cut partition_. This numerical minimisation problem is not easy to solve exactly; however, an adequate solution can be reached using the **Spectral Bisection** algorithm [2].
# ### Spectral Bisection
# The bisection algorithm uses neither geometric information nor the graph itself, operating instead on its mathematical representation in matrix form. The method is centred on operations with floating-point values and vectors. The approach to dividing the graph considers the whole graph at once and is therefore a global approach to the problem. The method is simple to apply, but the reason why the heuristic solution works is not obvious [3].
#
# In short, this partition algorithm starts from the graph **G( V, E )** and produces an adjacency matrix **A**, which for this undirected graph is a symmetric square matrix. The rows and columns of **A** are the vertices in the same ordering, i.e. the metro stations; where an edge exists (a connection between stations) this is marked with the value 1, and 0 otherwise.
#
# From **A** the degree matrix **D** is built, with the same dimensions as **A**, in which each diagonal element holds the sum of all elements of that vertex's row of **A** and the remaining values are 0.
# Finally a Laplacian matrix **L** is built from the two previous entities [3].
# $$ L= D-A $$
# Using the eigenvalues and eigenvectors of **L** it is possible to build the partition into $G_1$ and $G_2$. The non-negative eigenvalues of **L** provide information about the connectivity of the elements of the graph.
# The smallest non-zero eigenvalue is called the "spectral gap" and gives a notion of the density of the graph: its value is larger the more densely the graph is connected, with edges interlinking all the other vertices.
#
# The second smallest eigenvalue is called the "___Fiedler value___", represented by $ \lambda_1 $, and the eigenvector corresponding to $ \lambda_1 $ is the Fiedler vector $ v_1 $ or $ v_{Fiedler}$. The Fiedler value approximates the notion of algebraic connectivity and allows us to address the problem of the minimum cut needed to separate the graph into two connected components while preserving connectivity. Each value in the Fiedler vector tells us which partition the corresponding node belongs to: separating the vertices by the sign (positive or otherwise) of the vector entries is a criterion for splitting the graph [5]. A sign vector $ v_{sinal} $ is produced from the Fiedler vector by converting its negative values to -1 and the remaining values to +1. This sign distribution can be seen in Figure 1.
#
#
#
# 
# Figure 1 - Sign distribution of the components of $v_{Fiedler}$
# As can be seen from our sorted $ v_{Fiedler}$, the distribution of the values is almost sinusoidal and balanced in terms of sign [6]. The minimum number of edges, **c**, to cut in order to subdivide the graph **G** in this approach can be computed with the following expression:
# $$ c = \frac{1}{4}*v_{sinal}*L*v_{sinal}^T $$
# By solving the numerical problem heuristically, an optimal or near-optimal solution for the subdivision of the graph **G** is thus determined.
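# To make the procedure concrete, the cell below applies exactly these steps to a small toy graph with numpy (an illustrative sketch only; none of these names belong to the project code). Two triangles joined by a single edge should be split apart by the sign of the Fiedler vector, with a cut of one edge.

# +
# Spectral bisection sketch: two triangles {0,1,2} and {3,4,5} joined by the edge (2,3)
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1          # adjacency matrix
D = np.diag(A.sum(axis=1))         # degree matrix
L = D - A                          # Laplacian

values, vectors = np.linalg.eigh(L)       # eigh: L is symmetric, eigenvalues in ascending order
fiedler = vectors[:, 1]                   # eigenvector of the second smallest eigenvalue
v_sinal = np.where(fiedler < 0, -1, 1)    # sign vector

g1 = [v for v in range(n) if v_sinal[v] < 0]
g2 = [v for v in range(n) if v_sinal[v] > 0]
cut = 0.25 * v_sinal @ L @ v_sinal        # number of cut edges
print(g1, g2, cut)                        # expected: the two triangles separated, cut = 1.0
# -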
# ### Dijkstra's Algorithm
# In our implementation of this algorithm, aimed at finding the optimal path, we follow the steps below (a compact generic sketch of the algorithm is shown after this list):
#
# **Step 0:** Check whether the departure station is the same as the destination station; if so, terminate the algorithm.
#
# **Step 1:** Build a graph ADT **unvisited**, implemented as a dictionary, to hold all the unvisited vertices and the connections that link them; its purpose is to determine when the algorithm stops. Create another dictionary ( *track_prodecessor* ) whose keys are the traversed vertices and whose values are the vertices that precede them. This structure later allows the _backtracking_ of the optimal path.
#
# **Step 2:** Create a dictionary (**shortest_dist**) whose keys are the elements of the unvisited list (all the vertices of the graph). The value associated with each key ( _i_ ) represents the distance (time) between the departure vertex and vertex i.
#
# **Step 3:** In the dictionary above, the value for the departure vertex is set to zero and the remaining values to infinity (since there is no information yet about the distance to those vertices).
#
# **Step 4:** While there are unvisited vertices in the graph **m2** :
#
# **Step 4.1:** Choose the vertex with the smallest distance and set it as the current vertex ( _current_ ).
#
# **Step 4.2:** Update the values of the dictionary whenever distances smaller than the existing ones are found. This comparison must take line changes on the edges between vertices into account. To determine whether there is a line change, the line of the previous connection ( *prev_con* ) is compared with the line of the current connection ( _con_ ). If there is a line change ( *prev_con* != *con* ), 10 min are added to the time. Finally, the predecessor of each updated vertex is recorded in *track_prodecessor*.
#
# **Step 4.3:** Remove ( *pop* ) the current vertex from *unvisited* after all the distances have been updated.
#
# **Step 5:** Set the destination vertex ( _end_ ) as the current vertex ( _current_ ). Perform the _backtracking_ while the _current_ vertex is different from the departure vertex ( _start_ ).
#
# **Step 6:** If all the keys exist in *track_prodecessor* and the departure vertex is reached, the algorithm returns the optimal path.
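# The cell below is a compact, generic sketch of Dijkstra's algorithm with a priority queue (Python's heapq) on a toy weighted graph. It is independent of the project's Metro classes and of the line-change logic described above; it only illustrates the distance-relaxation and backtracking steps.

# +
# Generic Dijkstra sketch with a priority queue (toy graph, illustrative only)
import heapq

def dijkstra_sketch(graph, start, end):
    # graph: dict mapping node -> list of (neighbour, weight)
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == end:
            break
        for neigh, w in graph.get(node, []):
            if d + w < dist.get(neigh, float("inf")):
                dist[neigh] = d + w
                prev[neigh] = node
                heapq.heappush(heap, (d + w, neigh))
    path, node = [end], end          # backtrack the optimal path
    while node != start:
        node = prev[node]
        path.insert(0, node)
    return path, dist[end]

toy = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
print(dijkstra_sketch(toy, "A", "D"))  # (['A', 'B', 'C', 'D'], 4)
# -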
# ## Produced solution
# For a better understanding of the developed code we decided to split it into several logically chained modules. These parts are represented sequentially in Figure 2, where the classes inside each module can be found. There are 6 code files, each with its own purpose:
#
# * g12_structures - Base structures for building the graph (vertices, connections);
#
#
# * g12_metro_fase1 - Structure of the graph class for phase 1 of the project;
#
#
# * g12_metro_fase2 - Structure of the graph class for phase 2 of the project;
#
#
# * g12_minimum_cut_graph - Reading functions needed to feed the phase 1 graph ( *m1* ) and the functions needed to solve the minimum cut problem;
#
#
# * g12_dijkstra - Reading functions needed to feed the phase 2 graph ( *m2* ) and the functions needed to solve the optimal path problem;
#
#
# * g12_main - This file acts as the "maestro": it initialises all the classes and calls all the functions needed to solve the two problems (part 1 and part 2). Besides that role, this last subdivision of the code creates the graphical representations for each of the parts. This file allows selecting which outputs are returned:
# +
# show results part 1
results_p1 = True
# show results part 2
results_p2 = True
# graph part 1
graph_1 = True
# graph part 2
graph_2 = True
# -
# 
# Figure 2 - Schematic representation of the code.
# To represent the metro network using a graph, 3 classes need to be created. The first class defines the stations (vertices) and stores all their information. The "Station" class has a set of methods for querying its properties.
# ### Base structures (g12_structures)
# +
# Class Station ________________________________________________________________________________________________________
class Station:
def __init__(self, id_s, latitude, longitude, name, display_name, zone, total_lines, rail):
self._id = id_s
self._latitude = latitude
self._longitude = longitude
self._name = name
self._display_name = display_name
self._zone = zone
self._total_lines = total_lines
self._rail = rail
def get_id(self):
return self._id
def get_latitude(self):
return self._latitude
def get_longitude(self):
return self._longitude
def get_name(self):
return self._name
def get_display_name(self):
return self._display_name
def get_zone(self):
return self._zone
def get_total_lines(self):
return self._total_lines
def get_rail(self):
return self._rail
# -
# The next class to be produced is the Connections class, which holds the edges of the graph, with information about the stations being linked as well as other properties of the connection for later use.
# +
# Class Connections ____________________________________________________________________________________________________
class Connections:
def __init__(self, st1, st2, line=None, time=None):
self._st1 = st1
self._st2 = st2
self._line = {line}
self._time = [time]
def get_time(self):
return self._time
def get_line(self):
return self._line
def get_st1_st2(self):
return [self._st1, self._st2]
def get_start(self):
return self._st1
def get_end(self):
return self._st2
def opposite(self, st):
if st == self._st1:
return self._st2
if st == self._st2:
return self._st1
else:
raise Exception('The station you are looking for does not exist')
def get_time_average(self):
t_sum1, t_sum2, t_sum3 = 0, 0 , 0
for time_tuple in self._time:
t_sum1, t_sum2, t_sum3 = t_sum1 + time_tuple[0], t_sum2 + time_tuple[1], t_sum3 + time_tuple[0]
n = len(self._time)
self._time = [(t_sum1/n, t_sum2/n, t_sum3/n)]
# ______________________________________________________________________________________________________________________
# -
# ### Graph structure for phase 1 (g12_metro_fase1)
# Next, the London_M class is the base of the graph, with operations for adding Stations (graph vertices) and Connections (graph edges), storing these structures in a dictionary for graph operations and further development.
# +
from g12_structures import Station
from g12_structures import Connections
class London_M:
def __init__(self):
        self._graph = {}  # create the dictionary
        self._station = 0  # station counter
        self._connections = 0  # connection counter
        self.rawData = {}  # stores information about the stations
def add_station(self, id_s, latitude, longitude, name, display_name, zone, total_lines, rail):
if id_s not in self._graph.keys():
station = Station(id_s, latitude, longitude, name, display_name, zone, total_lines, rail)
self._graph[station.get_id()] = set([])
self.rawData[station.get_id()] = station
# adicionar a estação ao dicionário
self._station = self._station + 1
def add_connection(self, st1, st2, line=None, time=None):
        con = Connections(st1, st2, line, time)  # create an object of type Connections
        self._graph[st1].add(con)  # add to station 1's list
        self._graph[st2].add(con)  # add to station 2's list
self._connections = self._connections + 1
# -
# ### Graph structure for phase 2 (g12_metro_fase2)
# Next, the Metro class is the base of the graph, with operations for adding Stations (graph vertices) and Connections (graph edges), storing these structures in two dictionaries for graph operations and further development.
# +
from g12_structures import Station
from g12_structures import Connections
# Class Metro __________________________________________________________________________________________________________
class Metro:
def __init__(self):
        self.graph = {}  # create the dictionary
        self._station = 0  # station counter
        self._connections = 0  # connection counter
        self.rawData = {}  # stores information about the stations
        self.graph1 = {}  # needed to compute the optimal travel time
def add_station(self, id_s, latitude, longitude, name, display_name, zone, total_lines, rail):
if id_s not in self.graph.keys():
station = Station(id_s, latitude, longitude, name, display_name, zone, total_lines, rail)
self.graph[station.get_id()] = set([])
self.rawData[station.get_id()] = station
# adicionar a estação ao dicionário
self._station = self._station + 1
if id_s not in self.graph1.keys():
station = Station(id_s, latitude, longitude, name, display_name, zone, total_lines, rail)
self.graph1[station.get_id()] = set([])
def add_connection(self, st1, st2, line=None, time=None):
con = Connections(st1, st2, line, time) # criar objeto do tipo Connections
self.graph[st1].add(con) # adicionar à lista da estação 1
self.graph1[st1].add(con)
con = Connections(st2, st1, line, time)
self.graph[st2].add(con) # adicionar à lista da estação 2
self.graph1[st2].add(con)
self._connections = self._connections + 1
# -
# ### Code for the Spectral Bisection method (g12_minimum_cut_graph)
# Inheriting the structure of the London_M class, which is the template for building the graph, the Metro class reads the real London station data line by line (london.stations.txt). After reading the data, stations and connections are created, taking care not to create duplicate stations.
# With this structure in place, methods are provided to build the adjacency, degree and Laplacian matrices, as well as to obtain the eigenvalues and eigenvectors.
#
# +
from g12_metro_fase1 import London_M
import csv
import numpy as np
# ______________________________________________________________________________________________________________________
class Metro(London_M):
def __init__(self):
London_M.__init__(self)
self._a_matrix = None
# ______________________________________________________________________________________________________________________
def read_stations(self):
return self._read_stations()
@staticmethod
def _read_stations(): # ler o ficheiro do tipo csv relativo às estações
info = []
with open('london.stations.txt') as stations1:
stations2 = csv.DictReader(stations1)
for line in stations2:
info.append((int(line["id"]), float(line["latitude"]), float(line["longitude"]), line["name"],
line["display_name"], float(line["zone"]), int(line["total_lines"]), int(line["rail"])))
return info
def create_stations(self):
sts = self.read_stations()
for i in sts:
self.add_station(i[0], i[1], i[2], i[3], i[4], i[5], i[6], i[7])
# ______________________________________________________________________________________________________________________
def read_connections(self):
return self._read_connections()
@staticmethod
def _read_connections(): # ler o ficheiro do tipo csv relativo às conexões
info = []
with open('london.connections.txt') as connections1:
connections2 = csv.DictReader(connections1)
for line in connections2:
info.append((int(line["station1"]), int(line["station2"]), int(line["line"]), float(line["time"])))
return info
def _uniques_connections(self):
con = self.read_connections()
uniques = set([])
for i in con:
uniques.add((i[0], i[1]))
return list(uniques)
def create_connections(self):
con = self._uniques_connections()
for i in con:
self.add_connection(i[0], i[1])
# ______________________________________________________________________________________________________________________
def adjacency(self): # criar matriz de adjacencias
stations = self._read_stations()
self._a_matrix = np.zeros([len(stations), len(stations)])
for i in self._uniques_connections():
j = 0
while i[0] != stations[j][0]: # enquanto não encontrar o primeiro vértice
j = j + 1
w = 0
while i[1] != stations[w][0]: # esquanto não encontrar o segundo vértice
w = w + 1
self._a_matrix[j, w] = 1
self._a_matrix[w, j] = 1
return self._a_matrix
def degree(self, adjacency): # matriz diagonal que guarda o grau de ligação de cada vertice
degree = np.diag(adjacency.sum(axis=1))
return degree
@staticmethod
def laplacian(degree, adjacency): # matriz de laplace ( L = D - A ), ligações (-1), grau (degree)
return degree - adjacency
@staticmethod
def eigenvalues_eigenvectors(l):
values, vectors = np.linalg.eig(l)
return values, vectors
    def cuts(self, g1, g2):  # build the cut edges between graphs g1 and g2
        arestas = []
        for i in g1:  # vector with the nodes of the negative-sign graph
            arestas_gi = self._graph[i]  # list with the edges of g1[i]
            for j in g2:  # iterate over the values of g2 (vertices of the other cluster)
                for w in arestas_gi:  # iterate over the connections of g1[i]
                    st1_2 = w.get_st1_st2()
                    st1 = st1_2[0]  # start station (id)
                    st2 = st1_2[1]  # end station (id)
                    if j == st1 or j == st2:  # check whether g2[j] belongs to any connection of g1[i]
                        arestas.append((st1, st2))  # list of tuples ( vertex of g1 , vertex of g2 )
return arestas
# -
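# An illustrative sketch of how the methods above chain together for phase 1 (the variable names here are hypothetical, the actual orchestration is done in g12_main, and the London data files must be present):

# +
# Hypothetical phase-1 chaining of the Metro class defined above
import numpy as np

m1 = Metro()
m1.create_stations()
m1.create_connections()

A = m1.adjacency()
D = m1.degree(A)
L = m1.laplacian(D, A)
values, vectors = m1.eigenvalues_eigenvectors(L)

order = np.argsort(values)
fiedler = np.real(vectors[:, order[1]])              # Fiedler vector (2nd smallest eigenvalue)
stations = [s[0] for s in m1.read_stations()]        # station ids, same ordering as A
g1 = [stations[i] for i in range(len(stations)) if fiedler[i] < 0]
g2 = [stations[i] for i in range(len(stations)) if fiedler[i] >= 0]
print(len(g1), len(g2), len(m1.cuts(g1, g2)))        # sizes of the two parts and number of cut edges
# -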
# ### Code for Dijkstra's algorithm (g12_dijkstra)
# Creation of the graph based on the files "london.stations.txt" and "Interstation v3.csv". Formulation of Dijkstra's algorithm, together with the other functions needed to answer the phase 2 problem.
# +
import csv
from g12_metro_fase2 import Metro
# Class London Metro ___________________________________________________________________________________________________
class London_Metro(Metro):
def __init__(self):
Metro.__init__(self)
# Create Stations ______________________________________________________________________________________________________
def read_stations(self):
return self._read_stations()
@staticmethod
def _read_stations():
info = []
with open('london.stations.txt') as stations1:
stations2 = csv.DictReader(stations1)
for line in stations2:
info.append((int(line["id"]), float(line["latitude"]), float(line["longitude"]), line["name"],
line["display_name"], float(line["zone"]), int(line["total_lines"]), int(line["rail"])))
return info
def create_stations(self):
sts = self.read_stations()
for i in sts:
self.add_station(i[0], i[1], i[2], i[3], i[4], i[5], i[6], i[7])
# Create Connections ___________________________________________________________________________________________________
def read_connections(self):
return self._read_connections()
@staticmethod
def _read_connections():
info = []
with open('Interstation v3.csv') as connections1:
connections2 = csv.reader(connections1)
i = 0
for line in connections2:
if i != 0:
info.append((int(line[0]), int(line[1]), int(line[2]), float(line[3]), float(line[4]),
float(line[5]), float(line[6])))
i = i + 1
return info
def create_connections(self):
st1_st2 = []
uniques = set([])
con = self.read_connections()
for i in con:
st1_st2.append((i[1], i[2]))
for i in con:
if (i[1], i[2]) not in uniques:
self.add_connection(i[1], i[2], i[0], (i[4], i[5], i[6]))
else:
edge_modified = self.find_edge(i[1], i[2])
edge_modified._line.add(i[0])
edge_modified._time.append((i[4], i[5], i[6]))
uniques.add((i[1], i[2]))
uniques.add((i[2], i[1]))
for i in con:
edge_modified = self.find_edge(i[1], i[2])
edge_modified.get_time_average()
# Dijkstra Algorithm ___________________________________________________________________________________________________
    def dijkstra(self, start, end, n):
        if start == end:
            raise ValueError("The destination station is the same as the departure station!")
        shortest_dist = {}  # stores the minimum distance from start to each node
        track_prodecessor = {}  # stores the predecessor on the path to the current node
        unvisited = self.graph  # stores the unvisited nodes
        infinity = float("inf")  # a very large number
        path = []  # optimal path
        prev_con = None
        # set the distance from start to every other node to infinity, because none have been visited yet
for node in unvisited:
shortest_dist[node] = infinity
shortest_dist[start] = 0
while unvisited:
current = None
for node in unvisited:
if current is None:
current = node
elif shortest_dist[node] < shortest_dist[current]:
current = node
            con_options = self.graph[current]  # store all the connections of the current node
for con in con_options:
change_line = False
if prev_con is not None and prev_con.get_line() != con.get_line():
change_line = True
if con.get_time()[0][n] + shortest_dist[current] + change_line * 10 < shortest_dist[con.opposite(current)]:
shortest_dist[con.opposite(current)] = con.get_time()[0][n] + shortest_dist[current]
track_prodecessor[con.opposite(current)] = current
prev_con = con
unvisited.pop(current)
current = end
while current != start:
try:
path.insert(0, current)
current = track_prodecessor[current]
except KeyError:
print("The path is not reachable")
break
path.insert(0, start)
if shortest_dist[end] < infinity:
return path
# Optimal time _________________________________________________________________________________________________________
def find_edge(self, start, end):
return self._find_edge(start, end)
def _find_edge(self, start, end):
connections = self.graph1[start]
for con in connections:
if end == con.get_start() or end == con.get_end():
return con
def get_list(self, l):
return self._get_list(l)
@staticmethod
def _get_list(l):
route = l
tuples = []
i = 0
while i != len(route) - 1:
tuples.append((route[i], route[i + 1]))
i = i + 1
return tuples
def get_travel_time(self, optimal_path, n):
travel_time = 0
optimal_path = self._get_list(optimal_path)
for i in optimal_path:
con = self.find_edge(i[0], i[1])
travel_time = travel_time + con.get_time()[0][n]
return travel_time
def get_change_time(self, l):
change_time = 0
path = self._get_list(l)
prev_path = self.find_edge(path[0][0], path[0][1])
for i in path:
path = self.find_edge(i[0], i[1])
if path.get_line() != prev_path.get_line():
change_time = change_time + 10
prev_path = path
return change_time
# -
# ### "Maestro" code ( g12_main )
# This code merges the previous modules ( g12_min_cut_graph and g12_dijkstra ) to solve the phase 1 and phase 2 problems, with the corresponding graphical representation. It runs everything from an initialisation that builds the set of matrices and values needed for the spectral bisection method. Besides printing the results, the values are placed in two graphs for the graphical representation of the solution (figures 3 and 4). Next, a graph is created to apply the Dijkstra algorithm, returning the optimal path, the total travel time, the line-change time and the graphical representation of the solution (figure 5).
# +
import numpy as np
import plotly.graph_objects as go
from g12_dijkstra import London_Metro
from g12_minimum_cut_graph import Metro
# Conditions ___________________________________________________________________________________________________________
# Flags controlling whether results are printed and whether the plots are displayed
# show results part 1
results_p1 = True
# show results part 2
results_p2 = True
# graph part 1
graph_1 = True
# graph part 2
graph_2 = True
# Main 1st part _______________________________________________________________________________________________________
if __name__ == '__main__':
    # Create the Metro object
    m1 = Metro()
    # Add the stations first and then create the connections
    m1.create_stations()
    m1.create_connections()  # duplicate connections already cleaned up
    # Create the adjacency matrix of the graph (A)
    A = m1.adjacency()
    # Create the degree matrix (D)
    D = m1.degree(A)
    # Create the Laplacian matrix (L = D - A)
    L = m1.laplacian(D, A)
    # Return the eigenvalues and eigenvectors of L
    valores, vetores = m1.eigenvalues_eigenvectors(L)
    # Find the Fiedler vector
    index_fiedler_value = np.argsort(valores)[1]
    partition = np.array([val >= 0 for val in vetores[:, index_fiedler_value]])
c = len(partition)
v_sinal = []
for i in partition:
if i:
v_sinal.append(1)
else:
v_sinal.append(-1)
v_sinal = np.array(v_sinal)
    # return the ids of the stations belonging to g1 and g2 (partition 1 and partition 2)
g1 = []
g2 = []
names1 = []
names2 = []
coord_x1 = []
coord_y1 = []
coord_x2 = []
coord_y2 = []
for i in range(0, len(v_sinal)):
if v_sinal[i] == - 1:
g1.append(m1.read_stations()[i][0])
names1.append(m1.read_stations()[i][3])
coord_x1.append(m1.read_stations()[i][2])
coord_y1.append(m1.read_stations()[i][1])
else:
g2.append(m1.read_stations()[i][0])
names2.append(m1.read_stations()[i][3])
coord_x2.append(m1.read_stations()[i][2])
coord_y2.append(m1.read_stations()[i][1])
    # minimum number of cut edges
n_cuts = 0.25 * np.dot(np.dot(v_sinal.transpose(), L), v_sinal)
# Ratio cut partition
f = n_cuts / (len(g1) * len(g2))
if results_p1:
print('=====================================================')
print('Results for the Spectral Bissection Algorithm', end="\n\n")
print('The number of cut edges is: ' + str(n_cuts))
print('The number of nodes in graph 1 is ' + str(len(g1)))
print('The number of nodes in graph 2 is ' + str(len(g2)))
print('The ratio cut partition, f = ' + str(f))
# Graph minimum cut info _______________________________________________________________________________________________
edge_x = []
edge_y = []
for edge in m1.read_connections():
st1 = edge[0]
st2 = edge[1]
x0, y0 = m1.rawData[st1].get_longitude(), m1.rawData[st1].get_latitude()
x1, y1 = m1.rawData[st2].get_longitude(), m1.rawData[st2].get_latitude()
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_c_x = []
edge_c_y = []
for edge in m1.cuts(g1, g2):
st1, st2 = edge
x0, y0 = m1.rawData[st1].get_longitude(), m1.rawData[st1].get_latitude()
x1, y1 = m1.rawData[st2].get_longitude(), m1.rawData[st2].get_latitude()
edge_c_x.append(x0)
edge_c_x.append(x1)
edge_c_x.append(None)
edge_c_y.append(y0)
edge_c_y.append(y1)
edge_c_y.append(None)
# Edge Generator _______________________________________________________________________________________________________
edge_trace = go.Scatter(
x=edge_x, y=edge_y,
line=dict(width=0.5, color='#00ff00'),
hoverinfo='none',
mode='lines')
edge_cuts = go.Scatter(
x=edge_c_x, y=edge_c_y,
line=dict(width=1, color='#fe2ec8'),
hoverinfo='none',
mode='lines')
# Node Generator _______________________________________________________________________________________________________
node_x = []
node_y = []
for id_station in m1.read_stations():
node_x.append(id_station[2])
node_y.append(id_station[1])
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
# colorscale options
# 'Greys' | 'YlGnBu' | 'Greens' | 'YlOrRd' | 'Bluered' | 'RdBu' |
# 'Reds' | 'Blues' | 'Picnic' | 'Rainbow' | 'Portland' | 'Jet' |
# 'Hot' | 'Blackbody' | 'Earth' | 'Electric' | 'Viridis' |
colorscale='Bluered',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='',
xanchor='left',
titleside='right'
),
line_width=2))
node_adjacencies = []
node_text = []
i = 0
for key in m1.rawData.keys():
node_adjacencies.append(D[i, i])
node_text.append('# of connections: ' + str(D[i, i]) + ' ' + m1.rawData[key].get_name())
i = i + 1
node_trace.marker.color = v_sinal
node_trace.text = node_text
# Plot options _________________________________________________________________________________________________________
fig = go.Figure(data=[edge_trace, node_trace, edge_cuts],
layout=go.Layout(
title='<br> London Subway Network',
titlefont_size=25,
showlegend=False,
plot_bgcolor='#000000',
hovermode='closest',
margin=dict(b=20, l=5, r=5, t=40),
annotations=[dict(
text="Python code: <a Trabalho EDA /</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002)],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
if graph_1:
fig.show()
# Main 2nd part ________________________________________________________________________________________________________
    # Create the Metro object
    m2 = London_Metro()
    # Add the stations and then create the connections
    m2.create_stations()
    m2.create_connections()  # some duplicate connections already cleaned up
    # Definition of the parameters (seeds) for the Dijkstra algorithm (station ids)
    start = 299
    end = 6
    n = 1
    # Variable that stores the optimal path
    optimal_path = m2.dijkstra(start, end, n)
    # Compute the travel time (metro), the line-change time and the total trip time
    travel_time = m2.get_travel_time(optimal_path, n)
    change_time = m2.get_change_time(optimal_path)
    total_time = travel_time + change_time
if results_p2:
print('=====================================================')
print('Results for the shortest path (Dijkstra Algorithm)', end="\n\n")
print('The optimal path is ' + str(optimal_path))
print('Total time = ' + str(total_time))
print('Change time = ' + str(change_time))
# Graph Dijkstra Info __________________________________________________________________________________________________
edge_x = []
edge_y = []
for edge in m2.read_connections():
st1 = edge[1]
st2 = edge[2]
x0, y0 = m2.rawData[st1].get_longitude(), m2.rawData[st1].get_latitude()
x1, y1 = m2.rawData[st2].get_longitude(), m2.rawData[st2].get_latitude()
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_c_x = []
edge_c_y = []
for edge in m2.get_list(optimal_path):
st1, st2 = edge
x0, y0 = m2.rawData[st1].get_longitude(), m2.rawData[st1].get_latitude()
x1, y1 = m2.rawData[st2].get_longitude(), m2.rawData[st2].get_latitude()
edge_c_x.append(x0)
edge_c_x.append(x1)
edge_c_x.append(None)
edge_c_y.append(y0)
edge_c_y.append(y1)
edge_c_y.append(None)
# Edge Generator _______________________________________________________________________________________________________
edge_trace = go.Scatter(
x=edge_x, y=edge_y,
line=dict(width=0.5, color='#00ff00'),
hoverinfo='none',
mode='lines')
edge_cuts = go.Scatter(
x=edge_c_x, y=edge_c_y,
line=dict(width=3, color='#fe2ec8'),
hoverinfo='none',
mode='lines')
# Node Generator _______________________________________________________________________________________________________
node_x = []
node_y = []
for id_station in m2.read_stations():
node_x.append(id_station[2])
node_y.append(id_station[1])
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=False,
# colorscale options
# 'Greys' | 'YlGnBu' | 'Greens' | 'YlOrRd' | 'Bluered' | 'RdBu' |
# 'Reds' | 'Blues' | 'Picnic' | 'Rainbow' | 'Portland' | 'Jet' |
# 'Hot' | 'Blackbody' | 'Earth' | 'Electric' | 'Viridis' |
colorscale='ice',
reversescale=True,
color=[],
size=7,
colorbar=dict(
thickness=45,
title='Number of lines',
xanchor='left',
titleside='right'
),
line_width=2))
node_text = []
for key in m2.rawData.keys():
node_text.append('Station: ' + m2.rawData[key].get_name())
node_trace.text = node_text
node_trace.marker.color = np.zeros(302)
# Plot options _________________________________________________________________________________________________________
fig = go.Figure(data=[edge_trace, node_trace, edge_cuts],
layout=go.Layout(
title='<br> Optimal path',
titlefont_size=25,
showlegend=False,
plot_bgcolor='#000000',
hovermode='closest',
margin=dict(b=20, l=5, r=5, t=40),
annotations=[dict(
text="Python code: <a Trabalho EDA /</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002)],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
if graph_2:
fig.show()
# -
# ## Results
# As a result of the phase 1 code we obtained the following parameters as the solution to this problem. **n_cuts** is the minimum number of cut edges **c**, which is 8; the cardinalities of **G1** and **G2** are 172 and 130, respectively; and the ___ratio cut partition___, also known as **f**, is 0.00035778175313059033.
#     =====================================================
#     Results for the Spectral Bissection Algorithm
#     The number of cut edges is: 8.0
#     The number of nodes in graph 1 is 172
#     The number of nodes in graph 2 is 130
#     The ratio cut partition, f = 0.00035778175313059033
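# As a quick check of these figures (an added note): with $c = 8$ cut edges, $|G_1| = 172$ and
# $|G_2| = 130$,
#
# $$f = \frac{c}{|G_1|\,|G_2|} = \frac{8}{172 \times 130} \approx 3.578 \times 10^{-4},$$
#
# which matches the value printed above.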
# As a result of the phase 2 code we obtained the following parameters as the solution to this problem. The optimal path linking the Amersham and Wimbledon stations (6, 299) passes through 23 stations and takes 136.475 minutes on average during the AM peak period. This optimal route requires 6 line changes, amounting to 60 minutes. Note: the station names can be seen in the graphical representation of the solution.
#     =====================================================
#     Results for the shortest path (Dijkstra Algorithm)
#     The optimal path is [299, 300, 231, 80, 205, 195, 96, 287, 74, 99, 236, 229, 273, 107, 28, 11, 94, 115, 168, 214, 53, 46, 6]
#     Total time = 136.475
#     Change time = 60
# ## Visualisation of the results
# Using the plotly.graph module it is possible to build a graphical visualisation of the bisection result. By adapting example plotting code to the existing structure of the previous code, the following result was produced.
# 
# Figure 3 - Representation of the two separated graphs.
# The circles represent the vertices of the graph (in our problem, the metro stations) and the lines between circles represent the edges, i.e. the connections between stations. In more detail, the edges marked in pink are the connections removed in the minimum cut of the graph.
# 
# Figure 4 - Edges to cut in order to split the graph in two.
# Graphical representation of the optimal path between the Amersham and Wimbledon stations, as returned by the Dijkstra algorithm.
# 
# Figure 5 - The optimal path found.
# ## Discussion of the results
# The spectral bisection algorithm reaches a satisfactory solution, in which the cut value **c** is minimised while the product of the cardinalities of the resulting graphs, **|G1|x|G2|**, is maximised with the two sizes kept close; as a consequence the __ratio cut partition__ **f** is minimised. The time complexity of the algorithm is **$O(n)$** for most of the structures used; a few functions with **$O(n^2)$** complexity could still be optimised. The graphical visualisation, although not essential for the solution, helps in understanding the outputs of the code and checking their validity.
# In the case of the optimal path, flaws in the "Interstation v3.csv" file made it impossible to create connections between vertices belonging to the DLR line. This gap can be seen in figure 6. To determine the optimal time, an average of the available times between stations had to be computed. The solution presented appears to meet all the requirements stated in the project brief.
# 
# Figure 6 - Missing connections on the DLR line
# ## Bibliography
# [1] Sniedovich, Moshe (2006). Dijkstra's Algorithm Revisited: The Dynamic Programming Connexion. Control and Cybernetics, 35.
#
# [2] Pothen, Alex, Horst D. Simon, and Kang-Pu Paul Liu (July 1989). Partitioning Sparse Matrices with Eigenvectors of Graphs. Report RNR-89-009.
#
# [3] Elsner, Ulrich (1999). Graph Partitioning - A Survey. 1619-7186.
#
# [4] Hagen, Lars, and Andrew B. Kahng (1992). New Spectral Methods for Ratio Cut Partitioning and Clustering.
#
# [5] U.C. Berkeley CS267 Home Page, Applications of Parallel Computers (1996). Graph Partitioning, Part 2 - https://people.eecs.berkeley.edu/~demmel/cs267/lecture20/lecture20.html
#
# [6] William Fleshman (2019). Spectral Clustering, Towards Data Science - https://towardsdatascience.com/spectral-clustering-aba2640c0d5b
| 45,609 |
/.ipynb_checkpoints/PCA_digits-Copy1-checkpoint.ipynb
|
3cac542e98c867c68fbc42eaed661ea9206839e9
|
[] |
no_license
|
nhorton04/Reddit_Comments
|
https://github.com/nhorton04/Reddit_Comments
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 25,152 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Dimensionality reduction and PCA
#
# Let's revisit the MNIST digits dataset.
#
# Notebook objectives:
#
# * Learn sklearn syntax for PCA
# * Look at an example of using PCA for visualization
# * Learn how to use a scree plot to explore how many principal components to keep
# +
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn import svm
from matplotlib import pyplot as plt
# %matplotlib inline
# -
# load the digits dataset
digits = datasets.load_digits()
print(digits.data.shape)
digits
# This is what one digit looks like in numbers
digits.data[166].reshape(8,8)
# Take the shading for each pixel and plot it as color
plt.gray()
plt.matshow(digits.images[166])
plt.show()
# +
# Center the data on 0
# We should do this (almost) all of the time so that we don't fit to covariates
# that happen to be on larger scales and have more variance
X_centered = digits.data - digits.data.mean()
y = digits.target
X_train, X_test, y_train, y_test = train_test_split(X_centered, y, test_size=0.5,random_state=42)
print(X_train.shape)
# -
# ## Let's do some PCA!
data_df = pd.read_pickle('./pickles/data_clean.pickle')
data_df
# Take all of the data and plot it on 2 dimensions
pca = PCA(n_components=2)
pca.fit(X_train)
pcafeatures_train = pca.transform(X_train)
# +
# Create a plot of the PCA results
from itertools import cycle
def plot_PCA_2D(data, target, target_names):
colors = cycle(['r','g','b','c','m','y','orange','w','aqua','yellow'])
target_ids = range(len(target_names))
plt.figure(figsize=(10,10))
for i, c, label in zip(target_ids, colors, target_names):
plt.scatter(data[target == i, 0], data[target == i, 1],
c=c, label=label, edgecolors='gray')
plt.legend()
# -
# plot of all the numbers
plot_PCA_2D(pcafeatures_train, target=y_train, target_names=digits.target_names)
# ## Transforming our input matrix X for use in classification/clustering
#
# Here we did PCA for visualization. But we can take our new N x k matrix (where k = number principal components) as input to regression, classification, clustering, etc.
X_transf = pca.transform(X_train)
print("shape of original X_train:", X_train.shape)
print("shape of X_train using 2 principal components:", X_transf.shape, "\n")
print(X_transf)
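# As a quick, hedged illustration (added here, not part of the original notebook), the 2-component
# feature matrix can be fed straight into any sklearn estimator, e.g. a logistic regression:
# +
from sklearn.linear_model import LogisticRegression

# Fit a simple classifier on the 2-D PCA features and evaluate on the transformed test set
clf_pca = LogisticRegression(max_iter=1000)
clf_pca.fit(X_transf, y_train)
print("accuracy with 2 principal components:", clf_pca.score(pca.transform(X_test), y_test))
# -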
pca.explained_variance_ratio_
# +
# to understand the importance of each variable in each PC, look at the correlations:
pd.DataFrame(pca.components_, index = ['PC1','PC2'])
# remember, signs don't matter, just direction in space
# -
pca.singular_values_
# ## Choosing number of components with a scree plot
#
# Choosing two or three components makes sense if we're using PCA for visualization. But what if we're trying to do feature extraction and need to use the components as input for our classifcation/clustering task? Then we might use a scree plot to choose the number of components.
pca2 = PCA(n_components=15)
pca2.fit(X_train)
pcafeatures_train2 = pca2.transform(X_train)
plt.plot(pca2.explained_variance_ratio_)
plt.xlabel('# components')
plt.ylabel('explained variance');
plt.title('Scree plot for digits dataset');
plt.plot(np.cumsum(pca2.explained_variance_ratio_))
plt.xlabel('# components')
plt.ylabel('cumulative explained variance');
plt.title('Cumulative explained variance by PCA for digits');
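# Instead of reading k off the scree plot by eye, sklearn's PCA also accepts a float for
# n_components and keeps however many components are needed to reach that fraction of explained
# variance. A small sketch (added for illustration, not original notebook code):
# +
# Keep enough components to explain 95% of the variance
pca_95 = PCA(n_components=0.95, svd_solver='full')
pca_95.fit(X_train)
print("components needed for 95% variance:", pca_95.n_components_)
# -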
# ## Bonus: t-SNE
#
# Adapted from the [O-Reilly Media t-SNE tutorial](https://github.com/oreillymedia/t-SNE-tutorial).
# +
# sklearn implements t-SNE.
from sklearn.manifold import TSNE
from sklearn.preprocessing import scale
from sklearn.manifold.t_sne import (_joint_probabilities,
_kl_divergence)
# Import seaborn and matplotlib.patheffects to make nice plots.
import seaborn as sns
import matplotlib.patheffects as PathEffects
sns.set_style('darkgrid')
sns.set_palette('muted')
sns.set_context("notebook", font_scale=1.5,
rc={"lines.linewidth": 2.5})
# Random state.
RS = 20200807
# -
# We first reorder the data points according to the handwritten numbers.
X = np.vstack([digits.data[digits.target==i]
for i in range(10)])
y = np.hstack([digits.target[digits.target==i]
for i in range(10)])
digits_proj = TSNE(random_state=RS).fit_transform(X)
def scatter(x, colors):
# We choose a color palette with seaborn.
palette = np.array(sns.color_palette("hls", 10))
# We create a scatter plot.
f = plt.figure(figsize=(8, 8))
ax = plt.subplot(aspect='equal')
sc = ax.scatter(x[:,0], x[:,1], lw=0, s=40,
                    c=palette[colors.astype(int)])
plt.xlim(-25, 25)
plt.ylim(-25, 25)
ax.axis('off')
ax.axis('tight')
# We add the labels for each digit.
txts = []
for i in range(10):
# Position of each label.
xtext, ytext = np.median(x[colors == i, :], axis=0)
txt = ax.text(xtext, ytext, str(i), fontsize=24)
txt.set_path_effects([
PathEffects.Stroke(linewidth=5, foreground="w"),
PathEffects.Normal()])
txts.append(txt)
return f, ax, sc, txts
scatter(digits_proj, y)
# ## t-SNE Visualized!
#
# [How t-SNE runs on the digits dataset](https://github.com/oreillymedia/t-SNE-tutorial/raw/master/images/animation.gif)
# Online learning stands in contrast to batch learning, where the entire training dataset is used at once. Online learning is very useful in situations where there is a huge amount of data and it is computationally infeasible to train on the entire dataset because of its sheer size. We can simply say that an online-learning algorithm will get a training example, update the classifier, and then throw away the example.
#
# A very good example of this would be to detect fake news on a social media website like Twitter, where new data is being added every second. To dynamically read data from Twitter continuously, the data would be huge, and using an online-learning algorithm would be ideal.
#
# Passive-Aggressive algorithms are somewhat similar to a Perceptron model, in the sense that they do not require a learning rate. However, they do include a regularization parameter.
#
# **How Passive-Aggressive Algorithms Work:**
#
# Passive-Aggressive algorithms are called so because :
#
# **Passive**: If the prediction is correct, keep the model and do not make any changes. i.e., the data in the example is not enough to cause any changes in the model.
#
# **Aggressive**: If the prediction is incorrect, make changes to the model. i.e., some change to the model may correct it.
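# For reference (a sketch of the standard formulation, added here; not text from the original
# notebook): for a binary label $y_t \in \{-1, +1\}$ and features $x_t$, the classic
# Passive-Aggressive update uses the hinge loss $\ell_t = \max(0,\, 1 - y_t\,(w_t \cdot x_t))$ and sets
#
# $$\tau_t = \frac{\ell_t}{\lVert x_t \rVert^2}, \qquad w_{t+1} = w_t + \tau_t\, y_t\, x_t,$$
#
# so the weights stay unchanged when the prediction is correct and confident ($\ell_t = 0$,
# passive) and move just enough to satisfy the current example otherwise (aggressive). The
# regularized variants PA-I and PA-II cap or smooth $\tau_t$ using the parameter $C$.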
# +
from sklearn.linear_model import PassiveAggressiveClassifier
pipe = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', PassiveAggressiveClassifier())
])
model = pipe.fit(x_train, y_train)
prediction = model.predict(x_test)
score = metrics.accuracy_score(y_test, prediction)
print("accuracy: %0.3f" % (score*100))
cm = metrics.confusion_matrix(y_test, prediction, labels=[0,1])
fig, ax = plot_confusion_matrix(conf_mat=cm,
show_absolute=True,
show_normed=True,
colorbar=True)
plt.show()
# -
import pickle
# Save trained model to file
pickle.dump(model, open("news.pkl", "wb"))
import pickle
# Save the fitted vectorizer to its own file (reusing "news.pkl" would overwrite the model saved above)
pickle.dump(vect, open("vectorizer.pkl", "wb"))
loaded_model = pickle.load(open("news.pkl", "rb"))
loaded_model.predict(x_test)
loaded_model.score(x_test,y_test)
## CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(max_features=5000,ngram_range=(1,3))
X = cv.fit_transform(corpus).toarray()
y = df['label']
## Divide the dataset into Train and Test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
cv.get_feature_names()[:20]
cv.get_params()
count_df = pd.DataFrame(X_train, columns=cv.get_feature_names())
count_df.head()
# +
from sklearn.naive_bayes import MultinomialNB
classifier=MultinomialNB()
from sklearn import metrics
import numpy as np
import itertools
# -
classifier.fit(X_train, y_train)
pred = classifier.predict(X_test)
score = metrics.accuracy_score(y_test, pred)
print("accuracy: %0.3f" % score)
cm = metrics.confusion_matrix(y_test, pred)
plot_confusion_matrix(cm,show_absolute=True,
show_normed=True,
colorbar=True)
from sklearn.linear_model import PassiveAggressiveClassifier
model = PassiveAggressiveClassifier().fit(X_train, y_train)
pred = model.predict(X_test)
score = metrics.accuracy_score(y_test, pred)
print("accuracy: %0.3f" % score)
cm = metrics.confusion_matrix(y_test, pred)
plot_confusion_matrix(cm,show_absolute=True,
show_normed=True,
colorbar=True)
import pickle
# Save trained model to file
pickle.dump(model, open("news.pkl", "wb"))
import pickle
# Save trained model to file
pickle.dump(cv, open("cv.pkl", "wb"))
news = ["Exclusive: Trump says he thought being president would be easier than his old life WASHINGTON (Reuters) - He misses driving, feels as if he is in a cocoon, and is surprised how hard his new job is. President Donald Trump on Thursday reflected on his first 100 days in office with a wistful look at his life before the White House. “I loved my previous life. I had so many things going,” Trump told Reuters in an interview. “This is more work than in my previous life. I thought it would be easier.” A wealthy businessman from New York, Trump assumed public office for the first time when he entered the White House on Jan. 20 after he defeated former Secretary of State Hillary Clinton in an upset. REUTERS RECOMMENDSRacism on the rise: Reuters pollHow North Korea gets its oil from China More than five months after his victory and two days shy of the 100-day mark of his presidency, the election is still on Trump’s mind. Midway through a discussion about Chinese President Xi Jinping, the president paused to hand out copies of what he said were the latest figures from the 2016 electoral map. “Here, you can take that, that’s the final map of the numbers,” the Republican president said from his desk in the Oval Office, handing out maps of the United States with areas he won marked in red. “It’s pretty good, right? The red is obviously us.” He had copies for each of the three Reuters reporters in the room. Trump, who said he was accustomed to not having privacy in his “old life,” expressed surprise at how little he had now. And he made clear he was still getting used to having 24-hour Secret Service protection and its accompanying constraints. “You’re really into your own little cocoon, because you have such massive protection that you really can’t go anywhere,” he said. When the president leaves the White House, it is usually in a limousine or an SUV. He said he missed being behind the wheel himself. “I like to drive,” he said. “I can’t drive any more.” Many things about Trump have not changed from the wheeler-dealer executive and former celebrity reality show host who ran his empire from the 26th floor of Trump Tower in New York and worked the phones incessantly. He frequently turns to outside friends and former business colleagues for advice and positive reinforcement. Senior aides say they are resigned to it. The president has been at loggerheads with many news organizations since his election campaign and decided not to attend the White House Correspondents’ Dinner in Washington on Saturday because he felt he had been treated unfairly by the media. “I would come next year, absolutely,” Trump said when asked whether he would attend in the future. The dinner is organized by the White House Correspondents’ Association. Reuters correspondent Jeff Mason is its president. "]
new_X_test = cv.transform(news).toarray()
model.predict(new_X_test)
| 12,360 |
/Assignment_8.ipynb
|
06b7f3a7280d47aa63cc1253fafa5548857d64bd
|
[] |
no_license
|
Prema16/DS_ExcelR
|
https://github.com/Prema16/DS_ExcelR
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 7,092,906 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PCA - Assignment 8
# Perform principal component analysis and then perform clustering using the first
# 3 principal component scores (both hierarchical and k-means clustering; use a scatter plot or elbow curve)
# to obtain the optimum number of clusters, and check whether we obtain the same number of clusters as the
# original data (the class column we ignored at the beginning, which shows there are 3 clusters).
# ## Step 1 - Importing the relevant libraries
# +
import pandas as pd
import numpy as np
import seaborn as sns
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.subplots as tls
import seaborn as sns
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import matplotlib
# %matplotlib inline
# Import the 3 dimensionality reduction methods
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler # Data Preprocessing
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
# -
# ## Import the dataset
data = pd.read_csv('wine.csv')
data.head()
print(data.shape)
print(data.columns)
data.describe()
sns.pairplot(data, hue='Type')
f,ax=plt.subplots(figsize=(12,12))
sns.heatmap(data.astype(float).corr(),linewidth=0.20,square=True,annot=True)
train = data.iloc[:,1:14]
target = data.iloc[:,:1]
# ## Standardize columns
# Standardize the data
X = train.values
X_std = StandardScaler().fit_transform(X)
# ## Compute Covariance Matrix
# Cov matirx
mean_vec = np.mean(X_std, axis=0)
cov_mat = np.cov(X_std.T)
print(cov_mat)
# ## Compute Eigen values and Eigen Vectors
#Calculating Eigenvectors and eigenvalues of cov Matrix
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
# Create a list of (eigenvalue, eigenvector) tuples
eig_pairs = [ (np.abs(eig_vals[i]),eig_vecs[:,i]) for i in range(len(eig_vals))]
# Sort the eigenvalue, eigenvector pair from high to low
eig_pairs.sort(key = lambda x: x[0], reverse= True)
print(' *** Eigen Pairs *** \n ',eig_pairs)
# Calculation of Explained Variance from the eigenvalues
tot = sum(eig_vals)
var_exp = [(i/tot)*100 for i in sorted(eig_vals, reverse=True)] # Individual explained variance
print('*** Individual explained variance : **** \n',var_exp)
cum_var_exp = np.cumsum(var_exp) # Cumulative explained variance
print('*** Cumulative explained variance : **** \n',cum_var_exp)
# PLOT OUT THE EXPLAINED VARIANCES SUPERIMPOSED
plt.figure(figsize=(12,8))
plt.bar(range(len(var_exp)), var_exp, alpha=0.3333, align='center', label='individual explained variance', color = 'r')
plt.step(range(len(cum_var_exp)), cum_var_exp, where='mid',label='cumulative explained variance',color='g')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.show()
# ## PCA model with 3 Components
pcamodel = PCA(n_components=3)
pca = pcamodel.fit_transform(X_std)
df_pca = pd.DataFrame(pca)
#Relationship of principal components to the original dataset
componen =pd.DataFrame(pcamodel.components_,columns=train.columns)
print(componen)
print('*** Individual explained variance : **** \n',pcamodel.explained_variance_ratio_)
print('*** Cumulative explained variance : **** \n',np.cumsum(pcamodel.explained_variance_ratio_))
# PLOT OUT THE EXPLAINED VARIANCES SUPERIMPOSED
plt.figure(figsize=(10,6))
plt.bar(range(len(pcamodel.explained_variance_ratio_)), pcamodel.explained_variance_ratio_, alpha=0.3333, align='center', label='individual explained variance', color = 'r')
plt.step(range(len(np.cumsum(pcamodel.explained_variance_ratio_))), np.cumsum(pcamodel.explained_variance_ratio_), where='mid',label='cumulative explained variance',color='g')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.title('Variance Plot')
plt.legend(loc='best')
plt.show()
plt.scatter(df_pca[0], df_pca[1], alpha=.3, color='green')
plt.xlabel('PCA 1')
plt.ylabel('PCA 2')
plt.title('PCA Projection with Scatter plot')
plt.show()
# ## Clustering
# ### Considering the complete dataset
# Fit K-Means with 3 clusters on the full standardized data (the original Type column also has 3 classes)
model1=KMeans(n_clusters=3)
model1.fit(X_std)
model1.labels_ # getting the labels of clusters assigned to each row
md=pd.Series(model1.labels_) # converting numpy array into pandas series object
data['clust']=md
sns.pairplot(data, hue='clust')
# ### Considering only 3 PC without defining clusters
inertias = []
# Creating 10 K-Mean models while varying the number of clusters (k)
for k in range(1,10):
model2 = KMeans(n_clusters=k)
# Fit model to samples
model2.fit(df_pca.iloc[:,:3])
# Append the inertia to the list of inertias
inertias.append(model2.inertia_)
plt.plot(range(1,10), inertias, '-p', color='green')
plt.xlabel('number of clusters, k')
plt.ylabel('inertia')
plt.show()
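# Besides the elbow curve above, the silhouette score is a common complementary check for the
# number of clusters. The short sketch below is an illustrative addition, not part of the
# original assignment code.
# +
from sklearn.metrics import silhouette_score

# Average silhouette score for each candidate k on the first 3 principal components
for k in range(2, 8):
    km = KMeans(n_clusters=k).fit(df_pca.iloc[:, :3])
    print(k, round(silhouette_score(df_pca.iloc[:, :3], km.labels_), 3))
# -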
# ### Considering 3 PC with 5 clusters
# +
model3 = KMeans(n_clusters=5)
model3.fit(df_pca.iloc[:, :3])
labels = model3.predict(df_pca.iloc[:, :3])
plt.scatter(df_pca[0], df_pca[1], c=labels,marker='*')
plt.show()
# -
model4 = AgglomerativeClustering(n_clusters=5, affinity='euclidean', linkage='ward')
agglo_labels = model4.fit_predict(df_pca.iloc[:, :3])
plt.figure(figsize=(10, 6))
plt.title("Agglomerative Clustering Model - Graph")
# Plot the first two principal component scores coloured by the agglomerative cluster labels
plt.scatter(df_pca[0], df_pca[1], c=agglo_labels, cmap='tab10')
plt.show()
| 5,824 |
/Q2.ipynb
|
51bf43148085e3d44c26b12e34a76d9ce0f7f0b2
|
[] |
no_license
|
Yasaman1997/Data-Science-Mini-Project
|
https://github.com/Yasaman1997/Data-Science-Mini-Project
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 39,727 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
mt=pd.read_csv('MT_cleaned.csv',delimiter=',')
vt=pd.read_csv('VT_cleaned.csv',delimiter=',')
mt.info()
mt.columns
# 1.The proportion of traffic stops in MT involving male drivers
male_mt=mt[mt['driver_gender']=='M']
male_mt.head()
male_mt.tail()
male_driver=len(male_mt)
prop=male_driver/len(mt)
prop
# the proportion: 0.6749749732765495
# 2.Factor increase in a traffic stop arrest likelihood in MT from OOS plates
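# A possible approach (an illustrative sketch added here, assuming the cleaned data contains
# boolean `out_of_state` and `is_arrested` columns; not code from the original notebook):
# +
arrest_rate_oos = mt.loc[mt['out_of_state'] == True, 'is_arrested'].mean()
arrest_rate_in_state = mt.loc[mt['out_of_state'] == False, 'is_arrested'].mean()
arrest_rate_oos / arrest_rate_in_state  # factor increase in arrest likelihood for OOS plates
# -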
# 3.The proportion of traffic stops in MT involving speeding violations
mt['violation'].dropna
v=mt['violation']
mt_speed_violation=v.str.contains('Speeding',regex='True')
mt_speed_violation
speed_violation_data_frame=pd.DataFrame(mt_speed_violation)
mt_speed_violation_filtered=speed_violation_data_frame.loc[speed_violation_data_frame['violation'] == True]
len(mt_speed_violation_filtered)/len(mt)
# The proportion of traffic stops in MT involving speeding violations: 0.6580998111785223
# 4.Factor increase in traffic stop DUI likelihood in MT over VT:
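# A possible approach (an illustrative sketch added here, assuming DUI stops can be identified by
# 'DUI' appearing in the `violation` column of both datasets; not code from the original notebook):
# +
dui_rate_mt = mt['violation'].str.contains('DUI', na=False).mean()
dui_rate_vt = vt['violation'].str.contains('DUI', na=False).mean()
dui_rate_mt / dui_rate_vt  # factor increase in DUI stop likelihood in MT over VT
# -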
# 5.The average manufacture year of vehicles stopped in MT in 2020
#stop_mt=mt['stop_date'].dropna()
manufacture_year=mt[['vehicle_year','stop_date']]
manufacture_year
manufacture_year['stop_date']= pd.to_datetime(manufacture_year['stop_date'])
#extract year
manufacture_year['stop_year']=manufacture_year['stop_date'].apply(lambda x: x.year)
manufacture_year.dropna()
manufacture_year.info()
vehicle_year = pd.to_numeric(manufacture_year['vehicle_year'], errors='coerce')
# average manufacture year per stop year (non-numeric entries become NaN and are ignored)
vehicle_year_avg = vehicle_year.groupby(manufacture_year['stop_year']).mean()
vehicle_year_avg
# 6.The difference in the total number of stops that occurred between min and max hours in both MT and VT
mt.stop_time.dropna()
stop_time=pd.DataFrame(mt.stop_time.dropna())
stop_time.head()
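# A possible continuation (an illustrative sketch, assuming `stop_time` is an 'HH:MM' string in
# both datasets; not code from the original notebook): count stops per hour of the day and take
# the difference between the busiest and the quietest hour.
# +
hours = pd.concat([mt['stop_time'], vt['stop_time']]).dropna().str.slice(0, 2)
hour_counts = hours.value_counts()
hour_counts.max() - hour_counts.min()
# -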
# 7.The area in sq. km of the largest county in MT
#
mt.iloc[105]
mt['county_name'].describe()
| 2,112 |
/lasso.ipynb
|
6b163295a5fb7dfc90e2e60a4f25cc51d07afbcb
|
[] |
no_license
|
Judith3197/TM10007_PROJECT
|
https://github.com/Judith3197/TM10007_PROJECT
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 570,956 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: 'Python 3.7.4 64-bit (''base'': conda)'
# name: python37464bitbaseconda3e433ad4382c432eac770014b82f80bd
# ---
# # Lasso
# +
# Import packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
import seaborn
from hn.load_data import load_data
from sklearn import model_selection, metrics, feature_selection, preprocessing, neighbors, decomposition, svm
from sklearn.impute import KNNImputer
from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV, learning_curve, RandomizedSearchCV
from sklearn.feature_selection import SelectFromModel
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.linear_model import LogisticRegression, SGDClassifier, Lasso
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.decomposition import PCA, KernelPCA
from sklearn.kernel_approximation import RBFSampler
# Functions for plotting ROC curve
from sklearn.metrics import roc_curve, auc
from scipy import interp
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize
# -
def plot_roc_curve(y_score, y_truth):
'''
Plot an ROC curve.
'''
# Only take scores for class = 1
y_score = y_score[:, 1]
# Compute ROC curve and ROC area for each class
fpr, tpr, _ = roc_curve(y_truth, y_score)
roc_auc = auc(fpr, tpr)
# Plot the ROC curve
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
def plot_learning_curve(estimator, title, X, y, axes, ylim=None, cv=None,
n_jobs=None, train_sizes=np.linspace(.1, 1.0, 5)):
axes.set_title(title)
if ylim is not None:
axes.set_ylim(*ylim)
axes.set_xlabel("Training examples")
axes.set_ylabel("Score")
train_sizes, train_scores, test_scores = \
learning_curve(estimator, X, y, cv=cv, n_jobs=n_jobs,
train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
# Plot learning curve
axes.grid()
axes.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
axes.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1,
color="g")
axes.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
axes.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
axes.legend(loc="best")
return plt
# +
# Data loading and preprocessing
data = load_data()
print(f'The number of samples: {len(data.index)}')
print(f'The number of features: {len(data.columns)-1}')
y_labels = data['label']
del data['label']
y = sklearn.preprocessing.label_binarize(y_labels, ['T12', 'T34']) # 0 now stands for T12 and 1 for T34
y = [i[0] for i in y]
y = np.array(y)
cv_4fold = model_selection.StratifiedKFold(n_splits=4, shuffle=True)
split_X_train, split_X_test, split_y_train, split_y_test = train_test_split(data, y,
stratify=y,
test_size=0.2)
# Loop over the folds
#for _ in range(0,20):
for training_index, validation_index in cv_4fold.split(split_X_train, split_y_train):
train_scores = []
test_scores = []
X_validation = split_X_train.iloc[validation_index]
y_validation = split_y_train[validation_index]
X_train = split_X_train.iloc[training_index]
y_train = split_y_train[training_index]
# 1. Scaling
scaler = preprocessing.StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_validation_scaled = scaler.transform(X_validation)
# 2. Lasso
# Now first use the selectfrom model module. Select all features with a weight above the median.
selector = SelectFromModel(estimator=Lasso(alpha=10**(-10), random_state=None), threshold='median')
selector.fit(X_train_scaled, y_train)
n_original = X_train_scaled.shape[1]
X_train_lasso = selector.transform(X_train_scaled)
X_validation_lasso = selector.transform(X_validation_scaled)
n_selected = X_train_lasso.shape[1]
print(X_train_lasso.shape)
print(f"Selected {n_selected} from {n_original} features.")
# # LDA classifier
# clf = LDA()
# clf.fit(X_train_lasso, y_train)
# y_score = clf.predict_proba(X_validation_lasso)
# plot_roc_curve(y_score, y_validation)
# +
# Lasso with random forest
data = load_data()
print(f'The number of samples: {len(data.index)}')
print(f'The number of features: {len(data.columns)-1}')
y_labels = data['label']
del data['label']
y = sklearn.preprocessing.label_binarize(y_labels, ['T12', 'T34']) # 0 now stands for T12 and 1 for T34
y = [i[0] for i in y]
y = np.array(y)
cv_4fold = model_selection.StratifiedKFold(n_splits=4, shuffle=True)
split_X_train, split_X_test, split_y_train, split_y_test = train_test_split(data, y,
stratify=y,
test_size=0.2)
# Loop over the folds
for _ in range(0,1):
for training_index, validation_index in cv_4fold.split(split_X_train, split_y_train):
train_scores = []
test_scores = []
X_validation = split_X_train.iloc[validation_index]
y_validation = split_y_train[validation_index]
X_train = split_X_train.iloc[training_index]
y_train = split_y_train[training_index]
# 1. Scaling
scaler = preprocessing.StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_validation_scaled = scaler.transform(X_validation)
# 2. Lasso
        # OPEN QUESTION: how should alpha be tuned and the threshold chosen? With tuning, the largest alpha always gets picked, and the current 'factor times the median' threshold is a fairly arbitrary choice
# Now first use the selectfrom model module. Select all features with a weight above the median.
# grid_param = {'alpha': [10000000000000, 1000, 100, 10, 10**(-2), 10**(-4), 10**(-6), 10**(-8), 10**(-10), 10**(-12)]}
# grid_search = GridSearchCV(Lasso(),param_grid=grid_param,cv=skf,n_jobs=-1,verbose=2)
# grid_search.fit(X_train_scaled, y_train)
# best_hyperparameters = grid_search.best_params_
# # Best hyperparameters
# alpha = best_hyperparameters.get('alpha')
# print(alpha)
# best_result = grid_search.best_score_
# print(best_result)
# # Apply classifier with tuned hyperparameters
# classifier = RandomForestClassifier(n_estimators=n_estimators, criterion=criterion, bootstrap=bootstrap)
# classifier.fit(X_train_lasso, y_train)
selector = SelectFromModel(estimator=Lasso(alpha=(10**(-10)), random_state=None), threshold='1.25*median')
selector.fit(X_train_scaled, y_train)
n_original = X_train_scaled.shape[1]
X_train_lasso = selector.transform(X_train_scaled)
X_validation_lasso = selector.transform(X_validation_scaled)
n_selected = X_train_lasso.shape[1]
print(X_train_lasso.shape)
print(f"Selected {n_selected} from {n_original} features.")
## RandomForest Classification
# Stratified K-fold Cross validation
k = 4
        skf = StratifiedKFold(k, shuffle=True, random_state=0)  # cv can also be None --> gives the default 5-fold cross-validation
# Tuning the hyperparameters
grid_param = {'n_estimators': [10, 50, 100, 200, 400],'criterion': ['gini', 'entropy'],'bootstrap': [True, False]}
grid_search = GridSearchCV(RandomForestClassifier(),param_grid=grid_param,cv=skf,n_jobs=-1,verbose=2)
grid_search.fit(X_train_lasso, y_train)
best_hyperparameters = grid_search.best_params_
# Best hyperparameters
n_estimators = best_hyperparameters.get('n_estimators')
criterion = best_hyperparameters.get('criterion')
bootstrap = best_hyperparameters.get('bootstrap')
print(n_estimators)
print(criterion)
print(bootstrap)
best_result = grid_search.best_score_
print(best_result)
# Apply classifier with tuned hyperparameters
classifier = RandomForestClassifier(n_estimators=n_estimators, criterion=criterion, bootstrap=bootstrap)
classifier.fit(X_train_lasso, y_train)
# Calculate accuracy
classifier_predictions_test = classifier.predict(X_validation_lasso)
accuracy = metrics.accuracy_score(y_validation, classifier_predictions_test)
print(accuracy)
print('#'*80)
# Learning curve
title = 'Learning curve for RandomForest Classifier'
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
plot_learning_curve(classifier, title, X_train_lasso, y_train, ax, ylim=(0.3, 1.01), cv=skf)
# -
# distribution $p_{ij}$. Furthermore assume that $p_{ij}(x) = p_{ij}(-x)$, i.e., that the
# distribution is symmetric (see e.g., :cite:`Wigner.1958` for details).
# * Prove that the distribution over eigenvalues is also symmetric. That is, for any eigenvector $\mathbf{v}$ the probability that the associated eigenvalue $\lambda$ satisfies $P(\lambda > 0) = P(\lambda < 0)$.
# * Why does the above *not* imply $P(\lambda > 0) = 0.5$?
# 1. What other challenges involved in deep learning optimization can you think of?
# 1. Assume that you want to balance a (real) ball on a (real) saddle.
# * Why is this hard?
# * Can you exploit this effect also for optimization algorithms?
#
#
# ## [Discussions](https://discuss.mxnet.io/t/2371)
#
# 
| 10,872 |
/theory/LossFunctions.ipynb
|
3d4b38e211fb9793c43d788aea20cac2b693719e
|
[
"MIT"
] |
permissive
|
ankitgupta04/machine-learning-1
|
https://github.com/ankitgupta04/machine-learning-1
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 7,066 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Loss Functions
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import os
import sys
p = os.path.join(os.path.dirname('__file__'), '..')
sys.path.append(p)
from common import *
# ### Classification
# +
# BCE
# -
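# A minimal illustrative sketch for this placeholder (an addition, not original notebook code):
# binary cross-entropy via the PyTorch functional API.
# +
import torch
import torch.nn.functional as F

# Example: predicted probabilities vs. binary targets
probs = torch.tensor([0.9, 0.2, 0.7])
labels = torch.tensor([1.0, 0.0, 1.0])
print(F.binary_cross_entropy(probs, labels))
# -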
# ### Regression
# +
# MSE
# -
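# Likewise a minimal illustrative sketch for regression (an addition, not original notebook code):
# mean squared error via the PyTorch functional API.
# +
import torch
import torch.nn.functional as F

preds = torch.tensor([2.5, 0.0, 2.0])
target_vals = torch.tensor([3.0, -0.5, 2.0])
print(F.mse_loss(preds, target_vals))
# -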
# ### Clustering
# +
# # Distance?
# -
# ### Advanced
# +
class DiceLoss():
'''
http://campar.in.tum.de/pub/milletari2016Vnet/milletari2016Vnet.pdf
https://github.com/faustomilletari/VNet/blob/master/pyLayer.py
https://github.com/pytorch/pytorch/issues/1249
'''
def __init__(self):
self.__class__.__name__ = 'Dice'
def __call__(self, output, target):
return 1.0 - get_torch_dice_score(output, target)
class DiceBCELoss():
def __init__(self, dice_weight=1.0):
self.__class__.__name__ = 'DiceBCE'
self.dice_weight = dice_weight
self.bce_weight = 1.0 - dice_weight
def __call__(self, output, target):
bce = F.binary_cross_entropy(output, target)
dice = 1 - get_torch_dice_score(output, target)
return (dice * self.dice_weight) + (bce * self.bce_weight)
class WeightedBCELoss():
def __init__(self, weights):
self.weights = weights
self.__class__.__name__ = 'WeightedBCE'
def __call__(self, output, target):
return F.binary_cross_entropy(output, target, self.weights)
class KnowledgeDistillLoss():
def __init__(self, target_weight=0.25):
self.__class__.__name__ = 'KnowledgeDistill'
self.target_weight = target_weight
def __call__(self, output, target, soft_target):
target_loss = F.binary_cross_entropy(output, target) * self.target_weight
soft_target_loss = F.binary_cross_entropy(output, soft_target)
return target_loss + soft_target_loss
class HuberLoss():
def __init__(self, c=0.5):
self.c = c
self.__class__.__name__ = 'Huber'
def __call__(self, output, target):
bce = F.binary_cross_entropy(output, target)
return self.c**2 * (torch.sqrt(1 + (bce/self.c)**2) - 1)
class SmoothF2Loss():
def __init__(self, c=10.0, f2_weight=0.2, bce_weight=1.0):
self.__class__.__name__ = 'SmoothF2'
self.c = c
self.f2_weight = f2_weight
self.bce_weight = bce_weight
def __call__(self, output, target, thresholds):
f2 = get_smooth_f2_score(output, target, thresholds, self.c) * self.f2_weight
bce = F.binary_cross_entropy(output, target) * self.bce_weight
return f2 + bce
# -
# ## Helpers
# +
def get_torch_dice_score(outputs, targets):
eps = 1e-7
batch_size = outputs.size()[0]
outputs = outputs.view(batch_size, -1)
targets = targets.view(batch_size, -1)
total = torch.sum(outputs, dim=1) + torch.sum(targets, dim=1)
intersection = torch.sum(outputs * targets, dim=1).float()
dice_score = (2.0 * intersection) / (total + eps)
return torch.mean(dice_score)
def sigmoid(z, c=1.0):
return 1.0 / (1.0 + torch.exp(-c*z))
def get_smooth_f2_score(outputs, targets, thresholds, c=10.0):
eps = 1e-9
outputs = sigmoid(thresholds - outputs, c).float()
tot_out_pos = torch.sum(outputs, dim=1)
tot_tar_pos = torch.sum(targets, dim=1)
TP = torch.sum(outputs * targets, dim=1)
P = TP / (tot_out_pos + eps)
    R = TP / (tot_tar_pos + eps)
F2 = 5.0 * (P*R / (4*P + R))
return torch.mean(F2)
# -
| 3,679 |
/Pandas/Pandas_part1.ipynb
|
27063e48baf921145b79370ff85bebe8e0850ae0
|
[] |
no_license
|
rajaashok/DataScience
|
https://github.com/rajaashok/DataScience
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 8,400 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Matplotlib
# - A visualization package that could be called the standard visualization tool for Python.
#   It supports formatting and features for 2D plots and provides the various functions
#   needed to visualize the results of data analysis.
#
# ### How to use the Matplotlib package
# 1. When using the main matplotlib package: import matplotlib as mpl
# 2. When using the pyplot subpackage: import matplotlib.pyplot as plt
#
# ### The pyplot subpackage
# - A package that lets you use the visualization commands of the numerical analysis
#   software MATLAB almost exactly as they are.
# ### Plot types supported by Matplotlib
# - Line plot: plot()
# - Bar chart: bar()
# - Scatter plot: scatter()
# - Box plot: boxplot()
# - Pie chart: pie() (a small example is added after the imports below)
# - Plus various other plot types
import matplotlib.pyplot as plt
# %matplotlib inline
# Magic command: when using Jupyter Notebook,
# this tells it to display figures inside the notebook
# +
# Korean font issue
# matplotlib's default font does not support Hangul,
# so the matplotlib font needs to be changed
import platform
from matplotlib import font_manager, rc
plt.rcParams['axes.unicode_minus'] = False
if platform.system() == 'Darwin':  # macOS
    rc('font', family='AppleGothic')
elif platform.system() == 'Windows':  # Windows
    path = "c:/Windows/Fonts/malgun.ttf"
    font_name = font_manager.FontProperties(fname=path).get_name()
    rc('font', family=font_name)
else:
    print('Unknown system... sorry~~~')
# -
import pandas as pd
import numpy as np
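# The list of supported plot types above also mentions pie charts; pie() is not demonstrated
# elsewhere in this notebook, so a small example with made-up shares is added here for
# illustration.
# +
sizes = [45, 30, 15, 10]
plt.figure(figsize=(5, 5))
plt.pie(sizes, labels=['A', 'B', 'C', 'D'], autopct='%1.1f%%')
plt.title('Pie chart example')
plt.show()
# -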
# ## The plot() function: drawing lines
# - Used to show how data changes over time, order, etc.
#
# #### The show() function
# - Actually renders the plot commands as a chart and waits for mouse events.
# - In Jupyter Notebook each cell's plot commands are rendered automatically, so the graph is
#   displayed even without show(); still, to cover the case of running under the plain Python
#   interpreter, call show() at the end.
plt.plot([1, 4, 9, 16])  # when a single sequence is given, the values are assigned to the y axis
plt.show()
# the x axis is automatically set to 0, 1, 2, 3
# ### Styles frequently used in line plots (abbreviations)
# - color: `c` (line color)
# - linewidth: `lw` (line width)
# - linestyle: `ls` (line style)
# - marker: marker type
# - markersize: `ms` (marker size)
# - markeredgecolor: `mec` (marker edge color)
# - markeredgewidth: `mew` (marker edge width)
# - markerfacecolor: `mfc` (marker face color)
# +
x = [0, 1, 2, 3, 4, 5, 6]
y = [1, 4, 5, 8, 9, 5, 3]
# set the size of the output figure
plt.figure(figsize=(10,6))
plt.plot(x, y, color='green')
plt.show()
# +
x = [0, 1, 2, 3, 4, 5, 6]
y = [1, 4, 5, 8, 9, 5, 3]
# set the size of the output figure
plt.figure(figsize=(10,6))
plt.plot(x, y, color='blue', linestyle='dashdot')
plt.show()
# +
x = [0, 1, 2, 3, 4, 5, 6]
y = [1, 4, 5, 8, 9, 5, 3]
# set the size of the output figure
plt.figure(figsize=(10,6))
plt.plot(x, y, color='blue', linestyle='dashdot', marker='o')
plt.show()
# -
plt.plot([10, 20, 30, 40], [1, 4, 9, 16],
c='black', # color
lw=5, # linewidth
ls='--', # linestyle
marker='o',
mec='y',
ms=15,
mew=5,
mfc='r')
plt.title("마커 스타일 적용 예")
plt.show()
# ### Setting ticks
# > tick: the position markers along an axis
# - tick label: the number or text displayed at a tick
# - Matplotlib chooses tick positions and labels automatically, but
#   to set them manually use the xticks() and yticks() functions
# +
# setting ticks
x = [10, 20, 30, 40, 50, 60]
y = [11, 15, 50, 30, 40, 10]
plt.plot(x, y, color='green',
linestyle='dashed',
marker='o')
plt.xticks(x, ("10대", "20대", "30대", "40대", "50대", "60대"))
plt.yticks(y, (y[i] for i in range(6)))
plt.show()
# -
# ## Bar chart: `bar()`
# +
y = [2, 3, 1]
x = np.arange(len(y)) # => 0, 1, 2
xlabel = ['가', '나', '다']
plt.figure(figsize=(8,3))
plt.title('Bar Chart')
plt.bar(x, y)
plt.xticks(x, xlabel)
plt.yticks(y) # == plt.yticks(sorted(y))
plt.xlabel('그룹명')
plt.ylabel('빈도수')
plt.show()
# +
# drawing horizontal bars: use the barh() function - 'bar' + 'h'orizontal
np.random.seed(0)
people = ['몽룡', '춘향', '방자', '향단']
y = np.arange(len(people)) # == array([0, 1, 2, 3])
performance = 3 + 10 * np.random.rand(len(people))
plt.title("수평 Bar Chart")
plt.barh(y, performance, alpha=0.4)
plt.yticks(y, people)
plt.xlabel("performance")
plt.ylabel('이름')
plt.grid(True)
plt.show()
# -
# ## Scatter plot: `scatter()`
# +
# scatter-type graph
# use the scatter() function
# using numpy arrays
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([9, 8, 7, 9, 8, 3, 2, 4, 5, 3])
plt.figure(figsize=(10,6))
plt.scatter(x, y)
plt.show()
# +
# the case of using lists
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [9, 8, 7, 9, 8, 3, 2, 4, 5, 3]
plt.figure(figsize=(10,6))
plt.scatter(x, y)
plt.show()
# +
# specifying the marker
plt.figure(figsize=(10,6))
plt.scatter(x, y, marker='>')
plt.show()
# +
# histogram: plots the data as bars over consecutive bins
np.random.seed(0)  # so that the same values come out every time
x = np.random.randn(1000)
plt.title("Histogram")
plt.hist(x)
plt.show()
# -
# ### A more convenient visualization tool: Seaborn
# - An advanced Python visualization tool that extends the features and styles of Matplotlib.
# - The Seaborn library normally has to be installed, but when using Anaconda it is included
#   by default, so no extra installation is needed.
# - When using Seaborn, Matplotlib must also be imported.
# +
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
# -
# ## Drawing a box plot: `boxplot()`
# - maximum
# - third quartile
# - median (the horizontal line inside the box)
# - first quartile
# - minimum
# - dots: outliers
# dataset to use: the tips data
# data summarising the bill and tip by day of week, sex, lunch/dinner, and smoker status
tips = sns.load_dataset('tips')
tips.tail()
# +
sns.set_style('whitegrid')
plt.figure(figsize=(6, 3))
sns.boxplot(x=tips['total_bill'])
plt.show()
# +
sns.set_style('whitegrid')
plt.figure(figsize=(6, 3))
sns.boxplot(y=tips['total_bill'])
plt.show()
# -
# specify the x and y values
plt.figure(figsize=(8, 5))
sns.boxplot(x='day', y='total_bill', data=tips)
plt.show()
# ## Handling categorical variables
# - Pass the name of a variable holding category values to the hue argument
# - The plot is then drawn differently for each category value
# categorical variable: smoker (Yes/No)
# tip
# box plot of the bill per day of week, split into smokers / non-smokers
plt.figure(figsize=(8, 5))
sns.boxplot(x='day', y='total_bill', hue='smoker', data=tips, palette='Set3')
plt.show()
plt.figure(figsize=(8, 5))
sns.boxplot(x='day', y='total_bill', hue='smoker', data=tips, palette='RdYlGn')
plt.show()
# box plot of the total bill per day of week, split by sex
plt.figure(figsize=(8, 5))
sns.boxplot(x='day', y='total_bill', hue='sex', data=tips, palette='RdBu')
plt.show()
# ## Visualizing data on a map
#
# ### The folium package
# - A tool for visualizing data on top of a map
# - Needs to be installed (from the Anaconda prompt: `pip install folium`)
# - A library for visualizing location information on map data using JavaScript
# - Location information can be shown on the map as markers
import folium
# +
# Create the initial map object
# Simply create a map by passing the centre coordinates to the Map() method
# The map is drawn at the given latitude and longitude
# -
# Seoul City Hall
# latitude: 37.566345
# longitude: 126.977893
seoulCityMap = folium.Map(location=[37.566345, 126.977893], zoom_start=17)
seoulCityMap
# +
# Show a marker for Seoul City Hall with the popup 'City Hall'
# Show a circle marker at Deoksugung Palace with a popup
# Deoksugung: 37.5658048, 126.97514610007
seoulCityMap = folium.Map(location=[37.566345, 126.977893], zoom_start=17)
folium.Marker([37.566345, 126.977893], popup='시청').add_to(seoulCityMap)
folium.CircleMarker([37.5658048, 126.97514610007],
popup='덕수궁',
radius=50,
color='red',
fill_color='blue').add_to(seoulCityMap)
seoulCityMap.save('../data/seoul_city_map.html')
seoulCityMap
# -
# ### Displaying a map using the choropleth() function included in folium
# - map data file (json)
# - the data file to be visualized
#
# ``` python
# map.choropleth(
#     geo_data = "path to the map data file",
#     data = "the data to visualize",
#     columns = "values to map against the geo data, i.e. the variables to visualize",
#     key_on = "feature.<the key to match against the data file>",
#     fill_color = "the color scheme used for the visualization"
# )
# ```
#
# #### Population distribution map by population count - whole country
# #### Distribution map of regions at risk of population decline
# #### Murder distribution by district in Seoul
# #### Violent crime distribution by district in Seoul
# 인구 데이터 파일 읽어오기 (excel 파일)
koreaPop = pd.read_excel('../data/korea_population.xls', index_col='ID')
koreaPop.head()
koreaPop.index # length=252
import folium
import json
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
geoPath = '../data/skorea_municipalities_geo_simple.json'
geoStr = json.load(open(geoPath, encoding='UTF-8'))
map = folium.Map(location=[36.2002, 127.054], zoom_start=7)
map.choropleth(geo_data = geoStr,
data=koreaPop['인구수합계'],
columns=[koreaPop.index, koreaPop['인구수합계']],
fill_color='YlGnBu',
key_on='feature.id')
map
map = folium.Map(location=[36.2002, 127.054], zoom_start=7) # set the map center
map.choropleth(geo_data = geoStr,
data=koreaPop['소멸위기지역'],
columns=[koreaPop.index, koreaPop['소멸위기지역']],
fill_color='PuRd',
key_on='feature.id')
map
crime = pd.read_csv('../data/crime_in_Seoul.csv', index_col=0, encoding='UTF-8')
crime.head()
crime.index # the index values must also appear in the json file for the mapping to work
geoPath = '../data/skorea_municipalities_geo_simple.json'
geoStr = json.load(open(geoPath, encoding='UTF-8'))
# +
# use the latitude and longitude of central Seoul
map = folium.Map(location=[37.5502, 126.982], zoom_start=11) # set the map center (central Seoul)
# color map shaded by the number of murder ('살인') cases
map.choropleth(geo_data = geoStr,
data=crime['살인'],
columns=[crime.index, crime['살인']],
fill_color='PuRd',
key_on='feature.id')
map
# +
# use the latitude and longitude of central Seoul
map = folium.Map(location=[37.5502, 126.982], zoom_start=11) # set the map center (central Seoul)
# color map shaded by the number of violent-crime ('폭력') cases
map.choropleth(geo_data = geoStr,
data=crime['폭력'],
columns=[crime.index, crime['폭력']],
fill_color='PuRd',
key_on='feature.id')
map
# +
# use the latitude and longitude of central Seoul
map = folium.Map(location=[37.5502, 126.982], zoom_start=11) # set the map center (central Seoul)
# color map shaded by the CCTV counts
map.choropleth(geo_data = geoStr,
data=crime['CCTV'],
columns=[crime.index, crime['CCTV']],
fill_color='PuRd',
key_on='feature.id')
map
| 9,430 |
/KNN_Regressor.ipynb
|
0647f79aa10f5328fcdb35a59e9c8f93c51a3aee
|
[] |
no_license
|
arthpatel573/Machine-Learning-in-R
|
https://github.com/arthpatel573/Machine-Learning-in-R
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.r
| 35,303 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# ### KNN Regressor
# The KNN regressor takes the training data and their labels (continuous values), the test data, and the size of the neighborhood (K). It returns the regressed values for the test data points. The distance between a pair of data points is measured with the Euclidean distance function.
# restrict the repository option to CRAN (avoids mirror-selection prompts and warnings during package installation)
options(repos = getOption("repos")["CRAN"])
library(reshape2)
library(ggplot2)
library(corrplot)
train_data <- read.csv('Data//1A_train.csv')
head(train_data)
# dimension of train data
dim(train_data)
summary(train_data)
ggplot(data=train_data, aes(x=x1, y=y)) +
geom_point() + geom_rug()+ theme_minimal()
# ### Train and test data
# +
# create training and testing subsets:
train.data <- data.frame(x1 = train_data[, 'x1']) # all records, keep only the predictor column x1
train.label <- train_data[, 'y']
test_data <- read.csv('Data//1A_test.csv')
test.data <- data.frame(x1 = test_data[, 'x1']) # all records, keep only the predictor column x1
test.label <- test_data[, 'y']
dim(train.data) # 42 records
dim(test.data) # 42 records
# -
# ### KNN Regressor
# KNN function
knn <- function(train.data, train.label, test.data, K=4, distance = 'euclidean'){
# count number of train samples
train.len <- nrow(train.data)
# count number of test samples
test.len <- nrow(test.data)
# calculate distances between samples
dist <- as.matrix(dist(rbind(test.data, train.data), method= distance))[1:test.len, (test.len+1):(test.len+train.len)]
# initialize testLabel
testLabel <- rep(0.0, test.len)
# for each test sample...
for (i in 1:test.len){
# ...find its K nearest neighbours from training samples...
nn <- as.data.frame(sort(dist[i,], index.return = TRUE))[1:K,2]
#... and calculate the predicted labels by averaging nearest neighbours values
testLabel[i]<- mean(train.label[nn])
}
# return the class labels as output
return (testLabel)
}
# calculate the train and test sum of square errors for K in 1:25
miss <- data.frame('K'=1:25, 'train'=rep(0,25), 'test'=rep(0,25))
for (k in 1:25){
miss[k,'train'] <- sum((knn(train.data, train.label, train.data, K=k) - train.label)^2)
miss[k,'test'] <- sum((knn(train.data, train.label, test.data, K=k) - test.label)^2)
}
# **Plot the training and the testing errors versus 1/K for K=1,.., 25 in one plot.**
# plot the train and test sum of squared errors
miss.m <- melt(miss, id='K') # reshape for visualization
names(miss.m) <- c('K', 'type', 'error')
ggplot(data=miss.m, aes(x=(1/K), y=error, color=type)) + geom_line() +
scale_color_discrete(guide = guide_legend(title = NULL)) + geom_point()+
ggtitle("Sum of square Error vs 1/K")
cat('Optimum value for K is', which.min(miss$test))
# To get the optimal value of k, we need to choose a value such that the model neither underfits nor overfits the test data. Looking at the graph above, the test error is lowest at `1/k = 0.09` **(k ~ 11)**, so that becomes our optimal value for k.
#
# Looking more closely at the graph, both the training and testing errors are highest for small values of 1/k (i.e. large values of k). At those values of k the model is underfitting: it performs poorly on the training data as well as the test data.
#
# As 1/k increases (**k decreases**), the training error decreases steadily, whereas the testing error decreases up to a point and then starts to increase again.
#
# For large values of 1/k (**small values of k**), the training error is at its minimum but the test error is higher, so the model is overfitting the training data and generalizing poorly to the test data.
| 3,974 |
/hw5/congcheng_yan_cy2550_homework5.ipynb
|
f1889457c21805291d8bb3add3c5208400ef6490
|
[] |
no_license
|
cy2550/nlp2020
|
https://github.com/cy2550/nlp2020
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 2,375,012 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="z1C5OET_ij47"
# # COMS W4705 - Homework 5
# ## Image Captioning with Conditioned LSTM Generators
# Yassine Benajiba <[email protected]>
# + [markdown] colab_type="text" id="N47qNqhvij49"
# Follow the instructions in this notebook step-by step. Much of the code is provided, but some sections are marked with **todo**.
#
# Specifically, you will build the following components:
#
# * Create matrices of image representations using an off-the-shelf image encoder.
# * Read and preprocess the image captions.
# * Write a generator function that returns one training instance (input/output sequence pair) at a time.
# * Train an LSTM language generator on the caption data.
# * Write a decoder function for the language generator.
# * Add the image input to write an LSTM caption generator.
# * Implement beam search for the image caption generator.
#
# Please submit a copy of this notebook only, including all outputs. Do not submit any of the data files.
# + [markdown] colab_type="text" id="f1ZasW6eij4_"
# ### Getting Started
#
# First, run the following commands to make sure you have all required packages.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="hVueJjMSij4_" outputId="28bb0a26-24d7-4a9b-a2ce-4714a33d8d5f"
import os
from collections import defaultdict
import numpy as np
import PIL
import string
import random
import time
from matplotlib import pyplot as plt
# %matplotlib inline
from keras import Sequential, Model
from keras.layers import Embedding, LSTM, Dense, Input, Bidirectional, RepeatVector, Concatenate, Activation
from keras.activations import softmax
from keras.utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
from keras.applications.inception_v3 import InceptionV3
from keras.optimizers import Adam
from google.colab import drive
# + [markdown] colab_type="text" id="NOHXYxGpij5D"
# ### Access to the flickr8k data
#
# We will use the flickr8k data set, described here in more detail:
#
# > M. Hodosh, P. Young and J. Hockenmaier (2013) "Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics", Journal of Artificial Intelligence Research, Volume 47, pages 853-899, http://www.jair.org/papers/paper3994.html (to be cited when discussing results obtained with this data)
#
# I have uploaded all the data and model files you'll need to my GDrive and you can access the folder here:
# https://drive.google.com/drive/folders/1i9Iun4h3EN1vSd1A1woez0mXJ9vRjFlT?usp=sharing
#
# Google Drive does not allow to copy a folder, so you'll need to download the whole folder and then upload it again to your own drive. Please assign the name you chose for this folder to the variable `my_data_dir` in the next cell.
#
# N.B.: Usage of this data is limited to this homework assignment. If you would like to experiment with the data set beyond this course, I suggest that you submit your own download request here: https://forms.illinois.edu/sec/1713398
# + colab={} colab_type="code" id="W6mdv4xhij5E"
#this is where you put the name of your data folder.
#Please make sure it's correct because it'll be used in many places later.
my_data_dir="hw5_data"
# + [markdown] colab_type="text" id="DziiSayAij5G"
# ### Mounting your GDrive so you can access the files from Colab
# + colab={"base_uri": "https://localhost:8080/", "height": 121} colab_type="code" id="pXjXSURwij5H" outputId="5a1abcea-cdc2-439a-b253-67986c4b053e"
#running this command will generate a message that will ask you to click on a link where you'll obtain your GDrive auth code.
# #copy paste that code in the text box that will appear below
drive.mount('/content/gdrive')
# + [markdown] colab_type="text" id="vElcj02dij5K"
# Please look at the 'Files' tab on the left side and make sure you can see the 'hw5_data' folder that you have in your GDrive.
# + [markdown] colab_type="text" id="JX9RpMYwij5K"
# ## Part I: Image Encodings (14 pts)
# + [markdown] colab_type="text" id="u4fNh7vfij5L"
# The files Flickr_8k.trainImages.txt Flickr_8k.devImages.txt Flickr_8k.testImages.txt, contain a list of training, development, and test images, respectively. Let's load these lists.
# + colab={} colab_type="code" id="yaeiKz07ij5M"
def load_image_list(filename):
with open(filename,'r') as image_list_f:
return [line.strip() for line in image_list_f]
# + colab={} colab_type="code" id="5sdJ8Vqjij5Q"
train_list = load_image_list('/content/gdrive/My Drive/'+my_data_dir+'/Flickr_8k.trainImages.txt')
dev_list = load_image_list('/content/gdrive/My Drive/'+my_data_dir+'/Flickr_8k.devImages.txt')
test_list = load_image_list('/content/gdrive/My Drive/'+my_data_dir+'/Flickr_8k.testImages.txt')
# + [markdown] colab_type="text" id="KJSfq3_Qij5T"
# Let's see how many images there are
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="AXz-kAdkij5U" outputId="80e42b5e-3fb3-400b-d7fe-610e57ca3425"
len(train_list), len(dev_list), len(test_list)
# + [markdown] colab_type="text" id="5SS0YaPfij5X"
# Each entry is an image filename.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="jjHGVzAhij5X" outputId="971024f1-9bbd-4509-c810-baf095ea6d1a"
dev_list[20]
# + [markdown] colab_type="text" id="RUI2S7XFij5a"
# The images are located in a subdirectory.
# + colab={} colab_type="code" id="3QAnEAOOij5a"
IMG_PATH = '/content/gdrive/My Drive/'+my_data_dir+"/Flickr8k_Dataset"
# + colab={} colab_type="code" id="SUclDvCYwj9d"
#os.path.join(IMG_PATH, dev_list[20])
# + [markdown] colab_type="text" id="OO-z6oxWij5e"
# We can use PIL to open the image and matplotlib to display it.
# + colab={"base_uri": "https://localhost:8080/", "height": 350} colab_type="code" id="Aw3q34Slij5f" outputId="77494b30-431e-4a31-fffc-a3e3411d3d36"
image = PIL.Image.open(os.path.join(IMG_PATH, dev_list[20]))
image
# + [markdown] colab_type="text" id="hXI2FeFUij5j"
# if you can't see the image, try
# + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" id="g0RON7ibij5j" outputId="c3f7b79c-f244-4a01-c000-3d69b4466bc1"
plt.imshow(image)
# + [markdown] colab_type="text" id="1QuLG1A1ij5m"
# We are going to use an off-the-shelf pre-trained image encoder, the Inception V3 network. The model is a version of a convolution neural network for object detection. Here is more detail about this model (not required for this project):
#
# > Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2818-2826).
# > https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.html
#
# The model requires that input images are presented as 299x299 pixels, with 3 color channels (RGB). The individual RGB values need to range between 0 and 1.0. The flickr images do not come in this format, so we will need to resize and rescale them.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="ODU8gWv3ij5m" outputId="7ace3984-fce4-4738-eb2b-a0628dbcf162"
np.asarray(image).shape
# + [markdown] colab_type="text" id="Ls_dK9eLij5o"
# The values range from 0 to 255.
# + colab={"base_uri": "https://localhost:8080/", "height": 867} colab_type="code" id="MtQDUUuHij5o" outputId="fecc3192-f045-4199-fbf1-ccf52f654fc9"
np.asarray(image)
# + [markdown] colab_type="text" id="vVXuGS7Zij5s"
# We can use PIL to resize the image and then divide every value by 255.
# + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" id="1jLVRElyij5t" outputId="55688d22-d0af-4be0-a031-169964629f84"
new_image = np.asarray(image.resize((299,299))) / 255.0
plt.imshow(new_image)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="tOjk94oGij5w" outputId="e4806013-ce09-4f4d-a823-6aeba705a66e"
new_image.shape
# + [markdown] colab_type="text" id="Lrd7em5Mij5y"
# Let's put this all in a function for convenience.
# + colab={} colab_type="code" id="BXBjNhlnij5y"
def get_image(image_name):
image = PIL.Image.open(os.path.join(IMG_PATH, image_name))
return np.asarray(image.resize((299,299))) / 255.0
# + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" id="bSePNKkzij51" outputId="2bcac901-39f3-4317-cb4b-7b70b59dc906"
plt.imshow(get_image(dev_list[25]))
# + [markdown] colab_type="text" id="YuzBJIA2ij53"
# Next, we load the pre-trained Inception model.
# + colab={"base_uri": "https://localhost:8080/", "height": 50} colab_type="code" id="zdLA0YQHij53" outputId="dc7e5d1a-592a-44c0-f326-45fd0c3688b9"
img_model = InceptionV3(weights='imagenet') # This will download the weight files for you and might take a while.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="DlnFuOQ0ij55" outputId="d49a22b2-db3c-4357-c9e0-41602f41ea6b"
img_model.summary() # this is quite a complex model.
# + [markdown] colab_type="text" id="dHYuH9sRij58"
# This is a prediction model,so the output is typically a softmax-activated vector representing 1000 possible object types. Because we are interested in an encoded representation of the image we are just going to use the second-to-last layer as a source of image encodings. Each image will be encoded as a vector of size 2048.
#
# We will use the following hack: hook up the input into a new Keras model and use the penultimate layer of the existing model as output.
# + colab={} colab_type="code" id="XrVBvVYeij58"
new_input = img_model.input
new_output = img_model.layers[-2].output
img_encoder = Model(new_input, new_output) # This is the final Keras image encoder model we will use.
# + [markdown] colab_type="text" id="3TFI9x6Pij5_"
# Let's try the encoder.
# + colab={} colab_type="code" id="5eQEDAtQij5_"
encoded_image = img_encoder.predict(np.array([new_image]))
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="8Kp4OWaHij6C" outputId="a689f06c-1379-41e9-a9a5-b8bebf9f46e7"
encoded_image
# + [markdown] colab_type="text" id="rqbGlDePij6F"
# **TODO:** We will need to create encodings for all images and store them in one big matrix (one for each dataset, train, dev, test).
# We can then save the matrices so that we never have to touch the bulky image data again.
#
# To save memory (but slow the process down a little bit) we will read in the images lazily using a generator. We will encounter generators again later when we train the LSTM. If you are unfamiliar with generators, take a look at this page: https://wiki.python.org/moin/Generators
#
# Write the following generator function, which should return one image at a time.
# `img_list` is a list of image file names (i.e. the train, dev, or test set). The return value should be a numpy array of shape (1,299,299,3).
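# + [markdown]
# (If generators are unfamiliar, here is a tiny, unrelated sketch: a generator function uses `yield` to hand back one item at a time, lazily, each time it is advanced.)
# +
def count_up(n):
    for i in range(n):
        yield i              # produce one value at a time instead of building a full list
list(count_up(3))            # [0, 1, 2]
# -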
# + colab={} colab_type="code" id="PM3MClDBij6F"
def img_generator(img_list):
#l = len(img_list)
for i in img_list:
yield(np.asarray([get_image(i)]))
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="1qoERmuMuyBf" outputId="d31511a3-2464-4ae7-8d17-19b859d582d0"
t1 = new_image
np.asarray([t1]).shape
# + [markdown] colab_type="text" id="eZDwSerXij6H"
# Now we can encode all images (this takes a few minutes).
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="Of2X3JNNij6I" outputId="ec4b42df-a5ef-448e-b899-98c155e9e7ba"
enc_train = img_encoder.predict_generator(img_generator(train_list), steps=len(train_list), verbose=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="28iTVlfwij6K" outputId="0fcc21ce-8682-4d77-942a-1968d4852192"
enc_train[11]
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="egJcNOCUij6M" outputId="c4b6e64c-a8b5-446d-d42f-4d7aede11151"
enc_dev = img_encoder.predict_generator(img_generator(dev_list), steps=len(dev_list), verbose=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="O9_5L8zHij6O" outputId="fd9aa3b7-79e0-4dbc-b5b1-3029069a213f"
enc_test = img_encoder.predict_generator(img_generator(test_list), steps=len(test_list), verbose=1)
# + [markdown] colab_type="text" id="_ly98AQBij6R"
# It's a good idea to save the resulting matrices, so we do not have to run the encoder again.
# + colab={} colab_type="code" id="uv4G5iU7ij6S"
np.save("gdrive/My Drive/"+my_data_dir+"/outputs/encoded_images_train.npy", enc_train)
np.save("gdrive/My Drive/"+my_data_dir+"/outputs/encoded_images_dev.npy", enc_dev)
np.save("gdrive/My Drive/"+my_data_dir+"/outputs/encoded_images_test.npy", enc_test)
# + [markdown] colab_type="text" id="MbyF_7Jhij6U"
# ## Part II Text (Caption) Data Preparation (14 pts)
#
# Next, we need to load the image captions and generate training data for the generator model.
# + [markdown] colab_type="text" id="__-FiQ6Aij6U"
# ### Reading image descriptions
# + [markdown] colab_type="text" id="pWjuvBTlij6U"
# **TODO**: Write the following function that reads the image descriptions from the file `filename` and returns a dictionary in the following format. Take a look at the file `Flickr8k.token.txt` for the format of the input file.
# The keys of the dictionary should be image filenames. Each value should be a list of 5 captions. Each caption should be a list of tokens.
#
# The captions in the file are already tokenized, so you can just split them at white spaces. You should convert each token to lower case. You should then pad each caption with a START token on the left and an END token on the right.
# + colab={} colab_type="code" id="AAf_EPz_ij6V"
def read_image_descriptions(filename):
    image_descriptions = defaultdict(list)
    f = open(filename, 'r')
    for s in f:
        k, c = s.strip().split('#', 1)   # k: image filename, c: "captionIndex<TAB>caption"
        b, c = c.split('\t')             # b: caption index ('0'-'4'), c: the caption text
        r1 = ['<START>'] + [x.lower() for x in c.split()] + ['<END>']  # lowercased tokens padded with START/END
        if b == '0':
            #print (b,r1)
            image_descriptions[k] = [r1]        # first caption starts a fresh list for this image
        else:
            image_descriptions[k].append(r1)    # remaining captions are appended
    return image_descriptions
# + colab={} colab_type="code" id="UG_P8ynuij6X"
descriptions = read_image_descriptions("gdrive/My Drive/"+my_data_dir+"/Flickr8k.token.txt")
# + colab={"base_uri": "https://localhost:8080/", "height": 55} colab_type="code" id="Eg8Q-gd4ij6Z" outputId="682f4080-9a21-46d9-bfeb-21c0c9f95bb5"
print(descriptions[dev_list[0]])
# + [markdown] colab_type="text" id="K859GNtFij6b"
# Running the previous cell should print:
# `[['<START>', 'the', 'boy', 'laying', 'face', 'down', 'on', 'a', 'skateboard', 'is', 'being', 'pushed', 'along', 'the', 'ground', 'by', 'another', 'boy', '.', '<END>'], ['<START>', 'two', 'girls', 'play', 'on', 'a', 'skateboard', 'in', 'a', 'courtyard', '.', '<END>'], ['<START>', 'two', 'people', 'play', 'on', 'a', 'long', 'skateboard', '.', '<END>'], ['<START>', 'two', 'small', 'children', 'in', 'red', 'shirts', 'playing', 'on', 'a', 'skateboard', '.', '<END>'], ['<START>', 'two', 'young', 'children', 'on', 'a', 'skateboard', 'going', 'across', 'a', 'sidewalk', '<END>']]
# `
# + [markdown] colab_type="text" id="Q3PVqig-ij6b"
# ### Creating Word Indices
# + [markdown] colab_type="text" id="iugD7IuNij6c"
# Next, we need to create a lookup table from the **training** data mapping words to integer indices, so we can encode input
# and output sequences using numeric representations. **TODO** create the dictionaries id_to_word and word_to_id, which should map tokens to numeric ids and numeric ids to tokens.
# Hint: Create a set of tokens in the training data first, then convert the set into a list and sort it. This way if you run the code multiple times, you will always get the same dictionaries.
# + colab={} colab_type="code" id="JTl7729q_udI"
s1 = set()
for i in train_list:
for j in descriptions[i]:
s1.update(j)
list_word = sorted(list(s1))
list_id = list(range(len(list_word)))
# + colab={} colab_type="code" id="_LwvVLjDij6c"
id_to_word = dict(zip(list_id, list_word))
# + colab={} colab_type="code" id="J2igB_EOij6f"
word_to_id = dict(zip(list_word, list_id))
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="zHBd7KKHij6k" outputId="5cdd6431-11b0-4680-da04-964ea611c2b9"
word_to_id['dog'] # should print an integer
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="EvwmvgGbij6m" outputId="628c8d83-6760-45a8-b22f-bc6b8907f795"
id_to_word[1985] # should print a token
# + [markdown] colab_type="text" id="JbXuS-HEij6p"
# Note that we do not need an UNK word token because we are generating. The generated text will only contain tokens seen at training time.
# + [markdown] colab_type="text" id="sIXoZj1yij6p"
# ## Part III Basic Decoder Model (24 pts)
#
# For now, we will just train a model for text generation without conditioning the generator on the image input.
#
# There are different ways to do this and our approach will be slightly different from the generator discussed in class.
# + [markdown] colab_type="text" id="UyL0fAuIij6q"
# The core idea here is that the Keras recurrent layers (including LSTM) create an "unrolled" RNN. Each time-step is represented as a different unit, but the weights for these units are shared. We are going to use the constant MAX_LEN to refer to the maximum length of a sequence, which turns out to be 40 words in this data set (including START and END).
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="EnrkG4pxij6q" outputId="88f6cbf1-c771-4aa2-c07c-38ab3a54cce6"
max(len(description) for image_id in train_list for description in descriptions[image_id])
# + [markdown] colab_type="text" id="X0w4rnviij6s"
# In class, we discussed LSTM generators as transducers that map each word in the input sequence to the next word.
# <img src="http://www.cs.columbia.edu/~bauer/4705/lstm1.png" width="480px">
# + [markdown] colab_type="text" id="Xqkt21Z2ij6u"
# Instead, we will use the model to predict one word at a time, given a partial sequence. For example, given the sequence ["START","a"], the model might predict "dog" as the most likely word. We are basically using the LSTM to encode the input sequence up to this point.
# <img src="http://www.cs.columbia.edu/~bauer/4705/lstm2.png" width="480px">
#
# + [markdown] colab_type="text" id="KgfAVpvpij6v"
# To train the model, we will convert each description into a set of input output pairs as follows. For example, consider the sequence
#
# `['<START>', 'a', 'black', 'dog', '.', '<END>']`
#
# We would train the model using the following input/output pairs
#
# | i | input | output |
# |---|------------------------------|--------|
# | 0 |[`START`] | `a` |
# | 1 |[`START`,`a`] | `black`|
# | 2 |[`START`,`a`, `black`] | `dog` |
# | 3 |[`START`,`a`, `black`, `dog`] | `END` |
#
#
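# + [markdown]
# As a small illustration, the sketch below expands one caption into (partial sequence, next word) pairs; it assumes the `word_to_id` dictionary from Part II and uses a local `max_len` stand-in for the `MAX_LEN` constant defined further down.
# +
example = ['<START>', 'a', 'black', 'dog', '.', '<END>']
example_ids = [word_to_id[w] for w in example if w in word_to_id]  # skip any token missing from the vocabulary
max_len = 40
pairs = []
for i in range(1, len(example_ids)):
    partial = example_ids[:i] + [0] * (max_len - i)   # partial input sequence, right-padded with 0
    pairs.append((partial, example_ids[i]))           # target: the id of the next word
len(pairs)                                            # one pair per predicted word
# -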
# + [markdown] colab_type="text" id="PjhajDakij6w"
# Here is the model in Keras. Note that we are using a Bidirectional LSTM, which encodes the sequence from both directions and then predicts the output.
# Also note the `return_sequences=False` parameter, which causes the LSTM to return a single output instead of one output per state.
#
# Note also that we use an embedding layer for the input words. The weights are shared between all units of the unrolled LSTM. We will train these embeddings with the model.
# + colab={"base_uri": "https://localhost:8080/", "height": 286} colab_type="code" id="tnc3iBhUij6w" outputId="e05790b4-ded9-40c3-9bc4-b41fd599f576"
MAX_LEN = 40
EMBEDDING_DIM=300
vocab_size = len(word_to_id)
# Text input
text_input = Input(shape=(MAX_LEN,))
embedding = Embedding(vocab_size, EMBEDDING_DIM, input_length=MAX_LEN)(text_input)
x = Bidirectional(LSTM(512, return_sequences=False))(embedding)
pred = Dense(vocab_size, activation='softmax')(x)
model = Model(inputs=[text_input],outputs=pred)
model.compile(loss='categorical_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
model.summary()
# + [markdown] colab_type="text" id="nDROoPOTij6y"
# The model input is a numpy ndarray (a tensor) of size `(batch_size, MAX_LEN)`. Each row is a vector of size MAX_LEN in which each entry is an integer representing a word (according to the `word_to_id` dictionary). If the input sequence is shorter than MAX_LEN, the remaining entries should be padded with 0.
#
# For each input example, the model returns a softmax activated vector (a probability distribution) over possible output words. The model output is a numpy ndarray of size `(batch_size, vocab_size)`. vocab_size is the number of vocabulary words.
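# + [markdown]
# As a quick sanity check (a small sketch), a dummy batch of padded index sequences fed through the model should come back as one probability distribution per row.
# +
dummy_batch = np.zeros((2, MAX_LEN), dtype=int)    # two all-padding input sequences
dummy_batch[:, 0] = word_to_id['<START>']          # each begins with the START token
model.predict(dummy_batch).shape                   # expected: (2, vocab_size)
# -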
# + [markdown] colab_type="text" id="VwGIhV3Yij6z"
# ### Creating a Generator for the Training Data
# + [markdown] colab_type="text" id="MfLF6pb0ij60"
# **TODO**:
#
# We could simply create one large numpy ndarray for all the training data. Because we have a lot of training instances (each training sentence will produce up to MAX_LEN input/output pairs, one for each word), it is better to produce the training examples *lazily*, i.e. in batches using a generator (recall the image generator in part I).
#
# Write the function `text_training_generator` below, which takes the batch_size as a parameter and returns an `(input, output)` pair. `input` is a `(batch_size, MAX_LEN)` ndarray of partial input sequences; `output` contains the next word predicted for each partial input sequence, encoded as a `(batch_size, vocab_size)` ndarray.
#
# Each time the next() function is called on the generator instance, it should return a new batch of the *training* data. You can use `train_list` as a list of training images. A batch may contain input/output examples extracted from different descriptions or even from different images.
#
# You can just refer back to the variables you have defined above, including `descriptions`, `train_list`, `vocab_size`, etc.
#
# + [markdown] colab_type="text" id="FQEX4YWIij60"
# Hint: To prevent issues with having to reset the generator for each epoch and to make sure the generator can always return exactly `batch_size` input/output pairs in each step, wrap your code into a `while True:` loop. This way, when you reach the end of the training data, you will just continue adding training data from the beginning into the batch.
# + colab={} colab_type="code" id="bT3wfUQLij60"
def text_training_generator(batch_size=128):
#mlen = max(len(description) for image_id in train_list for description in descriptions[image_id])
rin = np.zeros([batch_size, MAX_LEN])
rout = np.zeros([batch_size, vocab_size])
p = 0
# def pru(t):
# if t == '<START>':
# return 'START'
# elif t == '<END>':
# return 'END'
# else:
# return t
while True:
tlist = train_list.copy()
random.shuffle(tlist)
#print(' shuffled here')
for i in tlist:
for des in descriptions[i]:
# if des[-2] in string.punctuation:
# temp = des[:-2] + des[-1:]
# else:
# temp = des
temp = [word_to_id[x] for x in des if x not in string.punctuation]
tlen = len(temp)
for j in range(tlen-1):
rin[p][0:j+1] = np.array(temp[0:j+1])
rout[p][temp[j+1]] = 1
p += 1
if p == batch_size:
yield (rin, rout)
p = 0
rin = np.zeros([batch_size, MAX_LEN])
rout = np.zeros([batch_size, vocab_size])
# + [markdown] colab_type="text" id="jhhtgQGPij63"
# ### Training the Model
# + [markdown] colab_type="text" id="jgOax84Rij63"
# We will use the `fit_generator` method of the model to train the model. fit_generator needs to know how many iterator steps there are per epoch.
#
# Because there are len(train_list) training samples with up to `MAX_LEN` words, an upper bound for the number of total training instances is `len(train_list)*MAX_LEN`. Because the generator returns these in batches, the number of steps is len(train_list) * MAX_LEN // batch_size
# + colab={} colab_type="code" id="33ICSqjmij64"
batch_size = 128
generator = text_training_generator(batch_size)
steps = len(train_list) * MAX_LEN // batch_size
# steps = None
# + colab={"base_uri": "https://localhost:8080/", "height": 436} colab_type="code" id="e_e1uPWlVnYx" outputId="3eaf2982-7218-47a8-e231-9bdada42ea93"
model.fit_generator(generator, steps_per_epoch=steps, verbose=True, epochs=10)
# + colab={"base_uri": "https://localhost:8080/", "height": 381} colab_type="code" id="XOxO5PNRtIbd" outputId="b2b17124-e07f-4f4e-e684-633bd31c9496"
model.fit_generator(generator, steps_per_epoch=steps, verbose=True, epochs=10)
# + colab={"base_uri": "https://localhost:8080/", "height": 104} colab_type="code" id="ss4Z2CqH6OVz" outputId="ff941399-3551-49da-f588-a3b71335fe99"
model.fit_generator(generator, steps_per_epoch=steps, verbose=True, epochs=2)
# + colab={"base_uri": "https://localhost:8080/", "height": 208} colab_type="code" id="cf2LYQeFHJwt" outputId="a257c994-4467-4358-fc12-4113d4545e12"
model.fit_generator(generator, steps_per_epoch=steps, verbose=True, epochs=5)
# + colab={"base_uri": "https://localhost:8080/", "height": 104} colab_type="code" id="vtKyIQVdPGW7" outputId="cd4e2788-e5fc-4904-eeb4-d82a1eed5963"
model.fit_generator(generator, steps_per_epoch=steps, verbose=True, epochs=2)
# + [markdown] colab_type="text" id="YbGii9qGij68"
# Continue to train the model until you reach an accuracy of at least 40%.
# + [markdown] colab_type="text" id="Zvz4V-Piij69"
# ### Greedy Decoder
#
# **TODO** Next, you will write a decoder. The decoder should start with the sequence `["<START>"]`, use the model to predict the most likely word, append the word to the sequence and then continue until `"<END>"` is predicted or the sequence reaches `MAX_LEN` words.
# + colab={} colab_type="code" id="KvePfELpij69"
def decoder():
rlist = np.zeros(MAX_LEN)
rlist[0] = word_to_id['<START>']
i = 1
    ilist = list(range(vocab_size))  # search over the full vocabulary, not just the first MAX_LEN word ids
end = word_to_id['<END>']
e = MAX_LEN
while i < MAX_LEN:
j = model.predict(np.array([rlist]))
k = max(ilist, key=lambda x:j[0][x])
if k == end:
rlist[i] = k
e = i+1
break
else:
rlist[i] = k
i += 1
res = rlist[0:e]
return [id_to_word[x] for x in res]
# + colab={"base_uri": "https://localhost:8080/", "height": 55} colab_type="code" id="BJHk8I-cij6_" outputId="9d55e9fc-d972-4c52-f48a-5665ad2411f4"
print(decoder())
# + [markdown] colab_type="text" id="Oea2nHf3ij7D"
# This simple decoder will of course always predict the same sequence (and it's not necessarily a good one).
#
# Modify the decoder as follows. Instead of choosing the most likely word in each step, sample the next word from the distribution (i.e. the softmax activated output) returned by the model. Take a look at the [np.random.multinomial](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multinomial.html) function to do this.
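# + [markdown]
# For reference, a single draw from `np.random.multinomial` is one way to sample an index from a probability vector; the sketch below uses placeholder probabilities and renormalizes them first, since a softmax output can drift slightly from summing to exactly 1.
# +
p = np.array([0.1, 0.2, 0.7])              # placeholder probabilities
p = p / p.sum()                            # renormalize so the values sum to 1
np.argmax(np.random.multinomial(1, p))     # index of the sampled entry
# -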
# + colab={} colab_type="code" id="zuTY4jYQij7D"
def sample_decoder():
rlist = np.zeros(MAX_LEN)
rlist[0] = word_to_id['<START>']
i = 1
ilist = list(range(MAX_LEN))
end = word_to_id['<END>']
e = MAX_LEN
while i < MAX_LEN:
j = model.predict(np.array([rlist]))[0]
#print(sum(j/sum(j)))
j = j.astype(float)
j /= j.sum()
k = np.argmax(np.random.multinomial(100, j))
rlist[i] = k
if k == end:
e = i + 1
break
else:
i += 1
res = rlist[0:e]
return [id_to_word[x] for x in res]
# + [markdown] colab_type="text" id="qjJdUrY8ij7H"
# You should now be able to see some interesting output that looks a lot like flickr8k image captions -- only that the captions are generated randomly without any image input.
# + colab={"base_uri": "https://localhost:8080/", "height": 191} colab_type="code" id="WpgLgEKJij7H" outputId="3ff350a3-cd7a-44f6-9405-d3f85ef04052"
for i in range(10):
print(sample_decoder())
# + [markdown] colab_type="text" id="bG51rI1uij7J"
# ## Part III - Conditioning on the Image (24 pts)
# + [markdown] colab_type="text" id="dIOb1m6eij7J"
# We will now extend the model to condition the next word not only on the partial sequence, but also on the encoded image.
#
# We will project the 2048-dimensional image encoding to a 300-dimensional hidden layer.
# We then concatenate this vector with each embedded input word, before applying the LSTM.
#
# Here is what the Keras model looks like:
# + colab={"base_uri": "https://localhost:8080/", "height": 437} colab_type="code" id="Yz8xTvqdij7K" outputId="b6e7f55e-7e05-4098-f505-6cb177b74bf1"
MAX_LEN = 40
EMBEDDING_DIM=300
IMAGE_ENC_DIM=300
# Image input
img_input = Input(shape=(2048,))
img_enc = Dense(300, activation="relu") (img_input)
images = RepeatVector(MAX_LEN)(img_enc)
# Text input
text_input = Input(shape=(MAX_LEN,))
embedding = Embedding(vocab_size, EMBEDDING_DIM, input_length=MAX_LEN)(text_input)
x = Concatenate()([images,embedding])
y = Bidirectional(LSTM(256, return_sequences=False))(x)
pred = Dense(vocab_size, activation='softmax')(y)
model = Model(inputs=[img_input,text_input],outputs=pred)
model.compile(loss='categorical_crossentropy', optimizer="RMSProp", metrics=['accuracy'])
model.summary()
# + [markdown] colab_type="text" id="TLQFx1d3ij7N"
# The model now takes two inputs:
#
# 1. a `(batch_size, 2048)` ndarray of image encodings.
# 2. a `(batch_size, MAX_LEN)` ndarray of partial input sequences.
#
# And one output as before: a `(batch_size, vocab_size)` ndarray of predicted word distributions.
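# + [markdown]
# As before, a quick sanity check (a small sketch): one dummy image vector together with one dummy padded sequence should yield a single distribution over the vocabulary.
# +
dummy_img = np.zeros((1, 2048))                      # stand-in for an InceptionV3 encoding
dummy_seq = np.zeros((1, MAX_LEN), dtype=int)
dummy_seq[0, 0] = word_to_id['<START>']
model.predict([dummy_img, dummy_seq]).shape          # expected: (1, vocab_size)
# -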
# + [markdown] colab_type="text" id="Joc4EqfFij7O"
# **TODO**: Modify the training data generator to include the image with each input/output pair.
# Your generator needs to return an object of the following format: `([image_inputs, text_inputs], next_words)`, where each element is an ndarray of the type described above.
#
# You need to find the image encoding that belongs to each image. You can use the fact that the index of the image in `train_list` is the same as the index in enc_train and enc_dev.
#
# If you have previously saved the image encodings, you can load them from disk:
# + colab={} colab_type="code" id="hdV8qBNHij7O"
enc_train = np.load("gdrive/My Drive/"+my_data_dir+"/outputs/encoded_images_train.npy")
enc_dev = np.load("gdrive/My Drive/"+my_data_dir+"/outputs/encoded_images_dev.npy")
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="80gddWy0OU5B" outputId="a521fc97-9b0a-4205-8e1f-cd1d79ccc384"
np.shape(enc_train[0])
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="pwRVzHWdO0U6" outputId="30d0a6ea-8864-4626-9886-f64b73cbbd10"
np.shape(train_list)
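# + [markdown]
# A small illustration: because the rows of `enc_train` follow the order of `train_list`, the encoding for any training image can be looked up by its position in that list.
# +
example_name = train_list[0]
enc_train[train_list.index(example_name)].shape      # (2048,), the encoding for that image
# -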
# + colab={} colab_type="code" id="JQ-QoAaTij7Q"
def training_generator(batch_size=128):
rin = np.zeros([batch_size, MAX_LEN])
rout = np.zeros([batch_size, vocab_size])
rimage = np.zeros([batch_size, 2048])
p = 0
while True:
tlist = train_list.copy()
#random.shuffle(tlist)
#print(' shuffled here')
for kk, i in enumerate(tlist):
image = enc_train[kk]
for des in descriptions[i]:
# if des[-2] in string.punctuation:
# temp = des[:-2] + des[-1:]
# else:
# temp = des
temp = [word_to_id[x] for x in des if x not in string.punctuation]
tlen = len(temp)
for j in range(tlen-1):
rin[p][0:j+1] = np.array(temp[0:j+1])
rout[p][temp[j+1]] = 1
rimage[p] = image
p += 1
if p == batch_size:
yield ([rimage, rin], rout)
p = 0
rin = np.zeros([batch_size, MAX_LEN])
rout = np.zeros([batch_size, vocab_size])
rimage = np.zeros([batch_size, 2048])
# + [markdown] colab_type="text" id="lIBPTg1sij7U"
# You should now be able to train the model as before:
# + colab={} colab_type="code" id="fC1EKi1nij7U"
batch_size = 128
generator = training_generator(batch_size)
steps = len(train_list) * MAX_LEN // batch_size
# + colab={"base_uri": "https://localhost:8080/", "height": 759} colab_type="code" id="oMtC8tLYij7X" outputId="c8df3dfe-0a77-464c-cf1e-4c08c7b16818"
model.fit_generator(generator, steps_per_epoch=steps, verbose=True, epochs=20)
# + [markdown] colab_type="text" id="mjE7LW0lij7Z"
# Again, continue to train the model until you hit an accuracy of about 40%. This may take a while. I strongly encourage you to experiment with cloud GPUs using the GCP voucher for the class.
# + [markdown] colab_type="text" id="m0ZxK2B-ij7Z"
# You can save your model weights to disk and continue at a later time.
# + colab={} colab_type="code" id="m6Ifa5jxij7Z"
model.save_weights("gdrive/My Drive/"+my_data_dir+"/outputs/model.h5")
# + [markdown] colab_type="text" id="gpM_DG7_ij7c"
# to load the model:
# + colab={} colab_type="code" id="34T-grq8ij7c"
model.load_weights("gdrive/My Drive/"+my_data_dir+"/outputs/model.h5")
# + [markdown] colab_type="text" id="9Nu0YA2lij7g"
# **TODO**: Now we are ready to actually generate image captions using the trained model. Modify the simple greedy decoder you wrote for the text-only generator, so that it takes an encoded image (a vector of length 2048) as input, and returns a sequence.
# + colab={} colab_type="code" id="-wK0azRFij7g"
def image_decoder(enc_image):
rlist = np.zeros(MAX_LEN)
rlist[0] = word_to_id['<START>']
i = 1
ilist = list(range(MAX_LEN))
end = word_to_id['<END>']
e = MAX_LEN
while i < MAX_LEN:
j = model.predict([np.array([enc_image]), np.array([rlist])])[0]
#print(sum(j/sum(j)))
# j = j.astype(float)
# j /= j.sum()
k = np.argmax(j)
rlist[i] = k
if k == end:
e = i + 1
break
else:
i += 1
res = rlist[0:e]
return [id_to_word[x] for x in res]
# + [markdown] colab_type="text" id="U5LH770sij7i"
# As a sanity check, you should now be able to reproduce (approximately) captions for the training images.
# + colab={"base_uri": "https://localhost:8080/", "height": 538} colab_type="code" id="5XDOVTCSij7i" outputId="f38c8098-238d-47f7-8395-b04a1ee14e78"
plt.imshow(get_image(train_list[0]))
image_decoder(enc_train[0])
# + [markdown] colab_type="text" id="icKCsjpNij7k"
# You should also be able to apply the model to dev images and get reasonable captions:
# + colab={"base_uri": "https://localhost:8080/", "height": 588} colab_type="code" id="6S-rGqLIij7k" outputId="0ca79ecd-f81d-4a20-ef25-51de45bf9802"
plt.imshow(get_image(dev_list[1]))
image_decoder(enc_dev[1])
# + [markdown] colab_type="text" id="AXfKv0pFij7o"
# For this assignment we will not perform a formal evaluation.
# + [markdown] colab_type="text" id="DQnLnFMBij7p"
# Feel free to experiment with the parameters of the model or continue training the model. At some point, the model will overfit and will no longer produce good descriptions for the dev images.
# + [markdown] colab_type="text" id="FiQE8dnVij7p"
# ## Part IV - Beam Search Decoder (24 pts)
# + [markdown] colab_type="text" id="4kSJH9fWij7q"
# **TODO** Modify the simple greedy decoder for the caption generator to use beam search.
# Instead of always selecting the most probable word, use a *beam*, which contains the n highest-scoring sequences so far and their total probability (i.e. the product of all word probabilities). I recommend that you use a list of `(probability, sequence)` tuples. After each time-step, prune the list to include only the n most probable sequences.
#
# Then, for each sequence, compute the n most likely successor words. Append the word to produce n new sequences and compute their score. This way, you create a new list of n*n candidates.
#
# Prune this list to the best n as before and continue until `MAX_LEN` words have been generated.
#
# Note that you cannot use the occurence of the `"<END>"` tag to terminate generation, because the tag may occur in different positions for different entries in the beam.
#
# Once `MAX_LEN` has been reached, return the most likely sequence out of the current n.
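# + [markdown]
# One practical note: multiplying many word probabilities underflows quickly for long sequences, so beam search is often implemented with summed log-probabilities instead. Below is a minimal sketch of the pruning step under that convention, using made-up `(log_prob, sequence)` entries.
# +
import math
candidates = [(-2.3, [1, 5]), (-0.7, [1, 8]), (-4.1, [1, 2])]    # hypothetical beam entries
n = 2
beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:n]  # keep the n highest log-probabilities
extended_score = beam[0][0] + math.log(0.25)                     # extending a sequence adds log P(next word)
beam, extended_score
# -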
# + colab={} colab_type="code" id="g7GwwxqNij7q"
#with distribution
def img_beam_decoder(n, image_enc):
rlist = np.zeros(MAX_LEN)
rlist[0] = word_to_id['<START>']
res = [(0,rlist), (0,rlist)]*n
res[0] = (1,rlist)
i = 1
ilist = list(range(vocab_size))
e = MAX_LEN
while i < MAX_LEN:
c = []
for q in res:
j = model.predict([np.array([image_enc]), np.array([q[1]])])[0]
#print(sum(j/sum(j)))
j = j.astype(float)
j /= j.sum()
list3 = np.random.multinomial(100, j)
list1 = sorted(ilist, key = lambda x: list3[x],reverse=True)[0:n]
for k in list1:
w = q[1].copy()
w[i] = k
c.append((q[0]*list3[k]/100, w))
c.sort(key = lambda x: x[0],reverse = True)
#print('c[0]', [x[0] for x in c])
res = c[0:n]
i += 1
#print('res[0]', [x[0] for x in res])
res_i = max(res, key = lambda x: x[0])[1]
return [id_to_word[x] for x in res_i]
# + colab={} colab_type="code" id="pLoxfTkw7Enj"
#with simple greedy
def img_beam_decoder(n, image_enc):
rlist = np.zeros(MAX_LEN)
rlist[0] = word_to_id['<START>']
res = [(0,rlist), (0,rlist)]*n
res[0] = (1,rlist)
i = 1
ilist = list(range(vocab_size))
e = MAX_LEN
while i < MAX_LEN:
c = []
for q in res:
j = model.predict([np.array([image_enc]), np.array([q[1]])])[0]
#print(sum(j/sum(j)))
#j = j.astype(float)
list1 = sorted(ilist, key = lambda x: j[x],reverse=True)[0:n]
# print('list1', list1)
for k in list1:
w = q[1].copy()
w[i] = k
c.append((q[0]*j[k], w))
c.sort(key = lambda x: x[0],reverse = True)
# print('c', c)
res = c[0:n]
i += 1
# print('res', res)
res_i = max(res, key = lambda x: x[0])[1]
return [id_to_word[x] for x in res_i]
# + colab={"base_uri": "https://localhost:8080/", "height": 306} colab_type="code" id="S7bfC66sij7r" outputId="b82a1881-37f3-4a10-a2ea-5fc2533aa778"
plt.imshow(get_image(dev_list[0]))
print(img_beam_decoder(3, enc_dev[0]))
# + [markdown] colab_type="text" id="03pQ3GQBij7t"
# **TODO** Finally, before you submit this assignment, please show 5 development images, each with 1) their greedy output, 2) beam search at n=3 3) beam search at n=5.
# + colab={} colab_type="code" id="3dojIw347cQj"
ilist = list(range(len(dev_list)))
random.shuffle(ilist)
def myprint(i):
j = ilist[i]
plt.imshow(get_image(dev_list[j]))
print(image_decoder(enc_dev[j]))
print(img_beam_decoder(3, enc_dev[j]))
print(img_beam_decoder(5, enc_dev[j]))
# + colab={"base_uri": "https://localhost:8080/", "height": 339} colab_type="code" id="FtRdvpLmXs4A" outputId="06510d7d-173f-422d-e7e5-38aa72130ffe"
myprint(1)
# + colab={"base_uri": "https://localhost:8080/", "height": 339} colab_type="code" id="t4Ra_UymXtRy" outputId="61b5b261-c7a6-43a2-d103-7ca206fa47f6"
myprint(2)
# + colab={"base_uri": "https://localhost:8080/", "height": 339} colab_type="code" id="AhBBruCoXtXo" outputId="148ba5ac-7d23-4d2d-afd5-56fe5f7a029b"
myprint(3)
# + colab={"base_uri": "https://localhost:8080/", "height": 339} colab_type="code" id="M-Qe7Gv6Xtej" outputId="821869e6-5ad2-4873-a5ad-4a9b51e09320"
myprint(4)
# + colab={"base_uri": "https://localhost:8080/", "height": 339} colab_type="code" id="7Oj3cjyxXtmj" outputId="50d6c7e4-a86a-48c4-fbc0-48178cb39441"
myprint(5)
| 40,659 |
/proj/AgungMisc/BMKGlocs_v_RedPyCat.ipynb
|
0ab7e78db85f30dd12a0471cbf0587fcdcd2593a
|
[] |
no_license
|
jwellik/pywellik
|
https://github.com/jwellik/pywellik
| 0 | 0 | null | null | null | null |
Jupyter Notebook
| false | false |
.py
| 4,390 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Compare BMKG location catalog to RedPy catalog
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
bmkg = pd.read_csv('../AgungSummaryPaper/catalog4jay.gmt',
header = None,
index_col=3, parse_dates=[3],
names=['lat','lon','depth','datetime','mag'])
# +
#bmkg
# +
redpycatalog = '/Users/jjw2/REDPy_GRLB/agung_grlB1/catalog.txt'
redpyC = pd.read_csv(redpycatalog,
delimiter='\t',
index_col=1, parse_dates=[1])
redpyOcatalog = '/Users/jjw2/REDPy_GRLB/agung_grlB1/orphancatalog_original.txt'
redpyO = pd.read_csv(redpyOcatalog,
header=None,
index_col=0, parse_dates=[0],
names=['datetime'])
# -
# original redpy orphan catalog (lots of orphans)
og_redpyOcatalog = ''
og_redpyO = pd.read_csv(redpyOcatalog,
header=None,
index_col=0, parse_dates=[0],
names=['datetime'])
# +
#for idx, row in redpyC.sort_index(axis=0).iterrows():
# print('{}, {}'.format(idx, row.cnum))
# + active=""
# # compare bmkg to redpy
# for row, i in bmkg.iterrows():
# #print(row)
# #print('')
# #print(i)
# tdC = redpyC.index - row
# tdCmin = tdC.min()
#
# tdO = redpyO.index - row
# tdOmin = tdO.min()
#
# print(tdCmin)
# print(tdOmin)
# print('')
# +
# compare redpy orphans to redpy group
# -
cnum = 1
cluster_events = redpyC[redpyC.cnum==cnum]
for row, i in cluster_events.iterrows():
    #tdC = redpyC.index - row
    #tdCmin = tdC.min()
    tdO = og_redpyO.index - row
    tdOmin = tdO.min()
    print(tdOmin)
    print('')
| 1,971 |
/03_Prelab_UNIX-II/UNIX-II_Prelab.ipynb
|
4fe430a05a611b6f0ed7e7183a461aa83835fa24
|
[
"BSD-3-Clause"
] |
permissive
|
boonepeter/GCB535
|
https://github.com/boonepeter/GCB535
| 1 | 0 | null | 2019-02-08T20:57:13 | 2019-02-06T23:13:28 | null |
Jupyter Notebook
| false | false |
.py
| 29,685 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Unix II - Prelab
# # Table of Contents
# 1. More on data manipulation commands (cut)
# 2. How to search through files (grep)
# 3. A gentle introduction to regular expressions (sed)
# 4. Tools for compressing files and directories
# 5. Accessing the internet through the command line
# 6. Extra footnotes
# 7. Questions
# ## 1. More on data manipulation commands
#
# As we saw in the assignment from the first Unix module, data files can be more complicated than just a list of items in a single column, and so we often want to be able to manipulate these more complex files. There are many useful Unix tools for doing so, and an important and useful one of these is called **cut**. This command is used to extract specific columns from a data file, and in the Unix I assignment, was how we manipulated the full Pokémon data file into the separate data files containing the Pokémon names, main types, secondary types, and combination of types. Now we will learn how to use the cut command. First, move to the directory called 'move_here', as usual:
#
# ```bash
# \$ cd move_here/
# ```
#
# Now list the files in this directory:
#
# ```bash
# \$ ls
# ```
#
# There are a few directories in here. First, we are going to look at the poke_data directory, which contains similar data to what we used for the unix module I assignment. Now move into that directory and see what's in there
#
# ```bash
# \$ cd poke_data/
#
# \$ ls
# ```
#
# There are three files: orig_151_pokemon.csv, orig_151_pokemon.txt, and orig_151_pokemon.txt.2. Let's first look at the orig_151_pokemon.txt file, which is the same as the one we used in the first Unix assignment, and learn how to generate the data files that we used. Recall that we had four data files for the assignment: pokemon_names.txt, pokemon_main_types.txt, pokemon_secondary_types.txt, and pokemon_both_types.txt, corresponding to the first column, the second column, the third column, and the second and third columns together. You may also have noticed that these files did not contain the entries in the first line of the input file (the header describing what each column is). Let's start with a quick aside, by learning how we accomplished that.
#
# The 'tail' command has a flag, '-n', that normally specifies how many lines from the end of the file you want to see. However, if you prefix the number with a '+', it instead means "start output at that line number", which gives us a very useful construction for skipping the header line. Try running this command:
#
# ```bash
# \$ tail -n +2 orig_151_pokemon.txt | head
# ```
#
# See how the header is now gone! Now, let's learn how we generated the assignment data files. As mentioned, we used the **cut** command to accomplish this, which is pretty simple: you tell it which columns you want to include in your output using the "-f" flag. Try running the following commands to generate the four data files:
#
# ```bash
# \$ tail -n +2 orig_151_pokemon.txt | cut -f1 > pokemon_names.txt
#
# \$ tail -n +2 orig_151_pokemon.txt | cut -f2 > pokemon_main_types.txt
#
# \$ tail -n +2 orig_151_pokemon.txt | cut -f3 > pokemon_secondary_types.txt
#
# \$ tail -n +2 orig_151_pokemon.txt | cut -f2,3 > pokemon_both_types.txt
# ```
#
# This is a fairly straightforward command. One thing to notice is that if you want to include more than one column, you can specify which columns you want by separating them with commas. You can also include a range of columns by using a flag such as "-f2-4", which would include the second, third, and fourth column.
#
# Now that we learned the simplest use of cut, let's look at the other data files in this directory. Use the 'head' command to look at these files, remembering that head can take in as many files as you'd like to look at:
#
# ```bash
# \$ head orig_151_pokemon.csv orig_151_pokemon.txt orig_151_pokemon.txt.2
# ```
#
# So we can see that these look very similar, with one difference: where there appear to be blank spaces between the columns in the 'orig_151_pokemon.txt' file as well as in the 'orig_151_pokemon.txt.2' file, there are commas in the orig_151_pokemon.csv file. This raises an issue that you will often run into in bioinformatics, which is that data files can be in many different formats: some people prefer to use commas to separate columns, some use tabs, and some just use spaces. So, it is important to know both how to tell what kind of file you're looking at, and how to deal with it.
#
# Let's learn how to figure out exactly how your data files are organized. Typically, the first thing you want to do when you have a new data file is to use the 'head' command to look at it, as we have done here. If there are commas, you can see them right away, and so you don't need any further processing. However, if there aren't commas, you can run into the issue we have here, which is that we have two text files with spaces between the columns, but they seem to be slightly different. This may not seem super important at the moment, but many programs that read in data need to know exactly what the delimiting character is, or they will not work correctly. So, how do we figure out how the two .txt files are different?
#
# To do this, we will use a very powerful flag for the 'cat' program, which is '-A'. Try running the following commands:
#
# ```bash
# \$ cat -A orig_151_pokemon.txt | head
#
# \$ cat -A orig_151_pokemon.txt.2 | head
# ```
#
# First, notice that every line now has a '$' character at its end. This doesn't refer to the command prompt, as we've been using it here, but rather represents what is called a new line character, meaning that the line ends at that point. Next, notice that the orig_151_pokemon.txt file has '^I' between each column. When you call the cat command with this flag, '^I' represents a tab character (which is **not** the same as four spaces). This tells us that this file is tab separated. On the other hand, when we look at the orig_151_pokemon.txt.2 file, we just see single spaces between the columns, so this one is delimited by spaces.
#
# Now, what do we do with this information? Let's try to get the first column from each file, as we just did to get the Pokémon names. Try these commands:
#
# ```bash
# \$ cut -f1 orig_151_pokemon.txt | head
#
# \$ cut -f1 orig_151_pokemon.txt.2 | head
#
# \$ cut -f1 orig_151_pokemon.csv | head
# ```
#
# Notice that only the first one actually did what we wanted. The issue here is that cut does not know what we now know, which is how the columns are separated. The reason it works for the tab separated file is that it assumes that the columns are tab separated, by default. To fix this issue, we can use the '-d' flag, which lets you tell cut how the columns are separated. Try running these commands:
#
# ```bash
# \$ cut -f1 -d',' orig_151_pokemon.csv | head
#
# \$ cut -f1 -d' ' orig_151_pokemon.txt.2 | head
# ```
#
# Now this works! We have provided the space-separated file to illustrate an important point, which is that it can be very dangerous to use spaces to separate a data file. Any Pokémon fans out there may remember Mr. Mime, which of course has a space in its name. What happens to Mr. Mime when we try to pull out the name column from the space separated data file? See for yourself (it should be right between Starmie and Scyther)!
#
# ```bash
# \$ cut -f1 -d' ' orig_151_pokemon.txt.2 | less
# ```
#
# See how there is just a 'Mr.' where the name should be? The 'cut' command doesn't know anything about Pokémon, so it just split that row of the file right between 'Mr.' and 'Mime', just like we told it to. This is an easy trap to fall into when doing your own data analysis, since if you just look at the top few lines using head, you might not notice such an issue. That's why it's always better to use tabs or commas to separate columns in a data file!
# ## 2. How to search through files
#
# So we have now equipped you with several useful tools that can be used to manipulate your data files. However, we have not yet shown you something really useful: how to search through files to find the specific lines you're interested in. This can be done using an indispensable tool that can make your life much easier, called **grep**. There isn't a really intuitive reason why it's called this: it's short for **g**lobally search a **r**egular **e**xpression and **p**rint, which isn't very catchy. However, the tool itself is incredibly powerful and useful. Let's start by saying we'd like to find a specific Pokémon in our file, such as Dugtrio. We could look through the file ourselves and find where it is, or we could use grep. Try running this:
#
# ```bash
# \$ grep "Dugtrio" orig_151_pokemon.txt
# ```
#
# Instead of having to look ourselves, this returns the line we're interested in. There are a ton of different flags for grep, and as usual, the best way to learn them is to read the man page and try to use them to accomplish a specific task you're interested in. Let's learn a few of these flags.
#
# Notice how we capitalized the 'D' in Dugtrio in the above command. If we don't do this, it won't return anything, as you can see if you run this:
#
# ```bash
# \$ grep "dugtrio" orig_151_pokemon.txt
# ```
#
# Let's say we're lazy and don't want to hit the shift key, or we just don't care about capitalization. The '-i' flag tells grep to ignore capitalization, allowing us to just match by letter. All of these will find the line we want. See for yourself:
#
# ```bash
# \$ grep -i "dugtrio" orig_151_pokemon.txt
#
# \$ grep -i "DuGtRiO" orig_151_pokemon.txt
#
# \$ grep -i "DUGTRIO" orig_151_pokemon.txt
# ```
#
# Now let's say we want to know where in the list of Pokemon Dugtrio is. The '-n', or '--line-number' flag allows us to do this:
#
# ```bash
# \$ grep -n "Dugtrio" orig_151_pokemon.txt
#
# \$ grep --line-number "Dugtrio" orig_151_pokemon.txt
# ```
#
# This is the first time we have seen two different flags doing the same thing, but this is actually a common construction. A flag, given by a single letter prefixed by a single dash, is often equivalent to a longer, more descriptive flag prefixed by two dashes.
#
# Now let's give grep a bit more of a workout. Let's say, instead of looking for a specific Pokémon, we just want to find all the other ground-type Pokémon in the file. The grep command can handle this with ease, as you can see if you run this:
#
# ```bash
# \$ grep "Ground" orig_151_pokemon.txt
# ```
#
# And of course, the flags we already tried will still work here:
#
# ```bash
# \$ grep -i "ground" orig_151_pokemon.txt
#
# \$ grep -n "Ground" orig_151_pokemon.txt
# ```
#
# What if we'd like to count all the ground-type Pokémon? We saw previously how to use pipes and the 'wc' command to count, so let's see how this can work with grep. Run this:
#
# ```bash
# \$ grep "Ground" orig_151_pokemon.txt | wc -l
# ```
#
# However, there is an even easier method we can use to do this, that doesn't involve piping to another command. Instead, we can use the '-c' or '--count' flag to get the same output. Run this:
#
# ```bash
# \$ grep -c "Ground" orig_151_pokemon.txt
# ```
#
# Another useful flag lets you pick out all the lines that do not match the pattern. For example, if we want all the non-ground Pokémon, we can use the -v flag. Run this if you want (but it will spit out many Pokémon!):
#
# ```bash
# \$ grep -v "Ground" orig_151_pokemon.txt
# ```
# ## 3. A gentle introduction to regular expressions
# So far, we have just been searching through files by looking for exact matches to the pattern we're interested in. However, this has barely scratched the surface of what grep is capable of. Instead of looking for these exact matches, we can also use grep to find lines that fit certain patterns. If you recall from the prelab of the first Unix module, we used the \* (wild card) character to match all the files ending in a certain pattern (such as \*.txt vs \*.text). This character can also be used in grep, and is a member of a very expressive system of pattern matching called regular expressions. These can get quite complicated, and we could easily spend several classes showing you all the different things these are capable of. We don't have time for that, so let's start by seeing some simple examples. Let's say we want to pull out all the Nidoran-related Pokémon, which are split by gender, and each gender evolves into two different forms, for a total of 6 Pokémon we want to pull out. These all start with 'Nido', so let's see how we can use the wild card \* character to find these:
#
# ```bash
# \$ grep "Nido*" orig_151_pokemon.txt
# ```
#
# See how we now get all 6 of these Pokémon from this simple command. Informally, the \* character here tells grep to find all the lines that include "Nido" without caring about what comes after that. In technical terms, \* means that grep should match the previous character 0 or more times, so "Nido*" really matches "Nid" followed by any number of 'o' characters; and since grep already matches anywhere in a line, a plain search for "Nido" would have found these six as well.
#
# This is just one of the many special characters that can be used as part of regular expressions. We will dive a little more deeply into grep for the in-class assignment, but a full exploration of regular expressions is out of the scope of this course. If you would like to learn more, here are some useful websites:
#
# Intro: http://zytrax.com/tech/web/regex.htm
#
# Examples: http://www.regular-expressions.info/examples.html
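#
# To give you a small taste of what else is possible, here are two more patterns to try (these assume, as the cut examples above suggest, that the Pokémon name is the first thing on each line of orig_151_pokemon.txt). The '^' character anchors the match to the start of the line, and the '.' character matches any single character:
#
# ```bash
# \$ grep "^Nido" orig_151_pokemon.txt
#
# \$ grep "Pidge." orig_151_pokemon.txt
# ```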
# ## 4. Tools for compressing files and directories
#
# So far, we have only been working with relatively small text files that are easy to look at and process. However, in the realm of bioinformatics (and any computational field), most data will not be so small and easy to handle. So, it is very useful to know how to use tools for compression, the process of encoding data or information using fewer bits (which means that the files will take up less space on the computer). We will now go through a few of the different options for performing compression on Unix systems. First, move to the folder called "compression_data" and look at the files:
#
# ```bash
# \$ cd ../compression_data/
#
# \$ ls
# ```
#
# You may recognize some of these file extensions, including ".zip", which you have probably come across in your day-to-day computer usage; these are supported by programs like WinZip and the Mac OS finder. These files all contain the same data compressed using different programs. First let's compare their sizes:
#
# ```
# \$ ls -l
# ```
#
# As an aside, let's learn how to sort this list by size. The sort command accepts a flag, "-k", which lets you define a specific column that you'd like to sort by, and how you want to do it. Here, we want to sort by the 5th column (the size), and do it numerically, so we can use this command:
#
# ```bash
# \$ ls -l | sort -k5n
# ```
#
# We can also sort using the -S flag for 'ls':
#
# ```bash
# \$ ls -lS
# ```
#
# Using either approach, we can see that the file ending in ".gz" is the smallest, followed closely by ".zip" and "tar.gz", with ".tar" trailing far behind. This is because .gz, corresponding to the gzip program, and .zip, corresponding to the zip program, are actual compression tools, while .tar, corresponding to the tar program, is actually an archiving tool: it doesn't compress the data directly, but it provides a way to collect many files (and their metadata) into a single file. It is most useful for bundling multiple files together, which can then be compressed, as we will see in the in-class assignment.
#
# Now let's learn how to compress and decompress these files, starting with .zip files. Although you can use interactive programs like WinZip to compress and decompress these files, there are also useful command line tools that do the trick. For .zip files, they are very easy to remember: the 'zip' command compresses files, and the 'unzip' command decompresses files. Let's try decompressing:
#
# ```bash
# \$ unzip meow.zip
#
# \$ ls
# ```
#
# You should now see the 'meow.txt' file in this directory, and you can peruse it at your leisure. Let's make a new subdirectory so that we can practice compressing files without overwriting what we already have:
#
# ```bash
# \$ mkdir compression_practice
#
# \$ mv meow.txt compression_practice/
#
# \$ cd compression_practice
# ```
#
# Now we have the original text file in this folder, so let's try zipping it back up. The syntax of the zip command is that you first give it the name of your desired zip file, followed by a list of the files you want to compress. In our case, we only have one file to compress, so the command is:
#
# ```bash
# \$ zip meow.zip meow.txt
# ```
#
# Let's compare this to the file I provided:
#
# ```bash
# \$ ls -l meow.zip ../meow.zip
# ```
#
# They should be the same size! Now let's move on to learning about gzip, which gives the file extension ".gz". Similarly to zip, gzip has a matching command for decompression called gunzip. Let's try it out:
#
# ```bash
# \$ cd ..
#
# \$ gunzip meow.gz
#
# \$ ls -l
# ```
#
# Notice how this time, instead of creating a separate file called 'meow.txt', gunzip replaced the 'meow.gz' file with a file simply called 'meow'. This file is identical to the meow.txt file, but it illustrates the default behavior of gzip/gunzip: the output name is just the input name with '.gz' stripped off (so the original '.txt' extension is not restored), and the input file is replaced rather than kept alongside the output. Preserving original file names and extensions is where 'tar' comes in, as we'll see in a second. First, let's recompress the 'meow' file and then learn how to send the gunzip output to a file of our choice without overwriting the compressed file:
#
# ```bash
# \$ gzip meow
#
# \$ ls
# ```
#
# Now we have meow.gz back. Let's try extracting it into the practice folder and compare it to the file we extracted from zip. The way to do this is to use the "-c" flag to gunzip, which tells it to decompress to standard output (i.e. to print it in the terminal), and then redirect that output into our file of interest (recall that redirection is done using ">"):
#
# ```bash
# \$ gunzip -c meow.gz > compression_practice/meow_gzip.txt
# ```
#
# Now let's check that this is the same as the file we got from zip:
#
# ```bash
# \$ diff compression_practice/meow.txt compression_practice/meow_gzip.txt
# ```
#
# It worked! Finally, let's learn how to use the tar program. As I mentioned above, the tar command on its own is just a way to combine the metadata from several files, creating a .tar file, such as 'meow.tar', which we saw was actually bigger than the original input file. First, let's learn how to recover the original file from that one. The most important flags to remember for the tar command are "-c", which **c**reates a new archive, and "-x", which e**x**tracts files from an archive. We also need to use "-f", which tells tar the name of the archive **f**ile to read from or write to. Finally, the "-v" flag, which tells it to be **v**erbose, is useful for seeing exactly what is being extracted or compressed. Like the flags we've seen before, these may seem very hard to remember, but as you continually use these commands, they will start to become second nature.
#
# Let's start by getting some practice and extracting the meow.tar file (notice how we can combine the letters for all three flags into a single "-" argument and have it be correctly interpreted):
#
# ```bash
# \$ tar -xvf meow.tar
#
# \$ ls
# ```
#
# Notice how the output tells you that meow.txt was the file that was extracted, and that instead of removing the meow.tar file, this program writes out the original files again. Let's move that file to the compression_practice folder and rename it so that we can practice re-compressing it:
#
# ```bash
# \$ mv meow.txt compression_practice/meow_for_tar.txt
#
# \$ cd compression_practice/
# ```
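#
# As a quick aside before we re-create the archive: if you ever just want to see what is inside a tar archive without extracting it, the '-t' flag lis**t**s its contents. For example, pointing it at the copy of meow.tar still sitting one directory up:
#
# ```bash
# \$ tar -tf ../meow.tar
# ```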
#
# Creating an archive with tar follows a similar syntax to zip: we give it the three flags we used before, except with "-c" replacing "-x", followed by the name of the archive file you want to create and then the list of files to put into it (note that we are only using one file, for now):
#
# ```bash
# \$ tar -cvf meow.tar meow_for_tar.txt
#
# \$ ls
# ```
#
# See how we now have the meow.tar file back, as well as the meow_for_tar.txt file. Now, we have one final file type to learn about, which is the meow.tar.gz file. You can probably figure out from the file extension that this is a combination of the tar and the gzip commands; this is a powerful approach, since tar is an archiving program and gzip is a compression program, and combining the two usually gives the best compression. It is also very convenient to use, as the tar command has a single flag that allows it to call gzip to either compress or decompress these files: "-z" (for g**z**ip), so these commands look very similar to the ones we just learned.
#
# Let's start by decompressing the meow.tar.gz file:
#
# ```bash
# \$ cd ..
#
# \$ tar -xzvf meow.tar.gz
# ```
#
# See how the behavior is basically the same as what we used for meow.tar, except now we operate on the .tar.gz file. Let's move and rename this as well so we can see how to compress:
#
# ```bash
# \$ mv meow.txt compression_practice/meow_for_targz.txt
#
# \$ cd compression_practice/
#
# \$ tar -czvf meow.tar.gz meow_for_targz.txt
# ```
#
# Now we've recreated all the compressed files, and we can compare to the ones I gave you to see that they are the same size:
#
# ```bash
# \$ ls -l
#
# \$ ls -l ..
# ```
# ## 5. Accessing the internet through the command line
#
# Now you should have a grasp on many of the very useful tools in the Unix tool kit, so to speak, and are hopefully becoming more and more comfortable with the command line. There is another useful class of tools that we haven't yet discussed, and those are the commands for accessing the internet through the command line.
#
# One common task you will want to accomplish is to download software from the web directly into your Unix system. There are two tools that can be used to accomplish this: wget and curl. By default, Mac systems do not come with wget, so we will show you how to use both tools. For this, we will try downloading an example configuration file for a track hub on the UCSC Genome Browser. First, let's move out of the compression practice folder and make a new folder for downloading. Run these commands:
#
# ```bash
# \$ cd ../../
#
# \$ mkdir ucsc_downloads
#
# \$ cd ucsc_downloads
# ```
#
# The file we want to download is located at http://genome.ucsc.edu/goldenPath/help/examples/hubDirectory/hub.txt. First, let's try downloading it using wget by running this command:
#
# ```bash
# \$ wget http://genome.ucsc.edu/goldenPath/help/examples/hubDirectory/hub.txt
#
# \$ ls
# ```
#
# Notice how the hub.txt file is now present. In wget, we can also use the '-O' flag to specify a different filename that we want to download a file to (this will prove useful in the Unix module II assignment). Try running this:
#
# ```bash
# \$ wget -O wget_hub.txt http://genome.ucsc.edu/goldenPath/help/examples/hubDirectory/hub.txt
#
# \$ ls
# ```
#
# See how the same file is now downloaded to wget_hub.txt, instead of hub.txt! Now, let's see how to download files with curl. Unlike wget, curl will, by default, print the transferred file to the standard output, rather than put it in a file, so we can use output redirection to put it in the right place. First just try running curl alone to see the file output:
#
# ```bash
# \$ curl http://genome.ucsc.edu/goldenPath/help/examples/hubDirectory/hub.txt
# ```
#
# Now, we can redirect this to a file:
#
# ```bash
# \$ curl http://genome.ucsc.edu/goldenPath/help/examples/hubDirectory/hub.txt > curl_hub.txt
# ```
#
# Finally, there are also ways to interact more deeply with a remote server. These tools are used when you have a remote server that you can work on. For example, if you use PMACS, the Penn medicine computing cluster service, you will have to use these tools to access those servers. However, because of the way we've set this course up, we don't have remote servers for you, so here we will just show you what the commands look like. The ssh command, which represents **s**ecure **sh**ell, is a way to get onto a remote server and access a Unix terminal on the server. It is used as follows:
#
# ```bash
# \$ ssh username@servername
# ```
#
# Where servername might be something like consign.pmacs.upenn.edu, for example, and username is your username on that server (i.e. for PMACS, this is your PennKey). The scp command, which represents **s**ecure **c**o**p**y, is analogous to the 'cp' command that we've been using, but is used to copy files to or from a remote server. The syntax is:
#
# ```bash
# \$ scp myfile username@servername:/path/to/directory
# ```
#
# For sending myfile to the server and putting it in the folder specified following the colon. To copy files from a remote server, the syntax is:
#
# ```bash
# \$ scp username@servername:/path/to/directory/myfile /path/to/local/dir/
# ```
#
# Which puts the file at /path/to/directory/myfile from the remote server into /path/to/local/dir/ on the local machine.
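#
# One more tip: to copy an entire directory rather than a single file, scp accepts a '-r' (recursive) flag, used the same way (the paths here are placeholders, just like above):
#
# ```bash
# \$ scp -r username@servername:/path/to/directory /path/to/local/dir/
# ```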
# ## 6. Extra footnotes
#
# Note that if you try to use the 'cat -A' flag on a Mac terminal, it will not work, because Macs ship the BSD version of 'cat', which doesn't support that flag. To reproduce this behavior, you can give '-et' as the flag instead.
# ## 7. Questions
# Write a command to find all lines in orig_151_pokemon.csv with type "Poison".
#
# Which flags would you use with the tar command to extract a file?
#
| 25,569 |

/HAI_ling_analysis.ipynb | a9f93df03beb4d1cfc5ef3c5a3d39131e08f60fb | MIT (permissive) | georgialoukatou/Conversational-Analysis-HAI | https://github.com/georgialoukatou/Conversational-Analysis-HAI | Jupyter Notebook | .py | 2,604,495 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="xj9NT6va65Q0" colab={"base_uri": "https://localhost:8080/"} outputId="cd373263-051e-4e44-a2e4-6c8fc118d739"
# !pip install git+https://github.com/georgialoukatou/Cornell-Conversational-Analysis-Toolkit.git/@Hai
#my github for extra pos and lemma analysis
import convokit
from convokit import Corpus, download
corpus = Corpus(download('subreddit-Cornell'))
# !python -m spacy download en_core_web_sm
# + colab={"base_uri": "https://localhost:8080/"} id="pNGlnDVZIzTI" outputId="31ca5ee5-efb1-4b83-a03d-64e50f72b378"
from convokit.text_processing import TextParser
textparser = TextParser()
textparser.transform(corpus) #perform SpaCy analysis
# + colab={"base_uri": "https://localhost:8080/"} id="eTqsvwTr2Pbq" outputId="77f6518c-b4f2-4231-9df0-c8e05de0a37c"
#check use of PoS
import collections
convo = corpus.random_conversation() #pick random conversation from corpus
d = collections.defaultdict(list)
d2 = collections.defaultdict(list)
for spkr in convo.iter_speakers():
spkr_utts = list(spkr.iter_utterances())
for i in spkr_utts:
d[spkr.id].append(len(i.text.split(" ")))
for j in i.meta["parsed"]:
d2[spkr.id].append(list())
for k in range(len(j["toks"])) :
d2[spkr.id][-1].append(j["toks"][k]["pos"]) #find "parsed" pos metadata for each speaker (separate by utt)
for i in range(len(d2[spkr.id])):
d2[spkr.id][i] = collections.Counter(d2[spkr.id][i])
#d2 gives parts of speech for each utterance and each speaker)
print(d2)
# + colab={"base_uri": "https://localhost:8080/"} id="oGtHtn5UceUG" outputId="50d7108a-97b9-4e52-d82f-fc7a134e6cee"
#check length of utterances
import pandas as pd
df=pd.DataFrame.from_dict(d,orient='index').T #length of utterance for each speaker
print(df)
# + id="U8s_er2B8sLG"
# code minimally adapted from Doyle et al, 2016 / Yurovski et al., 2016
import operator, itertools
def group(utterances):
utterances.sort(key=operator.itemgetter('convId'))
list1 = []
for key, items in itertools.groupby(utterances, operator.itemgetter('convId')):
list1.append(list(items))
return list1
def allMarkers(markers):
categories = []
for marker in markers:
categories.append(marker)
return categories
def checkMarkers(markers):
toReturn = []
for marker in markers:
if isinstance(marker, str):
toReturn.append({"marker": marker, "category": marker})
else:
toReturn.append(marker)
return toReturn
def findMarkersInConvo(markers,convo):
ba = {} # Number of times Person A and person B says the marker["marker"]
bna = {}
nbna = {}
nba = {}
for utterance in convo:
for j, marker in enumerate(markers):
word = marker["marker"]
msgMarker = word in utterance["msgMarkers"]
replyMarker = word in utterance["replyMarkers"]
if msgMarker and replyMarker:
ba[word] = ba.get(word,0) + 1
elif replyMarker and not msgMarker:
bna[word] = bna.get(word,0) + 1
elif not replyMarker and msgMarker:
nba[word] = nba.get(word,0) + 1
else:
nbna[word] = nbna.get(word,0) + 1
# print(msgMarker, replyMarker)
print(ba, bna, nba, nbna)
return({'ba': ba,'bna': bna,'nba': nba,'nbna': nbna})
def metaDataExtractor(groupedUtterances, markers,corpusType=''):
results = []
for i, convo in enumerate(groupedUtterances):
toAppend = findMarkersInConvo(markers,convo)
results.append(toAppend)
return results
def readMarkers(markersFile,dialect=None):
if dialect is None:
reader = csv.reader(open(markersFile))
else:
reader = csv.reader(open(markersFile),dialect=dialect)
markers = []
print('marker\tcategory')
for i, row in enumerate(reader):
toAppend = {}
toAppend["marker"] = row[0]
if(len(row) > 1):
toAppend["category"] = row[1]
else:
toAppend["category"] = row[0]
markers.append(toAppend)
#print(toAppend["marker"]+'\t'+toAppend["category"])
return markers
def writeFile(results, outputFile, shouldWriteHeader):
if len(results) == 0:
print("x")
return
toWrite = []
header = sorted(list(results[0].keys()))
for row in results:
toAppend = []
for key in header:
toAppend.append(row[key])
toWrite.append(toAppend)
if shouldWriteHeader:
with open(outputFile, "w", newline='') as f:
writer = csv.writer(f)
writer.writerows([header])
f.close()
with open(outputFile, "a", newline='') as f:
writer = csv.writer(f)
writer.writerows(toWrite)
f.close()
def determineCategories(msgMarkers,catdict,useREs=False):
msgCats = []
#iterate over catdict items {category: [words/REs]}
for cd in catdict.items():
if useREs:
if any(any(wordre.match(marker) for marker in msgMarkers) for wordre in cd[1]): #if REs, see if any tokens match each RE
msgCats.append(cd[0])
else:
if any(word in msgMarkers for word in cd[1]): #if just words, see if any word in category also in msg
msgCats.append(cd[0])
return msgCats
def makeCatDict(markers,useREs=False):
mdict = {}
for m in markers:
marker = re.compile(''.join([m["marker"], '$'])) if useREs else m["marker"]
if m["category"] in mdict:
mdict[m["category"]].append(marker)
else:
mdict[m["category"]] = [marker]
#mdict[m["category"]] = mdict.get(m["category"],[]).append(m["marker"]) #Need to swap marker and category labels
#mdict[m["marker"]] = mdict.get(m["marker"],[]).append(m["category"])
return(mdict)
def createAlignmentDict(category,result,smoothing,corpusType=''):
toAppend = {}
print(category)
print("R", result)
ba = int(result["ba"].get(category, 0))
bna = int(result["bna"].get(category, 0))
nbna = int(result["nbna"].get(category, 0))
nba = int(result["nba"].get(category, 0))
print(ba,bna,nbna,nba)
#Calculating alignment only makes sense if we've seen messages with and without the marker
if (((ba+nba)==0 or (bna+nbna)==0)):
return(None)
toAppend["category"] = category
#Calculating Echoes of Power alignment
powerNum = ba
powerDenom = ba+nba
baseNum = ba+bna
baseDenom = ba+nba+bna+nbna
if(powerDenom != 0 and baseDenom != 0):
dnmalignment = powerNum/powerDenom - baseNum/baseDenom
toAppend["dnmalignment"] = dnmalignment
else:
toAppend["dnmalignment"] = False
powerNum = ba
powerDenom = ba+nba
baseDenom = bna+nbna
baseNum = bna
powerProb = math.log((powerNum+smoothing)/float(powerDenom+2*smoothing))
baseProb = math.log((baseNum+smoothing)/float(baseDenom+2*smoothing))
alignment = powerProb - baseProb
toAppend["alignment"] = alignment
toAppend["ba"] = ba
toAppend["bna"] = bna
toAppend["nba"] = nba
toAppend["nbna"] = nbna
#print("toapp:", toAppend)
return(toAppend)
def runFormula(results, markers, smoothing,corpusType):
toReturn = []
categories = allMarkers(markers)
#print("h", results)
#print("c", categories)
for i, result in enumerate(results):
for j, category in enumerate(categories):
toAppend = createAlignmentDict(category["category"],result,smoothing,corpusType)
# print("here", toAppend)
if toAppend is not None:
toReturn.append(toAppend)
    toReturn = sorted(toReturn, key=lambda k: k["category"])  # speaker/replier ids are not stored in these dicts, so sort by category only
return toReturn
# + id="SRp43m6D7fUJ"
import pprint
# pair utterances from convokit
markers = ["Hufflepuff", "Cornell", "Rowling", "absolutely", "Magic", "Howgarts", "the", "of", "very", "to", "through", "and","each", "other", "'d", "more", "less", "front", "back", "please", "any", "due", "just", "a", "in", "that", "for", "on", "are", "you", "it", "was", "were", "I", "as", "with", "they", "be", "at", "too", "have", "does", "this", "from", "or", "had", "by", "but", "some", "also", "what", "can", "out", "all", "your", "my", "up", "even", "so", "yes", "when", "almost", "no", "must", "should", "will", "would", "not"]
utterances =[]
previous_utt = dict()
#convo = corpus.random_conversation()
convo_utts= list(convo.iter_utterances())
for i in convo.traverse('bfs'): #iterate through utterances in tree-like (breadth) structure, level by level
d = {} #each pair of speakers is a 'd' dictionary
previous_utt[i.id] = (i.text,i.speaker.id)
if i.reply_to == None: #ignore if no reply
continue
d['convId'] = (previous_utt[i.reply_to][1], i.speaker.id)
d['speakerId'] = previous_utt[i.reply_to][1] #?
d['msg'] = previous_utt[i.reply_to][0]
d['reply'] = i.text
d['msgMarkers'] = []
d['replyMarkers'] = []
d['msgTokens']= []
d['replyTokens']= []
d["replierId"] = i.speaker.id
# pprint.pprint(d)
utterances.append(d)
for idx, utt in enumerate(utterances): #append marker in utterance metadata if it exists
for marker in markers:
if marker in utt['msg']:
utterances[idx]['msgMarkers'].append(marker)
if marker in utt['reply']:
utterances[idx]['replyMarkers'].append(marker)
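# NOTE: `marker in utt['msg']` is a plain substring test, so a short marker like "a"
# will also match inside longer words (e.g. "and"). A stricter, word-level version of
# the same loop could look like this (a sketch, using the same `utterances` and `markers`):
#   for idx, utt in enumerate(utterances):
#       msg_tokens = utt['msg'].split()
#       reply_tokens = utt['reply'].split()
#       for marker in markers:
#           if marker in msg_tokens:
#               utterances[idx]['msgMarkers'].append(marker)
#           if marker in reply_tokens:
#               utterances[idx]['replyMarkers'].append(marker)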
# + colab={"base_uri": "https://localhost:8080/"} id="MA54ujO2rgVr" outputId="11f96971-317a-431c-fa3f-bb7c51589288"
pprint.pprint(utterances)
# + colab={"base_uri": "https://localhost:8080/"} id="ua96ao0G67KK" outputId="36ec30e3-a47a-481a-aa92-202b7241fbeb"
#main function - run alignment
smoothing=1
shouldWriteHeader=True
outputFile='aaaaaaaaa'
markers = checkMarkers(markers)
import math, csv
groupedUtterances = group(utterances)
#print(groupedUtterances)
metaData = metaDataExtractor(groupedUtterances,markers)
#print(metaData)
results = runFormula(metaData, markers, smoothing,'')
writeFile(results, outputFile, shouldWriteHeader)
# + id="nL8WanNR4EZC"
# For example, we can perform the `and` of the result of the figure parameter(s), thereby obtaining the intersection of figures. Similarly, we could try the same strategy on unary operators such as `not`, leading us to the inversion of figures:
# +
def pic_and(p,q):
return lambda n,m,i,j: p(n,m,i,j) and q(n,m,i,j)
print(draw(3,3,pic_and(upper_line, left_line)))
def pic_or(p,q):
return union(p,q)
print(draw(3,3,pic_or(upper_line, left_line)))
def pic_not(p):
return lambda n,m,i,j: not p(n,m,i,j)
print(draw(3,3,pic_not(left_line)))
# -
# Manipulating multiple return values with boolean operators is one strategy for combining and transforming figures. We can of course also perform some transformation of the parameters, for example with offsets, scales, or other transformations. Here we only show the flipping transformation which horizontally or vertically turns a picture around its middle axis.
#
# This is simply done by, for example, invoking the actual figure function with `n-1` instead of `0`, `n-2` instead of `1`, etc.
def flip_hor(p):
return lambda n,m,i,j: p(n,m,i,m-1-j)
def flip_ver(p):
return lambda n,m,i,j: p(n,m,n-1-i,j)
upper_right_triangle = lambda n,m,i,j: i <= j
upper_left_triangle = flip_hor(upper_right_triangle)
lower_right_triangle = flip_ver(upper_right_triangle)
lower_left_triangle = flip_ver(upper_left_triangle)
print(draw(3,3,upper_left_triangle))
print(draw(3,3,upper_right_triangle))
print(draw(3,3,lower_left_triangle))
print(draw(3,3,lower_right_triangle))
# Complex shapes which previously required quite a lot of thought, such as the pyramid, can now simply be decomposed into simpler shapes such as two lower triangles. Flipping also helps us build variations of the same figure, such as a rotated pyramid, which would otherwise have required radically different code (with respect to the horizontal pyramid).
#
# <div class="alert alert-block alert-info">
# This mismatch in the code that implements similar figures in very different ways (such as horizontal and vertical pyramids) can become a very serious issue: it severely limits our ability to quickly experiment with related concepts, and therefore limits our ability to build flexible and robust software.
# </div>
# +
upper_pyramid = pic_and(upper_left_triangle, upper_right_triangle)
lower_pyramid = pic_and(lower_left_triangle, lower_right_triangle)
left_pyramid = pic_and(lower_left_triangle, upper_left_triangle)
right_pyramid = pic_and(lower_right_triangle, upper_right_triangle)
print(draw(5,5,lower_pyramid))
print(draw(5,5,left_pyramid))
butterfly = pic_or(left_pyramid, right_pyramid)
print(draw(5,5,butterfly))
# -
# We can even put useful formulae to work in order to create new shapes. A good example of this is using the Pythagorean formula to draw circles:
circle = lambda n,m,i,j: (i-n//2)*(i-n//2)+(j-n//2)*(j-n//2) <= n*n//4
print(draw(5,5,circle))
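# As a quick illustration of how these predicates compose, here is one more combination: a ring, built from `circle` and the negation of a smaller circle (like `circle`, this assumes a square canvas, since it uses `n` for both axes):
small_circle = lambda n,m,i,j: (i-n//2)*(i-n//2)+(j-n//2)*(j-n//2) <= n*n//16
ring = pic_and(circle, pic_not(small_circle))
print(draw(9,9,ring))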
# By combining circles and other figures we can even draw some recognizable figures quite easily:
eye = lambda n,m,i,j: i == n//4 and j == m//2
mouth = right_pyramid
print(draw(15,15,pic_and(circle, pic_not(pic_or(eye, mouth)))))
| 12,736 |

/P2-fashion-keras/fashion-keras-2.ipynb | 56f31c420d3e88f2645faa354796a593bc690f69 | no_license | mcred89/JupyterNotebooks | https://github.com/mcred89/JupyterNotebooks | Jupyter Notebook | .py | 41,683 |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Кирилл Конча БКЛ181
# ## Task 1
#
# For convenience, let's import all the modules we'll need right away, so that things are simpler later on.
#
# %load_ext pycodestyle_magic
# %pycodestyle_on
import collections
import matplotlib.pyplot as plt
# %matplotlib inline
from nltk.collocations import ngrams
from nltk.probability import FreqDist
from nltk.collocations import BigramCollocationFinder
import seaborn as sns
import random
from collections import defaultdict
random.seed(23)
import pandas as pd
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import re
from pymorphy2 import MorphAnalyzer
from wordcloud import WordCloud
from nltk.corpus import stopwords
from nltk.draw.dispersion import dispersion_plot
morph = MorphAnalyzer()  # create the pymorphy2 analyzer here, since it is used in the next cell
# We reuse part of the code from the previous part of the assignment. Next, we build a dictionary of grammatical categories, fill it in, and create a DataFrame from it. We also save the resulting file, as was done in the seminar.
# +
with open('ochered.txt') as f:
text = f.read()
sw = stopwords.words('russian')
words = [w.lower() for w in word_tokenize(text) if w.isalpha()]
filtered = [w for w in words if w not in sw]
first = morph.parse(filtered[0])
dict_frame = {}
dict_frame['lex'] = []
dict_frame['word'] = []
dict_frame['case'] = []
dict_frame['POS'] = []
dict_frame['number'] = []
dict_frame['gender'] = []
for word in filtered:
result = morph.parse(word)
dict_frame['lex'].append(result[0].normal_form)
dict_frame['word'].append(result[0].word)
dict_frame['case'].append(result[0].tag.case)
dict_frame['POS'].append(result[0].tag.POS)
dict_frame['number'].append(result[0].tag.number)
dict_frame['gender'].append(result[0].tag.gender)
data_frame = pd.DataFrame(dict_frame)
data_frame.to_csv(
'sorokin.csv',
sep='\t',
index=False,
)
data_frame
# -
# ## Task 2
#
# Let's build a word cloud (with stop words removed).
# +
text = ' '.join(data_frame['lex'])
wordcloud = WordCloud(background_color='white',
width=800, height=800).generate(text)
plt.figure(figsize=(8, 8), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.title('Облако слов')
plt.show()
# -
# Let's count the number of words of each grammatical gender.
data_frame['gender'].value_counts().plot.bar(color='purple')
plt.title('Gender')
plt.xlabel('gender')
plt.ylabel('number of entries')
# And here is a histogram of word lengths (although I'm not sure it is supposed to look like this).
data_frame['length'] = data_frame['lex'].apply(len)
plt.figure(figsize=(10, 6))
sns.distplot(data_frame['length'], bins=17, color='green')
plt.title('Distribution of lemma length')
plt.ylabel('%')
plt.xlabel('Length of word')
# By analogy with the seminar, let's also make a chart showing which gender of words is the most frequently used.
plt.figure(figsize=(6, 6))
data_frame['gender'].value_counts().plot(kind='pie')
plt.title('Most frequency gender')
# ## Task 3
# We take the code that counts the most frequent bigrams from the previous part of the assignment and turn its output into a DataFrame.
# +
finder = BigramCollocationFinder.from_words(filtered, window_size=3)
bgram = list(finder.ngram_fd.items())
bgram.sort(key=lambda item: item[-1], reverse=True)
bgram_dict = {'bigram': [], 'frequency': []}
for i in bgram:
bgram_dict['bigram'].append(str(i[0][0]) + ' ' + str(i[0][1]))
bgram_dict['frequency'].append(i[1])
data_frame_1 = pd.DataFrame(bgram_dict)
data_frame_1
# -
# Now let's look at how the parts of speech are distributed.
# +
list_of_POS = []
morph = MorphAnalyzer()
plot_dict = {'POS': []}
for word in filtered:
ana = morph.parse(word)
if ana[0].tag.POS:
plot_dict['POS'].append((ana[0].tag.POS))
df2 = pd.DataFrame(plot_dict)
plt.figure(figsize=(6, 6))
df2['POS'].value_counts().plot(kind='pie')
plt.title('POS Frequency')
# -
# Finally, for the novel "Очередь" ("The Queue"), let's compute the distribution of the word "очередь" ("queue") across cases. To do this, we need to read the CSV file that we saved above.
# +
sorokin = pd.read_csv('sorokin.csv', sep='\t').fillna('')
data_frame3 = sorokin[sorokin['lex'] == 'очередь'][
'case'].value_counts().plot.barh(color='pink')
plt.title('Падежи очереди')
plt.xlabel('number of entries')
plt.ylabel('case')
# -
# ## Task 4
#
# Let's count the mentions of the text's characters.
# +
characters = ['володя', 'лена']
disp = dispersion_plot(filtered, characters,
ignore_case=True, title='Lexical Dispersion Plot')
disp
# -
#   - **strides:** This defaults to a value of (1, 1). So each of our 28x28 pictures is going to be scanned in 3x3 blocks and the scanner can be thought of as moving from left to right, top to bottom, moving over 1 square at a time.
# - Example: Imagine our 28x28 grid. We're starting in a 3x3 box at the top left. We then move our 3x3 box over 1 column to the right. The 2 right most columns in the first box are now the 2 left most columns.
# - **activation:** Remember when we talked about how enough incoming connections activating resulted in a node "lighting up"? Well we need an equation that determines the 'lit' vs 'not lit' state. We call this the activation function.
# - **RELU:** You're typically going to see either Sigmoid or RELU activation function. All you really need to know for now is that RELU is faster and more performant.
#   - **input_shape:** The shape of the pictures that will be fed into this layer. Note that this does NOT include the number of features dimension from the "Weird-o shapes" section above, but the others are the same. The convolutional layer needs to know the size of the picture and the channels, but will just be getting one picture at a time. It doesn't care about the overall number of pictures.
#
# - **MaxPooling2D:** CNNs are computationally expensive. You can focus the data, speed up your training, and use fewer resources by shrinking your picture in a pooling layer. We're specifically using a **max** pooling layer. Imagine we're 'scanning' our picture again. We start in a 2x2 square in the top left of the picture. We take whatever number is largest in that 2x2 square and write that as the number in the top left corner of our new picture. This allows us to shrink the image while maintaining the spatial ratios. (A small numeric sketch of this follows right after this list.)
# - **pool_size:** The size of the square we're scanning with. We're using 2x2 in our first pooling layer. Every 2x2 square will be reduced to a single square with a value equal to the largest number in the 2x2 square.
#
# - **Dropout:** This layer isn't CNN specific. It's a common technique used in ML to help prevent overfitting. The number in this layer is the percentage of inputs that will randomly be set to zero in a given batch. If 25% of the data is randomly dropped, you're going to have a hard time overfitting your data.
#
# - **Flatten:** What's our output layer look like? It's a 1D softmax. But we're feeding 3D, 28x28x1 images into the CNN layers. This layer just flattens our 3D picture into 1D for processing by the final, normal, fully connected layers. Note that this also includes a Dense layer.
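#
# To make the pooling idea concrete, here is a tiny illustrative sketch (separate from the model code) of 2x2 max pooling applied to a made-up 4x4 "image" using plain numpy:
#
# ```python
# import numpy as np
#
# img = np.array([[1, 3, 2, 0],
#                 [4, 8, 1, 1],
#                 [0, 2, 9, 5],
#                 [3, 1, 4, 7]])
#
# # split into non-overlapping 2x2 blocks and keep the max of each block
# pooled = img.reshape(2, 2, 2, 2).max(axis=(1, 3))
# print(pooled)
# # [[8 2]
# #  [3 9]]
# ```
#
# The 4x4 grid shrinks to 2x2, but each surviving number is still the strongest response from its neighborhood, which is why the spatial layout is preserved.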
# +
model = Sequential()
# This creates a 32 filter, 2D convolutional layer.
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=(28, 28, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(training_features_shaped, training_labels_oh,
batch_size=128,
epochs=10,
verbose=1,
validation_data=(test_features_shaped, test_labels_oh))
score = model.evaluate(test_features_shaped, test_labels_oh, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# +
# Same print function as other notebooks, with modifications to account for updated input shapes.
def print_prediction(model, index, test_features, test_features_shaped, test_labels, one_hot_test_labels):
our_clothing = test_features.iloc[index]
our_clothing_shaped_for_graphing = our_clothing.values.reshape(28, 28)
our_clothing_shaped_for_prediction = our_clothing.values.reshape(1, -1)
print(f'Index: {index}')
label_num = test_labels.iloc[index]
print(f'Label_Number: {label_num}')
print(f'Label: {label_dict[label_num]}')
    # Make the prediction, then convert it back from the one-hot encoding.
predict_num = argmax(model.predict(test_features_shaped)[index])
print(f'Prediction_Number: {predict_num}')
print(f'Prediction_Number: {label_dict[predict_num]}')
plt.imshow(our_clothing_shaped_for_graphing, cmap=matplotlib.cm.binary)
plt.axis("off")
plt.show()
print_prediction(model, 1001, test_features, test_features_shaped, test_labels, test_labels_oh)
# -
model.save('model3.h5')
# ## More of the same
#
# Other than the pieces of CNNs that I explained earlier, everything in those results should seem pretty straightforward by now.
#
# ### But can we improve?!
#
# Let's see:
#
# **Note:** I'm going to leave all of the training epochs in. Scroll down for more commentary.
# +
model2 = Sequential()
model2.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=(28, 28, 1)))
model2.add(Conv2D(64, (3, 3), activation='relu'))
model2.add(Dropout(0.25))
model2.add(Flatten())
model2.add(Dense(128, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(10, activation='softmax'))
model2.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model2.fit(training_features_shaped, training_labels_oh,
batch_size=128,
epochs=100,
verbose=1,
validation_data=(test_features_shaped, test_labels_oh))
score2 = model2.evaluate(test_features_shaped, test_labels_oh, verbose=0)
print('Test loss:', score2[0])
print('Test accuracy:', score2[1])
# -
model2.save('model4.h5')
# ### A brief, important side bar: Why so bouncy?
#
# This is the first large training I've left in any of my notebooks. You may have noticed that:
#
# 1. We didn't just keep improving forever.
# 2. Rather than just hitting a best value and staying, we bounced around our top value.
#
# Think back to the Optimizer explanation from the last notebook. We feed a batch's worth of values in and then use backpropagation to tweak the weights in our network. Our network will likely never be perfect, so it's going to just bounce around the best weights, and therefore bounce around the best accuracy.
#
# This brings us to our next important ML term: **Learning rate**.
#
# If we want, we're able to tweak the learning rate of our optimizer. This basically represents how large our weight-tweaking jumps are.
# - A large learning rate will reach optimal weights more quickly but risks severe overshooting and the possibility of never actually finding the best weights.
# - A small learning rate will potentially bring you to a more exact answer, but risks longer training times and getting 'stuck' in a small valley.
#   - Picture a jagged graph that is overall shaped like a 'U'. This graph represents your optimal weights. It's possible to get stuck inside a small valley on either end of the 'U' without ever getting to the true bottom. A large learning rate will 'see' far enough outside of the small valley to not get stuck.
#
# You can also typically set a **decay** for your learning rate. This represents the amount your learning rate will lower over time. Think of it this way: You can set your learning rate to a higher number so it quickly finds the major valley, but it will lower with every epoch so that it settles into the valley with a lower learning rate over time.
#
# #### What do I do with this?
#
# If you're feeling really spunky, you can go play with the optimizer's settings. [Here's](https://keras.io/optimizers/) the docs on optimizers. And here's how you would update the learning rate and decay for the optimizer that we used in our training:
#
# ```python
# # The listed numbers are the default
# ada = keras.optimizers.Adadelta(lr=1.0 , decay=0.0)
# model.compile(loss=keras.losses.categorical_crossentropy, optimizer=ada)
# ```
#
# ### Back to our scheduled program:
# #### Turns out the Keras team knows what they're doing....
#
# You'll notice my final CNN is almost exactly the same as the keras provided example.
#
# I spent a few hours toying with different CNN architectures. Most of my results had roughly the same accuracy with slower training speeds. Many of the scenarios I tried resulted in severe overfitting. I was getting a lot of 99.6% accuracy on the training data and close to 88% on the validation and test data. Apparently those dropout layers are doing some serious lifting!
#
# Ditching the pooling layer helped. Remember, those are really for training speed and reducing computation costs. There are obvious uses for pooling, but for our small, simple dataset it was just holding us back. This seemed to add ~0.5% accuracy.
#
# I increased the epochs from 10 to 100. This also seemed to add ~0.5% accuracy. But we were bouncing around 92% for almost all of the training. We first hit 92% at epoch 21. Loss did much better with more epochs and didn't seem to start bottoming out until around epoch 60-75. I don't think any more than 100 epochs would be helpful.
#
# ### How'd we do overall?
#
# Let's check our progress across all the NNs form both notebooks.
#
# Just as a reminder, here are the top few Kaggle entries:
# 1. [92.72%](https://www.kaggle.com/bugraokcu/cnn-with-keras)
# 2. [91.76%](https://www.kaggle.com/pavansanagapati/fashion-mnist-cnn-model-with-tensorflow-keras)
# 3. [76.80%](https://www.kaggle.com/kmader/capsulenet-on-fashion-mnist)
#
# |Run Num |Training Loss|Training Accuracy|Testing Loss|Testing Accuracy|
# |--------|------------ |-----------------|------------|----------------|
# |1 |12.89 |19.97% |12.9 |19.92% |
# |2 |0.14 |94.93% |0.57 |87.28% |
# |3 |0.23 |91.83% |0.25 |91.33% |
# |4 |0.07 |98.21% |0.51 |92.53% |
#
# Given our ranking among top Kaggle performers and my failed architecture experiments, I'm not convinced we could do much better. I think we'd need more data. With more time I'd like to run transformations on our data to artificially increase the amount of training data (doing things like rotating and warping pictures).
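#
# For reference, here is a rough sketch of what that augmentation could look like with Keras' ImageDataGenerator (the transformation ranges below are illustrative guesses, not tuned values):
#
# ```python
# from keras.preprocessing.image import ImageDataGenerator
#
# augmenter = ImageDataGenerator(rotation_range=10,
#                                width_shift_range=0.1,
#                                height_shift_range=0.1,
#                                zoom_range=0.1)
#
# # flow() yields endless batches of randomly transformed training images
# model.fit_generator(augmenter.flow(training_features_shaped, training_labels_oh, batch_size=128),
#                     steps_per_epoch=len(training_features_shaped) // 128,
#                     epochs=10,
#                     validation_data=(test_features_shaped, test_labels_oh))
# ```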
#
# We'll get to that later. I'm on to new projects for now!
| 14,973 |