| content (stringlengths 73-1.12M) | license (stringclasses 3 values) | path (stringlengths 9-197) | repo_name (stringlengths 7-106) | chain_length (int64 1-144) |
---|---|---|---|---|
<jupyter_start><jupyter_text>## Summary
This notebook reworks code Shanna wrote to process the screening data. It does the following:
1. Loads data from DrugBank, ReFRAME, and Broad.
2. Creates RDKit molecules from the SMILES, then standardizes and sanitizes the molecules by removing salts.
3. Calculates Morgan fingerprints.
4. Saves fingerprints, molecule names, and source dataset.<jupyter_code>import numpy as np
import pandas as pd
import rdkit
from molvs import Standardizer
from rdkit.Chem import PandasTools, SaltRemover, rdMolDescriptors, MolFromSmiles
from rdkit import RDLogger
RDLogger.DisableLog('rdApp.*') # suppresses annoying RDKIT errors and warnings
from tqdm import tqdm
tqdm.pandas()<jupyter_output><empty_output><jupyter_text>### 1. Load Data<jupyter_code>drugbank = PandasTools.LoadSDF('../data/screening_data/drugbank.sdf') #auto-sanitize function; don't need to do again
reframe = pd.read_csv('../data/screening_data/reframe.csv', encoding='latin1')
broad = pd.read_csv('../data/screening_data/broad.csv', delimiter="\t")
print('len(drugbank) =', len(drugbank))
print('len(reframe) =', len(reframe))
print('len(broad) =', len(broad))
# combine into one dataframe
screening_data = pd.DataFrame(columns=['source', 'name', 'smiles'])
screening_data.source = ['drugbank']*len(drugbank) + ['reframe']*len(reframe) + ['broad']*len(broad)
screening_data.name = pd.concat([drugbank.GENERIC_NAME, reframe.Name, broad.pert_iname], ignore_index=True)
screening_data.smiles = pd.concat([drugbank.SMILES, reframe.SMILES, broad.smiles], ignore_index=True)
print(f"Dropping {screening_data['smiles'].isna().sum()} rows with missing SMILES")
screening_data.dropna(inplace=True)<jupyter_output><empty_output><jupyter_text>### 2. Create, standardize and sanitize molecules<jupyter_code>screening_data['rdkit_mol'] = screening_data['smiles'].progress_apply(MolFromSmiles)
print(f"Dropping {screening_data['rdkit_mol'].isna().sum()} rows which failed molecule creation")
screening_data.dropna(inplace=True)
# standardize molecules
screening_data['rdkit_mol'] = screening_data['rdkit_mol'].progress_apply(Standardizer().standardize)
# remove salts
screening_data['rdkit_mol'] = screening_data['rdkit_mol'].progress_apply(SaltRemover.SaltRemover().StripMol)<jupyter_output><empty_output><jupyter_text>### 3. Calculate Morgan Fingerprints<jupyter_code>def calculate_morgan_fingerprint(mol):
fp = rdMolDescriptors.GetMorganFingerprintAsBitVect(mol, radius=2, useChirality=True)
bit_string = fp.ToBitString()
return np.array([int(char) for char in bit_string], dtype=np.uint8)
screening_data['morgan_fingerprint'] = screening_data['rdkit_mol'].progress_apply(calculate_morgan_fingerprint)<jupyter_output><empty_output><jupyter_text>### 4. Save Results<jupyter_code>assert not screening_data.isna().values.any() # confirm clean data
screening_data.drop(columns=['smiles', 'rdkit_mol']).to_pickle('../processed_data/screening_data_processed.pkl')<jupyter_output><empty_output>
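<jupyter_text>A quick sketch of how the saved file could be read back downstream (hypothetical usage, not part of this notebook's pipeline); each row of the fingerprint column holds one NumPy bit vector, so the column can be stacked into a single matrix:<jupyter_code>processed = pd.read_pickle('../processed_data/screening_data_processed.pkl')
# Stack the per-molecule bit vectors into an (n_molecules, n_bits) matrix
fingerprints = np.stack(processed['morgan_fingerprint'].values)
print(fingerprints.shape)
print(processed['source'].value_counts())<jupyter_output><empty_output>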
| permissive | /notebooks/process_screening_data.ipynb | Mingchenchen/tmprss2 | 5 |
<jupyter_start><jupyter_text>## 1. Import Libraries<jupyter_code>import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>## 2. Load Data<jupyter_code>df=pd.read_csv("Social_Network_Ads.csv")
X=df.iloc[:,2:4]
y=df.iloc[:,4]
X.head(2)
y.head(2)<jupyter_output><empty_output><jupyter_text>## 3. Split data into train and test sets<jupyter_code>X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.25,random_state=1)<jupyter_output><empty_output><jupyter_text>## Age and EstimatedSalary are on different scales, so apply StandardScaler<jupyter_code>sc=StandardScaler()
X_train=sc.fit_transform(X_train)
X_test=sc.transform(X_test)<jupyter_output><empty_output><jupyter_text>## 4. Train the model<jupyter_code>from sklearn.svm import SVC
classifier=SVC(kernel='linear',random_state=0)
classifier.fit(X_train, y_train)
predict=classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm=confusion_matrix(y_test, predict)
cm
from sklearn.metrics import accuracy_score
accuracy=accuracy_score(y_test, predict)
accuracy<jupyter_output><empty_output><jupyter_text>## Accuracy of the SVM is 82%
## 5. Check whether the accuracy increases using GridSearchCV<jupyter_code>from sklearn.model_selection import GridSearchCV
parameter=[{'C':[1,10,100,1000], 'kernel':['linear']},
{'C':[1,10,100,1000], 'kernel':['rbf'], 'gamma':[0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]}]
grid_search=GridSearchCV(estimator=classifier,
param_grid=parameter,
scoring='accuracy',
cv=10,
n_jobs=-1)
grid_search=grid_search.fit(X_train,y_train)
pred=grid_search.predict(X_test)
cm=confusion_matrix(y_test,pred)
accuracy=accuracy_score(y_test,pred)
accuracy<jupyter_output><empty_output>
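<jupyter_text>Besides the test-set accuracy above, the fitted `GridSearchCV` object also exposes the best cross-validated score and the parameters that produced it. A minimal sketch of inspecting them (assuming `grid_search` has been fit as in the cell above):<jupyter_code># Best mean cross-validated accuracy found during the search
print("Best CV accuracy:", grid_search.best_score_)
# Hyperparameter combination that achieved it (kernel, C, and gamma for the rbf kernel)
print("Best parameters:", grid_search.best_params_)<jupyter_output><empty_output>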
| no_license | /Social_Network_ads_GridSearchCV.ipynb | NikShrish/GridSearchCV | 6 |
<jupyter_start><jupyter_text># ASSIGNMENT DAY 6
# 1: Assuming that we have some email addresses in the "[email protected]" format, please write a program to print the company name of a given email address. Both user names and company names are composed of letters only.<jupyter_code>s=input("enter mail address").strip()
print("Entered emain address is : ",s)
print(s.split('@')[1].split('.')[0])<jupyter_output>enter mail [email protected]
Entered email address is: [email protected]
google
<jupyter_text># 2: Write a program that accepts a comma-separated sequence of words as input and prints the words in a comma-separated sequence after sorting them alphabetically.<jupyter_code>s=input("Enter comma separated sequence").strip()
print(sorted(s.split(',')))<jupyter_output>Enter comma separated sequencewithout,hello,bag,world
['bag', 'hello', 'without', 'world']
<jupyter_text># 3: SETS
# 3.1: Creating a set<jupyter_code>s={1,2,3}
print(type(s))
l=[1,2,3]
d=set(l)
print(type(d))<jupyter_output><class 'set'>
<class 'set'>
<jupyter_text># 3.2: Accessing elements in a set and checking the presence of elements<jupyter_code>s={1,2,3,'mani'}
for i in s:
print(i)
print('mani' in s)<jupyter_output>1
2
3
mani
True
<jupyter_text># 3.3: Adding items to a set and checking the usage of len() function<jupyter_code>s={1,2,3}
print(len(s))
s.add('mani')
print(s)
print(len(s))
s.update(['nbh',5,6])
print(s)
print(len(s))<jupyter_output>3
{1, 2, 3, 'mani'}
4
{1, 2, 3, 5, 6, 'mani', 'nbh'}
7
<jupyter_text># 3.4: usage of remove,del,pop,clear,discard functions on sets<jupyter_code>s={1,2,3,4}
s.pop()
print(s)
s.remove(3) #if element is not present it will raise error
print(s)
s.discard(1) #if element is not present it will not raise error
print(s)
s.clear()
print(s)<jupyter_output>{2, 3, 4}
{2, 4}
{2, 4}
set()
<jupyter_text># 3.5: mathematical operations<jupyter_code>s1={1,2,3}
s2={4,1,6}
print(s1.union(s2))
print(s1|s2)
print(s1.intersection(s2))
print(s1-s2)
print(s2-s1)<jupyter_output>{1, 2, 3, 4, 6}
{1, 2, 3, 4, 6}
{1}
{2, 3}
{4, 6}
<jupyter_text># 4: Given a list of n-1 numbers ranging from 1 to n, your task is to find the missing number. There are no duplicates.<jupyter_code>l=list(map(int,input().split()))
for i in range(min(l),max(l)):
if i not in l:
print(i,end =" ")<jupyter_output>1 2 4 6 3 7 8
5 <jupyter_text># 5: With a given list L, write a program to print this list L after removing all duplicate values with the original order preserved.<jupyter_code>s=list(map(int,input().split()))
k=[]
for i in s:
if i not in k:
k.append(i)
print(*k)<jupyter_output>12 24 35 24 88 120 155 88 120 155
12 24 35 88 120 155
| no_license | /day 6/day 6.ipynb | Boddapatisaimaniteja/LetsUpgrade-AI-ML | 9 |
<jupyter_start><jupyter_text>Set the key problem parameters<jupyter_code>inputsize = 193
e = 0.0063<jupyter_output><empty_output><jupyter_text>Import packages<jupyter_code>import torch
import numpy as np
import matplotlib.pyplot as plt
import autograd.numpy as np
from autograd import grad
import pyamg
import dmg.gallery as gallery
import dmg.dgmg as dgmg
import dmg.gmg_linear as gmg_linear
import dmg.classical_amg as classical_amg
from mpl_toolkits.mplot3d import Axes3D
import xlsxwriter
from scipy.ndimage import convolve<jupyter_output><empty_output><jupyter_text>Define function<jupyter_code>def Restriction(inputsize):
inputsize = int(inputsize)
outputsize = int(inputsize/2)
OUTPUT = np.zeros([outputsize, inputsize])
for i in range(outputsize):
OUTPUT[i][2*i] = 1/4
OUTPUT[i][1+2*i] = 1/2
OUTPUT[i][2+2*i] = 1/4
return OUTPUT
def Poisson(inputsize):
inputsize = int(inputsize)
outputsize = int(inputsize/2)
A1 = 2*np.eye(inputsize)
for i in range(inputsize-1):
A1[i, i+1] = -1
A1[i+1, i] = -1
OUTPUT = A1
return OUTPUT
def Multigrid_circle(inputsize, A_A, B, R_A, s, w, error):
A = np.matrix(A_A)
P = 2.*np.transpose(R_A)
R = np.matrix(R_A)
M = np.matrix(w**(-1)*np.diag(np.diag(A)))
K = M - A
C = np.linalg.inv(M)*K
b = np.linalg.inv(M)*B
U0 = np.matrix(np.zeros([inputsize, 1]))
RESIDUAL = []
Residual=1
i=0
while Residual > error:
for j in range(s):
U0 = C*U0+b
r = B - A*U0
Residual = np.linalg.norm(r,2)
rc = R*r
Ac = R*A*P
Uc = np.linalg.solve(Ac, rc)
U = U0 + P*Uc
for k in range(s):
U = C*U+b
U0 = U
RESIDUAL.append(Residual)
i=i+1
print("Residual = {}".format(Residual))
print("Interation = {}".format(i))
print("===================")
return U0, RESIDUAL
def rho(inputsize,A,P,R,w,s):
M = (w**(-1)) * np.diag(np.diag(A))
K = M - A
MK = np.matmul(np.linalg.inv(M),K)
I = np.eye(inputsize)
IPRAPRA = I - np.matmul(np.matmul(np.matmul(P,np.linalg.inv(np.matmul(np.matmul(R,A),P))),R),A)
C = np.matmul(np.matmul(MK,IPRAPRA),MK)
for i in range(5):
C = np.matmul(C,C)
radius = np.linalg.norm(C)**(1/32)
return radius
def optimizer_GD(inputsize, A1, R, w, s, learning_rate, lam):
rhoold = rho(inputsize,A1,2.*np.transpose(R),R,w,s)
device = torch.device('cpu')
R = torch.tensor(R,dtype = torch.double, requires_grad=True,device=device)
w = torch.tensor(w,dtype = torch.double, requires_grad=True, device=device)
lam = torch.tensor(lam,dtype = torch.double, requires_grad=True, device=device)
A = torch.tensor(A1,dtype = torch.double, device=device)
P = 2.*torch.t(R)
M = (w**(-1)) * torch.diag(torch.diag(A))
K = M - A
MK = torch.mm(torch.inverse(M),K)
I = torch.eye(inputsize,dtype = torch.double, device=device)
I1 = torch.ones([inputsize,1],dtype = torch.double, device=device)
I2 = torch.ones([outputsize,1],dtype = torch.double, device=device)
IPRAPRA = I - torch.mm(torch.mm(torch.mm(P,torch.inverse(torch.mm(torch.mm(R,A),P))),R),A)
C = torch.mm(torch.mm(MK,IPRAPRA),MK)
for i in range(5):
C = torch.mm(C,C)
loss = torch.norm(C)**(1/32) + torch.mm(lam,torch.mm(R,I1) - I2)
loss.backward()
with torch.no_grad():
R-=learning_rate*R.grad
w-=learning_rate*w.grad
lam-=learning_rate*lam.grad
R = R.detach().numpy()
w = w.detach().numpy()
lam = lam.detach().numpy()
rhonew = rho(inputsize,A1,2.*np.transpose(R),R,w,s)
return R, w, lam, rhoold, rhonew
def direct_optimizer_GD(inputsize, A0, A1, R, w, s, learning_rate, lam):
Rhoold = rho(inputsize,A1,2.*np.transpose(R),R,w,s)
Rhonew = [Rhoold]
for i in range(100):
Rnew, wnew, lamnew, rhoold, rhonew = optimizer_GD(inputsize, A1, R, w, s, learning_rate, lam)
R = Rnew
w = wnew
lam = lamnew
print("rho = {}".format(rhonew))
print("===================")
Rhonew.append(rhonew)
i+=1
print("======End======")
return R, w, Rhoold, Rhonew
def homotopy_optimizer_GD(inputsize, A0, A1, R, w, s, learning_rate ,accept_radius ,step_length,lam):
Rhoold = rho(inputsize,A1,2.*np.transpose(R),R,w,s)
Radius = [Rhoold]
L = step_length
print("======Section 1======")
while L < 1:
M = (1-L)*A0 + L*A1
Rnew, wnew, lamnew, rhoold, rhonew = optimizer_GD(inputsize, M, R, w, s, learning_rate, lam)
Radius.append(rhonew)
if rhonew > accept_radius:
step_length = 0.1*step_length
learning_rate = 0.1*learning_rate
print('Decrease the step_length, learning_rate and Restart!!')
print("step_length = {}".format(step_length))
print("learning_rate = {}".format(learning_rate))
print("rhonew = {}".format(rhonew))
print("===================")
R = Restriction(inputsize)
lam = np.zeros([1,outputsize])
w = 2/3
L = step_length
else:
R = Rnew
w = wnew
L += step_length
lam = lamnew
print("L = {}".format(L))
print("rho = {}".format(rhonew))
print("===================")
print("======Section 2======")
i = 0
while rhoold>rhonew and i <20000:
Rnew, wnew, lamnew, rhoold, rhonew = optimizer_GD(inputsize, A1, R, w, s, learning_rate, lam)
Radius.append(rhonew)
R = Rnew
w = wnew
lam = lamnew
print("the {} steps".format(i))
print("rho = {}".format(rhonew))
print("===================")
i+=1
Rhonew = rhonew
print("======End======")
return R, w, Rhoold, Rhonew, Radius<jupyter_output><empty_output><jupyter_text>Define parameter<jupyter_code>s = 1
w = 2/3
MAX_ITER = 50
def u_real(x): return np.sin(4.*np.pi*x)
def rightf(x): return -4*np.pi**2*np.cos(4*np.pi*x)*np.cos(np.pi*x/e)/e+16*(np.pi**2)*(2+np.sin(np.pi*x/e))*np.sin(4*np.pi*x)
def a(x): return 2+np.sin(np.pi*x/e)
outputsize = int(inputsize/2)
h = 1/(inputsize-1)
X = np.linspace(0, 1, inputsize)
H = 1/(outputsize-1)
Media = a(np.linspace(-h,1+h,inputsize+1))
A0 = Poisson(inputsize)
A1 = gallery.divkrad((inputsize,),Media)
RightF = rightf(X)*h**2
RightF = RightF.reshape(inputsize, 1)
UREAL = u_real(X).reshape(inputsize, 1)
R = Restriction(inputsize)
P = 2.*np.transpose(R)<jupyter_output><empty_output><jupyter_text>Local DMM on fine grid<jupyter_code>K = 10
batch_size = 10
num_iter = 4000
step_size = 5e-5
PR_stencil_type = "m3p"
init_point = None
convergence = {}
deep_gmm = dgmg.DeepMG(A1, K=K, PR_stencil_type=PR_stencil_type, max_levels=2)
opt_par = deep_gmm.optimize(num_iter=num_iter, step_size=step_size, batch_size=batch_size, init_point=init_point)
deep_gmm.update_prd(opt_par)
current_rho = deep_gmm.compute_rho()
print("Optimized rho = {}".format(current_rho))
localR = opt_par[1][0]
LOCALR = np.zeros([outputsize-1,inputsize-2])
for i in range(outputsize-1):
C = localR[i][:]
for l in range(3):
LOCALR[i][2*i+l] = C[l]
plt.imshow(LOCALR)
plt.colorbar()
plt.imshow(np.log(np.abs(LOCALR)))
plt.colorbar()
Rnew, wnew, Rhoold, Rhonew, Radius = homotopy_optimizer_GD(inputsize, A0, A1.to_full(), R, 2/3, s, learning_rate = 1e-6,accept_radius = 1 ,step_length = 1e-2, lam = np.zeros([1,outputsize]))
GLOBALR = Rnew
plt.imshow(GLOBALR)
plt.colorbar()
plt.imshow(np.log(np.abs(GLOBALR)))
plt.colorbar()
plt.plot(np.log(np.abs(GLOBALR[int(outputsize/2)][:])))
Xfine = np.linspace(0,1,inputsize)
Xcoarse = np.linspace(0,1,outputsize)
Mediafine = a(Xfine)
Mediacoarse = a(Xcoarse)
Urealfine = u_real(Xfine).reshape([inputsize,1])
Urealcoarse = u_real(Xcoarse).reshape([outputsize,1])
GLOBALA = np.zeros([outputsize,outputsize])
for i in range(outputsize):
for j in range(outputsize):
basisi = GLOBALR[i][:]
basisj = GLOBALR[j][:]
gxbasisi = np.gradient(basisi,h)
gxbasisj = np.gradient(basisj,h)
GLOBALA[i][j] = (np.sum(Mediafine*gxbasisi*gxbasisj))*(H)
GLOBALF = np.zeros([outputsize,1])
F = rightf(Xfine)
for j in range(outputsize):
basisj = GLOBALR[j][:]
GLOBALF[j][0] = (np.sum(F*basisj))*(H)
GLOBALU = np.linalg.solve(GLOBALA,GLOBALF)
GlobalU = np.zeros([inputsize,1])
for i in range(outputsize):
GlobalU = GlobalU + GLOBALU[i]*GLOBALR[i][:].reshape([inputsize,1])
deltaU = GlobalU - Urealfine
Mediafine = a(np.linspace(0,1,inputsize-1)).reshape([inputsize-1,1])
energynorm = np.sqrt(np.sum(np.multiply(np.multiply(np.diff(Urealfine,axis=0)/h,np.diff(Urealfine,axis=0)/h),Mediafine)*(h)))
l2norm = np.sqrt((np.sum((Urealfine)**2)*(h)))
h1norm = np.sqrt((np.sum((Urealfine)**2)*(h))+(np.sum((np.diff(Urealfine,axis=0)/h)**2)*(h)))
print('Energy norm: ')
print(np.sqrt(np.sum(np.multiply(np.multiply(np.diff(deltaU,axis=0)/h,np.diff(deltaU,axis=0)/h),Mediafine)*(h))))
print('L2 norm: ')
print(np.sqrt((np.sum((deltaU)**2)*(h))))
print('H1 norm: ')
print(np.sqrt((np.sum((deltaU)**2)*(h))+(np.sum((np.diff(deltaU,axis=0)/h)**2)*(h))))
print('Energy norm: ')
print(np.sqrt(np.sum(np.multiply(np.multiply(np.diff(deltaU,axis=0)/h,np.diff(deltaU,axis=0)/h),Mediafine)*(h)))/energynorm)
print('L2 norm: ')
print(np.sqrt((np.sum((deltaU)**2)*(h)))/l2norm)
print('H1 norm: ')
print(np.sqrt((np.sum((deltaU)**2)*(h))+(np.sum((np.diff(deltaU,axis=0)/h)**2)*(h)))/h1norm)
print(Rhonew)
plt.plot(GlobalU)
plt.plot(Urealfine)
outputsize = np.size(LOCALR,0)
inputsize = np.size(LOCALR,1)
Xfine = np.linspace(0,1,inputsize)
Xcoarse = np.linspace(0,1,outputsize)
Mediafine = a(Xfine)
Mediacoarse = a(Xcoarse)
Urealfine = u_real(Xfine).reshape([inputsize,1])
Urealcoarse = u_real(Xcoarse).reshape([outputsize,1])
h = 1/(inputsize-1)
H = 1/(outputsize-1)
LOCALA = np.zeros([outputsize,outputsize])
for i in range(outputsize):
for j in range(outputsize):
basisi = LOCALR[i][:]
basisj = LOCALR[j][:]
gxbasisi = np.gradient(basisi,h)
gxbasisj = np.gradient(basisj,h)
LOCALA[i][j] = (np.sum(Mediafine*gxbasisi*gxbasisj))*(H)
LOCALF = np.zeros([outputsize,1])
F = rightf(Xfine)
for j in range(outputsize):
basisj = LOCALR[j][:]
LOCALF[j][0] = (np.sum(F*basisj))*(H)
LOCALU = np.linalg.solve(LOCALA,LOCALF)
LocalU = np.zeros([inputsize,1])
for i in range(outputsize):
LocalU = LocalU + LOCALU[i]*LOCALR[i][:].reshape([inputsize,1])
deltaU = LocalU - Urealfine
Mediafine = a(np.linspace(0,1,inputsize-1)).reshape([inputsize-1,1])
energynorm = np.sqrt(np.sum(np.multiply(np.multiply(np.diff(Urealfine,axis=0)/h,np.diff(Urealfine,axis=0)/h),Mediafine)*(h)))
l2norm = np.sqrt((np.sum((Urealfine)**2)*(h)))
h1norm = np.sqrt((np.sum((Urealfine)**2)*(h))+(np.sum((np.diff(Urealfine,axis=0)/h)**2)*(h)))
print('Energy norm: ')
print(np.sqrt(np.sum(np.multiply(np.multiply(np.diff(deltaU,axis=0)/h,np.diff(deltaU,axis=0)/h),Mediafine)*(h))))
print('L2 norm: ')
print(np.sqrt((np.sum((deltaU)**2)*(h))))
print('H1 norm: ')
print(np.sqrt((np.sum((deltaU)**2)*(h))+(np.sum((np.diff(deltaU,axis=0)/h)**2)*(h))))
print('Energy norm: ')
print(np.sqrt(np.sum(np.multiply(np.multiply(np.diff(deltaU,axis=0)/h,np.diff(deltaU,axis=0)/h),Mediafine)*(h)))/energynorm)
print('L2 norm: ')
print(np.sqrt((np.sum((deltaU)**2)*(h)))/l2norm)
print('H1 norm: ')
print(np.sqrt((np.sum((deltaU)**2)*(h))+(np.sum((np.diff(deltaU,axis=0)/h)**2)*(h)))/h1norm)
print(current_rho)
plt.plot(LocalU)
plt.plot(Urealfine)<jupyter_output>Energy norm:
9.343056959916584
L2 norm:
0.2321267386734781
H1 norm:
6.621796934790554
Energy norm:
0.7424821016150928
L2 norm:
0.3282767820214679
H1 norm:
0.7430001269570468
0.5035089478021977
| no_license | /1dFEM/0.0063-193-1_2.ipynb | moslandwez/DMM_Autograd | 5 |
<jupyter_start><jupyter_text># Statefarm Data - Phase 5 - Including the Validation Data in Training
Comparing various models after removal of marginal-quality data and using 14000 cases of pseudo-labeled data<jupyter_code>import theano
from theano.sandbox import cuda
cuda.use('gpu0')
%matplotlib inline
IMPORT_DIR = '/home/ubuntu/nbs'
%cd $IMPORT_DIR
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
import daveutils
from daveutils import *
import davenet
from davenet import *
import my_cv_modeler
from my_cv_modeler import *
ALL_DATA_DIR = '/home/ubuntu/'
DATA_HOME_DIR = ALL_DATA_DIR+'statefarm/'
TRAIN_DIR = DATA_HOME_DIR+'train/'
VALID_DIR = DATA_HOME_DIR+'valid/'
SAMPLE_DIR = DATA_HOME_DIR+'sample/'
MODELS_DIR = DATA_HOME_DIR+'models/'
RESULTS_DIR = DATA_HOME_DIR+'results/'
TEST_DIR = DATA_HOME_DIR+'test/'<jupyter_output><empty_output><jupyter_text># 1. Prepare Data
#### Identify and remove poor-quality training data
Previously identified data that is badly classified or multi-class:<jupyter_code>%cd $DATA_HOME_DIR
ALL_DATA_DIR = '/home/ubuntu/'
TRAIN_DIR = ALL_DATA_DIR+'statefarm/train' # yes, this still includes the pseudo labelled data
VALID_DIR = ALL_DATA_DIR+'statefarm/valid' #nb Notice that I've gone back to the original directory here
from shutil import copy
#%cd $DATA_HOME_DIR
def copyFromValidToTrain(): # copies every validation image back into the matching training class folder
count = 0
g = glob(VALID_DIR+'/c?/*.jpg')
for filename in g:
#print(TRAIN_DIR+filename[28:])
copy(filename, TRAIN_DIR+filename[28:])
count+=1
print(count,"items successfully copied from ",VALID_DIR,"folder to: ",TRAIN_DIR)
copyFromValidToTrain()<jupyter_output>3827 items successfully copied from /home/ubuntu/statefarm/valid folder to: /home/ubuntu/statefarm/train
<jupyter_text># 2. Reload our previous best Sequential Vgg16 Model <jupyter_code>vgg = Dave16()
model = vgg.model
last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx+1]
count_frozen = 0
for layer in conv_layers:
layer.trainable = False
if layer.trainable == False: count_frozen+=1
print(count_frozen,"layers are frozen")
conv_model = Sequential(conv_layers)
top_hat_model = read_model(4, cross='old')
def add_bn_layers(p, model):
new_model = model
new_model.add(MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]))
new_model.add(Flatten())
new_model.add(Dropout(p/2))
new_model.add(Dense(128, activation='relu'))
#new_model.layers[len(new_model.layers)].set_weights(top.layers[3].get_weights())
new_model.add(BatchNormalization())
new_model.add(Dropout(p/2))
new_model.add(Dense(128, activation='relu'))
#new_model.layers[len(new_model.layers)].set_weights(top.layers[6].get_weights())
new_model.add(BatchNormalization())
new_model.add(Dropout(p))
new_model.add(Dense(10, activation='softmax'))
#new_model.layers[len(new_model.layers)].set_weights(top.layers[9].get_weights())
return new_model
full_model = add_bn_layers(0.5, conv_model)
full_model.layers[last_conv_idx+3+1].set_weights(top_hat_model.layers[3].get_weights())
full_model.layers[last_conv_idx+6+1].set_weights(top_hat_model.layers[6].get_weights())
full_model.layers[last_conv_idx+9+1].set_weights(top_hat_model.layers[9].get_weights())
full_model.load_weights('/home/ubuntu/statefarm/cache/model_weights1vgg_minus_val.h5')
full_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])<jupyter_output><empty_output><jupyter_text># 3. Train the Model - including use of 14k pseudo-labeled test cases
N.b. MixIterator was not used. Only test data having a prediction probability >0.995 has been used.
This data is considered to be of such good quality that it can be mixed with real data. The pseudo-labeled training data will make up 43% of the training data at this stage (39% after the validation data is added). Yes, it's a little high, but let's see how it goes. Create the image generator (no augmentation):<jupyter_code>gen = ImageDataGenerator()
val_generator = gen.flow_from_directory(
'valid',
target_size=(224, 224),
batch_size=64,
class_mode='categorical',
shuffle=True)
val_generator.N<jupyter_output>Found 3827 images belonging to 10 classes.
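<jupyter_text>As noted above, only test images predicted with probability >0.995 were pseudo-labeled and mixed into the training folders. The actual pseudo-labeling code is not shown in this notebook; below is a minimal, hypothetical sketch of how such a selection could look with the Keras 1 API used here (it assumes the test images sit where `flow_from_directory` can find them and that the class folders c0-c9 map to prediction indices in order):<jupyter_code># Hypothetical sketch, not from the original notebook
test_batches = gen.flow_from_directory(TEST_DIR, target_size=(224, 224),
                                        batch_size=64, class_mode=None, shuffle=False)
preds = full_model.predict_generator(test_batches, test_batches.N)
confident = np.max(preds, axis=1) > 0.995   # keep only very confident predictions
labels = np.argmax(preds, axis=1)           # predicted class index 0-9
for fname, keep, cls in zip(test_batches.filenames, confident, labels):
    if keep:
        # copy the confidently predicted test image into the matching training class folder
        copy(os.path.join(TEST_DIR, fname), os.path.join(TRAIN_DIR, 'c%d' % cls))<jupyter_output><empty_output>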
<jupyter_text>## Adding Data Augmentation<jupyter_code>dgen = ImageDataGenerator( rotation_range=5,
width_shift_range=0.1,
height_shift_range=0.05,
channel_shift_range = 20
)
tgenerator = dgen.flow_from_directory(
'train',
target_size=(224, 224),
batch_size=64,
class_mode='categorical',
shuffle=True)
full_model.optimizer.lr=0.00005
full_model.fit_generator(
tgenerator,
samples_per_epoch=tgenerator.N,
nb_epoch=1,
validation_data=val_generator,
nb_val_samples=500)
save_model(full_model, 1, cross='vgg16final')
full_model.optimizer.lr=0.00001
full_model.fit_generator(
tgenerator,
samples_per_epoch=tgenerator.N,
nb_epoch=1,
validation_data=val_generator,
nb_val_samples=val_generator.N)
save_model(full_model, 2, cross='vgg16final')
full_model.optimizer.lr=0.00001
full_model.fit_generator(
tgenerator,
samples_per_epoch=tgenerator.N,
nb_epoch=1,
validation_data=val_generator,
nb_val_samples=val_generator.N)
save_model(full_model, 3, cross='vgg16final')
full_model.optimizer.lr=0.00001
full_model.fit_generator(
tgenerator,
samples_per_epoch=tgenerator.N,
nb_epoch=1,
validation_data=val_generator,
nb_val_samples=val_generator.N)
save_model(full_model, 4, cross='vgg16final')<jupyter_output><empty_output>
| no_license | /my_statefarm_phase5_including_the_valdn_data_in_training.ipynb | macnaughtond/distracted-driver | 5 |
<jupyter_start><jupyter_text>
# Reading Files Python
Estimated time needed: **40** minutes
## Objectives
After completing this lab you will be able to:
- Read text files using Python libraries
Table of Contents
Download Data
Reading Text Files
A Better Way to Open a File
Download Data
<jupyter_code>import urllib.request
url = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt'
filename = 'Example1.txt'
urllib.request.urlretrieve(url, filename)
# Download Example file
!wget -O /resources/data/Example1.txt https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt<jupyter_output>--2021-01-18 13:08:28-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 45 [text/plain]
Saving to: ‘/resources/data/Example1.txt’
/resources/data/Exa 100%[===================>] 45 --.-KB/s in 0s
2021-01-18 13:08:29 (26.2 MB/s) - ‘/resources/data/Example1.txt’ saved [45/45]
<jupyter_text>
Reading Text Files
One way to read or write a file in Python is to use the built-in open function. The open function provides a File object that contains the methods and attributes you need in order to read, save, and manipulate the file. In this notebook, we will only cover .txt files. The first parameter you need is the file path and the file name. An example is shown as follow:
The mode argument is optional and the default value is r. In this notebook we only cover two modes:
- **r**: Read mode for reading files
- **w**: Write mode for writing files
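As a quick illustration, a minimal sketch of the w mode (not part of this lab's files; it creates, or overwrites, a small throwaway file):
```python
# Open (or create) a file in write mode and write two lines; 'with' closes it automatically
with open("Scratch.txt", "w") as out_file:
    out_file.write("This is line A\n")
    out_file.write("This is line B\n")
```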
For the next example, we will use the text file Example1.txt. The file is shown as follows:
We read the file:
<jupyter_code># Read the Example1.txt
example1 = "Example1.txt"
file1 = open(example1, "r")<jupyter_output><empty_output><jupyter_text> We can view the attributes of the file.
The name of the file:
<jupyter_code># Print the path of file
file1.name<jupyter_output><empty_output><jupyter_text> The mode the file object is in:
<jupyter_code># Print the mode of file, either 'r' or 'w'
file1.mode<jupyter_output><empty_output><jupyter_text>We can read the file and assign it to a variable :
<jupyter_code># Read the file
FileContent = file1.read()
FileContent<jupyter_output><empty_output><jupyter_text>The \n means that there is a new line.
We can print the file:
<jupyter_code># Print the file with '\n' as a new line
print(FileContent)<jupyter_output>This is line 1
This is line 2
This is line 3
<jupyter_text>The file is of type string:
<jupyter_code># Type of file content
type(FileContent)<jupyter_output><empty_output><jupyter_text>It is very important that the file is closed in the end. This frees up resources and ensures consistency across different python versions.
<jupyter_code># Close file after finish
file1.close()<jupyter_output><empty_output><jupyter_text>
A Better Way to Open a File
Using the with statement is better practice: it automatically closes the file even if the code encounters an exception. The code will run everything in the indented block and then close the file object.
<jupyter_code># Open file using with
with open(example1, "r") as file1:
FileContent = file1.read()
print(FileContent)<jupyter_output>This is line 1
This is line 2
This is line 3
<jupyter_text>The file object is closed, you can verify it by running the following cell:
<jupyter_code># Verify if the file is closed
file1.closed<jupyter_output><empty_output><jupyter_text> We can see the info in the file:
<jupyter_code># See the content of file
print(FileContent)<jupyter_output>This is line 1
This is line 2
This is line 3
<jupyter_text>The syntax is a little confusing as the file object is after the as statement. We also don’t explicitly close the file. Therefore we summarize the steps in a figure:
We don’t have to read the entire file; for example, we can read the first 4 characters by entering four as a parameter to the method **.read()**:
<jupyter_code># Read first four characters
with open(example1, "r") as file1:
print(file1.read(4))<jupyter_output>This
<jupyter_text>Once the method .read(4) is called the first 4 characters are called. If we call the method again, the next 4 characters are called. The output for the following cell will demonstrate the process for different inputs to the method read():
<jupyter_code># Read certain amount of characters
with open(example1, "r") as file1:
print(file1.read(4))
print(file1.read(4))
print(file1.read(7))
print(file1.read(15))<jupyter_output>This
is
line 1
This is line 2
<jupyter_text>The process is illustrated in the below figure, and each color represents the part of the file read after the method read() is called:
Here is an example using the same file, but instead we read 16, 5, and then 9 characters at a time:
<jupyter_code># Read certain amount of characters
with open(example1, "r") as file1:
print(file1.read(16))
print(file1.read(5))
print(file1.read(9))<jupyter_output>This is line 1
This
is line 2
<jupyter_text>We can also read one line of the file at a time using the method readline():
<jupyter_code># Read one line
with open(example1, "r") as file1:
print("first line: " + file1.readline())<jupyter_output>first line: This is line 1
<jupyter_text>We can also pass an argument to readline() to specify the number of characters we want to read. However, unlike read(), readline() can only read one line at most.
<jupyter_code>with open(example1, "r") as file1:
print(file1.readline(20)) # does not read past the end of line
print(file1.read(20)) # Returns the next 20 chars
<jupyter_output>This is line 1
This is line 2
This
<jupyter_text> We can use a loop to iterate through each line:
<jupyter_code># Iterate through the lines
with open(example1,"r") as file1:
i = 0;
for line in file1:
print("Iteration", str(i), ": ", line)
i = i + 1<jupyter_output>Iteration 0 : This is line 1
Iteration 1 : This is line 2
Iteration 2 : This is line 3
<jupyter_text>We can use the method readlines() to save the text file to a list:
<jupyter_code># Read all lines and save as a list
with open(example1, "r") as file1:
FileasList = file1.readlines()<jupyter_output><empty_output><jupyter_text> Each element of the list corresponds to a line of text:
<jupyter_code># Print the first line
FileasList[0]<jupyter_output><empty_output><jupyter_text># Print the second line
FileasList[1]
<jupyter_code># Print the third line
FileasList[2]<jupyter_output><empty_output><jupyter_text>
Exercise
Weather Data
Your friend, a rising star in the field of meteorology, has called on you to write a script to perform some analysis on weather station data. Given below is a file "resources/ex4.csv", which contains some precipitation data for the month of June.
Each line in the file has the format - Date,Precipitation (up to two decimal places). Note how the data is separated using ','. The first row of the file contains headers and should be ignored.
Your task is to complete the getNAvg function that computes a simple moving average for N days for the precipitation data, where N is a parameter. Your function should return a list of moving averages for the given data.
The formula for a k-day moving average over a series $n_{0}, n_{1}, n_{2}, \ldots, n_{m}$ is:
\begin{align}
M_{i} = M_{i-1} + \frac{n_{i} - n_{i-k}}{k}, \text{for i = k to m }
\\ \text{where $M_{i}$ is the moving average}
\end{align}
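To make the recursion concrete, here is a small worked example (illustrative numbers only, not from ex4.csv): with $k = 2$ and the series $1, 2, 3, 4$, the first window average is $M_{1} = (1 + 2)/2 = 1.5$; then $M_{2} = M_{1} + (3 - 1)/2 = 2.5$ and $M_{3} = M_{2} + (4 - 2)/2 = 3.5$, which matches averaging each 2-day window directly.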
The skeleton code has been provided below. Edit only the required function.
Hints:
- Each line of the file has a '\n' character which should be removed
- The lines in the file are read as strings and need to be cast to floats
- For a k-day moving average, the data points for the last k days must be known
<jupyter_code>##Download the file
!wget https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%204/ex4.csv
import matplotlib.pyplot as plt
statData ="ex4.csv"
def getNAvg(file,N):
"""
    file - File containing all the raw weather station data
    N - The number of days to compute the moving average over
    Return a list containing the moving average of all data points
"""
pass
def plotData(mean,N):
"""
mean - series to plot
N - parameter for legend
Plots running averages
"""
mean = [round(x,3) for x in mean]
plt.plot(mean,label=str(N) + ' day average')
plt.xlabel('Day')
    plt.ylabel('Precipitation')
plt.legend()
<jupyter_output><empty_output><jupyter_text>#### Once you have finished, you can you use the block below to plot your data
<jupyter_code>plotData(getNAvg(statData,1),1)
plotData ([0 for x in range(1,5)]+ getNAvg(statData,5),5 )
plotData([0 for x in range(1,7)] + getNAvg(statData,7),7)<jupyter_output><empty_output><jupyter_text>You can use the code below to verify your progress -
<jupyter_code>avg5 =[4.18,4.78,4.34,4.72,5.48,5.84,6.84,6.76,6.74,5.46,4.18,2.74,2.52,2.02,2.16,2.82,2.92,4.36,4.74,5.12,5.34,6.4,6.56,6.1,5.74,5.62,4.26]
avg7 =[4.043,4.757,5.071,5.629,6.343,5.886,6.157,5.871,5.243,4.386,3.514,2.714,2.586,2.443,2.571,3.643,4.143,4.443,4.814,5.6,6.314,6.414,5.429,5.443,4.986]
def testMsg(passed):
if passed:
return 'Test Passed'
else :
return ' Test Failed'
print("getNAvg : ")
try:
sol5 = getNAvg(statData,5)
sol7 = getNAvg(statData,7)
if(len(sol5)==len( avg5) and (len(sol7)==len(avg7))):
err5 = sum([abs(avg5[index] - sol5[index])for index in range(len(avg5))])
err7 = sum([abs(avg7[index] - sol7[index])for index in range(len(avg7))])
print(testMsg((err5 < 1) and (err7 <1)))
else:
        print(testMsg(False))
except NameError as e:
print('Error! Code: {c}, Message: {m}'.format(c = type(e).__name__, m = str(e)))
except:
print("An error occured. Recheck your function")
<jupyter_output><empty_output><jupyter_text>Click here for the solution
```python
import matplotlib.pyplot as plt
statData ="ex4.csv"
def getNAvg(file,N):
"""
    file - File containing all the raw weather station data
    N - The number of days to compute the moving average over
    Return a list containing the moving average of all data points
"""
row = 0 # keep track of rows
lastN = [] # keep track of last N points
mean = [0] # running avg
with open(file,"r") as rawData:
for line in rawData:
if (row == 0): # Ignore the headers
row = row + 1
continue
line = line.strip('\n')
lineData = float(line.split(',')[1])
if (row<=N):
lastN.append(lineData)
mean[0] = (lineData + mean[0]*(row-1))/row
else:
mean.append( mean[row - N -1]+ (lineData - lastN[0])/N)
lastN = lastN[1:]
lastN.append(lineData)
row = row +1
return mean
def plotData(mean,N):
""" Plots running averages """
mean = [round(x,3) for x in mean]
plt.plot(mean,label=str(N) + ' day average')
plt.xlabel('Day')
    plt.ylabel('Precipitation')
plt.legend()
plotData(getNAvg(statData,1),1)
plotData ([0 for x in range(1,5)]+ getNAvg(statData,5),5 )
plotData([0 for x in range(1,7)] + getNAvg(statData,7),7)
```
<jupyter_code>file ="ex4.csv"
with open(file,"r") as rawData:
    print(rawData.read())<jupyter_output><empty_output>
| no_license | /4. Python_for_Data_Science_and_AI/Week 4 - Working with Data in Python/1. Reading Files with Open/PY0101EN-4-1-ReadFile.ipynb | NicoGangi5/IBM-Data-Science-Professional-Certificate | 24 |
<jupyter_start><jupyter_text># Session 0: Preliminaries with Python/Notebook
Parag K. Mital
Creative Applications of Deep Learning w/ Tensorflow
Kadenze Academy
#CADL
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
# Learning Goals
* Install and run Jupyter Notebook with the Tensorflow library
* Learn to create a dataset of images using `os.listdir` and `plt.imread`
* Understand how images are represented when using float or uint8
* Learn how to crop and resize images to a standard size.
# Table of Contents
- [Introduction](#introduction)
- [Using Notebook](#using-notebook)
- [Cells](#cells)
- [Kernel](#kernel)
- [Importing Libraries](#importing-libraries)
- [Loading Data](#loading-data)
- [Structuring data as folders](#structuring-data-as-folders)
- [Using the `os` library to get data](#using-the-os-library-to-get-data)
- [Loading an image](#loading-an-image)
- [RGB Image Representation](#rgb-image-representation)
- [Understanding data types and ranges \(uint8, float32\)](#understanding-data-types-and-ranges-uint8-float32)
- [Visualizing your data as images](#visualizing-your-data-as-images)
- [Image Manipulation](#image-manipulation)
- [Cropping images](#cropping-images)
- [Resizing images](#resizing-images)
- [Cropping/Resizing Images](#croppingresizing-images)
- [The Batch Dimension](#the-batch-dimension)
- [Conclusion](#conclusion)
# Introduction
This preliminary session will cover the basics of working with image data in Python, and creating an image dataset. Please make sure you are running at least Python 3.4 and have Tensorflow 0.9.0 or higher installed. If you are unsure of how to do this, please make sure you have followed the [installation instructions](../README.md#installation-preliminaries). We'll also cover loading images from a directory, resizing and cropping images, and changing an image datatype from unsigned int to float32. If you feel comfortable with all of this, please feel free to skip straight to Session 1. Otherwise, launch `jupyter notebook` and make sure you are reading the `session-0.ipynb` file.
# Using Notebook
*Make sure you have launched `jupyter notebook` and are reading the `session-0.ipynb` file*. If you are unsure of how to do this, please make sure you follow the [installation instructions](../README.md#installation-preliminaries). This will allow you to interact with the contents and run the code using an interactive python kernel!
## Cells
After launching this notebook, try running/executing the next cell by pressing shift-enter on it.<jupyter_code>4*2<jupyter_output><empty_output><jupyter_text>Now press 'a' or 'b' to create new cells. You can also use the toolbar to create new cells. You can also use the arrow keys to move up and down.
## Kernel
Note the numbers on each of the cells inside the brackets, after "running" the cell. These denote the current execution count of your python "kernel". Think of the kernel as another machine within your computer that understands Python and interprets what you write as code into executions that the processor can understand.
## Importing Libraries
When you launch a new notebook, your kernel is a blank state. It only knows standard python syntax. Everything else is contained in additional python libraries that you have to explicitly "import" like so:<jupyter_code>import os<jupyter_output><empty_output><jupyter_text>After executing this cell, your kernel will have access to everything inside the `os` library which is a common library for interacting with the operating system. We'll need to use the import statement for all of the libraries that we include.
# Loading Data
Let's now move onto something more practical. We'll learn how to see what files are in a directory, and load any images inside that directory into a variable.
## Structuring data as folders
With Deep Learning, we'll always need a dataset, or a collection of data. A lot of it. We're going to create our dataset by putting a bunch of images inside a directory. Then, whenever we want to load the dataset, we will tell python to find all the images inside the directory and load them. Python lets us very easily crawl through a directory and grab each file. Let's have a look at how to do this.
## Using the `os` library to get data
We'll practice with a very large dataset called Celeb Net. This dataset has about 200,000 images of celebrities. The researchers also provide a version of the dataset which has every single face cropped and aligned so that each face is in the middle! We'll be using this aligned dataset. To read more about the dataset or to download it, follow the link here:
http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
For now, we're not going to be using the entire dataset but just a subset of it. Run the following cell which will download the first 10 images for you:<jupyter_code># Load the os library
import os
# Load the request module
import urllib.request
# Create a directory
os.mkdir('img_align_celeba')
# Now perform the following 10 times:
for img_i in range(1, 11):
# create a string using the current loop counter
f = '000%03d.jpg' % img_i
# and get the url with that string appended the end
url = 'https://s3.amazonaws.com/cadl/celeb-align/' + f
# We'll print this out to the console so we can see how far we've gone
print(url, end='\r')
# And now download the url to a location inside our new directory
urllib.request.urlretrieve(url, os.path.join('img_align_celeba', f))<jupyter_output><jupyter_text>Using the `os` package, we can list an entire directory. The documentation or docstring, says that `listdir` takes one parameter, `path`:<jupyter_code>help(os.listdir)<jupyter_output>Help on built-in function listdir in module posix:
listdir(path=None)
Return a list containing the names of the files in the directory.
path can be specified as either str or bytes. If path is bytes,
the filenames returned will also be bytes; in all other circumstances
the filenames returned will be str.
If path is None, uses the path='.'.
On some platforms, path may also be specified as an open file descriptor;\
the file descriptor must refer to a directory.
If this functionality is unavailable, using it raises NotImplementedError.
The list is in arbitrary order. It does not include the special
entries '.' and '..' even if they are present in the directory.
<jupyter_text>This is the location of the directory we need to list. Let's try this with the directory of images we just downloaded:<jupyter_code>pwd
files = os.listdir('img_align_celeba')
print(files)<jupyter_output>['000001.jpg', '000002.jpg', '000003.jpg', '000004.jpg', '000005.jpg', '000006.jpg', '000007.jpg', '000008.jpg', '000009.jpg', '000010.jpg']
<jupyter_text>We can also specify to include only certain files like so:<jupyter_code>[file_i for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i]<jupyter_output><empty_output><jupyter_text>or even:<jupyter_code>[file_i for file_i in os.listdir('img_align_celeba')
if '.jpg' in file_i and '00000' in file_i]<jupyter_output><empty_output><jupyter_text>We could also combine file types if we happened to have multiple types:<jupyter_code>[file_i for file_i in os.listdir('img_align_celeba')
if '.jpg' in file_i or '.png' in file_i or '.jpeg' in file_i]<jupyter_output><empty_output><jupyter_text>Let's set this list to a variable, so we can perform further actions on it:<jupyter_code>files = [file_i
for file_i in os.listdir('img_align_celeba')
if file_i.endswith('.jpg')]<jupyter_output><empty_output><jupyter_text>And now we can index that list using the square brackets:<jupyter_code>print(files[0])
print(files[1])<jupyter_output>000001.jpg
000002.jpg
<jupyter_text>We can even go in the reverse direction, which wraps around to the end of the list:<jupyter_code>print(files[-1])
print(files[-2])<jupyter_output>000010.jpg
000009.jpg
<jupyter_text>
## Loading an image
`matplotlib` is an incredibly powerful python library which will let us play with visualization and loading of image data. We can import it like so:<jupyter_code>import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>Now we can refer to the entire module by just using `plt` instead of `matplotlib.pyplot` every time. This is pretty common practice.
We'll now tell matplotlib to "inline" plots using an ipython magic function:<jupyter_code>%matplotlib inline<jupyter_output><empty_output><jupyter_text>This isn't python, so won't work inside of any python script files. This only works inside notebook. What this is saying is that whenever we plot something using matplotlib, put the plots directly into the notebook, instead of using a window popup, which is the default behavior. This is something that makes notebook really useful for teaching purposes, as it allows us to keep all of our images/code in one document.
Have a look at the library by using `plt`:<jupyter_code>#help(plt)
plt.absolute_import?<jupyter_output><empty_output><jupyter_text>`plt` contains a very useful function for loading images:<jupyter_code>plt.imread?<jupyter_output><empty_output><jupyter_text>Here we see that it actually returns a variable which requires us to use another library, `NumPy`. NumPy makes working with numerical data *a lot* easier. Let's import it as well:<jupyter_code>import numpy as np
# help(np)
# np.<tab><jupyter_output><empty_output><jupyter_text>Let's try loading the first image in our dataset:
We have a list of filenames, and we know where they are. But we need to combine the path to the file and the filename itself. If we try and do this:<jupyter_code># img = plt.imread(files[0])
# outputs: FileNotFoundError<jupyter_output><empty_output><jupyter_text>`plt.imread` will not know where that file is. We can tell it where to find the file by using os.path.join:<jupyter_code>print(os.path.join('img_align_celeba', files[0]))
plt.imread(os.path.join('img_align_celeba', files[0]))<jupyter_output>img_align_celeba/000001.jpg
<jupyter_text>Now we get a bunch of numbers! I'd rather not have to keep prepending the path to my files, so I can create the list of files like so:<jupyter_code>files = [os.path.join('img_align_celeba', file_i)
for file_i in os.listdir('img_align_celeba')
if '.jpg' in file_i]<jupyter_output><empty_output><jupyter_text>Let's set this to a variable, `img`, and inspect a bit further what's going on:<jupyter_code>img = plt.imread(files[0])
# img.<tab>
img.any?<jupyter_output><empty_output><jupyter_text>
## RGB Image Representation
It turns out that all of these numbers are capable of describing an image. We can use the function `imshow` to see this:<jupyter_code>img = plt.imread(files[0])
plt.imshow(img)<jupyter_output><empty_output><jupyter_text>Let's break this data down a bit more. We can see the dimensions of the data using the `shape` accessor:<jupyter_code>img.shape
# outputs: (218, 178, 3)<jupyter_output><empty_output><jupyter_text>This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels. We can use the square brackets just like when we tried to access elements of our list:<jupyter_code>plt.figure()
plt.imshow(img[:, :, 0])
plt.figure()
plt.imshow(img[:, :, 1])
plt.figure()
plt.imshow(img[:, :, 2])<jupyter_output><empty_output><jupyter_text>We use the special colon operator to say take every value in this dimension. This is saying, give me every row, every column, and the 0th dimension of the color channels.
What we see now is a heatmap of our image corresponding to each color channel.
## Understanding data types and ranges (uint8, float32)
Let's take a look at the range of values of our image:<jupyter_code>np.min(img), np.max(img)<jupyter_output><empty_output><jupyter_text>The numbers are all between 0 to 255. What a strange number you might be thinking. Unless you are one of 10 types of people in this world, those that understand binary and those that don't. Don't worry if you're not. You are likely better off.
256 values is how much information we can stick into a byte. We measure a byte using bits, and each byte takes up 8 bits. Each bit can be either 0 or 1. When we stack up 8 bits, we can express 2 to the 8th power, or 256, possible values, giving us our range, 0 to 255. You can compute any number of bits using powers of two. 2 to the power of 8 is 256. How many values can you stick in 16 bits (2 bytes)? Or 32 bits (4 bytes) of information? Let's ask python:<jupyter_code>2**32<jupyter_output><empty_output><jupyter_text>numpy arrays have a field which will tell us how many bits they are using: `dtype`:<jupyter_code>img.dtype<jupyter_output><empty_output><jupyter_text>`uint8`: Let's decompose that: `unsigned`, `int`, `8`. That means the values do not have a sign, meaning they are all positive. They are only integers, meaning no decimal places. And that they are all 8 bits.
Something which is 32-bits of information can express a single value with a range of nearly 4.3 billion different possibilities (2**32). We'll generally need to work with 32-bit data when working with neural networks. In order to do that, we can simply ask numpy for the correct data type:<jupyter_code>img.astype(np.float32)<jupyter_output><empty_output><jupyter_text>This is saying, let me see this data as a floating point number, meaning with decimal places, and with 32 bits of precision, rather than the previous data types 8 bits. This will become important when we start to work with neural networks, as we'll need all of those extra possible values!
## Visualizing your data as images
We've seen how to look at a single image. But what if we have hundreds, thousands, or millions of images? Is there a good way of knowing what our dataset looks like without looking at their file names, or opening up each image one at a time?
One way we can do that is to randomly pick an image.
We've already seen how to read the image located at one of our file locations:<jupyter_code>plt.imread(files[0])<jupyter_output><empty_output><jupyter_text>to pick a random image from our list of files, we can use the numpy random module:<jupyter_code>print(np.random.randint(0, len(files)))
print(np.random.randint(0, len(files)))
print(np.random.randint(0, len(files)))<jupyter_output>8
6
1
<jupyter_text>This function will produce random integers between a range of values that we specify. We say, give us random integers from 0 to the length of files.
We can now use the code we've written before to show a random image from our list of files:<jupyter_code>filename = files[np.random.randint(0, len(files))]
img = plt.imread(filename)
plt.imshow(img)<jupyter_output><empty_output><jupyter_text>This might be something useful that we'd like to do often. So we can use a function to help us in the future:<jupyter_code>def plot_image(filename):
img = plt.imread(filename)
plt.imshow(img)<jupyter_output><empty_output><jupyter_text>This function takes one parameter, a variable named filename, which we will have to specify whenever we call it. That variable is fed into the plt.imread function, and used to load an image. It is then drawn with plt.imshow. Let's see how we can use this function definition:<jupyter_code>f = files[np.random.randint(0, len(files))]
plot_image(f)<jupyter_output><empty_output><jupyter_text>or simply:<jupyter_code>plot_image(files[np.random.randint(0, len(files))])<jupyter_output><empty_output><jupyter_text>We use functions to help us reduce the main flow of our code. It helps to make things clearer, using function names that help describe what is going on.
# Image Manipulation
## Cropping images
We're going to create another function which will help us crop the image to a standard size and help us draw every image in our list of files as a grid.
In many applications of deep learning, we will need all of our data to be the same size. For images this means we'll need to crop the images while trying not to remove any of the important information in it. Most image datasets that you'll find online will already have a standard size for every image. But if you're creating your own dataset, you'll need to know how to make all the images the same size. One way to do this is to find the longest edge of the image, and crop this edge to be as long as the shortest edge of the image. This will convert the image to a square one, meaning its sides will be the same lengths. The reason for doing this is that we can then resize this square image to any size we'd like, without distorting the image. Let's see how we can do that:<jupyter_code>def imcrop_tosquare(img):
"""Make any image a square image.
Parameters
----------
img : np.ndarray
Input image to crop, assumed at least 2d.
Returns
-------
crop : np.ndarray
Cropped image.
"""
if img.shape[0] > img.shape[1]:
extra = (img.shape[0] - img.shape[1])
if extra % 2 == 0:
crop = img[extra // 2:-extra // 2, :]
else:
crop = img[max(0, extra // 2 - 1):min(-1, -extra // 2), :]
elif img.shape[1] > img.shape[0]:
extra = (img.shape[1] - img.shape[0])
if extra % 2 == 0:
crop = img[:, extra // 2:-extra // 2]
else:
crop = img[:, max(0, extra // 2 - 1):min(-1, -extra // 2)]
else:
crop = img
return crop<jupyter_output><empty_output><jupyter_text>There are a few things going on here. First, we are defining a function which takes as input a single variable. This variable gets named `img` inside the function, and we enter a set of if/else-if conditionals. The first branch says, if the rows of `img` are greater than the columns, then set the variable `extra` to their difference and divide by 2. The `//` notation means to perform an integer division, instead of a floating point division. So `3 // 2 = 1`, not 1.5. We need integers for the next line of code which says to set the variable `crop` to `img` starting from `extra` rows, and ending at negative `extra` rows down. We can't be on row 1.5, only row 1 or 2. So that's why we need the integer divide there. Let's say our image was 128 x 96 x 3. We would have `extra = (128 - 96) // 2`, or 16. Then we'd start from the 16th row, and end at the -16th row, or the 112th row. That adds up to 96 rows, exactly the same number of columns as we have.
Let's try another crop function which can crop by an arbitrary amount. It will take an image and a single factor from 0-1, saying how much of the original image to crop:<jupyter_code>def imcrop(img, amt):
if amt <= 0:
return img
row_i = int(img.shape[0] * amt) // 2
col_i = int(img.shape[1] * amt) // 2
return img[row_i:-row_i, col_i:-col_i]<jupyter_output><empty_output><jupyter_text>
## Resizing images
For resizing the image, we'll make use of a python library, `scipy`. Let's import the function which we need like so:<jupyter_code>#from scipy.<tab>misc import <tab>imresize<jupyter_output><empty_output><jupyter_text>Notice that you can hit tab after each step to see what is available. That is really helpful as I never remember what the exact names are.<jupyter_code>from scipy.misc import imresize
imresize?<jupyter_output><empty_output><jupyter_text>The `imresize` function takes a input image as its first parameter, and a tuple defining the new image shape as rows and then columns.
Let's see how our cropped image can be imresized now:<jupyter_code>square = imcrop_tosquare(img)
crop = imcrop(square, 0.2)
rsz = imresize(crop, (64, 64))
plt.imshow(rsz)<jupyter_output><empty_output><jupyter_text>Great! To really see what's going on, let's turn off the interpolation like so:<jupyter_code>plt.imshow(rsz, interpolation='nearest')<jupyter_output><empty_output><jupyter_text>Each one of these squares is called a pixel. Since this is a color image, each pixel is actually a mixture of 3 values, Red, Green, and Blue. When we mix those proportions of Red Green and Blue, we get the color shown here.
We can combine the Red Green and Blue channels by taking the mean, or averaging them. This is equivalent to adding each channel, `R + G + B`, then dividing by the number of color channels, `(R + G + B) / 3`. We can use the numpy.mean function to help us do this:<jupyter_code>mean_img = np.mean(rsz, axis=2)
print(mean_img.shape)
plt.imshow(mean_img, cmap='gray')<jupyter_output><empty_output><jupyter_text>This is an incredibly useful function which we'll revisit later when we try to visualize the mean image of our entire dataset.
## Cropping/Resizing Images
We now have functions for cropping an image to a square image, and a function for resizing an image to any desired size. With these tools, we can begin to create a dataset. We're going to loop over our 10 files, crop the image to a square to remove the longer edge, and then crop again to remove some of the background, and then finally resize the image to a standard size of 64 x 64 pixels.<jupyter_code>imgs = []
for file_i in files:
img = plt.imread(file_i)
square = imcrop_tosquare(img)
crop = imcrop(square, 0.2)
rsz = imresize(crop, (64, 64))
imgs.append(rsz)
print(len(imgs))<jupyter_output>10
<jupyter_text>We now have a list containing our images. Each index of the `imgs` list is another image which we can access using the square brackets:<jupyter_code>plt.imshow(imgs[0])<jupyter_output><empty_output><jupyter_text>Since all of the images are the same size, we can make use of numpy's array instead of a list.
Remember that an image has a shape describing the height, width, channels:<jupyter_code>imgs[0].shape<jupyter_output><empty_output><jupyter_text>
## The Batch Dimension
There is a convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape should be:
N x H x W x C
The Number of images, or the batch size, is first; then the Height or number of rows in the image; then the Width or number of cols in the image; then finally the number of channels the image has. A Color image should have 3 color channels, RGB. A Grayscale image should just have 1 channel.
We can combine all of our images to look like this in a few ways. The easiest way is to tell numpy to give us an array of all the images:<jupyter_code>data = np.array(imgs)
data.shape<jupyter_output><empty_output><jupyter_text>We could also use the `numpy.concatenate` function, but we have to create a new dimension for each image. Numpy let's us do this by using a special variable `np.newaxis`<jupyter_code>data = np.concatenate([img_i[np.newaxis] for img_i in imgs], axis=0)
data.shape<jupyter_output><empty_output>
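<jupyter_text>A third, equivalent option (a hedged aside, not in the original notebook) is `np.stack`, which adds the new batch dimension for us as long as every image has the same shape:<jupyter_code>data_stacked = np.stack(imgs, axis=0)
assert data_stacked.shape == data.shape  # same N x H x W x C result<jupyter_output><empty_output>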
<jupyter_start><jupyter_text># **Support Vector Machine**
### **Import the Libraries**<jupyter_code>import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
%matplotlib inline
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>### **Read the Data using Pandas Dataframe**<jupyter_code>cell_df = pd.read_csv("cell_samples.csv")
cell_df.head()<jupyter_output><empty_output><jupyter_text>### **Visualising the distribution of the classes based on Clump thickness and Uniformity of cell size**<jupyter_code>
ax = cell_df[cell_df['Class'] == 4][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='DarkBlue', label='malignant');
cell_df[cell_df['Class'] == 2][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='Yellow', label='benign', ax=ax);
plt.show()<jupyter_output><empty_output><jupyter_text>### **Checking the data type of features**<jupyter_code>cell_df.dtypes<jupyter_output><empty_output><jupyter_text>### **Removing the "object" data type from the dataset**<jupyter_code>cell_df = cell_df[pd.to_numeric(cell_df['BareNuc'], errors='coerce').notnull()]
cell_df['BareNuc'] = cell_df['BareNuc'].astype('int')
cell_df.dtypes<jupyter_output><empty_output><jupyter_text>### **Select some features to train the model**<jupyter_code>feature_df = cell_df[['Clump', 'UnifSize', 'UnifShape', 'MargAdh', 'SingEpiSize', 'BareNuc', 'BlandChrom', 'NormNucl', 'Mit']]
X = np.asarray(feature_df)
print(X[0:5])<jupyter_output>[[ 5 1 1 1 2 1 3 1 1]
[ 5 4 4 5 7 10 3 2 1]
[ 3 1 1 1 2 2 3 1 1]
[ 6 8 8 1 3 4 3 7 1]
[ 4 1 1 3 2 1 3 1 1]]
<jupyter_text>### **Select the output label**<jupyter_code>cell_df['Class'] = cell_df['Class'].astype('int')
y = np.asarray(cell_df['Class'])
print(y [0:5])<jupyter_output>[2 2 2 2 2]
<jupyter_text>### **Create the test and train set**<jupyter_code>X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)<jupyter_output>Train set: (546, 9) (546,)
Test set: (137, 9) (137,)
<jupyter_text>## **Model Training and Evaluation**
### **Import the model and train the model**<jupyter_code>from sklearn import svm
clf = svm.SVC(kernel='linear')
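# Hedged aside: the linear kernel is an assumption; an RBF kernel, e.g. svm.SVC(kernel='rbf'),
# is a common alternative worth comparing via cross-validation.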
clf.fit(X_train, y_train) <jupyter_output><empty_output><jupyter_text>### **Predict the class of the testset**<jupyter_code>yhat = clf.predict(X_test)
print(yhat [0:10])
print(y_test[0:10])<jupyter_output>[2 4 2 4 2 2 2 2 4 2]
[2 4 2 4 2 2 2 2 4 2]
<jupyter_text>### **Evaluation using F1 score and Jaccard Similarity Score**<jupyter_code>from sklearn.metrics import f1_score
from sklearn.metrics import jaccard_similarity_score
f1 = f1_score(y_test, yhat, average='weighted')
jac = jaccard_similarity_score(y_test, yhat)
print("F1 Score: %.2f" % f1)
print("Jaccard Score: %.2f" % jac)<jupyter_output>F1 Score: 0.96
Jaccard Score: 0.96
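<jupyter_text>Note: `jaccard_similarity_score` was deprecated and later removed from scikit-learn. A hedged equivalent for newer releases (assuming scikit-learn >= 0.21) is `jaccard_score`; with `average='weighted'` it computes a true per-class Jaccard index, so its value can differ from the accuracy-style score printed above.<jupyter_code>from sklearn.metrics import jaccard_score
jac_weighted = jaccard_score(y_test, yhat, average='weighted')  # weighted per-class Jaccard<jupyter_output><empty_output>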
<jupyter_text>### **Evaluation using Confusion Matrix**<jupyter_code>from sklearn.metrics import classification_report, confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')<jupyter_output><empty_output><jupyter_text>### **Compute the Confusion Matrix**<jupyter_code># Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, yhat, labels=[2,4])
np.set_printoptions(precision=2)
print (classification_report(y_test, yhat))
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Benign(2)','Malignant(4)'],normalize= False, title='Confusion matrix')<jupyter_output> precision recall f1-score support
2 1.00 0.94 0.97 90
4 0.90 1.00 0.95 47
accuracy 0.96 137
macro avg 0.95 0.97 0.96 137
weighted avg 0.97 0.96 0.96 137
Confusion matrix, without normalization
[[85 5]
[ 0 47]]
<jupyter_start><jupyter_text>## Customer Lifetime Value Analysis - Hopper<jupyter_code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

pd.set_option('display.max_columns', 100)
sns.set_palette("husl")
sns.set(rc={'image.cmap': 'coolwarm'})
%matplotlib inline
from lifetimes.plotting import *
from lifetimes.utils import *
from lifetimes import BetaGeoFitter
from lifetimes.plotting import plot_frequency_recency_matrix
from lifetimes.plotting import plot_probability_alive_matrix
from lifetimes.plotting import plot_period_transactions
from lifetimes.utils import calibration_and_holdout_data
from lifetimes.plotting import plot_calibration_purchases_vs_holdout_purchases
from lifetimes.plotting import plot_history_alive
from lifetimes import GammaGammaFitter<jupyter_output><empty_output><jupyter_text>### Data Preprocessing<jupyter_code>watch_df = pd.read_csv('search_sample_data/watch_sample.csv')
watch_df['status_latest'].value_counts()
booked_df = watch_df.loc[watch_df['status_latest'] == 'booked']
cols_of_interest = ['user_id', 'latest_status_change_dt', 'lowest_price']
booked_df = booked_df[cols_of_interest]
booked_df = booked_df[pd.notnull(booked_df['lowest_price'])]
booked_df['latest_status_change_dt'] = pd.to_datetime(booked_df['latest_status_change_dt'])
booked_df['latest_status_change_dt'] = booked_df['latest_status_change_dt'].dt.date
booked_df['latest_status_change_dt'] = pd.to_datetime(booked_df['latest_status_change_dt'])
booked_df['latest_status_change_dt'].min(), booked_df['latest_status_change_dt'].max()
booked_df['user_id'].nunique()
booked_df.head(10)
data = summary_data_from_transaction_data(booked_df, 'user_id', 'latest_status_change_dt', monetary_value_col='lowest_price', observation_period_end='2019-09-17')
data.head(10)
booked_df.loc[booked_df['user_id'] == '00702b0d3e9ad967e0df6b9f062a5cf5b83524d10ae32bc972b09da4c786dd69']<jupyter_output><empty_output><jupyter_text>### Modeling<jupyter_code>bgf = BetaGeoFitter(penalizer_coef=0.0)
bgf.fit(data['frequency'], data['recency'], data['T'])
print(bgf)
plot_frequency_recency_matrix(bgf);
plot_probability_alive_matrix(bgf);
plot_period_transactions(bgf);
t = 180
data['predicted_bookings'] = bgf.conditional_expected_number_of_purchases_up_to_time(t, data['frequency'], data['recency'], data['T'])
data.sort_values(by='predicted_bookings').tail(10)
booked_df.loc[booked_df['user_id'] == 'd4b3f8ddf91c4fbeb9b709eec30710c31c7643e5b5f647a87b66b262972bb09c'].sort_values(by='latest_status_change_dt')
data.sort_values(by='predicted_bookings').head(10)
booked_df.loc[booked_df['user_id'] == '22aec3ce1c62b59aa956d4ede63f5e660c2d457eabab8a70f128fc062c0d3305'].sort_values(by='latest_status_change_dt')
summary_cal_holdout = calibration_and_holdout_data(booked_df, 'user_id', 'latest_status_change_dt',
calibration_period_end='2018-11-20',
observation_period_end='2019-09-17' )
print(summary_cal_holdout.head())
bgf.fit(summary_cal_holdout['frequency_cal'], summary_cal_holdout['recency_cal'], summary_cal_holdout['T_cal'])
plot_calibration_purchases_vs_holdout_purchases(bgf, summary_cal_holdout)
plt.figure(figsize=(9,5))
user = 'd4b3f8ddf91c4fbeb9b709eec30710c31c7643e5b5f647a87b66b262972bb09c'
days_since_birth = 365
sp_trans = booked_df.loc[booked_df['user_id'] == user]
plot_history_alive(bgf, days_since_birth, sp_trans, 'latest_status_change_dt');
user = '22aec3ce1c62b59aa956d4ede63f5e660c2d457eabab8a70f128fc062c0d3305'
days_since_birth = 365
sp_trans = booked_df.loc[booked_df['user_id'] == user]
plot_history_alive(bgf, days_since_birth, sp_trans, 'latest_status_change_dt')
returning_customers_summary = data[data['frequency']>0]
print(returning_customers_summary.head())
returning_customers_summary[['monetary_value', 'frequency']].corr()
data['frequency'].plot(kind='hist', bins=50)
print(data['frequency'].describe())
print(sum(data['frequency'] == 0)/float(len(data)))<jupyter_output>count 4596.000000
mean 0.778503
std 1.609312
min 0.000000
25% 0.000000
50% 0.000000
75% 1.000000
max 26.000000
Name: frequency, dtype: float64
0.6303307223672759
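<jupyter_text>`GammaGammaFitter` is imported above but never used. As a hedged sketch (assuming the standard `lifetimes` API, a `bgf` model refit on the full summary, and only customers with positive monetary value), the monetary side of customer lifetime value could be added like this:<jupyter_code>positive = returning_customers_summary[returning_customers_summary['monetary_value'] > 0]

ggf = GammaGammaFitter(penalizer_coef=0.0)
ggf.fit(positive['frequency'], positive['monetary_value'])

# Expected average booking value per returning customer
expected_value = ggf.conditional_expected_average_profit(positive['frequency'],
                                                         positive['monetary_value'])

# Combine with the BG/NBD model for a 6-month CLV estimate (the discount rate is an assumption)
clv_6m = ggf.customer_lifetime_value(bgf,
                                     positive['frequency'],
                                     positive['recency'],
                                     positive['T'],
                                     positive['monetary_value'],
                                     time=6,
                                     discount_rate=0.01)<jupyter_output><empty_output>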
<jupyter_start><jupyter_text># Homework 2: More Exploratory Data Analysis
## Gene Expression Data and Election Polls
Due: Thursday, October 2, 2014 11:59 PM
Download this assignment
#### Submission Instructions
To submit your homework, create a folder named lastname_firstinitial_hw# and place your IPython notebooks, data files, and any other files in this folder. Your IPython Notebooks should be completely executed with the results visible in the notebook. We should not have to run any code. Compress the folder (please use .zip compression) and submit to the CS109 dropbox in the appropriate folder. If we cannot access your work because these directions are not followed correctly, we will not grade your work.
---
## Introduction
John Tukey wrote in [Exploratory Data Analysis, 1977](http://www.amazon.com/Exploratory-Data-Analysis-Wilder-Tukey/dp/0201076160/ref=pd_bbs_sr_2/103-4466654-5303007?ie=UTF8&s=books&qid=1189739816&sr=8-2): "The greatest value of a picture is when it forces us to notice what we never expected to see." In this assignment we will continue using our exploratory data analysis tools, but apply it to new sets of data: [gene expression](http://en.wikipedia.org/wiki/Gene_expression) and polls from the [2012 Presidental Election](http://en.wikipedia.org/wiki/United_States_presidential_election,_2012) and from the [2014 Senate Midterm Elections](http://en.wikipedia.org/wiki/United_States_Senate_elections,_2014).
**First**: You will use exploratory data analysis and apply the [singular value decomposition](http://en.wikipedia.org/wiki/Singular_value_decomposition) (SVD) to a gene expression data matrix to determine if the the date that the gene expression samples are processed has large effect on the variability seen in the data.
**Second**: You will use the polls from the 2012 Presidential Elections to determine (1) Is there a pollster bias in presidential election polls? and (2) Is the average of polls better than just one poll?
**Finally**: You will use the [HuffPost Pollster API](http://elections.huffingtonpost.com/pollster/api) to extract the polls for the current 2014 Senate Midterm Elections and provide a preliminary prediction of the result of each state.
#### Data
We will use the following data sets:
1. A gene expression data set called `exprs_GSE5859.csv` and sample annotation table called `sampleinfo_GSE5859.csv` which are both available on Github in the 2014_data repository: [expression data set](https://github.com/cs109/2014_data/blob/master/exprs_GSE5859.csv) and [sample annotation table](https://github.com/cs109/2014_data/blob/master/sampleinfo_GSE5859.csv).
2. Polls from the [2012 Presidential Election: Barack Obama vs Mitt Romney](http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama). The polls we will use are from the [Huffington Post Pollster](http://elections.huffingtonpost.com/pollster).
3. Polls from the [2014 Senate Midterm Elections](http://elections.huffingtonpost.com/pollster) from the [HuffPost Pollster API](http://elections.huffingtonpost.com/pollster/api).
---
## Load Python modules<jupyter_code># special IPython command to prepare the notebook for matplotlib
%matplotlib inline
import requests
from StringIO import StringIO
import numpy as np
import pandas as pd # pandas
import matplotlib.pyplot as plt # module for plotting
import datetime as dt # module for manipulating dates and times
import numpy.linalg as lin # module for performing linear algebra operations
# special matplotlib argument for improved plots
from matplotlib import rcParams
#colorbrewer2 Dark2 qualitative color table
dark2_colors = [(0.10588235294117647, 0.6196078431372549, 0.4666666666666667),
(0.8509803921568627, 0.37254901960784315, 0.00784313725490196),
(0.4588235294117647, 0.4392156862745098, 0.7019607843137254),
(0.9058823529411765, 0.1607843137254902, 0.5411764705882353),
(0.4, 0.6509803921568628, 0.11764705882352941),
(0.9019607843137255, 0.6705882352941176, 0.00784313725490196),
(0.6509803921568628, 0.4627450980392157, 0.11372549019607843)]
rcParams['figure.figsize'] = (10, 6)
rcParams['figure.dpi'] = 150
rcParams['axes.color_cycle'] = dark2_colors
rcParams['lines.linewidth'] = 2
rcParams['axes.facecolor'] = 'white'
rcParams['font.size'] = 14
rcParams['patch.edgecolor'] = 'white'
rcParams['patch.facecolor'] = dark2_colors[0]
rcParams['font.family'] = 'StixGeneral'<jupyter_output><empty_output><jupyter_text>## Problem 1
In this problem we will be using a [gene expression](http://en.wikipedia.org/wiki/Gene_expression) data set obtained from a [microarray](http://en.wikipedia.org/wiki/DNA_microarray) experiment [Read more about the specific experiment here](http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE5859). There are two data sets we will use:
1. The gene expression intensities where the rows represent the features on the microarray (e.g. genes) and the columns represent the different microarray samples.
2. A table that contains the information about each of the samples (columns in the gene expression data set) such as the sex, the age, the treatment status, the date the samples were processed. Each row represents one sample.
#### Problem 1(a)
Read in the two files from Github: [exprs_GSE5859.csv](https://github.com/cs109/2014_data/blob/master/exprs_GSE5859.csv) and [sampleinfo_GSE5859.csv](https://github.com/cs109/2014_data/blob/master/sampleinfo_GSE5859.csv) as pandas DataFrames called `exprs` and `sampleinfo`. Use the gene names as the index of the `exprs` DataFrame.<jupyter_code>#your code here
url_exprs = "https://raw.githubusercontent.com/cs109/2014_data/master/exprs_GSE5859.csv"
exprs = pd.read_csv(url_exprs, index_col=0)
url_sampleinfo = "https://raw.githubusercontent.com/cs109/2014_data/master/sampleinfo_GSE5859.csv"
sampleinfo = pd.read_csv(url_sampleinfo)<jupyter_output><empty_output><jupyter_text>Make sure the order of the columns in the gene expression DataFrame match the order of file names in the sample annotation DataFrame. If the order of the columns the `exprs` DataFrame do not match the order of the file names in the `sampleinfo` DataFrame, reorder the columns in the `exprs` DataFrame.
**Note**: The column names of the gene expression DataFrame are the filenames of the original files from which these data were obtained.
**Hint**: The method `list.index(x)` [[read here](https://docs.python.org/2/tutorial/datastructures.html)] can be used to return the index in the list of the first item whose value is x. It is an error if there is no such item. To check if the order of the columns in `exprs` matches the order of the rows in `sampleinfo`, you can check using the method `.all()` on a Boolean or list of Booleans:
Example code: `(exprs.columns == sampleinfo.filename).all()`First, let's check if order of the columns in the `exprs` DataFrame match the order of the `filename` in the `sampleinfo` DataFrame using the `==` Boolean operator. <jupyter_code>(exprs.columns == sampleinfo.filename).all()<jupyter_output><empty_output><jupyter_text>Now we know the columns in the `exprs` DataFrame are out of order compared to the order of the rows in the `sampleinfo` DataFrame. To check if there are any columns in the correct order, we can test for which columns are equal to the `filename` in the `sampleinfo` DataFrame. <jupyter_code>sampleinfo[exprs.columns == sampleinfo.filename]<jupyter_output><empty_output><jupyter_text>Now, we want to re-order the columns in the `exprs` DataFrame to match the order of the file names in the `sampleinfo` DataFrame. One way of doing this is to use the `list.index()` method. First, we convert the file names in `sampleinfo` and the column names of `exprs` and to two lists: `a` and `b`. Then, we use a list comprehension to iterate through each element in `a` and return the index in `b` that matches using the `list.index()` method. Once we know all the indexes, we can re-order `exprs` so that the columns match the same order as the `filenames` in `sampleinfo`. <jupyter_code>#your code here
a = list(sampleinfo.filename)
b = list(exprs.columns)
matchIndex = [b.index(x) for x in a]
exprs = exprs[matchIndex]
# check if all the column names match the file names in sampleinfo
(exprs.columns == sampleinfo.filename).all()<jupyter_output><empty_output><jupyter_text>Show the head of the two tables: `exprs` and `sampleinfo`. <jupyter_code>exprs.head()
sampleinfo.head()<jupyter_output><empty_output><jupyter_text>#### Problem 1(b)
Extract the year and month as integers from the `sampleinfo` table.
**Hint**: To convert a Series or a column of a pandas DataFrame that contains a date-like object, you can use the `to_datetime` function [[read here](http://pandas.pydata.org/pandas-docs/stable/timeseries.html)]. This will create a `DatetimeIndex` which can be used to extract the month and year for each row in the DataFrame. <jupyter_code>#your code here
sampleinfo["date"] = pd.to_datetime(sampleinfo.date)
sampleinfo["month"] = map(lambda x: x.month, sampleinfo.date)
sampleinfo["year"] = map(lambda x: x.year, sampleinfo.date)<jupyter_output><empty_output><jupyter_text>#### Problem 1(c)
Convert the dates in the `date` column from the `sampleinfo` table into days since October 31, 2002. Add a column to the `sampleinfo` DataFrame titled `elapsedInDays` containing the days since October 31, 2002. Show the head of the `sampleinfo` DataFrame which includes the new column.
**Hint**: Use the `datetime` module to create a new `datetime` object for the specific date October 31, 2002. Then, subtract the October 31, 2002 date from each date from the `date` column in the `sampleinfo` DataFrame. <jupyter_code>#your code here
oct31 = dt.datetime(2002,10,31,0,0)
oct31
sampleinfo["elapsedInDays"] = map(lambda x: (x - oct31).days, sampleinfo.date)
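# Hedged aside: pandas datetime columns support timedelta arithmetic directly, so an equivalent
# vectorized version would be: sampleinfo["elapsedInDays"] = (sampleinfo.date - oct31).dt.days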
sampleinfo.head()<jupyter_output><empty_output><jupyter_text>#### Problem 1(d)
Use exploratory analysis and the singular value decomposition (SVD) of the gene expression data matrix to determine if the date the samples were processed has large effect on the variability seen in the data or if it is just ethnicity (which is confounded with date).
**Hint**: See the end of the [lecture from 9/23/2014 for help with SVD](http://nbviewer.ipython.org/github/cs109/2014/blob/master/lectures/lecture07/data_scraping_transcript.ipynb).
First, subset the `sampleinfo` DataFrame to include only the CEU ethnicity. Call this new subsetted DataFrame `sampleinfoCEU`. Show the head of the `sampleinfoCEU` DataFrame. <jupyter_code>#your code here
sampleinfoCEU = sampleinfo[sampleinfo.ethnicity == "CEU"]
sampleinfoCEU.head()<jupyter_output><empty_output><jupyter_text>Next, subset the `exprs` DataFrame to only include the samples with the CEU ethnicity. Name this new subsetted DataFrame `exprsCEU`. Show the head of the `exprsCEU` DataFrame. <jupyter_code>#your code here
exprsCEU = exprs[sampleinfoCEU.filename]
exprsCEU.head()<jupyter_output><empty_output><jupyter_text>Check to make sure the order of the columns in the `exprsCEU` DataFrame matches the rows in the `sampleinfoCEU` DataFrame. <jupyter_code>#your code here
(exprsCEU.columns == sampleinfoCEU.filename).all()<jupyter_output><empty_output><jupyter_text>Compute the average gene expression intensity in the `exprsCEU` DataFrame across all the samples. For each sample in the `exprsCEU` DataFrame, subtract the average gene expression intensity from each of the samples. Show the head of the mean normalized gene expression data. <jupyter_code>#your code here
data = exprsCEU.apply(lambda x: x - exprsCEU.mean(axis=1), axis = 0)
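# Hedged aside: DataFrame.sub gives the same row-mean centering more directly:
# data = exprsCEU.sub(exprsCEU.mean(axis=1), axis=0)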
data.head()<jupyter_output><empty_output><jupyter_text>Using this mean normalized gene expression data, compute the projection to the first Principal Component (PC1).
**Hint**: Use the `numpy.linalg.svd()` function in the `numpy.linalg` module (or the `scipy.linalg.svd()` function in the `scipy.linalg` module) to apply a [singular value decomposition](http://en.wikipedia.org/wiki/Singular_value_decomposition) to a matrix. <jupyter_code>#your code here
U,s,Vh = lin.svd(data.values)
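# Hedged aside: for a tall genes-by-samples matrix the thin SVD is much cheaper and yields the
# same right singular vectors used below: lin.svd(data.values, full_matrices=False)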
V = Vh.T<jupyter_output><empty_output><jupyter_text>Create a histogram using the values from PC1. Use a bin size of 25. <jupyter_code>#your code here
plt.hist(V[:,0], bins = 25)
plt.xlabel('PC1')
plt.ylabel('Frequency')
plt.title('Distribution of the values from PC1')<jupyter_output><empty_output><jupyter_text>Create a scatter plot with the days since October 31, 2002 on the x-axis and PC1 on the y-axis.<jupyter_code>#your code here
plt.scatter(sampleinfoCEU.elapsedInDays, V[:,0])
plt.xlabel('Date sample was processed (Number of days since Oct 31, 2012)')
plt.ylabel('PC1')
plt.title('Relationship between the PC1 and the date the samples were processed')<jupyter_output><empty_output><jupyter_text>Around what day do you notice a difference in the way the samples were processed?<jupyter_code>#your code here
plt.scatter(sampleinfoCEU.elapsedInDays, V[:,0])
plt.xlim(0,160)
plt.xlabel('Date sample was processed (Number of days since Oct 31, 2012)')
plt.ylabel('PC1')
plt.title('Relationship between the PC1 and the date the samples were processed')
plt.axvline(x=100, color='r')<jupyter_output><empty_output><jupyter_text>Answer: There is a difference around day 100. ## Discussion for Problem 1
*Write a brief discussion of your conclusions to the questions and tasks above in 100 words or less.*
Using exploratory data analysis and the SVD of the gene expression data matrix, we see the date the samples were processed does have a large effect on the variability seen in the data. This can been seen in the scatter plots in Problem 1(d) which shows a difference around day 100.
---
## Problem 2: Is there a pollster bias in presidential election polls?
#### Problem 2(a)
The [HuffPost Pollster](http://elections.huffingtonpost.com/pollster) contains many political polls. You can download the polls for individual races as a CSV, or access them programmatically through the [HuffPost Pollster API](http://elections.huffingtonpost.com/pollster/api).
Read in the polls from the [2012 Presidential Election: Barack Obama vs Mitt Romney](http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama) into a pandas DataFrame called `election`. For this problem, you may read in the polls for this race directly using [the CSV file](http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama.csv) available from the HuffPost Pollster page.<jupyter_code>#your code here
url = "http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama.csv"
source = requests.get(url).text
s = StringIO(source)
election = pd.DataFrame.from_csv(s, index_col=None).convert_objects(
convert_dates="coerce", convert_numeric=True) # Access polls as a CSV file<jupyter_output><empty_output><jupyter_text>Show the head of the `election` DataFrame. <jupyter_code>#your code here
election.head()<jupyter_output><empty_output><jupyter_text>How many polls were conducted in November? Define this number as M.
**Hint**: Subset the `election` DataFrame for only dates in the `Start Date` column that are in November 2012. <jupyter_code>#your code here
filtered = election[map(lambda x: (x.month == 11) and (x.year ==2012), election["Start Date"])]
filtered.drop_duplicates('Pollster', inplace = True) # Removes duplicate pollsters
M = len(filtered)
print "Number of polls in November: %i" % M <jupyter_output>Number of polls in November: 18
<jupyter_text>Answer: There were 18 polls with a Start Date in November after accounting for multiple polls within a given pollster (or 19 polls total with a Start Date in November). What was the median of the number of observations in the November polls? Define this quantity as N. <jupyter_code>#your code here
N = np.median(filtered["Number of Observations"])
print N<jupyter_output>1200.0
<jupyter_text>Answer: The median number of observations in the November polls was 1200. #### Problem 2(b)
Using the median sample size $N$ from Problem 2(a), simulate the results from a single poll: simulate the number of votes for Obama out of a sample size $N$ where $p$ = 0.53 is the percent of voters who are voting for Obama.
**Hint**: Use the binomial distribution with parameters $N$ and $p$ = 0.53. <jupyter_code>#your code here<jupyter_output><empty_output><jupyter_text>To simulate the *number* of votes for Obama from a single poll, we can use the `random.binomial` function in `numpy` to simulate one binomial random variable with a sample size $N$ and $p$ which is the percent of voters who are voting for Obama. <jupyter_code>p = 0.53
"Simulated number of votes for Obama: %i" % np.random.binomial(N, p, size=1)<jupyter_output><empty_output><jupyter_text>Now, perform a Monte Carlo simulation to obtain the estimated percentage of Obama votes with a sample size $N$ where $N$ is the median sample size calculated in Problem 2(a). Let $p$=0.53 be the percent of voters are voting for Obama.
**Hint**: You will repeat the simulation above 1,000 times and plot the distribution of the estimated *percent* of Obama votes from a single poll. The results from the single poll you simulate is random variable and will be different every time you sample. <jupyter_code>#your code here<jupyter_output><empty_output><jupyter_text>To simulate the the estimate *percentage* of Obama votes, we can repeat the above simulation using a Bernoulli distribution, but just divide by the total number of votes. <jupyter_code>p = 0.53
B = 1000
obs = np.random.binomial(N, p, size = B) / N<jupyter_output><empty_output><jupyter_text>We can also write this in terms of a Bernoulli random variable $X$ where $X$ has a binary outcome with probability $p$ of success where
$$E(X) = p$$
and
$$Var(X) = p(1-p)$$
In a single simulation, we simulate N binary outcomes of 1 or 0 representing a vote for Obama or not where N represents the poll sample size. The average of the binary outcomes represents the *percent* of Obama votes from a single poll. We repeat this process 1000 times. <jupyter_code>p = 0.53
B = 1000
obs = map(lambda x: np.mean(np.random.binomial(1, p, size = N)), xrange(B))<jupyter_output><empty_output><jupyter_text>Plot the distribution of the estimated percentage of Obama votes from your single poll. What is the distribution of the estimated percentage of Obama votes? <jupyter_code>#your code here
plt.hist(obs)<jupyter_output><empty_output><jupyter_text>At first glance the distribution looks normally distributed. We can use a qqplot to visually inspect if this distribution is normally distributed. <jupyter_code>import scipy.stats as stats
stats.probplot((obs - np.mean(obs)) / np.std(obs, ddof=1), dist="norm", plot = plt)
plt.show()<jupyter_output><empty_output><jupyter_text>Answer: From the histogram and qqplot, we see the distribution looks normally distributed. What is the standard error (SE) of the estimated percentage from the poll.
**Hint**: Remember the SE is the standard deviation (SD) of the distribution of a random variable. In this example, the standard error (SE) is the standard deviation of the distribution of the percent of Obama votes from a single poll. Therefore, we can compute the standard deviation as<jupyter_code>#your code here
np.std(obs, ddof=1)<jupyter_output><empty_output><jupyter_text>From the lecture we saw if we observe $N$ Bernoulli random variables $(X_1, \ldots X_N)$, then
$$ \mbox{E}(\bar{X}) = \frac{1}{N} \sum_{i=1}^N p = p$$
and
$$\mbox{Var}(\bar{X})= \frac{1}{N^2} \sum_{i=1}^N p(1-p) = \frac{p(1-p)}{N}$$
In our example, we assume each $X_i$ is a Bernoulli distribution with $p$ = 0.53. Therefore, if $N$ = 1200, we can analytically calculate the standard deviation of $\bar{X}$ directly and compare to the standard error above<jupyter_code>np.sqrt((0.53 * 0.47) / 1200)<jupyter_output><empty_output><jupyter_text>#### Problem 2(c)
Now suppose we run M polls where M is the number of polls that happened in November (calculated in Problem 2(a)). Run 1,000 simulations and compute the mean of the M polls for each simulation. First, let's recall what M and N were in Problem 2(a): <jupyter_code>"Number of polls in November: %i" % M
"Median size of polls in November: %i" % N<jupyter_output><empty_output><jupyter_text>Within one iteration of the simulation, we want to simulate M polls each measuring the *percent* of Obama votes out of a sample size of N. We can use again the Bernoulli distribution with parameter $p$. We simulate the *percentage* of Obama votes from M polls and compute the mean across the M polls using `np.mean`.
<jupyter_code># Represents the percentage of Obama votes from M polls
def simulatePolls(p, N, M):
""" Function to simulate the results
of M polls each measuring the percent
of Obama votes out of a sample size of N
with probability p of voting for Obama
M = Number of polls to simulate
N = Sample size of each poll
p = Probability of voting for Obama """
return map(lambda x: np.mean(np.random.binomial(1, p, size = N)), xrange(M))
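# Hedged aside: because each poll average is Binomial(N, p)/N, the same M averages can be drawn
# in one vectorized call, e.g. np.random.binomial(N, p, size=M) / float(N).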
simulatePolls(p, N, M) <jupyter_output><empty_output><jupyter_text>Now, we want to repeat this simulation 1000 times. For every iteration of the simulation, we will compute the average across the 19 polls (or average of averages). <jupyter_code>p = 0.53
B = 1000
mom = map(lambda y: np.mean(simulatePolls(p, N, M)), xrange(B))<jupyter_output><empty_output><jupyter_text>What is the distribution of the average of polls?
**Hint**: Show a plot. <jupyter_code>#your code here
plt.hist(mom)<jupyter_output><empty_output><jupyter_text>Using a qqplot, we can compare this distribuiton to a normal distribution. <jupyter_code>stats.probplot((mom - np.mean(mom)) / np.std(mom, ddof=1), dist="norm", plot = plt)
plt.show()<jupyter_output><empty_output><jupyter_text>Answer: Using a histogram and qqplot, we can see this distribution is very similar to a normal distribution. What is the standard error (SE) of the average of polls? <jupyter_code>#your code here
np.std(mom, ddof = 1)<jupyter_output><empty_output><jupyter_text>Answer: <jupyter_code>"The SE of the average of polls is %g" % np.round(np.std(mom, ddof = 1), 5)<jupyter_output><empty_output><jupyter_text>Is the SE of the average of polls larger, the same, or smaller than that the SD of a single poll (calculated in Problem 2(b))? By how much?
**Hint**: Compute a ratio of the two quantities. <jupyter_code>#your code here
ratio = np.std(mom, ddof = 1) / np.std(obs, ddof = 1)
"The ratio of the SE of the average of polls to the SD of a single poll is %g" % ratio<jupyter_output><empty_output><jupyter_text>Answer: The SE of the average of polls is smaller than the SD of a single poll by approximately factor of 4. #### Problem 2(d)
Repeat Problem 2(c) but now record the *across poll* standard deviation in each simulation. <jupyter_code>#your code here
B = 1000
p = 0.53
sds = map(lambda y: np.std(simulatePolls(p, N, M), ddof = 0), xrange(B))<jupyter_output><empty_output><jupyter_text>What is the distribution of the *across M polls* standard deviation?
**Hint**: Show a plot. <jupyter_code>#your code here
plt.hist(sds)
plt.xlabel('Standard deviations across %i polls' % M)
plt.ylabel('Frequency')
plt.title('Histogram of standard deviations across %i polls' % M)
stats.probplot((sds - np.mean(sds)) / np.std(sds, ddof=1), dist="norm", plot = plt)
plt.show()<jupyter_output><empty_output><jupyter_text>Answer: Using the histogram and qqplot, we see the distribution is normally distributed. #### Problem 2(e)
What is the standard deviation of M polls in our real (not simulated) 2012 presidential election data ? <jupyter_code>#your code here
thesd = np.std(filtered["Obama"] / 100, ddof = 0)
thesd<jupyter_output><empty_output><jupyter_text>Is this larger, the same, or smaller than what we expeced if polls were not biased.<jupyter_code>#your code here
thesd / np.mean(sds)<jupyter_output><empty_output><jupyter_text>Answer: The standard deviation in our real 2012 presidential election data is smaller than what we expect if the polls were not biased. Another way of comparing the standard deviation of the M polls in the 2012 presidential eletion data is to compare it directly to the distribuiton of `sds`. We can calculate a *p*-value or the probability of seeing values as large as `thesd` or larger. <jupyter_code>np.mean(thesd > sds)<jupyter_output><empty_output><jupyter_text>We see `thesd` is consistent with the null distribution of `sds`. #### Problem 2(f)
**For AC209 Students**: Learn about the normal approximation for the binomial distribution and derive the results of Problem 2(b) and 2(c) analytically (using this approximation). Compare the results obtained analytically to those obtained from simulations.
**Solution:** A random variable $X$ that has a [Binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution) with $N$ independent trials of a binary outcome (e.g. yes/no) each with probability of success $p$ can be [approximated by a normal distribution](http://en.wikipedia.org/wiki/Binomial_distribution#Normal_approximation) if $n$ is large enough. Another way of writing this is if $X$ has a Binomial distribution with parameters $N$ and $p$, $X \sim Bin(N,p)$, and $N$ is "large enough", then $X$ can be approximated by a normal distribution with mean $Np$ and variance $Np(1-p)$, or $X \sim Normal(Np, Np(1-p))$.
In Problem 2(b), we are asking about the *percentage* of Obama votes, therefore we are actually interested in $Y_i = \frac{X_i}{N}$ where $X_i \sim Bin(N, p)$. In this case, $\mbox{Var}(Y_i) = p(1-p)$. To estimate the standard deviation of $\bar{Y}$,
$$\mbox{Var}(\bar{Y}) = \frac{1}{N^2} \sum_{i=1}^N p(1-p) = \frac{p(1-p)}{N}$$
Thus, we can analytically compute the standard deviation of $\bar{Y}$ using $\sqrt{\frac{p(1-p)}{N}}$ and compare to the standard deviation obtained from simulations. <jupyter_code># Standard deviation from simulations in 2(b)
print "SD from simulations: %g" % np.std(obs, ddof=1)
# Standard deviation computed analytically
print "SD using normal approximation %g" % np.sqrt(p * (1-p)/ N)<jupyter_output>SD from simulations: 0.0142173
SD using normal approximation 0.0144078
<jupyter_text>In Problem 2(c), we are asking about the *percentage* of Obama votes from averaged across M polls. Above we saw the variance for each poll is $\frac{p(1-p)}{N}$. Therefore we can treat the variance at the poll level as $\sigma_{poll}^2 = \frac{p(1-p)}{N}$. Then, when we average across M polls the variance becomes $\sigma_{poll}^2 / M = \frac{p(1-p)}{N \cdot M }$
<jupyter_code># Standard deviation from simulations in 2(c)
print "SD from simulations: %g" % np.std(mom, ddof=1)
# Standard deviation computed analytically
print "SD using normal approximation %g" % np.sqrt((p * (1-p)/ N) / M)<jupyter_output>SD from simulations: 0.00347033
SD using normal approximation 0.00339594
<jupyter_text>Answer: We see the SD from the simulations match the analytical results using the normal approximation. ## Discussion for Problem 2
*Write a brief discussion of your conclusions to the questions and tasks above in 100 words or less.*
In this problem we considered all the polls from pollsters in November 2012. After removing polls that came from the same pollster in Nov 2012, we did not see any pollster bias. The standard deviation from the 2012 Presidential Election was smaller than what we expect by chance.
---
## Problem 3: Is the average of polls better than just one poll?
#### Problem 3(a)
Most undecided voters vote for one of the two candidates at the election. Therefore, the reported percentages underestimate the final value of both candidates. However, if we assume the undecided will split evenly, then the observed difference should be an unbiased estimate of the final difference.
Add a new column to the `election` DataFrame containing the difference between Obama and Romney called `Diff`. <jupyter_code>#your code here
election["Diff"] = (election.Obama / 100) - (election.Romney / 100)
election.head()<jupyter_output><empty_output><jupyter_text>#### Problem 3(b)
Make a plot of the differences for the week before the election (e.g. 5 days) where the days are on the x-axis and the differences are on the y-axis. Add a horizontal line showing 3.9%: the difference between Obama and Romney on election day.
<jupyter_code>#your code here
last_day = max(election["Start Date"])
filtered = election[map(lambda x: (last_day - x).days <= 5, election["Start Date"]) ]
filtered = filtered.sort(columns=["Start Date"])
days= map(lambda x: (last_day - x).days , filtered["Start Date"])
color_map = {}
for i, p in enumerate(set(filtered.Pollster)):
color_map[p] = np.random.rand();
plt.scatter(days, filtered.Diff, c = map(lambda x: color_map[x], filtered.Pollster), s=60 )
plt.axhline(y=0.039, c = "gray")
plt.axhline(y=np.mean(filtered.Diff), c = "red")
plt.xlabel("Days")
plt.ylabel("Difference (Obama - Romney)")
plt.title("Plot of the difference between Obama and Romney colored by different pollsters in the last week")
<jupyter_output><empty_output><jupyter_text>#### Problem 3(c)
Make a plot showing the differences by pollster where the pollsters are on the x-axis and the differences on the y-axis. <jupyter_code>#your code here
pollster_map = {}
polls = list(set(filtered.Pollster))
for i, p in enumerate(polls):
pollster_map[p] = i
plt.scatter(map(lambda x: pollster_map[x],filtered.Pollster), filtered.Diff, \
c = map(lambda x: color_map[x],filtered.Pollster),s=60)
plt.xticks(range(len(polls)), polls, rotation = 90)
plt.xlabel("Pollsters")
plt.ylabel("Difference (Obama - Romney)")
plt.title("Plot of the difference between Obama and Romney by different pollsters")
plt.show()<jupyter_output><empty_output><jupyter_text>Is the *across poll* difference larger than the *between pollster* difference? Answer: For this question, we can compare the variability *within* each pollster (*across* a set of polls) compared to the variability *between* each pollster. From these two visualization it is clear that the *between* pollster difference is larger. #### Problem 3(d)
Take the average for each pollster and then compute the average of that. Given this difference, how confident would you have been of an Obama victory?
**Hint**: Compute an estimate of the SE of this average based exclusively on the observed data. <jupyter_code>#your code here
aggr = filtered.groupby("Pollster").mean()
print "Average across pollsters: %g" % np.round(np.mean(aggr.Diff),4)
print "Standard error: %g" % np.std(aggr.Diff, ddof = 0)
<jupyter_output>Average across pollsters: 0.0124
Standard error: 0.0129669
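<jupyter_text>To put a rough number on that confidence (a hedged back-of-the-envelope check, not part of the original solution), compare the average difference to the spread across pollsters:<jupyter_code>#your code here (hedged sketch)
z = np.mean(aggr.Diff) / np.std(aggr.Diff, ddof=0)
# z comes out close to 1, i.e. the lead is only about one pollster-SD above zero,
# which is consistent with the cautious answer below<jupyter_output><empty_output>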
<jupyter_text>Answer: Given the large standard error, I would not have been very confident in the Obama victory. #### Problem 3(e)
**For AC209 Students**: Show the difference against time and see if you can detect a trend towards the end. Use this trend to see if it improves the final estimate.<jupyter_code>#your code here
three_months = dt.datetime(2012,8,15,0,0)
new_data = election [map(lambda x: x >= three_months , election["Start Date"]) ]
new_data = new_data.sort("Start Date")
new_data["days"]= map(lambda x: (x - three_months).days , new_data["Start Date"])
new_data["Diff"] = (new_data.Obama/100) - (new_data.Romney/100)
new_data = new_data.groupby(["days"], as_index=False).mean()
plt.figure()
plt.plot(new_data.days, new_data.Diff )
plt.xlabel("Days from three month before the election")
plt.ylabel("Difference (Obama - Romney)")
plt.title("Difference between Obama and Romney across time")
plt.show()<jupyter_output><empty_output><jupyter_text>Answer: Around fifty days before the election, there was a change resulting a positive difference between Obama and Romney in the polls. ## Discussion for Problem 3
*Write a brief discussion of your conclusions to the questions and tasks above in 100 words or less.*
Yes, the average of polls is better than just one poll because there can be a large amount of variability between pollsters.
---
## Problem 4
In this last problem, we will use the polls from the [2014 Senate Midterm Elections](http://elections.huffingtonpost.com/pollster) from the [HuffPost Pollster API](http://elections.huffingtonpost.com/pollster/api) to create a preliminary prediction of the result of each state.
The HuffPost Pollster API allows you to access the data as a CSV or a JSON response by tacking ".csv" or ".json" at the end of the URLs. For example the 2012 Presidential Election could be accessed as a [.json](http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama.json) instead of a [.csv](http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama.csv)
#### Problem 4(a)
Read in the polls for **all** of the 2014 Senate Elections using the HuffPost API. For example, we can consider the [2014 Senate race in Kentucky between Mitch McConnell and Alison Grimes](http://elections.huffingtonpost.com/pollster/2014-kentucky-senate-mcconnell-vs-grimes).
To search for the 2014 Senate races, use the `topics` parameter in the API [[read more about topics here](http://elections.huffingtonpost.com/pollster/api)]. <jupyter_code>url_str = "http://elections.huffingtonpost.com/pollster/api/charts/?topic=2014-senate"<jupyter_output><empty_output><jupyter_text>To list all the URLs related to the 2014 Senate races using the pollster API, we can use a list comprehension:<jupyter_code>election_urls = [election['url'] + '.csv' for election in requests.get(url_str).json()]
election_urls<jupyter_output><empty_output><jupyter_text>Because there so many Senate races, we can create a dictionary of pandas DataFrames that will be keyed by the name of the election (a string). <jupyter_code>def build_frame(url):
"""
Returns a pandas DataFrame object containing
the data returned from the given url
"""
source = requests.get(url).text
# Use StringIO because pd.DataFrame.from_csv requires .read() method
s = StringIO(source)
return pd.DataFrame.from_csv(s, index_col=None).convert_objects(
convert_dates="coerce", convert_numeric=True)
# Makes a dictionary of pandas DataFrames keyed on election string.
dfs = dict((election.split("/")[-1][:-4], build_frame(election)) for election in election_urls)<jupyter_output><empty_output><jupyter_text>Show the head of the DataFrame containing the polls for the 2014 Senate race in Kentucky between McConnell and Grimes.<jupyter_code>#your code here
dfs['2014-kentucky-senate-mcconnell-vs-grimes'].head()<jupyter_output><empty_output><jupyter_text>#### Problem 4(b)
For each 2014 Senate race, create a preliminary prediction of the result for that state.
This is a very crude way of making a preliminary prediction: compute the difference between each candidate using all the polls in each race and predict the winner based on if the differences is positive or negative. We want you to be creative and use the tools you are learning in class to make more accurate predictions. <jupyter_code>#your code here
x = {}
for keys in dfs:
dat = dfs[keys]
candidate1 = dat.columns[7]
candidate2 = dat.columns[8]
dat.Diff = (dat[candidate1]/100) - (dat[candidate2]/100)
x[keys] = [candidate1, candidate2, np.round(np.mean(dat.Diff), 3)]
predictions = pd.DataFrame(x).T
predictions.columns = ['Candidate1', 'Candidate2', 'Difference']
predictions['Winner'] = np.where(predictions.Difference >=0,
predictions.Candidate1, predictions.Candidate2)
predictions<jupyter_output><empty_output>
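<jupyter_text>One hedged way to sharpen this crude estimate (an illustrative sketch, not part of the original solution) is to average only each race's most recent polls, so stale early-cycle polls do not dilute the prediction:<jupyter_code>#your code here (hedged sketch)
recent_x = {}
for key in dfs:
    dat = dfs[key]
    candidate1, candidate2 = dat.columns[7], dat.columns[8]
    cutoff = dat['Start Date'].max() - dt.timedelta(days=60)  # 60-day window is an assumption
    recent = dat[dat['Start Date'] >= cutoff]
    diff = np.round(np.mean((recent[candidate1] - recent[candidate2]) / 100), 3)
    recent_x[key] = [candidate1, candidate2, diff]
recent_predictions = pd.DataFrame(recent_x).T
recent_predictions.columns = ['Candidate1', 'Candidate2', 'Difference']
recent_predictions['Winner'] = np.where(recent_predictions.Difference >= 0,
                                        recent_predictions.Candidate1, recent_predictions.Candidate2)<jupyter_output><empty_output>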
<jupyter_start><jupyter_text># Deliverable 3<jupyter_code>import json
import pandas as pd
import numpy as np
import re
#from sqlalchemy import create_engine
from sqlalchemy import create_engine, inspect, func, MetaData, Table
import psycopg2
from config import db_password
from flask import Flask
#from flask_sqlalchemy import SQLAlchemy
import time
# Add the clean movie function that takes in the argument, "movie".
def clean_movie(movie):
movie = dict(movie) #create a non-destructive copy
alt_titles = {}
# combine alternate titles into one list
for key in ['Also known as','Arabic','Cantonese','Chinese','French',
'Hangul','Hebrew','Hepburn','Japanese','Literally',
'Mandarin','McCune-Reischauer','Original title','Polish',
'Revised Romanization','Romanized','Russian',
'Simplified','Traditional','Yiddish']:
if key in movie:
alt_titles[key] = movie[key]
movie.pop(key)
if len(alt_titles) > 0:
movie['alt_titles'] = alt_titles
return movie
# 1 Add the function that takes in three arguments;
# Wikipedia data, Kaggle metadata, and MovieLens rating data (from Kaggle)
def extract_transform_load():
# 2. Read in the kaggle metadata and MovieLens ratings CSV files as Pandas DataFrames
kaggle_metadata = pd.read_csv(f'{file_dir}movies_metadata.csv', low_memory=False)
ratings = pd.read_csv(f'{file_dir}ratings.csv')
# Open and read the Wikipedia data JSON file.
with open(wiki_file, mode = 'r') as file:
wiki_movies_raw=json.load(file)
# Write a list comprehension to filter out TV shows.
wiki_movies = [movie for movie in wiki_movies_raw
if ('Director' in movie or 'Directed by' in movie)
and 'imdb_link' in movie
and 'No. of episodes' not in movie]
# Write a list comprehension to iterate through the cleaned wiki movies list
# and call the clean_movie function on each movie.
clean_movies = [clean_movie(movie) for movie in wiki_movies]
# Read in the cleaned movies list from Step 4 as a DataFrame.
wiki_movies_df = pd.DataFrame(clean_movies)
# Write a try-except block to catch errors while extracting the IMDb ID using a regular expression string and
# dropping any imdb_id duplicates. If there is an error, capture and print the exception.
try:
wiki_movies_df['imdb_id'] = wiki_movies_df['imdb_link'].str.extract(r'(tt\d{7})')
wiki_movies_df.drop_duplicates(subset="imdb_id", inplace=True)
except Exception as e:
print(e) # Trackback Exception
# Write a list comprehension to keep the columns that don't have null values from the wiki_movies_df DataFrame.
wiki_columns_to_keep = [column for column in wiki_movies_df.columns if wiki_movies_df[column].isnull().sum() < len(wiki_movies_df) * 0.9]
wiki_movies_df = wiki_movies_df[wiki_columns_to_keep]
#wiki_movies_df.head()
# Create a variable that will hold the non-null values from the “Box office” column.
box_office = wiki_movies_df['Box office'].dropna()
# Convert the box office data created in Step 8 to string values using the lambda and join functions.
box_office = box_office.apply(lambda x: ' '.join(x) if type(x) == list else x)
# Write a regular expression to match the six elements of "form_one" of the box office data.
form_one = r'\$\s*\d+\.?\d*\s*[mb]illi?on'
# Write a regular expression to match the three elements of "form_two" of the box office data.
form_two = r'\$\s*\d{1,3}(?:[,\.]\d{3})+(?!\s[mb]illion)'
# Add the parse_dollars function.
def parse_dollars(s):
# if s is not a string, return NaN
if type(s) != str:
return np.nan
# if input is of the form $###.# million
if re.match(r'\$\s*\d+\.?\d*\s*milli?on', s, flags=re.IGNORECASE):
# remove dollar sign and " million"
s = re.sub('\$|\s|[a-zA-Z]','', s)
# convert to float and multiply by a million
value = float(s) * 10**6
# return value
return value
# if input is of the form $###.# billion
elif re.match(r'\$\s*\d+\.?\d*\s*billi?on', s, flags=re.IGNORECASE):
# remove dollar sign and " billion"
s = re.sub('\$|\s|[a-zA-Z]','', s)
# convert to float and multiply by a billion
value = float(s) * 10**9
# return value
return value
# if input is of the form $###,###,###
elif re.match(r'\$\s*\d{1,3}(?:[,\.]\d{3})+(?!\s[mb]illion)', s, flags=re.IGNORECASE):
# remove dollar sign and commas
s = re.sub('\$|,','', s)
# convert to float
value = float(s)
# return value
return value
# otherwise, return NaN
else:
return np.nan
# Clean the box office column in the wiki_movies_df DataFrame.
# Fix pattern matches
form_one = r'\$\s*\d+\.?\d*\s*[mb]illi?on'
form_two = r'\$\s*\d{1,3}(?:,\d{3})+'
form_two = r'\$\s*\d{1,3}(?:[,\.]\d{3})+(?!\s[mb]illion)'
form_one = r'\$\s*\d+\.?\d*\s*[mb]illi?on'
box_office = box_office.str.replace(r'\$.*[-—–](?![a-z])', '$', regex=True)
# Extract and convert box office values
wiki_movies_df['box_office'] = box_office.str.extract(f'({form_one}|{form_two})', flags=re.IGNORECASE)[0].apply(parse_dollars)
# Drop box office column
wiki_movies_df.drop('Box office', axis=1, inplace=True)
# Clean the budget column in the wiki_movies_df DataFrame.
# Create a budget variable and drop null values from budget column
budget = wiki_movies_df["Budget"].dropna()
# Convert any list to strings (parse data)
budget = budget.map(lambda x: ' '.join(x) if type(x) == list else x)
# Remove values between dollar sign and a hyphen
budget = budget.str.replace(r'\$.*[-—–](?![a-z])', '$', regex=True)
# Create 2 variables which contains the rows matching with the two forms
matches_form_one = budget.str.contains(form_one, flags=re.IGNORECASE)
matches_form_two = budget.str.contains(form_two, flags=re.IGNORECASE)
budget[~ matches_form_one & ~ matches_form_two]
# Remove citation references
budget = budget.str.replace(r'\[\d+\]\s*', '')
# Apply extract and parsing
wiki_movies_df['budget'] = budget.str.extract(f'({form_one}|{form_two})', flags=re.IGNORECASE)[0].apply(parse_dollars)
# Drop budget column
# wiki_movies_df.drop('Budget', axis=1, inplace=True)
# Clean the release date column in the wiki_movies_df DataFrame.
release_date = wiki_movies_df['Release date'].dropna().apply(lambda x: ' '.join(x) if type(x) == list else x)
# Write regular expressions to match date formats
date_form_one = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\s[123]\d,\s\d{4}'
date_form_two = r'\d{4}.[01]\d.[123]\d'
date_form_three = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\s\d{4}'
date_form_four = r'\d{4}'
# Extract the dates
release_date.str.extract(f'({date_form_one}|{date_form_two}|{date_form_three}|{date_form_four})', flags=re.IGNORECASE)
# Parse the dates
wiki_movies_df['release_date'] = pd.to_datetime(release_date.str.extract(f'({date_form_one}|{date_form_two}|{date_form_three}|{date_form_four})')[0], infer_datetime_format=True)
# Drop release date column
wiki_movies_df.drop('Release date', axis=1, inplace=True)
# Clean the running time column in the wiki_movies_df DataFrame.
# Create a variable to hold the non-null values of relese date in the DataFrame, converting lists to strings
running_time = wiki_movies_df['Running time'].dropna().apply(lambda x: " ".join(x) if type(x) == list else x)
# Accept other abbreviations of "minutes"
running_time.str.contains(r'^\d*\s*m', flags=re.IGNORECASE).sum()
running_time[running_time.str.contains(r'^\d*\s*m', flags=re.IGNORECASE) != True]
# Extract running times and match all the hour + minute patterns
running_time_extract = running_time.str.extract(r'(\d+)\s*ho?u?r?s?\s*(\d*)|(\d+)\s*m')
# Convert to numeric and change all NaNs to zeroes
running_time_extract = running_time_extract.apply(lambda col: pd.to_numeric(col, errors='coerce')).fillna(0)
# Apply function to convert hour and minute capture groups
wiki_movies_df['running_time'] = running_time_extract.apply(lambda row: row[0]*60 + row[1] if row[2] == 0 else row[2], axis=1)
# Drop running time column from the dataset
wiki_movies_df.drop('Running time', axis=1, inplace=True)
# 2. Clean the Kaggle metadata.
# Check data types of columns
kaggle_metadata.dtypes
# Check that all values are either True or False for "adult" and "video" columns
kaggle_metadata['adult'].value_counts()
# Remove bad data
kaggle_metadata[~kaggle_metadata['adult'].isin(['True','False'])]
# Keep columns where 'adult'is False and drop the 'adult' column
kaggle_metadata = kaggle_metadata[kaggle_metadata['adult'] == 'False'].drop('adult',axis='columns')
# Check values of "video" column
kaggle_metadata['video'].value_counts()
# Convert data types
kaggle_metadata['video'] = kaggle_metadata['video'] == 'True'
kaggle_metadata['budget'] = kaggle_metadata['budget'].astype(int)
kaggle_metadata['id'] = pd.to_numeric(kaggle_metadata['id'], errors='raise')
kaggle_metadata['popularity'] = pd.to_numeric(kaggle_metadata['popularity'], errors='raise')
kaggle_metadata['release_date'] = pd.to_datetime(kaggle_metadata['release_date'])
# Convert timestamp column to a datetime data type to store the rating data
ratings['timestamp'] = pd.to_datetime(ratings['timestamp'], unit='s')
# 3. Merged the two DataFrames into the movies DataFrame.
# Merge Kaggle metdata DataFrame and the Wikipedia movies DataFrame
movies_df = pd.merge(wiki_movies_df, kaggle_metadata, on='imdb_id', suffixes=['_wiki','_kaggle'])
#print(movies_df.head())
# 4. Drop unnecessary columns from the merged DataFrame.
#movies_df.drop(columns=['title_wiki','release_date_wiki','Language'], inplace=True)
movies_df.drop(columns=['title_wiki','release_date_wiki','Language'], inplace=True)
# 5. Add in the function to fill in the missing Kaggle data.
# Function to fill in missing data and drop redundant wiki_column
def fill_missing_kaggle_data(df, kaggle_column, wiki_column):
df[kaggle_column] = df.apply(lambda row: row[wiki_column] if row[kaggle_column] == 0 else row[kaggle_column], axis=1)
df.drop(columns=wiki_column, inplace=True)
# 6. Call the function in Step 5 with the DataFrame and columns as the arguments.
fill_missing_kaggle_data(movies_df, 'runtime', 'running_time')
fill_missing_kaggle_data(movies_df, 'budget_kaggle', 'budget_wiki')
fill_missing_kaggle_data(movies_df, 'revenue', 'box_office')
# 7. Filter the movies DataFrame for specific columns.
for col in movies_df.columns:
lists_to_tuples = lambda x: tuple(x) if type(x) == list else x
value_counts = movies_df[col].apply(lists_to_tuples).value_counts(dropna=False)
num_values = len(value_counts)
if num_values == 1:
print(col)  # report columns that hold only a single value
# 8. Rename the columns in the movies DataFrame.
movies_df.rename({'id':'kaggle_id',
'title_kaggle':'title',
'url':'wikipedia_url',
'budget_kaggle':'budget',
'release_date_kaggle':'release_date',
'Country':'country',
'Distributor':'distributor',
'Producer(s)':'producers',
'Director':'director',
'Starring':'starring',
'Cinematography':'cinematography',
'Editor(s)':'editors',
'Writer(s)':'writers',
'Composer(s)':'composers',
'Based on':'based_on'
}, axis='columns', inplace=True)
# 9. Transform and merge the ratings DataFrame.
# Count "movieID" and "rating" columns
rating_counts = ratings.groupby(['movieId','rating'], as_index=False).count()
# Raname the "userID" column to "count"
rating_counts = ratings.groupby(['movieId','rating'], as_index=False).count() \
.rename({'userId':'count'}, axis=1)
# Pivot the data to make "movieID" the index
rating_counts = ratings.groupby(['movieId','rating'], as_index=False).count() \
.rename({'userId':'count'}, axis=1) \
.pivot(index='movieId',columns='rating', values='count')
# Prepend rating to each column with list comprehension to rename the column
rating_counts.columns = ['rating_' + str(col) for col in rating_counts.columns]
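# e.g. a rating value of 0.5 becomes a column named 'rating_0.5' (illustrative example)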
# Merge the rating counts into movie_df
movies_with_ratings_df = pd.merge(movies_df, rating_counts, left_on='kaggle_id', right_index=True, how='left')
# Fill in missing values
movies_with_ratings_df[rating_counts.columns] = movies_with_ratings_df[rating_counts.columns].fillna(0)
# Create connection to the PostgreSQL database
engine = create_engine(f"postgresql://postgres:{db_password}@127.0.0.1:5432/movie_data")
# 10. Create the path to your file directory and variables for the three files.
file_dir = './'
# The Wikipedia data
wiki_file = f'{file_dir}/wikipedia-movies.json'
# The Kaggle metadata
kaggle_file = f'{file_dir}/movies_metadata.csv'
# The MovieLens rating data.
ratings_file = f'{file_dir}/ratings.csv'
# 11. Set the three variables equal to the function created in D1.
wiki_file, kaggle_file, ratings_file = extract_transform_load()
# 12. Set the DataFrames from the return statement equal to the file names in Step 11.
wiki_movies_df = wiki_file
movies_with_ratings_df = kaggle_file
movies_df = ratings_file
# 13. Check the wiki_movies_df DataFrame.
wiki_movies_df.head()
# 14. Check the movies_with_ratings_df DataFrame.
movies_with_ratings_df.head()
# 15. Check the movies_df DataFrame.
movies_df.head()<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Course Description
The R data.table package is rapidly making its name as the number one choice for handling large datasets in R. This online data.table tutorial will bring you from data.table novice to expert in no time. Once you are introduced to the general form of a data.table query, you will learn the techniques to subset your data.table, how to update by reference and how you can use data.table’s set()-family in your workflow. The course finishes with more complex concepts such as indexing, keys and fast ordered joins. Upon completion of the course, you will be able to use data.table in R for a more efficient manipulation and analysis process. Enjoy!<jupyter_code>library(data.table)<jupyter_output><empty_output><jupyter_text># 1. Data.table novice
Introduction on what exactly a data.table is, how it differs from the traditional data.frame in R, and understanding the general form of a data.table query.## Section 1 - Introduction - Video### Create and subset a data.table
Welcome to the interactive exercises for your data.table course. Here you will learn the ins and outs of working with the data.table package.
While most of the material is covered by Matt and Arun in the videos, you will sometimes need to show some street smarts to get to the right answer. Remember that before using the hint you can always have a look at the official documentation by typing ?data.table in the console.
Let's start with some warm-up exercises based on the topics covered in the video. Recall from the video that you can use L after a numeric to specify that it is an integer. You can also give columns with different lengths when creating a data.table, and R will "recycle" the shorter column to match the length of the longer one by re-using the first items. In the example below, column x is recycled to match the length of column y:

    data.table(x = c("A", "B"), y = 1:4)
       x y
    1: A 1
    2: B 2
    3: A 3
    4: B 4

You can also review the slides used in the videos by pressing the slides button.
INSTRUCTIONS
100 XP
- Create a data.table my_first_data_table with a column x = c("a", "b", "c", "d", "e") and a column y = c(1, 2, 3, 4, 5). Use the function data.table().
- Create a two-column data.table DT that contains the four integers 1, 2, 1 and 2 in the first column a and the letters A, B, C and D in the second column b. Use recycling so that the contents of a will be automatically used twice. Note that LETTERS[1] returns "A", LETTERS[2] returns "B", and so on.
- Select the third row of DT and just print the result to the console.
- Select the second and third rows without using commas and print the result to the console.
<jupyter_code># The data.table package is preloaded
# Create my_first_data_table
my_first_data_table <- data.table(x = c("a", "b", "c", "d", "e"),
y = c(1, 2, 3, 4, 5))
# Create a data.table using recycling
DT <- data.table(a = c(1L, 2L), b = LETTERS[1:4])
# Print the third row to the console
print(DT[3,])
# Print the second and third row to the console without using commas
print(DT[2:3,])<jupyter_output> a b
1: 1 C
a b
1: 2 B
2: 1 C
<jupyter_text>### Getting to know a data.table
You can pass a data.table to base R functions like `head()` and `tail()` that accept a data.frame because data.tables are also data.frames. Also, keep in mind that the special symbol .N, when used inside square brackets, contains the number of rows. For example, `DT[.N]` and `DT[nrow(DT)]` will both return the last row in DT.
INSTRUCTIONS
100 XP
- Select the second to last row of the table using .N.
- Return the column names() of the data.table.
- Return the number of rows and number of columns of the data.table using the dim() function.
- Select row 2 twice and row 3 once, returning a data.table with three rows (two of which are identical).<jupyter_code># DT and the data.table package are pre-loaded
# Print the second to last row of DT using .N
print(DT[.N-1])
# Print the column names of DT
names(DT)
# Print the number or rows and columns of DT
dim(DT)
# Print a new data.table containing rows 2, 2, and 3 of DT
DT[c(2,2,3),]<jupyter_output> a b
1: 1 C
<jupyter_text>## Section 2 - Selecting columns in j - Video### A data.table of a vector?
A data.table DT is preloaded in your workspace on the right. Type DT in the console to have a look at it. As you have learned in the video, you can select a column from that data.table with DT[, .(B)].
What do you think is the output of DT[, B]?
INSTRUCTIONS
50 XP
Possible Answers
- A data.table
- press 1
- A vector (Correct)
- press 2<jupyter_code>DT = data.table(A=1:5, B=letters[1:5], C = 6:10)
DT
DT[, .(B)]
DT[, B]<jupyter_output><empty_output><jupyter_text>Correct. When you use .() in j, the result is always a data.table. For convenience, data.table also provides the option to return a vector while computing on just a single column and not wrapping it with .().### A non-existing column
Have a close look at 1.1 and 1.2 from the data.table package FAQs.
Type D <- 5 in the console. What do you think is the output of DT[, .(D)] and DT[, D]?
INSTRUCTIONS
50 XP
Possible Answers
Both outputs give an error.
- press 1
- DT[, D] returns 5 as vector, and DT[, .(D)] returns 5 as data.table. (Correct)
- press 2
- DT[, D] returns 5 as data.table, and DT[, .(D)] returns 5 as vector.
- press 3
- DT[, D] returns 5 as vector, and DT[, .(D)] returns an error.
- press 4
- DT[, D] returns an error, and DT[, .(D)] returns 5 as data.table.
- press 5Well done! Column D does not exist in DT and is thus not seen as a variable. This causes data.table to look for D in DT's parent frame. Also note that .() in j always returns a data.table.### Subsetting data.tables
As a reminder, `DT[i, j, by]` is pronounced
Take DT, subset rows using i, then calculate j grouped by by.
In the video, the second argument j was covered. j can be used to select columns by wrapping the column names in .().
In addition to selecting columns, you can also call functions on them as if the columns were variables. For example, if you had a data.table heights storing people's heights in inches, you could compute their heights in feet as follows:

       name eye_color height_inch
    1:   Tom     Brown          69
    2: Boris      Blue          71
    3:   Jim      Blue          68

    > heights[, .(name,
                  height_ft = height_inch / 12)]
        name height_ft
    1:   Tom  5.750000
    2: Boris  5.916667
    3:   Jim  5.666667

INSTRUCTIONS
100 XP
- Create a subset containing the columns B and C for rows 1 and 3 of DT. Simply print out this subset to the console.
- From DT, create a data.table, ans with two columns: B and val, where val is the product of A and C.
- Fill in the blanks in the assignment of ans2, such that it equals the data.table specified in target. Use columns from the previously defined data.tables to produce the val column.<jupyter_code># DT and the data.table package are pre-loaded
# Subset rows 1 and 3, and columns B and C
DT[c(1,3), .(B,C)]
# Assign to ans the correct value
ans = DT[,.(B, val=A*C)]
# Fill in the blanks such that ans2 equals target
target <- data.table(B = c("a", "b", "c", "d", "e",
"a", "b", "c", "d", "e"),
val = as.integer(c(6:10, 1:5)))
ans2 <- DT[, .(B, val = as.integer(c(6:10, 1:5)))]<jupyter_output><empty_output><jupyter_text>Great job! Did you notice B is recycled?### Section 3 - Doing j by group - Video### The by basics
In this section you were introduced to the last of the main parts of the data.table syntax: by. If you supply a j expression and a by list of expressions, the j expression is repeated for each by group. Time to master the by argument with some hands-on examples and exercises.
First, just print iris to the console and observe that all rows are printed and that the column names scroll off the top of your screen. This is because iris is a data.frame. Scroll back up to the top to see the column names.
INSTRUCTIONS
100 XP
- Convert the iris dataset to a data.table DT. You're now ready to use data.table magic on it!
- Create a new column containing the mean Sepal.Length for each Species. Do not provide a name for this newly created column.
- Do exactly the same as in the instruction above, but this time, group by the first letter of the Species name instead. Use substr() for this.<jupyter_code># iris is already available in your workspace
# Convert iris to a data.table: DT
DT =data.table(iris)
# For each Species, print the mean Sepal.Length
DT[,mean(Sepal.Length), by=Species]
#DT[,.(mean= mean(Sepal.Length)), by=Species]
# Print mean Sepal.Length, grouping by first letter of Species
DT[,mean(Sepal.Length), by=.(substr(Species,1,1))]<jupyter_output><empty_output><jupyter_text>### Using .N and by
You saw earlier that .N can be used in i and that it designates the number of rows in DT. There, it is typically used for returning the last row or an offset from it. .N can be used in j too and designates the number of rows in this group. This becomes very powerful when you use it in combination with by.
DT, a data.table version of iris, is already loaded in your workspace, so you can start experimenting right away. In this exercise, you will group by sepal area. Though sepals aren't rectangles, just multiply the length by the width to calculate the area.
INSTRUCTIONS
100 XP
- Group the specimens by Sepal area (Sepal.Length * Sepal.Width) to the nearest 10 cm2. Count how many occur in each group by specifying .N in j. Simply print the resulting data.table. Use the template in the sample code by filling in the blanks.
- Copy and adapt the solution to the above question, to name the columns Area and Count, respectively.<jupyter_code># data.table version of iris: DT
DT <- as.data.table(iris)
# Group the specimens by Sepal area (to the nearest 10 cm2) and count how many occur in each group
DT[, .N, by = 10 * round(Sepal.Length * Sepal.Width / 10)]
# Now name the output columns `Area` and `Count`
DT[, .(Count=.N), by = .(Area=10 * round(Sepal.Length * Sepal.Width / 10))]<jupyter_output><empty_output><jupyter_text>Correct! Notice that the order of the groups is retained, just like when they first appeared in DT. This exercise was not that simple, so it's good to see you've made it!## Return multiple numbers in j
In the previous exercises, you've returned only single numbers in j. However, this is not necessary. You'll experiment with this via a new data.table DT, which has already been specified in the sample code.
INSTRUCTIONS
100 XP
- Create a new data.table DT2 with 3 columns, A, B and C, where C is the cumulative sum of the C column of DT. Call the `cumsum()` function in the j argument, and group by `.(A, B)` (i.e. both columns A and B).
- Select from DT2 the last two values of C using the `tail()` function, and assign that to column C while you group by A alone. Make sure the column names don't change.#### HINT
Check your code for the second instruction. Use `tail(C, 2)` to specify the C column in the j part. Simply group by A.<jupyter_code># Create the data.table DT
DT <- data.table(A = rep(letters[2:1], each = 4L),
B = rep(1:4, each = 2L),
C = sample(8))
# Create the new data.table, DT2
DT2 = DT[, .(C=cumsum(C)), by=.(A,B)]
# Select from DT2 the last two values from C while you group by A
DT2[,.(C=tail(C,2)), by=A]<jupyter_output><empty_output><jupyter_text>Good job! Time to do some chaining…
You have finished the chapter "Data.table novice"!# 2. Data.table yeoman
Learn how to do multiple operations on the same data.table in one single statement, how to easily take a subset of your data, update by reference, and work with the data.table set()-family.## Section 4 - Chaining### Chaining, the basics
Now that you are comfortable with data.table's DT[i, j, by] syntax, it is time to practice some other very useful concepts in data.table. Here, we'll have a more detailed look at chaining.
Chaining allows the concatenation of multiple operations in a single expression. It's easy to read because the operations are carried out from left to right. Furthermore, it helps to avoid the creation of unnecessary temporary variables (which could quickly clutter one's workspace).
INSTRUCTIONS
100 XP
- In the previous section, you calculated DT2 by taking the cumulative sum of C while grouping by A and B. Next, you selected the last two values of C from DT2 while grouping by A alone. This code is included in the sample code. Use chaining to restructure the code. Simply print out the result of chaining.<jupyter_code># The data.table package has already been loaded
# Build DT
DT <- data.table(A = rep(letters[2:1], each = 4L),
B = rep(1:4, each = 2L),
C = sample(8))
# Combine the two steps in a one-liner
#DT2 <- DT[, .(C = cumsum(C)), by = .(A, B)]
#DT2[, .(C = tail(C, 2)), by = A]
DT[, .(C = cumsum(C)), by = .(A, B)][, .(C = tail(C, 2)), by = A]<jupyter_output><empty_output><jupyter_text>Great! Note that chaining can significantly reduce the amount of instructions necessary in your code.### Chaining your iris dataset
In the previous chapter, you converted the iris dataset to a data.table DT. This DT is already available in your workspace. Print DT to the console to remind yourself of its contents. Now, let's see how you can use chaining to simplify manipulations and calculations.
INSTRUCTIONS
100 XP
- Get the median of each of the four columns Sepal.Length, Sepal.Width, Petal.Length and Petal.Width, while grouping by Species. Reuse the same column names (e.g. the column containing the median Sepal.Length is still called Sepal.Length). Next, order() Species in descending order using chaining. This is deliberately repetitive, but we have a solution for you in the next exercise!
<jupyter_code>DT = as.data.table(iris)
# The data.table DT is loaded in your workspace
# Perform chained operations on DT
DT[, .(Sepal.Length = median(Sepal.Length),
Sepal.Width = median(Sepal.Width),
Petal.Length = median(Petal.Length),
Petal.Width = median(Petal.Width)),
by = Species][order(Species, decreasing=TRUE)]
#Also,
#DT[,lapply(.SD,median), by=Species]<jupyter_output><empty_output><jupyter_text>Nicely done, but this is a little bit repetitive and an error can easily be made if you aren't careful. Maybe there is a better way to do this kind of analysis? Let's find out…## Section 5 - Subset of Data - VideoProgramming time vs readability
It is a good idea to make use of familiar functions from base R to reduce programming time without losing readability.
The data.table package provides a special built-in variable .SD. It refers to the subset of data for each unique value of the by argument. That is, the number of observations in the output will be equal to the number of unique values in by.
Recall that the by argument allows us to separate a data.table into groups. We can now use the .SD variable to reference each group and apply functions separately. For example, suppose we had a data.table storing information about dogs:

    Sex Weight Age Height
      M     40   1     12
      F     30   4      7
      F     80  12      9
      M     90   3     14
      M     40   6     12

We could then use

    dogs[, lapply(.SD, mean), by = Sex]

to produce average weights, ages, and heights for male and female dogs separately:

       Sex   Weight      Age   Height
    1:   M 56.66667 3.333333 12.66667
    2:   F 55.00000 8.000000  8.00000

A data.table DT has been created for you and is available in the workspace. Type DT in the console to print it out and inspect it.
INSTRUCTIONS
100 XP
- Get the mean of columns y and z grouped by x by using .SD.
- Get the median of columns y and z grouped by x by using .SD.<jupyter_code>DT = data.table(x = c(2, 1, 2, 1, 2, 2, 1), y = c(1, 3, 5, 7,
9, 11, 13), z = c(2, 4, 6, 8, 10, 12, 14))
# A new data.table DT is available
# Mean of columns
DT[,lapply(.SD,mean), by=x]
# Median of columns
DT[,lapply(.SD, median), by=x]
#DT = data.table(x = c(2, 1, 2, 1, 2, 2, 1),
#y = c(1, 3, 5, 7, 9, 11, 13),
#z = c(2, 4, 6, 8, 10, 12, 14))<jupyter_output><empty_output><jupyter_text>Well done! Let's have a look at .SDcols now.### Introducing .SDcols
.SDcols specifies the columns of DT that are included in .SD. Using .SDcols comes in handy if you have too many columns and you want to perform a particular operation on a subset of the columns (apart from the grouping variable columns).
Using .SDcols allows you to apply a function to all rows of a data.table, but only to some of the columns. For example, consider the dog example from the last exercise. If you wanted to compute the average weight and age (the second and third columns) for all dogs, you could assign .SDcols accordingly:

    dogs[, lapply(.SD, mean), .SDcols = 2:3]

       Weight Age
    1:     56 5.2

While learning the data.table package, you may want to occasionally refer to the documentation. Have a look at ?data.table for more info on .SDcols.
Yet another data.table, DT, has been prepared for you in your workspace. Start by printing it to the console.
INSTRUCTIONS
70 XP
- Calculate the sum of the columns that start with Q, using .SD and .SDcols. Set .SDcols equal to 2:4.
- Set .SDcols to be the result of a function call. This time, calculate the sum of columns H1 and H2 using paste0() to specify the .SDcols argument.
- Finally, select all but the first row of the groups names 6 and 8, returning only the grp column and the columns that start with Q. Use -1 in i of .SD and use paste0() again. Type desired_result into the console to see what your answer should look like.<jupyter_code>DT = data.table(grp = c(6, 6, 8, 8, 8), Q1 = c(4L, 4L, 4L, 2L,
4L), Q2 = c(4L, 2L, 2L, 2L, 2L), Q3 = c(5L, 3L, 5L, 5L, 1L),
H1 = c(2L, 3L, 5L, 3L, 2L), H2 = c(4L, 5L, 4L, 2L, 2L))
# A new data.table DT is available
# Calculate the sum of the Q columns
DT[, lapply(.SD,sum), .SDcols=2:4]
# Calculate the sum of columns H1 and H2
#DT[,lapply(.SD, sum), .SDcols=5:6]
DT[,lapply(.SD, sum), .SDcols=paste0("H",1:2)]
# Select all but the first row of groups 1 and 2, returning only the grp column and the Q columns
DT[,.SD[-1], .SDcols=2:4, by=grp]<jupyter_output><empty_output><jupyter_text>#### HINT
- For the second instruction, make sure to use .SDcols to specify the columns to be included in .SD and use lapply(.SD, sum).
- The paste0() function pastes together two or more character vectors together with no space in between (hence, the 0 in the function name). In this exercise, the two vectors you want to pass to paste0() are "H" and 1:2, returning the strings "H1" and "H2".### Mixing it together: lapply, .SD, .SDcols and .N
This exercise is a challenging one, so don't give up! It's important to remember that whenever the j argument is a list (e.g. if it contains .SD or a call to lapply()), a data.table is returned. For example:
dogs[, lapply(.SD, mean), by = sex, .SDcols = c("weight", "age")]
will return a data.table containing average weights and ages for dogs of each sex.
It's also helpful to know that combining a list with a vector results in a new longer list. Lastly, note that when you select .N on its own, it is renamed N in the output for convenience when chaining.
For this exercise, DT, which contains variables x, y, and z, is loaded in your workspace. You must combine lapply(), .SD, .SDcols, and .N to get your call to return a specific output. Good luck!
INSTRUCTIONS
100 XP
INSTRUCTIONS
100 XP
Get the sum of all columns x, y and z and the number of rows in each group while grouping by x. Your answer should be identical to this:
` x x y z N
1: 2 8 26 30 4
2: 1 3 23 26 3`
Get the cumulative sum of column x and y while grouping by x and z > 8 such that the answer looks like this:
` by1 by2 x y
1: 2 FALSE 2 1
2: 2 FALSE 4 6
3: 1 FALSE 1 3
4: 1 FALSE 2 10
5: 2 TRUE 2 9
6: 2 TRUE 4 20
7: 1 TRUE 1 13`<jupyter_code>DT = data.table(x = c(2, 1, 2, 1, 2, 2, 1), y = c(1, 3, 5, 7,
9, 11, 13), z = c(2, 4, 6, 8, 10, 12, 14))
# DT is pre-loaded
# Sum of all columns and the number of rows
DT[,c(lapply(.SD,sum),.N),by=x, .SDcols=c("x","y","z")]
#DT[,c(lapply(.SD,sum),.N),by=x, .SDcols=x:z]
# Cumulative sum of column x and y while grouping by x and z > 8
DT[, lapply(.SD,cumsum), .SDcols=x:y,by=.(by1=x,by2=z>8)]
<jupyter_output><empty_output><jupyter_text>Wow, good job! This one was not easy at all!<jupyter_code>DT = data.table(x=c(2,2,1,1,1), y=c(6,7,8,9,10), z = NA)
DT
str(DT)
DT[2:4, z := sum(y), by= x]<jupyter_output><empty_output><jupyter_text>### Adding, updating and removing columns
As you now know, := is defined for use in j only, and is used to update data.tables by reference. One way of using := is the LHS := RHS form, where LHS is a character vector of columns (referenced by name or number) you wish to update and RHS is the corresponding value for each column (Note: LHS stands for "left hand side" and RHS stands for "right hand side" in what follows).
For example, the following line multiplies every row of column C by 10 and stores the result in C:
DT[, C := C * 10]
This first exercise will thoroughly test your understanding of := used in the LHS := RHS form. It's time for you to show off your knowledge! A data.table DT has been defined for you in the sample code.
INSTRUCTIONS
100 XP
- Add a column to DT by reference, named Total, that contains sum(B) for each group in column A.
- Add 1L to the values in column B, but only in the rows 2 and 4.
- Add a new column Total2 that contains sum(B) grouped by A but just over rows 2, 3 and 4.
- Remove the Total column from DT.
- Use [[ to select the third column as a vector. Simply print it out to the console.<jupyter_code># The data.table DT
DT <- data.table(A = letters[c(1, 1, 1, 2, 2)], B = 1:5)
# Add column by reference: Total
DT[, Total := sum(B), by=A]
# Add 1 to column B
DT[c(2,4), B := B+1L]
# Add a new column Total2
DT[2:4, Total2 := sum(B), by=A]
# Remove the Total column
DT[, Total := NULL]
# Select the third column using `[[`
DT[[3]]<jupyter_output><empty_output><jupyter_text>Great job, this was not that easy. Note that for the second instruction in j the performance goes up if you coerce RHS to integer yourself via 1L or via as.integer().### To assign or not to assign, that is the question
Print DT to the console. When using the := operator in j, do you need to assign the result to DT as follows?
DT <- DT[, Total := sum(B), by = A]
INSTRUCTIONS
50 XP
Possible Answers
Click or Press Ctrl+1 to focus
- Yes.
- press 1
- No, the DT <- part is not necessary. (Correct)
- press 2
- No. It can make a difference sometimes, but just not in this particular case.
- press 3Well done! The DT <- part is not necessary because the call makes updates to the column by reference.### Deleting a column for a subset of rows
Try deleting a column only for a subset of rows: DT[2, B := NULL]. Did this work?
INSTRUCTIONS
50 XP
Possible Answers
- No. This gives an error stating that when deleting columns, i should not be provided. (Correct)
- press 1
- Yes, because the data.table package states that you can add, modify, and/or delete columns by reference by group within a subset.
- press 2### The functional form
You've had practice with using := in the LHS := RHS form. The second way to use := is with functional form:
`DT[, `:=`(colA = colB + colC)]`
Notice that the := is surrounded by two tick marks! Otherwise data.table will throw a syntax error. Also note that the right-hand side of each assignment in this functional form can be any expression or function call, not just basic arithmetic. The nice thing about the functional form is that you can see the RHS alongside the LHS, which makes it easier to read.
Time for some experimentation. A data.table DT has been prepared for you in the sample code.
INSTRUCTIONS
100 XP
- Update B with B + 1, add a new column C with A + B, and add a new column D of just 2's.
- A variable my_cols has already been defined. Use it to delete these columns from DT.
- Finally, delete column D using the column number (2), not its name (D).Well done! A column is either there or it's not. It makes no sense to partially delete it. If you find yourself needing to do this, then consider using NAs instead. Rather than silently ignoring the mistaken use of i, data.table throws a syntax error straight away so you can fix it.<jupyter_code># A data.table DT has been created for you
DT <- data.table(A = c(1, 1, 1, 2, 2), B = 1:5)
# Update B, add C and D
DT[, `:=`(B=B+1, C=A+B, D=2)]
# Delete my_cols
my_cols <- c("B", "C")
DT[, (my_cols):=NULL]
# Delete column 2 by number
DT[, names(DT)[2]:=NULL]<jupyter_output><empty_output><jupyter_text>## Section 7 - Using set() - Video### Ready, set(), go!
The set() function is used to repeatedly update a data.table by reference. You can think of the set() function as a loopable, low overhead version of the := operator, except that set() cannot be used for grouping operations. The structure of the set() function looks like this:
set(DT, index, column, value)
The function takes four arguments:
A data.table with the columns you wish to update
The index used in a loop (e.g. the i in for(i in 1:5))
The column or columns you wish to update in the loop
How the column or columns should be updated
In the next two exercises, you will focus on using set() and its siblings setnames() and setcolorder(). You are two exercises away from becoming a data.table yeoman!
INSTRUCTIONS
70 XP
- A data.table DT has been created for you in the workspace. Check it out!
- Loop through columns 2, 3, and 4, and for each one, select 3 rows at random and set the value of that column to NA.
- Change the column names to lower case using the tolower() function. When setnames() is passed a single input vector, that vector needs to contain all the new names.
- Print the resulting DT to the console to see what changed.<jupyter_code>DT = data.table(A = c(2L, 2L, 3L, 5L, 2L, 5L, 5L, 4L, 4L, 1L),
B = c(2L, 1L, 4L, 2L, 4L, 3L, 4L, 5L, 2L, 4L), C = c(5L,
2L, 4L, 1L, 2L, 2L, 1L, 2L, 5L, 2L), D = c(3L, 3L, 3L, 1L,
5L, 4L, 4L, 1L, 4L, 3L))
# Set the seed
set.seed(1)
# Check the DT that is made available to you
DT
# For loop with set
for (i in 2:4) set(DT, sample(10, 3), i, NA)
# Change the column names to lowercase
setnames(DT, c("A", "B", "C", "D"), c("a", "b", "c", "d"))
# Print the resulting DT to the console
DT<jupyter_output><empty_output><jupyter_text>### The set() family
A summary of the set() family:
set() is a loopable, low overhead version of :=.
You can use setnames() to set or change column names.
setcolorder() lets you reorder the columns of a data.table.
A data.table DT has been defined for you in the sample code.
INSTRUCTIONS
100 XP
- First, add a suffix "_2" to all column names of DT. Use paste0() here.
- Next, use setnames() to change a_2 to A2.
- Lastly, reverse the order of the columns with setcolorder().<jupyter_code># Define DT
DT <- data.table(a = letters[c(1, 1, 1, 2, 2)], b = 1)
# Add a suffix "_2" to all column names
setnames(DT, names(DT), paste0(names(DT), "_2"))
# Change column name "a_2" to "A2"
setnames(DT, "a_2", "A2")
# Reverse the order of the columns
setcolorder(DT, c("b_2", "A2"))<jupyter_output><empty_output><jupyter_text>Congratulations! You are now a data.table yeoman! Ready to become a data.table expert?## 3. Data.table expert
Discover the potential behind indexing, followed by generating and using keys. The final part focuses on fast ordered joins.## Section 8 - Indexing - Video### Selecting rows the data.table way
In the video, Matt showed you how to use column names in i to select certain rows. Since practice makes perfect, and since you will find yourself selecting rows over and over again, it'll be good to do a small exercise on this with the familiar iris dataset.
INSTRUCTIONS
100 XP
- Convert the iris dataset to a data.table and store the result as iris.
- Select all the rows where Species is "virginica".
- Select all the rows where Species is either "virginica" or "versicolor".<jupyter_code># The data.table package is pre-loaded
# Convert iris to a data.table
iris = data.table(iris)
# Species is "virginica"
iris[Species == "virginica"]
# Species is either "virginica" or "versicolor"
iris[Species %in% c("virginica", "versicolor"),]<jupyter_output><empty_output><jupyter_text>Good job! Now you know how to select using column names in i (to select rows) and in j (to select columns and run functions on columns).# 3. Data.table expert
Discover the potential behind indexing, followed by generating and using keys. The final part focuses on fast ordered joins.## Section 8 - Indexing### Selecting rows the data.table way
In the video, Matt showed you how to use column names in i to select certain rows. Since practice makes perfect, and since you will find yourself selecting rows over and over again, it'll be good to do a small exercise on this with the familiar iris dataset.
INSTRUCTIONS
100 XP
- Convert the iris dataset to a data.table and store the result as iris.
- Select all the rows where Species is "virginica".
- Select all the rows where Species is either "virginica" or "versicolor".<jupyter_code># The data.table package is pre-loaded
# Convert iris to a data.table
iris = data.table(iris)
# Species is "virginica"
iris[Species == "virginica"]
# Species is either "virginica" or "versicolor"
iris[Species %in% c("virginica", "versicolor"),]<jupyter_output><empty_output><jupyter_text>Good job! Now you know how to select using column names in i (to select rows) and in j (to select columns and run functions on columns).### Removing columns and adapting your column names
In the previous exercise, you selected certain rows from the iris data.table based on the column names. Now you have to take your understanding of the data.table package to the next level by using standard R functions and regular expressions to remove columns and change column names. To practice this, you'll do a little manipulation to prepare for the next exercise.
Since regular expressions can be tricky, here is a quick refresher:
Metacharacters allow you to match certain types of characters. For example, . means any single character, ^ means "begins with", and $ means "ends with".
If you want to use any of the metacharacters as actual text, you need to use the \\ escape sequence.
INSTRUCTIONS
100 XP
- Simplify the names of the columns in iris that contain "Sepal." by removing the "Sepal." prefix. Use gsub() along with the appropriate regular expression inside a call to setnames().
- Remove the two columns that start with "Petal" from the iris data.table.<jupyter_code># iris as a data.table
iris <- as.data.table(iris)
# Remove the "Sepal." prefix
setnames(iris, grep("^Sepal.", names(iris), value=TRUE), gsub("^Sepal.", "", grep("^Sepal.", names(iris), value=TRUE)))
# Remove the two columns starting with "Petal"
iris[,c("Petal.Length", "Petal.Width") := NULL]
#grep("^Petal.", names(iris), value=TRUE)
head(iris)<jupyter_output><empty_output><jupyter_text>### Understanding automatic indexing
You've been introduced to the rule that "if i is a single variable name, it is evaluated in the calling scope, otherwise inside DT's scope". This is a very important rule if you want to conceptually understand what is going on when using column names in i. Only single columns on the left side of operators benefit from automatic indexing.
The iris data.table with the variable names you updated in the previous exercise is available in your workspace.
INSTRUCTIONS
100 XP
- Select the rows where the area is greater than 20 square centimeters.
- Add a new boolean column containing Width * Length > 25 and call it is_large. Remember that := can be used to create new columns.
- Select the rows for which the value of is_large is TRUE.<jupyter_code># Cleaned up iris data.table
iris
# Area is greater than 20 square centimeters
iris[Width*Length > 20]
# Add new boolean column
iris[,is_large := Width*Length>25]
# Now large observations with is_large
iris[is_large==TRUE]<jupyter_output><empty_output><jupyter_text>## Section 9 - Keys - Video<jupyter_code>DT = data.table(A = c("b", "a", "b", "c", "a", "b", "c"),
B = c(2, 4, 1, 7, 5, 3, 6),
C = 6:12)
DT<jupyter_output><empty_output><jupyter_text>### Check to see if you understood the KEY takeaways
The DT data.table is already loaded in your workspace. Perform the following operations:
- Select the b group using ==.
- Set a 2-column key on A and B.
- Select the b group again using ==.
Did the order of the rows within the b group change?
INSTRUCTIONS
50 XP
- Possible Answers
- Click or Press Ctrl+1 to focus
- Yes
- No<jupyter_code>DT[A=="b"]
setkey(DT, A,B)
DT[A=="b"]<jupyter_output><empty_output><jupyter_text>Correct. This is because B is included in the key.### Selecting groups or parts of groups
The previous exercise illustrated how you can manually set a key via setkey(DT, A, B). setkey() sorts the data by the columns that you specify and changes the table by reference. Having set a key will allow you to use it, for example, as a super-charged row name when doing selections. Arguments like mult and nomatch then help you to select only particular parts of groups.
Furthermore, two of the instructions will require you to make use of by = .EACHI. This allows you to run j for each group in which each item in i joins too. The last instruction will require you to produce a side effect inside the j argument in addition to selecting rows.
INSTRUCTIONS
100 XP
A data.table DT has already been created for you with the keys set to A and B.
- Select the "b" group without using ==.
- Select the "b" and "c" groups, again without using ==.
- Select the first row of the "b" and "c" groups using mult.
- Use by = .EACHI and .SD to select the first and last row of the "b" and "c" groups.
- Extend the previous command to print out the group before returning the first and last row from it. You can use curly brackets to include two separate instructions inside the j argument.<jupyter_code># The 'keyed' data.table DT
DT <- data.table(A = letters[c(2, 1, 2, 3, 1, 2, 3)],
B = c(5, 4, 1, 9, 8, 8, 6),
C = 6:12)
setkey(DT, A, B)
# Select the "b" group
DT["b"]
# "b" and "c" groups
DT[c("b", "c")]
# The first row of the "b" and "c" groups
DT[c("b", "c"), mult = "first"]
# First and last row of the "b" and "c" groups
DT[c("b", "c"), .SD[c(1, .N)], by = .EACHI]
# Copy and extend code for instruction 4: add printout
DT[c("b", "c"), { print(.SD); .SD[c(1, .N)] }, by = .EACHI]<jupyter_output><empty_output><jupyter_text>### HINT
Do not forget that you can use standard R functions in i. More specifically, you'll be needing c() everywhere except for the first instruction.## Section 10 - Rolling joins - Video### Rolling joins - part one
In the last video, you learned about rolling joins. The roll applies to the NA values in the last join column. In the next three exercises, you will learn how to work with rolling joins in a data.table setting.
INSTRUCTIONS
100 XP
The same keyed data.table from before, DT, has been provided in the sample code.
- Get the key of DT through the key() function.
- Use the super-charged row names to look up the row where A == "b" and B == 6, without using ==! Verify here that column C is NA.
- Based on the query that was written in the previous instruction, return the prevailing row before this "gap". Specify the roll argument.
- Again, start with the code from the second instruction, but this time, find the nearest row. Specify the roll argument accordingly.<jupyter_code># Keyed data.table DT
DT <- data.table(A = letters[c(2, 1, 2, 3, 1, 2, 3)],
B = c(5, 4, 1, 9, 8, 8, 6),
C = 6:12,
key = "A,B")
# Get the key of DT
key(DT)
# Row where A == "b" and B == 6
DT[.("b",6)]
# Return the prevailing row
DT[.("b",6), roll=TRUE]
# Return the nearest row
DT[,.("b",6), roll="nearest"]<jupyter_output><empty_output><jupyter_text>### Rolling joins - part two
It is time to move on to the rollends argument. The rollends argument is actually a vector of two logical values, but remember that you can always look this up via ?data.table. You were introduced to this argument via the control ends section. If you want to roll for a certain distance, you should continue to use the roll argument.
INSTRUCTIONS
100 XP
- For the group where column A is equal to "b", print out the sequence when column B is set equal to (-2):10. Remember, A and B are the keys for this data.table.
- Extend the code you wrote for the first instruction to roll the prevailing values forward to replace the NAs.
- Extend your code with the appropriate rollends value to roll the first observation backwards.
<jupyter_code># Keyed data.table DT
DT <- data.table(A = letters[c(2, 1, 2, 3, 1, 2, 3)],
B = c(5, 4, 1, 9, 8, 8, 6),
C = 6:12,
key = "A,B")
# Get the key of DT
key(DT)
# Row where A == "b" and B == 6
DT[.("b",6)]
# Return the prevailing row
DT[.("b",6), roll=TRUE]
# Return the nearest row
DT[,.("b",6), roll="nearest"]<jupyter_output><empty_output><jupyter_text>### Rolling joins - part two
It is time to move on to the rollends argument. The rollends argument is actually a vector of two logical values, but remember that you can always look this up via ?data.table. You were introduced to this argument via the control ends section. If you want to roll for a certain distance, you should continue to use the roll argument.
INSTRUCTIONS
100 XP
- For the group where column A is equal to "b", print out the sequence when column B is set equal to (-2):10. Remember, A and B are the keys for this data.table.
- Extend the code you wrote for the first instruction to roll the prevailing values forward to replace the NAs.
- Extend your code with the appropriate rollends value to roll the first observation backwards.
<jupyter_code># Keyed data.table DT
DT <- data.table(A = letters[c(2, 1, 2, 3, 1, 2, 3)],
B = c(5, 4, 1, 9, 8, 8, 6),
C = 6:12,
key = "A,B")
# Print the sequence (-2):10 for the "b" group
DT[.("b", (-2):10)]
# Add code: carry the prevailing values forwards
DT[.("b", (-2):10), roll = +Inf]
# Add code: carry the first observation backwards
DT[.("b", (-2):10), roll = +Inf, rollends=TRUE]
<jupyter_output><empty_output><jupyter_text>### Rolling joins - final part
DT is loaded in your workspace. If you look up the value B == 20 in group A == "b" without limiting the roll, the value of column C is...
INSTRUCTIONS
50 XP
Possible Answers
Click or Press Ctrl+1 to focus
- NA
- 11 (Correct)
- 8
- 6<jupyter_code>DT = data.table(A = c("a", "a", "b", "b", "b", "c", "c"), B = c(4,
8, 1, 5, 8, 6, 9), C = c(7L, 10L, 8L, 6L, 11L, 12L, 9L))
DT<jupyter_output><empty_output>
<jupyter_start><jupyter_text>
Lecture Note for Computer Science Foundations (DATS 6450)
Chapter 6: Dynamic Programming
Data Science, Columbian College of Arts & Sciences, George Washington University
Author: Yuxiao Huang
# Overview
- We will discuss Dynamic Programming (DP), which is a useful algorithm to lower the time complexity of a recursive solution
- Particularly, the discussion of the algorithm can be divided into two parts:
- theory, where we will describe the idea of the algorithm
- coding
- where (most of the time) we will start with some examples and then work on some exercises
- particularly, the examples and exercises are organized in such a way that, an exercise (most of the time) is a follow-up on some example prior to it
- **you should analyze the (time and space) complexity of each example and exercise**
- We will use stars to represent the difficulty of the exercises:
- $\star$ means very easy
- $\star\star$ means easy
- $\star\star\star$ means medium
- $\star\star\star\star$ means difficult
- $\star\star\star\star\star$ means very difficult# Learning Objectives
Students should know:
- the idea and implementation of DP
- how to apply DP to design optimal solutions (ones with the lowest complexity)# Dynamic Programming## The Idea
It turns out that, the idea of Dynamic Programming (DP) is closely related to divide-and-conquer. That is, instead of solving a big problem directly (which is usually difficult), we first divide the big problem into small ones, and then solve these small problems. We can see this idea from the example below which, as we discussed in Chapter 1, calculates the $n$th number in [Fibonacci series](https://en.wikipedia.org/wiki/Fibonacci_number).<jupyter_code># Reference: the code below is from Chapter 1 of Lecture Note for Computer Science Foundations (DATS 6450)
def fun_brute_force(n):
"""
The brute-force (recursive) solution for Fibonacci series.
The solution has exponential time complexity and linear space complexity.
Parameters
----------
n : a number
Returns
----------
The nth number in Fibonacci series
"""
# Base
if n <= 1:
return n
# Recursion
return fun_brute_force(n - 2) + fun_brute_force(n - 1)<jupyter_output><empty_output><jupyter_text>In Chapters 1 and 2, we used the recursion tree in fig. 1 to show that, the time / space complexity of the solution above is exponential / linear (can you explain why?). One reason for the exponential time is that, many nodes in the recursion tree will be calculated multiple times. For example, we will calculate node $f(3)$ twice and node $f(2)$ three times.
Figure 1. The recursion tree when $n = 5$.Intuitively, if we can save the result of each node and simply reuse the result rather than recalculate it, we could avoid redundant computation, hence save run time. It turns out that, this is the defining characteristic of DP. Specifically, DP divides the original problem into subproblems, calculates and stores the result of the subproblems, and reuses the results so that the subproblems only need to be calculated once. In simple words, DP is a divide-and-conquer algorithm that allows computation sharing.
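For example, the same recursion could reuse results by storing them in a dictionary. The sketch below is an illustration only and is not part of the original chapter (the name fun_memo and the cache argument are made up):<jupyter_code># A top-down (memoized) sketch: same recursion as fun_brute_force, but each f(i) is computed only once
def fun_memo(n, cache=None):
    if cache is None:
        cache = {}
    # Base
    if n <= 1:
        return n
    # Reuse the stored result if this subproblem was solved before
    if n in cache:
        return cache[n]
    # Otherwise solve the subproblem once and store its result
    cache[n] = fun_memo(n - 2, cache) + fun_memo(n - 1, cache)
    return cache[n]<jupyter_output><empty_output><jupyter_text>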
While this is the first time we introduce DP, we actually have used DP to design several solutions. Below is the optimal solution for calculating the $n$th number in the Fibonacci series. Can you see it is actually a DP-based solution (i.e., divide-and-conquer + computation sharing)?<jupyter_code># Reference: the code below is from Chapter 1 of Lecture Note for Computer Science Foundations (DATS 6450)
def fun_dp_constant(n):
"""
The optimal solution for Fibonacci series.
The solution has linear time complexity and constant space complexity.
Parameters
----------
n : a number
Returns
----------
The nth number in Fibonacci series
"""
# Implement me
if n <= 1:
return n
a, b = 0, 1
for i in range(2, n + 1):
c = a + b
a = b
b = c
return c<jupyter_output><empty_output><jupyter_text>Let us walk through the solution above and see why it is DP-based. Here we solve the problem (calculating the $n$th number in Fibonacci series) by looping over $i$ from 2 to $n$ and calculate each $i$th number in the series. This coincides the divide-and-conquer part of a DP-based algorithm (dividing a big problem into small ones). Further, after calculating the $i$th number, we store its value (using $b$) and the value of the $i - 1$th number (using $a$), so that they do not have to be calculated again when computing the $i + 1$th number (assigned to $c$). By doing so, we only need to calculate each number in the series only once. That is, the solution above not only allows divide-and-conquer but also computation sharing, hence is DP-based.## The implementation
Being one of the most difficult kind of algorithm, implementing DP usually is not straightforward. However, there are two tips that could make using DP a bit easier. Unsurprisingly, the tips are related to the defining characteristic of DP discussed earlier, that is, divide-and-conquer and computation sharing.
To begin with, we need to find the equation that allows us to divide the problem into subproblems. In Fibonacci series, the equation is
$$
f(n) = f(n - 1) + f(n - 2).
$$
As shown in Section 5 (Hints), we will always mention the equation when discussing the logic of a DP-based solution. You will be very close (to solving the problem using DP) once such equation is found.
What is next is figuring out how to share the computation of the subproblems (so that we only need to calculate them once). This is usually done using a bottom-up approach, where we start from the smallest subproblems and gradually move on to bigger problems. Moreover, we store the results of the problems solved along the way. In the optimal solution (of Fibonacci series), we start from the $0$th and $1$st number (the base), then calculate the number from 2 to $n$ (i.e., bottom-up). This is different from the recursive (brute-force) solution, where we start from the $n$th number, then recursively calculate smaller number (i.e., top-down). Further, after calculating the $i$th number (where $2 \leq i \leq n$), we store its value and the value of the $i - 1$th number, so that they can be reused when calculating the $i + 1$th number (which is the sum of $i - 1$th and $i$th number).# Exercises## Exercise ($\star\star$)
- Problem:
- find the maximum profit by buying and selling stock
- here
- the stock returns are stored in an array, where the $i$th element is the return on day $i$
- you can make no more than one transaction
- see examples in the test cases
- find the solution with the complexity below
- this exercise will be followed by
- [exercise 4.2](#4.2)
- Complexity:
- $O(n^2)$ time
- $O(1)$ space
- Skills:
- solving a problem by starting with a brute-force solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.1)<jupyter_code># Implementation
def fun_41(arr):
"""
Find the maximum profit by buying and selling stock
You can make no more than one transaction
Parameters
----------
arr : a list of integers
Returns
----------
The maximum profit : an integer
"""
# Implement me
# Test
import numpy as np
np.random.seed(0)
for i in range(10):
n = np.random.randint(low=2, high=11)
arr = np.random.randint(low=2, high=11, size=n)
print('Input ' + str(i + 1) + ':', list(arr))
print('Output ' + str(i + 1) + ':', fun_41(arr), end='\n\n')<jupyter_output>Input 1: [2, 5, 5, 9, 5, 7, 4]
Output 1: 7
Input 2: [9, 8, 10, 10, 3, 8]
Output 2: 5
Input 3: [9, 10, 3, 7, 10, 6, 5, 2, 5]
Output 3: 7
Input 4: [2, 4, 5, 10, 3, 5, 5]
Output 4: 8
Input 5: [9, 2, 3, 2, 6]
Output 5: 4
Input 6: [5, 4, 9, 4, 2, 2, 6, 7, 7]
Output 6: 5
Input 7: [10, 6, 3, 6, 10, 3, 3, 9]
Output 7: 7
Input 8: [8, 9, 4, 2, 5]
Output 8: 3
Input 9: [6, 6, 8, 6, 6, 5, 6]
Output 9: 2
Input 10: [10, 6, 5, 9, 7, 7]
Output 10: 4
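<jupyter_text>One possible brute-force sketch for this exercise is shown below. It is only an illustration (the name fun_41_sketch is made up and this is not the official solution from the hints), so do try the exercise yourself first.<jupyter_code># A hypothetical O(n^2) time, O(1) space sketch: try every pair of buy day i and sell day j > i
def fun_41_sketch(arr):
    best = 0
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            best = max(best, arr[j] - arr[i])
    return best
# For instance, fun_41_sketch([2, 5, 5, 9, 5, 7, 4]) gives 7, matching the first test case above<jupyter_output><empty_output>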
<jupyter_text>## Exercise ($\star\star\star$)
- Problem:
- follow up on [exercise 4.1](#4.1)
- find the maximum profit by buying and selling stock
- find the solution with the complexity below
- Complexity:
- $O(n)$ time
- $O(1)$ space
- Skills:
- using dp to design an optimal solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.2)<jupyter_code># Implementation
def fun_42(arr):
"""
Find the maximum profit by buying and selling stock
You can make no more than one transaction
Parameters
----------
arr : a list of integers
Returns
----------
The maximum profit : an integer
"""
# Implement me
# Test
np.random.seed(0)
for i in range(10):
n = np.random.randint(low=2, high=11)
arr = np.random.randint(low=2, high=11, size=n)
print('Input ' + str(i + 1) + ':', list(arr))
    print('Output ' + str(i + 1) + ':', fun_42(arr), end='\n\n')<jupyter_output>Input 1: [2, 5, 5, 9, 5, 7, 4]
Output 1: 7
Input 2: [9, 8, 10, 10, 3, 8]
Output 2: 5
Input 3: [9, 10, 3, 7, 10, 6, 5, 2, 5]
Output 3: 7
Input 4: [2, 4, 5, 10, 3, 5, 5]
Output 4: 8
Input 5: [9, 2, 3, 2, 6]
Output 5: 4
Input 6: [5, 4, 9, 4, 2, 2, 6, 7, 7]
Output 6: 5
Input 7: [10, 6, 3, 6, 10, 3, 3, 9]
Output 7: 7
Input 8: [8, 9, 4, 2, 5]
Output 8: 3
Input 9: [6, 6, 8, 6, 6, 5, 6]
Output 9: 2
Input 10: [10, 6, 5, 9, 7, 7]
Output 10: 4
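<jupyter_text>For comparison, a one-pass sketch of the O(n) idea (track the lowest value seen so far and the best profit so far) is shown below. The name fun_42_sketch is illustrative only and this is not the official solution from the hints.<jupyter_code># A hypothetical O(n) time, O(1) space sketch
def fun_42_sketch(arr):
    best, low = 0, arr[0]
    for x in arr[1:]:
        best = max(best, x - low)  # profit if we sell at today's value
        low = min(low, x)          # lowest value seen so far
    return best
# fun_42_sketch([2, 5, 5, 9, 5, 7, 4]) also gives 7<jupyter_output><empty_output>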
<jupyter_text>## Discussion
The difference (in run time) between fun_41 and fun_42 is shown in fig. 2. Here, fun_42 (DP-based, $O(n)$ time complexity) is much faster than fun_41 (brute-force, $O(n^2)$ complexity).<jupyter_code>import time
%matplotlib inline
import matplotlib.pyplot as plt
def plot_43(n, funs, sorted_):
"""
Plot the run time of the functions (with respect to the input size).
Parameters
----------
n : an integer
funs : a list of functions
sorted_ : an integer which says:
-1 : sorted in descending order
0 : not sorted
1 : sorted in ascending order
"""
np.random.seed(0)
x = list(range(1, n + 1))
ys = [[] for _ in range(len(funs))]
for i in x:
arr = list(range(1, i + 1))
if sorted_ == -1:
arr = sorted(arr, reverse=True)
elif sorted_ == 0:
np.random.shuffle(arr)
for j in range(len(funs)):
start = time.time()
funs[j](arr)
end = time.time()
ys[j].append(end - start)
for j in range(len(funs)):
plt.plot(x, ys[j], label=funs[j].__name__)
plt.xlabel('$n$', fontsize=20)
plt.ylabel('Run time', fontsize=20)
plt.xticks([min(x), max(x)], fontsize=20)
plt.yticks([min([ys[j][k] for j in range(len(funs)) for k in range(len(ys[j]))]), max([ys[j][k] for j in range(len(funs)) for k in range(len(ys[j]))])], fontsize=20)
plt.legend(fontsize=20)
plt.tight_layout()
plt.show()
plot_43(10 ** 2, [fun_41, fun_42], 0)
print("Figure 2. The run time of fun_41 and fun_42.")<jupyter_output><empty_output><jupyter_text>## Exercise ($\star\star$)
- Problem:
- find the minimum cost to win a game
- here
- the game has a sequence of barriers, numbered from 1 to $n$
- there is a cost for each barrier, particularly for barrier $i$ the cost is arr[i]
- after paying the cost of a barrier, you can either pass the barrier alone, or pass the barrier and the one behind it
- you can start either from the first barrier or the second
- find the solution with the complexity below
- this exercise will be followed by
- [exercise 4.5](#4.5)
- Complexity:
- $O(2^n)$ time
- $O(n)$ space
- Skills:
- solving a problem by starting with a brute-force solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.4)<jupyter_code># Implementation
def fun_44(arr):
"""
Find the minimum cost to win a game
Parameters
----------
arr : a list of integers
Returns
----------
The minimum cost : an integer
"""
# Implement me
# Implementation
def helper(arr, n):
"""
Find the minimum cost to win a game
Parameters
----------
arr : a list of integers
n : the number of barriers we need to pass
Returns
----------
The minimum cost : an integer
"""
# Implement me
# Test
np.random.seed(0)
for i in range(10):
n = np.random.randint(low=2, high=11)
arr = np.random.randint(low=2, high=11, size=n)
print('Input ' + str(i + 1) + ':', list(arr))
print('Output ' + str(i + 1) + ':', fun_44(arr), end='\n\n')<jupyter_output>Input 1: [2, 5, 5, 9, 5, 7, 4]
Output 1: 16
Input 2: [9, 8, 10, 10, 3, 8]
Output 2: 21
Input 3: [9, 10, 3, 7, 10, 6, 5, 2, 5]
Output 3: 25
Input 4: [2, 4, 5, 10, 3, 5, 5]
Output 4: 15
Input 5: [9, 2, 3, 2, 6]
Output 5: 4
Input 6: [5, 4, 9, 4, 2, 2, 6, 7, 7]
Output 6: 17
Input 7: [10, 6, 3, 6, 10, 3, 3, 9]
Output 7: 18
Input 8: [8, 9, 4, 2, 5]
Output 8: 11
Input 9: [6, 6, 8, 6, 6, 5, 6]
Output 9: 17
Input 10: [10, 6, 5, 9, 7, 7]
Output 10: 18
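<jupyter_text>A possible recursive sketch of the brute-force idea (the cheapest way to get past the last barrier, starting from barrier 1 or barrier 2) is given below. The names fun_44_sketch and cost_from are made up for illustration; this is not the official solution from the hints.<jupyter_code># A hypothetical O(2^n) time sketch: cost_from(i) is the cheapest way to finish starting at barrier i
def fun_44_sketch(arr):
    def cost_from(i):
        if i >= len(arr):  # already past the last barrier
            return 0
        # pay arr[i], then move one or two barriers forward
        return arr[i] + min(cost_from(i + 1), cost_from(i + 2))
    # you may start at either the first or the second barrier
    return min(cost_from(0), cost_from(1))
# fun_44_sketch([2, 5, 5, 9, 5, 7, 4]) gives 16, matching the first test case above<jupyter_output><empty_output>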
<jupyter_text>## Exercise ($\star\star\star$)
- Problem:
- follow up on [exercise 4.4](#4.4)
- find the minimum cost to win a game
- find the solution with the complexity below
- Complexity:
- $O(n)$ time
- $O(1)$ space
- Skills:
- using dp to design an optimal solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.5)<jupyter_code># Implementation
def fun_45(arr):
"""
Find the minimum cost to win a game
Parameters
----------
arr : a list of integers
Returns
----------
The minimum cost : an integer
"""
# Implement me
# Test
np.random.seed(0)
for i in range(10):
n = np.random.randint(low=2, high=11)
arr = np.random.randint(low=2, high=11, size=n)
print('Input ' + str(i + 1) + ':', list(arr))
print('Output ' + str(i + 1) + ':', fun_45(arr), end='\n\n')<jupyter_output>Input 1: [2, 5, 5, 9, 5, 7, 4]
Output 1: 16
Input 2: [9, 8, 10, 10, 3, 8]
Output 2: 21
Input 3: [9, 10, 3, 7, 10, 6, 5, 2, 5]
Output 3: 25
Input 4: [2, 4, 5, 10, 3, 5, 5]
Output 4: 15
Input 5: [9, 2, 3, 2, 6]
Output 5: 4
Input 6: [5, 4, 9, 4, 2, 2, 6, 7, 7]
Output 6: 17
Input 7: [10, 6, 3, 6, 10, 3, 3, 9]
Output 7: 18
Input 8: [8, 9, 4, 2, 5]
Output 8: 11
Input 9: [6, 6, 8, 6, 6, 5, 6]
Output 9: 17
Input 10: [10, 6, 5, 9, 7, 7]
Output 10: 18
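<jupyter_text>A bottom-up sketch with O(n) time and O(1) space (only the cheapest costs of reaching the two previous positions are kept) is shown below; fun_45_sketch is an illustrative name, not the official solution.<jupyter_code># A hypothetical O(n) time, O(1) space sketch
def fun_45_sketch(arr):
    a, b = 0, 0  # cheapest cost to arrive at positions i - 2 and i - 1 (before paying them)
    for i in range(2, len(arr) + 1):
        a, b = b, min(a + arr[i - 2], b + arr[i - 1])
    return b
# fun_45_sketch([2, 5, 5, 9, 5, 7, 4]) also gives 16<jupyter_output><empty_output>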
<jupyter_text>## Discussion
The difference (in run time) between fun_44 and fun_45 is shown in fig. 3. Here, fun_45 (DP-based, $O(n)$ time complexity) is much faster than fun_44 (brute-force, $O(2^n)$ complexity).<jupyter_code>plot_43(30, [fun_44, fun_45], 0)
print("Figure 3. The run time of fun_44 and fun_45.")<jupyter_output><empty_output><jupyter_text>## Exercise ($\star\star$)
- Problem:
- find the number of ways to reach the exit of a maze (represented by a $m \times n$ matrix)
    - suppose there is a mouse in a maze
    - the mouse starts from the entrance, located at entry [0, 0]
    - each time the mouse can either move to the right or go down
    - the goal (of the mouse) is to find the exit, located at entry [m - 1, n - 1]
- find the solution with the complexity below
- this exercise will be followed by
- [exercise 4.8](#4.8)
- Complexity:
- $O(2^{max(m, n)})$ time
- $O(max(m, n))$ space
- Skills:
- solving a problem by starting with a brute-force solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.7)<jupyter_code># Implementation
def fun_47(m, n):
"""
Find the number of ways to reach the exit of a maze (represented by a m * n matrix)
Parameters
----------
m : the number of rows in the matrix
n : the number of columns in the matrix
Returns
----------
The number of ways : an integer
"""
# Implement me
# Test
np.random.seed(0)
for i in range(10):
m, n = np.random.randint(low=0, high=6, size=2)
print('Input ' + str(i + 1) + ':', 'm = ' + str(m), 'n = ' + str(n))
print('Output ' + str(i + 1) + ':', fun_47(m, n), end='\n\n')<jupyter_output>Input 1: m = 4 n = 5
Output 1: 35
Input 2: m = 0 n = 3
Output 2: 0
Input 3: m = 3 n = 3
Output 3: 6
Input 4: m = 1 n = 3
Output 4: 1
Input 5: m = 5 n = 2
Output 5: 5
Input 6: m = 4 n = 0
Output 6: 0
Input 7: m = 0 n = 4
Output 7: 0
Input 8: m = 2 n = 1
Output 8: 1
Input 9: m = 0 n = 1
Output 9: 0
Input 10: m = 5 n = 1
Output 10: 1
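<jupyter_text>A recursive sketch of the counting idea (the number of ways into a cell equals the number of ways from above plus the number of ways from the left) is shown below; fun_47_sketch is an illustrative name, not the official solution.<jupyter_code># A hypothetical exponential-time sketch of ways(m, n) = ways(m - 1, n) + ways(m, n - 1)
def fun_47_sketch(m, n):
    if m == 0 or n == 0:  # no maze at all
        return 0
    if m == 1 or n == 1:  # a single row or a single column: only one path
        return 1
    return fun_47_sketch(m - 1, n) + fun_47_sketch(m, n - 1)
# fun_47_sketch(4, 5) gives 35, matching the first test case above<jupyter_output><empty_output>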
<jupyter_text>## Exercise ($\star\star\star$)
- Problem:
- follow up on [exercise 4.7](#4.7)
- find the number of ways to reach the exit of a maze (represented by a $m \times n$ matrix)
- find the solution with the complexity below
- this exercise will be followed by
- [exercise 4.9](#4.9)
- Complexity:
- $O(m \times n)$ time
- $O(m \times n)$ space
- Skills:
- using dp to design an improved solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.8)<jupyter_code># Implementation
def fun_48(m, n):
"""
Find the number of ways to reach the exit of a maze (represented by a m * n matrix)
Parameters
----------
matrix : a matrix of integers
m : the number of rows in the matrix
n : the number of columns in the ma
Returns
----------
The number of ways : an integer
"""
# Implement me
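    # One possible O(m * n) DP sketch: ways[i][j] counts the paths reaching cell (i, j).
    if m <= 0 or n <= 0:
        return 0
    ways = [[1] * n for _ in range(m)]
    for i in range(1, m):
        for j in range(1, n):
            ways[i][j] = ways[i - 1][j] + ways[i][j - 1]
    return ways[m - 1][n - 1]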
# Test
np.random.seed(0)
for i in range(10):
m, n = np.random.randint(low=0, high=6, size=2)
print('Input ' + str(i + 1) + ':', 'm = ' + str(m), 'n = ' + str(n))
print('Output ' + str(i + 1) + ':', fun_48(m, n), end='\n\n')<jupyter_output>Input 1: m = 4 n = 5
Output 1: 35
Input 2: m = 0 n = 3
Output 2: 0
Input 3: m = 3 n = 3
Output 3: 6
Input 4: m = 1 n = 3
Output 4: 1
Input 5: m = 5 n = 2
Output 5: 5
Input 6: m = 4 n = 0
Output 6: 0
Input 7: m = 0 n = 4
Output 7: 0
Input 8: m = 2 n = 1
Output 8: 1
Input 9: m = 0 n = 1
Output 9: 0
Input 10: m = 5 n = 1
Output 10: 1
<jupyter_text>## Exercise ($\star\star\star$)
- Problem:
- follow up on [exercise 4.8](#4.8)
- find the number of ways to reach the exit of a maze (represented by a $m \times n$ matrix)
- find the solution with the complexity below
- Complexity:
- $O(m \times n)$ time
- $O(max(m, n))$ space
- Skills:
- using dp to design an optimal solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.9)<jupyter_code># Implementation
def fun_49(m, n):
"""
Find the number of ways to reach the exit of a maze (represented by a m * n matrix)
Parameters
----------
matrix : a matrix of integers
m : the number of rows in the matrix
n : the number of columns in the ma
Returns
----------
The number of ways : an integer
"""
# Implement me
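    # One possible space-optimised DP sketch: keep only a single row of path counts.
    if m <= 0 or n <= 0:
        return 0
    row = [1] * n
    for _ in range(1, m):
        for j in range(1, n):
            row[j] += row[j - 1]
    return row[-1]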
# Test
np.random.seed(0)
for i in range(10):
m, n = np.random.randint(low=0, high=6, size=2)
print('Input ' + str(i + 1) + ':', 'm = ' + str(m), 'n = ' + str(n))
print('Output ' + str(i + 1) + ':', fun_49(m, n), end='\n\n')<jupyter_output>Input 1: m = 4 n = 5
Output 1: 35
Input 2: m = 0 n = 3
Output 2: 0
Input 3: m = 3 n = 3
Output 3: 6
Input 4: m = 1 n = 3
Output 4: 1
Input 5: m = 5 n = 2
Output 5: 5
Input 6: m = 4 n = 0
Output 6: 0
Input 7: m = 0 n = 4
Output 7: 0
Input 8: m = 2 n = 1
Output 8: 1
Input 9: m = 0 n = 1
Output 9: 0
Input 10: m = 5 n = 1
Output 10: 1
<jupyter_text>## Discussion
The difference (in run time) between fun_47, fun_48, and fun_49 is shown in fig. 4. Here, fun_48 and fun_49 (DP-based, $O(m \times n)$ time complexity) are much faster than fun_47 (brute-force, $O(2^{max(m, n)})$ complexity).<jupyter_code>def plot_410(n, funs):
"""
Plot the run time of the functions (with respect to the input size).
Parameters
----------
n : an integer
funs : a list of functions
"""
np.random.seed(0)
x = list(range(1, n + 1))
ys = [[] for _ in range(len(funs))]
for i in x:
for j in range(len(funs)):
start = time.time()
funs[j](i, i)
end = time.time()
ys[j].append(end - start)
for j in range(len(funs)):
plt.plot(x, ys[j], label=funs[j].__name__)
plt.xlabel('$n$', fontsize=20)
plt.ylabel('Run time', fontsize=20)
plt.xticks([min(x), max(x)], fontsize=20)
plt.yticks([min([ys[j][k] for j in range(len(funs)) for k in range(len(ys[j]))]), max([ys[j][k] for j in range(len(funs)) for k in range(len(ys[j]))])], fontsize=20)
plt.legend(fontsize=20)
plt.tight_layout()
plt.show()
plot_410(10, [fun_47, fun_48, fun_49])
print("Figure 4. The run time of fun_47, fun_48, and fun_49")<jupyter_output><empty_output><jupyter_text>## Exercise ($\star\star$)
- Problem:
- get the maximum number of candies
- your niece is going to visit the houses in your street on Halloween night
- given the number of candies each house has, your goal is to help your niece get the maximum number of candies
- the only constraint here is, your niece cannot visit two houses in a row
- that is, if she visited house $i$ already, she cannot visit house $i + 1$
- find the solution with the complexity below
- this exercise will be followed by
- [exercise 4.12](#4.12)
- Complexity:
- $O(2^n)$ time
- $O(n)$ space
- Skills:
- solving a problem by starting with a brute-force solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.11)<jupyter_code># Implementation
def fun_411(arr):
"""
Get the maximum number of candies
Parameters
----------
arr : a list of integers
Returns
----------
The maximum number of candies : an integer
"""
# Implement me
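    # One possible exponential-time brute-force sketch: at each house either skip
    # it, or take its candies and jump past the neighbouring house.
    def visit(i):
        if i >= len(arr):
            return 0
        return max(visit(i + 1), arr[i] + visit(i + 2))
    return visit(0)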
# Test
np.random.seed(0)
for i in range(10):
n = np.random.randint(low=2, high=11)
arr = np.random.randint(low=2, high=11, size=n)
print('Input ' + str(i + 1) + ':', list(arr))
print('Output ' + str(i + 1) + ':', fun_411(arr), end='\n\n')<jupyter_output>Input 1: [2, 5, 5, 9, 5, 7, 4]
Output 1: 21
Input 2: [9, 8, 10, 10, 3, 8]
Output 2: 27
Input 3: [9, 10, 3, 7, 10, 6, 5, 2, 5]
Output 3: 32
Input 4: [2, 4, 5, 10, 3, 5, 5]
Output 4: 19
Input 5: [9, 2, 3, 2, 6]
Output 5: 18
Input 6: [5, 4, 9, 4, 2, 2, 6, 7, 7]
Output 6: 29
Input 7: [10, 6, 3, 6, 10, 3, 3, 9]
Output 7: 32
Input 8: [8, 9, 4, 2, 5]
Output 8: 17
Input 9: [6, 6, 8, 6, 6, 5, 6]
Output 9: 26
Input 10: [10, 6, 5, 9, 7, 7]
Output 10: 26
<jupyter_text>## Exercise ($\star\star\star$)
- Problem:
- follow up on [exercise 4.11](#4.11)
- get the maximum number of candies
- find the solution with the complexity below
- Complexity:
- $O(n)$ time
- $O(1)$ space
- Skills:
- using dp to design an optimal solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.12)<jupyter_code># Implementation
def fun_412(arr):
"""
Get the maximum number of candies
Parameters
----------
arr : a list of integers
Returns
----------
The maximum number of candies : an integer
"""
# Implement me
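    # One possible O(n)-time, O(1)-space DP sketch with two rolling totals.
    take, skip = 0, 0  # best totals when the last house seen is / is not visited
    for candies in arr:
        take, skip = skip + candies, max(take, skip)
    return max(take, skip)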
# Test
np.random.seed(0)
for i in range(10):
n = np.random.randint(low=2, high=11)
arr = np.random.randint(low=2, high=11, size=n)
print('Input ' + str(i + 1) + ':', list(arr))
print('Output ' + str(i + 1) + ':', fun_412(arr), end='\n\n')<jupyter_output>Input 1: [2, 5, 5, 9, 5, 7, 4]
Output 1: 21
Input 2: [9, 8, 10, 10, 3, 8]
Output 2: 27
Input 3: [9, 10, 3, 7, 10, 6, 5, 2, 5]
Output 3: 32
Input 4: [2, 4, 5, 10, 3, 5, 5]
Output 4: 19
Input 5: [9, 2, 3, 2, 6]
Output 5: 18
Input 6: [5, 4, 9, 4, 2, 2, 6, 7, 7]
Output 6: 29
Input 7: [10, 6, 3, 6, 10, 3, 3, 9]
Output 7: 32
Input 8: [8, 9, 4, 2, 5]
Output 8: 17
Input 9: [6, 6, 8, 6, 6, 5, 6]
Output 9: 26
Input 10: [10, 6, 5, 9, 7, 7]
Output 10: 26
<jupyter_text>## Exercise ($\star\star$)
- Problem:
- find the number of [palindromic](https://en.wikipedia.org/wiki/Palindrome) subarray
- find the solution with the complexity below
- this exercise will be followed by
- [exercise 4.14](#4.14)
- Complexity:
- $O(n^3)$ time
- $O(1)$ space
- Skills:
- solving a problem by starting with a brute-force solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.13)<jupyter_code># Implementation
def fun_413(arr):
"""
Find the number of palindromic subarray
Parameters
----------
arr : a list of integers
Returns
----------
The number of palindromic subarray : an integer
"""
# Implement me
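    # One possible O(n^3) brute-force sketch: test every subarray with the
    # helper defined below in this cell.
    count = 0
    for i in range(len(arr)):
        for j in range(i, len(arr)):
            count += helper(arr, i, j)
    return count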
# Implementation
def helper(arr, i, j):
"""
Find whether a subarray is palindromic
Parameters
----------
arr : a list of integers
i : the first index of the subarray
j : the last index of the subarray
Returns
----------
1 : if the subarray is palindromic
0 : otherwise
"""
# Implement me
while i < j:
if arr[i] != arr[j]:
return 0
i += 1
j -= 1
return 1
# Test
np.random.seed(0)
for i in range(10):
n = np.random.randint(low=2, high=11)
arr = np.random.randint(low=2, high=11, size=n)
print('Input ' + str(i + 1) + ':', list(arr))
print('Output ' + str(i + 1) + ':', fun_413(arr), end='\n\n')<jupyter_output>Input 1: [2, 5, 5, 9, 5, 7, 4]
Output 1: 9
Input 2: [9, 8, 10, 10, 3, 8]
Output 2: 7
Input 3: [9, 10, 3, 7, 10, 6, 5, 2, 5]
Output 3: 10
Input 4: [2, 4, 5, 10, 3, 5, 5]
Output 4: 8
Input 5: [9, 2, 3, 2, 6]
Output 5: 6
Input 6: [5, 4, 9, 4, 2, 2, 6, 7, 7]
Output 6: 12
Input 7: [10, 6, 3, 6, 10, 3, 3, 9]
Output 7: 11
Input 8: [8, 9, 4, 2, 5]
Output 8: 5
Input 9: [6, 6, 8, 6, 6, 5, 6]
Output 9: 12
Input 10: [10, 6, 5, 9, 7, 7]
Output 10: 7
<jupyter_text>## Exercise ($\star\star\star$)
- Problem:
- follow up on [exercise 4.13](#4.13)
- find the number of [palindromic](https://en.wikipedia.org/wiki/Palindrome) subarray
- find the solution with the complexity below
- this exercise will be followed by
- [exercise 4.15](#4.15)
- Complexity:
- $O(n^2)$ time
- $O(n^2)$ space
- Skills:
- solving a problem by starting with a brute-force solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.13)<jupyter_code># Implementation
def fun_414(arr):
"""
Find the number of palindromic subarray
Parameters
----------
arr : a list of integers
Returns
----------
The number of palindromic subarray : an integer
"""
# Implement me
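    # One possible O(n^2) DP sketch: dp[i][j] records whether arr[i..j] is palindromic.
    n = len(arr)
    dp = [[False] * n for _ in range(n)]
    count = 0
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            if arr[i] == arr[j] and (length <= 2 or dp[i + 1][j - 1]):
                dp[i][j] = True
                count += 1
    return count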
# Test
np.random.seed(0)
for i in range(10):
n = np.random.randint(low=2, high=11)
arr = np.random.randint(low=2, high=11, size=n)
print('Input ' + str(i + 1) + ':', list(arr))
print('Output ' + str(i + 1) + ':', fun_414(arr), end='\n\n')<jupyter_output>Input 1: [2, 5, 5, 9, 5, 7, 4]
Output 1: 9
Input 2: [9, 8, 10, 10, 3, 8]
Output 2: 7
Input 3: [9, 10, 3, 7, 10, 6, 5, 2, 5]
Output 3: 10
Input 4: [2, 4, 5, 10, 3, 5, 5]
Output 4: 8
Input 5: [9, 2, 3, 2, 6]
Output 5: 6
Input 6: [5, 4, 9, 4, 2, 2, 6, 7, 7]
Output 6: 12
Input 7: [10, 6, 3, 6, 10, 3, 3, 9]
Output 7: 11
Input 8: [8, 9, 4, 2, 5]
Output 8: 5
Input 9: [6, 6, 8, 6, 6, 5, 6]
Output 9: 12
Input 10: [10, 6, 5, 9, 7, 7]
Output 10: 7
<jupyter_text>## Exercise ($\star\star\star$)
- Problem:
- follow up on [exercise 4.14](#4.14)
- find the number of [palindromic](https://en.wikipedia.org/wiki/Palindrome) subarray
- find the solution with the complexity below
- Complexity:
- $O(n^2)$ time
- $O(n)$ space
- Skills:
- solving a problem by starting with a brute-force solution
- Logic: we strongly recommend you to try to find the solution yourself before looking at the [hints](#hints_4.15)<jupyter_code># Implementation
def fun_415(arr):
"""
Find the number of palindromic subarray
Parameters
----------
arr : a list of integers
Returns
----------
The number of palindromic subarray : an integer
"""
# Implement me
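    # One possible O(n^2)-time sketch using only constant extra space: expand
    # around every possible palindrome centre and count each successful expansion.
    n = len(arr)
    count = 0
    for centre in range(2 * n - 1):
        left, right = centre // 2, centre // 2 + centre % 2
        while left >= 0 and right < n and arr[left] == arr[right]:
            count += 1
            left -= 1
            right += 1
    return count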
# Test
np.random.seed(0)
for i in range(10):
n = np.random.randint(low=2, high=11)
arr = np.random.randint(low=2, high=11, size=n)
print('Input ' + str(i + 1) + ':', list(arr))
print('Output ' + str(i + 1) + ':', fun_415(arr), end='\n\n')<jupyter_output>Input 1: [2, 5, 5, 9, 5, 7, 4]
Output 1: 9
Input 2: [9, 8, 10, 10, 3, 8]
Output 2: 7
Input 3: [9, 10, 3, 7, 10, 6, 5, 2, 5]
Output 3: 10
Input 4: [2, 4, 5, 10, 3, 5, 5]
Output 4: 8
Input 5: [9, 2, 3, 2, 6]
Output 5: 6
Input 6: [5, 4, 9, 4, 2, 2, 6, 7, 7]
Output 6: 12
Input 7: [10, 6, 3, 6, 10, 3, 3, 9]
Output 7: 11
Input 8: [8, 9, 4, 2, 5]
Output 8: 5
Input 9: [6, 6, 8, 6, 6, 5, 6]
Output 9: 12
Input 10: [10, 6, 5, 9, 7, 7]
Output 10: 7
<jupyter_text>## Discussion
The difference (in run time) between fun_413, fun_414, and fun_415 is shown in fig. 5. Here, fun_414 and fun_415 (DP-based, $O(n^2)$ time complexity) are faster than fun_413 (brute-force, $O(n^3)$ complexity).<jupyter_code>import numpy as np
import time
%matplotlib inline
import matplotlib.pyplot as plt
def plot_416(n, funs, sorted_):
"""
Plot the run time of the functions (with respect to the input size).
Parameters
----------
n : an integer
funs : a list of functions
sorted_ : an integer which says:
-1 : sorted in descending order
0 : not sorted
1 : sorted in ascending order
"""
np.random.seed(0)
x = list(range(1, n + 1))
ys = [[] for _ in range(len(funs))]
for i in x:
arr = np.random.randint(low=0, high=2, size=i)
if sorted_ == -1:
arr = sorted(arr, reverse=True)
elif sorted_ == 0:
np.random.shuffle(arr)
for j in range(len(funs)):
start = time.time()
funs[j](arr)
end = time.time()
ys[j].append(end - start)
for j in range(len(funs)):
plt.plot(x, ys[j], label=funs[j].__name__)
plt.xlabel('$n$', fontsize=20)
plt.ylabel('Run time', fontsize=20)
plt.xticks([min(x), max(x)], fontsize=20)
plt.yticks([min([ys[j][k] for j in range(len(funs)) for k in range(len(ys[j]))]), max([ys[j][k] for j in range(len(funs)) for k in range(len(ys[j]))])], fontsize=20)
plt.legend(fontsize=20)
plt.tight_layout()
plt.show()
plot_416(10 ** 2, [fun_413, fun_414, fun_415], 0)
print("Figure 5. The run time of fun_413, fun_414, and fun_415")<jupyter_output><empty_output>
|
no_license
|
/gwu/algorithm_design_for_data_scicence/spring_2019/lecture_notes/chapter_6/driver/chapter_6_dynamic_programming.ipynb
|
yuxiaohuang/teaching
| 18 |
<jupyter_start><jupyter_text># Exercises: Combinatorics and Probability
## Exercise 1
Persons $a,b,c,d$ are each to give a talk at a conference. How many different orders of the speakers are possible if
1. there are no restrictions,
2. a is to speak first in any case, or
3. d is not to speak last?
## Exercise 2
A multiple-choice test consists of four questions; for each question three answers are offered, only one of which is correct. What is the probability of answering
1. all four questions,
2. exactly one question correctly by random guessing?
## Exercise 3
How many sets of initials are there consisting of a first, middle, and last name?
## Exercise 4
Andreas celebrates his birthday and invites eight people. Everyone shakes hands with everyone else. How many handshakes take place?
## Exercise 5
In the lottery 6aus49, 6 balls are drawn from a set of 49 balls. How many ways are there to draw the 6 balls? They are drawn without replacement. What is the probability that 4 of the 6 selected balls are correct?
## Exercise 6
How many possible seating arrangements are there for five people in a car if only three of them hold a driver's license?
## Exercise 7
Suppose we have five people, among them one man. How many ways are there to line up these five people for a photo? Now suppose the man must always stand in the middle. How many ways are there in that case?
## Exercise 8
A password is to have exactly ten characters. The characters may consist of lowercase letters, uppercase letters, and digits, and each password is to contain exactly one of ten special characters. How many different passwords are there?
## Exercise 9
Compute the probability of the following events with two fair dice. Do this by means of a Laplace experiment and Python.
1. Exactly one 4 is rolled,
2. the two dice sum to 7,
3. a 3 or a 5 is rolled, and
4. an odd number of pips is rolled.
## Exercise 10
A blood test for cancer has the following properties:
1. $p(\text{test result is positive}\vert\text{person has cancer}) = 0.98$,
2. $p(\text{test result is negative}\vert\text{person does not have cancer}) = 0.97$,
3. $p(\text{person has cancer}) = 0.01$.
Compute the following probabilities:
1. $p(\text{person does not have cancer})$,
2. $p(\text{test result is positive}\vert\text{person does not have cancer})$,
3. $p(\text{test result is negative}\vert\text{person has cancer})$,
4. $p(\text{person does not have cancer}\vert\text{test result is positive})$?
How can 1, 2, 3 be computed in Python? <jupyter_code># insert code here<jupyter_output><empty_output>
|
no_license
|
/Übungen/Übung Kombinatorik und Stochastik.ipynb
|
JulienKlaus/vorkurs_informatik
| 1 |
<jupyter_start><jupyter_text># Quiz 3 Solutions
## MCS 275 Spring 2021 - David Dumas
### Solutions by Jennifer Vaccaro## Problem 2: Arithmetic forbidden - 4 points
### THIS IS THE ONLY PROBLEM ON THE QUIZ
Make a subclass of the built-in class `int` called `NoArithInt` which "forbids arithmetic", meaning that it is not possible to use the operations `+`, `-`, `*`, or `/` with two instances of `NoArithInt`.
Save your class definition in a file called `quiz3prob2.py` and submit it.
Your subclass should NOT have its own constructor, because `int` already has one that does everything needed. The only methods in your subclass should be special methods that handle the arithmetic operations listed above.<jupyter_code># MCS 275 Quiz 3 Problem 2: Minimal Solution
# Jennifer Vaccaro
# This code is my own work, written in
# accordance with the rules in the syllabus.
class NoArithInt(int):
"""Subclass of int that does not allow operators +,-,*,/"""
def __add__(self, other):
"""Return NotImplemented, since you cannot add NoArithInts."""
# Always returns NotImplemented, independent of the type of other
return NotImplemented
def __sub__(self,other):
"""Return NotImplemented, since you cannot subtract NoArithInts."""
return NotImplemented
def __mul__(self, other):
"""Return NotImplemented, since you cannot multiply NoArithInts."""
return NotImplemented
def __truediv__(self, other):
"""Return NotImplemented, since you cannot divide NoArithInts."""
return NotImplemented
#You do not need to implement radd, rmul, etc. since those will only be used when the "left side" of the operator is NOT a NoArithInt.<jupyter_output><empty_output><jupyter_text>Here are some test cases with integers.<jupyter_code># make some instances of int
x = 5
y = 15
# Test that arithmetic between ints works
print(x+y) # 20
print(x-y) # -10
print(x*y) # 75
print(y/x) # 3.0<jupyter_output>20
-10
75
3.0
<jupyter_text>And here are some similar statements that attempt to do arithmetic with `NoArithInt` objects, which should fail. It is assumed here that `NoArithInt` is available in the same scope as this code is run, so you might need to change it to `quiz3prob2.NoArithInt` and add `import quiz3prob2` if you're running these statements in a separate file.<jupyter_code># make some no arithmetic integers
x = NoArithInt(5)
y = NoArithInt(15)
# Test that arithmetic between NoArithInt instances fails.
# should give TypeError: unsupported operand type(s) for +: 'NoArithInt' and 'NoArithInt'
print(x+y)
# should give TypeError: unsupported operand type(s) for -: 'NoArithInt' and 'NoArithInt'
print(x-y)
# should give TypeError: unsupported operand type(s) for *: 'NoArithInt' and 'NoArithInt'
print(x*y)
# should give TypeError: unsupported operand type(s) for /: 'NoArithInt' and 'NoArithInt'
print(y/x)
# WARNING: To test the print() statements above, you'll need to run them one at a time.
# If you just copy all of them into a file and run it, execution will stop as soon as
# the first one raises an exception.<jupyter_output><empty_output><jupyter_text>Here's a bonus solution, which considers a different behavior for the case when "other" is NOT a NoArithInt. Specifically, it uses super() to recreate the behavior of performing arithmentic between an int and other.<jupyter_code># MCS 275 Quiz 3 Problem 2: Supplementary Solution
# Jennifer Vaccaro
# This code is my own work, written in
# accordance with the rules in the syllabus.
class NoArithInt(int):
"""Subclass of int that does not allow operators +,-,*,/"""
def __add__(self, other):
"""Return NotImplemented if other is a NoArithInt
otherwise, behave like an integer and add."""
# Check the type of other...if it's a NoArithInt, then
# returning NotImplemented will raise the appropriate TypeError
if isinstance(other, NoArithInt):
return NotImplemented
else:
# otherwise, add the "other" item to self as if
# self is an integer, using super()
return super().__add__(other)
def __sub__(self,other):
"""Return NotImplemented if other is a NoArithInt
otherwise, behave like an integer and subtract."""
if isinstance(other, NoArithInt):
return NotImplemented
else:
return super().__sub__(other)
def __mul__(self, other):
"""Return NotImplemented if other is a NoArithInt
otherwise, behave like an integer and multiply."""
if isinstance(other, NoArithInt):
return NotImplemented
else:
return super().__mul__(other)
def __truediv__(self, other):
"""Return NotImplemented if other is a NoArithInt
otherwise, behave like an integer and divide."""
if isinstance(other, NoArithInt):
return NotImplemented
else:
return super().__truediv__(other)
#Again, you do not need to implement radd, rmul, etc., but this solution means that operators will behave commutatively.<jupyter_output><empty_output><jupyter_text>Finally, here are some test cases that check the behavior if arithmetic is used between a NoArithInt and another type.<jupyter_code>a = 5
b = 3.5
c = "hello!"
x = NoArithInt(15)
print(a+x) # int + NoArithInt, should be 20
print(x+a) # NoArithInt + int, should be 20
print(b+x) # float + NoArithInt, should be 18.5
print(x+b) # NoArithInt + float, should be 18.5
print(x+c) # NoArithInt + string, should fail (like an int would!)<jupyter_output>20
20
18.5
18.5
|
no_license
|
/quizzes/quiz3_soln.ipynb
|
daviddumas/mcs275spring2021
| 5 |
<jupyter_start><jupyter_text># Myo EMG Preprocessing
The Thalmic Labs Myo has 3 different modes of sending sEMG data:

The above image shows the officially supported modes, as listed in the [Myo Bluetooth Protocol here.](https://github.com/thalmiclabs/myo-bluetooth/blob/master/myohw.h)
As you may notice, 0x01 is strangely missed out, however sending this command gives a 50Hz stream of rectified and filtered sEMG data.
I have recorded 20 seconds of hand waving to see the difference between these modes.
Modes 1 and 2 were recorded shortly after one another, whereas mode 3 was discovered later.
To keep the positioning similar for all 3 recordings, the position of the main sensor pad with the LED pad was drawn with pen on my hand and subsequent recordings were aligned to this positioning.
**What is this notebook?**
This notebook was made to consider how we might filter data coming from the Myo in these different modes and why we might do that.
**TL;DR for preprocessing raw data**
1. Rectify (Take the absolute value) of the signal. As sEMG data has a positive and negative component, most simple applications only care about the amplitude.
2. Apply a low pass filter to remove 0 readings and reduce noise in the data.
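A minimal sketch of these two steps on one channel (an assumption for illustration: a pandas Series of raw sEMG values sampled at 200 Hz, smoothed with the same SciPy Butterworth filter used later in this notebook):<jupyter_code># Sketch: rectify, then low-pass filter, one raw sEMG channel (names are illustrative).
import numpy as np
from scipy.signal import butter, lfilter

def preprocess_emg(raw, fs=200.0, cutoff=2.0, order=6):
    rectified = np.abs(raw)                        # step 1: keep only the amplitude
    b, a = butter(order, cutoff / (0.5 * fs), btype='low')
    return lfilter(b, a, rectified)                # step 2: smooth out zeros and noise<jupyter_output><empty_output><jupyter_text>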
**Context in relation to the rest of the project:**
To achieve the goal of playing breakout with my hands, I need to make a model to predict the desired position of the paddle on screen (Rect) from my sEMG data.
<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
cols = ['One', 'Two', 'Three', "Four", "Five", "Six", "Seven", "Eight", "Rect", "Time_ns"]
df_f = pd.read_csv("20_0x01_filtered.csv")
df_f.columns = cols
df_f['Rect'] = df_f['Rect'] / max(df_f['Rect'])
df_rf = pd.read_csv("20_0x02_raw_filtered.csv")
df_rf.columns = cols
df_rf['Rect'] = df_rf['Rect'] / max(df_rf['Rect'])
df_r = pd.read_csv("20_0x03_raw.csv")
df_r.columns = cols
df_r['Rect'] = df_r['Rect'] / max(df_r['Rect'])
<jupyter_output><empty_output><jupyter_text>The frequencies of data we recieve using the different modes changes. <jupyter_code>len(df_r)/20, len(df_rf)/20, len(df_f)/20
df_r.drop("Time_ns",1).plot()
plt.title('Raw data, 0x03')
df_rf.drop("Time_ns",1).plot()
plt.title('Raw filtered data, 0x02')
df_f.drop("Time_ns",1).plot()
plt.title('Filtered data, 0x01')
df_r.describe()
df_r['Three'].var() / sum(df_r.drop(['Time_ns', 'Rect'],1).var()) * 100
df_r.drop(['Time_ns', 'Rect'],1).var()<jupyter_output><empty_output><jupyter_text>As the third channel accounts for over 76% of the variance in the training data, this is the channel that most analysis will focus on.
## Considering the relationship between the paddle and the sEMG<jupyter_code>plt.figure(figsize=(10,8))
channel = "Three"
channel_data = df_r[channel] / max(df_r[channel])
plt.plot(channel_data)
plt.plot(df_r['Rect'])
plt.title('20 Seconds of training data')
# show a legend on the plot
plt.legend(['Raw sEMG channel 3', 'Paddle Position'])
# Display a figure.
plt.show()
sns.heatmap(df_r[['Three', 'Rect']].corr(), annot=True).set(title='Correlations of third channel to paddle position.')<jupyter_output><empty_output><jupyter_text>We can see the raw sEMG data and the position of the paddle on screen has a near zero correlation, and therefore we cannot expect a regression model to take the raw data from the third channel and do something useful with it, we have to do some *preprocessing*.
# Which mode to use?
If we consider the correlation between the data for different modes, this can help us see which we might want to use, we can then look into why it's useful. <jupyter_code>sns.heatmap(df_f.corr(), annot=True)<jupyter_output><empty_output><jupyter_text>In the filtered 50Hz signal the Myo provides, we can see a corrolation between channels 3,4 and the position of the rectangle on the screen, rect. <jupyter_code>sns.heatmap(df_r.corr(), annot=True)<jupyter_output><empty_output><jupyter_text>However, if we look at mode 0x03, the raw unfiltered emg data at 200Hz, we get a very low corrolation.
If we look back at the plots, one clear reason is that the signal has a mean of 0, due to the positive and negative component of sEMG. # Step 1 - Rectification
As we are only interested in the amount of force applied, we only care about the amplitude of our sEMG signal and therefore can apply rectification. <jupyter_code>df_r.mean()
df_r['Three'].mean()<jupyter_output><empty_output><jupyter_text>As sEMG signals have positive and negative components, the mean of our signal is close to 0.
We can fix this by using the absolute value. <jupyter_code>raw_rect = abs(df_r)
raw_rect = raw_rect.drop("Time_ns",1)
raw_rect.plot()
raw_rect_scaled = raw_rect/128
raw_rect_scaled["Rect"] = raw_rect['Rect']
raw_rect_scaled.plot()
sns.heatmap(raw_rect_scaled.corr(), annot=True)<jupyter_output><empty_output><jupyter_text>Rectifying the signal has greatly improved the corrolations, but they are still lower than the 50Hz filtered signal. # Filtering the raw rectified signal.<jupyter_code>myo_raw_filtered_rect = abs(df_rf)
myo_raw_filtered_rect = myo_raw_filtered_rect.drop("Time_ns",1)
myo_raw_filtered_rect.plot()
myo_raw_filtered_rect['Three'].max()
myo_r3 = myo_raw_filtered_rect[['Three','Rect']]
# Applying scaling to the signal
# myo_r3['Three'] = myo_r3['Three']/127
myo_r3['Three'] = myo_r3['Three'] / max(myo_r3['Three'])
myo_r3.plot()
rr3 = raw_rect_scaled[['Three', 'Rect']]
rr3.plot()
sns.heatmap(df_f[['Three', 'Rect']].corr(), annot=True)
sns.heatmap(rr3.corr(), annot=True)<jupyter_output><empty_output><jupyter_text># The 100hz signals are often 0 valued.
We can see our rectified signal, still is not great.
If we look at the graphs, the raw output from the sensor, rr3, seems to have 0 values frequently, causing the prediction of the paddle, to hit 0 frequency and be useless for prediction. <jupyter_code>myo3 = df_f[['Rect','Three']]
myo3['Three'] = myo3['Three'] / max(myo3['Three'])
myo3.plot()
rr3.plot()<jupyter_output><empty_output><jupyter_text>## If we zoom into the graph we can see the problem. <jupyter_code>rr3[2750:3150].plot()<jupyter_output><empty_output><jupyter_text>As Rect rises, my hand has to move further right and therefore I should be using my muscle to do this, therefore I would not expect a signal at 0.
We can specifically look for these signals to get an idea of how many there are:<jupyter_code>rr3.loc[(rr3['Three'] < 0.2) & (rr3['Rect'] > 0.8)]
high_val = 0.8
low_val = 0.2
high_data = rr3.loc[(rr3['Rect'] > high_val)]
print(len(high_data), "rows in our data should have high muscle activations.")
high_data_but_low_muscle = high_data.loc[(rr3['Three'] < low_val)]
print(len(high_data_but_low_muscle), "rows do not have high values.")
len(high_data_but_low_muscle)/len(high_data) * 100<jupyter_output>882 rows in our data should have high muscle activations.
451 rows do not have high values.
<jupyter_text>### High pass filters
As the data sent using mode 0x03, gives values between -128 and 127, the data is bounded and therefore we see some peaking. Using a highpass filter can help remove this. <jupyter_code>import numpy as np
import pandas as pd
from scipy import signal
import matplotlib.pyplot as plt
def sine_generator(fs, sinefreq, duration):
T = duration
nsamples = fs * T
w = 2. * np.pi * sinefreq
t_sine = np.linspace(0, T, nsamples, endpoint=False)
y_sine = np.sin(w * t_sine)
result = pd.DataFrame({
'data' : y_sine} ,index=t_sine)
return result
def butter_highpass(cutoff, fs, order=5):
nyq = 0.5 * fs
normal_cutoff = cutoff / nyq
b, a = signal.butter(order, normal_cutoff, btype='high', analog=False)
return b, a
def butter_highpass_filter(data, cutoff, fs, order=5):
b, a = butter_highpass(cutoff, fs, order=order)
y = signal.filtfilt(b, a, data)
return y
fps = 30
sine_fq = 10 #Hz
duration = 10 #seconds
sine_5Hz = sine_generator(fps,sine_fq,duration)
sine_fq = 1 #Hz
duration = 10 #seconds
sine_1Hz = sine_generator(fps,sine_fq,duration)
sine = sine_5Hz + sine_1Hz
filtered_sine = butter_highpass_filter(sine.data,10,fps)
plt.figure(figsize=(20,10))
plt.subplot(211)
plt.plot(range(len(sine)),sine)
plt.title('generated signal')
plt.subplot(212)
plt.plot(range(len(filtered_sine)),filtered_sine)
plt.title('filtered signal')
plt.show()
filtered_sine = butter_highpass_filter(rr3['Three'],10,fps)
plt.plot(range(len(filtered_sine)),filtered_sine)
filtered_rr3 = butter_highpass_filter(rr3['Three'],1,fps)
rr3['Filtered'] = filtered_rr3
rr3.plot()<jupyter_output><ipython-input-27-2d8bc61767f7>:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
rr3['Filtered'] = filtered_rr3
<jupyter_text># Low pass filter
A low pass filter attenuates signals with frequencies higher than the cut off frequency.
The 0x03 data set from the Myo has a lot of high frequency noise which often results in low sEMG data being sent when the muscles are contracted.
Note that after data has been though a low pass filter, the rate needed to sample this data and recreate it changes, this is why mode 0x01 sends data at 50Hz, instead of the 200Hz the Myo uses otherwise. <jupyter_code>'''
https://stackoverflow.com/questions/25191620/creating-lowpass-filter-in-scipy-understanding-methods-and-units
'''
import numpy as np
from scipy.signal import butter, lfilter, freqz
import matplotlib.pyplot as plt
def butter_lowpass(cutoff, fs, order=5):
nyq = 0.5 * fs
normal_cutoff = cutoff / nyq
b, a = butter(order, normal_cutoff, btype='low', analog=False)
return b, a
def butter_lowpass_filter(data, cutoff, fs, order=5):
b, a = butter_lowpass(cutoff, fs, order=order)
y = lfilter(b, a, data)
return y
# Filter requirements.
order = 6
fs = 30.0 # sample rate, Hz
cutoff = 3.667 # desired cutoff frequency of the filter, Hz
# Get the filter coefficients so we can check its frequency response.
b, a = butter_lowpass(cutoff, fs, order)
# Plot the frequency response.
w, h = freqz(b, a, worN=8000)
plt.subplot(2, 1, 1)
plt.plot(0.5*fs*w/np.pi, np.abs(h), 'b')
plt.plot(cutoff, 0.5*np.sqrt(2), 'ko')
plt.axvline(cutoff, color='k')
plt.xlim(0, 0.5*fs)
plt.title("Lowpass Filter Frequency Response")
plt.xlabel('Frequency [Hz]')
plt.grid()
# Demonstrate the use of the filter.
# First make some data to be filtered.
T = 5.0 # seconds
n = int(T * fs) # total number of samples
t = np.linspace(0, T, n, endpoint=False)
# "Noisy" data. We want to recover the 1.2 Hz signal from this.
data = np.sin(1.2*2*np.pi*t) + 1.5*np.cos(9*2*np.pi*t) + 0.5*np.sin(12.0*2*np.pi*t)
# Filter the data, and plot both the original and filtered signals.
y = butter_lowpass_filter(data, cutoff, fs, order)
plt.subplot(2, 1, 2)
plt.plot(t, data, 'b-', label='data')
plt.plot(t, y, 'g-', linewidth=2, label='filtered data')
plt.xlabel('Time [sec]')
plt.grid()
plt.legend()
plt.subplots_adjust(hspace=0.35)
plt.show()
# Filter requirements.
order = 6
fs = 200.0 # sample rate, Hz
cutoff = 3.667 # desired cutoff frequency of the filter, Hz
filtered_rr3 = butter_lowpass_filter(rr3['Three'], cutoff, fs, order)
rr3['Filtered'] = filtered_rr3
rr3.plot()
plt.title('Lowpass filtered')
sns.heatmap(rr3.corr(), annot=True).set(title='Correlations to Rect with pre and post filter data')
sns.heatmap(df_f[["Three", "Rect"]].corr(), annot=True).set(title='Myo mode 1 filter')
# Filter requirements.
order = 6
fs = 200.0 # sample rate, Hz
cutoff = 25 # desired cutoff frequency of the filter, Hz
filtered_rr3 = butter_lowpass_filter(rr3['Three'], cutoff, fs, order)
rr3['Filtered'] = abs(filtered_rr3)
rr3.plot()
plt.title('Lowpass filtered')
sns.heatmap(rr3.corr(), annot=True).set(title='Correlations to Rect with pre and post filter data')
# Filter requirements.
order = 6
fs = 200.0 # sample rate, Hz
cutoff = 25 # desired cutoff frequency of the filter, Hz
for f in range(1,25):
filtered_rr3 = butter_lowpass_filter(rr3['Three'], f, fs, order)
rr3['Filtered'] = abs(filtered_rr3)
c = rr3.corr()['Filtered']['Rect']
print(f, c)<jupyter_output>1 0.6223238141524972
2 0.6672026591261138
3 0.6446341847942889
4 0.620448234436012
5 0.5991910391376487
6 0.581390785444717
7 0.5651418537058566
8 0.5495200738518483
9 0.5347261069952158
10 0.5219258724643846
11 0.5109851555810286
12 0.5012731590750525
13 0.49234933801882536
14 0.4839825546988019
15 0.4761073911678768
16 0.4689792532570217
17 0.46253548904547964
18 0.4566436572208237
19 0.4512421623955126
20 0.44612082404423853
21 0.44127958935914713
22 0.4365166886775176
23 0.43191183986511555
24 0.4273175320375244
<jupyter_text>The corrolation between low pass filtered signal and paddle position is achieved when the filter has a cut off frequency of 2Hz, as shown in the graph below:<jupyter_code># Filter requirements.
order = 6
fs = 200.0 # sample rate, Hz
cutoff = 2 # desired cutoff frequency of the filter, Hz
filtered_rr3 = butter_lowpass_filter(rr3['Three'], cutoff, fs, order)
rr3['Filtered'] = abs(filtered_rr3)
rr3.plot()
plt.title('Lowpass filtered')<jupyter_output><ipython-input-52-070d96e03099>:8: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
rr3['Filtered'] = abs(filtered_rr3)
<jupyter_text>Pygame has capped updates at 60fps, therefore when gathering this training data the paddle moved 10 pixel per second, before hitting the end wall at position:
(Width of the window) - (Width of the paddle) = 1600 - 200 = 1400
Therefore the rising edge must take 140 frames, and one full cycle taking 280 frames.
So at 60 frames per second, we would expect each cycle to take 14/3 ~ 4.6 seconds to complete.
Given we have 20 seconds of data, we would therefore expect ~4.28 cycles, which we can see we have.
So the signal I was following with my hand was 1 cycle in ~4.6 frames, therefore around 0.214 cycles per frame. <jupyter_code># Filter requirements.
order = 6
fs = 200.0 # sample rate, Hz
cutoff = 4 * 3/14 # desired cutoff frequency of the filter, Hz
filtered_rr3 = butter_lowpass_filter(rr3['Three'], cutoff, fs, order)
rr3['Filtered'] = abs(filtered_rr3)
rr3.plot()
plt.title('Lowpass filtered')<jupyter_output><ipython-input-65-75e4f918cf22>:8: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
rr3['Filtered'] = abs(filtered_rr3)
|
permissive
|
/Notebooks/MyoModesCompared/MyoEMGPreprocessing.ipynb
|
PerlinWarp/Neuro-Breakout
| 15 |
<jupyter_start><jupyter_text>pivot tables with percentages as opposed to numbers <jupyter_code># print(ks["percent_funded"].mean())
fig,ax = plt.subplots(1)
ks["percent_funded"][ks.percent_funded<3].plot.hist(bins = 50, color='green', ax=ax, title="Percent Funded Histogram", rot =45)
ax.set_xlabel("Percent Funded")
ks['percent_funded'].sort_values(ascending=True).head(10)
ks['goal'].plot.hist(bins=400)
np.log(ks['goal']).plot.hist(bins=40)
ks["goal"][ks.goal<10].value_counts()
ks2 = ks[(ks['goal'] < 10)]
ks2
ks2.loc[:,"goal"].mean()
ks2.state.value_counts()
# show pie charts of ks2 vs ks regular
ks.state.value_counts()
np.log(ks['goal']).plot.hist(bins=40)
np.log(ks2['goal']).plot.hist(bins=40)
ks4 = ks.pledged.replace("0.00"," 0.001")
ks4
ks['goal'].plot.hist(bins=40)
ks2['goal'].plot.hist(bins=40)
ks['pledged'].plot.hist(bins=10)
ks.plot(
kind='scatter',
x="percent_funded",
y="backers",
)
ks[ks.percent_funded<10].plot(
kind='scatter',
x="percent_funded",
y="backers",
alpha = .1
)
ks_adjustedforscatter = ks[ks.percent_funded<3]
ks_adjustedforscatter2 = ks_adjustedforscatter[ks_adjustedforscatter.backers<1000]
fig,ax = plt.subplots(1)
ks_adjustedforscatter2.plot(ax=ax,
kind='scatter',
x="percent_funded",
y="backers",
fontsize = 20,
alpha = .1,
color = "green",
figsize = (30,10)
);
ax.set_title('Backers v. Percent Funded', fontsize=22)
ax.set_xlabel('Percent Funded', fontsize=20)
ax.set_ylabel('Backers', fontsize=20)<jupyter_output><empty_output><jupyter_text>seeing if we can predict the percent funded or something similar using sklearn<jupyter_code>from sklearn.cluster import KMeans
KMeans(n_clusters=10)
print(smf.ols('goal ~ state',data=ks).fit().summary())
ks.plot.scatter(y='goal',x='backers',alpha=.25)
ks['deadline']=ks['deadline'].astype('datetime64[ns]')
ks['launched_date'] = ks['launched_date'].astype('datetime64[ns]')
ks['delta_time']=ks['deadline'].sub(ks['launched_date'], axis=0)
ks['delta_time']=ks.delta_time.dt.days
ks7=ks.loc[(ks["percent_funded"] < 3) & (ks["delta_time"] < 93)]
fig,ax = plt.subplots(1)
ks7.plot(ax=ax,
kind='scatter', x='delta_time', y='percent_funded',
alpha=.2, figsize = (25,10), color='green', fontsize = 20
);
ax.set_title('Delta Time v. Percent Funded', fontsize=22)
ax.set_xlabel('Delta Time', fontsize=20)
ax.set_ylabel('Percent Funded', fontsize=20)
ks7
ks7.groupby('state').delta_time.mean().plot(kind='box')
ks7.sort_values(by='delta_time',ascending= False)
ks7['state'].value_counts()
ks['state'].value_counts()<jupyter_output><empty_output><jupyter_text>what makes your startup successful on kickstarter?<jupyter_code>fig,ax = plt.subplots(1)
sns.boxplot(ax=ax, y=ks7["percent_funded"], x=ks7["main_category"])
ax.set_title('Main Category v. Percent Funded', fontsize=14)
for tick in ax.get_xticklabels():
tick.set_rotation(37)
sns.boxplot( y=ks7["goal"], x=ks7["main_category"])
sns.factorplot('main_category',data=ks,kind='count',hue='state' ,size=15)
from sklearn.cluster import KMeans as kmeans
# ks['k'] = kmeans(n_clusters = 10).fit_predict.ks.loc[:,'Food':'Crafts']
ks['k'] = kmeans(n_clusters = 10)
ks['k']
[ks['k']==4]
ks
ks8 = ks.groupby('main_category').mean
ks8
ks.plot(
kind='box',
x="main_category",
y="backers",
color = "green",
figsize = (30,10)
)
proportion_currency = ks.groupby(['country'])['country'].count() / ks['main_category'].count()
proportion_currency.sort_values(ascending=True).plot(kind='bar', title='% of campaigns by country', figsize=(10,5))
ks.currency.value_counts()
import pylab
import matplotlib.pyplot as plt
import matplotlib.pyplot as plt
labels= ['USA', 'U.K.','EU', 'Canada','Austrailia','Sweeden','Mexico', 'New Zeland','Denmark','Switzerland', 'Norway', 'Hong Kong', 'Singapore', 'Japan']
sizes= [292627, 33672, 117219, 14756, 7839, 1757,1752,1447,1113,761,708,618,555,40]
plt.pie(sizes, labels=labels, startangle=90, autopct='%1.1f%%')
plt.axis('equal')
plt.show()
goal_fund = ks['goal'].groupby(ks['main_category'])
goal_fund.mean().reset_index(name='mean').sort_values(['mean'], ascending=False)
ks['pledged'].describe()
ks.main_category.value_counts()
ks.category.value_counts()
ks_film = ks[ks['main_category']=='Film & Video']
ks_film.head(10)
ks_film.category.value_counts()
ks_film['goal'].describe()
ks_film['usd pledged'].quantile(0.90)
print(415/626970)
print(ks_film.state.value_counts())
print(23612/(32892+5744+332+117))
ks_film.sort_values(['goal'], ascending=True)
ks
ks['main_category'].value_counts()
fig,ax = plt.subplots(1)
#ks["launched_date"].plot(ax=ax)
ks7.plot.bar(ax=ax, x="launched_date",color = "green", figsize = (30,10)
);
ks["launched_date"].plot()
print(smf.ols('goal ~ main_category',data=ks).fit().summary())
print(smf.ols('percent_funded ~ goal',data=ks).fit().summary())
print(smf.ols('goal ~ percent_funded',data=ks).fit().summary())
def year_cut(string):
return string[0:4]
ks['year'] = ks['launched'].apply(year_cut)
ks['year'] = ks['year'].astype(int)
ks['year'].value_counts()
ks_yr = {}
for year in range(2009, 2019):
ks_yr[year] = ks[ks['year'] == year]['year'].count()
ks_yr = pd.Series(ks_yr)
ks_yr = pd.DataFrame(ks_yr)
ks_yr = ks_yr.rename(columns = {0: "counts"})
ks_yr
fig,ax = plt.subplots(1)
ks_yr['counts'].plot(ax=ax,
color='green');
ax.set_title('Kickstarter Projects by Year', fontsize=22)
ax.set_xlabel('Year', fontsize=20)
ax.set_ylabel('Frequency', fontsize=20)
pivot_yr2 = pd.pivot_table(ks,
values='goal',
index=['year'],
columns=['state'],
aggfunc="count"
#title = "Category vs. State"
)
pivot_yr2
pivot_yr2["Total"] = pivot_yr2.sum(axis=1)
pivot_yr2 = pivot_yr2.apply(lambda x: (x / pivot_yr2["Total"])*100)
# pivot_yr2["Count"] = pivot_yr['Total']
pivot_yr2
pivot_yr = pd.pivot_table(ks,
values='goal',
index=['year'],
columns=['state'],
aggfunc="count"
#title = "Category vs. State"
)
pivot_yr
pivot_yr["Total"] = pivot_yr.sum(axis=1)
# pivot_yr = pivot_yr.apply(lambda x: (x / pivot_yr["Total"])*100)
# pivot_yr["Count"] = pivot_yr['Total']
pivot_yr.fillna(0)
pivot_yr
pivot_yr.style.apply(highlight_max,axis=0)
pivot_yr.fillna(0).style.apply(highlight_max,axis=0)
ks_yr
pivot_yr2 = pd.pivot_table(ks,
values='goal',
index=['year'],
columns=['state'],
aggfunc="count"
#title = "Category vs. State"
)
pivot_yr2
pivot_yr2["Total"] = pivot_yr2.sum(axis=1)
pivot_yr2 = pivot_yr2.apply(lambda x: (x / pivot_yr2["Total"])*100)
pivot_yr2["Count"] = pivot_yr['Total']
pivot_yr2.fillna(0)
pivot_yr2
pivot_yr2['successful']
ks_yr['successful']=pivot_yr2['successful']
ks_yr
fig,ax = plt.subplots(1,2)
sns.set_style("whitegrid")
ks_yr['counts'].plot(ax=ax[0],
color='green');
ks_yr['successful'].plot(ax=ax[1], color='green')
# ax1.set_title(1,2,'Kickstarter Projects by Year', fontsize=18)
ax[0].set_xlabel('Year', fontsize=14)
ax[0].set_ylabel('Frequency', fontsize=14)
ax[1].set_ylabel('Success Rate', fontsize=14)
ax[1].set_xlabel('Year', fontsize=14)
plt.title('Kickstarter Projects by Year & Success Rate', fontsize=18, loc='right')
ax[1].yaxis.tick_right()
sns.set_style("whitegrid")
sns.countplot(ks['country'], order = ks['country'].value_counts().index)
sns.despine(bottom = True, left = True)
pivot_yr3 = pd.pivot_table(ks,
values='goal',
index=['year'],
columns=['main_category'],
aggfunc="count"
#title = "Category vs. State"
)
pivot_yr3
pivot_yr3["Total"] = pivot_yr3.sum(axis=1)
pivot_yr3 = pivot_yr3.apply(lambda x: (x / pivot_yr3["Total"])*100)
pivot_yr3["Count"] = pivot_yr['Total']
pivot_yr3.fillna(0)
pivot_yr3
pivot_yr3.fillna(0).style.apply(highlight_max,axis=0)
pivot_yr4 = pd.pivot_table(ks,
values='goal',
index=['year'],
columns=['main_category'],
aggfunc="count"
#title = "Category vs. State"
)
pivot_yr4
pivot_yr4.fillna(0)
pivot_yr4 = pivot_yr4.drop(1970)
pivot_yr4["Average Project"] = pivot_yr3.sum(axis=1)
pivot_yr4
pivot_yr4.fillna(0).style.apply(highlight_max,axis=0)
fig,ax = plt.subplots(1)
pivot_yr4.plot(ax=ax, figsize=(30,10), marker=10, fontsize='xx-large')
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5),fontsize='xx-large')
ax.set_title('Main Category v. Year', fontsize=26)
ax.set_xlabel("Year", fontsize=22)
ax.set_ylabel("Frequency", fontsize=22)
print('final data bootcamp project finished')<jupyter_output>final data bootcamp project finished
|
no_license
|
/v9_Kickstarter_DataBootcamp_Final.ipynb
|
joseph-meyer/Data_Bootcamp_section_2
| 3 |
<jupyter_start><jupyter_text># Graphing Lines<jupyter_code>import matplotlib.pyplot as plt
import sympy as sym
from math import sqrt
import numpy as np
p1 = [-3,-1]
p2 = [4,4]
plt.plot([p1[0],p2[0]],[p1[1],p2[1]])
plt.show()
x = 3
y = 5
plt.plot(x,y,'ro')
plt.plot([0,x],[0,y],'r')
plt.axis('square')
plt.axis([-6,6,-6,6])
plt.grid()
plt.plot([-6,6],[0,0],'k')
plt.plot([0,0],[-6,6],'k')
plt.show()
x = sym.symbols('x')
y = abs(x)**(1/2)
for i in range(-20,21):
plt.plot([0,i],[0,y.subs(x,i)])
plt.show()<jupyter_output><empty_output><jupyter_text>## Slope-Intercept Form<jupyter_code># y = mx + b
x = [-5,5]
m = 2
b = 1
y = [340,2345]
for i in range(0,len(x)):
y[i] = m*x[i] + b
plt.plot(x,y,label=f'y={m}x+{b}')
plt.axis('square')
plt.grid()
plt.xlim(x)
plt.ylim(x)
axis = plt.gca() # get current axis
plt.plot(axis.get_xlim(),[0,0],'k--')
plt.plot([0,0],axis.get_ylim(),'k--')
plt.legend()
plt.show()
# y = mx + b
x = [-5,5]
m = 2
b = 1
# y = [340,2345]
# for i in range(0,len(x)):
# y[i] = m*x[i] + b
y = m*np.array(x) + b
plt.plot(x,y,label=f'y={m}x+{b}')
plt.axis('square')
plt.grid()
plt.xlim(x)
plt.ylim(x)
axis = plt.gca() # get current axis
plt.plot(axis.get_xlim(),[0,0],'k--')
plt.plot([0,0],axis.get_ylim(),'k--')
plt.legend()
plt.show()
# y = mx + b
x = [-5,5]
m = [.7,-1.25]
b = [-2,.75]
# y = [340,2345]
# for i in range(0,len(x)):
# y[i] = m*x[i] + b
for i in range(0,len(m)):
y = m[i]*np.array(x) + b[i]
plt.plot(x,y,label=f'y={m[i]}x+{b[i]}')
plt.axis('square')
plt.grid()
plt.xlim(x)
plt.ylim(x)
axis = plt.gca() # get current axis
plt.plot(axis.get_xlim(),[0,0],'k--')
plt.plot([0,0],axis.get_ylim(),'k--')
plt.legend()
plt.show()<jupyter_output><empty_output>
|
no_license
|
/notebooks/5-Graphing-and-Vizualization/05.02_graphing_lines.ipynb
|
ccentola/python-math
| 2 |
<jupyter_start><jupyter_text>### 사이킷런을 이용하여 붓꽃(Iris) 데이터 품종 예측하기
<jupyter_code># 사이킷런 버전 확인
import sklearn
print(sklearn.__version__)<jupyter_output>0.21.2
<jupyter_text>** 붓꽃 예측을 위한 사이킷런 필요 모듈 로딩 **<jupyter_code>from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split<jupyter_output><empty_output><jupyter_text>**데이터 세트를 로딩**<jupyter_code>import pandas as pd
# 붓꽃 데이터 세트를 로딩합니다.
iris = load_iris()
# iris.data는 Iris 데이터 세트에서 피처(feature)만으로 된 데이터를 numpy로 가지고 있습니다.
iris_data = iris.data
# iris.target은 붓꽃 데이터 세트에서 레이블(결정 값) 데이터를 numpy로 가지고 있습니다.
iris_label = iris.target
print('iris target값:', iris_label)
print('iris target명:', iris.target_names)
# 붓꽃 데이터 세트를 자세히 보기 위해 DataFrame으로 변환합니다.
iris_df = pd.DataFrame(data=iris_data, columns=iris.feature_names)
iris_df['label'] = iris.target
iris_df.head(3)<jupyter_output>iris target값: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
iris target명: ['setosa' 'versicolor' 'virginica']
<jupyter_text>** 학습 데이터와 테스트 데이터 세트로 분리 **<jupyter_code>X_train, X_test, y_train, y_test = train_test_split(iris_data, iris_label,
test_size=0.2, random_state=11)<jupyter_output><empty_output><jupyter_text>** 학습 데이터 세트로 학습(Train) 수행 **<jupyter_code># DecisionTreeClassifier 객체 생성
dt_clf = DecisionTreeClassifier(random_state=11)
# 학습 수행
dt_clf.fit(X_train, y_train)<jupyter_output><empty_output><jupyter_text>** 테스트 데이터 세트로 예측(Predict) 수행 **<jupyter_code># 학습이 완료된 DecisionTreeClassifier 객체에서 테스트 데이터 세트로 예측 수행.
pred = dt_clf.predict(X_test)
pred<jupyter_output><empty_output><jupyter_text>** 예측 정확도 평가 **<jupyter_code>from sklearn.metrics import accuracy_score
print('예측 정확도: {0:.4f}'.format(accuracy_score(y_test,pred)))<jupyter_output>예측 정확도: 0.9333
|
no_license
|
/2장/2.2 첫 번째 머신러닝 만들어 보기 _ 붓꽃 품종 예측하기.ipynb
|
sunnight9507/Python_Machine_learning_complete_guide
| 7 |
<jupyter_start><jupyter_text>1. model the RV effect of a spot and a plage using the SOAP 2.0 code that I published two years ago (Dumusque et al. 2014), and then
2. simulate the RV effect induced by a star like the Sun.
3. We will inject a planetary signal in the RVs obtained from our solar simulation and see how stellar activity can complicate the detection of small-mass planets.
4. Finally, we will use a Gaussian Process regression, an interesting approach to model short-term stellar activity signal, and thus extract planetary signals despite stellar signa1. Joe Spectrograph studies the star Gliese 33 = HD 4628. He has heard that an exoplanet has been discovered around this star, due to radial velocities which follow this ephemeris: \begin{equation}
RV(t) = K \cos\left(\frac{2\pi(t-T)}{P}\right) + v_{sys}
\end{equation}# a. What is the value of vsys for this star?
According to Nidever et al. (2002), the K2.5V star HD 4628 has (barycentric) RV=-10230 m/s
as observed on $t$ = 9262-2440000 (JD).
Thus, $v_{sys}$ can be derived as:<jupyter_code>import numpy as np
#vsys=v_star+v_planet
T = 2457083.15992 #Julian Date
P = 5.2616 #period in days
K = 69 #Amplitude: m/s
RV = -10230 #star: m/s
t = 9262-2440000 #JD
#systemic velocity of the star
vsys= RV - (K * np.cos(2*np.pi*(t-T)/P))
print("vsys= {} m/s".format(vsys))
#2440000: May 23, 1968 at 12:00
#2457083.15992: Mar 1, 2015 at 15:50:17<jupyter_output>vsys= -10298.93016201734 m/s
<jupyter_text>Refrences:
[1] Simbad result: 10140 m/s;http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=HD+4628&submit=SIMBAD+search
[2] Nidever et al. ApJ SS, 141:503–522, 2002
http://iopscience.iop.org/article/10.1086/340570/pdf
Primary Name, JD(-2440000), $\Delta$T (days), RV (m/s)
HD 4628, 9262, 3747, 10230
The barycentric radial velocities reported here are found
in a manner similar to that used to find the relative radial
velocities for the planet search. An observed spectrum is fitted
with a synthetic spectrum that is composed of the individual
stellar and iodine spectra. In detail, the synthetic
spectrum is the product of the deconvolved stellar ‘‘ template
’’ spectrum (with the spectrometer instrumental profile
removed) with a high-resolution spectrum of molecular
iodine. The product of these two is convolved with the
instrumental profile of the spectrometer (at the time of the
observation), to produce the final synthetic spectrum, as
described in Butler et al. (2000).
standard Doppler analysis to obtain relative velocities, see
Marcy & Butler (1992), Butler et al. (1996), and Valenti et
al. (1995).# b. What is the mass of this star?
Can't use this conservation of momentum: M_star*vsys=M_planet*v_planet since mass of planet is not yet known.
Can't use
K = 203.*(P**(-1./3.))*((M_p/M_Jup)/(((M_star/M_sun)+9.548E-4*(M_p/M_Jup))**(2./3.)))
because this is for K of the planet, not the star.<jupyter_code>RV-vsys #RV of 10230 m/s seems too large for K=68 m/s.
v_planet = RV- vsys
print(v_planet)
M_star = ?<jupyter_output><empty_output><jupyter_text># c. Make a graph showing the radial velocity predicted by this formula over three consecutive cycles<jupyter_code>import numpy as np
from matplotlib import pyplot as plt
%pylab inline
T = 2457083.15992 #Julian Date
P = 5.2616 #period in days
K = 69 #m/s
vsys #= 0 #as computed above
t = np.linspace(0,3*P,100) #3 cycles in days
RV = []
for i in t:
RV_i= K * np.cos(2*np.pi*(i-T)/P) + vsys
RV.append(RV_i)
plt.plot(t,RV,'b-');
plt.xlabel('t-2457083.15992 [Julian Day]');
plt.ylabel('RV [m/s]');
plt.title('Ephemeris of Gliese 33/ HD 4628');<jupyter_output>Populating the interactive namespace from numpy and matplotlib
<jupyter_text>Joe uses his telescope, in Rochester, NY, to measure the spectrum of this star on the following nights: **Oct 1, 2, 7, 9, 12, all in 2015**. On each night, he measures the spectrum three times: at **10 PM, midnight, and 2 AM**, each time with an exposure time of **10 minutes**. He calibrates his spectrum against a neon-helium lamp inside the dome each time.# Bonus! Make a graph showing the radial velocities Joe will measure from his spectra, relative to the lamp in the dome.2457083.15992: Mar 1, 2015 at 15:50:17; should I add t=(date in JD-T)<jupyter_code># from DateTime import Timezones
# zones = set(Timezones())
# import pytz
# [x for x in pytz.all_timezones if x not in zones]<jupyter_output><empty_output><jupyter_text>New York uses ETZ.<jupyter_code># from DateTime import DateTime
# DateTime('1997/3/9 1:45pm')
# a= int[JD+0.5]
# if a >= 2299161:
# b= int [(a-1867216.25)/36524.25]
# if a < 2299161:
# c= a+1524
# elif a >= 2299161:
# c= a + b - int[b/4]+1525
# d= int[(c-122.1)/365.25]
# e= int[365.25/d]
# f= int[(c-e)/30.6001]
# D= c - e - int[30.6001f] + frac[JD + 0.5]
# M= f - 1 - 12[int(f/14)]
# Y= d - 4715 - int[(7 + M)/10]
# import pandas as pd
# #22:00, 02:00, 24:00 #exposure time of 10 minutes
# #UST=ETZ+4
# obs= {'10/1': pd.Series([2457297.25, 2457296.4166667, 2457297.3333333]),
# '10/2': pd.Series([2457298.25, 2457297.4166667, 2457298.3333333]),
# '10/7': pd.Series([2457303.25, 2457302.4166667, 2457303.3333333]),
# '10/9': pd.Series([2457305.25, 2457304.4166667, 2457305.3333333]),
# '10/12': pd.Series([2457308.25, 2457307.4166667, 2457308.3333333])}
# t = pd.DataFrame(obs)#, index=['22:00', '02:00', '24:00'])
# t.columns
T-2457297.41667 #Oct.1 at 10pm
import pandas as pd
time=pd.Series(['10/1 22:00', '10/1 02:00', '10/1 24:00', '10/2 22:00', '10/2 02:00', '10/2 24:00', '10/7 22:00', '10/7 02:00', '10/7 24:00', '10/9 22:00', '10/9 02:00', '10/9 24:00', '10/12 22:00', '10/12 02:00', '10/12 24:00'])
JD=pd.Series([2457297.25, 2457296.4166667, 2457297.3333333, 2457298.25, 2457297.4166667, 2457298.3333333, 2457303.25, 2457302.4166667, 2457303.3333333, 2457305.25, 2457304.4166667, 2457305.3333333, 2457308.25, 2457307.4166667, 2457308.3333333])
obs=pd.DataFrame(time)
obs['JD']=pd.DataFrame(JD)
obs.columns=['time','JD']
obs.head()
RV = []
for i in obs.JD:
RV_i= K * np.cos(2*np.pi*(i-T)/P)
RV.append(RV_i)
plt.plot(obs.JD,RV,'bo');
plt.xlabel('Time [Julian Date]');
plt.ylabel('RV [m/s]');
plt.title('Ephemeris of Gliese 33/ HD 4628');
# a=[2457297.25, 2457296.4166667, 2457297.3333333]
# b=[2457298.25, 2457297.4166667, 2457298.3333333]
# c=[2457303.25, 2457302.4166667, 2457303.3333333]
# d=[2457305.25, 2457304.4166667, 2457305.3333333]
# e=[2457308.25, 2457307.4166667, 2457308.3333333]
# obs= append(a,b)
# obs= append(obs,c)
# obs= append(obs,d)
# obs= append(obs,e)
# RV = []
# for i in obs:
# RV_i= K * np.cos(2*np.pi*(i-T)/P) + vsys
# RV.append(RV_i)
# plt.plot(obs,RV,'bo');
# plt.xlabel('Time [Julian Date]');
# plt.ylabel('RV [m/s]');
# plt.title('Sparse observation of Gliese 33/ HD 4628');<jupyter_output><empty_output><jupyter_text># d. What is the distance of this planet from its host star?Using Kepler's 3rd law,<jupyter_code>G= 6.67E-11
a = ((G*M_star*P**2)/(4*np.pi**2))**(1/3)<jupyter_output><empty_output><jupyter_text># e. What is the mass of the planet?We can compute the $V_{\rm{planet}}$ using<jupyter_code>v_planet = 2*np.pi*a/P<jupyter_output><empty_output><jupyter_text>And hence, $M_{\rm{planet}}$ can be computed from<jupyter_code>M_planet=M_star*v_star/v_planet<jupyter_output><empty_output><jupyter_text>Assuming $e$=0 and $i=\pi/2$, ...Joe figures out that he should use standard stars to remove most of the radial velocity variations due to the Earth's rotation and orbital motion. He modifies his procedures and now produces nice tables of radial velocities which show only the change due to the star's own motion.
The following three questions are based on his observations of new objects -- NOT the same as the star in Question 1.
# 2.
Over a period of several years, Joe measures the radial velocity of one particular star. He sees pretty clear evidence for an exoplanet.
**Hint: the periods in the questions below should all be in the range of 1 - 20 days. If you are using a tool to find the period, it is a good idea to search using a step size <= 0.01 days**## a. What is the period of variations in this star's radial velocities?<jupyter_code># url = 'http://spiff.rit.edu/classes/extrasol/homework/hw_4/rv_2.dat'
# req = urllib.request.Request(url)
# with urllib.request.urlopen(req) as response:
# html = response.read()
# outpath = 'confirmed_planets_{}.csv'.format(time.strftime("%Y%m%d")) #include date of download
# print("retrieving URL: {}".format(url))
# with open(outpath,'wb') as f:
# f.write(html)
# print("created file: {}".format(outpath))
import pandas as pd
filename='rv_2.dat'
df = pd.read_csv(filename,delim_whitespace=True)# error_bad_lines=False, skiprows=?
df.tail()<jupyter_output><empty_output><jupyter_text>#---Details of Columns:
# HJD (d) (F13.5) Heliocentric Julian Date; UTC [ucd=time.epoch]
# RV (m/s) (F7.2) Relative radial Velocity (G1) [ucd=phys.veloc]
# e_RV (m/s) (F5.2) Uncertainty in RV (G2) [ucd=stat.error]
# ------------- ------- -----
# HJD (d) RV(m/s) e_RV(m/s)
# ------------- ------- -----
#
# 2454603: May 16, 2008 at 12:00<jupyter_code>len(df)
#fig, ax = plt.subplots()
#df.plot(yerr=df.error, ax=ax, kind='scatter')
import seaborn as sb
t=df.HJD
RV=df.RV
with sb.axes_style('whitegrid'):
fig, ax = plt.subplots(1,2,figsize=(15,3))
ax[0].plot(t,RV,'bo');
ax[0].set_xlabel('Time [Julian Date]');
ax[0].set_ylabel('RV [m/s]');
ax[1].plot(t,RV,'bo');
ax[1].set_xlim([t[14],t[57]]) # was HJD[57]; HJD is not defined here, the time column is stored in t
ax[1].set_xlabel('Time [Julian Date]');
ax[1].set_ylabel('RV [m/s]');<jupyter_output><empty_output><jupyter_text>To compute the period, we need to fit a curve first.<jupyter_code>import gatspy
from gatspy.periodic import LombScargleFast
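# The Lomb-Scargle periodogram estimates the power of periodic signals in unevenly
# sampled data, which is why it is preferred over a plain FFT for these RV epochs;
# the period with the highest power is taken as the candidate orbital period below.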
t, f = t, df.RV
f = f / np.median(f) #normalize (avoid in-place division, which would also overwrite df.RV)
model = LombScargleFast().fit(t, f)
periods, power = model.periodogram_auto(nyquist_factor=100)
idx1 = periods > 1
idx2 = np.argmax(power[idx1])
peak = periods[idx1][idx2]
with sb.axes_style('white'):
fig, ax = plt.subplots(1,1,figsize=(15,5))
ax.plot(periods, power, 'k-')
ax.set(xlim=(0.5, 5),
# , ylim=(0, 0.01),
xlabel='period (days)',
ylabel='Lomb-Scargle Power')
ax.vlines(peak, *ax.get_ylim(), linestyles='dotted', colors='r')<jupyter_output><empty_output><jupyter_text>Let's remove the datapoints towards the end of observation<jupyter_code>n=10
t1, RV1 = t[:n],RV[:n]
with sb.axes_style('whitegrid'):
fig, ax = plt.subplots(1,1,figsize=(15,3))
ax.plot(t1, RV1,'bo');
ax.set_xlabel('Time [Julian Date]');
ax.set_ylabel('RV [m/s]');
import gatspy
from gatspy.periodic import LombScargleFast
model = LombScargleFast().fit(t1, RV1)
periods, power = model.periodogram_auto(nyquist_factor=100)
idx1 = periods > 1
idx2 = np.argmax(power[idx1])
peak = periods[idx1][idx2]
with sb.axes_style('white'):
fig, ax = plt.subplots(1,1,figsize=(15,5))
ax.plot(periods, power, 'k-')
ax.set(xlim=(0.5, 5),
xlabel='period (days)',
ylabel='Lomb-Scargle Power')
ax.vlines(peak, *ax.get_ylim(), linestyles='dotted', colors='r')
print("Period is {0:.3} days".format(peak))
import scipy.optimize as opt
#opt.minimize?
K = max(RV1)
def simple_sin(theta, x):
K, P, phi = theta
return K*np.sin(2*np.pi*(x-phi)/P)
def objective(theta, xi, yi):
model = simple_sin(theta, xi)
return np.sum((model - yi)**2) #res**2
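# objective() is the sum of squared residuals; opt.minimize with the Nelder-Mead simplex
# method searches (K, P, phi) for its minimum without needing gradients.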
P=peak
init_guess = [K,P,0] #V,Pinv,vsys
x = np.arange(t[0],t[n],0.01)
plt.plot(t1,RV1,'bo');
plt.plot(x, simple_sin(init_guess, x),'r-', lw=2, alpha=0.4)
plt.xlabel('Time [Julian Date]');
plt.ylabel('RV [m/s]');<jupyter_output><empty_output><jupyter_text>Fit seems a little off in phase.<jupyter_code>optimize = opt.minimize(objective, init_guess, args=(t1,RV1), method='nelder-mead')
optimize
for i in optimize.x: #x is the result of opt.minimize
print('Success={}'.format(optimize.success))
print("parameter optimum: {}".format(i))<jupyter_output>Success=True
parameter optimum: 113.39861084065063
Success=True
parameter optimum: 5.089982470784545
Success=True
parameter optimum: 4.578688554204742e-05
<jupyter_text>Re-plot using optimized values.<jupyter_code>new_guess = []
for i in optimize.x: # keep only the fitted parameters [K, P, phi], not every field of the OptimizeResult
new_guess.append(i)
x = np.arange(t[0],t[n],0.01)
plt.plot(t1,RV1,'bo');
plt.plot(x, simple_sin(new_guess, x),'r-', lw=2, alpha=0.4) # plot with the optimized parameters
plt.xlabel('Time [Julian Date]');
plt.ylabel('RV [m/s]');
phi=new_guess[2] #optimize.x[2]
y_noisy=K * np.sin(2*np.pi*(x-phi)/P)
#create a function identical to simple_sine
#mapping/copying of simple_sine to h
h = lambda x, K, P, phi: K * np.sin(2*np.pi*(x-phi)/P) # same model as simple_sin, with explicit parameters for curve_fit
p_opt, p_cov = opt.curve_fit(h, t1, RV1, p0=new_guess)
var = np.diag(p_cov)
std = np.sqrt(np.diag(p_cov)) #a.k.a. sigma
for i,j in zip(p_opt, std):
print ("parameter optimum: {} +/- {}".format(i, j))
#above is similar to
#for i in range(len(p_opt)):
# print("parameter optimum: {} +/- {}".format(p_opt[i], p_std))
plt.plot(t1, RV1, 'o')
plt.plot(x, h(x,*p_opt), 'r-', lw=2); # *mu; asterisk in variable is required for lambda func <jupyter_output><empty_output><jupyter_text>## b. Assuming that this is a Sun-like star, what is the orbital radius and mass of the planet?<jupyter_code>G= 6.67E-11
a = ((G*M_star*P**2)/(4*np.pi**2))**(1/3)
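# A rough sketch for the planet mass (assuming M_star and the fitted semi-amplitude K are defined, SI units):
# v_planet = 2*np.pi*a/P # orbital speed of the planet
# M_planet = M_star*K/v_planet # momentum balance M_star*v_star = M_planet*v_planet, with v_star ~ K for an edge-on circular orbit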
#M_star*v_star=M_planet*v_planet<jupyter_output><empty_output><jupyter_text>## c. Bonus! This is a real exoplanet. Which one?# 3.
Joe finds another star with RV variations. These are less obvious. ## a. What is the period of variations in this star's radial velocities?<jupyter_code>import pandas as pd
filename='HW2/rv_3.dat'
df2 = pd.read_csv(filename,delim_whitespace=True)# error_bad_lines=False, skiprows=?
df2.head()<jupyter_output><empty_output><jupyter_text>## b. Assuming that this is a Sun-like star, what is the orbital radius and mass of the planet?<jupyter_code>#fig, ax = plt.subplots()
#df.plot(yerr=df.error, ax=ax, kind='scatter')
HJD2=df2.JD
RV2=df2.RV
plt.plot(HJD2,RV2,'bo');
plt.xlabel('Time [Julian Date]');
plt.ylabel('RV [m/s]');<jupyter_output><empty_output><jupyter_text>## c. Bonus! This is a real exoplanet. Which one?# 4.
Bonus! Joe thinks that maybe, just maybe, there might a signal in the measurements of another star. He's not sure.
What do you think? <jupyter_code>import pandas as pd
filename='HW2/rv_4.dat'
df3 = pd.read_csv(filename,delim_whitespace=True)# error_bad_lines=False, skiprows=?
df3.head()
#fig, ax = plt.subplots()
#df.plot(yerr=df.error, ax=ax, kind='scatter')
HJD3=df3.HD
RV3=df3.RV
plt.plot(HJD3,RV3,'bo');
plt.xlabel('Time [Julian Date]');
plt.ylabel('RV [m/s]');<jupyter_output><empty_output>
|
no_license
|
/research/ basics/RV/HW2/HW2.ipynb
|
jpdeleon/OpenAstro
| 18 |
<jupyter_start><jupyter_text> #### Coded by: Vikranth<jupyter_code>import pandas as pd
from sklearn import ensemble
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
%matplotlib notebook
participant_1 = pd.read_pickle('Participant_1.pkl')
#print(participant_1)
labels_1 = np.load('Participant_1_label.npy')
#print(labels_1)
start_time = labels_1[0,0]
end_time = labels_1[-1,0]
dataset = np.copy(participant_1.values)
print(dataset.shape)
print(labels_1.shape)
delta = 40
dataset = dataset[dataset[:,0] >= start_time,:]
dataset = dataset[dataset[:,0] < end_time + delta,:]
print(dataset.shape)
plt.figure()
plt.plot(dataset[:,0],dataset[:,3])
plt.show()
plt.plot(dataset[:,0],labels_1[:,2],'ro')
plt.show()
type_window = 1000
k = int(type_window/delta)
l = len(labels_1)
typing_labels = np.zeros((l,1))
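# Widen the raw labels with a rolling maximum over k samples, so that any typing activity within
# the next type_window (in the same time units as delta) marks the current sample as "typing".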
for i in range(0,l):
if i<(l-k):
typing_labels[i] = np.amax(labels_1[i:i+k,2])
else:
typing_labels[i] = np.amax(labels_1[i:,2])
#print(typing_labels)
plt.figure()
plt.plot(dataset[:,0],dataset[:,3])
plt.show()
plt.plot(dataset[:,0],typing_labels,'r.', ms = 0.1)
plt.show()
# Remove Missing Data:
missing_1 = dataset[:,13] + dataset[:,14]
print(Counter(missing_1))
dataset = dataset[np.where(missing_1 == 0)]
typing_labels = typing_labels[np.where(missing_1 == 0)]
print(typing_labels.shape)
print(dataset.shape)
plt.figure()
plt.plot(dataset[:,0],dataset[:,3])
plt.show()
plt.plot(dataset[:,0],typing_labels,'r.', ms = 0.1)
plt.show()
dataset_cp = np.copy(dataset[:,1:13])
n_samples, d = dataset_cp.shape
window = 20
stride = 5
data_slide = np.zeros((int((n_samples-window)/stride)+1,window,d))
typing_labels_cp = np.zeros((int((n_samples-window)/stride)+1,1))
k=0
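# Sliding-window segmentation: each window covers `window` consecutive rows and the start moves
# forward by `stride` rows; a window is labelled 1 if any sample inside it is labelled 1.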
for i in range(0,n_samples-window,stride): #400ms
data_slide[k,:,:] = dataset_cp[i:i+window,:]
typing_labels_cp[k] = np.amax(typing_labels[i:i+window])
k=k+1
print (data_slide.shape)
print(typing_labels_cp.shape)
import numpy as np
import scipy.io
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, LSTM, Dense, Dropout, Flatten
from keras.layers.core import Permute, Reshape
from keras import backend as K
from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)
z = 13000
X_train0 = data_slide[:z]
Y_train = typing_labels_cp[0:z].reshape(-1).astype(np.uint8)
X_test0 = data_slide[z:]
Y_test = typing_labels_cp[z:]
print(np.shape(X_train0))
print(np.shape(Y_train))
print(np.shape(X_test0))
print(np.shape(Y_test))
num_classes = 2
Y_train = keras.utils.to_categorical(Y_train, num_classes)
Y_test = keras.utils.to_categorical(Y_test, num_classes)
def _data_reshaping(X_tr, X_va, network_type):
_, win_len, dim = X_tr.shape
print(network_type)
if network_type=='CNN' or network_type=='ConvLSTM':
# make it into (frame_number, dimension, window_size, channel=1) for convNet
X_tr = np.swapaxes(X_tr,1,2)
X_va = np.swapaxes(X_va,1,2)
X_tr = np.reshape(X_tr, (-1, dim, win_len, 1))
X_va = np.reshape(X_va, (-1, dim, win_len, 1))
if network_type=='MLP':
print('MLP...')
X_tr = np.reshape(X_tr, (-1, dim*win_len))
X_va = np.reshape(X_va, (-1, dim*win_len))
return X_tr, X_va
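# For CNN/ConvLSTM the windows are reshaped to (N, dim, win_len, 1), so that the Conv2D kernels
# of size (1, 5) defined below slide along the time axis of each sensor channel separately.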
def model_variant(model, num_feat_map, dim, network_type):
p = 0.4
print(network_type)
if network_type == 'ConvLSTM':
model.add(Permute((2, 1, 3))) # for swap-dimension
model.add(Reshape((-1,num_feat_map*dim)))
model.add(LSTM(32, return_sequences=False, stateful=False))
model.add(Dropout(p))
if network_type == 'CNN':
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dropout(p))
def model_conv(model, num_feat_map):
p=0.3
model.add(Conv2D(num_feat_map, kernel_size=(1, 5),
activation='relu',
input_shape=(dim, win_len, 1),
padding='same'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Dropout(p))
model.add(Conv2D(num_feat_map, kernel_size=(1, 5), activation='relu',padding='same'))
model.add(MaxPooling2D(pool_size=(1, 2)))
model.add(Dropout(p))
def model_LSTM(model):
p = 0.3
model.add(LSTM(num_hidden_lstm,
input_shape=(win_len,dim),
return_sequences=True))
model.add(Dropout(p))
model.add(LSTM(num_hidden_lstm, return_sequences=False))
model.add(Dropout(p))
def model_MLP(model, num_hidden_mlp):
p=0.3
model.add(Dense(num_hidden_mlp, activation='relu', input_shape=(dim*win_len,)))
model.add(Dropout(p))
model.add(Dense(num_hidden_mlp, activation='relu'))
model.add(Dropout(p))
def model_output(model):
model.add(Dense(num_classes, activation='softmax'))
batch_size = 1024
num_feat_map = 16
num_hidden_mlp = 128
num_hidden_lstm = 64
#network_type = 'CNN'
#network_type = 'ConvLSTM'
network_type = 'LSTM'
#network_type = 'MLP'
_, win_len, dim = X_train0.shape
print(win_len)
print(dim)
X_train, X_test = _data_reshaping(X_train0, X_test0, network_type)
print('building the model ... ')
model = Sequential()
if network_type=='CNN' or network_type=='ConvLSTM':
model_conv(model, num_feat_map)
model_variant(model, num_feat_map, dim, network_type)
if network_type=='LSTM':
model_LSTM(model)
if network_type=='MLP':
model_MLP(model, num_hidden_mlp)
model_output(model)
model.summary()
epochs = 20
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
H = model.fit(X_train, Y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
shuffle=True,
validation_data=(X_test, Y_test))
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = np.argmax(Y_test, axis=1)
cf_matrix = confusion_matrix(y_true, y_pred)
print(cf_matrix)
class_wise_f1 = np.round(f1_score(y_true, y_pred, average=None)*100)*0.01
print('the mean-f1 score: {:.2f}'.format(np.mean(class_wise_f1)))
plt.figure()
plt.plot(y_true, label="true")
plt.plot(y_pred, label="pred")
#plt.plot(typing_labels_cp[60000:], label="label")
plt.legend()
plt.show()
model.save('LSTM.h5')<jupyter_output><empty_output>
|
no_license
|
/Training/Pause_Training.ipynb
|
nesl/mperf-TypingAnalysis
| 1 |
<jupyter_start><jupyter_text>___
___# Part of Speech Basics
The challenge of correctly identifying parts of speech is summed up nicely in the [spaCy docs](https://spacy.io/usage/linguistic-features):
Processing raw text intelligently is difficult: most words are rare, and it's common for words that look completely different to mean almost the same thing. The same words in a different order can mean something completely different. Even splitting text into useful word-like units can be difficult in many languages. While it's possible to solve some problems starting from only the raw characters, it's usually better to use linguistic knowledge to add useful information. That's exactly what spaCy is designed to do: you put in raw text, and get back a **Doc** object, that comes with a variety of annotations.
In this section we'll take a closer look at coarse POS tags (noun, verb, adjective) and fine-grained tags (plural noun, past-tense verb, superlative adjective).<jupyter_code># Perform standard imports
import spacy
nlp = spacy.load('en_core_web_sm')
# Create a simple Doc object
doc = nlp(u"The quick brown fox jumped over the lazy dog's back.")<jupyter_output><empty_output><jupyter_text>## View token tags
Recall that you can obtain a particular token by its index position.
* To view the coarse POS tag use `token.pos_`
* To view the fine-grained tag use `token.tag_`
* To view the description of either type of tag use `spacy.explain(tag)`
Note that `token.pos` and `token.tag` return integer hash values; by adding the underscores we get the text equivalent that lives in **doc.vocab**.<jupyter_code># Print the full text:
print(doc.text)
# Print the fifth word and associated tags:
print(doc[4].text, doc[4].pos_, doc[4].tag_, spacy.explain(doc[4].tag_))<jupyter_output>jumped VERB VBD verb, past tense
<jupyter_text>We can apply this technique to the entire Doc object:<jupyter_code>for token in doc:
print(f'{token.text:{10}} {token.pos_:{8}} {token.tag_:{6}} {spacy.explain(token.tag_)}')<jupyter_output>The DET DT determiner
quick ADJ JJ adjective
brown ADJ JJ adjective
fox NOUN NN noun, singular or mass
jumped VERB VBD verb, past tense
over ADP IN conjunction, subordinating or preposition
the DET DT determiner
lazy ADJ JJ adjective
dog NOUN NN noun, singular or mass
's PART POS possessive ending
back NOUN NN noun, singular or mass
. PUNCT . punctuation mark, sentence closer
<jupyter_text>## Coarse-grained Part-of-speech Tags
Every token is assigned a POS Tag from the following list:
POSDESCRIPTIONEXAMPLES
ADJadjective*big, old, green, incomprehensible, first*
ADPadposition*in, to, during*
ADVadverb*very, tomorrow, down, where, there*
AUXauxiliary*is, has (done), will (do), should (do)*
CONJconjunction*and, or, but*
CCONJcoordinating conjunction*and, or, but*
DETdeterminer*a, an, the*
INTJinterjection*psst, ouch, bravo, hello*
NOUNnoun*girl, cat, tree, air, beauty*
NUMnumeral*1, 2017, one, seventy-seven, IV, MMXIV*
PARTparticle*'s, not,*
PRONpronoun*I, you, he, she, myself, themselves, somebody*
PROPNproper noun*Mary, John, London, NATO, HBO*
PUNCTpunctuation*., (, ), ?*
SCONJsubordinating conjunction*if, while, that*
SYMsymbol*$, %, §, ©, +, −, ×, ÷, =, :), 😝*
VERBverb*run, runs, running, eat, ate, eating*
Xother*sfpksdpsxmsa*
SPACEspace___
## Fine-grained Part-of-speech Tags
Tokens are subsequently given a fine-grained tag as determined by morphology:
| POS | Description | Fine-grained Tag | Description | Morphology |
| --- | --- | --- | --- | --- |
| ADJ | adjective | AFX | affix | Hyph=yes |
| ADJ |  | JJ | adjective | Degree=pos |
| ADJ |  | JJR | adjective, comparative | Degree=comp |
| ADJ |  | JJS | adjective, superlative | Degree=sup |
| ADJ |  | PDT | predeterminer | AdjType=pdt PronType=prn |
| ADJ |  | PRP\$ | pronoun, possessive | PronType=prs Poss=yes |
| ADJ |  | WDT | wh-determiner | PronType=int rel |
| ADJ |  | WP\$ | wh-pronoun, possessive | Poss=yes PronType=int rel |
| ADP | adposition | IN | conjunction, subordinating or preposition |  |
| ADV | adverb | EX | existential there | AdvType=ex |
| ADV |  | RB | adverb | Degree=pos |
| ADV |  | RBR | adverb, comparative | Degree=comp |
| ADV |  | RBS | adverb, superlative | Degree=sup |
| ADV |  | WRB | wh-adverb | PronType=int rel |
| CONJ | conjunction | CC | conjunction, coordinating | ConjType=coor |
| DET | determiner | DT | determiner |  |
| INTJ | interjection | UH | interjection |  |
| NOUN | noun | NN | noun, singular or mass | Number=sing |
| NOUN |  | NNS | noun, plural | Number=plur |
| NOUN |  | WP | wh-pronoun, personal | PronType=int rel |
| NUM | numeral | CD | cardinal number | NumType=card |
| PART | particle | POS | possessive ending | Poss=yes |
| PART |  | RP | adverb, particle |  |
| PART |  | TO | infinitival to | PartType=inf VerbForm=inf |
| PRON | pronoun | PRP | pronoun, personal | PronType=prs |
| PROPN | proper noun | NNP | noun, proper singular | NounType=prop Number=sign |
| PROPN |  | NNPS | noun, proper plural | NounType=prop Number=plur |
| PUNCT | punctuation | -LRB- | left round bracket | PunctType=brck PunctSide=ini |
| PUNCT |  | -RRB- | right round bracket | PunctType=brck PunctSide=fin |
| PUNCT |  | , | punctuation mark, comma | PunctType=comm |
| PUNCT |  | : | punctuation mark, colon or ellipsis |  |
| PUNCT |  | . | punctuation mark, sentence closer | PunctType=peri |
| PUNCT |  | '' | closing quotation mark | PunctType=quot PunctSide=fin |
| PUNCT |  | "" | closing quotation mark | PunctType=quot PunctSide=fin |
| PUNCT |  | `` | opening quotation mark | PunctType=quot PunctSide=ini |
| PUNCT |  | HYPH | punctuation mark, hyphen | PunctType=dash |
| PUNCT |  | LS | list item marker | NumType=ord |
| PUNCT |  | NFP | superfluous punctuation |  |
| SYM | symbol | # | symbol, number sign | SymType=numbersign |
| SYM |  | \$ | symbol, currency | SymType=currency |
| SYM |  | SYM | symbol |  |
| VERB | verb | BES | auxiliary "be" |  |
| VERB |  | HVS | forms of "have" |  |
| VERB |  | MD | verb, modal auxiliary | VerbType=mod |
| VERB |  | VB | verb, base form | VerbForm=inf |
| VERB |  | VBD | verb, past tense | VerbForm=fin Tense=past |
| VERB |  | VBG | verb, gerund or present participle | VerbForm=part Tense=pres Aspect=prog |
| VERB |  | VBN | verb, past participle | VerbForm=part Tense=past Aspect=perf |
| VERB |  | VBP | verb, non-3rd person singular present | VerbForm=fin Tense=pres |
| VERB |  | VBZ | verb, 3rd person singular present | VerbForm=fin Tense=pres Number=sing Person=3 |
| X | other | ADD | email |  |
| X |  | FW | foreign word | Foreign=yes |
| X |  | GW | additional word in multi-word expression |  |
| X |  | XX | unknown |  |
| SPACE | space | _SP | space |  |
|  |  | NIL | missing tag |  |
For a current list of tags for all languages visit https://spacy.io/api/annotation#pos-tagging## Working with POS Tags
In the English language, the same string of characters can have different meanings, even within the same sentence. For this reason, morphology is important. **spaCy** uses machine learning algorithms to best predict the use of a token in a sentence. Is *"I read books on NLP"* present or past tense? Is *wind* a verb or a noun?<jupyter_code>doc = nlp(u'I read books on NLP.')
r = doc[1]
print(f'{r.text:{10}} {r.pos_:{8}} {r.tag_:{6}} {spacy.explain(r.tag_)}')
doc = nlp(u'I read a book on NLP.')
r = doc[1]
print(f'{r.text:{10}} {r.pos_:{8}} {r.tag_:{6}} {spacy.explain(r.tag_)}')<jupyter_output>read VERB VBD verb, past tense
<jupyter_text>In the first example, with no other cues to work from, spaCy assumed that ***read*** was present tense.In the second example the present tense form would be ***I am reading a book***, so spaCy assigned the past tense.## Counting POS Tags
The `Doc.count_by()` method accepts a specific token attribute as its argument, and returns a frequency count of the given attribute as a dictionary object. Keys in the dictionary are the integer values of the given attribute ID, and values are the frequency. Counts of zero are not included.<jupyter_code>doc = nlp(u"The quick brown fox jumped over the lazy dog's back.")
# Count the frequencies of different coarse-grained POS tags:
POS_counts = doc.count_by(spacy.attrs.POS)
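# count_by returns a dict of {attribute hash: frequency}; the hashes are decoded below via doc.vocab[key].text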
POS_counts<jupyter_output><empty_output><jupyter_text>This isn't very helpful until you decode the attribute ID:<jupyter_code>doc.vocab[83].text<jupyter_output><empty_output><jupyter_text>### Create a frequency list of POS tags from the entire document
Since `POS_counts` returns a dictionary, we can obtain a list of keys with `POS_counts.items()`.By sorting the list we have access to the tag and its count, in order.<jupyter_code>for k,v in sorted(POS_counts.items()):
print(f'{k}. {doc.vocab[k].text:{5}}: {v}')
# Count the different fine-grained tags:
TAG_counts = doc.count_by(spacy.attrs.TAG)
for k,v in sorted(TAG_counts.items()):
print(f'{k}. {doc.vocab[k].text:{4}}: {v}')<jupyter_output>74. POS : 1
1292078113972184607. IN : 1
10554686591937588953. JJ : 3
12646065887601541794. . : 1
15267657372422890137. DT : 2
15308085513773655218. NN : 3
17109001835818727656. VBD : 1
<jupyter_text>**Why did the ID numbers get so big?** In spaCy, certain text values are hardcoded into `Doc.vocab` and take up the first several hundred ID numbers. Strings like 'NOUN' and 'VERB' are used frequently by internal operations. Others, like fine-grained tags, are assigned hash values as needed.
**Why don't SPACE tags appear?** In spaCy, only strings of spaces (two or more) are assigned tokens. Single spaces are not.<jupyter_code># Count the different dependencies:
DEP_counts = doc.count_by(spacy.attrs.DEP)
for k,v in sorted(DEP_counts.items()):
print(f'{k}. {doc.vocab[k].text:{4}}: {v}')<jupyter_output>399. amod: 3
412. det : 2
426. nsubj: 1
436. pobj: 1
437. poss: 1
440. prep: 1
442. punct: 1
8110129090154140942. case: 1
8206900633647566924. ROOT: 1
|
no_license
|
/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/00-POS-Basics.ipynb
|
shubs202k/NLP_Udemy
| 8 |
<jupyter_start><jupyter_text># Credits
> This code is a slight modification to a translation (TensorFlow --> PyTorch) of a previous version of the [02456](http://kurser.dtu.dk/course/02456) course material.
> [Original repo link (TensorFlow)](https://github.com/DeepLearningDTU/02456-deep-learning).
> [Translated repo link (PyTorch)](https://github.com/munkai/pytorch-tutorial/tree/master/2_intermediate).<jupyter_code>import torch
from torch.autograd import Variable
from torch.nn.parameter import Parameter
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.nn.init as init
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text># MNIST dataset
MNIST is a dataset that is often used for benchmarking. The MNIST dataset consists of 70,000 images of handwritten digits from 0-9. The dataset is split into a 50,000 images training set, 10,000 images validation set and 10,000 images test set. The images are 28x28 pixels, where each pixel represents a normalised value between 0-255 (0=black and 255=white).

## Primer
We use a feedforward neural network to classify the 28x28 MNIST images. `num_features` is therefore $28 * 28=784$, i.e. we represent each image as a vector. The ordering of the pixels in the vector does not matter, so we could permute all images using the same permutation and still get the same performance. (You are of course encouraged to try this using ``numpy.random.permutation`` to get a random permutation.) This task is therefore called the _permutation invariant_ MNIST. Obviously this throws away a lot of structure in the data. In the next module we'll fix this with the convolutional neural network, which encodes prior knowledge about data that has either spatial or temporal structure. ## MNIST
First let's load the MNIST dataset and plot a few examples:<jupyter_code>!if [ ! -f mnist.npz ]; then curl -N https://raw.githubusercontent.com/DeepLearningDTU/02456-deep-learning-with-PyTorch/master/static_files/mnist.npz; else echo "mnist.npz already downloaded"; fi
#To speed up training we'll only work on a subset of the data
data = np.load('mnist.npz')
num_classes = 10
x_train = data['X_train'][:1000].astype('float32')
targets_train = data['y_train'][:1000].astype('int32')
x_valid = data['X_valid'][:500].astype('float32')
targets_valid = data['y_valid'][:500].astype('int32')
x_test = data['X_test'][:500].astype('float32')
targets_test = data['y_test'][:500].astype('int32')
print("Information on dataset")
print("x_train", x_train.shape)
print("targets_train", targets_train.shape)
print("x_valid", x_valid.shape)
print("targets_valid", targets_valid.shape)
print("x_test", x_test.shape)
print("targets_test", targets_test.shape)
data
#plot a few MNIST examples
idx, dim, classes = 0, 28, 10
# create empty canvas
canvas = np.zeros((dim*classes, classes*dim))
# fill with tensors
for i in range(classes):
for j in range(classes):
canvas[i*dim:(i+1)*dim, j*dim:(j+1)*dim] = x_train[idx].reshape((dim, dim))
idx += 1
# visualize matrix of tensors as gray scale image
plt.figure(figsize=(4, 4))
plt.axis('off')
plt.imshow(canvas, cmap='gray')
plt.title('MNIST handwritten digits')
plt.show()<jupyter_output><empty_output><jupyter_text>## Model
One of the big challenges in deep learning is the number of hyperparameters that need to be selected, and the lack of a good principled way of selecting them.
Hyperparameters can be found by experience (guessing) or some search procedure (often quite slow).
Random search is easy to implement and performs decently: http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf .
More advanced search procedures include [Spearmint](https://github.com/JasperSnoek/spearmint) and many others.
**In practice a lot of trial and error is almost always involved.** This can be frustrating and time consuming, but the best thing to do is to think as a scientist, and go about it in an ordered manner --> monitor as much as you can, take notes, and be deliberate!
Below are some guidelines that you can use as a starting point to some of the most important hyperparameters.
(*regularization* is also very important, but will be covered later.)
### Ballpark estimates of hyperparameters
__Number of hidden units and network structure:__
You'll have to experiment. One rarely goes below 512 units for feedforward networks (unless you are training on a CPU...).
There's some research into stochastic depth networks: https://arxiv.org/pdf/1603.09382v2.pdf, but in general this is trial and error.
__Parameter initialization:__
Parameter initialization is extremely important.
PyTorch has a lot of different initializers, check the [PyTorch API](http://pytorch.org/docs/master/nn.html#torch-nn-init). Often used initializers are
1. Kaiming He
2. Xavier Glorot
3. Uniform or Normal with small scale (0.1 - 0.01)
4. Orthogonal (this usually works very well for RNNs)
Bias is nearly always initialized to zero using the [torch.nn.init.constant(tensor, val)](http://pytorch.org/docs/master/nn.html#torch.nn.init.constant)
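For illustration (this snippet is not part of the original notebook), a minimal sketch of applying these initializers to a single `nn.Linear` layer could look like this; the layer sizes are arbitrary:

```python
import torch.nn as nn
import torch.nn.init as init

layer = nn.Linear(784, 512)
init.kaiming_normal_(layer.weight)    # 1. Kaiming He initialization of the weights
# init.xavier_uniform_(layer.weight)  # 2. Xavier Glorot as an alternative
init.constant_(layer.bias, 0)         # bias initialized to zero
```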
__Mini-batch size:__
Usually people use 16-256. Bigger is not always better. With a smaller mini-batch size you get more updates and your model might converge faster. Also, small batch sizes use less memory, which means you can train a model with more parameters.
__Nonlinearity:__ [The most commonly used nonlinearities are](http://pytorch.org/docs/master/nn.html#non-linear-activations)
1. ReLU
2. Leaky ReLU
3. Elu
4. Sigmoid squashes the output to [0, 1], and is used if your output is binary (not used in the hidden layers)
5. Tanh is similar to sigmoid, but squashes to [-1, 1]. It is rarely used any more.
6. Softmax normalizes the output so it sums to 1, and is used as the output if you have a classification problem
See the plot below.
__Optimizer and learning rate:__
1. SGD + Momentum: learning rate 1.0 - 0.1
2. ADAM: learning rate 3*1e-4 - 1e-5
3. RMSPROP: somewhere between SGD and ADAM
<jupyter_code># Illustrate different output units
x = np.linspace(-6, 6, 100)
units = {
"ReLU": lambda x: np.maximum(0, x),
"Leaky ReLU": lambda x: np.maximum(0, x) + 0.1 * np.minimum(0, x),
"Elu": lambda x: (x > 0) * x + (1 - (x > 0)) * (np.exp(x) - 1),
"Sigmoid": lambda x: (1 + np.exp(-x))**(-1),
"tanh": lambda x: (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
}
plt.figure(figsize=(5, 5))
[plt.plot(x, unit(x), label=unit_name, lw=3,alpha=0.2) for unit_name, unit in units.items()]
plt.legend(loc=2, fontsize=16)
plt.title('Non-linearities', fontsize=20)
plt.ylim([-2, 5])
plt.xlim([-6, 6])
# assert that all class probablities sum to one
softmax = lambda x: np.exp(x) / np.sum(np.exp(x))
print("softmax should sum to one (approxiamtely):", np.sum(softmax(x)))
#Hyperparameters
num_classes = 10
num_l1 = 512
num_l2 = 256
num_features = x_train.shape[1]
# define network
class Net(nn.Module):
def __init__(self, num_features, num_hidden_1,num_hidden_2, num_output):
super(Net, self).__init__()
# input layer
self.W_1 = Parameter(init.kaiming_normal_(torch.Tensor(num_hidden_1, num_features)))
self.b_1 = Parameter(init.constant_(torch.Tensor(num_hidden_1), 0))
# hidden layer 1
self.W_2 = Parameter(init.kaiming_normal_(torch.Tensor(num_hidden_2, num_hidden_1)))
self.b_2 = Parameter(init.constant_(torch.Tensor(num_hidden_2), 0))
# batch normalization 1
self.batchnorm_3 = nn.BatchNorm1d(num_hidden_1)
# hidden layer 2
self.W_3 = Parameter(init.kaiming_normal_(torch.Tensor(num_output, num_hidden_2)))
self.b_3 = Parameter(init.constant_(torch.Tensor(num_output), 0))
# batch norm 2
self.batchnorm_4 = nn.BatchNorm1d(num_hidden_2)
# define activation function in constructor
self.activation = torch.nn.ReLU()
#dropout
self.dropout = nn.Dropout(p=0.3) #<--adjust
# Train Loss 0.157572 , Train acc 0.885000, Valid acc 0.742000
def forward(self, x):
x = F.linear(x, self.W_1, self.b_1)
x = self.activation(x)
x = self.batchnorm_3(x)
x = F.linear(x, self.W_2, self.b_2)
x = self.activation(x)
x = self.batchnorm_4(x)
x = self.dropout(x) # use the dropout module defined above so it is switched off by model.eval()
x = F.linear(x, self.W_3, self.b_3)
return F.softmax(x, dim=1) # note: nn.CrossEntropyLoss already applies log-softmax internally, so returning the raw logits here would be the more standard choice
net = Net(num_features, num_l1,num_l2, num_classes)
#optimizer = optim.SGD(net.parameters(), lr=0.1)
#optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.1)
#optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.5)
#optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.6) #<<----- this SGD works best
#Train Loss 0.148523 , Train acc 0.978000, Valid acc 0.848000
#optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.8)
#### SGD for L2
#optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.6, weight_decay=1e-4)
#Train Loss 0.148910 , Train acc 0.975000, Valid acc 0.834000
#optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.6, weight_decay=1e-5)
#Train Loss 0.156589 , Train acc 0.894000, Valid acc 0.772000
optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.6, weight_decay=1e-6) #<<-- best weight decay
#Train Loss 0.148832 , Train acc 0.975000, Valid acc 0.846000
#optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.6, weight_decay=1e-7)
#Train Loss 0.148993 , Train acc 0.973000, Valid acc 0.844000
#optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.6, weight_decay=1e-8)
#Train Loss 0.148465 , Train acc 0.979000, Valid acc 0.830000
#optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.6, weight_decay=0.1) #<-- best with weight decay ????
#Train Loss 0.147393 , Train acc 0.987000, Valid acc 0.848000
#### ADAM BELOW
#optimizer = optim.Adam(net.parameters(), lr=1e-2)
#Train Loss 0.236715 , Train acc 0.094000, Valid acc 0.106000
#optimizer = optim.Adam(net.parameters(), lr=1e-3) #<<--- BEST ADAM
#Train Loss 0.147684 , Train acc 0.984000, Valid acc 0.858000
#Train Loss 0.147492 , Train acc 0.986000, Valid acc 0.840000
#optimizer = optim.Adam(net.parameters(), lr=5e-3)
#Train Loss 0.203111 , Train acc 0.410000, Valid acc 0.400000
#optimizer = optim.Adam(net.parameters(), lr=1e-4)
#Train Loss 0.148532 , Train acc 0.979000, Valid acc 0.846000
#optimizer = optim.Adam(net.parameters(), lr=1e-5)
#Train Loss 0.172813 , Train acc 0.810000, Valid acc 0.728000
#optimizer = optim.Adam(net.parameters(), lr=1e-7)
#Train Loss 0.230129 , Train acc 0.147000, Valid acc 0.130000
#optimizer = optim.Adam(net.parameters(), lr=1e-10)
#Train Loss 0.231685 , Train acc 0.076000, Valid acc 0.066000
criterion = nn.CrossEntropyLoss()
#Test the forward pass with dummy data
x = np.random.normal(0, 1, (45, dim*dim)).astype('float32')
print(net(Variable(torch.from_numpy(x))).size())<jupyter_output>torch.Size([45, 10])
<jupyter_text># Build the training loop
We train the network by calculating the gradient w.r.t the cost function and update the parameters in direction of the negative gradient.
When training neural network you always use mini batches. Instead of calculating the average gradient using the entire dataset you approximate the gradient using a mini-batch of typically 16 to 256 samples. The paramters are updated after each mini batch. Networks converge much faster using mini batches because the parameters are updated more often.
We build a loop that iterates over the training data. Remember that the parameters are updated each time ``optimizer.step()`` is called.<jupyter_code># we could have done this ourselves,
# but we should be aware of sklearn and its tools
from sklearn.metrics import accuracy_score
# setting hyperparameters and getting epoch sizes
batch_size = 100
num_epochs = 100
num_samples_train = x_train.shape[0]
num_batches_train = num_samples_train // batch_size
num_samples_valid = x_valid.shape[0]
num_batches_valid = num_samples_valid // batch_size
# setting up lists for handling loss/accuracy
train_acc, train_loss = [], []
valid_acc, valid_loss = [], []
test_acc, test_loss = [], []
cur_loss = 0
losses = []
get_slice = lambda i, size: range(i * size, (i + 1) * size)
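# get_slice(i, batch_size) returns the row indices of the i-th mini-batch; the loop below does
# one forward pass, one backward pass and one optimizer.step() per mini-batch.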
for epoch in range(num_epochs):
# Forward -> Backprob -> Update params
## Train
cur_loss = 0
#net.train()
net.train() #<---- DROPOUT?
for i in range(num_batches_train):
slce = get_slice(i, batch_size)
x_batch = Variable(torch.from_numpy(x_train[slce]))
output = net(x_batch)
# compute gradients given loss
target_batch = Variable(torch.from_numpy(targets_train[slce]).long())
batch_loss = criterion(output, target_batch)
optimizer.zero_grad()
batch_loss.backward()
optimizer.step()
cur_loss += batch_loss
losses.append(cur_loss / batch_size)
net.eval() #<---- DROPOUT?
### Evaluate training
train_preds, train_targs = [], []
for i in range(num_batches_train):
slce = get_slice(i, batch_size)
x_batch = Variable(torch.from_numpy(x_train[slce]))
output = net(x_batch)
preds = torch.max(output, 1)[1]
train_targs += list(targets_train[slce])
train_preds += list(preds.data.numpy())
### Evaluate validation
val_preds, val_targs = [], []
for i in range(num_batches_valid):
slce = get_slice(i, batch_size)
x_batch = Variable(torch.from_numpy(x_valid[slce]))
output = net(x_batch)
preds = torch.max(output, 1)[1]
val_preds += list(preds.data.numpy())
val_targs += list(targets_valid[slce])
train_acc_cur = accuracy_score(train_targs, train_preds)
valid_acc_cur = accuracy_score(val_targs, val_preds)
train_acc.append(train_acc_cur)
valid_acc.append(valid_acc_cur)
if epoch % 10 == 0:
print("Epoch %2i : Train Loss %f , Train acc %f, Valid acc %f" % (
epoch+1, losses[-1], train_acc_cur, valid_acc_cur))
epoch = np.arange(len(train_acc))
plt.figure()
plt.plot(epoch, train_acc, 'r', epoch, valid_acc, 'b')
plt.legend(['Train Accuracy','Validation Accuracy'])
plt.xlabel('Updates'), plt.ylabel('Acc')<jupyter_output>Epoch 1 : Train Loss 0.218122 , Train acc 0.675000, Valid acc 0.590000
Epoch 11 : Train Loss 0.150082 , Train acc 0.982000, Valid acc 0.834000
Epoch 21 : Train Loss 0.147893 , Train acc 0.988000, Valid acc 0.846000
Epoch 31 : Train Loss 0.147497 , Train acc 0.989000, Valid acc 0.850000
Epoch 41 : Train Loss 0.147071 , Train acc 0.993000, Valid acc 0.852000
Epoch 51 : Train Loss 0.146967 , Train acc 0.993000, Valid acc 0.856000
Epoch 61 : Train Loss 0.146925 , Train acc 0.993000, Valid acc 0.854000
Epoch 71 : Train Loss 0.146893 , Train acc 0.993000, Valid acc 0.854000
Epoch 81 : Train Loss 0.146788 , Train acc 0.994000, Valid acc 0.854000
Epoch 91 : Train Loss 0.146596 , Train acc 0.996000, Valid acc 0.846000
|
no_license
|
/02456-deep-learning-with-PyTorch-master/1_Feedforward/1.4-FFN-MNIST.ipynb
|
DannyDannyDanny/Deep-Learning
| 4 |
<jupyter_start><jupyter_text>https://towardsdatascience.com/time-series-forecasting-in-real-life-budget-forecasting-with-arima-d5ec57e634cb
https://www.analyticsvidhya.com/blog/2016/02/time-series-forecasting-codes-python/
https://www.kdnuggets.com/2020/01/stock-market-forecasting-time-series-analysis.html<jupyter_code><jupyter_output><empty_output><jupyter_text>#Importing Basics Packages<jupyter_code>!pip install yfinance
# imports datetime for picking beginning and end dates for the analysis
import datetime
# imports yahoo finance for getting historical stock prices
import yfinance as yf
# imports pandas for dataframe manipulation
import pandas as pd
# imports numpy
import numpy as np
# for data visualization
import matplotlib as mpl
# for changing the plot size in the Jupyter Notebook output
%matplotlib inline
# sets the plot size to 12x8
mpl.rcParams['figure.figsize'] = (12,8)
# for shorter lines with plotting
from matplotlib import pyplot as plt
# to hide warning messages
import warnings
warnings.filterwarnings('ignore')
import statsmodels.tsa.api as smt
import statsmodels.api as sm
import scipy.stats as scs
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.stattools import kpss
from statsmodels.tsa.stattools import coint
# sets the sample period as 7 years back from 2020-02-11
end = datetime.datetime(2020, 2, 11)
start = end - datetime.timedelta(days = 7*365)<jupyter_output><empty_output><jupyter_text>#Declaring a Function for time series plot<jupyter_code>def tsplot(y, lags=None, figsize=(10, 8), style='seaborn-bright'):
if not isinstance(y, pd.Series):
y = pd.Series(y)
with plt.style.context(style):
fig = plt.figure(figsize=figsize)
layout = (3, 2)
ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
acf_ax = plt.subplot2grid(layout, (1, 0))
pacf_ax = plt.subplot2grid(layout, (1, 1))
qq_ax = plt.subplot2grid(layout, (2, 0))
pp_ax = plt.subplot2grid(layout, (2, 1))
y.plot(ax=ts_ax, linewidth=1.5)
ts_ax.set_title('Time Series Analysis Plots')
smt.graphics.plot_acf(y, lags=lags, ax=acf_ax, alpha=0.05)
smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax, alpha=0.05)
sm.qqplot(y, line='s', ax=qq_ax)
qq_ax.set_title('QQ Plot')
scs.probplot(y, sparams=(y.mean(), y.std()), plot=pp_ax)
plt.tight_layout()
return<jupyter_output><empty_output><jupyter_text>#Getting Nifty 50 data from Yahoo<jupyter_code># gets the closing price fo Nifty 50 for the past 7 years
my_stock = yf.Ticker('^NSEI')
my_stock = pd.DataFrame(my_stock.history(start = start, end = end)['Close'])
my_stock = my_stock.rename(str.lower, axis = 'columns')
my_stock.head()
tsplot(my_stock.close)<jupyter_output><empty_output><jupyter_text>Dickey-Fuller Test can help to find out that a time series is stationary or not. It is a statistical test, where the Null Hypothesis states there is a unit root for the given series, while the alternative hypothesis states that the series is stationary.#Dickey-Fuller test, with the option of doing a log-transform.<jupyter_code># log_dataset: boolean indicating if we want to log-transform the dataset before running Augmented Dickey-Fuller test
def adf_test(dataset, log_dataset):
ds = dataset
if log_dataset:
ds = dataset.apply(lambda x: np.log(x))
ds.dropna(inplace=True)
result = adfuller(ds.close)
print('Augmented Dickey-Fuller Test')
print('test statistic: %.10f' % result[0])
print('p-value: %.10f' % result[1])
print('critical values')
for key, value in result[4].items():
print('\t%s: %.10f' % (key, value))
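# Interpretation: if the p-value is above 0.05 (and the test statistic is above the critical
# values) we cannot reject the null hypothesis of a unit root, i.e. the series is non-stationary.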
adf_test(my_stock,1)<jupyter_output>Augmented Dickey-Fuller Test
test statistic: -1.0663849923
p-value: 0.7282893893
critical values
1%: -3.4341889019
5%: -2.8632356598
10%: -2.5676727236
<jupyter_text>We can apply other techniques that transform the data, without changing its properties:
Differencing subtract each data point by the value of a specific time point in the series, e.g., always subtract by the value of the next period
Decomposition this technique is going to isolate each component of the time-series that was mentioned at the beginning (trend, seasonality, cycle, irregularity) and provide the residuals#Differencing<jupyter_code># data: our dataset
# column_name: column to difference
n_diff_dataset = pd.DataFrame(data=np.diff(np.array(my_stock['close'])))
n_diff_dataset.columns = ['close']
# dropping NAN values
n_diff_dataset.dropna(inplace=True)
adf_test(n_diff_dataset,0)
#n_diff_dataset.head()
# create a differenced series
def difference(dataset, interval=1):
diff = list()
for i in range(interval, len(dataset)):
value = dataset[i] - dataset[i - interval]
diff.append(value)
return diff
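# Note: for interval=1 this manual difference() is equivalent to np.diff(dataset), so the two
# differenced series built above and below should give the same ADF result.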
n_diff_dataset_1=difference(my_stock.close)
n_diff_dataset_1=pd.DataFrame(data=n_diff_dataset_1)
n_diff_dataset_1.columns=['close']
adf_test(n_diff_dataset_1,0)
tsplot(n_diff_dataset_1.close)<jupyter_output><empty_output><jupyter_text>#Decomposes<jupyter_code>from statsmodels.tsa.seasonal import seasonal_decompose
from matplotlib import pyplot
result = seasonal_decompose(my_stock, freq = 52, model='multiplicative')
result.plot()
pyplot.show()<jupyter_output><empty_output>
|
no_license
|
/Make_a_Signal_Stationary.ipynb
|
jesvin1/EPAT_TIMESERIES
| 7 |
<jupyter_start><jupyter_text>## Make predictions on actual board images<jupyter_code>X_test1, y_test1 = get_features_labels("C:\\Users\\issuser\\Desktop\\ExtendingBoardGamesOnline\\data\\myboard_images")
test_predictions1 = model.predict(X_test1, batch_size=batch_size)
y_test_pred1 = [np.argmax(x) for x in test_predictions1]
cnf_matrix1 = confusion_matrix(y_test1, y_test_pred1)
plot_confusion_matrix(cnf_matrix1, classes=class_names, normalize=False,title='confusion matrix')<jupyter_output><empty_output>
|
no_license
|
/chess_piece_detection/3_class_classifier/CNN/ColorDetectionCNN-Modeling-ColorImages.ipynb
|
ace-racer/Extending-Board-Games-using-deep-learning
| 1 |
<jupyter_start><jupyter_text>
Classification with PythonIn this notebook we try to practice all the classification algorithms that we learned in this course.
We load a dataset using Pandas library, and apply the following algorithms, and find the best one for this specific dataset by accuracy evaluation methods.
Lets first load required libraries:<jupyter_code>import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import numpy as np
import matplotlib.ticker as ticker
from sklearn import preprocessing
%matplotlib inline<jupyter_output><empty_output><jupyter_text>### About datasetThis dataset is about past loans. The __Loan_train.csv__ data set includes details of 346 customers whose loan are already paid off or defaulted. It includes following fields:
| Field | Description |
|----------------|---------------------------------------------------------------------------------------|
| Loan_status | Whether a loan is paid off or in collection |
| Principal | Basic principal loan amount at the |
| Terms | Origination terms which can be weekly (7 days), biweekly, and monthly payoff schedule |
| Effective_date | When the loan got originated and took effect |
| Due_date | Since it’s one-time payoff schedule, each loan has one single due date |
| Age | Age of applicant |
| Education | Education of applicant |
| Gender | The gender of applicant |Lets download the dataset<jupyter_code>!wget -O loan_train.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv<jupyter_output>--2019-09-15 13:31:07-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.193
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 23101 (23K) [text/csv]
Saving to: ‘loan_train.csv’
100%[======================================>] 23,101 --.-K/s in 0.07s
2019-09-15 13:31:08 (304 KB/s) - ‘loan_train.csv’ saved [23101/23101]
<jupyter_text>### Load Data From CSV File <jupyter_code>df = pd.read_csv('loan_train.csv')
df.head()
df.shape<jupyter_output><empty_output><jupyter_text>### Convert to date time object <jupyter_code>df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()<jupyter_output><empty_output><jupyter_text># Data visualization and pre-processing
Let’s see how many of each class is in our data set <jupyter_code>df['loan_status'].value_counts()<jupyter_output><empty_output><jupyter_text>260 people have paid off the loan on time while 86 have gone into collection
Lets plot some columns to underestand data better:<jupyter_code># notice: installing seaborn might takes a few minutes
!conda install -c anaconda seaborn -y
import seaborn as sns
bins = np.linspace(df.Principal.min(), df.Principal.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'Principal', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
bins = np.linspace(df.age.min(), df.age.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'age', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()<jupyter_output><empty_output><jupyter_text># Pre-processing: Feature selection/extraction### Lets look at the day of the week people get the loan <jupyter_code>df['dayofweek'] = df['effective_date'].dt.dayofweek
bins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'dayofweek', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()<jupyter_output><empty_output><jupyter_text>We see that people who get the loan at the end of the week dont pay it off, so lets use Feature binarization to set a threshold values less then day 4 <jupyter_code>df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
df.head()<jupyter_output><empty_output><jupyter_text>## Convert Categorical features to numerical valuesLets look at gender:<jupyter_code>df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)<jupyter_output><empty_output><jupyter_text>86 % of female pay there loans while only 73 % of males pay there loan
Lets convert male to 0 and female to 1:
<jupyter_code>df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
df.head()<jupyter_output><empty_output><jupyter_text>## One Hot Encoding
#### How about education?<jupyter_code>df.groupby(['education'])['loan_status'].value_counts(normalize=True)<jupyter_output><empty_output><jupyter_text>#### Feature befor One Hot Encoding<jupyter_code>df[['Principal','terms','age','Gender','education']].head()<jupyter_output><empty_output><jupyter_text>#### Use one hot encoding technique to conver categorical varables to binary variables and append them to the feature Data Frame <jupyter_code>Feature = df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature,pd.get_dummies(df['education'])], axis=1)
Feature.drop(['Master or Above'], axis = 1,inplace=True)
Feature.head()<jupyter_output><empty_output><jupyter_text>### Feature selectionLets defind feature sets, X:<jupyter_code>X = Feature
X[0:5]<jupyter_output><empty_output><jupyter_text>What are our lables?<jupyter_code>y = df['loan_status'].values
y[0:5]<jupyter_output><empty_output><jupyter_text>## Normalize Data Data Standardization give data zero mean and unit variance (technically should be done after train test split )<jupyter_code>X= preprocessing.StandardScaler().fit(X).transform(X)
X[0:5]<jupyter_output>/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/preprocessing/data.py:645: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/opt/conda/envs/Python36/lib/python3.6/site-packages/ipykernel/__main__.py:1: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler.
if __name__ == '__main__':
<jupyter_text># Classification Now, it is your turn, use the training set to build an accurate model. Then use the test set to report the accuracy of the model
You should use the following algorithm:
- K Nearest Neighbor(KNN)
- Decision Tree
- Support Vector Machine
- Logistic Regression
__ Notice:__
- You can go above and change the pre-processing, feature selection, feature-extraction, and so on, to make a better model.
- You should use either scikit-learn, Scipy or Numpy libraries for developing the classification algorithms.
- You should include the code of the algorithm in the following cells.# K Nearest Neighbor(KNN)
Notice: You should find the best k to build the model with the best accuracy.
**warning:** You should not use the __loan_test.csv__ for finding the best k, however, you can split your train_loan.csv into train and test to find the best __k__.<jupyter_code>from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
from sklearn import metrics
from sklearn.neighbors import KNeighborsClassifier
Ks = 10
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
ConfustionMx = [];
for n in range(1,Ks):
#Train Model and Predict
neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
yhat_knn=neigh.predict(X_test)
mean_acc[n-1] = metrics.accuracy_score(y_test, yhat_knn)
std_acc[n-1]=np.std(yhat_knn==y_test)/np.sqrt(yhat_knn.shape[0])
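# mean_acc[n-1] holds the hold-out accuracy for k = n; the best k is read off with argmax below
# (plotting mean_acc against k would show the same result graphically).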
print( "The best accuracy for KNN was with", mean_acc.max(), "with k=", mean_acc.argmax()+1) <jupyter_output>The best accuracy for KNN was with 0.7857142857142857 with k= 7
<jupyter_text># Decision Tree<jupyter_code>from sklearn.tree import DecisionTreeClassifier
loan = DecisionTreeClassifier(criterion="entropy", max_depth = 4)
loan.fit(X_train,y_train)
yhat_dt = loan.predict(X_test)
print("Accuracy for Decision Tree", metrics.accuracy_score(y_test, yhat_dt))<jupyter_output>Accuracy for Decision Tree 0.6142857142857143
<jupyter_text># Support Vector Machine<jupyter_code>from sklearn import svm
clf = svm.SVC(kernel='rbf')
clf.fit(X_train, y_train)
yhat_svm = clf.predict(X_test)
print("Accuracy for SVM",metrics.accuracy_score(y_test, yhat_svm))<jupyter_output>Accuracy for SVM 0.7428571428571429
<jupyter_text># Logistic Regression<jupyter_code>from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
LR = LogisticRegression(C=0.01, solver='liblinear').fit(X_train,y_train)
yhat_lr = LR.predict(X_test)
print("Accuracy for Logistic Regression", metrics.accuracy_score(y_test, yhat_lr))<jupyter_output>Accuracy for Logistic Regression 0.6857142857142857
<jupyter_text># Model Evaluation using Test set<jupyter_code>from sklearn.metrics import jaccard_similarity_score
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss<jupyter_output><empty_output><jupyter_text>First, download and load the test set:<jupyter_code>!wget -O loan_test.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv<jupyter_output>--2019-09-15 13:37:49-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.193
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3642 (3.6K) [text/csv]
Saving to: ‘loan_test.csv’
100%[======================================>] 3,642 --.-K/s in 0s
2019-09-15 13:37:50 (376 MB/s) - ‘loan_test.csv’ saved [3642/3642]
<jupyter_text>### Load Test set for evaluation <jupyter_code>test_df = pd.read_csv('loan_test.csv')
test_df.head()<jupyter_output><empty_output><jupyter_text>## Data pre-processing<jupyter_code>test_df['due_date'] = pd.to_datetime(test_df['due_date'])
test_df['effective_date'] = pd.to_datetime(test_df['effective_date'])
test_df['dayofweek'] = test_df['effective_date'].dt.dayofweek
test_df['weekend'] = test_df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
test_df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
test_Feature = test_df[['Principal','terms','age','Gender','weekend']]
test_Feature = pd.concat([test_Feature,pd.get_dummies(test_df['education'])], axis=1)
test_Feature.drop(['Master or Above'], axis = 1,inplace=True)
test_X = test_Feature
test_X= preprocessing.StandardScaler().fit(test_X).transform(test_X)
test_y = test_df['loan_status'].values<jupyter_output>/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/preprocessing/data.py:645: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/opt/conda/envs/Python36/lib/python3.6/site-packages/ipykernel/__main__.py:10: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler.
<jupyter_text>## KNN<jupyter_code>neigh = KNeighborsClassifier(n_neighbors = mean_acc.argmax()+1 ).fit(X,y)
print("jaccard index for KNN -", jaccard_similarity_score(test_y, neigh.predict(test_X)))
print("F1-score for KNN -", f1_score(test_y, neigh.predict(test_X), average='weighted'))<jupyter_output>jaccard index for KNN - 0.7222222222222222
F1-score for KNN - 0.7001989201477693
<jupyter_text>## Decision Tree<jupyter_code>loan = DecisionTreeClassifier(criterion="entropy", max_depth = 4).fit(X,y)
print("jaccard index for Decision Tree -", jaccard_similarity_score(test_y, loan.predict(test_X)))
print("F1-score for Decision Tree -", f1_score(test_y, loan.predict(test_X), average='weighted'))<jupyter_output>jaccard index for Decision Tree - 0.7777777777777778
F1-score for Decision Tree - 0.7283950617283951
<jupyter_text>## SVM<jupyter_code>clf = svm.SVC(kernel='rbf').fit(X, y)
print("jaccard index for SVM -", jaccard_similarity_score(test_y, clf.predict(test_X)))
print("F1-score for SVM -", f1_score(test_y, clf.predict(test_X), average='weighted'))<jupyter_output>jaccard index for SVM - 0.7222222222222222
F1-score for SVM - 0.6212664277180406
<jupyter_text>## Logistic Regression<jupyter_code>LR = LogisticRegression(C=0.01, solver='liblinear').fit(X,y)
print("jaccard index for Logistic Regression -", jaccard_similarity_score(test_y, LR.predict(test_X)))
print("F1-score for Logistic Regression -", f1_score(test_y, LR.predict(test_X), average='weighted'))
print("Logloss for Logistic Regression -", log_loss(test_y, LR.predict_proba(test_X)))<jupyter_output>jaccard index for Logistic Regression - 0.7407407407407407
F1-score for Logistic Regression - 0.6304176516942475
Logloss for Logistic Regression - 0.5566084946309207
<jupyter_text># Report<jupyter_code>table = [["KNN",jaccard_similarity_score(test_y, neigh.predict(test_X)),f1_score(test_y, neigh.predict(test_X), average='weighted'),'NA'],["Decision Tree",jaccard_similarity_score(test_y, loan.predict(test_X)),f1_score(test_y, loan.predict(test_X), average='weighted'),'NA'],
... ["SVM",jaccard_similarity_score(test_y, clf.predict(test_X)),f1_score(test_y, clf.predict(test_X), average='weighted'),'NA'],["LogisticRegression",jaccard_similarity_score(test_y, LR.predict(test_X)),f1_score(test_y, LR.predict(test_X), average='weighted'),log_loss(test_y, LR.predict_proba(test_X))]]
from tabulate import tabulate
from IPython.display import HTML
HTML(tabulate(table, headers= ['Algorithm', 'Jaccard', 'F1-score','LogLoss'], tablefmt='html'))<jupyter_output>/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/metrics/classification.py:1143: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
|
no_license
|
/ML0101EN-Proj-Loan-py-v1.ipynb
|
padmanabha-bonageri/Machine_learning
| 29 |
<jupyter_start><jupyter_text>Module 3### 1. Import the data from the previous module; these data are already formatted
the initial libraries are imported
and the clean_df.csv file is loaded<jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
ruta = 'clean_df.csv'
df = pd.read_csv(ruta)
df.head()<jupyter_output><empty_output><jupyter_text> 2. Analyzing individual feature patterns using visualization <jupyter_code>print(df.dtypes)
<jupyter_output>Unnamed: 0 int64
symboling int64
normalized-losses int64
make object
num-of-doors object
body-style object
drive-wheels object
engine-location object
wheel-base float64
length float64
width float64
height float64
curb-weight int64
engine-type object
num-of-cylinders object
engine-size int64
fuel-system object
bore float64
stroke float64
compression-ratio float64
horsepower int64
peak-rpm float64
city-mpg int64
highway-L/100km float64
price float64
diesel int64
gas int64
aspiration-std int64
aspiration-turbo int64
dtype: object
<jupyter_text>Para instalar seaborn usamos el pip que es el administrador de paquetes de python.
> ***pip install seaborn*** How do we choose the right visualization method?
Al visualizar variables individuales, es importante comprender primero con qué tipo de variable está tratando. Esto nos ayudará a encontrar el método de visualización adecuado para esa variable. <jupyter_code>print(df.dtypes['peak-rpm'])
df.corr()<jupyter_output><empty_output><jupyter_text>por ejemplo, podemos calcular la correlación entre variables de tipo `int64` o `float64` utilizando el método `corr ()`:<jupyter_code>df[['bore','stroke' ,'compression-ratio','horsepower']].corr()<jupyter_output><empty_output><jupyter_text> Variables numéricas continuas:
Continuous numerical variables are variables that can take any value within a certain range. Continuous numerical variables can have the type "int64" or "float64". A great way to visualize these variables is by using scatter plots with fitted lines.
To start understanding the (linear) relationship between an individual variable and the price, we can use "regplot", which plots the scatter plot plus the fitted regression line for the data.
Veamos varios ejemplos de diferentes relaciones lineales:
Relación lineal positiva
Busquemos el diagrama de dispersión de tamaño del `motor` y `precio`
<jupyter_code>df[['highway-L/100km', 'price']].corr()
# Engine size as potential predictor variable of price
sns.regplot(x="engine-size", y="price", data=df)
plt.ylim(0,)<jupyter_output><empty_output><jupyter_text>> As the engine size increases, the price goes up: this indicates a positive direct correlation between these two variables. Engine size seems like a good predictor of price since the regression line is almost a perfect diagonal. We can examine the correlation between 'engine-size' and 'price' and see that it is approximately 0.87<jupyter_code>df[["engine-size", "price"]].corr()<jupyter_output><empty_output><jupyter_text>Highway mpg is a potential predictor variable of price ***this value differs from the course guide*** *according to the guide it should give a negative linear correlation*<jupyter_code>sns.regplot(x="highway-L/100km", y="price", data=df)
sns.regplot(x="peak-rpm", y="price", data=df)<jupyter_output><empty_output><jupyter_text> Weak linear relationship. Let's see whether `peak-rpm` works as a predictor variable of `price`.<jupyter_code>df[['peak-rpm','price']].corr()<jupyter_output><empty_output><jupyter_text> Peak rpm does not seem like a good predictor of price, since the regression line is close to
horizontal. In addition, the data points are very scattered and far from the fitted line, showing a lot of
variability. Therefore, it is not a reliable variable. We can examine the correlation between 'peak-rpm' and 'price' and see that it is approximately -0.101616<jupyter_code>df[["stroke","price"]].corr()
sns.regplot(x="stroke", y="price", data=df)
sns.boxplot(x="body-style", y="price", data=df)<jupyter_output><empty_output><jupyter_text>Veamos la relación entre `estilo de cuerpo` y `precio`. Variables categóricas
Estas son variables que describen una 'característica' de una unidad de datos y se seleccionan de un pequeño grupo de categorías. Las variables categóricas pueden tener el tipo "objeto" o "int64". Una buena forma de visualizar variables categóricas es mediante el uso de diagramas de caja. <jupyter_code>sns.boxplot(x="engine-location", y="price", data=df)<jupyter_output><empty_output><jupyter_text>> Vemos que las distribuciones de precios entre las diferentes categorías de estilo de cuerpo tienen una superposición significativa, por lo que el estilo de cuerpo no sería un buen predictor del precio. Examinemos la ubicación del motor" y el "precio" del motor:<jupyter_code>sns.boxplot(x="drive-wheels", y="price", data=df)<jupyter_output><empty_output><jupyter_text> > Aquí vemos que la distribución del precio entre estas dos categorías de ubicación del motor, delantera y trasera, son lo suficientemente distintas como para tomar la ubicación del motor como un buen predictor potencial del precio. Examinemos las `ruedas motrices` y el `precio`. 3. Análisis estadístico descriptivo > Aquí vemos que la distribución de precios entre las diferentes categorías de ruedas motrices es diferente; como tales, las ruedas motrices podrían ser un predictor de precio.
First, let's take a look at the variables using the describe method.
The *describe* function automatically computes basic statistics for all continuous variables. Any NaN values are automatically skipped in these statistics.
This will show:
the count of that variable
the mean
the standard deviation (std)
the minimum value
the IQR (interquartile range: 25%, 50% and 75%)
the maximum value
<jupyter_code>df.describe()
df.describe(include=['object'])<jupyter_output><empty_output><jupyter_text> Value counts
Value counts are a good way of understanding how many units of each characteristic/variable we have. We can apply the "value_counts" method on the 'drive-wheels' column. Don't forget that the "value_counts" method only works on Pandas Series, not Pandas DataFrames. As a result, we only include one bracket "df['drive-wheels']", not two brackets "df[['drive-wheels']]". <jupyter_code>df['drive-wheels'].value_counts()
df['drive-wheels'].value_counts().to_frame()<jupyter_output><empty_output><jupyter_text>Repeat the steps above, but save the results to the dataframe `drive_wheels_counts` and rename the column `drive-wheels` to `value_counts`.<jupyter_code>drive_wheels_counts = df['drive-wheels'].value_counts().to_frame()
drive_wheels_counts.rename(columns={'drive-wheels': 'value_counts'}, inplace=True)
drive_wheels_counts<jupyter_output><empty_output><jupyter_text>Now let's rename the index to `drive-wheels`:<jupyter_code>drive_wheels_counts.index.name = 'drive-wheels'
drive_wheels_counts
# engine-location as variable
engine_loc_counts = df['engine-location'].value_counts().to_frame()
engine_loc_counts.rename(columns={'engine-location': 'value_counts'}, inplace=True)
engine_loc_counts.index.name = 'engine-location'
engine_loc_counts.head(10)<jupyter_output><empty_output><jupyter_text> From the information above for the engine-location variable: after examining the value counts, engine location would not be a good predictor variable for price. This is because we only have three cars with a rear engine and 198 with a front engine, so this result is skewed. Therefore, we cannot draw any conclusion about engine location. The "groupby" method groups the data by different categories. The data is grouped based on one or several variables, and the analysis is performed on the individual groups.
For example, let's group by the "drive-wheels" variable. We see that there are 3 different categories of drive wheels. <jupyter_code>df['drive-wheels'].unique()<jupyter_output><empty_output><jupyter_text> If we want to know, on average, which type of drive wheel is most valuable, we can group by "drive-wheels" and then average them.
We can select the columns 'drive-wheels', 'body-style' and 'price', and then assign them to the variable "df_group_one". <jupyter_code>df_group_one = df[['drive-wheels','body-style','price']]<jupyter_output><empty_output><jupyter_text>Then we can compute the average price for each of the different categories of data.<jupyter_code>df_group_one = df_group_one.groupby(['drive-wheels'],as_index=False).mean()
df_group_one<jupyter_output><empty_output><jupyter_text>
Based on our data, it seems that rear-wheel-drive vehicles are, on average, the most expensive, while 4-wheel-drive and front-wheel-drive vehicles are roughly the same price.
You can also group by multiple variables. For example, let's group by 'drive-wheels' and 'body-style'. This groups the dataframe by the unique combinations of 'drive-wheels' and 'body-style'. We can store the results in the variable 'grouped_test1'. <jupyter_code>df_gptest = df[['drive-wheels','body-style','price']]
grouped_test1 = df_gptest.groupby(['drive-wheels','body-style'],as_index=False).mean()
grouped_test1<jupyter_output><empty_output><jupyter_text> These grouped data are much easier to visualize when turned into a pivot table. A pivot table is like an Excel spreadsheet, with one variable along the columns and another along the rows. We can convert the dataframe into a pivot table using the "pivot" method to create a pivot table from the groups.
In this case, we will leave the drive-wheels variable as the rows of the table and pivot body-style so that it becomes the columns of the table: <jupyter_code>grouped_pivot = grouped_test1.pivot(index='drive-wheels',columns='body-style')
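# (Added sketch.) pandas' pivot_table can build the same table in one step and fill
# missing combinations directly; 'grouped_pivot_alt' is a name used only here.
grouped_pivot_alt = df_gptest.pivot_table(index='drive-wheels', columns='body-style', values='price', aggfunc='mean', fill_value=0)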
grouped_pivot<jupyter_output><empty_output><jupyter_text> Often, we will not have data for some of the pivot cells. We can fill these missing cells with the value 0, but any other value could also be used. It is worth mentioning that missing data is a fairly complex topic and an entire course on its own. <jupyter_code>grouped_pivot = grouped_pivot.fillna(0) #fill missing values with 0
grouped_pivot
df_gptest2 = df[['body-style','price']]
grouped_test2 = df_gptest2.groupby(['body-style'],as_index=False).mean()
grouped_test2<jupyter_output><empty_output><jupyter_text>Let's use a heat map to visualize the relationship between Body Style vs Price.<jupyter_code>#use the grouped results
plt.pcolor(grouped_pivot, cmap='RdBu')
plt.colorbar()
plt.show()<jupyter_output><empty_output><jupyter_text> The heat map plots the target variable (price) proportional to color with respect to the 'drive-wheels' and 'body-style' variables on the vertical and horizontal axes respectively. This lets us visualize how price is related to 'drive-wheels' and 'body-style'.
The default labels don't convey any useful information to us. Let's change that: <jupyter_code>fig, ax = plt.subplots()
im = ax.pcolor(grouped_pivot, cmap='RdBu')
#label names
row_labels = grouped_pivot.columns.levels[1]
col_labels = grouped_pivot.index
#move ticks and labels to the center
ax.set_xticks(np.arange(grouped_pivot.shape[1]) + 0.5, minor=False)
ax.set_yticks(np.arange(grouped_pivot.shape[0]) + 0.5, minor=False)
#insert labels
ax.set_xticklabels(row_labels, minor=False)
ax.set_yticklabels(col_labels, minor=False)
#rotate label if too long
plt.xticks(rotation=90)
fig.colorbar(im)
plt.show()<jupyter_output><empty_output><jupyter_text> 5. Correlation and causation
Correlation: a measure of the extent of interdependence between variables.
Causation: the cause-and-effect relationship between two variables.
It is important to know the difference between the two, and that correlation does not imply causation. Determining correlation is much simpler than determining causation, since causation may require independent experimentation.
Pearson correlation
The Pearson correlation measures the linear dependence between two variables X and Y.
The resulting coefficient is a value between -1 and 1 inclusive, where:
1: total positive linear correlation.
0: no linear correlation; the two variables most likely do not affect each other.
-1: total negative linear correlation.
Pearson correlation is the default method of the "corr" function. As before, we can compute the Pearson correlation of the 'int64' or 'float64' variables. <jupyter_code>df.corr()<jupyter_output><empty_output><jupyter_text>Sometimes we would like to know how significant the correlation estimate is. P-value:
What is this P-value? The P-value is the probability value that the correlation between these two variables is statistically significant. Normally, we choose a significance level of 0.05, which means that we are 95% confident that the correlation between the variables is significant.
By convention, when the
p-value is < 0.001: there is strong evidence that the correlation is significant,
p-value is < 0.05: there is moderate evidence that the correlation is significant,
p-value is < 0.1: there is weak evidence that the correlation is significant,
p-value is > 0.1: there is no evidence that the correlation is significant.
We can obtain this information using the "stats" module in the "scipy" library.<jupyter_code>from scipy import stats<jupyter_output><empty_output><jupyter_text>Let's compute the Pearson correlation coefficient and P-value of 'wheel-base' and 'price'.<jupyter_code>pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value)<jupyter_output>The Pearson Correlation Coefficient is 0.584641822265508 with a P-value of P = 8.076488270733218e-20
<jupyter_text> Conclusion:
Since the p-value is < 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship isn't extremely strong (~0.585).
Let's compute the Pearson correlation coefficient and P-value of 'horsepower' and 'price'.<jupyter_code>pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value) <jupyter_output>The Pearson Correlation Coefficient is 0.8096068016571054 with a P-value of P = 6.273536270650504e-48
<jupyter_text> Conclusion:
Since the p-value is < 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.810). Let's compute the Pearson correlation coefficient and P-value of 'length' and 'price'.<jupyter_code>pearson_coef, p_value = stats.pearsonr(df['length'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value) <jupyter_output>The Pearson Correlation Coefficient is 0.6906283804483638 with a P-value of P = 8.016477466159556e-30
<jupyter_text> Conclusion:
Since the p-value is < 0.001, the correlation between length and price is statistically significant, and the linear relationship is moderately strong (~0.691). Let's compute the Pearson correlation coefficient and P-value of 'width' and 'price':<jupyter_code>pearson_coef, p_value = stats.pearsonr(df['width'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value ) <jupyter_output>The Pearson Correlation Coefficient is 0.7512653440522673 with a P-value of P = 9.200335510481646e-38
<jupyter_text>
##### Conclusion:
Since the p-value is < 0.001, the correlation between width and price is statistically significant, and the linear relationship is quite strong (~0.751). Let's compute the Pearson correlation coefficient and P-value of 'curb-weight' and 'price':<jupyter_code>pearson_coef, p_value = stats.pearsonr(df['curb-weight'], df['price'])
print( "The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value) <jupyter_output>The Pearson Correlation Coefficient is 0.8344145257702843 with a P-value of P = 2.189577238894065e-53
<jupyter_text> Conclusion:
Since the p-value is < 0.001, the correlation between curb-weight and price is statistically significant, and the linear relationship is quite strong (~0.834). Let's compute the Pearson correlation coefficient and P-value of 'engine-size' and 'price':<jupyter_code>pearson_coef, p_value = stats.pearsonr(df['engine-size'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value) <jupyter_output>The Pearson Correlation Coefficient is 0.8723351674455185 with a P-value of P = 9.265491622198389e-64
<jupyter_text> Conclusion:
Since the p-value is < 0.001, the correlation between engine-size and price is statistically significant, and the linear relationship is very strong (~0.872). Let's compute the Pearson correlation coefficient and P-value of 'bore' and 'price':<jupyter_code>pearson_coef, p_value = stats.pearsonr(df['bore'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value ) <jupyter_output>The Pearson Correlation Coefficient is 0.5431553832626602 with a P-value of P = 8.049189483935489e-17
<jupyter_text> Conclusion:
Since the p-value is < 0.001, the correlation between bore and price is statistically significant, but the linear relationship is only moderate (~0.543). We can repeat the process for 'city-mpg' and 'highway-mpg':<jupyter_code>pearson_coef, p_value = stats.pearsonr(df['city-mpg'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value) <jupyter_output>The Pearson Correlation Coefficient is -0.6865710067844678 with a P-value of P = 2.321132065567641e-29
<jupyter_text> Conclusion:
Since the p-value is < 0.001, the correlation between city-mpg and price is statistically significant, and the coefficient of ~-0.687 shows that the relationship is negative and moderately strong. Highway-mpg vs price <jupyter_code>pearson_coef, p_value = stats.pearsonr(df['highway-L/100km'], df['price'])
print( "The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value ) <jupyter_output>The Pearson Correlation Coefficient is 0.8011176263981974 with a P-value of P = 3.046784581041456e-46
<jupyter_text>##### Conclusion:
Since the p-value is < 0.001, the correlation between highway mpg and price is statistically significant, and the coefficient of ~0.80 shows that the relationship is positive and moderately strong. (This highway-mpg value was replaced by highway-L/100km, which gives a value different from the one computed on the edX course page.)## Analysis of variance ANOVA ANOVA: Analysis of Variance
Analysis of Variance (ANOVA) is a statistical method used to evaluate whether there are significant differences between the means of two or more groups. ANOVA returns two parameters:
F-test score: ANOVA assumes the means of all groups are equal, computes how much the actual means deviate from that assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means.
P-value: the P-value indicates how statistically significant our computed score value is.
If our price variable is strongly correlated with the variable we are analyzing, we expect ANOVA to return a sizeable F-test score and a small p-value. Drive wheels
Since ANOVA analyzes the difference between different groups of the same variable, the groupby function will come in handy. Because the ANOVA algorithm averages the data automatically, we do not need to take the average beforehand.
To see whether different types of 'drive-wheels' impact 'price', we group the data.<jupyter_code>grouped_test2=df_gptest[['drive-wheels', 'price']].groupby(['drive-wheels'])
grouped_test2.head(2)
df_gptest<jupyter_output><empty_output><jupyter_text>We can obtain the values of each group using the "get_group" method.<jupyter_code>grouped_test2.get_group('4wd')['price']<jupyter_output><empty_output><jupyter_text>We can use the 'f_oneway' function in the 'stats' module to obtain the F-test score and the P-value.<jupyter_code># ANOVA
f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'], grouped_test2.get_group('4wd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val) <jupyter_output>ANOVA results: F= 67.95406500780399 , P = 3.3945443577151245e-23
<jupyter_text>This is a great result, with a large F-test score showing a strong correlation and a P-value of almost 0 implying almost certain statistical significance. But does this mean all three tested groups are this highly correlated?#### Separately: fwd and rwd<jupyter_code>f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val )<jupyter_output>ANOVA results: F= 130.5533160959111 , P = 2.2355306355677845e-23
<jupyter_text>Let's examine the other groups.#### 4wd and rwd<jupyter_code>f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('rwd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val) <jupyter_output>ANOVA results: F= 8.580681368924756 , P = 0.004411492211225333
<jupyter_text>4wd and fwd<jupyter_code>f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('fwd')['price'])
print("ANOVA results: F=", f_val, ", P =", p_val) <jupyter_output>ANOVA results: F= 0.665465750252303 , P = 0.41620116697845666
<jupyter_start><jupyter_text>Collect Problem<jupyter_code>problem = """37107287533902102798797998220837590246510135740250
46376937677490009712648124896970078050417018260538
74324986199524741059474233309513058123726617309629
91942213363574161572522430563301811072406154908250
23067588207539346171171980310421047513778063246676
89261670696623633820136378418383684178734361726757
28112879812849979408065481931592621691275889832738
44274228917432520321923589422876796487670272189318
47451445736001306439091167216856844588711603153276
70386486105843025439939619828917593665686757934951
62176457141856560629502157223196586755079324193331
64906352462741904929101432445813822663347944758178
92575867718337217661963751590579239728245598838407
58203565325359399008402633568948830189458628227828
80181199384826282014278194139940567587151170094390
35398664372827112653829987240784473053190104293586
86515506006295864861532075273371959191420517255829
71693888707715466499115593487603532921714970056938
54370070576826684624621495650076471787294438377604
53282654108756828443191190634694037855217779295145
36123272525000296071075082563815656710885258350721
45876576172410976447339110607218265236877223636045
17423706905851860660448207621209813287860733969412
81142660418086830619328460811191061556940512689692
51934325451728388641918047049293215058642563049483
62467221648435076201727918039944693004732956340691
15732444386908125794514089057706229429197107928209
55037687525678773091862540744969844508330393682126
18336384825330154686196124348767681297534375946515
80386287592878490201521685554828717201219257766954
78182833757993103614740356856449095527097864797581
16726320100436897842553539920931837441497806860984
48403098129077791799088218795327364475675590848030
87086987551392711854517078544161852424320693150332
59959406895756536782107074926966537676326235447210
69793950679652694742597709739166693763042633987085
41052684708299085211399427365734116182760315001271
65378607361501080857009149939512557028198746004375
35829035317434717326932123578154982629742552737307
94953759765105305946966067683156574377167401875275
88902802571733229619176668713819931811048770190271
25267680276078003013678680992525463401061632866526
36270218540497705585629946580636237993140746255962
24074486908231174977792365466257246923322810917141
91430288197103288597806669760892938638285025333403
34413065578016127815921815005561868836468420090470
23053081172816430487623791969842487255036638784583
11487696932154902810424020138335124462181441773470
63783299490636259666498587618221225225512486764533
67720186971698544312419572409913959008952310058822
95548255300263520781532296796249481641953868218774
76085327132285723110424803456124867697064507995236
37774242535411291684276865538926205024910326572967
23701913275725675285653248258265463092207058596522
29798860272258331913126375147341994889534765745501
18495701454879288984856827726077713721403798879715
38298203783031473527721580348144513491373226651381
34829543829199918180278916522431027392251122869539
40957953066405232632538044100059654939159879593635
29746152185502371307642255121183693803580388584903
41698116222072977186158236678424689157993532961922
62467957194401269043877107275048102390895523597457
23189706772547915061505504953922979530901129967519
86188088225875314529584099251203829009407770775672
11306739708304724483816533873502340845647058077308
82959174767140363198008187129011875491310547126581
97623331044818386269515456334926366572897563400500
42846280183517070527831839425882145521227251250327
55121603546981200581762165212827652751691296897789
32238195734329339946437501907836945765883352399886
75506164965184775180738168837861091527357929701337
62177842752192623401942399639168044983993173312731
32924185707147349566916674687634660915035914677504
99518671430235219628894890102423325116913619626622
73267460800591547471830798392868535206946944540724
76841822524674417161514036427982273348055556214818
97142617910342598647204516893989422179826088076852
87783646182799346313767754307809363333018982642090
10848802521674670883215120185883543223812876952786
71329612474782464538636993009049310363619763878039
62184073572399794223406235393808339651327408011116
66627891981488087797941876876144230030984490851411
60661826293682836764744779239180335110989069790714
85786944089552990653640447425576083659976645795096
66024396409905389607120198219976047599490197230297
64913982680032973156037120041377903785566085089252
16730939319872750275468906903707539413042652315011
94809377245048795150954100921645863754710598436791
78639167021187492431995700641917969777599028300699
15368713711936614952811305876380278410754449733078
40789923115535562561142322423255033685442488917353
44889911501440648020369068063960672322193204149535
41503128880339536053299340368006977710650566631954
81234880673210146739058568557934581403627822703280
82616570773948327592232845941706525094512325230608
22918802058777319719839450180888072429661980811197
77158542502016545090413245809786882778948721859617
72107838435069186155435662884062257473692284509516
20849603980134001723930671666823555245252804609722
53503534226472524250874054075591789781264330331690 """
<jupyter_output><empty_output><jupyter_text>Split String and into list of strings (lists = [])<jupyter_code>ingredients = str.splitlines(problem)
print (ingredients)
numbers = []
for i in ingredients:
numbers.append(int(i))
print (numbers)
<jupyter_output><empty_output><jupyter_text>Sum inputs<jupyter_code>
numbers_2 = sum(numbers)
print(str(numbers_2) [:10])
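# (Added sketch.) The whole computation can also be written in one line with a
# generator expression over the whitespace-separated chunks of the problem string:
print(str(sum(int(chunk) for chunk in problem.split()))[:10])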
<jupyter_output>5537376230
<jupyter_start><jupyter_text># Working of PCA on olivetti dataset from sklearn<jupyter_code>## all the imports done here
# datasets for Olivetti
from sklearn import datasets
# pyplot for plotting graphs
import matplotlib.pyplot as plt
# PCA for dimensionality reduction
from sklearn.decomposition import PCA
# downloading the Olivetti dataset
oliv=datasets.fetch_olivetti_faces()
# showing oliv
oliv
# from description we come to know that there are 10 images of each person
# oliv datasets is dictionary with keys as shown below
oliv.keys()
# data of oliv contains the combined info about all the faces
# 400 data values and 4096 columns
oliv["data"].shape
# the face images are used for plotting them
# 400 images and size 64*64
oliv["images"].shape
# there are 40 persons each having 10 facial expressions
# here plotted just 64 of them
fig=plt.figure(figsize=(8,8))
for i in range(64):
ax=fig.add_subplot(8,8,i+1)
ax.imshow(oliv.images[i],cmap=plt.cm.bone)
plt.show()
# storing data and targets in x and y respectively
x=oliv.data
y=oliv.target
# applying PCA
pca=PCA()
pca.fit(x)
# the default value of n_components is min(n_samples, n_features)
# pca.components_ holds the eigenvectors (principal directions) of the data
pca.components_.shape
# finding optimal value of k for variance=0.95
k=0
total=sum(pca.explained_variance_)
current_sum=0
while(current_sum/total<0.95):
current_sum+=pca.explained_variance_[k]
k=k+1
k
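# (alternative sketch, added) the same k can be read off the cumulative explained
# variance ratio; np.searchsorted returns the first index that reaches 0.95
import numpy as np
k_alt=int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_),0.95))+1
k_alt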
# now we apply PCA with the number of dimensions reduced to k
pca=PCA(n_components=k,whiten=True)
# we transform the data according to the new PCA fit
transformed_data=pca.fit_transform(x)
transformed_data.shape
# now we apply inverse_transform to map the reduced data back to the original feature space
x_approx=pca.inverse_transform(transformed_data)
x_approx.shape
# now reshaping to get back x_approx in size of images
x_approx_images=x_approx.reshape((400,64,64))
# now we again plot out x_approx images
# remarkable thing is that we are still able to plot the original images without significant loss of data
fig=plt.figure(figsize=(8,8))
for i in range(64):
ax=fig.add_subplot(8,8,i+1)
ax.imshow(x_approx_images[i],cmap=plt.cm.bone)
plt.show()<jupyter_output><empty_output><jupyter_text>NOW we will see eigen vectors and try to plot eigen faces<jupyter_code># storing eigen vectors from pca.components_
eigenv_=pca.components_
# plotting the PCA components; they resemble faces, which is why they are called eigenfaces
# reshape the k retained components (123 here) into 64x64 images
eigenfaces=eigenv_.reshape((k,64,64))
eigenfaces.shape
# plotting the eigenfaces
fig=plt.figure(figsize=(8,8))
for i in range(64):
ax=fig.add_subplot(8,8,i+1)
ax.imshow(eigenfaces[i],cmap=plt.cm.bone)
plt.show()<jupyter_output><empty_output><jupyter_text># Now we ll see how classifiers predicts the 40 classes in train data<jupyter_code># two classifiers imported
from sklearn.ensemble import (RandomForestClassifier,GradientBoostingClassifier)
# import train_test_split for splitting the data
from sklearn.model_selection import train_test_split
# for confusion matrix
from sklearn.metrics import confusion_matrix
# for classification report
from sklearn.metrics import classification_report
# classifier objects
clf1 = GradientBoostingClassifier()
clf2 = RandomForestClassifier()
# using train-test-split
x_train, x_test, y_train, y_test = train_test_split(transformed_data, y, test_size=0.25, random_state=0)
# fitting the data on both the classifiers
clf1.fit(x_train,y_train)
clf2.fit(x_train,y_train)
y_test.shape
# storing the predictions of both the classifiers
y_predict1=clf1.predict(x_test)
y_predict2=clf2.predict(x_test)
# printing confusion matrix for clf1
confusion_matrix(y_test,y_predict1)
# printing confusion matrix for clf2
confusion_matrix(y_test,y_predict2)
# printing classification report for clf1
print(classification_report(y_test,y_predict1))
# printing classification report for clf2
print(classification_report(y_test,y_predict2))
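# (equivalent check, added) sklearn's accuracy_score reports the same fraction of
# correct predictions as the manual comparison below
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test,y_predict1),accuracy_score(y_test,y_predict2))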
# accuracy of clf1 (fraction of correct predictions)
y1=(y_test==y_predict1)
print(y1.sum()/len(y1))
# accuracy of clf2 (fraction of correct predictions)
y2=(y_test==y_predict2)
print(y2.sum()/len(y2))<jupyter_output>0.5
<jupyter_start><jupyter_text># Machine Learning Engineer Nanodegree
## Unsupervised Learning
## Project: Creating Customer SegmentsWelcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.## Getting Started
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in *monetary units*) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Wholesale+customers). For the purposes of this project, the features `'Channel'` and `'Region'` will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.<jupyter_code># Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"<jupyter_output>Wholesale customers dataset has 440 samples with 6 features each.
<jupyter_text>## Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: **'Fresh'**, **'Milk'**, **'Grocery'**, **'Frozen'**, **'Detergents_Paper'**, and **'Delicatessen'**. Consider what each category represents in terms of products you could purchase.<jupyter_code># Display a description of the dataset
display(data.describe())<jupyter_output><empty_output><jupyter_text>### Implementation: Selecting Samples
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add **three** indices of your choice to the `indices` list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.<jupyter_code># TODO: Select three indices of your choice you wish to sample from the dataset
indices = [11,12,34]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)<jupyter_output>Chosen samples of wholesale customers dataset:
<jupyter_text>### Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
*What kind of establishment (customer) could each of the three samples you've chosen represent?*
**Hint:** Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying *"McDonalds"* when describing a sample customer as a restaurant.**Answer:**
Sample 0: Fish Market (Fresh is very close to the mean and Grocery exceeds 75% of customers. The other features are all in the bottom 25%, except for Frozen, which is just under 50% of customers.)
Sample 1: Organic Supermarket (Fresh/Milk/Grocery/Delicatessen all exceed 75% of customers, Detergents_Paper is just below 75%, all the while Frozen is in the bottom 25%.)
Sample 2: Diner (Milk/Grocery/Delicatessen are barely above the 25% mark but everything else is below 25%. A smaller customer, but it is clear enough which features the customer buys more of.) ### Implementation: Feature Relevance
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
- Assign `new_data` a copy of the data by removing a feature of your choice using the `DataFrame.drop` function.
- Use `sklearn.cross_validation.train_test_split` to split the dataset into training and testing sets.
- Use the removed feature as your target label. Set a `test_size` of `0.25` and set a `random_state`.
- Import a decision tree regressor, set a `random_state`, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's `score` function.<jupyter_code>from sklearn import cross_validation
from sklearn.tree import DecisionTreeRegressor
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.copy()
X_all = new_data.drop(['Grocery'], axis = 1, inplace = False)
y_all = data.copy()[['Grocery']]
# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X_all, y_all, test_size=.25, random_state=1)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state=1).fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
print score<jupyter_output>0.795768311576
<jupyter_text>### Question 2
*Which feature did you attempt to predict? What was the reported prediction score? Is this feature is necessary for identifying customers' spending habits?*
**Hint:** The coefficient of determination, `R^2`, is scored between 0 and 1, with 1 being a perfect fit. A negative `R^2` implies the model fails to fit the data.**Answer:**
I attempted to predict Grocery and got a prediction score of ~.7958. This means that the other features were able to reconstruct Grocery information fairly well and is probably not going to be nessasary in identifying customers' spending habits. If the prediction score was very low or negative, the features would have shown to not be able to reconstruct the removed feature and that feature would have been very relevant. ### Visualize Feature Distributions
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.<jupyter_code># Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');<jupyter_output><empty_output><jupyter_text>### Question 3
*Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?*
**Hint:** Is the data normally distributed? Where do most of the data points lie? **Answer:**
The pairs Milk/Grocery and Milk/Detergents_Paper show a fairly strong correlation with each other. This confirms my suspicion that Milk has very low relevancy, since it correlates very well with other features, meaning it isn't really needed in the data. The distributions of all these features have a positive skew, with a very high amount of data close to zero and a very steep drop-off.## Data Preprocessing
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.### Implementation: Feature Scaling
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most [often appropriate](http://econbrowser.com/archives/2014/02/use-of-logarithms-in-economics) to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a [Box-Cox test](http://scipy.github.io/devdocs/generated/scipy.stats.boxcox.html), which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
In the code block below, you will need to implement the following:
- Assign a copy of the data to `log_data` after applying logarithmic scaling. Use the `np.log` function for this.
- Assign a copy of the sample data to `log_samples` after applying logarithmic scaling. Again, use `np.log`.<jupyter_code># TODO: Scale the data using the natural logarithm
log_data = np.log(data.copy())
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples.copy())
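# (Optional added sketch, assuming scipy is available.) A Box-Cox transform is an
# alternative to the natural log mentioned above; it estimates the power transform
# that best reduces skew. Shown for a single feature only; the variable names here
# are introduced just for this illustration.
from scipy import stats as scipy_stats
boxcox_fresh, fresh_lambda = scipy_stats.boxcox(data['Fresh'])
print(fresh_lambda)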
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');<jupyter_output><empty_output><jupyter_text>### Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.<jupyter_code># Display the log-transformed sample data
display(log_samples)<jupyter_output><empty_output><jupyter_text>### Implementation: Outlier Detection
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use [Tukey's Method for identfying outliers](http://datapigtechnologies.com/blog/index.php/highlighting-outliers-in-your-data-with-the-tukey-method/): An *outlier step* is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
In the code block below, you will need to implement the following:
- Assign the value of the 25th percentile for the given feature to `Q1`. Use `np.percentile` for this.
- Assign the value of the 75th percentile for the given feature to `Q3`. Again, use `np.percentile`.
- Assign the calculation of an outlier step for the given feature to `step`.
- Optionally remove data points from the dataset by adding indices to the `outliers` list.
**NOTE:** If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable `good_data`.<jupyter_code># For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[[feature]], 25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[[feature]], 75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = (Q3 - Q1)*1.5
# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
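# (Added sketch, not part of the required implementation.) Count how often each data
# point is flagged by Tukey's rule across all features; points flagged for more than
# one feature are the candidates discussed in Question 4.
from collections import Counter
flag_counts = Counter()
for feature in log_data.keys():
    Q1 = np.percentile(log_data[feature], 25)
    Q3 = np.percentile(log_data[feature], 75)
    step = 1.5 * (Q3 - Q1)
    flagged = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))].index
    flag_counts.update(flagged)
print([idx for idx, n in flag_counts.items() if n > 1])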
# OPTIONAL: Select the indices for data points you wish to remove
outliers = [65,66,75,128,154]
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)<jupyter_output>Data points considered outliers for the feature 'Fresh':
<jupyter_text>### Question 4
*Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the `outliers` list to be removed, explain why.* **Answer:**
There are five datapoints I noticed that were in fact outliers for at least two features(65,66,75,128,154). I chose to remove these from the dataset because for a customer to be an outlier on not one but two features makes them far from being the average customer and is likely some type of specialty business. Perhaps it is advantageous to remove more outliers than just these but I have decided to take a more conservative approach to see how it goes. ## Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.### Implementation: PCA
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the `good_data` to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the *explained variance ratio* of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
- Import `sklearn.decomposition.PCA` and assign the results of fitting PCA in six dimensions with `good_data` to `pca`.
- Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`.<jupyter_code>from sklearn.decomposition import PCA
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=6).fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
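# (Added check.) The cumulative explained variance ratio backs up the totals quoted
# in the answer below for the first two and first four components.
print(pca.explained_variance_ratio_.cumsum())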
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)<jupyter_output><empty_output><jupyter_text>### Question 5
*How much variance in the data is explained* ***in total*** *by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.*
**Hint:** A positive increase in a specific dimension corresponds with an *increase* of the *positive-weighted* features and a *decrease* of the *negative-weighted* features. The rate of increase or decrease is based on the indivdual feature weights.**Answer:**
The first and second principal components explain a total variance of 70.68%. Using the first four principal components, the total variance explained is 93.11%. The specifics of what each dimension represents in terms of customer spending is explained below:
Dimension 1
For the dimension with the most variance, Milk, Grocery, and Detergents_Paper all have a positive correlation with each other, with Detergents_Paper weighted the most. There is, however, a small negative correlation between Fresh and Frozen. This pattern could represent spending behavior associated with general stores.
Dimension 2
The second principal component has a positive correlation between all features but Fresh, Frozen, and Delicatessen features weighted significantly more than the others. This pattern could represent a butchery or meat/fish market.
Dimension 3
The third principal component has a minor correlation between Fresh and Detergents_Paper with Fresh weighted significantly higher and a negative correlation with Frozen and Delicatessen which Delicatessen is weighted significantly lower. This pattern could represent value food marts.
Dimension 4
For the fourth principal component, there is a minor positive correlation between Frozen and Detergents_Paper, with Frozen being weighted significantly more, and a minor negative correlation between Fresh and Delicatessen, with Delicatessen being weighted significantly lower. This pattern could represent a pharmacy store.
### Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.<jupyter_code># Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))<jupyter_output><empty_output><jupyter_text>### Implementation: Dimensionality Reduction
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the *cumulative explained variance ratio* is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
In the code block below, you will need to implement the following:
- Assign the results of fitting PCA in two dimensions with `good_data` to `pca`.
- Apply a PCA transformation of `good_data` using `pca.transform`, and assign the results to `reduced_data`.
- Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`.<jupyter_code># TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2).fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])<jupyter_output><empty_output><jupyter_text>### Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.<jupyter_code># Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))<jupyter_output><empty_output><jupyter_text>## Visualizing a Biplot
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case `Dimension 1` and `Dimension 2`). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.<jupyter_code># Create a biplot
vs.biplot(good_data, reduced_data, pca)<jupyter_output><empty_output><jupyter_text>### Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on `'Milk'`, `'Grocery'` and `'Detergents_Paper'`, but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?## Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. ### Question 6
*What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?***Answer:**
K-Means clustering has a lot of advantages in that it is very fast, scalable and works in most cases. The drawbacks are that K-Means can be very unpredictable and can output completely different results depending on where the cluster centers started from. Gaussian Mixture Models (GMM) have the advantage of doing a soft assignment of clusters by giving each point a probability of which cluster it belongs to. This can be very effective when it comes to the outermost points of a cluster that don't have a clear distinction from another cluster. However, GMMs are more complex in that they require more parameters to work. Based on the biplot generated above, I'm definitely going to be using GMM since it isn't really clear where many data points might be assigned. ### Implementation: Creating Clusters
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known *a priori*, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's *silhouette coefficient*. The [silhouette coefficient](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the *mean* silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
- Fit a clustering algorithm to the `reduced_data` and assign it to `clusterer`.
- Predict the cluster for each data point in `reduced_data` using `clusterer.predict` and assign them to `preds`.
- Find the cluster centers using the algorithm's respective attribute and assign them to `centers`.
- Predict the cluster for each sample data point in `pca_samples` and assign them `sample_preds`.
- Import `sklearn.metrics.silhouette_score` and calculate the silhouette score of `reduced_data` against `preds`.
- Assign the silhouette score to `score` and print the result.<jupyter_code>from sklearn.metrics import silhouette_score
from sklearn.mixture import GMM
for x in range(2,7):
# TODO: Apply your clustering algorithm of choice to the reduced data
clusterer = GMM(n_components=x, covariance_type='tied').fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.means_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, preds)
print str(x) + ' clusters: ' + str(score)
# Repeat a single loop but with a set number of clusters
clusterer = GMM(n_components=2, covariance_type='tied').fit(reduced_data)
preds = clusterer.predict(reduced_data)
centers = clusterer.means_
sample_preds = clusterer.predict(pca_samples)
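# (Added note, not part of the original submission.) In newer scikit-learn versions the
# GMM class is deprecated in favour of GaussianMixture; an equivalent fit would be:
# from sklearn.mixture import GaussianMixture
# clusterer = GaussianMixture(n_components=2, covariance_type='tied').fit(reduced_data)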
score = silhouette_score(reduced_data, preds)<jupyter_output>2 clusters: 0.422569975755
3 clusters: 0.377326422205
4 clusters: 0.318670490641
5 clusters: 0.296531713807
6 clusters: 0.320821042384
<jupyter_text>### Question 7
*Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?* **Answer:**
Based on the silhouette score, the number of clusters which was most successful was 2 clusters with a covariance_type of tied. I tried other covariance types but tied produced the best result.### Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters. <jupyter_code># Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)<jupyter_output><empty_output><jupyter_text>### Implementation: Data Recovery
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to *the average customer of that segment*. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
- Apply the inverse transform to `centers` using `pca.inverse_transform` and assign the new centers to `log_centers`.
- Apply the inverse function of `np.log` to `log_centers` using `np.exp` and assign the true centers to `true_centers`.
<jupyter_code># TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)<jupyter_output><empty_output><jupyter_text>### Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. *What set of establishments could each of the customer segments represent?*
**Hint:** A customer who is assigned to `'Cluster X'` should best identify with the establishments represented by the feature set of `'Segment X'`.**Answer:**
Cluster 0: Customers are likely to be associated with restaurants.
Fresh/Frozen: above the median but below the top 25% of all customer spending.
Milk/Grocery/Detergents_Paper/Delicatessen: above the bottom 25% of all customer spending but less than the median.
Cluster 1: Customers are likely to be associated with a general Supermarket.
Milk/Grocery/Detergents_Paper: in the top 25% of all customer spending.
Fresh/Frozen/Delicatessen: above the bottom 25% of all customer spending but less than the median. ### Question 9
*For each sample point, which customer segment from* ***Question 8*** *best represents it? Are the predictions for each sample point consistent with this?*
Run the code block below to find which cluster each sample point is predicted to be.<jupyter_code># Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred<jupyter_output>Sample point 0 predicted to be in Cluster 0
Sample point 1 predicted to be in Cluster 1
Sample point 2 predicted to be in Cluster 0
<jupyter_text>**Answer:**
The predictions are consistent with the segment descriptions from Question 8, based on each sample's features.
Sample 0: In the correct cluster based on features. It spends considerably more on Fresh than the Cluster 0 average, but its remaining features still match the Cluster 0 profile.
Sample 1: In the correct cluster based on features. Matches Cluster 1 pretty spot on.
Sample 2: In the correct cluster based on features. Matches Cluster 0 pretty spot on.## ConclusionIn this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the ***customer segments***, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which *segment* that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the ***customer segments*** to a hidden variable present in the data, to see whether the clustering identified certain relationships.### Question 10
Companies will often run [A/B tests](https://en.wikipedia.org/wiki/A/B_testing) when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. *How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?*
**Hint:** Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?**Answer:**
For the A/B test, the best results would likely be obtained by testing one cluster at a time. Each cluster could be split into two groups as evenly as possible: group A keeps the current 5-days-a-week delivery service, while group B receives the 3-days-a-week service. The reason for testing within a single cluster is that A/B tests are most effective when the control and experiment groups are as similar as possible. Ideally, we would observe the same cluster under both delivery schedules, but that is impossible to do at the same time, so splitting each cluster separately is the next best thing we can do with the information provided. By running this comparison within each cluster individually, we can see how each segment of customers reacts to the 3-days-a-week service as opposed to the normal 5 days a week, and by comparing the results we can see which cluster reacts more positively to the change. ### Question 11
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a ***customer segment*** it best identifies with (depending on the clustering algorithm applied), we can consider *'customer segment'* as an **engineered feature** for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a ***customer segment*** to determine the most appropriate delivery service.
*How can the wholesale distributor label the new customers using only their estimated product spending and the* ***customer segment*** *data?*
**Hint:** A supervised learner could be used to train on the original customers. What would be the target variable?**Answer:**
New customers can be labeled with "Cluster 0" or "Cluster 1" based on their estimated product spending. Using the original customers' spending as features and their cluster assignments as the target variable, we can train a supervised learning algorithm and then predict which cluster each new customer belongs to. For example, with logistic regression, each spending feature receives a positive or negative weight in accordance with the learned structure: a negative output would result in Cluster 0 and a positive output in Cluster 1. ### Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the `'Channel'` and `'Region'` features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the `'Channel'` feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either `'HoReCa'` (Hotel/Restaurant/Cafe) or `'Retail'` the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.<jupyter_code># Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)<jupyter_output><empty_output>
|
no_license
|
/Project 3- John Rangel/customer_segments/.ipynb_checkpoints/customer_segments-checkpoint.ipynb
|
rtsrangel/Udacity_Course_Projects
| 18 |
<jupyter_start><jupyter_text>**Data Cleaning Phase****Part 1 Load the Tables into CSV Format**<jupyter_code>#States table
print(states.shape[0], "rows")
print(states.shape[1], "columns")
states.head()
#Abridged counties table
print(abridged_counties.shape[0], "rows")
print(abridged_counties.shape[1], "columns")
abridged_counties.head()
#Time Series Confirmed Cases US
print(time_series_confirmed_us.shape[0], "rows")
print(time_series_confirmed_us.shape[1], "columns")
time_series_confirmed_us.head()
#Time Series Confirmed Deaths US
print(time_series_deaths_us.shape[0], "rows")
print(time_series_deaths_us.shape[1], "columns")
time_series_deaths_us.head()<jupyter_output>3255 rows
100 columns
<jupyter_text>**Part 2: Understand Null Values in Tables and Impute them Accordingly**<jupyter_code>#Which columns have the most null values in the states table?
states_null = states.isnull().sum().sort_values(ascending = False)
states_null
#Which columns have the most null values in the abridged counties table?
abridged_counties_null = abridged_counties.isnull().sum().sort_values(ascending = False).head(20)
abridged_counties_null
#Which columns have the most null values in the Time Series Confirmed Cases US table?
time_series_confirmed_us_null = time_series_confirmed_us.isnull().sum().sort_values(ascending = False)
time_series_confirmed_us_null
#Let's explore the null values in the "Admin2" column
#Null values corresponding to first few rows
time_series_confirmed_us['Admin2'].head(20)
time_series_confirmed_us.loc[0:10, :]
#The 'Admin2' column 'NaN' values corresspond to American territories.
#This is because they don't have counties like the states of the U.S. do.
#Thus, we can simply impute the 'NaN' values to 'No County'.
#Impute the NaN values with 'No County'
time_series_confirmed_us.iloc[0:5, 5] = 'No County'
#'No County' values have been inserted
time_series_confirmed_us.loc[0:10, :]
#Let's explore the null values in the "FIPS" column
time_series_confirmed_us[time_series_confirmed_us['FIPS'].isnull()]
#Some additional exploration may be needed to understand the values within the FIPS column
#It appears to be some kind of quantitative continuous ID.
#Additionally, there appears to be only 4 null values in a series of 3254 elements
#As a result, it is most likely safe to impute these values as they are most likely
#unlikely to affect predictions very much.
#Thus, it is safe to assume that it is appropriate to impute these 'NaN' values to 0
time_series_confirmed_us['FIPS']
#Impute the 'NaN' values in the FIPS column to 0
time_series_confirmed_us.loc[time_series_confirmed_us['FIPS'].isnull(), 'FIPS'] = 0
#Notice the 'NaN' in rows 3253 and 3254 respectively have been changed to 0.0 respectively.
time_series_confirmed_us['FIPS']
#Which columns have the most null values in the Time Series Confirmed Deaths US table?
#Based on how the series appears, using the same imputation stategy used for the previous
#table is appropriate.
#This table looks similar to the previously data cleaned table
time_series_deaths_us_null = time_series_deaths_us.isnull().sum().sort_values(ascending = False)
time_series_deaths_us_null
#Impute the NaN values with 'No County'
time_series_deaths_us.iloc[0:5, 5] = 'No County'
#'No County' values have been inserted
time_series_deaths_us.loc[0:10, :]
#Let's explore the null values in the "FIPS" column
time_series_deaths_us[time_series_deaths_us['FIPS'].isnull()]
#Some additional exploration may be needed to understand the values within the FIPS column
#Impute the 'NaN' values in the FIPS column to 0
time_series_deaths_us.loc[time_series_deaths_us['FIPS'].isnull(), 'FIPS'] = 0
#Notice the 'NaN' in rows 3253 and 3254 respectively have been changed to 0.0 respectively.
time_series_confirmed_us['FIPS']<jupyter_output><empty_output><jupyter_text>**Part 2: Explore Columns with Majority Null Values and Decide How to Impute Them**
This part consists of writing imputation functions to make data cleaning easier.<jupyter_code>#Which columns have the most null values in the states table?
states_null = states.isnull().sum().sort_values(ascending = False)
states_null
#There appears to be the most null values in the
#'Hosptitalization_Rate' and 'People_Hospitalized' columns
#Let's explore them
#Null values in 'Hospitalization_Rate' column
states[states['Hospitalization_Rate'].isnull()]
#Null values in 'People_Hospitalized' column
states[states['People_Hospitalized'].isnull()]
#Let's define a few data cleaning functions to make this process easier
#Let's write a function to get the datatypes of each column
#FUNC getColumnDatatypes takes in a TABLE and RETURNS a Dictionary representing the mappings from
#columns of table to the datatype of those columns respectively.
def getColumnDatatypes(table):
column_datatype_dict = {}
columns_list = list(table.columns)
table = table[np.logical_not(table.isnull())]
for column in columns_list:
column_datatype_dict[column] = type(table[column].iloc[0])
return column_datatype_dict
#Let's write a function to replace the null values of each column with either the mean or the median
#(if skewed) of the non-null values of each column respectively.
#FUNC imputeQuantitativeNullValues takes in a TABLE and type_dictionary RETURNS a
#the TABLE with appropriately imputed values
def imputeQuantitativeNullValues(table, type_dictionary):
quantitative_columns_list = list(type_dictionary.keys())
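    # NOTE: 673 acts as a sentinel fill value below, so the rows that were originally null
    # can be excluded again when the column's mean/median is computed.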
table_non_null = table.fillna(673)
mean = 0
median = 0
for column in quantitative_columns_list:
if type_dictionary[column] == np.float64:
if np.mean(table_non_null[column]) != np.median(table_non_null[column]):
table_wout_null = table_non_null[table_non_null[column] != 673]
median = np.median(table_wout_null[column])
table.loc[table[column].isnull(), column] = median
elif np.mean(table_non_null[column]) == np.median(table_non_null[column]):
table_wout_null = table_non_null[table_non_null[column] != 673]
mean = np.mean(table_wout_null[column])
table.loc[table[column].isnull(), column] = mean
return table
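#(Added illustration, not part of the original analysis) Quick sanity check of the imputation
#helper above on a tiny made-up DataFrame: column 'a' is skewed, so its missing value should be
#filled with the median of the observed values (2.0), while the non-numeric column 'b' is untouched.
toy_impute_df = pd.DataFrame({'a': [1.0, 2.0, 100.0, np.nan],
                              'b': ['w', 'x', 'y', 'z']})
toy_types = getColumnDatatypes(toy_impute_df[toy_impute_df.isnull()])
print(imputeQuantitativeNullValues(toy_impute_df.copy(), toy_types))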
#Let's write a function to standardize the quantitative continuous variables
#FUNC standardizeQuantitativeVariables takes in a TABLE and type_dictionary and
#RETURNS a TABLE with appropriately standardized quantitative values
def standardizeQuantitativeVariables(table, type_dictionary):
quantitative_columns_list = list(type_dictionary.keys())
quantitative_columns = []
non_quantitative_columns = []
for column in quantitative_columns_list:
if type_dictionary[column] != np.float64:
non_quantitative_columns.append(column)
else:
quantitative_columns.append(column)
quant_table = table[quantitative_columns]
normalized_table = (quant_table-np.mean(quant_table))/np.std(quant_table)
table = table[non_quantitative_columns].join(normalized_table)
return table
#Get the column datatypes of the states table
states_type_dictionary = getColumnDatatypes(states[states.isnull()])
states_type_dictionary
#All the are of the float datatype, so using means/medians as an imputer is appropriate
#Perform imputation on states table for quantitative continuous values in the states table
states_transformed = imputeQuantitativeNullValues(states, states_type_dictionary)
states_transformed
#Standardize the quantitative continuous variables in the states table
states_transformed_standardized = standardizeQuantitativeVariables(states_transformed, states_type_dictionary)
states_transformed_standardized
#The 'LastUpdate' column doesn't appear to be particularly useful in regards to summary information regarding
#individual states and territories of interest. So let's remove it from states_transformed and
#states_transformed_standardized
states_transformed = states_transformed.drop(columns = ['Last_Update'])
states_transformed_standardized = states_transformed_standardized.drop(columns = ['Last_Update'])
#Which columns have the most null values in the abridged counties table?
abridged_counties_null = abridged_counties.isnull().sum().sort_values(ascending = False).head(20)
#Get the column datatypes of the abridged_counties table
abridged_counties_type_dictionary = getColumnDatatypes(abridged_counties[abridged_counties.isnull()])
abridged_counties_type_dictionary
#All the are of the float datatype, so using means/medians as an imputer is appropriate
#Perform imputation on abridged_counties table for the quantitative continuous values
abridged_counties_transformed = imputeQuantitativeNullValues(abridged_counties, abridged_counties_type_dictionary)
abridged_counties_transformed.head()
#Standardize the quantitative continuous variables in the abridged_counties table
abridged_counties_standardized = \
standardizeQuantitativeVariables(abridged_counties_transformed, abridged_counties_type_dictionary)
abridged_counties_standardized
#There appears to be some null values in the 'State' column.
abridged_counties_standardized.isnull().sum().sort_values(ascending = False)
#From above, the 'federal guidelines' and 'foreign travel ban' columns had the same value of
# 737500.0 and 737495.0 respectively, so they don't appears to be useful feature columns.
#Thus, we will drop these feature columns from both the transformed and the standardized
#abridged_counties tables:
abridged_counties_transformed = abridged_counties_transformed.drop( \
columns = ['federal guidelines', 'foreign travel ban'])
abridged_counties_standardized = abridged_counties_standardized.drop( \
columns = ['federal guidelines', 'foreign travel ban'])<jupyter_output><empty_output><jupyter_text>In conclusion, in part 2, I created two tables each for the states table and abridged_counties respectively. The two kinds of tables I created were a transformed table(only null values imputed), and a standardized table(null values imputed as well as quantitative continuous features transformed).
The simply transformed table will be used for data visualizations in the Exploratory Data Analysis phase to preserve the actual values for better interpretability, while the standardized table will be used for modeling in the Model Development Phase.**Part 3: Use One-Hot Encoding for Appropriate Columns with Categorical Variables**Here, I will be modifying the standardized table by getting the dummies of cateogorical variables, which will be useful as inputs in the model development phase.
However, I will not be modifying the simply transformed table, for ease of creating data visualizations in the Exploratory Data Analysis Phase.
I will not be creating the transformed versions of the time_series_confirmed_us table and the time_series_deaths_us table, as it is already high dimensional with informative time series data on confirmed cases and deaths over time respectively.<jupyter_code>#Let's define a function that can perform one_hot encoding for any table with cateogorical variables passed in
#FUNC oneHotEncoding takes in a TABLE and RETURNS the table with its categorical
#columns converted into one-hot (dummy) indicator columns via pd.get_dummies.
def oneHotEncoding(table):
return pd.get_dummies(table)
#Get the dummies of the states_transformed_standardized table
states_transformed_standardized = oneHotEncoding(states_transformed_standardized)
states_transformed_standardized
abridged_counties_standardized.head()
abridged_counties_not_dummies = abridged_counties_standardized[['countyFIPS',
'StateName',
'State',
'CensusRegionName',
'CensusDivisionName']]
abridged_counties_dummies = oneHotEncoding(abridged_counties_standardized.iloc[:, 5:])
abridged_counties_dummies['CountyName'] = abridged_counties_standardized['CountyName']
abridged_counties_standardized = abridged_counties_not_dummies.join(abridged_counties_dummies)
abridged_counties_standardized.head()<jupyter_output><empty_output><jupyter_text>Let's first examine how the states_transformed, abridged_counties_transformed, time_series_confirmed_us, and time_series_deaths_us tables look after the data cleaning phase.<jupyter_code>#states_transformed table
print(states_transformed.shape)
states_transformed.head()
#abridged_counties_transformed table
print(abridged_counties_transformed.shape)
abridged_counties_transformed.head()
#time_series_confirmed_us table
print(time_series_confirmed_us.shape)
time_series_confirmed_us.head()
#time_series_deaths_us table
print(time_series_deaths_us.shape)
time_series_deaths_us.head(10)<jupyter_output>(3255, 100)
<jupyter_text>**Part 4: Join Tables together to create the appropriate visualizations**
<jupyter_code>#The first step is to select the time series data(the dates columns for both the time_series_confirmed_us, and time_series_deaths_us tables)
confirmed_cases_dates = time_series_confirmed_us.iloc[:, 12:]
confirmed_cases_dates['Province_State'] = time_series_confirmed_us['Province_State']
deaths_dates = time_series_deaths_us.iloc[:, 12:]
deaths_dates['Province_State'] = time_series_deaths_us['Province_State']
deaths_dates['Population'] = time_series_deaths_us['Population']
#Now we want to group by state (by the 'Province State') column, and aggregate by sum, to get the number of confirmed cases, and number of
#deaths respectively
grouped_confirmed_cases_dates = confirmed_cases_dates.groupby('Province_State', as_index = False).agg(sum)
grouped_deaths_cases_dates = deaths_dates.groupby('Province_State', as_index = False).agg(sum)
#Let's visualize the confirmed cases grouped table
grouped_confirmed_cases_dates.head()
#Let's visualize the deaths grouped table
grouped_deaths_cases_dates.head()
#Now, since we want to only consider the U.S. states, we will inner join the states_transformed_standardized table, the
#grouped_confirmed_cases_deaths table, and the grouped_deaths_cases_dates table.
states_transformed_standardized['Province_State'] = states_transformed['Province_State']
merged_states_standardized = pd.merge( \
left = states_transformed_standardized, right = grouped_confirmed_cases_dates, how = 'inner', on = 'Province_State')
merged_states_standardized = pd.merge( \
left = merged_states_standardized, right = grouped_deaths_cases_dates, how = 'inner', on = 'Province_State')
#We perform the same merging operations as above, only difference is we will inner join the two time series tables with states_transformed
#which is the table used for data visualizations below.
merged_states_transformed = pd.merge( \
left = states_transformed, right = grouped_confirmed_cases_dates, how = 'inner', on = 'Province_State')
merged_states_transformed = pd.merge( \
left = merged_states_transformed, right = grouped_deaths_cases_dates, how = 'inner', on = 'Province_State')
#Adding back the states column in order respectively
merged_states_standardized['Province_State'] = states_transformed['Province_State']
merged_states_transformed['Province_State'] = states_transformed['Province_State']
merged_states_standardized.head()
merged_states_transformed.head()<jupyter_output><empty_output><jupyter_text>**Part 5: Pick Preliminary Features based on Coronavirus Research, and join tables to obtain all information to create data visualizations in the EDA phase.**Specifically, from domain knowledge and Coronavirus research, individuals with certain pre-existing conditions such as heart disease,
diabetes, have had a stroke, or have respiratory issues are more susceptible to catching coronavirus. Additionally, males and females above the age of 60 are usually more frail, and also
more likely to catch the virus due to age induced immunodeficiency.
Furthermore, individuals who are underserved or below the poverty line in any given state may lack proper medical coverage and are thus also more susceptible.
Thus, let's explore the abridged counties table, and aggregate each of these features by state to join them to both the merged tables, using which we will create the visualizations.<jupyter_code>#Let's explore the columns in the abridged_counties dataset, as there are useful features with respect to health, age, and socioeconomic status.
#Thus, let's add those respective features from the abridged_counties table
#First, let's group by the two letter abbreviations for each state in the table, and average the features for each state
important_features_transformed = abridged_counties_transformed[['DiabetesPercentage',
'HeartDiseaseMortality',
'StrokeMortality',
'Smokers_Percentage',
'RespMortalityRate2014',
'HPSAUnderservedPop',
'#EligibleforMedicare2018',
"MedicareEnrollment,AgedTot2017",
'dem_to_rep_ratio',
'PopMale5-92010',
'PopFmle5-92010',
'PopMale10-142010',
'PopFmle10-142010',
'PopMale65-742010',
'PopFmle65-742010',
'PopMale75-842010',
'PopFmle75-842010']]
important_features_standardized = abridged_counties_standardized[['DiabetesPercentage',
'HeartDiseaseMortality',
'StrokeMortality',
'Smokers_Percentage',
'RespMortalityRate2014',
'HPSAUnderservedPop',
'#EligibleforMedicare2018',
"MedicareEnrollment,AgedTot2017",
'dem_to_rep_ratio',
'PopMale5-92010',
'PopFmle5-92010',
'PopMale10-142010',
'PopFmle10-142010',
'PopMale65-742010',
'PopFmle65-742010',
'PopMale75-842010',
'PopFmle75-842010']]
important_features_transformed['State'] = abridged_counties_transformed['State']
important_features_standardized['State'] = abridged_counties_standardized['State']
grouped_important_features_transformed = important_features_transformed.groupby('State', as_index = False).agg(np.mean)
grouped_important_features_standardized = important_features_standardized.groupby('State', as_index = False).agg(np.mean)
grouped_important_features_transformed.head()
print(grouped_important_features_standardized.shape)
grouped_important_features_standardized.head()
print(grouped_important_features_transformed.shape)
grouped_important_features_transformed.head()
#Sort the 'Province_State' column alphabetically so that the merged_states_transformed and merged_states_standardized transformed tables can be joined with the
# grouped_important_features_transformed and grouped_important_features_standardized tables respectively
merged_states_transformed = merged_states_transformed.sort_values('Province_State', ascending = True)
merged_states_standardized = merged_states_standardized.sort_values('Province_State', ascending = True)
merged_states_transformed.head()
#The next step is to join these grouped_important_features_standardized table and grouped_important_features_transformed table with
#the merged_states_standardized table and merged_states_transformed table respectively. Let's use an inner join
merged_states_standardized = pd.merge(merged_states_standardized, grouped_important_features_standardized, how = 'inner', left_on = 'Province_State', right_on = 'State')
merged_states_transformed = pd.merge(merged_states_transformed, grouped_important_features_transformed, how = 'inner', left_on = 'Province_State', right_on = 'State')
#Visualize this table
merged_states_standardized.head()
#Visualize this table
merged_states_transformed.head()
#There appears to be a large outlier, which is likely to affect the root mean squared error of the linear regression model. Since it is only
#one datapoint, it is in the best interest to replace its Mortality_Rate with the average mortality rate.
merged_states_standardized.loc[merged_states_standardized['Mortality_Rate'] == merged_states_standardized['Mortality_Rate'].max(), 'Mortality_Rate'] = \
np.mean(merged_states_standardized['Mortality_Rate'])
<jupyter_output><empty_output><jupyter_text>**Exploratory Data Analysis Phase**
**Understand and design visualizations to understand how and if certain features are associated with the response, which is the mortality rate.**
**In other words, based on EDA, which features are likely to be good predictors for the quantitative continuous response variable in a Linear Regression Model, Regression Tree Model, and Random Forest?****Visualization 1**: Scatterplot displaying the association between Underserved Population Proportion(explanatory variable) and Mortality Rate (response variable)
Visually, it can be observed that for the given data, as the underserved population increases, the mortality rate increases as well. Although the association may appear to be fairly weak, it is important to note that for underserved population values larger than around 5,000, not a single state had a mortality rate lower than 2 deaths per thousand.
While we cannot conclude causation between these two variables, they do appear to be associated, so the underserved population is likely to be a good predictor of the mortality rate. <jupyter_code>sns.scatterplot(merged_states_transformed['HPSAUnderservedPop'], merged_states_transformed['Mortality_Rate'])
plt.title('Scatterplot displaying the association between Underserved Population and Mortality Rate(Deaths per 1000 individuals)')
plt.xlabel('Underserved Population')
plt.xticks(rotation=45)
plt.ylabel('Mortality Rate(Deaths/1000 Individuals)')
sns.set(rc={'figure.figsize':(15,8.27)})<jupyter_output><empty_output><jupyter_text>**Visualization 2**: Histogram displaying the mortality rates of all the U.S. states and U.S. territories on April 18th 2020.
The histogram appears to be bimodal and skewed to the right (towards higher mortality rates). The histogram tells me that there is likely an outlier with a mortality rate of around 15 deaths/1000 people. I would assume that this corresponds to a state/territory with an especially large population density.
However, most of the states/territories have mortality rates between around 1.5 deaths/1000 people and 5 deaths/1000 people.
<jupyter_code># People_Tested People_Hospitalized
sns.distplot(merged_states_transformed['Mortality_Rate'], kde = True, hist = True, rug = True, bins = np.arange(0, 15, 0.5))
plt.title('Histogram displaying the distribution of mortality rates on April 18th')
plt.xlabel('Mortailty Rate(Deaths/1000 people)')
plt.xticks(rotation=45)
plt.ylabel('Percent Per Mortailty Rate')
sns.set(rc={'figure.figsize':(15,8.27)})<jupyter_output><empty_output><jupyter_text>**Visualization 3**: Lineplot displaying the cumulative distribution function of the aggregated number of deaths over time.
Around day 60 is when the number of deaths begins to grow exponentially, sometime around early March. This visualization indicates that we are still currently in the exponential growth portion of the logistic carrying capacity curve of the number of deaths due to cornonavirus. The deaths curve has not leveled off as of 4/18/2020.<jupyter_code>#Create a column called coronavirus deaths over time, and create a line plot
virus_deaths_over_time = merged_states_transformed.iloc[:, 104:192].sum()
virus_deaths_over_time = virus_deaths_over_time.reset_index().reset_index()
virus_deaths_over_time
sns.lineplot(x = virus_deaths_over_time['level_0'], y = virus_deaths_over_time.iloc[:, 2])
plt.title('Lineplot displaying the exponential growth of coronavirus deaths over time')
plt.xlabel('Numbered Days')
plt.xticks(rotation=45)
plt.ylabel('Aggregated Number of Deaths')<jupyter_output><empty_output><jupyter_text>**Visualization 4**: Scatterplot displaying the association between Hopspitalization Rate (explanatory variable) and Mortality Rate(response variable)
Visually, it can be observed that for the given data that we have, as the Hospitalization Rate increases, the mortality rate increases as well. This makes sense since, the more confirmed cases there are in a given state, the more are hospitalized, and the greater the mortality rate.
While we cannot conclude causation between these two variables, they do appear to be associated, so it is likely that the Undeserved population is likely to be a good predictor of the mortality rate. <jupyter_code>sns.lmplot(x = 'Hospitalization_Rate', y = 'Mortality_Rate', data = merged_states_transformed, height = 10)
plt.title('Scatterplot displaying the association between Hospitalization Rate and Mortality Rate(Deaths per 1000 individuals)')
plt.xlabel('Hospitalization_Rate')
plt.xticks(rotation=45)
plt.xlim(0, 40)
plt.ylabel('Mortality Rate(Deaths/1000 Individuals)')
sns.set(rc={'figure.figsize':(15,8.27)})<jupyter_output><empty_output><jupyter_text>**Visualization 5**: Plot of North America displaying the magnitude of the mortality rate among all States in the U.S.
Visually, it can be observed that the darker the color(black), the higher the mortality rate. This may be contrary to what is expected, as we would expect midwest states with less population density to have lower mortality rates. This is compared to states on the coasts which are much more populous. However, mortality rate is agnostic of population, as larger states have more cases, but smaller less populous states have fewer cases but high mortality rate.
<jupyter_code>#Install the geopandas map library
pip install geopandas
import geopandas
gdf = geopandas.GeoDataFrame(
merged_states_transformed, geometry=geopandas.points_from_xy(merged_states_transformed['Long_'], merged_states_transformed['Lat']))
pip install -U mapclassify
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
# We restrict to North America.
ax = world[world.continent == 'North America'].plot(
color='white', edgecolor='black')
import mapclassify
gdf.plot(ax = ax, column='Mortality_Rate');
sns.set(rc={'figure.figsize':(15,8.27)})
plt.title('Map of United States depicting Mortality Rate on a scale (Darker means higher mortality rate)')<jupyter_output><empty_output><jupyter_text>#Model Development + Feature Engineering Phase**Part 1:** Perform the Train-Test Split, with 60% of data in the training split and 40% of the data in the testing split. This is important as we want to ultimately evaluate the performance of the model on the test data, after all feature engineering, feature selection, cross validation accuracy etc. has been completed.
However, I will not examine the test data until the very end, so as to not bias my model development process in favor of improving model accuracy on the test split.
The merged_states_standardized table will be used in the Model Development Phase<jupyter_code># #Select the features of interest using the feature selection pipeline
def feature_selection_pipeline(table, features, label):
table = table[features]
y = table[label]
X = table.drop(columns = [label])
return X, y
features = ['PopMale5-92010',
'PopFmle5-92010',
'PopMale10-142010',
'PopFmle10-142010',
'PopMale65-742010',
'PopFmle65-742010',
'PopMale75-842010',
'PopFmle75-842010',
'DiabetesPercentage',
'Smokers_Percentage',
'HeartDiseaseMortality',
'StrokeMortality',
'RespMortalityRate2014',
'Hospitalization_Rate',
'People_Tested',
'Mortality_Rate']
X, y = feature_selection_pipeline(merged_states_standardized, features, 'Mortality_Rate')
#Perform the train-test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40, random_state=42)<jupyter_output><empty_output><jupyter_text>**Part 2:** Build the baseline linear regression model, regression tree, and random forest of regression trees models with the set of features chosen, and calculate the MSE for each. Additionally, for each model, perform bootstrap cross validation accuracy to evaluate how the model would perform on unseen data. The reason why bootstrap cross validation accuracy is being employed rather than k-fold cross validation accuracy is the size of the dataset. There are about only 50 rows, and thus the splits will have too much natural variability. <jupyter_code>#mean squared error function
def MeanSquaredError(actual, predicted):
return np.mean((actual-predicted)**2)
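#(Added check, not part of the original analysis) The helper above should agree with
#scikit-learn's implementation; quick verification on a couple of dummy arrays.
from sklearn.metrics import mean_squared_error
_actual = np.array([1.0, 2.0, 3.0])
_predicted = np.array([1.5, 2.0, 2.0])
assert np.isclose(MeanSquaredError(_actual, _predicted), mean_squared_error(_actual, _predicted))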
#Cross validation accuracy for regression model
from sklearn.model_selection import KFold
from sklearn.base import clone
def cross_validate_mse_bootstrap(model, k_iter, features):
table_sample = 0
mse = []
mse_error = 0
for i in range(k_iter):
table_sample = merged_states_standardized.sample(frac=1, replace=True, random_state=i)
X, y = feature_selection_pipeline(table_sample, features, 'Mortality_Rate')
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.40, random_state=42)
the_model = model()
the_model.fit(X_train, y_train)
training_predictions = the_model.predict(X_val)
actual_values = y_val
mse_error = MeanSquaredError(actual_values, training_predictions)
mse.append(mse_error)
print("The variance of the errors is", np.var(mse))
print("The average of the errors is", np.mean(mse))
return np.mean(mse)
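#To summarize the procedure above: each iteration draws a bootstrap resample of the full table
#(sampling rows with replacement), re-extracts the chosen features/label, fits a fresh model on a
#60/40 split of that resample, and records the validation MSE; the mean and variance of those
#errors are then reported.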
#Building the Baseline Linear Regression Model
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
regression_model = LinearRegression()
regression_model.fit(X_train, y_train)
training_predictions_linreg = regression_model.predict(X_train)
actual_values = y_train
print("The mean squared error for the Linear Regression Model is:", MeanSquaredError(actual_values, training_predictions_linreg))
linalg_original_mse = cross_validate_mse_bootstrap(LinearRegression, 30, features)
#Building Baseline Regression Tree Model
#Regression Tree Models tend to overfit to the training data, and fail to generalize well to unseen data.
#One benefit of Regression Trees is it tends to work well for small datasets, as in this case where there
#are only 50 records(rows)
from sklearn.tree import DecisionTreeRegressor
decision_tree_regression = DecisionTreeRegressor(random_state=0)
decision_tree_regression.fit(X_train, y_train)
training_predictions_dtree = decision_tree_regression.predict(X_train)
actual_values = y_train
print("The mean squared error for the Decision Tree Regressor is:", MeanSquaredError(actual_values, training_predictions_dtree))
regression_tree_original_mse = cross_validate_mse_bootstrap(DecisionTreeRegressor, 20, features)
<jupyter_output>The mean squared error for the Decision Tree Regressor is: 0.0
The variance of the errors is 0.04794448621696601
The average of the errors is 0.38441186807601907
<jupyter_text>As you can see above, the decision tree tends to get 100% accuracy on the training set(in this case 0 root mean squared error), as it tends to overfit to the training set.<jupyter_code>#Since ordinary decision trees tend to overfit to the training dataset, and thus lead there to be low bias and high variance,
#random forest ensemble combines the multiple decision trees overfitting to multiple bagged(boostrapped) samples of the data
#to capture the variability in the data, leading to low variance.
from sklearn.ensemble import RandomForestRegressor
random_forest_regression = RandomForestRegressor(random_state=0)
random_forest_regression.fit(X_train, y_train)
training_predictions_random_forest = random_forest_regression.predict(X_train)
actual_values = y_train
print("The mean squared error for the Random Forest Regressor is:", MeanSquaredError(actual_values, training_predictions_random_forest))
random_forest_original_mse = cross_validate_mse_bootstrap(RandomForestRegressor, 30, features)
<jupyter_output>The mean squared error for the Random Forest Regressor is: 0.07006550516282674
The variance of the errors is 0.03399226236723881
The average of the errors is 0.27321129196306004
<jupyter_text>Currently, for all three models above, the variance and mean are approximately equal. This is concerning as this indicates that the model may overfitting to the training dataset, and failing to generalize well to the unseen data.
Particularly, small changes in the unseen data that deviate (in terms of the noise) from the training set lead to large changes in the accuracy. In other words, these models have low bias and high variance.
Thus, feature selection in the next step is important, as it will help narrow down on only the most essential features, to ensure that model complexity isn't too high. Particularly, the size of the dataset is only around 50, so narrowing down on only the essential features will greatly improve the accuracy.**Part 3**: Perform Feature Selection to Enhance Model Performance
<jupyter_code>#Let's create a pairplot to examine associations between the quantitative continuous features
X_train['Mortality_Rate'] = y_train
sns.pairplot(X_train)<jupyter_output>/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
<jupyter_text>The pairplot above tells me that HeartDiseaseMortality, StrokeMortality, and RespMortalityRate2014 features have no association with the Mortality_Rate strangely.
Earlier, I had expected that HeartDiseaseMortality, StrokeMortality, and RespMortalityRate2014 features would be significantly correlated with the mortality rate, but they aren't according to these scatterplots. This tells me that these mortalities are independent of covid-19 specific mortalities. Thus, these features will be removed from the final model.
Furthermore, the populations of particular age group features appear to be redundant. To be able to underscore the difference between age groups further, it may be better to include the PopMale5-92010, PopFmle-5-92010, PopMale75-842010, and PopFmle75-842010, and leave out the others.
The rest of the features appear to be associated with the response.Let's create another pairplot to understand features(specifically pertaining to features not considered earlier such as Deaths, Confirmed Cases etc.)<jupyter_code>#Another pairplot depicting features not considered originally.
features = ['Lat',
'Long_',
'Confirmed',
'Deaths',
'Recovered',
'Active',
'FIPS',
'Incident_Rate',
'People_Tested',
'People_Hospitalized',
'Testing_Rate',
'Mortality_Rate']
X_pairplot, y_pairplot = feature_selection_pipeline(merged_states_standardized, features, 'Mortality_Rate')
X_pairplot['Mortality_Rate'] = y_pairplot
sns.pairplot(X_pairplot)<jupyter_output><empty_output><jupyter_text>Based on this pairplot, Long_, Confirmed, Deaths, Incident_Rate, and People_Tested features area associated with the response variable which is the Mortality_Rate.
Amongst the associated features, Long_, People_Tested, and Incident_Rate are the most strongly associated, so these features will be added to the original list of features. Now, let's build the enhanced models with specific feature selection, and compare the accuracies to before.**Part 4:** Build models with specifically selected features during feature selection<jupyter_code>final_features = [
'Mortality_Rate',
'PopMale5-92010',
'PopFmle5-92010',
'PopMale75-842010',
'PopFmle75-842010',
'DiabetesPercentage',
'Smokers_Percentage',
'Long_',
'Hospitalization_Rate',
'People_Tested',
'Incident_Rate'
]
X_final, y_final = feature_selection_pipeline(merged_states_standardized, final_features, 'Mortality_Rate')
X_train_final, X_test_final, y_train_final, y_test_final = train_test_split(X_final, y_final, test_size=0.40, random_state=42)
#Linear Regression Model After Feature Selection
regression_model_final = LinearRegression()
regression_model_final.fit(X_train_final, y_train_final)
training_predictions_linreg_feat_selec = regression_model_final.predict(X_train_final)
actual_values = y_train_final
print("The mean squared error for the Linear Regression Model is:", MeanSquaredError(actual_values, training_predictions_linreg_feat_selec))
linalg_final_mse = cross_validate_mse_bootstrap(LinearRegression, 30, final_features)<jupyter_output>The mean squared error for the Linear Regression Model is: 0.15430819478542315
The variance of the errors is 1.0858116105794842
The average of the errors is 0.6053297937861216
<jupyter_text>Clearly, the root mean squared error for the regression model went down from 0.<jupyter_code>#Regression Tree Model After Feature Selection
#Regression Tree Models tend to overfit to the training data, and fail to generalize well to unseen data.
#One benefit of Regression Trees is it tends to work well for small datasets, as in this case where there
#are only 50 records(rows)
from sklearn.tree import DecisionTreeRegressor
decision_tree_regression_final = DecisionTreeRegressor(random_state=0)
decision_tree_regression_final.fit(X_train_final, y_train_final)
training_predictions_dtree_final = decision_tree_regression_final.predict(X_train_final)
actual_values = y_train_final
print("The mean squared error for the Decision Tree Regressor is:", MeanSquaredError(actual_values, training_predictions_dtree_final))
regression_tree_final_mse = cross_validate_mse_bootstrap(DecisionTreeRegressor, 30, final_features)
<jupyter_output>The mean squared error for the Decision Tree Regressor is: 0.0
The variance of the errors is 0.03499827051203105
The average of the errors is 0.3134319785458988
<jupyter_text>**Part 5** Calculate and Examine the Improvements in the models after feature selection. Additionally, evaluate the performance of the models on the test data.<jupyter_code>#Final Random Forest Regressor Model
#Since ordinary decision trees tend to overfit to the training dataset, and thus lead there to be low bias and high variance,
#random forest ensemble combines the multiple decision trees overfitting to multiple bagged(boostrapped) samples of the data
#to capture the variability in the data, leading to low variance.
from sklearn.ensemble import RandomForestRegressor
random_forest_regression_final = RandomForestRegressor(random_state=0)
random_forest_regression_final.fit(X_train_final, y_train_final)
training_predictions_random_forest_fin = random_forest_regression_final.predict(X_train_final)
actual_values = y_train_final
print("The mean squared error for the Random Forest Regressor is:", MeanSquaredError(actual_values, training_predictions_random_forest))
random_forest_final_mse = cross_validate_mse_bootstrap(RandomForestRegressor, 30, final_features)
#Linear Regression Model (Average) Percent Error Reduction:
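#Note: the values below are percent *changes* in the bootstrap cross-validated MSE,
#so a negative number means the error decreased after feature selection.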
lin_reg_error_reduction = 100*((linalg_final_mse-linalg_original_mse)/linalg_original_mse)
print("Linear Regression Model Percent Error Reduction: ", lin_reg_error_reduction, "%")
#Regression Tree Model Percent Error Reduction:
reg_tree_error_reduction = 100*((regression_tree_final_mse-regression_tree_original_mse)/regression_tree_original_mse)
print("Regression Tree Model Percent Error Reduction: ", reg_tree_error_reduction, "%")
#Random Forest of Regression Trees Model Percent Error Reduction:
random_forest_error_reduction = 100*((random_forest_final_mse-random_forest_original_mse)/random_forest_original_mse)
print("Random Forest of Regression Trees Model Percent Error Reduction: ", random_forest_error_reduction, "%")<jupyter_output>Linear Regression Model Percent Error Reduction: -67.05024288071961 %
Regression Tree Model Percent Error Reduction: -18.464541660842706 %
Random Forest of Regression Trees Model Percent Error Reduction: -14.31173174934856 %
<jupyter_text>**Part 5:** Evaluate the Performance of the Models on the Test Set<jupyter_code>#Evaluate Linear Regression Model on the Test Set
lin_reg_test_set_predictions = regression_model_final.predict(X_test_final)
lin_reg_test_set_error = MeanSquaredError(y_test_final, lin_reg_test_set_predictions)
print("Linear Regression Model Test Set Error: ", lin_reg_test_set_error)
# #Evaluate Regression Tree Model on the Test Set
reg_tree_test_set_predictions = decision_tree_regression_final.predict(X_test_final)
reg_tree_test_set_error = MeanSquaredError(y_test_final, reg_tree_test_set_predictions)
print("Regression Tree Model Test Set Error: ", reg_tree_test_set_error)
# #Evaluate Random Forest of Regression Trees Model on the Test Set
random_forest_reg_tree_test_set_predictions = random_forest_regression_final.predict(X_test_final)
random_forest_reg_tree_test_set_error = MeanSquaredError(y_test_final, random_forest_reg_tree_test_set_predictions)
print("Random Forest of Regression Trees Model Test Set Error: ", random_forest_reg_tree_test_set_error)<jupyter_output>Linear Regression Model Test Set Error: 0.3461902049653014
Regression Tree Model Test Set Error: 0.4291695606635044
Random Forest of Regression Trees Model Test Set Error: 0.2880493911874899
|
no_license
|
/Predicting COVID-19 Mortality Rate.ipynb
|
Dodgeramadi/ML_DataScience_Projects
| 21 |
<jupyter_start><jupyter_text># This jupyter notebook analyzes data about ramen.<jupyter_code># KEY TAKEAWAYS
# 2,580 total reviews.
# Reviews are from 38 different countries.
# Average ramen rating is 3.6.
# No correlation between the number of ratings a brand or style of ramen received and how high its rating was.
# Ramen that are instant, spicy and flavorful, and made up of either beef or chicken are the ones that were the most rated.
# Nissin is the most popular brand of ramen that customers bought.
# Packed ramen is the most popular style of ramen that customers consumed.
# Asia is the most common region for ramen ratings.
# The Middle East and Africa are the least common regions for ramen ratings.
from IPython.display import Image
Image(filename='ramen.jpg', width = 500, height = 200)
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import plotly.offline as pyo
import plotly.graph_objs as go
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
df = pd.read_csv('Ramen.csv', header=0,encoding = 'unicode_escape')
df.head()
df.dtypes
# Stars column has a the word Unrated in it. That should be removed.
df['Stars'].value_counts()
# Replacing the word Unrated in the Stars column with a NaN value.
df = df.replace({'Stars':{'Unrated':np.nan}})
df['Stars'].value_counts()
df['Stars'] = df.Stars.astype('float')
# 355 brands of ramen. Nissin is the most popular brand of ramen that customers bought.
# 7 styles of ramen. Pack is the most popular brand of ramen.
# Reviewers are from 38 different countries. Japan is the top source of reviews.
# Average rating is 3.6
df.describe(include='all')
# Replaced the value United States with USA.
df = df.replace({'Country':{'United States':'USA'}})
# Japan, USA, and South Korea have the most ratings.
c_count = df['Country'].value_counts().head(10)
plt.subplots(figsize=(12,4))
c_count.plot.bar(title='Countries With the Most Ratings')
sns.set()
plt.show()
# 10 most popular words in the Variety column.
# Ramen that are instant, spicy and flavorful, and made up of either beef or chicken are the ones that were the most rated.
a = pd.Series(' '.join(df['Variety']).lower().split()).value_counts().head(10)
plt.subplots(figsize=(12,4))
a.plot.bar(title='10 Most Common Words in Varieties of Ramen')
sns.set()
plt.show()
# Since there are 38 different countries, it would be easier to analyze them by country region.
# Country Regions = Africa, Asia, Europe, Middle East, North America, Oceania, South America,
df['Country_Region'] = df['Country']
df['Country'].value_counts()
df.replace(to_replace={'Country_Region':
{'Japan': 'Asia',
'USA': 'North America',
'South Korea': 'Asia',
'Taiwan': 'Asia',
'Thailand': 'Asia',
'China': 'Asia',
'Malaysia': 'Asia',
'Hong Kong': 'Asia',
'Indonesia': 'Asia',
'Singapore': 'Asia',
'Vietnam': 'Asia',
'UK': 'Europe',
'Philippines':'Asia',
'Canada': 'North America',
'India': 'Asia',
'Germany': 'Europe',
    'Mexico': 'North America',
'Australia': 'Oceania',
'Netherlands': 'Europe',
'Nepal': 'Asia',
'Myanmar': 'Asia',
'Pakistan': 'Asia',
    'Hungary': 'Europe',
'Bangladesh': 'Asia',
'Colombia': 'South America',
'Cambodia': 'Asia',
'Brazil': 'South America',
'Fiji': 'Oceania',
'Poland': 'Europe',
'Holland': 'Europe',
'Dubai': 'Middle East',
'Sarawak': 'Asia',
'Sweden': 'Europe',
'Estonia': 'Europe',
'Ghana': 'Africa',
'Nigeria': 'Africa' ,
'Finland': 'Europe'
}},inplace=True)
df['Country_Region'].value_counts()
# Ramen that comes in a pack is the most popular style of ramen, followed by ramen in a bowl and ramen in a cup.
# Ramen retailers should consider discontinuing selling ramen in a box, can, and bar.
plt.subplots(figsize=(12,4))
sns.countplot(x='Style', data=df, hue='Country_Region')
plt.title('Ramen Ratings by Style of Ramen', fontweight = 'bold', fontsize=15)
plt.xlabel('Style of Ramen', fontweight='bold')
# In Asia, the most popular ramen styles are packed ramen, ramen in a bowl and a ramen in a cup.
# For the Asian market, ramen retailers should consider discontinuing selling ramen in a box and ramen in a tray.
df_Asia = df[df['Country_Region'] == 'Asia']
plt.subplots(figsize=(12,4))
sns.countplot(x='Style', data=df_Asia)
plt.title('Ramen Ratings by Style of Ramen', fontweight = 'bold', fontsize=15)
plt.xlabel('Style of Ramen', fontweight='bold')
# Asia is the most common region for ramen ratings.
# The Middle East and Africa are the least common regions for ramen ratings.
plt.subplots(figsize=(12,4))
sns.countplot(x='Country_Region', data=df)
plt.title('Count of Countries by Region', fontweight = 'bold', fontsize=15)
plt.ylabel('Number of Countries in Region', fontweight='bold')
plt.xlabel('Country Region', fontweight='bold')
# The highest rated ramen brands are Kimura, ORee Garden, The Ramen Rater Select, Komforte Chockolates, and ChoripDong.
df2 = df.groupby('Brand').mean()
df2.sort_values('Stars', ascending = False).head(5)
# 10 most rated ramen varieties. These could also be viewed as the most popular ramen varieties.
df4 = df.groupby('Variety').mean()
df4.sort_values('Review #', ascending = False).head(10)
# No correlation between the number of reviews and the rating a ramen brand receives.
d3 = df2.corr()
sns.heatmap(d3, annot=True)
# The lowest rated ramen brands are Dr. McDougall's, Tiger, Kim Ve Wong, Roland, and US Canning.
df2.sort_values('Stars', ascending = False).tail(5)
# 10 least rated ramen varieties. These could also be viewed as the least popular ramen varieties.
df4 = df.groupby('Variety').mean()
df4.sort_values('Review #', ascending = False).tail(10)<jupyter_output><empty_output>
|
no_license
|
/Ramen.ipynb
|
dhakshanayashwanth/Ramen
| 1 |
<jupyter_start><jupyter_text># Overview
This notebook describes how to construct GRN models.
Please read our paper first to know about the CellOracle algorithm.
### Notebook file
Notebook file is available at CellOracle GitHub.
https://github.com/morris-lab/CellOracle/blob/master/docs/notebooks/04_Network_analysis/Network_analysis_with_Paul_etal_2015_data.ipynb
### Data
CellOracle uses two input data below for the GRN model construction.
- **Input data1: scRNA-seq data**. Please look at the previous section to know the scRNA-seq data preprocessing method. https://morris-lab.github.io/CellOracle.documentation/tutorials/scrnaprocess.html
- **Input data2: Base-GRN**. Base-GRN is a binary matrix (or list) that represents the TF-target gene connection. Please look at our paper to know the concept of base-GRN.
- CellOracle typically uses base-GRN constructed from scATAC-seq. If you want to create custom base-GRN from your data, please look at another notebook on how to get base-GRN from your scATAC-seq data. https://morris-lab.github.io/CellOracle.documentation/tutorials/base_grn.html
- If you do not have any scATAC-seq data that correspond / similar to the cell type of the scRNA-seq data, please use pre-built base-GRN.
- We provide multiple options for pre-built base-GRN. For mouse analysis, we recommend using base-GRN constructed from the mouse sciATAC-seq atlas dataset. It includes various tissue and various cell types. Another option is base-GRN constructed from promoter sequence. We provide promoter base-GRN for ten species.
### What you can do
After constructing the CellOracle GRN model, you can do two analyses.
1. **in silico TF perturbation** to simulate cell identity shift. CellOracle uses the GRN model to simulate cell identity shift in response to TF perturbation. For this analysis, you need to construct GRN models in this notebook first.
2. **Network analysis** using graph theory. You can analyze the GRN model itself. We provide several functions for Network analysis using graph theory.
- CellOracle construct cluster-wise GRN model. You can compare the GRN model structure between clusters. By comparing GRN models, you can investigate the cell type-specific GRN configuration and rewiring process of this GRN.
- You can export the network models. You can analyze the GRN model using any method you like.
### Custom data class / object
In this notebook, CellOracle uses two custom classes, `Oracle` and `Links`.
- `Oracle` is the main class in the CellOracle package. It will do almost all calculations of GRN model construction and TF perturbation simulation. `Oracle` will do the following calculation sequentially.
1. Import scRNA-sequence data. Please look at another notebook to learn preprocessing method.
2. Import base-GRN data.
3. scRNA-seq data processing.
4. GRN model construction.
5. in silico petrurbation. We will describe how to do it in the following notebook.
- `Links` is a class to store GRN data. Also, it has many functions for network analysis and visualization.
# 0. Import libraries<jupyter_code># 0. Import
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scanpy as sc
import seaborn as sns
import celloracle as co
co.__version__
# visualization settings
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
plt.rcParams['figure.figsize'] = [6, 4.5]
plt.rcParams["savefig.dpi"] = 300
<jupyter_output><empty_output><jupyter_text>## 0.1. Check installation
Celloracle uses some R libraries in network analysis.
Please make sure that all dependent R libraries are installed on your computer.
You can test the installation with the following command.<jupyter_code>co.test_R_libraries_installation()<jupyter_output>R path: /usr/bin/R
checking R library installation: igraph -> OK
checking R library installation: linkcomm -> OK
checking R library installation: rnetcarto -> OK
<jupyter_text>## 0.2. Make a folder to save graph<jupyter_code>save_folder = "figures"
os.makedirs(save_folder, exist_ok=True)<jupyter_output><empty_output><jupyter_text># 1. Load data
## 1.1. Load processed gene expression data (anndata)
Please refer to the previous notebook in the tutorial for an example of how to process scRNA-seq data.
https://morris-lab.github.io/CellOracle.documentation/tutorials/scrnaprocess.html
We need scRNA-seq data as anndata.
> **This CellOracle tutorial notebook assumes the user has basic knowledge of and experience with scRNA-seq analysis using Scanpy and anndata.** This notebook does not intend to give introductory knowledge about Scanpy and anndata.
> If you are not familiar with them, please look at the documentation and tutorials of anndata (https://anndata.readthedocs.io/en/stable/) and Scanpy (https://scanpy.readthedocs.io/en/stable/).
<jupyter_code># Load data. !! Replace the data path below if you use your own data.
# adata = sc.read_h5ad("DATAPATH")
# Here, we will use a hematopoiesis data by Paul 2015.
# You can load preprocessed data using a celloracle function as follows.
adata = co.data.load_Paul2015_data()
adata<jupyter_output><empty_output><jupyter_text>## 1.2. (Optional step) Downsampling
If your scRNA-seq data includes more than 20-30K cells, we recommend downsampling.
This is because the later simulation process will require a large amount of memory with large data.
Also, please pay attention to the number of genes. If you followed the instructions in the previous tutorial notebook, the scRNA-seq data should include only the top 2~3K variable genes. Having more than 3K genes might cause problems in the later steps.<jupyter_code>print(f"Cell number is :{adata.shape[0]}")
print(f"Gene number is :{adata.shape[1]}")
# Randomly downsample to 30K cells if the anndata includes more than 30K cells.
n_cells_downsample = 30000
if adata.shape[0] > n_cells_downsample:
# Let's downsample to 30K cells
sc.pp.subsample(adata, n_obs=n_cells_downsample, random_state=123)
print(f"Cell number is :{adata.shape[0]}")<jupyter_output>Cell number is :2671
<jupyter_text>## 1.3. Load base-GRN data.
For GRN inference, CellOracle needs a base-GRN.
- There are several ways to make base-GRN. We can typically generate TF information from scATAC-seq data or bulk ATAC-seq data. Please refer to the first step of the tutorial for the details of this process. https://morris-lab.github.io/CellOracle.documentation/tutorials/base_grn.html
- If you do not have your scATAC-seq data, you can use some built-in base-GRN data.
- Base-GRN made from mouse sci-ATAC-seq atlas dataset: The built-in base-GRN was made from various tissue/cell-types (http://atlas.gs.washington.edu/mouse-atac/). We recommend using this for mouse scRNA-seq data. Please load this data as follows.
`base_GRN = co.data.load_mouse_scATAC_atlas_base_GRN()`
 - Promoter base-GRN: We provide base-GRNs made from promoter DNA sequences for ten species. You can load this data as follows.
- For Human: `base_GRN = co.data.load_human_promoter_base_GRN()`
<jupyter_code># Load TF info which was made from mouse cell atlas dataset.
base_GRN = co.data.load_mouse_scATAC_atlas_base_GRN()
# Check data
base_GRN.head()<jupyter_output><empty_output><jupyter_text># 2. Initiate Oracle object
We can use Oracle for the data preprocessing and GRN inference steps.
The Oracle object stores all of the necessary information and does the calculations with its internal functions.
We instantiate an Oracle object, then input the gene expression data (anndata) and a TFinfo into the Oracle object.<jupyter_code># Instantiate Oracle object
oracle = co.Oracle()<jupyter_output><empty_output><jupyter_text>## 2.1. Load gene expression data into the Oracle object
For the CellOracle analysis, the anndata should include (1) gene expression counts, (2) clustering information, and (3) trajectory (dimensional reduction embedding) data. Please refer to another notebook for more information on anndata preprocessing.
When you load scRNA-seq data, please enter **the name of the clustering data** and **the dimensional reduction data**.
- The clustering data should be stored in the `obs` attribute of the anndata.
> You can check it by the following command.
>
> `adata.obs.columns`
- The dimensional reduction data should be stored in the `obsm` attribute of the anndata.
> You can check it by the following command.
>
> `adata.obsm.keys()`
<jupyter_code># Show data name in anndata
print("metadata columns :", list(adata.obs.columns))
print("dimensional reduction: ", list(adata.obsm.keys()))
# In this notebook, we use raw mRNA count as an input of Oracle object.
adata.X = adata.layers["raw_count"].copy()
# Instantiate Oracle object.
oracle.import_anndata_as_raw_count(adata=adata,
cluster_column_name="louvain_annot",
embedding_name="X_draw_graph_fa")<jupyter_output><empty_output><jupyter_text>## 2.2. Load base-GRN data into oracle object<jupyter_code># You can load TF info dataframe with the following code.
oracle.import_TF_data(TF_info_matrix=base_GRN)
# Alternatively, if you saved the information as a dictionary, you can use the code below.
# oracle.import_TF_data(TFdict=TFinfo_dictionary)<jupyter_output><empty_output><jupyter_text>## 2.3. (Optional) Add TF-target gene pair manually
We can add additional TF-target gene pairs manually.
For example, if there is a study or database that includes specific TF-target pairs, you can use such information in the following way.
### 2.3.1. Make dictionary
Here, we will introduce how to manually add TF-target gene pair data.
As an example, we will use TF binding data that was published in supplemental table 4 in the paper. (http://doi.org/10.1016/j.cell.2015.11.013).
You can download this file by running the following command. If it fails, please download it manually.
https://raw.githubusercontent.com/morris-lab/CellOracle/master/docs/demo_data/TF_data_in_Paul15.csv
To import the TF data into the Oracle object, we need to convert it into a Python dictionary. Each dictionary key is a target gene, and each value is a list of candidate regulatory TFs.<jupyter_code># Download file.
!wget https://raw.githubusercontent.com/morris-lab/CellOracle/master/docs/demo_data/TF_data_in_Paul15.csv
# If you are using macOS, please try the following command.
#!curl -O https://raw.githubusercontent.com/morris-lab/CellOracle/master/docs/demo_data/TF_data_in_Paul15.csv
# We have TF and target gene information. This is from a supplemental figure of Paul et al. (2015).
Paul_15_data = pd.read_csv("TF_data_in_Paul15.csv")
Paul_15_data
# Make dictionary: dictionary Key is TF, dictionary Value is list of target genes
TF_to_TG_dictionary = {}
for TF, TGs in zip(Paul_15_data.TF, Paul_15_data.Target_genes):
# convert target gene to list
TG_list = TGs.replace(" ", "").split(",")
# store target gene list in a dictionary
TF_to_TG_dictionary[TF] = TG_list
# We need a dictionary in which each key is a target gene and each value is a list of TFs.
# We invert the dictionary above using a utility function in celloracle.
TG_to_TF_dictionary = co.utility.inverse_dictionary(TF_to_TG_dictionary)
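# The resulting dictionary maps each target gene to a list of candidate regulatory TFs,
# e.g. (hypothetical values, for illustration only): {"target_gene_A": ["TF_1", "TF_2"], ...}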
<jupyter_output><empty_output><jupyter_text>### 2.3.2. Add the TF information dictionary into the Oracle object<jupyter_code># Add TF information
oracle.addTFinfo_dictionary(TG_to_TF_dictionary)<jupyter_output><empty_output><jupyter_text># 3. KNN imputation
CellOracle uses the same strategy as velocyto for visualizing cell transitions. This process requires KNN imputation in advance.
For the KNN imputation, we need PCA and PC selection first.
## 3.1. PCA<jupyter_code># Perform PCA
oracle.perform_PCA()
# Select important PCs
plt.plot(np.cumsum(oracle.pca.explained_variance_ratio_)[:100])
n_comps = np.where(np.diff(np.diff(np.cumsum(oracle.pca.explained_variance_ratio_))>0.002))[0][0]
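# The line above is a simple elbow heuristic: np.diff(np.cumsum(...)) recovers the per-PC explained
# variance ratio, and n_comps is set to the first index where that ratio crosses the 0.002 (0.2%) threshold.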
plt.axvline(n_comps, c="k")
print(n_comps)
n_comps = min(n_comps, 50)<jupyter_output><empty_output><jupyter_text>## 3.2. KNN imputation
Estimate the optimal number of nearest neighbors for KNN imputation.<jupyter_code>n_cell = oracle.adata.shape[0]
print(f"cell number is :{n_cell}")
k = int(0.025*n_cell)
print(f"Auto-selected k is :{k}")
oracle.knn_imputation(n_pca_dims=n_comps, k=k, balanced=True, b_sight=k*8,
b_maxl=k*4, n_jobs=4)<jupyter_output><empty_output><jupyter_text># 4. Save and Load.
You can save the `Oracle` object using `Oracle.to_hdf5(FILE_NAME.celloracle.oracle)`.
Please use `co.load_hdf5(FILE_NAME.celloracle.oracle)` to load the saved file.
<jupyter_code># Save oracle object.
oracle.to_hdf5("Paul_15_data.celloracle.oracle")
# Load file.
oracle = co.load_hdf5("Paul_15_data.celloracle.oracle")<jupyter_output><empty_output><jupyter_text># 5. GRN calculation
The next step is constructing a cluster-specific GRN for all clusters.
- You can calculate GRNs with the `get_links` function, and the function returns GRNs as a `Links` object.
The `Links` object stores inferred GRNs and the corresponding metadata. You can do network analysis with the `Links` object.
- The GRN will be calculated for each cluster/sub-group. In the example below, we construct GRN for each unit of the "louvain_annot" clustering.
<jupyter_code># check data
sc.pl.draw_graph(oracle.adata, color="louvain_annot")<jupyter_output><empty_output><jupyter_text>## 5.1. Get GRNs<jupyter_code>%%time
# Calculate GRN for each population in "louvain_annot" clustering unit.
# This step may take a long time (~30 minutes).
links = oracle.get_links(cluster_name_for_GRN_unit="louvain_annot", alpha=10,
verbose_level=10, test_mode=False)<jupyter_output><empty_output><jupyter_text>## 5.2. (Optional) Export GRNs
Although celloracle has many functions for network analysis, you can analyze GRNs by hand if you choose.
The raw GRN data is stored as a dictionary of dataframes in the `links_dict` attribute.
For example, you can get the GRN for the "Ery_0" cluster with the following commands.<jupyter_code>links.links_dict.keys()
links.links_dict["Ery_0"]<jupyter_output><empty_output><jupyter_text>You can export the file as follows.<jupyter_code># Set cluster name
cluster = "Ery_0"
# Save as csv
#links.links_dict[cluster].to_csv(f"raw_GRN_for_{cluster}.csv")<jupyter_output><empty_output><jupyter_text>## 5.3. (Optional) Change order
The links object stores color information in the `palette` attribute.
This information is used for visualization.
The samples will be visualized in that order.
Here we can change the colors and their order.<jupyter_code># Show the contents of palette
links.palette
# Change the order of palette
order = ['MEP_0', 'Mk_0', 'Ery_0',
'Ery_1', 'Ery_2', 'Ery_3', 'Ery_4', 'Ery_5', 'Ery_6', 'Ery_7', 'Ery_8', 'Ery_9',
'GMP_0', 'GMP_1', 'GMP_2', 'GMPl_0', 'GMPl_1',
'Mo_0', 'Mo_1', 'Mo_2',
'Gran_0', 'Gran_1', 'Gran_2', 'Gran_3']
links.palette = links.palette.loc[order]
links.palette<jupyter_output><empty_output><jupyter_text># 6. Network preprocessing
## 6.1. Filter network edges
Using the base-GRN, CellOracle constructs GRN models as lists of directed edges between TFs and their target genes.
We need to remove weak edges or insignificant edges before doing network analysis.
We filter the network edges as follows.
1. Remove uncertain network edges based on the p-value.
2. Remove weak network edges. In this tutorial, we keep the top 2000 edges ranked by edge strength.
The raw network data is stored as an attribute, `links_dict`, while filtered network data is stored in `filtered_links`.
<jupyter_code>links = co.data.load_tutorial_links_object()
links.filter_links(p=0.001, weight="coef_abs", threshold_number=2000)<jupyter_output><empty_output><jupyter_text>## 6.2. Degree distribution
In the first step, we examine the network degree distribution.
>Network degree, which is the number of edges for each node, is one of the important metrics used to investigate the network structure (https://en.wikipedia.org/wiki/Degree_distribution).
Please keep in mind that the degree distribution may change depending on the filtering threshold.<jupyter_code>plt.rcParams["figure.figsize"] = [9, 4.5]
links.plot_degree_distributions(plot_model=True,
#save=f"{save_folder}/degree_distribution/",
)
plt.rcParams["figure.figsize"] = [6, 4.5]<jupyter_output><empty_output><jupyter_text>## 6.3. Calculate network score
Next, we calculate several network scores using some R libraries.
Please make sure that the R libraries are installed on your PC before running the command below.
<jupyter_code># Calculate network scores. It takes several minutes.
links.get_score()<jupyter_output>processing... batch 1/3
Ery_0: finished.
Ery_1: finished.
Ery_2: finished.
Ery_3: finished.
Ery_4: finished.
Ery_5: finished.
Ery_6: finished.
Ery_7: finished.
processing... batch 2/3
Ery_8: finished.
Ery_9: finished.
GMP_0: finished.
GMP_1: finished.
GMPl_0: finished.
Gran_0: finished.
Gran_1: finished.
Gran_2: finished.
processing... batch 3/3
MEP_0: finished.
Mk_0: finished.
Mo_0: finished.
Mo_1: finished.
<jupyter_text>The score is stored in the `merged_score` attribute.<jupyter_code>links.merged_score.head()<jupyter_output><empty_output><jupyter_text>## 6.4. Save
Save the processed GRN. We will use this file in the in silico TF perturbation analysis.<jupyter_code># Save Links object.
links.to_hdf5(file_path="links.celloracle.links")
# You can load files with the following command.
links = co.load_hdf5(file_path="links.celloracle.links")
<jupyter_output><empty_output><jupyter_text>**If you are not interested in Network analysis and just want to do TF perturbation simulation, you can skip the network analysis described below.
Please go to next step: in silico gene perturbation with GRNs**
https://morris-lab.github.io/CellOracle.documentation/tutorials/simulation.html
# 7. Network analysis; Network score for each gene
The `Links` class has many functions to visualize network scores.
See the documentation for the details of the functions.
## 7.1. Network score in each cluster
We have calculated several network scores using different centrality metrics.
>The centrality score is one of the important indicators of network structure (https://en.wikipedia.org/wiki/Centrality).
Let's visualize genes with high network centrality.
<jupyter_code># Check cluster name
links.cluster
# Visualize the top n genes with the highest scores.
links.plot_scores_as_rank(cluster="MEP_0", n_gene=30, save=f"{save_folder}/ranked_score")<jupyter_output><empty_output><jupyter_text>## 7.2. Network score comparison between two clusters
By comparing network scores between two clusters, we can analyze differences in GRN structure.<jupyter_code>plt.ticklabel_format(style='sci',axis='y',scilimits=(0,0))
links.plot_score_comparison_2D(value="eigenvector_centrality",
cluster1="MEP_0", cluster2="GMPl_0",
percentile=98, save=f"{save_folder}/score_comparison")
plt.ticklabel_format(style='sci',axis='y',scilimits=(0,0))
links.plot_score_comparison_2D(value="betweenness_centrality",
cluster1="MEP_0", cluster2="GMPl_0",
percentile=98, save=f"{save_folder}/score_comparison")
plt.ticklabel_format(style='sci',axis='y',scilimits=(0,0))
links.plot_score_comparison_2D(value="degree_centrality_all",
cluster1="MEP_0", cluster2="GMPl_0",
percentile=98, save=f"{save_folder}/score_comparison")<jupyter_output><empty_output><jupyter_text>## 7.3. Network score dynamics
In the following section, we focus on how a gene's network score changes during differentiation.
Using Gata2 as an example, we show how to visualize network score dynamics.
Gata2 is known to play an essential role in the early MEP and GMP populations.
links.plot_score_per_cluster(goi="Gata2", save=f"{save_folder}/network_score_per_gene/")<jupyter_output><empty_output><jupyter_text>If a gene has no connections in a cluster, it is impossible to calculate network degree scores.
Thus the scores will not be shown.
For example, Cebpa has no connections in the erythroid clusters, so there are no degree scores for Cebpa in these clusters, as shown below.<jupyter_code>links.plot_score_per_cluster(goi="Cebpa")<jupyter_output><empty_output><jupyter_text>You can check the filtered network edges as follows.<jupyter_code>cluster_name = "Ery_1"
filtered_links_df = links.filtered_links[cluster_name]
filtered_links_df.head()<jupyter_output><empty_output><jupyter_text>You can confirm that there is no Cebpa connection in the Ery_1 cluster.<jupyter_code>filtered_links_df[filtered_links_df.source == "Cebpa"]<jupyter_output><empty_output><jupyter_text>## 7.4. Gene cartography analysis
We can calculate gene cartography as follows.
The gene cartography will be calculated for the GRN in each cluster.
>Gene cartography is a method for gene network analysis.
The method classifies genes into several groups using the network module structure and connections.
For more information on gene cartography, please refer to the following paper (https://www.nature.com/articles/nature03288).
<jupyter_code># Plot cartography as a scatter plot
links.plot_cartography_scatter_per_cluster(scatter=True,
kde=False,
gois=["Gata1", "Gata2", "Sfpi1"], # Highlight genes of interest
auto_gene_annot=False,
args_dot={"n_levels": 105},
args_line={"c":"gray"}, save=f"{save_folder}/cartography")
plt.rcParams["figure.figsize"] = [4, 7]
# Plot the summary of cartography analysis
links.plot_cartography_term(goi="Gata2",
# save=f"{save_folder}/cartography",
)<jupyter_output>Gata2
<jupyter_text># 8. Network analysis; network score distribution
Next, we visualize the distribution of network scores to get insight into the global trend of the GRNs.
## 8.1. Distribution of network degree<jupyter_code>plt.rcParams["figure.figsize"] = [6, 4.5]
plt.subplots_adjust(left=0.15, bottom=0.3)
plt.ylim([0,0.040])
links.plot_score_discributions(values=["degree_centrality_all"], method="boxplot",
#save=f"{save_folder}",
)
plt.subplots_adjust(left=0.15, bottom=0.3)
plt.ylim([0, 0.28])
links.plot_score_discributions(values=["eigenvector_centrality"], method="boxplot", save=f"{save_folder}")
<jupyter_output><empty_output><jupyter_text>## 8.2. Distribution of network entropy<jupyter_code>plt.subplots_adjust(left=0.15, bottom=0.3)
links.plot_network_entropy_distributions(save=f"{save_folder}")
<jupyter_output><empty_output>
| permissive | /docs/notebooks/04_Network_analysis/.ipynb_checkpoints/Network_analysis_with_Paul_etal_2015_data-checkpoint.ipynb | marvelouscandy/CellOracle | 33 |
<jupyter_start><jupyter_text>## Practice Time
#### Write a function that calculates the Mean Square Error
$ MSE = \frac{1}{n}\sum_{i=1}^{n}{(Y_i - \hat{Y}_i)^2} $
### Hint: [How to take a square](https://googoodesign.gitbooks.io/-ezpython/unit-1.html)# [Assignment Goal]
- Following the example MAE function, write your own MSE function (refer to the formula above)# [Assignment Focus]
- Pay attention to code indentation
- Can you translate the mathematical formula into a combination of Python functions? (In[2], Out[2])<jupyter_code># Load basic packages and their aliases
import numpy as np
import matplotlib.pyplot as plt
def mean_absolute_error(y, yp):
"""
Calculate MAE
Args:
- y: actual values
- yp: predicted values
Return:
- mae: MAE
"""
mae = MAE = sum(abs(y - yp)) / len(y)
return mae
# Define the mean_squared_error function to calculate and return the MSE
def mean_squared_error(y, yp):
"""
Calculate MSE
Args:
- y: actual values
- yp: predicted values
Return:
- mse: MSE
"""
mse = MSE = sum((y - yp) ** 2) / len(y)
return mse
# Same as the example above; no further explanation needed
w = 3
b = 0.5
x_lin = np.linspace(0, 100, 101)
y = (x_lin + np.random.randn(101) * 5) * w + b
plt.plot(x_lin, y, 'b.', label = 'data points')
plt.title("Assume we have data points")
plt.legend(loc = 2)
plt.show()
# Same as the example above; no further explanation needed
y_hat = x_lin * w + b
plt.plot(x_lin, y, 'b.', label = 'data')
plt.plot(x_lin, y_hat, 'r-', label = 'prediction')
plt.title("Assume we have data points (And the prediction)")
plt.legend(loc = 2)
plt.show()
# Run the functions to confirm they execute correctly
MSE = mean_squared_error(y, y_hat)
MAE = mean_absolute_error(y, y_hat)
print("The Mean squared error is %.3f" % (MSE))
print("The Mean absolute error is %.3f" % (MAE))<jupyter_output>The Mean squared error is 189.744
The Mean absolute error is 11.019
| no_license | /homework/.ipynb_checkpoints/Day_001_HW-checkpoint.ipynb | ryotsu1036/3rd-ML100Days | 1 |
<jupyter_start><jupyter_text><jupyter_code>import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
df=pd.read_csv('/content/amsPredictionSheet11.csv')
df.head()
df.describe()<jupyter_output><empty_output><jupyter_text>Normalize dataset<jupyter_code>scaler = MinMaxScaler()
df_values = df.values
df_scaled = scaler.fit_transform(df_values)
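# MinMaxScaler rescales each column to the [0, 1] range: x_scaled = (x - x.min()) / (x.max() - x.min())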
normalized_df = pd.DataFrame(df_scaled)
normalized_df.head()
normalized_df.describe()<jupyter_output><empty_output><jupyter_text>Standardize the dataset (mean = 0 and standard deviation = 1)<jupyter_code>from sklearn.preprocessing import StandardScaler
std_scaler = StandardScaler()
df_values = df.values
df_std = std_scaler.fit_transform(df_values)
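# StandardScaler converts each column to z-scores: z = (x - column mean) / column standard deviation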
standardize_df = pd.DataFrame(df_std)
standardize_df.head()
standardize_df.describe()<jupyter_output><empty_output><jupyter_text>Converting a categorical dataset into a numerical dataset
Read the categorical dataset file and store it in df<jupyter_code>df=pd.read_csv('/content/stdcat.csv')
df.head()<jupyter_output><empty_output><jupyter_text>Check whether the dataset is categorical<jupyter_code>df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 RNO 5 non-null int64
1 State 5 non-null object
2 Category 5 non-null object
3 Gender 5 non-null object
dtypes: int64(1), object(3)
memory usage: 288.0+ bytes
<jupyter_text>Methods for converting a categorical dataset into a numerical dataset
Method 1) By replacing values in the dataset
Make Gender numeric by replacing F with 0 and M with 1<jupyter_code>new = {'F':0,'M':1}
df=df.replace(new)
df.info()
df.head()<jupyter_output><empty_output><jupyter_text>Method 2) By LabelEncoder
Replace the State column with numbers starting from 0, assigned in ascending alphabetical order of the state names<jupyter_code>from sklearn.preprocessing import LabelEncoder
lb = LabelEncoder()
df['State'] = lb.fit_transform(df['State'])
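# LabelEncoder assigns integer codes 0..n_classes-1 following the sorted (alphabetical) order of the labels,
# which is why the states are numbered in ascending alphabetical order.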
df.head()
df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 RNO 5 non-null int64
1 State 5 non-null int64
2 Category 5 non-null object
3 Gender 5 non-null int64
dtypes: int64(3), object(1)
memory usage: 288.0+ bytes
<jupyter_text>Method 3) One-Hot Encoding
Split the Category column into one column per available category, prefixed with 'Cat'; put 1 in the column matching the person's category and 0 in the remaining columns<jupyter_code>df = pd.get_dummies(df, columns={'Category'}, prefix = {'Cat'})
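# pd.get_dummies creates one indicator column per category value (here Cat_GEN, Cat_OBC, Cat_SC, Cat_ST),
# with 1 marking the row's category and 0 elsewhere.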
df.head()
df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 RNO 5 non-null int64
1 State 5 non-null int64
2 Gender 5 non-null int64
3 Cat_GEN 5 non-null uint8
4 Cat_OBC 5 non-null uint8
5 Cat_SC 5 non-null uint8
6 Cat_ST 5 non-null uint8
dtypes: int64(3), uint8(4)
memory usage: 268.0 bytes
| no_license | /Data_Pre_Processing_Lab.ipynb | Anuragbn/ML_Assignments | 8 |
<jupyter_start><jupyter_text># String applications<jupyter_code># Function to calculate the length of a string
def string_length(str1):
count = 0
for char in str1:
count += 1
return count
print(string_length('My first function: count the length of a string'))
# Check whether a string starts with specified characters
string = "MyPython"
print(string.startswith("my"))
# Function to Reverse words in a string
def reverse_string_words(text):
for line in text.split('\n'):
return(' '.join(line.split()[::-1])) # join stick all the words together
print(reverse_string_words("The quick brown fox jumps over the lazy dog"))
print(reverse_string_words("Python Exercises"))
# Find a string made of the first 3 and the last 2 chars from a given a string
def string_both_ends(str):
if len(str) < 3:
return ''
return str[0:3] + str[-2:]
print(string_both_ends('Hello world in Python'))
print(string_both_ends('www'))
print(string_both_ends('12'))
# Remove the nth index character from a non empty string
def remove_char(str, n):
first_part = str[:n]
last_pasrt = str[n+1:]
return first_part + last_pasrt
print(remove_char('Python', 0))
print(remove_char('Python', 3))
print(remove_char('Python', 5))
# Change a given string to a new string where the first and last chars have been exchanged
def change_sring(str1):
return str1[-1:] + str1[1:-1] + str1[:1]
print(change_sring('abcd'))
print(change_sring('12345'))
# Remove the characters of odd index values of a given string
def odd_values_string(str):
result = ""
for i in range(len(str)):
if i % 2 == 0:
result = result + str[i]
return result
print(odd_values_string('123456789'))
print(odd_values_string('Python'))
# Count the occurrences of each word in a given sentence
def word_count(str):
counts = dict()
words = str.split()
for word in words:
if word in counts:
counts[word] += 1
else:
counts[word] = 1
return counts
print( word_count('the quick brown fox jumps over the lazy dog.')) <jupyter_output>{'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'jumps': 1, 'over': 1, 'lazy': 1, 'dog.': 1}
| no_license | /Strings-2.ipynb | zhongyiz1995/Python-Programming | 1 |
<jupyter_start><jupyter_text>
From Understanding to Preparation## Introduction
In this lab, we will continue learning about the data science methodology, and focus on the **Data Understanding** and the **Data Preparation** stages.## Table of Contents
1. [Recap](#0)
2. [Data Understanding](#2)
3. [Data Preparation](#4)
# Recap In Lab **From Requirements to Collection**, we learned that the data we need to answer the question developed in the business understanding stage, namely *can we automate the process of determining the cuisine of a given recipe?*, is readily available. A researcher named Yong-Yeol Ahn scraped tens of thousands of food recipes (cuisines and ingredients) from three different websites, namely:
www.allrecipes.com
www.epicurious.com
www.menupan.comFor more information on Yong-Yeol Ahn and his research, you can read his paper on [Flavor Network and the Principles of Food Pairing](http://yongyeol.com/papers/ahn-flavornet-2011.pdf).We also collected the data and placed it on an IBM server for your convenience.
------------# Data Understanding Important note: Please note that you are not expected to know how to program in Python. The following code is meant to illustrate the stages of data understanding and data preparation, so it is totally fine if you do not understand the individual lines of code. We have a full course on programming in Python, Python for Data Science, which is also offered on Coursera. So make sure to complete the Python course if you are interested in learning how to program in Python.### Using this notebook:
To run any of the following cells of code, you can type **Shift + Enter** to execute the code in a cell.
Get the version of Python installed.<jupyter_code># check Python version
!python -V<jupyter_output>Python 3.6.11
<jupyter_text>Download the library and dependencies that we will need to run this lab.<jupyter_code>import pandas as pd # import library to read data into dataframe
pd.set_option('display.max_columns', None)
import numpy as np # import numpy library
import re # import library for regular expression<jupyter_output><empty_output><jupyter_text>Download the data from the IBM server and read it into a *pandas* dataframe.<jupyter_code>recipes = pd.read_csv("https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/data/recipes.csv")
print("Data read into dataframe!") # takes about 30 seconds<jupyter_output>Data read into dataframe!
<jupyter_text>Show the first few rows.<jupyter_code>recipes.head()<jupyter_output><empty_output><jupyter_text>Get the dimensions of the dataframe.<jupyter_code>recipes.shape<jupyter_output><empty_output><jupyter_text>So our dataset consists of 57,691 recipes. Each row represents a recipe, and for each recipe, the corresponding cuisine is documented as well as whether 384 ingredients exist in the recipe or not, beginning with almond and ending with zucchini.We know that a basic sushi recipe includes the ingredients:
* rice
* soy sauce
* wasabi
* some fish/vegetablesLet's check that these ingredients exist in our dataframe:<jupyter_code>ingredients = list(recipes.columns.values)
print([match.group(0) for ingredient in ingredients for match in [(re.compile(".*(rice).*")).search(ingredient)] if match])
print([match.group(0) for ingredient in ingredients for match in [(re.compile(".*(wasabi).*")).search(ingredient)] if match])
print([match.group(0) for ingredient in ingredients for match in [(re.compile(".*(soy).*")).search(ingredient)] if match])<jupyter_output>['brown_rice', 'licorice', 'rice']
['wasabi']
['soy_sauce', 'soybean', 'soybean_oil']
<jupyter_text>Yes, they do!
* rice exists as rice.
* wasabi exists as wasabi.
* soy exists as soy_sauce.
So maybe if a recipe contains all three ingredients: rice, wasabi, and soy_sauce, then we can confidently say that the recipe is a **Japanese** cuisine! Let's keep this in mind!
----------------# Data Preparation In this section, we will prepare data for the next stage in the data science methodology, which is modeling. This stage involves exploring the data further and making sure that it is in the right format for the machine learning algorithm that we selected in the analytic approach stage, which is decision trees.First, look at the data to see if it needs cleaning.<jupyter_code>recipes["country"].value_counts() # frequency table<jupyter_output><empty_output><jupyter_text>By looking at the above table, we can make the following observations:
1. Cuisine column is labeled as Country, which is inaccurate.
2. Cuisine names are not consistent as not all of them start with an uppercase first letter.
3. Some cuisines are duplicated as variations of the country name, such as Vietnam and Vietnamese.
4. Some cuisines have very few recipes.#### Let's fixes these problems.Fix the name of the column showing the cuisine.<jupyter_code>column_names = recipes.columns.values
column_names[0] = "cuisine"
recipes.columns = column_names
recipes<jupyter_output><empty_output><jupyter_text>Make all the cuisine names lowercase.<jupyter_code>recipes["cuisine"] = recipes["cuisine"].str.lower()<jupyter_output><empty_output><jupyter_text>Make the cuisine names consistent.<jupyter_code>recipes.loc[recipes["cuisine"] == "austria", "cuisine"] = "austrian"
recipes.loc[recipes["cuisine"] == "belgium", "cuisine"] = "belgian"
recipes.loc[recipes["cuisine"] == "china", "cuisine"] = "chinese"
recipes.loc[recipes["cuisine"] == "canada", "cuisine"] = "canadian"
recipes.loc[recipes["cuisine"] == "netherlands", "cuisine"] = "dutch"
recipes.loc[recipes["cuisine"] == "france", "cuisine"] = "french"
recipes.loc[recipes["cuisine"] == "germany", "cuisine"] = "german"
recipes.loc[recipes["cuisine"] == "india", "cuisine"] = "indian"
recipes.loc[recipes["cuisine"] == "indonesia", "cuisine"] = "indonesian"
recipes.loc[recipes["cuisine"] == "iran", "cuisine"] = "iranian"
recipes.loc[recipes["cuisine"] == "italy", "cuisine"] = "italian"
recipes.loc[recipes["cuisine"] == "japan", "cuisine"] = "japanese"
recipes.loc[recipes["cuisine"] == "israel", "cuisine"] = "jewish"
recipes.loc[recipes["cuisine"] == "korea", "cuisine"] = "korean"
recipes.loc[recipes["cuisine"] == "lebanon", "cuisine"] = "lebanese"
recipes.loc[recipes["cuisine"] == "malaysia", "cuisine"] = "malaysian"
recipes.loc[recipes["cuisine"] == "mexico", "cuisine"] = "mexican"
recipes.loc[recipes["cuisine"] == "pakistan", "cuisine"] = "pakistani"
recipes.loc[recipes["cuisine"] == "philippines", "cuisine"] = "philippine"
recipes.loc[recipes["cuisine"] == "scandinavia", "cuisine"] = "scandinavian"
recipes.loc[recipes["cuisine"] == "spain", "cuisine"] = "spanish_portuguese"
recipes.loc[recipes["cuisine"] == "portugal", "cuisine"] = "spanish_portuguese"
recipes.loc[recipes["cuisine"] == "switzerland", "cuisine"] = "swiss"
recipes.loc[recipes["cuisine"] == "thailand", "cuisine"] = "thai"
recipes.loc[recipes["cuisine"] == "turkey", "cuisine"] = "turkish"
recipes.loc[recipes["cuisine"] == "vietnam", "cuisine"] = "vietnamese"
recipes.loc[recipes["cuisine"] == "uk-and-ireland", "cuisine"] = "uk-and-irish"
recipes.loc[recipes["cuisine"] == "irish", "cuisine"] = "uk-and-irish"
recipes<jupyter_output><empty_output><jupyter_text>Remove cuisines with < 50 recipes.<jupyter_code># get list of cuisines to keep
recipes_counts = recipes["cuisine"].value_counts()
cuisines_indices = recipes_counts > 50
cuisines_to_keep = list(np.array(recipes_counts.index.values)[np.array(cuisines_indices)])
rows_before = recipes.shape[0] # number of rows of original dataframe
print("Number of rows of original dataframe is {}.".format(rows_before))
recipes = recipes.loc[recipes['cuisine'].isin(cuisines_to_keep)]
rows_after = recipes.shape[0] # number of rows of processed dataframe
print("Number of rows of processed dataframe is {}.".format(rows_after))
print("{} rows removed!".format(rows_before - rows_after))<jupyter_output>Number of rows of original dataframe is 57691.
Number of rows of processed dataframe is 57403.
288 rows removed!
<jupyter_text>Convert all Yes's to 1's and the No's to 0's<jupyter_code>recipes = recipes.replace(to_replace="Yes", value=1)
recipes = recipes.replace(to_replace="No", value=0)<jupyter_output><empty_output><jupyter_text>#### Let's analyze the data a little more in order to learn the data better and note any interesting preliminary observations.Run the following cell to get the recipes that contain **rice** *and* **soy** *and* **wasabi** *and* **seaweed**.<jupyter_code>recipes.head()
check_recipes = recipes.loc[
(recipes["rice"] == 1) &
(recipes["soy_sauce"] == 1) &
(recipes["wasabi"] == 1) &
(recipes["seaweed"] == 1)
]
check_recipes<jupyter_output><empty_output><jupyter_text>Based on the results of the above code, can we classify all recipes that contain **rice** *and* **soy** *and* **wasabi** *and* **seaweed** as **Japanese** recipes? Why?<jupyter_code>Your Answer: No, because, acording to the data, the cuisine could also be asian or east_asian.<jupyter_output><empty_output><jupyter_text>Double-click __here__ for the solution.
<!-- The correct answer is:
No, because other recipes such as Asian and East_Asian recipes also contain these ingredients.
-->Let's count the ingredients across all recipes.<jupyter_code># sum each column
ing = recipes.iloc[:, 1:].sum(axis=0)
# define each column as a pandas series
ingredient = pd.Series(ing.index.values, index = np.arange(len(ing)))
count = pd.Series(list(ing), index = np.arange(len(ing)))
# create the dataframe
ing_df = pd.DataFrame(dict(ingredient = ingredient, count = count))
ing_df = ing_df[["ingredient", "count"]]
print(ing_df.to_string())<jupyter_output> ingredient count
0 almond 2306
1 angelica 1
2 anise 223
3 anise_seed 87
4 apple 2422
5 apple_brandy 37
6 apricot 620
7 armagnac 11
8 artemisia 13
9 artichoke 391
10 asparagus 460
11 avocado 660
12 bacon 2169
13 baked_potato 9
14 balm 3
15 banana 989
16 barley 266
17 bartlett_pear 23
18 basil 3842
19 bay 1463
20 bean 1992
21 beech 1
22 beef 4902
23 beef_broth 845
24 beef_liver 10
25 beer 307
26 beet[...]<jupyter_text>Now we have a dataframe of ingredients and their total counts across all recipes. Let's sort this dataframe in descending order.<jupyter_code>ing_df.sort_values(["count"], ascending=False, inplace=True)
ing_df.reset_index(inplace=True, drop=True)
print(ing_df)<jupyter_output> ingredient count
0 egg 21025
1 wheat 20781
2 butter 20719
3 onion 18080
4 garlic 17353
.. ... ...
378 strawberry_jam 1
379 sturgeon_caviar 1
380 kaffir_lime 1
381 beech 1
382 durian 0
[383 rows x 2 columns]
<jupyter_text>#### What are the 3 most popular ingredients?<jupyter_code>Your Answer:
1.egg
2.wheat
3.butter<jupyter_output><empty_output><jupyter_text>Double-click __here__ for the solution.
<!-- The correct answer is:
// 1. Egg with 21,025 occurrences.
// 2. Wheat with 20,781 occurrences.
// 3. Butter with 20,719 occurrences.
-->However, note that there is a problem with the above table. There are ~40,000 American recipes in our dataset, which means that the data is biased towards American ingredients.**Therefore**, let's compute a more objective summary of the ingredients by looking at the ingredients per cuisine.#### Let's create a *profile* for each cuisine.
In other words, let's try to find out what ingredients Chinese people typically use, and what is **Canadian** food for example.<jupyter_code>cuisines = recipes.groupby("cuisine").mean()
cuisines.head()<jupyter_output><empty_output><jupyter_text>As shown above, we have just created a dataframe where each row is a cuisine and each column (except for the first column) is an ingredient, and the row values represent the percentage of each ingredient in the corresponding cuisine.
**For example**:
* *almond* is present across 15.65% of all of the **African** recipes.
* *butter* is present across 38.11% of all of the **Canadian** recipes.Let's print out the profile for each cuisine by displaying the top four ingredients in each cuisine.<jupyter_code>num_ingredients = 4 # define number of top ingredients to print
# define a function that prints the top ingredients for each cuisine
def print_top_ingredients(row):
print(row.name.upper())
row_sorted = row.sort_values(ascending=False)*100
top_ingredients = list(row_sorted.index.values)[0:num_ingredients]
row_sorted = list(row_sorted)[0:num_ingredients]
for ind, ingredient in enumerate(top_ingredients):
print("%s (%d%%)" % (ingredient, row_sorted[ind]), end=' ')
print("\n")
# apply function to cuisines dataframe
create_cuisines_profiles = cuisines.apply(print_top_ingredients, axis=1)<jupyter_output><empty_output>
|
no_license
|
/DS0103EN-3-3-1-From-Understanding-to-Preparation-v2.0.ipynb
|
hualcosa/IBM-Data-Science-Professional-Certification-assignments
| 19 |
<jupyter_start><jupyter_text>### System Test#### Sol: Task-1<jupyter_code>
n=int(input("Enter how many people are sharing their expenses: "))
a=[]
ppl={}
for i in range(0,n):
temp=input("Enter person %d name : "%(i+1))
a.append(temp)
ppl[temp]=i
# print(a)
expenses=[]
for i in range(0,n):
expenses.append([])
for j in range(0,n):
expenses[i].append(0)
# print(expenses)
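# expenses[i][j] holds the net amount that person i owes person j; it is updated whenever an expense is added.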
val=0
while(val!=3):
print("Main Menu: \n1. Add an Expense. \n2. Show Expenses. \n3. Exit Program\n")
val=int(input())
if (val==1):
per=""
while per not in a:
per = input("Enter the name of the person who paid the expense: ")
if(per not in a):
print(per,"not exist")
exp=int(input("Enter the Amount of expense %s paid: "%(per)))
amt=exp/n
for name in a:
if(name!=per):
expenses[ppl[name]][ppl[per]] = expenses[ppl[name]][ppl[per]] + amt
if( expenses[ppl[name]][ppl[per]] < expenses[ppl[per]][ppl[name]] ):
expenses[ppl[per]][ppl[name]] = expenses[ppl[per]][ppl[name]] - expenses[ppl[name]][ppl[per]]
expenses[ppl[name]][ppl[per]] = 0
else:
expenses[ppl[name]][ppl[per]] = expenses[ppl[name]][ppl[per]] - expenses[ppl[per]][ppl[name]]
expenses[ppl[per]][ppl[name]] = 0
elif val==2:
pass
elif val==3:
pass
else:
print("Wrong Entry")
<jupyter_output>Enter how many people are sharing their expenses: 3
Enter person 1 name : SHIVA
Enter person 2 name : RAJ
Enter person 3 name : RAMYA
Main Menu:
1. Add an Expense.
2. Show Expenses.
3. Exit Program
3
<jupyter_text>#### Sol: Task-2<jupyter_code>
def ExpensesMenu():
print("select the Option number from the given option")
print("1. All Users share the expense. \n2. Only some users share the expense")
opt = int(input())
if(opt==1):
ExpenseOption1()
elif(opt==2):
ExpensesOption2()
def ExpenseOption1():
per="" # stores the person name who paid the expense
while per not in a:
per = input("Enter the name of the person who paid the expense: ")
if(per not in a):
print(per,"not exist")
exp=int(input("Enter the Amount of expense %s paid: "%(per)))
amt=exp/n
# below for loop stores the net expenses
for name in a:
if(name!=per):
expenses[ppl[name]][ppl[per]] = expenses[ppl[name]][ppl[per]] + amt
if( expenses[ppl[name]][ppl[per]] < expenses[ppl[per]][ppl[name]] ):
expenses[ppl[per]][ppl[name]] = expenses[ppl[per]][ppl[name]] - expenses[ppl[name]][ppl[per]]
expenses[ppl[name]][ppl[per]] = 0
else:
expenses[ppl[name]][ppl[per]] = expenses[ppl[name]][ppl[per]] - expenses[ppl[per]][ppl[name]]
expenses[ppl[per]][ppl[name]] = 0
def ExpensesOption2():
for i in range(0,n):
print(i+1, ". ",a[i], end='\n')
paidPer=-1
paidPer = int(input("Enter the number corresponding to the person who paid the expense: "))
paidPer=paidPer-1
print("Who all share the expense")
for i in range(0,n):
print(i+1, ". ",a[i], end='\n')
print("Enter the number corresponding to the people(who share expenses) followed by comma(,) ex - 1,2 ")
newppl=list(map(int,input().split(',')))
exp=int(input("Enter the Amount of expense %s paid: "%(a[paidPer])))
amt=exp/len(newppl)
for j in range(0,len(newppl)):
newppl[j]=newppl[j]-1
for i in newppl:
j=paidPer
expenses[i][j] = expenses[i][j]+amt
if(expenses[i][j]< expenses[j][i]):
expenses[j][i]= expenses[j][i]-expenses[i][j]
expenses[i][j]=0
else:
expenses[i][j] = expenses[i][j] - expenses[j][i]
expenses[j][i]=0
def ShowExpenses():
row_format = "{:>15}" * (len(a) + 1)
print(row_format.format("", *a))
for team, row in zip(a,expenses):
print(row_format.format(team, *row))
n=int(input("Enter how many people are sharing their expenses: "))
a=[] # to store people names
ppl={}
for i in range(0,n):
temp=input("Enter person %d name : "%(i+1))
a.append(temp)
ppl[temp]=i
expenses=[]
# keeping initial expenses as zero in below for loop
for i in range(0,n):
expenses.append([])
for j in range(0,n):
expenses[i].append(0)
# print(expenses)
val=0
while(val!=3):
print("Main Menu: \n1. Add an Expense. \n2. Show Expenses. \n3. Exit Program\n")
val=int(input())
if (val==1):
# Add an Expense
ExpensesMenu()
elif val==2:
# Show Expenses
ShowExpenses()
elif val==3:
# Exit Program
pass
else:
print("Wrong Entry")
<jupyter_output><empty_output><jupyter_text>#### Sol:Task-3<jupyter_code>
def ExpensesMenu(expenses,a):
print("select the Option number from the given option")
print("1. All Users share the expense. \n2. Only some users share the expense")
opt = int(input())
allPeople =a
if(opt==1):
# ExpenseOption1()
for i in range(0,n):
print((i+1),". ",a[i], end='\n')
paidPer=-1
paidPer = int(input("Enter the number corresponding to the person who paid the expense: "))
paidPer=paidPer-1
allPeople = a
newPeopleIndexes = list(range(0,len(a)))
paidPersonIndex = paidPer
amt=int(input("Enter the Amount of expense %s paid: "%(allPeople[paidPersonIndex])))
print("Enter the chosen option from below")
print("1. Everyone shares the expenses equally. \n2. Expense is shared in a ratio")
optionSelected = int(input())
if(optionSelected==1):
newPeopleRatio = [1]*len(a)
elif(optionSelected==2):
print("Enter the ratio of Persons in in-order separated by comma(,)")
for i in range(0,n):
print((i+1),". ",a[i], end='\n')
newPeopleRatio = list(map(int,input().split(',')))
ExpenseShareAndRatio(allPeople,newPeopleIndexes,paidPersonIndex,newPeopleRatio,expenses,amt)
elif(opt==2):
for i in range(0,n):
print((i+1),". ",a[i], end='\n')
paidPer=-1
paidPer = int(input("Enter the number corresponding to the person who paid the expense: "))
paidPer=paidPer-1
paidPersonIndex = paidPer
print("Who all share the expense")
for i in range(0,n):
print((i+1),". ",a[i], end='\n')
print("Enter the number corresponding to the people(who share expenses) followed by comma(,) ex - 1,2 ")
newPeopleIndexes=list(map(int,input().split(',')))
for i in range(0,len(newPeopleIndexes)):
newPeopleIndexes[i] = newPeopleIndexes[i] -1
amt=int(input("Enter the Amount of expense %s paid: "%(a[paidPer])))
print("Enter the chosen option from below")
print("1. Everyone shares the expenses equally. \n2. Expense is shared in a ratio")
optionSelected = int(input())
if(optionSelected==1):
newPeopleRatio = [1]*len(newPeopleIndexes)
elif(optionSelected==2):
print("Enter the ratio of Persons in in-order separated by comma(,)")
for i in range(0,len(newPeopleIndexes)):
print((i+1),". ",a[newPeopleIndexes[i]], end='\n')
newPeopleRatio = list(map(int,input().split(',')))
ExpenseShareAndRatio(allPeople,newPeopleIndexes,paidPersonIndex,newPeopleRatio,expenses,amt)
def ExpenseShareAndRatio(allPeople,newPeopleIndexes,paidPersonIndex,newPeopleRatio,expenses,amt):
j=paidPersonIndex
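    # Each sharer i owes amt * newPeopleRatio[i] / sum(newPeopleRatio) to the payer (index j);
    # the payer's own share is skipped in the loop below.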
for i in range(0,len(newPeopleIndexes)):
if(newPeopleIndexes[i]!=j):
expenses[newPeopleIndexes[i]][j]=expenses[newPeopleIndexes[i]][j] + amt*(newPeopleRatio[i]/sum(newPeopleRatio))
if(expenses[newPeopleIndexes[i]][j] < expenses[j][newPeopleIndexes[i]]):
expenses[j][newPeopleIndexes[i]]= expenses[j][newPeopleIndexes[i]]-expenses[newPeopleIndexes[i]][j]
expenses[newPeopleIndexes[i]][j]=0
else:
expenses[newPeopleIndexes[i]][j] = expenses[newPeopleIndexes[i]][j] - expenses[j][newPeopleIndexes[i]]
expenses[j][newPeopleIndexes[i]]=0
else:
pass
def ShowExpenses():
row_format = "{:>15}" * (len(a) + 1)
print(row_format.format("", *a))
for team, row in zip(a,expenses):
print(row_format.format(team, *row))
n=int(input("Enter how many people are sharing their expenses: "))
a=[] # to store people names
ppl={}
for i in range(0,n):
temp=input("Enter person %d name : "%(i+1))
a.append(temp)
ppl[temp]=i
allPeople = a;
expenses=[]
# keeping initial expenses as zero in below for loop
for i in range(0,n):
expenses.append([])
for j in range(0,n):
expenses[i].append(0)
# print(expenses)
val=0
while(val!=3):
print("Main Menu: \n1. Add an Expense. \n2. Show Expenses. \n3. Exit Program\n")
val=int(input())
if (val==1):
# Add an Expense
ExpensesMenu(expenses,a)
elif val==2:
# Show Expenses
ShowExpenses()
elif val==3:
# Exit Program
pass
else:
print("Wrong Entry")
<jupyter_output><empty_output>
|
no_license
|
/Untitled18.ipynb
|
Koundinya99/Coding
| 3 |
<jupyter_start><jupyter_text>## Part 1
### (a) For lambda = 0, 1, ..., 5000, solve for wRR.<jupyter_code>X_train = pd.read_csv("data/hw1-data/X_train.csv",
names = ['cylinders', 'displacement', 'horsepower',
'weight', 'acceleration', 'year made', 'constant'],
index_col=False)
y_train = pd.read_csv("data/hw1-data/y_train.csv",
names = ['miles per gallon'],
index_col=False)
X_test = pd.read_csv("data/hw1-data/X_test.csv",
names = ['cylinders', 'displacement', 'horsepower',
'weight', 'acceleration', 'year made', 'constant'],
index_col=False)
y_test = pd.read_csv("data/hw1-data/y_test.csv",
names = ['miles per gallon'],
index_col=False)
# Standardize features.
for i in range(len(X_train.columns) - 1):
train_mean = X_train[X_train.columns[i]].mean()
train_std_mle = np.sqrt(((X_train[X_train.columns[i]] - train_mean) ** 2).sum() / len(X_train[X_train.columns[i]]))
X_train[X_train.columns[i]] = (X_train[X_train.columns[i]] - train_mean).div(train_std_mle)
X_test[X_train.columns[i]] = (X_test[X_train.columns[i]] - train_mean).div(train_std_mle)
def w_RR(X_train, y_train, lamb_min, lamb_max):
'''
Input: X_train, y_train, lamb_min, lamb_max
1. X_train, y_train: training data; matrix
2. lamb_min: the min of lambda; int
3. lamb_max: the max of lambda; int
lamb_min and lamb_max define a range for lambda.
Function: Calculate w_RR for each lambda, which is range from lamb_min to lamb_max.
Output: w_RRs, df
1. w_RRs: ridge regression solution; a (d x n) dataframe, d: feature dimension, n: number of lambdas' value
(need to be transpose when used)
2. df: degree of freedom; list
'''
# Calculate least squares solution
w_LS = np.dot(np.dot(np.linalg.inv(np.dot(np.transpose(X_train), X_train)),
np.transpose(X_train)), y_train)
U, s, VH = np.linalg.svd(X_train) # Do Singular Value Decompositions
w_RRs = pd.DataFrame() # Create a dataframe to store the w_RRs
df = [] # Create a list to store dfs
for lambd in range(lamb_min, lamb_max + 1, 1):
'''Calculate w_RR and df for each lambda'''
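        # Closed-form ridge solution: w_RR = (lambda*I + X^T X)^{-1} X^T y.
        # The equivalent form used below, (lambda*(X^T X)^{-1} + I)^{-1} w_LS, reuses the least squares solution w_LS.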
w_RR = np.dot(np.linalg.inv(lambd*np.linalg.inv(np.dot(np.transpose(X_train), X_train))
+ np.identity(X_train.shape[1])), w_LS)
w_RRs[lambd] = pd.Series(np.transpose(w_RR)[0]) # Add w_RR to the dataframe
df_lambd = 0
for i in range(len(s)): # Calculate df.
df_lambd += s[i]*s[i]/(lambd + s[i]*s[i])
df.append(df_lambd) # Add df_lambd to the list
return(w_RRs, df)
# Call function w_RR() to calculate w_RRs and df
w_RRs_o, df = w_RR(X_train.values, y_train.values, 0, 5000)
w_RRs_o = w_RRs_o.T # Transpose the dataframe
w_RRs_o = w_RRs_o.rename(columns={0: 'cylinders', 1: 'displacement', 2: 'horsepower', 3: 'weight',
4: 'acceleration', 5: 'year made', 6: 'constant'}) # Rename the columns
w_RRs_o['df'] = df # Add df to the dataframe as a column.
# Plot the figure.
plt.figure(figsize = (10, 15))
plt.plot(w_RRs_o['df'], w_RRs_o['cylinders'], linestyle = '--', marker = 'o', lw = 1, ms = 2, label = '1: cylinders')
plt.plot(w_RRs_o['df'], w_RRs_o['displacement'], linestyle = '--', marker = 'o', lw = 1, ms = 2, label = '2: displacement')
plt.plot(w_RRs_o['df'], w_RRs_o['horsepower'], linestyle = '--', marker = 'o', lw = 1, ms = 2, label = '3: horsepower')
plt.plot(w_RRs_o['df'], w_RRs_o['weight'], linestyle = '--', marker = 'o', lw = 1, ms = 2, label = '4: weight')
plt.plot(w_RRs_o['df'], w_RRs_o['acceleration'], linestyle = '--', marker = 'o', lw = 1, ms = 2, label = '5: acceleration')
plt.plot(w_RRs_o['df'], w_RRs_o['year made'], linestyle = '--', marker = 'o', lw = 1, ms = 2, label = '6: year made')
plt.plot(w_RRs_o['df'], w_RRs_o['constant'], linestyle = '--', marker = 'o', lw = 1, ms = 2, label = '7: intercept')
plt.axhline(y=0, xmin=0, ls = 'dotted', c = 'black') # Add a horizontal line at y = 0 for the convenience of comparison.
# Set the attributes of the figure.
plt.xlabel('df(\u03BB)', size = 15)
plt.title('w vs. df(\u03BB)', size = 17)
plt.xlim(0, 8)
plt.legend(loc = 'lower left', fontsize = 12)
# Label the 7 curves by their dimension in X
plt.text(7.2, 2.70, '6', fontsize = 12)
plt.text(7.2, 0.68, '2', fontsize = 12)
plt.text(7.2, 0.24, '5', fontsize = 12)
plt.text(7.2, 0.08, '7', fontsize = 12)
plt.text(7.2, -0.34, '3', fontsize = 12)
plt.text(7.2, -0.54, '1', fontsize = 12)
plt.text(7.2, -5.67, '4', fontsize = 12)<jupyter_output><empty_output><jupyter_text>### (c) For lambda = 0, ..., 50, predict all 42 test cases. Plot the root mean squared error (RMSE) on the test set as a function of lambda.<jupyter_code>def rmse(X_test, y_test, w_RR):
'''
Input: X_test, y_test, w_RR
1. X_test, y_test: test data; dataframe
2. w_RR: ridge regression solutions; dataframe
Function: Calculate rmses
Output: lamb, rmses
1. lamb: value of lambda; list
2. rmses: root mean squared error; list
'''
rmses = []
lamb = []
for i in w_RR.index:
w_RR_i = w_RR.iloc[i].values.reshape(-1, 1)
y_predict = np.dot(X_test.values, w_RR_i)
rmse = np.sqrt(np.sum(np.square(y_test.values - y_predict))/42)
rmses.append(rmse)
lamb.append(i)
return(lamb, rmses)
w_RR_51 = w_RRs_o.iloc[:51,:-1]
# Call function rmse().
lamb, rmses = rmse(X_test, y_test, w_RR_51)
# Plot the figure.
plt.figure(figsize = (10, 6))
plt.plot(lamb, rmses, linestyle = '--', marker = 'o', lw = 1, ms = 2)
# Set the attributes of the figure.
plt.xlabel('\u03BB', size = 15)
plt.ylabel('RMSE', size = 15)
plt.title('RMSE vs. \u03BB (Ridge Regression)', size = 17)<jupyter_output><empty_output><jupyter_text>### (d) In one figure, plot the test RMSE as a function of lambda = 0, ..., 100 for p = 1, 2, 3.<jupyter_code># 1th-order polynomial regression model
w_RRs_1 = w_RRs_o.iloc[:101,:-1]
# Call function rmse().
lamb_1, rmses_1 = rmse(X_test, y_test, w_RRs_1)
# 2th-order polynomial regression
X_train_2pol = X_train.copy(deep = True)
X_test_2pol = X_test.copy(deep = True)
name2 = ['x1^2', 'x2^2', 'x3^2', 'x4^2', 'x5^2', 'x6^2']
for i in range(len(X_train.columns) - 1):
# Add columns to X_train_2pol and standardize them.
X_train_2pol[name2[i]] = X_train_2pol.iloc[:,i] ** 2
train_mean = X_train_2pol[name2[i]].mean()
train_std_mle = np.sqrt(((X_train_2pol[name2[i]] - train_mean) ** 2).sum() / len(X_train_2pol[name2[i]]))
X_train_2pol[name2[i]] = (X_train_2pol[name2[i]] - train_mean).div(train_std_mle)
# Add columns to X_test_2pol and standardize them.
X_test_2pol[name2[i]] = X_test_2pol.iloc[:,i] ** 2
X_test_2pol[name2[i]] = (X_test_2pol[name2[i]] - train_mean).div(train_std_mle)
# Call function w_RR() to calculate w_RRs and df.
w_RRs_2, df_2 = w_RR(X_train_2pol.values, y_train.values, 0, 100)
w_RRs_2 = w_RRs_2.T # Transpose the dataframe
w_RRs_2 = w_RRs_2.rename(columns={0: 'x1', 1: 'x2', 2: 'x3', 3: 'x4',
4: 'x5', 5: 'x6', 6: 'Intercept',
7: 'x1^2', 8: 'x2^2', 9: 'x3^2',
10: 'x4^2', 11: 'x5^2', 12: 'x6^2'}) # Rename the columns
# Call function rmse() to predict test data and calculate RMSE.
lamb_2, rmses_2 = rmse(X_test_2pol, y_test, w_RRs_2)
# 3th-order polynomial regression
X_train_3pol = X_train_2pol.copy(deep = True)
X_test_3pol = X_test_2pol.copy(deep = True)
name3 = ['x1^3', 'x2^3', 'x3^3', 'x4^3', 'x5^3', 'x6^3']
for i in range(len(X_train.columns) - 1):
# Add columns to X_train_3pol and standardize them.
X_train_3pol[name3[i]] = X_train_3pol.iloc[:,i] ** 3
train_mean = X_train_3pol[name3[i]].mean()
train_std_mle = np.sqrt(((X_train_3pol[name3[i]] - train_mean) ** 2).sum() / len(X_train_3pol[name3[i]]))
X_train_3pol[name3[i]] = (X_train_3pol[name3[i]] - train_mean).div(train_std_mle)
# Add columns to X_test_3pol and standardize them.
X_test_3pol[name3[i]] = X_test_3pol.iloc[:,i] ** 3
X_test_3pol[name3[i]] = (X_test_3pol[name3[i]] - train_mean).div(train_std_mle)
# Call function w_RR() to calculate w_RRs and df.
w_RRs_3, df_3 = w_RR(X_train_3pol.values, y_train.values, 0, 100)
w_RRs_3 = w_RRs_3.T # Transpose the dataframe
w_RRs_3 = w_RRs_3.rename(columns={0: 'x1', 1: 'x2', 2: 'x3', 3: 'x4',
4: 'x5', 5: 'x6', 6: 'constant',
7: 'x1^2', 8: 'x2^2', 9: 'x3^2',
10: 'x4^2', 11: 'x5^2', 12: 'x6^2',
13: 'x1^3', 14: 'x2^3', 15: 'x3^3',
16: 'x4^3', 17: 'x5^3', 18: 'x6^3'}) # Rename the columns
# Call function rmse() to predict test data and calculate RMSE.
lamb_3, rmses_3 = rmse(X_test_3pol, y_test, w_RRs_3)
# Plot the figure.
plt.figure(figsize = (10, 6))
plt.plot(lamb_1, rmses_1, linestyle = '--', marker = 'o', c = 'b', lw = 1, ms = 2, label = '1st-order polynomial regression model')
plt.plot(lamb_2, rmses_2, linestyle = '--', marker = 'o', c = 'r', lw = 1, ms = 2, label = '2nd-order polynomial regression model')
plt.plot(lamb_3, rmses_3, linestyle = '--', marker = 'o', c = 'y', lw = 1, ms = 2, label = '3rd-order polynomial regression model')
# Set the attributes of the figure.
plt.xlabel('\u03BB', size = 15)
plt.ylabel('RMSE', size = 15)
plt.title('RMSE vs. \u03BB (Ridge Regression)', size = 17)
plt.legend(loc = 'upper left', fontsize = 11)
<jupyter_output><empty_output>
|
no_license
|
/Ridge_Regression_Implementation .ipynb
|
bl2791/Machine-Leaning
| 3 |
<jupyter_start><jupyter_text>
#Table of Contents
This notebook demonstrates the use of the IPythonTOC class. It was inspired by [Creating a table of contents with internal links in IPython Notebooks and Markdown documents](http://bit.ly/1zge9gE). If you don't know about [nbviewer](http://nbviewer.ipython.org/), you should.
The following cell was created by running _toc.genTOCMarkdownCell_('First Entry In Table of Contents') as described<jupyter_code># a. move this cell to preceed the markdown cell you want to index
# b. put the title string you want as argument to genTOCMarkdownCell
# c. change the loaded cell from a code cell to a markdown cell
# d. go to the head of this notebook and run the genTOCEntry cell - copy that output and paste into the TOC cell
with open('TOCMarkdownCell.txt', 'w') as outfile:
outfile.write(toc.genTOCMarkdownCell('First Entry in Table of Contents')) #<<< change title!!!
!cat TOCMarkdownCell.txt
!rm TOCMarkdownCell.txt
<jupyter_output><a id='First_Entry_in_Table_of_Contents'></a>
###First Entry in Table of Contents<jupyter_text>
###First Entry In Table of Contents
------------------------------------------------------------------------------------------------------------
The text in this markdown cell above the line was pasted from the output of the preceeding genTOCMarkdownCell call.
Next, run the genTOCEntry cell (normally go back to the top of the notebook), copy the output and edit the **TOC** cell to add the link.<jupyter_code># You have called toc.genTOCMarkdownCell in cell 2 before this cell so the title has been set in the class
# run it now.
toc.genTOCEntry()
<jupyter_output><empty_output><jupyter_text>#Table of Contents
The following link was created by copying output from cell 3's output: _toc.genTOCMarkdownCell_('First Entry In Table of Contents')
[First Entry in Table of Contents](#First_Entry_in_Table_of_Contents)
Now I'll create a second entry by changing the text passed to genTOCMarkdownCell<jupyter_code># This cell was copied or moved to preceed the markdown cell at the beginning of a section you want to index
# Put the title string of the new section as argument to genTOCMarkdownCell
# Add a markdown cell after this to begin the new section and copy the output of this cell to the markdown cell
# Normally, you go to the head of this notebook and run the genTOCEntry cell for output to paste into the TOC
# However, I've duplicated this code so you can just proceed through the notebook
with open('TOCMarkdownCell.txt', 'w') as outfile:
outfile.write(toc.genTOCMarkdownCell('Second Entry in Table of Contents')) #<<< change title!!!
!cat TOCMarkdownCell.txt
!rm TOCMarkdownCell.txt
<jupyter_output><a id='Second_Entry_in_Table_of_Contents'></a>
###Second Entry in Table of Contents<jupyter_text>
###Second Entry in Table of Contents
The markdown text above was copied from the output of cell 4.
Normally, I'd go to the cell before my **TOC** and execute it. However, I've duplicated that code so you can just proceed through this notebook and see things work.<jupyter_code># Create a markdown cell after this one to contain the Table of Contents
toc.genTOCEntry()
# the output of this cell would be copied and added to the TOC, which I've duplicated below<jupyter_output><empty_output><jupyter_text>#Table of Contents
The following link was created by copying output from cell 3 _toc.genTOCMarkdownCell_('First Entry In Table of Contents') as described
[First Entry in Table of Contents](#First_Entry_in_Table_of_Contents)
The following link was created by copying output from cell 5 _toc.genTOCMarkdownCell_('Second Entry in Table of Contents') as described
[Second Entry in Table of Contents](#Second_Entry_in_Table_of_Contents)
Now, the following links are active. You can click on them and navigate:
[Table of Contents](#Table_of_Contents)
[First Entry in Table of Contents](#First_Entry_in_Table_of_Contents)
[Second Entry in Table of Contents](#Second_Entry_in_Table_of_Contents)
One thing I often forget is that clicking a line in the **TOC** makes the browser jump to that cell but doesn't select it. Whatever cell was active when we clicked on the **TOC** link is still active, so a down or up arrow or shift-enter refers to still active cell, not the cell we got by clicking on the **TOC** link. If you don't want to create a disk file for the cat and rm, you can just run _toc.genTOCMarkdownCell_('your title') and create a link to 'your title' suitable for copying into the **TOC**. However, you'll have to edit the '\n' to split the lines yourself.
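If you would rather skip the temporary file entirely, here is a minimal sketch of that shortcut (assuming the same `toc` object used throughout this notebook); printing the returned string renders the `\n` as real line breaks, so you can copy the anchor and heading straight into a markdown cell:

```python
# Generate the markdown cell text in memory instead of writing TOCMarkdownCell.txt
cell_text = toc.genTOCMarkdownCell('Your Title Here')  # 'Your Title Here' is a placeholder
print(cell_text)
```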
OK, now try it yourself:<jupyter_code># %load TOCMarkdownCell.txt
<a id='Try_it_yourself'></a>
###Try it yourself<jupyter_output><empty_output>
|
no_license
|
/TOC.ipynb
|
charloco/IPythonTOC
| 5 |
<jupyter_start><jupyter_text># Introduction to Sets - Lab
## Introduction
Probability theory is all around. A common example is in the game of poker or related card games, where players try to calculate the probability of winning a round given the cards they have in their hands. Also, in a business context, probabilities play an important role. Operating in a volatile economy, companies need to take uncertainty into account and this is exactly where probability theory plays a role.
As mentioned in the lesson before, a good understanding of probability starts with understanding of sets and set operations. That's exactly what you'll learn in this lab!
## Objectives
You will be able to:
* Use Python to perform set operations
* Use Python to demonstrate the inclusion/exclusion principle
## Exploring Set Operations Using a Venn Diagram
Let's start with a pretty conceptual example. Let's consider the following sets:
- $\Omega$ = positive integers between [1, 12]
- $A$= even numbers between [1, 10]
- $B = \{3,8,11,12\}$
- $C = \{2,3,6,8,9,11\}$
#### a. Illustrate all the sets in a Venn Diagram like the one below. The rectangular shape represents the universal set.
#### b. Using your Venn Diagram, list the elements in each of the following sets:
- $ A \cap B$
- $ A \cup C$
- $A^c$
- The absolute complement of B
- $(A \cup B)^c$
- $B \cap C'$
- $A\backslash B$
- $C \backslash (B \backslash A)$
- $(C \cap A) \cup (C \backslash B)$
#### c. For the remainder of this exercise, let's create sets A, B and C and universal set U in Python and test out the results you came up with. Sets are easy to create in Python. For a guide to the syntax, follow some of the documentation [here](https://www.w3schools.com/python/python_sets.asp)<jupyter_code># Create set A
A = set([2,4,6,8,10])
'Type A: {}, A: {}'.format(type(A), A) # "Type A: <class 'set'>, A: {2, 4, 6, 8, 10}"
# Create set B
B = set([3,8,11,12])
'Type B: {}, B: {}'.format(type(B), B) # "Type B: <class 'set'>, B: {8, 11, 3, 12}"
# Create set C
C = set([2,3,6,8,9,11])
'Type C: {}, C: {}'.format(type(C), C) # "Type C: <class 'set'>, C: {2, 3, 6, 8, 9, 11}"
# Create universal set U
U = set([i for i in range(1,13)])
'Type U: {}, U: {}'.format(type(U), U) # "Type U: <class 'set'>, U: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}"<jupyter_output><empty_output><jupyter_text>Now, verify your answers in section 1 by using the correct methods in Python. To provide a little bit of help, you can find a table with common operations on sets below.
| Method | Equivalent | Result |
| ------ | ------ | ------ |
| s.issubset(t) | s <= t | test whether every element in s is in t
| s.issuperset(t) | s >= t | test whether every element in t is in s
| s.union(t) | s $\mid$ t | new set with elements from both s and t
| s.intersection(t) | s & t | new set with elements common to s and t
| s.difference(t) | s - t | new set with elements in s but not in t
| s.symmetric_difference(t) | s ^ t | new set with elements in either s or t but not both#### 1. $ A \cap B$<jupyter_code>A_inters_B = A.intersection(B)
A_inters_B # {8}<jupyter_output><empty_output><jupyter_text>#### 2. $ A \cup C $<jupyter_code>A_union_C = A.union(C)
A_union_C # {2, 3, 4, 6, 8, 9, 10, 11}<jupyter_output><empty_output><jupyter_text>#### 3. $A^c$ (you'll have to be a little creative here!)<jupyter_code>A_comp = U.difference(A)
A_comp # {1, 3, 5, 7, 9, 11, 12}<jupyter_output><empty_output><jupyter_text>#### 4. $(A \cup B)^c $<jupyter_code>A_union_B_comp = U.difference(A.union(B))
A_union_B_comp # {1, 5, 7, 9}<jupyter_output><empty_output><jupyter_text>#### 5. $B \cap C' $<jupyter_code>B_inters_C_comp = B.intersection(U.difference(C))
B_inters_C_comp # {12}<jupyter_output><empty_output><jupyter_text>#### 6. $A\backslash B$<jupyter_code>compl_of_B = A.difference(B)
compl_of_B # {2, 4, 6, 10}<jupyter_output><empty_output><jupyter_text>#### 7. $C \backslash (B \backslash A) $<jupyter_code>C_compl_B_compl_A = C.difference(B.difference(A))
C_compl_B_compl_A # {2, 6, 8, 9}<jupyter_output><empty_output><jupyter_text>#### 8. $(C \cap A) \cup (C \backslash B)$<jupyter_code>C_inters_A_union_C_min_B= C.intersection(A).union(C.difference(B))
C_inters_A_union_C_min_B # {2, 6, 8, 9}<jupyter_output><empty_output><jupyter_text>## The Inclusion Exclusion Principle
Use A, B and C from exercise one to verify the inclusion exclusion principle in Python.
You can use the sets A, B and C as used in the previous exercise.
Recall from the previous lesson that:
$$\mid A \cup B\cup C\mid = \mid A \mid + \mid B \mid + \mid C \mid - \mid A \cap B \mid -\mid A \cap C \mid - \mid B \cap C \mid + \mid A \cap B \cap C \mid $$
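As an added numerical check with the sets defined above: $\mid A \mid = 5$, $\mid B \mid = 4$, $\mid C \mid = 6$, $\mid A \cap B \mid = 1$, $\mid A \cap C \mid = 3$, $\mid B \cap C \mid = 3$ and $\mid A \cap B \cap C \mid = 1$, so the right hand side works out to $5 + 4 + 6 - 1 - 3 - 3 + 1 = 9$, which matches the 9 elements of $A \cup B \cup C$.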
Combining these main commands:
| Method | Equivalent | Result |
| ------ | ------ | ------ |
| a.union(b) | A $\mid$ B | new set with elements from both a and b
| a.intersection(b) | A & B | new set with elements common to a and b
along with the `len(x)` function to get to the cardinality of a given x ("|x|").
What you'll do is translate the left hand side of the equation for the inclusion principle in the object `left_hand_eq`, and the right hand side in the object `right_hand_eq` and see if the results are the same.
<jupyter_code>left_hand_eq = len(A | B | C)
print(left_hand_eq) # 9 elements in the set
right_hand_eq = len(A) + len(B) + len(C) - len(A & B) - len(A & C) - len(B & C) + len(A & B & C)
print(right_hand_eq) # 9 elements in the set
left_hand_eq == right_hand_eq # Use a comparison operator to compare `left_hand_eq` and `right_hand_eq`. Needs to say "True".<jupyter_output><empty_output><jupyter_text>## Set Operations in Python
Mary is preparing for a road trip from her hometown, Boston, to Chicago. She has quite a few pets, yet luckily, so do her friends. They try to make sure that they take care of each other's pets while someone is away on a trip. A month ago, each respective person's pet collection was given by the following three sets:<jupyter_code>Nina = set(["Cat","Dog","Rabbit","Donkey","Parrot", "Goldfish"])
Mary = set(["Dog","Chinchilla","Horse", "Chicken"])
Eve = set(["Rabbit", "Turtle", "Goldfish"])<jupyter_output><empty_output><jupyter_text>In this exercise, you'll be able to use the following operations:
|Operation | Equivalent | Result|
| ------ | ------ | ------ |
|s.update(t) | $s \mid t$ |return set s with elements added from t|
|s.intersection_update(t) | s &= t | return set s keeping only elements also found in t|
|s.difference_update(t) | s -= t |return set s after removing elements found in t|
|s.symmetric_difference_update(t) | s ^= t |return set s with elements from s or t but not both|
|s.add(x) | | add element x to set s|
|s.remove(x) | | remove x from set s|
|s.discard(x) | | removes x from set s if present|
|s.pop() | | remove and return an arbitrary element from s|
|s.clear() | |remove all elements from set s|
Sadly, Eve's turtle passed away last week. Let's update her pet list accordingly.<jupyter_code>Eve.remove('Turtle')
Eve # should be {'Rabbit', 'Goldfish'}<jupyter_output><empty_output><jupyter_text>This time around, Nina promised to take care of Mary's pets while she's away. But she also wants to make sure her pets are well taken care of. As Nina is already spending a considerable amount of time taking care of her own pets, adding a few more won't make that much of a difference. Nina does want to update her list while Mary is away. <jupyter_code>Nina.update(Mary)
Nina # {'Chicken', 'Horse', 'Chinchilla', 'Parrot', 'Rabbit', 'Donkey', 'Dog', 'Cat', 'Goldfish'}<jupyter_output><empty_output><jupyter_text>Mary, on the other hand, wants to clear her list altogether while away:<jupyter_code>Mary.clear()
Mary # set()<jupyter_output><empty_output><jupyter_text>Look at how many species Nina is taking care of right now.<jupyter_code>n_species_Nina = len(Nina)
n_species_Nina # 9<jupyter_output><empty_output><jupyter_text>Taking care of this many pets is weighing heavily on Nina. She remembered Eve had a smaller collection of pets lately, and that's why she asks Eve to take care of the common species. This way, the extra pets are not a huge effort on Eve's behalf. Let's update Nina's pet collection.<jupyter_code>Nina.difference_update(Eve)
Nina # 7<jupyter_output><empty_output><jupyter_text>Taking care of 7 species is something Nina feels comfortable doing!
## Writing Down the Elements in a Set
Mary dropped off her pets at Nina's house and finally made her way to the highway. Awesome, her vacation has begun!
She's approaching an exit. At the end of this particular highway exit, cars can either turn left (L), go straight (S) or turn right (R). It's pretty busy and there are two cars driving close to her. What you'll do now is create several sets. You won't be using Python here, it's sufficient to write the sets down on paper. A good notion of sets and subsets will help you calculate probabilities in the next lab!
Note: each set of action is what _all three cars_ are doing at any given time
a. Create a set $A$ of all possible outcomes assuming that all three cars drive in the same direction.
b. Create a set $B$ of all possible outcomes assuming that all three cars drive in a different direction.
c. Create a set $C$ of all possible outcomes assuming that exactly 2 cars turn right.
d. Create a set $D$ of all possible outcomes assuming that exactly 2 cars drive in the same direction.
e. Write down the interpretation and give all possible outcomes for the sets denoted by:
- I. $D'$
- II. $C \cap D$,
- III. $C \cup D$.
## Optional Exercise: European Countries
Use set operations to determine which European countries are not in the European Union. You just might have to clean the data first with pandas.<jupyter_code>import pandas as pd
#Load Europe and EU
europe = pd.read_excel('Europe_and_EU.xlsx', sheet_name = 'Europe')
eu = pd.read_excel('Europe_and_EU.xlsx', sheet_name = 'EU')
#Use pandas to remove any whitespace from names
europe.columns = europe.columns.str.replace(' ', '_')
eu.columns = eu.columns.str.replace(' ', '_')
europe.head(3) #preview dataframe
eu.head(3)
# Your code comes here
europe.Country = europe.Country.str.replace(' ', '_')
eu.Country = eu.Country.str.replace(' ', '_')
eu_set = set(eu.Country.to_list())
europe_set = set(europe.Country.to_list())
europe_set.difference(eu_set)<jupyter_output><empty_output>
|
non_permissive
|
/index.ipynb
|
Lionslicer-Coding/dsc-intro-to-sets-lab-onl01-dtsc-pt-030220
| 17 |
<jupyter_start><jupyter_text>1. Given index=["India","Pakistan","Bangladesh","Srilanka","china","Nepal"] and data=[1,3,5,np.nan,6,8], create a Series with "index" as the indices and data as the values.
Expected output
India 1.0
Pakistan 3.0
Bangladesh 5.0
Srilanka NaN
China 6.0
Nepal 8.0
Name: myseries, dtype: float64<jupyter_code>#please type your answer below this line.
import numpy as np
import pandas as pd
index=["India", "Pakistan", "Bangladesh","Srilanka","China", "Nepal"]
data=[1,3,5, np.nan,6,8]
print (pd.Series (data, index, name='myseries'))<jupyter_output>India 1.0
Pakistan 3.0
Bangladesh 5.0
Srilanka NaN
China 6.0
Nepal 8.0
Name: myseries, dtype: float64
<jupyter_text>2. Create a date range 'dates' starting from 2021-01-01 with 6 periods, create a random 6x4 array and the list 'ABCD'. Now create a dataframe with 'dates' as the index, the list as the columns, and the array as the data of this table.
Expected output
A B C D
2021-01-01 -0.429358 0.779826 0.389625 0.248792
2021-01-02 -1.044721 0.272807 -0.370437 1.177273
2021-01-03 -0.961394 -0.525376 0.120843 0.470060
2021-01-04 1.842266 -0.551049 0.588390 -0.326937
2021-01-05 0.986198 0.391301 -0.084837 -0.855573
2021-01-06 -1.531207 0.905463 -0.099024 0.589219
<jupyter_code>#please type
import numpy as np
import pandas as pd
dates = pd.date_range('2021-01-01',periods=6)
df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD'))
df<jupyter_output><empty_output>
|
no_license
|
/TASK_16.ipynb
|
gaya3-b/LEARN.PY
| 2 |
<jupyter_start><jupyter_text>The objective is to predict the likelihood of different types of white blood cells in each picture.
In general, there are two classes available in the dataset:
1. MONONUCLEAR
2. POLYNUCLEAR
# Training set: Class distribution<jupyter_code>train_path = "../Dataset/train/"
sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
count_dict = {}
for i in sub_folders:
class_count = len(check_output(["ls", "../Dataset/train/" + i]).decode("utf8").strip().split('\n'))
print("Total count for class", i,":",class_count)
count_dict[i] = class_count
plt.figure(figsize=(15, 8))
sns.barplot(list(count_dict.keys()), list(count_dict.values()), alpha=0.8)
plt.title("Training set: Class count", fontsize=15)
plt.xlabel("Types of white blood cells", fontsize=12)
plt.ylabel("Image count", fontsize=12)
plt.show()<jupyter_output>('Total count for class', u'MONONUCLEAR', ':', 4996)
('Total count for class', u'POLYNUCLEAR', ':', 4961)
<jupyter_text>According to my analysis, this is a balanced training dataset. Class MONONUCLEAR is almost equal to class POLYNUCLEAR.
# Training set: Image size<jupyter_code>train_path = "../Dataset/train/"
sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
image_size_dict = {}
for i in sub_folders:
file_names = check_output(["ls", train_path + i]).decode("utf8").strip().split('\n')
for y in file_names:
im_array = imread(train_path + i + "/" + y)
size = "_".join(map(str,list(im_array.shape)))
image_size_dict[size] = image_size_dict.get(size, 0) + 1
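# Note: imread decodes the full pixel array just to read im_array.shape above; for a
# very large dataset, PIL's lazy Image.open(path).size would report the dimensions
# without decoding the whole image.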
plt.figure(figsize=(15, 8))
sns.barplot(list(image_size_dict.keys()), list(image_size_dict.values()), alpha=0.8)
plt.title("Training set: Image size count", fontsize=15)
plt.xlabel('Image size', fontsize=12)
plt.ylabel('Image count', fontsize=12)
plt.show()<jupyter_output><empty_output>
|
no_license
|
/Notebook/Exploratory Notebook v1.ipynb
|
JLewiste/White-Blood-Cells-Classification-via-Convolutional-Neural-Network
| 2 |
<jupyter_start><jupyter_text># 机器学习工程师纳米学位
## 模型评价与验证
## 项目 1: 预测波士顿房价
欢迎来到机器学习工程师纳米学位的第一个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能来让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以**编程练习**开始的标题表示接下来的内容中有需要你必须实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以**TODO**标出。请仔细阅读所有的提示!
除了实现代码外,你还**必须**回答一些与项目和实现有关的问题。每一个需要你回答的问题都会以**'问题 X'**为标题。请仔细阅读每个问题,并且在问题后的**'回答'**文字框中写出完整的答案。你的项目将会根据你对问题的回答和撰写代码所实现的功能来进行评分。
>**提示:**Code 和 Markdown 区域可通过 **Shift + Enter** 快捷键运行。此外,Markdown可以通过双击进入编辑模式。---
## 第一步. 导入数据
在这个项目中,你将利用马萨诸塞州波士顿郊区的房屋信息数据训练和测试一个模型,并对模型的性能和预测能力进行测试。通过该数据训练后的好的模型可以被用来对房屋做特定预测---尤其是对房屋的价值。对于房地产经纪等人的日常工作来说,这样的预测模型被证明非常有价值。
此项目的数据集来自[UCI机器学习知识库(数据集已下线)](https://archive.ics.uci.edu/ml/datasets.html)。波士顿房屋这些数据于1978年开始统计,共506个数据点,涵盖了麻省波士顿不同郊区房屋14种特征的信息。本项目对原始数据集做了以下处理:
- 有16个`'MEDV'` 值为50.0的数据点被移除。 这很可能是由于这些数据点包含**遗失**或**看不到的值**。
- 有1个数据点的 `'RM'` 值为8.78. 这是一个异常值,已经被移除。
- 对于本项目,房屋的`'RM'`, `'LSTAT'`,`'PTRATIO'`以及`'MEDV'`特征是必要的,其余不相关特征已经被移除。
- `'MEDV'`特征的值已经过必要的数学转换,可以反映35年来市场的通货膨胀效应。
运行下面区域的代码以载入波士顿房屋数据集,以及一些此项目所需的Python库。如果成功返回数据集的大小,表示数据集已载入成功。<jupyter_code># 载入此项目所需要的库
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
# 检查你的Python版本
from sys import version_info
if version_info.major != 2 and version_info.minor != 7:
raise Exception('请使用Python 2.7来完成此项目')
# 让结果在notebook中显示
%matplotlib inline
# 载入波士顿房屋的数据集
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# 完成
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)<jupyter_output>Boston housing dataset has 489 data points with 4 variables each.
<jupyter_text>---
## 第二步. 分析数据
在项目的第一个部分,你会对波士顿房地产数据进行初步的观察并给出你的分析。通过对数据的探索来熟悉数据可以让你更好地理解和解释你的结果。
由于这个项目的最终目标是建立一个预测房屋价值的模型,我们需要将数据集分为**特征(features)**和**目标变量(target variable)**。
- **特征** `'RM'`, `'LSTAT'`,和 `'PTRATIO'`,给我们提供了每个数据点的数量相关的信息。
- **目标变量**:` 'MEDV'`,是我们希望预测的变量。
他们分别被存在`features`和`prices`两个变量名中。### 编程练习 1:基础统计运算
你的第一个编程练习是计算有关波士顿房价的描述统计数据。我们已为你导入了` numpy `,你需要使用这个库来执行必要的计算。这些统计数据对于分析模型的预测结果非常重要的。
在下面的代码中,你要做的是:
- 计算`prices`中的`'MEDV'`的最小值、最大值、均值、中值和标准差;
- 将运算结果储存在相应的变量中。<jupyter_code>#TODO 1
#目标:计算价值的最小值
minimum_price = np.min(prices)
#目标:计算价值的最大值
maximum_price = np.max(prices)
#目标:计算价值的平均值
mean_price = np.mean(prices)
#目标:计算价值的中值
median_price = np.median(prices)
#目标:计算价值的标准差
std_price = np.std(prices)
#目标:输出计算的结果
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)<jupyter_output>Statistics for Boston housing dataset:
Minimum price: $105,000.00
Maximum price: $1,024,800.00
Mean price: $454,342.94
Median price $438,900.00
Standard deviation of prices: $165,171.13
<jupyter_text>### 问题 1 - 特征观察
如前文所述,本项目中我们关注的是其中三个值:`'RM'`、`'LSTAT'` 和`'PTRATIO'`,对每一个数据点:
- `'RM'` 是该地区中每个房屋的平均房间数量;
- `'LSTAT'` 是指该地区有多少百分比的业主属于是低收入阶层(有工作但收入微薄);
- `'PTRATIO'` 是该地区的中学和小学里,学生和老师的数目比(`学生/老师`)。
_凭直觉,上述三个特征中对每一个来说,你认为增大该特征的数值,`'MEDV'`的值会是**增大**还是**减小**呢?每一个答案都需要你给出理由。_
**提示:**你预期一个`'RM'` 值是6的房屋跟`'RM'` 值是7的房屋相比,价值更高还是更低呢?### 问题 1 - 回答:
> `'RM'`:增大该特征值,`'MEDV'`的值会增大,因为房间的数量越多,对应的价格应该越高。
> `'LSTAT'`:增大该特征值,`'MEDV'`的值会减少,因为低收入阶层越多,可能说明该区域的地价相对便宜。
> `'PTRATIO'`:增大该特征值,`'MEDV'`的值会减少,因为该值越大,说明教育资源越紧张,也会影响到房屋价格。### 编程练习 2: 数据分割与重排
接下来,你需要把波士顿房屋数据集分成训练和测试两个子集。通常在这个过程中,数据也会被重排列,以消除数据集中由于顺序而产生的偏差。
在下面的代码中,你需要
使用 `sklearn.model_selection` 中的 `train_test_split`, 将`features`和`prices`的数据都分成用于训练的数据子集和用于测试的数据子集。
- 分割比例为:80%的数据用于训练,20%用于测试;
- 选定一个数值以设定 `train_test_split` 中的 `random_state` ,这会确保结果的一致性;<jupyter_code># TODO 2
# 提示: 导入train_test_split
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features,prices,test_size=0.2,random_state=50)<jupyter_output><empty_output><jupyter_text>### 问题 2 - 训练及测试
*将数据集按一定比例分为训练用的数据集和测试用的数据集对学习算法有什么好处?*
*如果用模型已经见过的数据,例如部分训练集数据进行测试,又有什么坏处?*
**提示:** 如果没有数据来对模型进行测试,会出现什么问题?### 问题 2 - 回答:
> 训练数据用于训练模型,如果不对数据进行拆分,所有的数据都用来训练的话,就没有办法验证模型是否是有效以及高效。
> 用部分训练数据进行测试,该部分数据已经拟合过并对模型产生了影响,再用于验证的话,不能说明训练出的模型是否是有效的。---
## 第三步. 模型衡量标准
在项目的第三步中,你需要了解必要的工具和技巧来让你的模型进行预测。用这些工具和技巧对每一个模型的表现做精确的衡量可以极大地增强你预测的信心。### 编程练习3:定义衡量标准
如果不能对模型的训练和测试的表现进行量化地评估,我们就很难衡量模型的好坏。通常我们会定义一些衡量标准,这些标准可以通过对某些误差或者拟合程度的计算来得到。在这个项目中,你将通过运算[*决定系数*](http://stattrek.com/statistics/dictionary.aspx?definition=coefficient_of_determination) R2 来量化模型的表现。模型的决定系数是回归分析中十分常用的统计信息,经常被当作衡量模型预测能力好坏的标准。
R2的数值范围从0至1,表示**目标变量**的预测值和实际值之间的相关程度平方的百分比。一个模型的R2 值为0还不如直接用**平均值**来预测效果好;而一个R2 值为1的模型则可以对目标变量进行完美的预测。从0至1之间的数值,则表示该模型中目标变量中有百分之多少能够用**特征**来解释。_模型也可能出现负值的R2,这种情况下模型所做预测有时会比直接计算目标变量的平均值差很多。_
在下方代码的 `performance_metric` 函数中,你要实现:
- 使用 `sklearn.metrics` 中的 [`r2_score`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html) 来计算 `y_true` 和 `y_predict`的R2值,作为对其表现的评判。
- 将他们的表现评分储存到`score`变量中。
或
- (可选) 不使用任何外部库,参考[决定系数的定义](https://en.wikipedia.org/wiki/Coefficient_of_determination)进行计算,这也可以帮助你更好的理解决定系数在什么情况下等于0或等于1。<jupyter_code># TODO 3
# 提示: 导入r2_score
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
"""计算并返回预测值相比于预测值的分数"""
score = r2_score(y_true,y_predict)
return score
# TODO 3 可选
# 不允许导入任何计算决定系数的库
def performance_metric2(y_true, y_predict):
"""计算并返回预测值相比于预测值的分数"""
score = None
return score<jupyter_output><empty_output><jupyter_text>### 问题 3 - 拟合程度
假设一个数据集有五个数据且一个模型做出下列目标变量的预测:
| 真实数值 | 预测数值 |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
*你觉得这个模型已成功地描述了目标变量的变化吗?如果成功,请解释为什么,如果没有,也请给出原因。*
**提示**:运行下方的代码,使用`performance_metric`函数来计算模型的决定系数。<jupyter_code># 计算这个模型的预测结果的决定系数
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)<jupyter_output>Model has a coefficient of determination, R^2, of 0.923.
<jupyter_text>### 问题 3 - 回答:
> 基本上成功描述了目标变量的变化,因为R^2为0.923,接近于1,说明模型的均方误差比简单均方误差小的多,所以可以认为成功描述了目标变量的变化。---
## 第四步. 分析模型的表现
在项目的第四步,我们来看一下不同参数下,模型在训练集和验证集上的表现。这里,我们专注于一个特定的算法(带剪枝的决策树,但这并不是这个项目的重点),和这个算法的一个参数 `'max_depth'`。用全部训练集训练,选择不同`'max_depth'` 参数,观察这一参数的变化如何影响模型的表现。画出模型的表现来对于分析过程十分有益,这可以让我们看到一些单看结果看不到的行为。### 学习曲线
下方区域内的代码会输出四幅图像,它们是一个决策树模型在不同最大深度下的表现。每一条曲线都直观得显示了随着训练数据量的增加,模型学习曲线的在训练集评分和验证集评分的变化,评分使用决定系数R2。曲线的阴影区域代表的是该曲线的不确定性(用标准差衡量)。
运行下方区域中的代码,并利用输出的图形回答下面的问题。<jupyter_code># 根据不同的训练集大小,和最大深度,生成学习曲线
vs.ModelLearning(X_train, y_train)<jupyter_output><empty_output><jupyter_text>### 问题 4 - 学习曲线
*选择上述图像中的其中一个,并给出其最大深度。随着训练数据量的增加,训练集曲线的评分有怎样的变化?验证集曲线呢?如果有更多的训练数据,是否能有效提升模型的表现呢?*
**提示:**学习曲线的评分是否最终会收敛到特定的值?### 问题 4 - 回答:
> 图二 max_depth=3的图中,随着数据量的增加,训练集曲线的评分逐渐减少,验证集曲线的评分逐渐增加。如果有更多的训练数据集,应该也无法有效提升该模型的表现,因为随着数据的增加,训练集曲线和验证集曲线逐渐贴合并收敛于特定值0.8左右。### 复杂度曲线
下列代码内的区域会输出一幅图像,它展示了一个已经经过训练和验证的决策树模型在不同最大深度条件下的表现。这个图形将包含两条曲线,一个是训练集的变化,一个是验证集的变化。跟**学习曲线**相似,阴影区域代表该曲线的不确定性,模型训练和测试部分的评分都用的 `performance_metric` 函数。
运行下方区域中的代码,并利用输出的图形并回答下面的两个问题。<jupyter_code># 根据不同的最大深度参数,生成复杂度曲线
vs.ModelComplexity(X_train, y_train)<jupyter_output><empty_output><jupyter_text>### 问题 5 - 偏差(bias)与方差(variance)之间的权衡取舍
*当模型以最大深度 1训练时,模型的预测是出现很大的偏差还是出现了很大的方差?当模型以最大深度10训练时,情形又如何呢?图形中的哪些特征能够支持你的结论?*
**提示:** 你如何得知模型是否出现了偏差很大或者方差很大的问题?### 问题 5 - 回答:
> 当模型以最大深度1训练时,模型的预测出现了很大的偏差,当模型以最大深度10训练时出现了很大的方差。因为在深度为1的训练时,训练集与验证集的R^2分数都很低,说明当前模型与数据拟合的还不够好,产生了欠拟合现象,所以偏差较大。
> 当在深度为10的情况下训练时,训练集的分数收敛于1,说明模型与训练集的数据拟合的很好,所以此时偏差较小,但是从图中可以看出随着深度的增加,验证集分数没有明显的提升反而逐渐下降,说明此时模型记住了太多的“特征”导致泛化效果降低,可能产生了过拟合现象,所以这时候的方差较大。### 问题 6- 最优模型的猜测
*结合问题 5 中的图,你认为最大深度是多少的模型能够最好地对未见过的数据进行预测?你得出这个答案的依据是什么?*### 问题 6 - 回答:
> 我认为最大深度是4的时候能够最好的对未见过的数据进行预测。依据是在该深度下,验证集曲线的分数达到最大值,之后随着深度的增加,训练集的分数增加,验证集的分数反而减少,说明模型的泛化效果降低,可能产生了过拟合的现象。---
## 第五步. 选择最优参数### 问题 7- 网格搜索(Grid Search)
*什么是网格搜索法?如何用它来优化模型?*
### 问题 7 - 回答:
> 网格搜索法是遍历给定参数来训练及跟踪模型,是一种参数调优的方法。
> 使用网格搜索法配合交叉验证来优化模型,给定不同的参数进行多次验证取得最优的参数。### 问题 8 - 交叉验证
- 什么是K折交叉验证法(k-fold cross-validation)?
- [GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html)是如何结合交叉验证来完成对最佳参数组合的选择的?
- [GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html)中的`'cv_results_'`属性能告诉我们什么?
- 网格搜索时如果不使用交叉验证会有什么问题?交叉验证又是如何解决这个问题的?
**提示:** 在下面 fit_model函数最后加入 `print pd.DataFrame(grid.cv_results_)` 可以帮你查看更多信息。### 问题 8 - 回答:
K折交叉验证法就是将训练集的数据分成k份,每次使用其中一份作为验证集,其他的作为训练集,以此类推,最后将得到的结果做平均数。
GridSearchCV是逐次遍历指定参数,每次遍历参数时都进行一次K折交叉验证法,以此来寻找最优的模型。
GridSearchCV中的'cv_results_'以字典的方式告诉我们得到的最优的参数及其所对应的平均分数。
网格搜索时如果不使用交叉验证,受限于数据问题,比如数据量的大小,数据的排列分布等问题导致得到的验证集的分数不准确;使用交叉验证经过多次验证,取得验证的平均分数,这样可以平抑验证分数的误差。### 编程练习 4:训练最优模型
在这个练习中,你将需要将所学到的内容整合,使用**决策树算法**训练一个模型。为了得出的是一个最优模型,你需要使用网格搜索法训练模型,以找到最佳的 `'max_depth'` 参数。你可以把`'max_depth'` 参数理解为决策树算法在做出预测前,允许其对数据提出问题的数量。决策树是**监督学习算法**中的一种。
在下方 `fit_model` 函数中,你需要做的是:
1. **定义 `'cross_validator'` 变量**: 使用 `sklearn.model_selection` 中的 [`KFold`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html) 创建一个交叉验证生成器对象;
2. **定义 `'regressor'` 变量**: 使用 `sklearn.tree` 中的 [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) 创建一个决策树的回归函数;
3. **定义 `'params'` 变量**: 为 `'max_depth'` 参数创造一个字典,它的值是从1至10的数组;
4. **定义 `'scoring_fnc'` 变量**: 使用 `sklearn.metrics` 中的 [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) 创建一个评分函数;
将 `‘performance_metric’` 作为参数传至这个函数中;
5. **定义 `'grid'` 变量**: 使用 `sklearn.model_selection` 中的 [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) 创建一个网格搜索对象;将变量`'regressor'`, `'params'`, `'scoring_fnc'`和 `'cross_validator'` 作为参数传至这个对象构造函数中;
如果你对python函数的默认参数定义和传递不熟悉,可以参考这个MIT课程的[视频](http://cn-static.udacity.com/mlnd/videos/MIT600XXT114-V004200_DTH.mp4)。<jupyter_code># TODO 4
#提示: 导入 'KFold' 'DecisionTreeRegressor' 'make_scorer' 'GridSearchCV'
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
def fit_model(X, y):
""" 基于输入数据 [X,y],利于网格搜索找到最优的决策树模型"""
cross_validator = KFold(n_splits=10)
regressor = DecisionTreeRegressor(random_state=0)
params = {'max_depth':range(1,11)}
scoring_fnc = make_scorer(performance_metric)
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cross_validator)
# 基于输入数据 [X,y],进行网格搜索
grid = grid.fit(X, y)
# 返回网格搜索后的最优模型
return grid.best_estimator_<jupyter_output><empty_output><jupyter_text>### 编程练习 4:训练最优模型 (可选)
在这个练习中,你将需要将所学到的内容整合,使用**决策树算法**训练一个模型。为了得出的是一个最优模型,你需要使用网格搜索法训练模型,以找到最佳的 `'max_depth'` 参数。你可以把`'max_depth'` 参数理解为决策树算法在做出预测前,允许其对数据提出问题的数量。决策树是**监督学习算法**中的一种。
在下方 `fit_model` 函数中,你需要做的是:
- 遍历参数`‘max_depth’`的可选值 1~10,构造对应模型
- 计算当前模型的交叉验证分数
- 返回最优交叉验证分数对应的模型<jupyter_code># TODO 4 可选
'''
不允许使用 DecisionTreeRegressor 以外的任何 sklearn 库
提示: 你可能需要实现下面的 cross_val_score 函数
def cross_val_score(estimator, X, y, scoring = performance_metric, cv=3):
""" 返回每组交叉验证的模型分数的数组 """
scores = [0,0,0]
return scores
'''
def fit_model2(X, y):
""" 基于输入数据 [X,y],利于网格搜索找到最优的决策树模型"""
#最优交叉验证分数对应的最优模型
best_estimator = None
return best_estimator<jupyter_output><empty_output><jupyter_text>### 问题 9 - 最优模型
*最优模型的最大深度(maximum depth)是多少?此答案与你在**问题 6**所做的猜测是否相同?*
运行下方区域内的代码,将决策树回归函数代入训练数据的集合,以得到最优化的模型。<jupyter_code># 基于训练数据,获得最优模型
optimal_reg = fit_model(X_train, y_train)
# 输出最优模型的 'max_depth' 参数
print "Parameter 'max_depth' is {} for the optimal model.".format(optimal_reg.get_params()['max_depth'])<jupyter_output>Parameter 'max_depth' is 4 for the optimal model.
<jupyter_text>### 问题 9 - 回答:
最优模型的最大深度是max_depth=4,与问题6所做猜测相同。## 第六步. 做出预测
当我们用数据训练出一个模型,它现在就可用于对新的数据进行预测。在决策树回归函数中,模型已经学会对新输入的数据*提问*,并返回对**目标变量**的预测值。你可以用这个预测来获取数据未知目标变量的信息,这些数据必须是不包含在训练数据之内的。### 问题 10 - 预测销售价格
想像你是一个在波士顿地区的房屋经纪人,并期待使用此模型以帮助你的客户评估他们想出售的房屋。你已经从你的三个客户收集到以下的资讯:
| 特征 | 客戶 1 | 客戶 2 | 客戶 3 |
| :---: | :---: | :---: | :---: |
| 房屋内房间总数 | 5 间房间 | 4 间房间 | 8 间房间 |
| 社区贫困指数(%被认为是贫困阶层) | 17% | 32% | 3% |
| 邻近学校的学生-老师比例 | 15:1 | 22:1 | 12:1 |
*你会建议每位客户的房屋销售的价格为多少?从房屋特征的数值判断,这样的价格合理吗?为什么?*
**提示:**用你在**分析数据**部分计算出来的统计信息来帮助你证明你的答案。
运行下列的代码区域,使用你优化的模型来为每位客户的房屋价值做出预测。<jupyter_code># 生成三个客户的数据
client_data = [[5, 17, 15], # 客户 1
[4, 32, 22], # 客户 2
[8, 3, 12]] # 客户 3
# 进行预测
predicted_price = optimal_reg.predict(client_data)
for i, price in enumerate(predicted_price):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)<jupyter_output>Predicted selling price for Client 1's home: $404,911.11
Predicted selling price for Client 2's home: $212,223.53
Predicted selling price for Client 3's home: $938,053.85
<jupyter_text>### 问题 10 - 回答:
建议客户1的房屋销售价格为`$404,911.11`,客户2的房屋销售价格为`$212,223.53`,客户3的房屋销售价格为`$938,053.85`。
这样的价格还是比较合理的。首先,从分析数据部分得到的统计信息:
最小销售价格: `$105,000.00`
最大销售价格: `$1,024,800.00`
平均销售价格: `$454,342.94`
可以看出我们得到的三个预测值在该统计数据范围内。
其次,从房屋特征的数值来看客户2的房产房间数最少说明面积可能很小,社区贫困指数最高说明地段可能相对便宜,学生/教师比例最高说明教育资源可能比较紧张,所以客户二的房价是三个之中最低的比较合理。
同理,客户3的房产从特征上来看是最好的,所以价格最高;客户1的房产的特征值在二者之间,所以价格也在二者之间。
### 编程练习 5
你刚刚预测了三个客户的房子的售价。在这个练习中,你将用你的最优模型在整个测试数据上进行预测, 并计算相对于目标变量的决定系数 R2的值**。<jupyter_code>#TODO 5
# 提示:你可能需要用到 X_test, y_test, optimal_reg, performance_metric
# 提示:你可能需要参考问题10的代码进行预测
# 提示:你可能需要参考问题3的代码来计算R^2的值
r2 = 1
predicted_price = optimal_reg.predict(X_test)
r2 = performance_metric(y_test,predicted_price)
print "Optimal model has R^2 score {:,.2f} on test data".format(r2)<jupyter_output>Optimal model has R^2 score 0.76 on test data
<jupyter_text>### 问题11 - 分析决定系数
你刚刚计算了最优模型在测试集上的决定系数,你会如何评价这个结果?### 问题11 - 回答
R^2 score为0.76,我认为这个结果说明该模型对房价预测能起到一定辅助作用,但是还不能起到决定性的作用。### 模型健壮性
一个最优的模型不一定是一个健壮模型。有的时候模型会过于复杂或者过于简单,以致于难以泛化新增添的数据;有的时候模型采用的学习算法并不适用于特定的数据结构;有的时候样本本身可能有太多噪点或样本过少,使得模型无法准确地预测目标变量。这些情况下我们会说模型是欠拟合的。
### 问题 12 - 模型健壮性
模型是否足够健壮来保证预测的一致性?
**提示**: 执行下方区域中的代码,采用不同的训练和测试集执行 `fit_model` 函数10次。注意观察对一个特定的客户来说,预测是如何随训练数据的变化而变化的。<jupyter_code># 请先注释掉 fit_model 函数里的所有 print 语句
vs.PredictTrials(features, prices, fit_model, client_data)<jupyter_output>Trial 1: $391,183.33
Trial 2: $411,417.39
Trial 3: $415,800.00
Trial 4: $420,622.22
Trial 5: $418,377.27
Trial 6: $411,931.58
Trial 7: $399,663.16
Trial 8: $407,232.00
Trial 9: $402,531.82
Trial 10: $413,700.00
Range in prices: $29,438.89
<jupyter_text>### 问题 12 - 回答:
模型有足够的健壮性,fit_model执行10次,从数据看出每次得到的返回值虽然上下浮动,但是基本上维持在`$400000`左右,所以有足够的健壮性。### 问题 13 - 实用性探讨
*简单地讨论一下你建构的模型能否在现实世界中使用?*
提示:回答以下几个问题,并给出相应结论的理由:
- *1978年所采集的数据,在已考虑通货膨胀的前提下,在今天是否仍然适用?*
- *数据中呈现的特征是否足够描述一个房屋?*
- *在波士顿这样的大都市采集的数据,能否应用在其它乡镇地区?*
- *你觉得仅仅凭房屋所在社区的环境来判断房屋价值合理吗?*### 问题 13 - 回答:
1.1978年所采集的数据,即使考虑通过膨胀的情况下,在今天已经不适用了,因为可能数据的特征值已经完全不同,比如某些地区大开发,地价提升。
2.不能足够描述一个房屋,比如真正的面积,房间多不一定面积就足够大。
3.不能,环境不同,特征值应该也有所不同。
4.不合理,除了社区环境还有很多因素,比如政府的未来规划,或者房间的当前价值,比如装修优良等等。## 可选问题 - 预测北京房价
(本题结果不影响项目是否通过)通过上面的实践,相信你对机器学习的一些常用概念有了很好的领悟和掌握。但利用70年代的波士顿房价数据进行建模的确对我们来说意义不是太大。现在你可以把你上面所学应用到北京房价数据集中 `bj_housing.csv`。
免责声明:考虑到北京房价受到宏观经济、政策调整等众多因素的直接影响,预测结果仅供参考。
这个数据集的特征有:
- Area:房屋面积,平方米
- Room:房间数,间
- Living: 厅数,间
- School: 是否为学区房,0或1
- Year: 房屋建造时间,年
- Floor: 房屋所处楼层,层
目标变量:
- Value: 房屋人民币售价,万
你可以参考上面学到的内容,拿这个数据集来练习数据分割与重排、定义衡量标准、训练模型、评价模型表现、使用网格搜索配合交叉验证对参数进行调优并选出最佳参数,比较两者的差别,最终得出最佳模型对验证集的预测分数。<jupyter_code># TODO 6
# 你的代码<jupyter_output><empty_output>
|
no_license
|
/机器学习入门项目/波士顿房价预测/boston_housing.ipynb
|
cx-jiang/m-learning
| 14 |
<jupyter_start><jupyter_text># Torch and Numpy
https://morvanzhou.github.io/tutorials/machine-learning/torch/2-01-torch-numpy/
http://pytorch-cn.readthedocs.io/zh/latest/package_references/torch/<jupyter_code>import torch
import numpy as np
<jupyter_output><empty_output><jupyter_text># translate between numpy and tensor<jupyter_code>np_data = np.arange(6).reshape((2, 3))
print('numpy data\n', np_data)
# numpy 2 torch tensor
torch_data = torch.from_numpy(np_data)
print('torch data', torch_data)
# torch tensor to numpy
np_from_tensor = torch_data.numpy()
print('numpy from tensor\n', np_from_tensor)<jupyter_output>numpy from tensor
[[0 1 2]
[3 4 5]]
<jupyter_text>## Elementwise operations and reductions (abs, sin, mean)<jupyter_code>torch_data = torch.arange(-2,8,1).resize_(2,5)
print('arange torch data', torch_data)
# absolute value
print('abs', torch_data.abs())
# sin
print('sin', torch_data.sin())
# mean
print('mean \n', torch_data.mean())<jupyter_output>mean
2.5
<jupyter_text># matrix multiply and dot multiply<jupyter_code>data = [[1,2], [3,4]]
numpy_array = np.array(data)
torch_tensor = torch.FloatTensor(data)
# matrix multiply
print('numpy matrix multiply (np.matmul) \n', np.matmul(numpy_array, numpy_array))
print('torch matrix multiply (tensor.mm)', torch_tensor.mm(torch_tensor))
<jupyter_output>numpy matrix multiply (np.matmul)
[[ 7 10]
[15 22]]
torch matrix multiply (tensor.mm)
7 10
15 22
[torch.FloatTensor of size 2x2]
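The heading above also promises dot (element-wise) multiplication, which this excerpt does not show; a small added sketch with the same `data` would be:

```python
# element-wise (Hadamard) product, in contrast to the matrix product above
print('numpy element-wise multiply (a * a)\n', numpy_array * numpy_array)         # [[1 4] [9 16]]
print('torch element-wise multiply (tensor.mul)', torch_tensor.mul(torch_tensor))
```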
|
no_license
|
/pyTorch/.ipynb_checkpoints/0-1 Torch_or_Numpy-checkpoint.ipynb
|
YiJingLin/-Note-python
| 4 |
<jupyter_start><jupyter_text># Outlier Removal<jupyter_code>num_cols = ['bra_size','hips','quality','shoe_size','size','waist', 'bust', 'height', ]
plt.figure(figsize=(18,9))
modcloth_df[num_cols].boxplot()
plt.title("Numerical variables in Modcloth dataset", fontsize=20)
plt.show()
num_cols = ['bra_size','hips','quality','shoe_size','size','waist', 'bust', 'height', ]
plt.figure(figsize=(18,9))
np.log(modcloth_df[num_cols]).boxplot()
plt.title("Numerical variables(log transformed) in Modcloth dataset", fontsize=20)
plt.show()<jupyter_output>C:\Users\Raja\anaconda3\envs\env2\lib\site-packages\ipykernel_launcher.py:3: RuntimeWarning: divide by zero encountered in log
This is separate from the ipykernel package so we can avoid doing imports until
<jupyter_text>As we can see, there are a couple of variables with values that lie in the outlier range (by rule of thumb; some deeper analysis could be done if these anomalies can be explained and there is evidence for them). Here I am presuming them to be outliers and removing the values beyond Q3 + 1.5*IQR:
- bra_size, hips, shoe_size, waist, bust, height
@Note :- Implement other methods if time permits<jupyter_code>Q1 = modcloth_df.quantile(0.25)
Q3 = modcloth_df.quantile(0.75)
Q1.drop(["item_id","user_id"], inplace =True)
Q3.drop(["item_id","user_id"], inplace = True)
IQR = Q3 - Q1
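# The filter below keeps a row only when none of the quantile-checked columns falls
# outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]: the two comparisons flag out-of-range cells,
# .any(axis=1) marks rows with at least one such cell, and ~ inverts that mask.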
modcloth_df
modcloth_df = modcloth_df[~((modcloth_df < (Q1 - 1.5 * IQR)) |(modcloth_df > (Q3 + 1.5 * IQR))).any(axis=1)]
modcloth_df.reset_index(drop=True, inplace=True)<jupyter_output><empty_output><jupyter_text># Handling Missing Values<jupyter_code># Checking amount of missing values
missing_data = pd.DataFrame({'perc_missing': (modcloth_df.isnull().sum()/modcloth_df.shape[0])*100})
missing_data.sort_values("perc_missing", ascending=False, inplace = True)
missing_data
# The first 4 columns, namely waist, bust, shoe_width and shoe_size, have a high share of missing data
# and hence should be dropped. shoe_size could be an indicator that the product is a shoe;
# otherwise these columns should be dropped as well.
# Checking whether the product is a shoe
shoe_review_df = modcloth_df[np.logical_and(pd.notnull(modcloth_df["review_text"]), pd.notnull(modcloth_df["shoe_size"]))][["user_name","shoe_size","review_text"]]
shoe_review_df.shape
# %age review containing shoe mentions
np.sum([True if len(re.findall(r"shoes|shoe",x))> 0 else False for x in shoe_review_df.review_text])/shoe_review_df.shape[0]
# There are 585 users with differing shoe_sizes (since time information is not present,
# also the person for which the product is bought for is not known) it is difficult
# to call this misrepresentation as an error
np.sum(shoe_review_df.groupby('user_name')['shoe_size'].unique().apply(lambda x: len(x)) > 1)
# There are roughly 700 new additions to the above logic, which means that for the same
# user the shoe_size is reported here but is not reported elsewhere in the data.
np.sum(modcloth_df.groupby('user_name')['shoe_size'].unique().apply(lambda x: len(x)) > 1)<jupyter_output><empty_output><jupyter_text>Although the methodology above is heuristic, it gives a fair sense that the presence of a shoe size cannot indicate that shoes were bought, hence all 4 columns should be removed. (bust was also found to be highly correlated to waist size when I used the missingno heatmap without dropping the 4 columns)
There are other reasons to be aggressive about dropping the shoe-size information; check the correlation graph below between height and shoe_size.<jupyter_code>fig = plt.gcf()
fig.set_size_inches(20,10)
__ = sns.violinplot(x='shoe_size', y='height',data=modcloth_df)
modcloth_df.drop(["waist", "bust", "shoe_width", "shoe_size"], axis= 1, inplace=True)<jupyter_output>C:\Users\Raja\anaconda3\envs\env2\lib\site-packages\pandas\core\frame.py:3997: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
<jupyter_text>Using the missingno package, analyze whether the missing values follow a pattern when examined against the other columns.<jupyter_code># missing values in different columns as a heatmap
msno.heatmap(modcloth_df) <jupyter_output><empty_output><jupyter_text>Some observations are :-
- Correlation of cupsize and brasize
- Impute using mode for different category of cupsize and brasize
- where both cupsize and brasize are nan use column mode
- review_summary and review_text are a 1-to-1 match, so their missing values can
be imputed by replacing them with "Unknown"
- height, quality and length can be imputed with measures of central tendency
- missing values in hips don't show a high correlation to any of the other columns,
and hence we will categorize the column based on its distribution(kde plot)### Handling cupsize and brasize<jupyter_code>bra_size_to_cup_size = {x:y for x,y in modcloth_df.groupby("bra_size")["cup_size"].agg(pd.Series.mode).reset_index().values}
cup_size_to_bra_size = {x:y for x,y in modcloth_df.groupby("cup_size")["bra_size"].median().reset_index().values}
bra_size_med = modcloth_df.bra_size.median()
cup_size_mod = bra_size_to_cup_size[bra_size_med] # not matching with cup size mode
modcloth_df
imputed_value = []
for x,y in zip(modcloth_df["bra_size"],modcloth_df["cup_size"]):
if pd.isnull(x) and pd.isnull(y):
imputed_value.append([bra_size_med, cup_size_mod])
elif pd.isnull(x) and pd.notnull(y):
imputed_value.append([cup_size_to_bra_size[y], y])
elif pd.notnull(x) and pd.isnull(y):
imputed_value.append([x, bra_size_to_cup_size[x]])
else:
imputed_value.append([x,y])
modcloth_df.loc[:,["bra_size","cup_size"]] = imputed_value<jupyter_output>C:\Users\Raja\anaconda3\envs\env2\lib\site-packages\pandas\core\indexing.py:966: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[item] = s
<jupyter_text>Although I have done this imputation, if I had a dataset restricted to a known gender group I would have kept the NAs as unknowns, because it doesn't make business sense to hard-impute these values for every kind of user.### Handling Review text and summary<jupyter_code>modcloth_df.review_summary.fillna("Unknown", inplace=True)
modcloth_df.review_text.fillna("Unknown", inplace=True)<jupyter_output>C:\Users\Raja\anaconda3\envs\env2\lib\site-packages\pandas\core\generic.py:6245: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._update_inplace(new_data)
<jupyter_text>### Handling height, quality and length data<jupyter_code># from sklearn.preprocessing import Imputer
# median_imputer = Imputer(missing_values='NaN', strategy='median', axis=0)
# modcloth_df[['height','quality']] = median_imputer.fit_transform(modcloth_df[['height','quality']])
from sklearn.impute import SimpleImputer
median_imputer = SimpleImputer(missing_values=np.nan, strategy='median')
modcloth_df[['height','quality']] = median_imputer.fit_transform(modcloth_df[['height','quality']])
# imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
modcloth_df["length"] = modcloth_df.length.fillna(modcloth_df['length'].value_counts().index[0])
modcloth_df.hips.plot.kde()
modcloth_df.hips.fillna(-1.0, inplace = True)
bins = [-2,0,31,37,40,44,75]
labels = ['Unknown','XS','S','M', 'L','XL']
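# Note: missing hips were filled with -1.0 above, so they fall into the (-2, 0] bin
# and receive the 'Unknown' label; the remaining bin edges are cut points chosen
# from the KDE plot of the hips distribution.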
modcloth_df.hips = pd.cut(modcloth_df.hips, bins, labels=labels)<jupyter_output>C:\Users\Raja\anaconda3\envs\env2\lib\site-packages\pandas\core\generic.py:6245: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._update_inplace(new_data)
C:\Users\Raja\anaconda3\envs\env2\lib\site-packages\pandas\core\generic.py:5303: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self[name] = value
<jupyter_text>It is also observed that review_summary text is randomly truncated and hence is not that useful as compared to review_text column.
<jupyter_code>modcloth_df.drop('review_summary', axis=1, inplace = True)
name_id_df = modcloth_df.groupby("user_name")['user_id'].apply(lambda x: len(np.unique(x))).reset_index()
id_name_df = modcloth_df.groupby("user_id")['user_name'].apply(lambda x: len(np.unique(x))).reset_index()
<jupyter_output><empty_output><jupyter_text>On analysis it looks like user_name needs to be normalized by lowercasing it, and user_id is not the right column to keep: one user_name maps to many user_ids, while each user_id maps to a single (normalized) user_name.<jupyter_code>modcloth_df.drop('user_id', axis=1, inplace = True)
modcloth_df["user_name"] = modcloth_df.user_name.apply(lambda x: x.lower())
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
from tensorflow import feature_column
from sklearn.model_selection import train_test_split
tf.__version__<jupyter_output><empty_output><jupyter_text># Building a Deep learning model inspired by architecture in the given paper,
# https://arxiv.org/pdf/1907.09844.pdf (Fig 1)
<jupyter_code># Split into train and validation dataset, i am not showing any performance on
# test dataset for now,
train, val = train_test_split(modcloth_df, test_size=0.2)
user_categorical_features = ["user_name","hips","cup_size"]
user_numerical_features = ["height","bra_size"]
item_categorical_features = ["item_id", "category", "length"]
item_numerical_features = ["size","quality"]
# Scaling the numerical features
from sklearn import preprocessing
scaler = preprocessing.MinMaxScaler().fit(train[["height","bra_size","size","quality"]])
tr = scaler.transform(train.loc[:, ("height","bra_size","size","quality")])
va = scaler.transform(val.loc[:, ("height","bra_size","size","quality")])
print(train)
train.loc[:, ("height","bra_size","size","quality")] = tr
val.loc[:, ("height","bra_size","size","quality")] = va
print("----------------after----------------")
print(train)
# d = {'col1': [1, 2, 6], 'col2': [3, 4, 7], 'col3': [5, 6, 1], 'col4': [7, 8, 0]}
# df = pd.DataFrame(data=d)
# df.loc[1, ("col1", "col2", "col3", "col4")]=tr[0]
# df
user_categorical_features
for col in user_categorical_features + item_categorical_features:
modcloth_df[col] = modcloth_df[col].astype(str)
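# The cast to str above is needed because the vocabulary-list feature columns and the
# dtype='string' Keras Inputs defined below expect string-valued categorical features.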
modcloth_df
feature_column.numeric_column(col)
# 1) Create feature columns for ingestion into NN
# Numeric Columns
numeric_users = {
col : feature_column.numeric_column(col) \
for col in user_numerical_features
}
numeric_items = {
col : feature_column.numeric_column(col) \
for col in item_numerical_features
}
# Categorical Columns
# Now categorical columns can be encoded into one-hot vectors and fed into NN
# But the paper has used embedding and hence we can use the same (generally used for
# a categroical feature with lots of categories, but let us see)
hips = feature_column.categorical_column_with_vocabulary_list(
'hips', modcloth_df.hips.unique().tolist())
cup_size = feature_column.categorical_column_with_vocabulary_list(
'cup_size', modcloth_df.cup_size.unique().tolist())
user_name = feature_column.categorical_column_with_vocabulary_list(
'user_name', modcloth_df.user_name.unique().tolist())
item_id = feature_column.categorical_column_with_vocabulary_list(
'item_id', modcloth_df.item_id.unique().tolist())
category = feature_column.categorical_column_with_vocabulary_list(
'category', modcloth_df.category.unique().tolist())
length = feature_column.categorical_column_with_vocabulary_list(
'length', modcloth_df.length.unique().tolist())
# There are 6 dimension hyperparameters to be given here which is a lot,
# hence for now i am giving these values, later we will see if we can
# somehow guide the decision via cross-val
hips_embedding = feature_column.embedding_column(hips, dimension=5)
cup_size_embedding = feature_column.embedding_column(cup_size, dimension=5)
user_name_embedding = feature_column.embedding_column(user_name, dimension=50)
item_id_embedding = feature_column.embedding_column(item_id, dimension=50)
category_embedding = feature_column.embedding_column(category, dimension=5)
length_embedding = feature_column.embedding_column(length, dimension=5)
cat_users = {
'hips' : hips_embedding,
'cup_size' : cup_size_embedding,
'user_name': user_name_embedding
}
cat_items = {
'item_id' : item_id_embedding,
'category' : category_embedding,
'length': length_embedding
}
input_user = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32') \
for colname in numeric_users.keys()
}
input_user.update({
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string') \
for colname in cat_users.keys()
})
input_items = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype = 'float32') \
for colname in numeric_items.keys()
}
input_items.update({
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string') \
for colname in cat_items.keys()
})
input_items
# Create a feature layer
feature_layer_users = keras.layers.DenseFeatures(numeric_users.values())(input_user)
feature_layer_items = keras.layers.DenseFeatures(numeric_items.values())(input_items)
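# Note: only the numeric feature columns are passed to DenseFeatures here; the embedding
# columns built above (cat_users / cat_items) are not wired into these layers, so the
# categorical features do not influence the network as it is currently defined.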
CLASS_LABELS = np.array(["fit","small","large"])
# 2) Create a input pipeline using tf.data
# A utility method to create a tf.data dataset from a Pandas Dataframe
import copy
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('fit')
labels = labels.apply(lambda x:x == CLASS_LABELS)
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
#prefetching was giving some trouble on google colab,
#there might be some issue with some gdfs, hence not here
return ds<jupyter_output><empty_output><jupyter_text>The class defined below is basically the trapezium blocks show in Fig 1 of the paper, this is how i am thinking that skip connections would have been formed (there are no details or implementation available to reinforce my believe).<jupyter_code>class SkipCon(keras.layers.Layer):
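# Label-encoding illustration for df_to_dataset above: comparing a fit string against
# CLASS_LABELS gives a boolean vector that categorical_crossentropy treats as one-hot, e.g.
#   "small" == CLASS_LABELS                  ->  array([False,  True, False])
#   ("small" == CLASS_LABELS).astype(int)    ->  array([0, 1, 0])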
def __init__(self, size, reduce = True, deep = 3, skip_when=0, activation="relu", **kwargs):
"""
@Params
size = size of dense layer
deep = the depth of network in one SkipCon block call
skip_when = if a skip connection is required, pass 1
activation = by default using relu, in the paper authors have used tanh(no reasons again)
"""
super().__init__(**kwargs)
self.activation = keras.activations.get(activation) # used to combine
# skip connections and cascaded dense layers
self.main_layers =[]
self.skip_when = skip_when #to be used in call as a control
if reduce:
for _ in range(deep):
self.main_layers.extend([
keras.layers.Dense(size, activation=activation,
use_bias=True),
keras.layers.BatchNormalization()])
# Reduce the input size by two each time, if the
# network is to be designed deeper and narrow
size = size/2
else:
for _ in range(deep):
self.main_layers.extend([
keras.layers.Dense(size, activation=activation,
use_bias=True),
keras.layers.BatchNormalization()])
self.skip_layers = []
if skip_when > 0:
if reduce:
size = size*2 # since the size of skipped connection
# should match with cascaded dense
self.skip_layers = [
keras.layers.Dense(size, activation=activation,
use_bias=True),
keras.layers.BatchNormalization()]
def call(self, inputs):
Z = inputs
for layer in self.main_layers:
Z = layer(Z)
if not self.skip_when:
return self.activation(Z)
skip_Z = inputs
for layer in self.skip_layers:
skip_Z = layer(skip_Z)
return self.activation(Z + skip_Z)
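# Illustrative usage of the block defined above (mirrors how it is called further down):
#   block = SkipCon(size=256, deep=2, reduce=False, skip_when=1, activation="relu")
#   y = block(x)   # main path and skip path both map x to 256 units and are summed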
batch_size = 512 # paper they have taken 2048 batch size, but no clear explanation
# tanh activation is used, but i am using relu
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
user_layer = keras.layers.Dense(256, activation='relu', use_bias = True)(feature_layer_users)
# Add a Skip Connection
user_layer = SkipCon(size = 256, deep = 2, reduce = False, skip_when=1, activation="relu")(user_layer)
user_layer = keras.layers.Dropout(0.5)(user_layer) # Way to handle overfitting
user_layer = SkipCon(size = 256, deep = 2, reduce = True, skip_when=1, activation="relu")(user_layer)
user_layer = keras.layers.Dropout(0.5)(user_layer)
user_layer = SkipCon(size = 64, deep = 2, reduce = True, skip_when=0, activation="relu")(user_layer)
item_layer = keras.layers.Dense(256, activation='relu', use_bias = True)(feature_layer_items)
# Add a Skip Connection
item_layer = SkipCon(size = 256, deep = 2, reduce = False, skip_when=1, activation="relu")(item_layer)
item_layer = keras.layers.Dropout(0.5)(item_layer) # Way to handle overfitting
item_layer = SkipCon(size = 256, deep = 2, reduce = True, skip_when=1, activation="relu")(item_layer)
item_layer = keras.layers.Dropout(0.5)(item_layer)
item_layer = SkipCon(size = 64, deep = 2, reduce = True, skip_when=0, activation="relu")(item_layer)
# combine the output of the two branches
combined = tf.concat([user_layer, item_layer], axis =-1)
# The combined input should go through another set of skip connection non-linearity as per the paper
both_layer = SkipCon(size = 128, deep = 2, reduce = False, skip_when=1, activation="relu")(combined)
both_layer = keras.layers.Dropout(0.5)(both_layer)
both_layer = SkipCon(size = 64, deep = 2, reduce = False, skip_when=1, activation="relu")(both_layer)  # chain from the previous block instead of re-using combined
both_layer = keras.layers.Dropout(0.5)(both_layer)
both_layer = SkipCon(size = 16, deep = 2, reduce = False, skip_when=0, activation="relu")(both_layer)
z = keras.layers.Dense(3, activation="softmax")(both_layer)
model = keras.Model(inputs=[input_user, input_items], outputs=z)
# Another control put in place to handle overfitting
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
optimizer = keras.optimizers.SGD(lr=0.1, momentum=0.9, decay=0.01)
model.compile(optimizer= optimizer,
              loss='categorical_crossentropy',
              metrics=METRICS)  # callbacks are not a compile() argument; early_stopping is passed to fit() below
history = model.fit(train_ds,
          validation_data=val_ds,
          epochs=15,
          callbacks=[early_stopping])
model.summary()
raja = train  # this assignment was commented out, which left `raja` undefined below
ds1 = tf.data.Dataset.from_tensor_slices(dict(raja))
print(ds1)
# raja1=df_to_dataset(raja, batch_size=2)
# raja
# train.loc[1,]
# x=[]
# n=0
# for i in input_user:
# if(n<2):
# a=float(train.loc[1,:][i])
# else:
# a=train.loc[1,:][i]
# x.append(a)
# n+=1
# n=0
# for i in input_items:
# if(n<2):
# a=float(train.loc[1,:][i])
# else:
# a=train.loc[1,:][i]
# x.append(a)
# n+=1
# print(x)
np.set_printoptions(threshold=10)
# print(asha)
model.predict(ds1)
<jupyter_output><empty_output><jupyter_text># Visualize the training curves<jupyter_code>auc = history.history['auc']
val_auc = history.history['val_auc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(len(auc))  # robust if early stopping ends training before 15 epochs
plt.subplot(1, 2, 1)
plt.plot(epochs_range, auc, label='Training AUC')
plt.plot(epochs_range, val_auc, label='Validation AUC')
plt.legend(loc='lower right')
plt.title('Training and Validation AUC')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# Let us see how we can handle the review_text (since this has been ignored in the paper)
# Proposing a text classification based model which uses review_text to predict the "fit"
#Since here we are only using review_text as a column we will be removing all the null or "Unknown" values
modcloth_df_text = modcloth_df[~modcloth_df.review_text.isin(["Unknown"])][["review_text","fit"]]<jupyter_output><empty_output><jupyter_text># Experimenting with text classifcation algorithms,
- Transfer Learning based (Transformers) :- Bert, ULMFit, GPT
There can be a lot other forms of doing text classification (mainly better feature engineering(topic models, tf-idf, syntax based), but then it will be altogether a new problem.<jupyter_code># Split into train and validation dataset
train_text, val_text = train_test_split(modcloth_df_text, test_size=0.2)
# 2) Create a input pipeline using tf.data
# A utility method to create a tf.data dataset from a Pandas Dataframe
import copy
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('fit')
labels = labels.apply(lambda x:x == CLASS_LABELS)
ds = tf.data.Dataset.from_tensor_slices((dataframe.review_text.values, labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
#prefetching was giving some trouble on google colab,
#there might be some issue with some gdfs, hence not here
return ds
train_ds_text = df_to_dataset(train_text, batch_size = 32)
val_ds_text = df_to_dataset(val_text, batch_size = 32)
import tensorflow_hub as hub
# # Pre-trained Embedding
# model_embedding = keras.Sequential([
# hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1",
# dtype=tf.string, input_shape=[], output_shape=[128]),
# keras.layers.Dense(128, activation="relu"),
# keras.layers.Dense(64, activation="relu"),
# keras.layers.Dense(64, activation="relu"),
# keras.layers.Dense(32, activation="relu"),
# keras.layers.Dense(10, activation="relu"),
# keras.layers.Dense(3, activation="sigmoid")
# ])
model_embedding = keras.Sequential([
hub.KerasLayer("https://tfhub.dev/google/universal-sentence-encoder/4",
dtype=tf.string, input_shape=[], output_shape=[512]),
keras.layers.Dense(256, activation="relu"),
keras.layers.Dense(128, activation="relu"),
keras.layers.Dense(128, activation="relu"),
keras.layers.Dense(64, activation="relu"),
keras.layers.Dense(32, activation="relu"),
keras.layers.Dense(10, activation="relu"),
keras.layers.Dense(3, activation="sigmoid")
])
# Another control put in place to handle overfitting
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
optimizer = keras.optimizers.SGD(lr=0.1, momentum=0.9, decay=0.01)
model_embedding.compile(optimizer= optimizer,
              loss='categorical_crossentropy',
              metrics=METRICS)  # as above, early_stopping is wired into fit() rather than compile()
model_embedding.summary()
history = model_embedding.fit(train_ds_text,
                    validation_data=val_ds_text,
                    epochs=15,
                    callbacks=[early_stopping])
import os  # needed for os.path.join below
from datetime import date, datetime
today = datetime.now().strftime("%d%m%Y_%H%M%S")
model_dir = os.path.join(BASE_MODEL_PATH,"Text",today+'_text_model.h5')
auc = history.history['auc']
val_auc = history.history['val_auc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(15)
plt.subplot(1, 2, 1)
plt.plot(epochs_range, auc, label='Training AUC')
plt.plot(epochs_range, val_auc, label='Validation AUC')
plt.legend(loc='lower right')
plt.title('Training and Validation AUC')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# Save the model
model_embedding.save(model_dir)
# Recreate the exact same model purely from the file
# text_model = tf.keras.models.load_model(model_dir, custom_objects={'KerasLayer':hub.KerasLayer})<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Exploratory data analysis with Pandas## 1. Demonstration of main Pandas methods
**[Pandas](http://pandas.pydata.org)** is a Python library that provides extensive means for data analysis. Data scientists often work with data stored in table formats like `.csv`, `.tsv`, or `.xlsx`. Pandas makes it very convenient to load, process, and analyze such tabular data using SQL-like queries. In conjunction with other useful libraries such as `Matplotlib` and `Seaborn`, `Pandas` provides a wide range of opportunities for visual analysis of tabular data.
The main data structures in `Pandas` are implemented with **DataFrame** classes. `DataFrames` are great for representing real data: rows correspond to instances (examples, observations, etc.), and columns correspond to features of these instances.
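As a tiny illustration (not part of the original assignment), a DataFrame can be built directly from a dictionary, with one row per instance and one column per feature:
```python
import pandas as pd
toy = pd.DataFrame({'state': ['OH', 'NY'], 'account length': [128, 107], 'churn': [False, False]})
print(toy.shape)  # (2, 3): 2 instances, 3 features
```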
To run each cell we press **Shift+Enter**. Running the first cell should print "Executing the first cell of my class."<jupyter_code>print ("Executing the first cell of my class")<jupyter_output>Executing the first cell of my class
<jupyter_text>Next we import python libraries `numpy` and `pandas`. To do so, we execute the below cell<jupyter_code>import numpy as np
import pandas as pd
pd.set_option("display.precision", 2)<jupyter_output><empty_output><jupyter_text>In this assignment, we'll demonstrate the main methods in action by analyzing a [dataset](https://bigml.com/user/francisco/gallery/dataset/5163ad540c0b5e5b22000383) on the churn rate of telecom operator clients. Let's read the data (using `read_csv`), and take a look at the first 5 lines using the `head` method:<jupyter_code>df = pd.read_csv('telecom_churn.csv')
df.head()<jupyter_output><empty_output><jupyter_text>Recall that each row corresponds to one client, an **instance**, and columns are **features** of this instance.Let’s have a look at data dimensionality, feature names, and feature types.<jupyter_code>print(df.shape)<jupyter_output>(3333, 21)
<jupyter_text>From the output, we can see that the table contains 3333 rows and 21 columns.
Now let's try printing out column names using `columns`:<jupyter_code>print(df.columns)<jupyter_output>Index(['state', 'account length', 'area code', 'phone number',
'international plan', 'voice mail plan', 'number vmail messages',
'total day minutes', 'total day calls', 'total day charge',
'total eve minutes', 'total eve calls', 'total eve charge',
'total night minutes', 'total night calls', 'total night charge',
'total intl minutes', 'total intl calls', 'total intl charge',
'customer service calls', 'churn'],
dtype='object')
<jupyter_text>We can use the `info()` method to output some general information about the dataframe: <jupyter_code>print(df.info())<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 3333 entries, 0 to 3332
Data columns (total 21 columns):
state 3333 non-null object
account length 3333 non-null int64
area code 3333 non-null int64
phone number 3333 non-null object
international plan 3333 non-null object
voice mail plan 3333 non-null object
number vmail messages 3333 non-null int64
total day minutes 3333 non-null float64
total day calls 3333 non-null int64
total day charge 3333 non-null float64
total eve minutes 3333 non-null float64
total eve calls 3333 non-null int64
total eve charge 3333 non-null float64
total night minutes 3333 non-null float64
total night calls 3333 non-null int64
total night charge 3333 non-null float64
total intl minutes 3333 non-null float64
total intl calls 3333 non-null int64
total intl charge 3333 non-null float64[...]<jupyter_text>`bool`, `int64`, `float64` and `object` are the data types of our features. We see that one feature is logical (`bool`), 3 features are of type `object`, and 16 features are numeric. With this same method, we can easily see if there are any missing values. Here, there are none because each column contains 3333 observations, the same number of rows we saw before with `shape`.The `describe` method shows basic statistical characteristics of each numerical feature (`int64` and `float64` types): number of non-missing values, mean, standard deviation, range, median, 0.25 and 0.75 quartiles.<jupyter_code>df.describe()<jupyter_output><empty_output><jupyter_text>In order to see statistics on non-numerical features, one has to explicitly indicate data types of interest in the `include` parameter.<jupyter_code>df.describe(include=['object', 'bool'])<jupyter_output><empty_output><jupyter_text>For categorical (type `object`) and boolean (type `bool`) features we can use the `value_counts` method. Let's have a look at the distribution of `churn`:<jupyter_code>df['churn'].value_counts()<jupyter_output><empty_output><jupyter_text>2850 users out of 3333 are *loyal*; their `Churn` value is `0`. To calculate fractions, pass `normalize=True` to the `value_counts` function.<jupyter_code>df['churn'].value_counts(normalize=True)<jupyter_output><empty_output><jupyter_text>
### Sorting
A DataFrame can be sorted by the value of one of the variables (i.e. columns). For example, we can sort by *Total day charge* (use `ascending=False` to sort in descending order):
<jupyter_code>df.sort_values(by='total day charge', ascending=False).head()<jupyter_output><empty_output><jupyter_text>Sorting Total day charge in ascending order<jupyter_code>df.sort_values(by='total day charge').head()<jupyter_output><empty_output><jupyter_text>### Indexing and retrieving data
A DataFrame can be indexed in a few different ways.
To get a single column/feature, you can use a `DataFrame['Name']` construction. Let's use this to answer a question about that column alone: **what is the proportion of churned users in our dataframe?**<jupyter_code>df['churn'].mean()<jupyter_output><empty_output><jupyter_text>14.5% is actually quite bad for a company; such a churn rate can make the company go bankrupt.
**Boolean indexing** with one column is also very convenient. The syntax is `df[df['Name'] == P]`, where `P` is some logical condition that is checked for each element of the `Name` column. The result of such indexing is the DataFrame consisting only of rows that satisfy the `P` condition on the `Name` column.
Let's use it to see all customers who have churned out of the company **('churn' == 1)**
Shaoming: It is better to change code to `df[df['churn'] == True]`<jupyter_code>df[df['churn'] == True]<jupyter_output><empty_output><jupyter_text>Let's answer the question:
**What are average values of numerical features for churned users?**<jupyter_code>df[df['churn'] == True].mean()<jupyter_output><empty_output><jupyter_text>**What are average values of numerical features for non-churned users?**<jupyter_code>df[df['churn'] == False].mean()<jupyter_output><empty_output><jupyter_text>**How much time (on average) do churned users spend on the phone during daytime?**<jupyter_code>df[df['churn'] == True]['total day minutes'].mean()<jupyter_output><empty_output><jupyter_text>
**What is the maximum length of international calls among loyal users (`churn == 0`) who do not have an international plan?**
<jupyter_code>df[(df['churn'] == False) & (df['international plan'] == 'no')]['total intl minutes'].max()<jupyter_output><empty_output><jupyter_text>
### Grouping
In general, grouping data in Pandas works as follows:
```python
df.groupby(by=grouping_columns)[columns_to_show].function()
```
1. First, the `groupby` method divides the `grouping_columns` by their values. They become a new index in the resulting dataframe.
2. Then, columns of interest are selected (`columns_to_show`). If `columns_to_show` is not specified, all columns other than the grouping columns are included.
3. Finally, one or several functions are applied to the obtained groups per selected columns.
Here is an example where we group the data according to the values of the `Churn` variable and display statistics of three columns in each group:<jupyter_code>columns_to_show = ['total day minutes', 'total eve minutes', 'total night minutes']
df.groupby(['churn'])[columns_to_show].describe(percentiles=[])<jupyter_output><empty_output><jupyter_text>
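Step 3 above also mentioned that several functions can be applied at once. A small illustration of that (using the same columns, not from the original notebook) would be:
```python
df.groupby(['churn'])[columns_to_show].agg(['mean', 'std', 'min', 'max'])
```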
### Summary tables
Suppose we want to see how the observations in our sample are distributed in the context of two variables - `churn` and `international plan`. To do so, we can build a **contingency table** using the `crosstab` method:
<jupyter_code>pd.crosstab(df['churn'], df['international plan'])
pd.crosstab(df['churn'], df['voice mail plan'])<jupyter_output><empty_output><jupyter_text>
### DataFrame transformations
Like many other things in Pandas, adding columns to a DataFrame is doable in many ways.
For example, if we want to calculate the total charge of calls for all users, let's create the `total charge` Series and paste it into the DataFrame:
Notice the last column "total charge," which we have added in our data frame <jupyter_code>df['total charge'] = df['total day charge'] + df['total eve charge'] + \
df['total night charge'] + df['total intl charge']
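# (Added sketch, not in the original notebook) the same column could also be created with the
# assign() method, which returns a new DataFrame instead of modifying df in place. Note the
# hypothetical column name total_charge, since keyword arguments cannot contain spaces:
# df = df.assign(total_charge=df['total day charge'] + df['total eve charge'] +
#                df['total night charge'] + df['total intl charge'])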
df.head()<jupyter_output><empty_output><jupyter_text>To delete columns or rows, use the `drop` method, passing the required indexes and the `axis` parameter (`1` if you delete columns, and nothing or `0` if you delete rows). The `inplace` argument tells whether to change the original DataFrame. With `inplace=False`, the `drop` method doesn't change the existing DataFrame and returns a new one with dropped rows or columns. With `inplace=True`, it alters the DataFrame.<jupyter_code># get rid of just created columns
df.drop(['total charge', 'phone number'], axis=1, inplace=True)
# and here’s how you can delete rows
#df.drop([1, 2]).head() <jupyter_output><empty_output><jupyter_text>## 2. First attempt at predicting telecom churn
Let's see how churn rate is related to the *International plan* feature. We'll do this using a `crosstab` contingency table and also through visual analysis with `Seaborn`
<jupyter_code>pd.crosstab(df['churn'], df['international plan'], margins=True)
# some imports to set up plotting
import matplotlib.pyplot as plt
# pip install seaborn
import seaborn as sns
# Graphics in retina format are more sharp and legible
%config InlineBackend.figure_format = 'retina'
sns.countplot(x='international plan', hue='churn', data=df);<jupyter_output><empty_output><jupyter_text>
We see that, with *International Plan*, the churn rate is much higher, which is an interesting observation! Perhaps large and poorly controlled expenses with international calls are very conflict-prone and lead to dissatisfaction among the telecom operator's customers.
Next, let's look at another important feature – *Customer service calls*. Let's also make a summary table and a picture.<jupyter_code>pd.crosstab(df['churn'], df['customer service calls'], margins=True)
sns.countplot(x='customer service calls', hue='churn', data=df);<jupyter_output><empty_output><jupyter_text>Although it's not so obvious from the summary table, it's easy to see from the above plot that the churn rate increases sharply from 4 customer service calls and above.
Now let's add a binary feature to our DataFrame – `Customer service calls > 3`. And once again, let's see how it relates to churn. <jupyter_code>df['many_service_calls'] = (df['customer service calls'] > 3).astype('int')
pd.crosstab(df['many_service_calls'], df['churn'], margins=True)
sns.countplot(x='many_service_calls', hue='churn', data=df);<jupyter_output><empty_output><jupyter_text>
Let's construct another contingency table that relates *Churn* with both *International plan* and freshly created *Many_service_calls*.
<jupyter_code>pd.crosstab(df['many_service_calls'] & (df['international plan'] == 'yes') , df['churn'])<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Example - MNIST classification## 1. Preprocess
### Library Import, and Load dataset from MNIST<jupyter_code>from keras.datasets import mnist
from keras import models
from keras import layers
from keras.utils import to_categorical
(_train_images, _train_labels), (_test_images, _test_labels) = mnist.load_data()
<jupyter_output>Using TensorFlow backend.
<jupyter_text>### Normalize images (from 0 to 1)<jupyter_code>
#Scaling from zero to one
train_images = _train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = _test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
print(train_images.max())
print(test_images.max())
<jupyter_output>1.0
1.0
<jupyter_text>### One hot encoding
Convert these label values to one-hot encoding using keras.utils.to_categorical().<jupyter_code>print(_test_labels)
train_labels = to_categorical(_train_labels)
test_labels = to_categorical(_test_labels)
print("after on-hot encoding, \n", test_labels)<jupyter_output>[7 2 1 ... 4 5 6]
after one-hot encoding, 
[[0. 0. 0. ... 1. 0. 0.]
[0. 0. 1. ... 0. 0. 0.]
[0. 1. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
<jupyter_text>## 2. Modeling and Compile for Network
### Simple Sequential Model<jupyter_code>
# Define the model
model = models.Sequential()
model.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
model.add(layers.Dense(10, activation='softmax'))
# Compile the model
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
<jupyter_output>WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:793: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3576: The name tf.log is deprecated. Please use tf.math.log instead.
<jupyter_text>## 3. Model.fit - Model Training with parameters<jupyter_code># Train the model with the fit() method
model.fit(train_images, train_labels, epochs=5, batch_size=128)<jupyter_output>WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1033: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1020: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3005: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
Epoch 1/5
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session i[...]<jupyter_text>## 4. Evaluation with test dataset
<jupyter_code>#summary of network
model.summary()
# Measure accuracy on the test data
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('test_loss :', test_loss, 'test_acc : ', test_acc)
!cat $HOME/.keras/keras.json
<jupyter_output>{
"epsilon": 1e-07,
"floatx": "float32",
"image_data_format": "channels_last",
"backend": "tensorflow"
}
<jupyter_start><jupyter_text>- You have a salesman who must travel to each city in a list exactly once and return to his starting city. There is a direct connection from every city to every other city. The salesman can visit the cities in any order. The goal is to find the shortest path.
- We cannot use the minimum spanning tree because the MST finds the shortest way to connect all of the cities but it does not provide the shortest path for visiting all of them exactly once.
- This problem is known as an NP-hard problem (non-deterministic polynomial-time hard): no polynomial-time algorithm is known for it, and the time required to solve it grows explosively as the number of cities increases.
### The naive approach
- The naive approach is O(n!); a quick factorial check follows the imports below.
- Try every possible route.<jupyter_code>from typing import Dict, List, Iterable, Tuple
from itertools import permutations
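# (Added sketch) a quick sense of the O(n!) claim above: the brute force below checks every
# ordering of the cities, and the number of orderings grows factorially. Left commented out
# so the recorded (empty) output of this cell stays unchanged.
# from math import factorial
# for n in (5, 10, 15):
#     print(f'{n} cities -> {factorial(n):,} orderings to check')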
vt_distances: Dict[str, Dict[str, int]] = {
'Rutland':
{'Burlington': 67,
'White River Junction': 46,
'Bennington': 55,
'Brattleboro': 75},
'Burlington':
{'Rutland': 67,
'White River Junction': 91,
'Bennington': 122,
'Brattleboro': 153},
'White River Junction':
{'Rutland': 46,
'Burlington': 91,
'Bennington': 98,
'Brattleboro': 65},
'Bennington':
{'Rutland': 55,
'Burlington': 122,
'White River Junction': 98,
'Brattleboro': 40},
'Brattleboro':
{'Rutland': 75,
'Burlington': 153,
'White River Junction': 65,
'Bennington': 40}
}<jupyter_output><empty_output><jupyter_text>__itertools.permutations__ takes an iterable and returns an iterator that has the permutations, with each item being a tuple. Be careful with city_permutations because its items can only be returned once.<jupyter_code>vt_cities: Iterable[str] = vt_distances.keys()
print(vt_cities)
print('\n')
city_permutations: Iterable[Tuple[str,...]] = permutations(vt_cities)
<jupyter_output>dict_keys(['Rutland', 'Burlington', 'White River Junction', 'Bennington', 'Brattleboro'])
<jupyter_text>To get the actual path, we need to add the first city to each tuple because the salesman must return to the same city he started in.<jupyter_code>tsp_paths: List[Tuple[str,...]] = [c + (c[0],) for c in city_permutations]<jupyter_output><empty_output><jupyter_text>Brute force in action:<jupyter_code>best_path: Tuple[str,...] = tuple()
min_distance: int = 99999999 # arbitrarily high number
for path in tsp_paths:
distance: int = 0
last: str = path[0]
for next in path[1:]:
distance += vt_distances[last][next]
last = next
if distance < min_distance:
min_distance = distance
best_path = path
print(f'The shortest path is {best_path} in {min_distance} miles.')<jupyter_output>The shortest path is ('Rutland', 'Burlington', 'White River Junction', 'Brattleboro', 'Bennington', 'Rutland') in 318 miles.
<jupyter_start><jupyter_text>## Introduction
Greetings from the Kaggle bot! This is an automatically-generated kernel with starter code demonstrating how to read in the data and begin exploring. Click the blue "Edit Notebook" or "Fork Notebook" button at the top of this kernel to begin editing.## Exploratory Analysis
To begin this exploratory analysis, first use `matplotlib` to import libraries and define functions for plotting the data. Depending on the data, not all plots will be made. (Hey, I'm just a kerneling bot, not a Kaggle Competitions Grandmaster!)<jupyter_code>from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt # plotting
import numpy as np # linear algebra
import os # accessing directory structure
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
<jupyter_output><empty_output><jupyter_text>There is 1 csv file in the current version of the dataset:
<jupyter_code>print(os.listdir('../input'))<jupyter_output><empty_output><jupyter_text>The next hidden code cells define functions for plotting data. Click on the "Code" button in the published kernel to reveal the hidden code.<jupyter_code># Histogram of column data
def plotHistogram(df, nHistogramShown, nHistogramPerRow):
    nunique = df.nunique()
    df = df[[col for col in df if nunique[col] > 1 and nunique[col] < 50]] # For displaying purposes, pick columns that have between 1 and 50 unique values
    nRow, nCol = df.shape
    columnNames = list(df)
    nHistRow = (nCol + nHistogramPerRow - 1) // nHistogramPerRow  # integer division so subplot() receives an int
    plt.figure(num=None, figsize=(6*nHistogramPerRow, 8*nHistRow), dpi=80, facecolor='w', edgecolor='k')
    for i in range(min(nCol, nHistogramShown)):
        plt.subplot(nHistRow, nHistogramPerRow, i+1)
        df.iloc[:,i].hist()
        plt.ylabel('counts')
        plt.xticks(rotation=90)
        plt.title(f'{columnNames[i]} (column {i})')
    plt.tight_layout(pad=1.0, w_pad=1.0, h_pad=1.0)
    plt.show()
# Correlation matrix
def plotCorrelationMatrix(df, graphWidth):
filename = df.dataframeName
df = df.dropna('columns') # drop columns with NaN
df = df[[col for col in df if df[col].nunique() > 1]] # keep columns where there are more than 1 unique values
if df.shape[1] < 2:
print(f'No correlation plots shown: The number of non-NaN or constant columns ({df.shape[1]}) is less than 2')
return
corr = df.corr()
plt.figure(num=None, figsize=(graphWidth, graphWidth), dpi=80, facecolor='w', edgecolor='k')
corrMat = plt.matshow(corr, fignum = 1)
plt.xticks(range(len(corr.columns)), corr.columns, rotation=90)
plt.yticks(range(len(corr.columns)), corr.columns)
plt.gca().xaxis.tick_bottom()
plt.colorbar(corrMat)
plt.title(f'Correlation Matrix for {filename}', fontsize=15)
plt.show()
# Scatter and density plots
def plotScatterMatrix(df, plotSize, textSize):
df = df.select_dtypes(include =[np.number]) # keep only numerical columns
# Remove rows and columns that would lead to df being singular
df = df.dropna('columns')
df = df[[col for col in df if df[col].nunique() > 1]] # keep columns where there are more than 1 unique values
columnNames = list(df)
if len(columnNames) > 10: # reduce the number of columns for matrix inversion of kernel density plots
columnNames = columnNames[:10]
df = df[columnNames]
ax = pd.plotting.scatter_matrix(df, alpha=0.75, figsize=[plotSize, plotSize], diagonal='kde')
corrs = df.corr().values
for i, j in zip(*plt.np.triu_indices_from(ax, k = 1)):
ax[i, j].annotate('Corr. coef = %.3f' % corrs[i, j], (0.8, 0.2), xycoords='axes fraction', ha='center', va='center', size=textSize)
plt.suptitle('Scatter and Density Plot')
plt.show()
<jupyter_output><empty_output><jupyter_text>Now you're ready to read in the data and use the plotting functions to visualize the data.### Let's check 1st file: ../input/avocado.csv<jupyter_code>nRowsRead = 1000 # specify 'None' if want to read whole file
# avocado.csv has 18250 rows in reality, but we are only loading/previewing the first 1000 rows
df1 = pd.read_csv('../input/avocado.csv', delimiter=',', nrows = nRowsRead)
df1.dataframeName = 'avocado.csv'
nRow, nCol = df1.shape
print(f'There are {nRow} rows and {nCol} columns')<jupyter_output><empty_output><jupyter_text>Let's take a quick look at what the data looks like:<jupyter_code>df1.head(5)<jupyter_output><empty_output><jupyter_text>Histogram of sampled columns:<jupyter_code>plotHistogram(df1, 10, 5)<jupyter_output><empty_output><jupyter_text>Correlation matrix:<jupyter_code>plotCorrelationMatrix(df1, 8)<jupyter_output><empty_output><jupyter_text>Scatter and density plots:<jupyter_code>plotScatterMatrix(df1, 20, 10)<jupyter_output><empty_output>
<jupyter_start><jupyter_text># [EDA] Understanding variable distributions: Bar & KDE (density plot)# To do: compare variables across groups
1. From age 20 to 70, cut 11 points and compare the resulting groups (KDE plot)
2. Draw a barplot with the age bin as x and target as y# [Assignment goal]
- Try adjusting the data and use the provided code to plot the distributions# [Assignment focus]
- How do you cut ages 20 to 70 into 11 bin edges? (In[4], Hint: use numpy.linspace);
besides sorting before plotting, what else should you pay attention to? (In[5])
- How do you prepare the corresponding data to draw a bar chart? (In[7])<jupyter_code># Load the required packages
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns # another plotting / styling package
%matplotlib inline
plt.style.use('ggplot')
# Ignore warning messages
import warnings
warnings.filterwarnings('ignore')
# Set data_path
dir_data = './data/'
# Read the file
f_app = os.path.join(dir_data, 'application_train.csv')
print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv(f_app)
app_train.head()
# Data cleaning (take the absolute value of 'DAYS_BIRTH')
app_train['DAYS_BIRTH'] = abs(app_train['DAYS_BIRTH'])
# Split into groups by age (age bins vs. repayment status)
age_data = app_train[['TARGET', 'DAYS_BIRTH']] # subset
age_data['YEARS_BIRTH'] = age_data['DAYS_BIRTH'] / 365 # day-age to year-age
# Cut 11 points from age 20 to 70 (giving 10 bins)
"""
Your Code Here
"""
bin_cut = np.linspace(20,70,11)
age_data['YEARS_BINNED'] = pd.cut(age_data['YEARS_BIRTH'], bins = bin_cut)
# Show the number of rows in each group
print(age_data['YEARS_BINNED'].value_counts())
age_data.head()
# Sort / group before plotting
"""
Your Code Here
"""
year_group_sorted = np.sort(age_data['YEARS_BINNED'].unique())
plt.figure(figsize=(8,6))
for i in range(len(year_group_sorted)):
sns.distplot(age_data.loc[(age_data['YEARS_BINNED'] == year_group_sorted[i]) & \
(age_data['TARGET'] == 0), 'YEARS_BIRTH'], label = str(year_group_sorted[i]))
sns.distplot(age_data.loc[(age_data['YEARS_BINNED'] == year_group_sorted[i]) & \
(age_data['TARGET'] == 1), 'YEARS_BIRTH'], label = str(year_group_sorted[i]))
plt.title('KDE with Age groups')
plt.show()
# Compute the mean of TARGET, DAYS_BIRTH, and YEARS_BIRTH for each age bin
age_groups = age_data.groupby('YEARS_BINNED').mean()
age_groups
plt.figure(figsize = (8, 8))
# Plot a barplot with the age bin as x and target as y
"""
Your Code Here
"""
px = age_data['YEARS_BINNED']
py = age_data['TARGET']
sns.barplot(px, py)
# Plot labeling
plt.xticks(rotation = 75); plt.xlabel('Age Group (years)'); plt.ylabel('Failure to Repay (%)')
plt.title('Failure to Repay by Age Group');<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Tesseract and Photographs<jupyter_code># Lets try a new example and bring together some of the things we have learned.
# Here's an image of a storefront, lets load it and try and get the name of the
# store out of the image
from PIL import Image
import pytesseract
image=Image.open('storefront.jpg')
display(image)
# Finally, lets try and run tesseract on that image and see what the results are
pytesseract.image_to_string(image)
# We see at the very bottom there is just an empty string. Tesseract is unable to take
# this image and pull out the name. But we learned how to crop the images in the
# last set of lectures, so lets try and help Tesseract by cropping out certain pieces.
#
# First, lets set the bounding box. In this image the store name is in a box
# bounded by (315, 170, 700, 270)
bounding_box=(315, 170, 700, 270)
# Now lets crop the image
title_image=image.crop(bounding_box)
# Now lets display it and pull out the text
display(title_image)
pytesseract.image_to_string(title_image)
# Great, we see how with a bit of a problem reduction we can make that work. So now we have
# been able to take an image, preprocess it where we expect to see text, and turn that text
# into a string that python can understand.
#
# If you look back up at the image though, you'll see there is a small sign inside of the
# shop that also has the shop name on it. I wonder if we're able to recognize the text on
# that sign? Let's give it a try.
#
# First, we need to determine a bounding box for that sign. I'm going to show you a short-cut
# to make this easier in an optional video in this module, but for now lets just use the bounding
# box I decided on
bounding_box=(900, 420, 940, 445)
# Now, lets crop the image
little_sign=image.crop((900, 420, 940, 445))
display(little_sign)
# All right, that is a little sign! OCR works better with higher resolution images, so
# lets increase the size of this image by using the pillow resize() function
# Lets set the width and height equal to ten times the size it is now in a (w,h) tuple
new_size=(little_sign.width*10,little_sign.height*10)
# I think we should be able to find something better. I can read it, but it looks
# really pixelated. Lets see what all the different resize options look like
options=[Image.NEAREST, Image.BOX, Image.BILINEAR, Image.HAMMING, Image.BICUBIC, Image.LANCZOS]
for option in options:
# lets print the option name
print(option)
# lets display what this option looks like on our little sign
display(little_sign.resize( new_size, option))
# First lets resize to the larger size
bigger_sign=little_sign.resize(new_size, Image.BICUBIC)
# Lets print out the text
pytesseract.image_to_string(bigger_sign)
# Well, no text there. Lets try and binarize this. First, let me just bring in the
# binarization code we did earlier
def binarize(image_to_transform, threshold):
output_image=image_to_transform.convert("L")
for x in range(output_image.width):
for y in range(output_image.height):
if output_image.getpixel((x,y))< threshold:
output_image.putpixel( (x,y), 0 )
else:
output_image.putpixel( (x,y), 255 )
return output_image
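# (Added sketch) Pillow can do the same thresholding in one call with point(); this is only an
# alternative to the pixel loop above and is left commented out to keep the original flow:
# def binarize_fast(image_to_transform, threshold):
#     return image_to_transform.convert("L").point(lambda p: 255 if p >= threshold else 0)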
# Now, lets apply binarizations with, say, a threshold of 190, and try and display that
# as well as do the OCR work
binarized_bigger_sign=binarize(bigger_sign, 190)
display(binarized_bigger_sign)
pytesseract.image_to_string(binarized_bigger_sign)
# Ok, that text is pretty useless. How should we pick the best binarization
# to use? Well, there are some methods, but lets just try something very simple to
# show how well this can work. We have an english word we are trying to detect, "FOSSIL".
# If we tried all binarizations, from 0 through 255, and looked to see if there were
# any english words in that list, this might be one way. So lets see if we can
# write a routine to do this.
#
# First, lets load a list of english words into a list. I put a copy in the readonly
# directory for you to work with
eng_dict=[]
with open ("words_alpha.txt", "r") as f:
data=f.read()
# now we want to split this into a list based on the new line characters
eng_dict=data.split("\n")
# Now lets iterate through all possible thresholds and look for an english word, printing
# it out if it exists
for i in range(150,170):
    # lets binarize and convert this to string values
strng=pytesseract.image_to_string(binarize(bigger_sign,i))
# We want to remove non alphabetical characters, like ([%$]) from the text, here's
# a short method to do that
# first, lets convert our string to lower case only
strng=strng.lower()
# then lets import the string package - it has a nice list of lower case letters
import string
# now lets iterate over our string looking at it character by character, putting it in
    # the comparison text
comparison=''
for character in strng:
if character in string.ascii_lowercase:
comparison=comparison+character
# finally, lets search for comparison in the dictionary file
if comparison in eng_dict:
# and print it if we find it
print(i)
print(comparison)
# Well, not perfect, but we see fossil there among other values which are in the dictionary.
# This is not a bad way to clean up OCR data. It can be useful to use a language or domain specific
# dictionary in practice, especially if you are generating a search engine for specialized language
# such as a medical knowledge base or locations. And if you scroll up and look at the data
# we were working with - this small little wall hanging on the inside of the store - it's not
# so bad.
# now lets try our same old method
def binarize(image_to_transform, threshold):
output_image=image_to_transform.convert("L")
for x in range(output_image.width):
for y in range(output_image.height):
if output_image.getpixel((x,y))< threshold:
output_image.putpixel( (x,y), 0 )
else:
output_image.putpixel( (x,y), 255 )
return output_image
for thresh in range(150,170):
print("Trying with threshold " + str(thresh))
display(binarize(bigger_sign, thresh))
    print(pytesseract.image_to_string(binarize(bigger_sign, thresh)))<jupyter_output><empty_output><jupyter_text># Jupyter Widgets (Optional) (USELESS BUT SOMETIMES MIGHT HELP)<jupyter_code># In this brief lecture I want to introduce you to one of the more advanced features of the
# Jupyter notebook development environment called widgets. Sometimes you want
# to interact with a function you have created and call it multiple times with different
# parameters. For instance, if we wanted to draw a red box around a portion of an
# image to try and fine tune the crop location. Widgets are one way to do this quickly
# in the browser without having to learn how to write a large desktop application.
#
# Lets check it out. First we want to import the Image and ImageDraw classes from the
# PILLOW package
from PIL import Image, ImageDraw
# Then we want to import the interact class from the widgets package
from ipywidgets import interact
# We will use interact to annotate a function. Lets bring in an image that we know we
# are interested in, like the storefront image from a previous lecture
image=Image.open('storefront.jpg')
# Ok, our setup is done. Now we're going to use the interact decorator to indicate
# that we want to wrap the python function. We do this using the @ sign. This will
# take a set of parameters which are identical to the function to be called. Then Jupyter
# will draw some sliders on the screen to let us manipulate these values. Decorators,
# which is what the @ sign is describing, are standard python statements and just a
# short hand for functions which wrap other functions. They are a bit advanced though, so
# we haven't talked about them in this course, and you might just have to have some faith
@interact(left=100, top=100, right=200, bottom=200)
# Now we just write the function we had before
def draw_border(left, top, right, bottom):
img=image.copy()
drawing_object=ImageDraw.Draw(img)
drawing_object.rectangle((left,top,right,bottom), fill = None, outline ='red')
display(img)<jupyter_output><empty_output>
<jupyter_start><jupyter_text># **0. Hackathon Guidelines**
**1) Development notes**
* Never modify [1. Initial environment setup]
  * Exception: only change is_mnist according to the dataset being used
* All implementation must be done only in [2. Data preprocessing] and [3. Model creation]
  * After preprocessing, keep the variable names x_train_after and x_test_after for the training and test sets.
  * The same model structure must be used even if the dataset changes.
* Change the team_name variable in [4. Saving the model] and [5. Loading and evaluating the model] (e.g. `team_name = 'team01'`)
  * Even if you save the model with a checkpoint during training, the file-name format must stay consistent
  * Do not modify anything other than team_name
* Data can be lost by mistake while using Colab, so upload intermediate results to GitHub
  * Never click "Runtime -> Reset all runtimes" (all saved model data will be deleted)
* Enable GPU acceleration for efficient implementation and testing
  * "Runtime -> Change runtime type -> Hardware accelerator -> GPU"
* Write comments in as much detail as possible
* For the Keras API, refer to the [Keras Documentation](https://keras.io/)
**2) Submission notes**
* Deliverables
  * Source code (hackathon_teamXX.ipynb)
  * Model structure file (model_structure_teamXX.json)
  * Model weight file (model_weight_teamXX.h5)
  * Compiled model file (model_entire_teamXX.h5)
* Submission deadline: **6 PM**
* How to submit: see the [GitHub README](https://github.com/cauosshackathonta/2019_cau_oss_hackathon/)
**3) Evaluation notes**
* Model performance = classification accuracy on the test dataset
  * model.evaluate(x_test, y_test)
* Winning entries are decided by the test-set classification accuracy of the submitted models
* For award candidates, the model is re-verified against the source code
**4) Grounds for disqualification**
* Similar source code or algorithms are detected
* The source code and the submitted model differ
* Different model structures are used for the two datasets
* The development notes above are not followed
  * e.g. modifying [Initial environment setup]
* The dataset is tampered with
  * e.g. building the model with the test set included in the training set
* Comments do not match the source code or are insufficient
# **1. Initial environment setup**
<jupyter_code>from __future__ import absolute_import, division, print_function, unicode_literals
# import tensorflow, tf.keras, and related libraries
import tensorflow as tf
from tensorflow import keras
from keras.utils import np_utils
import numpy as np
is_mnist = False
# Load the dataset
# x_train, y_train: training data and labels
# x_test, y_test: test data and labels
if is_mnist:
    data_type = 'mnist'
    (x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data() # if using the fashion MNIST dataset
else:
    data_type = 'cifar10'
    (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() # if using the cifar10 dataset
# Convert class vectors to binary class matrices for classification
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
# Total number of classes
num_classes = y_test.shape[1]
# Input data shape
input_shape = x_test.shape[1:]<jupyter_output><empty_output><jupyter_text># **2. Data preprocessing**
<jupyter_code>from keras.preprocessing.image import ImageDataGenerator
# Data preprocessing (e.g. normalization)
x_train_after = x_train / 255.0
x_test_after = x_test / 255.0
if is_mnist:
x_train_after = x_train_after.reshape(x_train.shape[0], 28, 28, 1)
x_test_after = x_test_after.reshape(x_test.shape[0], 28, 28, 1)
input_shape = x_test_after.shape[1:]
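# (Added sketch) ImageDataGenerator is imported above but never used; a hypothetical way to
# wire in simple augmentation would look like the lines below. They are kept commented out so
# the submitted pipeline and its recorded outputs stay unchanged.
# datagen = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True)
# augmented_batches = datagen.flow(x_train_after, y_train, batch_size=256)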
<jupyter_output><empty_output><jupyter_text># **3. Model creation**
<jupyter_code>import os
# Create a sequential model (the most basic structure)
model = keras.Sequential()
model.add(keras.layers.Conv2D(32, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu', input_shape=input_shape, kernel_initializer='he_normal'))
print(model.output_shape)
model.add(keras.layers.Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu', input_shape=input_shape, kernel_initializer='he_normal'))
print(model.output_shape)
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
print(model.output_shape)
model.add(keras.layers.Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu', input_shape=input_shape, kernel_initializer='he_normal'))
print(model.output_shape)
model.add(keras.layers.Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu', input_shape=input_shape, kernel_initializer='he_normal'))
print(model.output_shape)
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
print(model.output_shape)
model.add(keras.layers.Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu', input_shape=input_shape, kernel_initializer='he_normal'))
print(model.output_shape)
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
print(model.output_shape)
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu', kernel_initializer='he_normal'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(num_classes, kernel_regularizer=keras.regularizers.l2(0.001), activation='softmax'))
# Compile the model
# optimizer: how the model weights are updated
# loss: how the model's error is measured
# metrics: evaluation metrics for monitoring training and test performance
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# checkpoint
checkpoint_path = '/content/model_weight_' + data_type + '_best.h5'
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
monitor='val_acc',
                                                 save_weights_only=True,
save_best_only=True,
verbose=1)
early_stopping = tf.keras.callbacks.EarlyStopping(patience = 20)
# Train the model
# batch_size: how many samples are used per update step
# epochs: how many times the full training set is used
# shuffle: whether to shuffle the training set before each epoch
# validation_data: dataset used to check performance during training
model.fit(x_train_after, y_train, batch_size = 256, epochs = 50, shuffle=True, validation_data=[x_test_after, y_test], callbacks=[cp_callback, early_stopping])<jupyter_output>(None, 32, 32, 32)
(None, 32, 32, 64)
(None, 16, 16, 64)
(None, 16, 16, 128)
(None, 16, 16, 256)
(None, 8, 8, 256)
(None, 8, 8, 512)
(None, 4, 4, 512)
Train on 50000 samples, validate on 10000 samples
Epoch 1/50
49920/50000 [============================>.] - ETA: 0s - loss: 1.6574 - acc: 0.4121
Epoch 00001: val_acc improved from -inf to 0.56030, saving model to /content/model_weight_cifar10_best.h5
50000/50000 [==============================] - 17s 340us/sample - loss: 1.6570 - acc: 0.4121 - val_loss: 1.2484 - val_acc: 0.5603
Epoch 2/50
49920/50000 [============================>.] - ETA: 0s - loss: 1.1407 - acc: 0.5963
Epoch 00002: val_acc improved from 0.56030 to 0.64880, saving model to /content/model_weight_cifar10_best.h5
50000/50000 [==============================] - 14s 276us/sample - loss: 1.1405 - acc: 0.5964 - val_loss: 1.0080 - val_acc: 0.6488
Epoch 3/50
49920/50000 [============================>.] - ETA: 0s - loss: 0.9304 - acc: 0.6777
Epoch 00003: val_acc improved from 0.64[...]<jupyter_text># **4. Saving the model**<jupyter_code>save_path = '/content/'
team_name = 'team10'
# Save only the model's weights.
model.save_weights(save_path + 'model_weight_' + data_type + '_' + team_name + '.h5')
# Save only the model's structure.
model_json = model.to_json()
with open(save_path + 'model_structure_' + data_type + '_' + team_name + '.json', 'w') as json_file :
json_file.write(model_json)
# Save the entire trained model.
model.save(save_path + 'model_entire_' + data_type + '_' + team_name + '.h5')<jupyter_output><empty_output><jupyter_text># **5. Loading and evaluating the model**<jupyter_code>save_path = '/content/'
team_name = 'team10'
# model = keras.models.load_model(save_path + 'model_entire_' + data_type + '_' + team_name + '.h5')
model = keras.models.load_model(save_path + 'model_weight_best.h5')
model.evaluate(x_test_after, y_test)
<jupyter_output><empty_output>
<jupyter_start><jupyter_text>## Deriving the A_lambda/E(B-V) coefficients needed for dereddening
This notebook shows a quick demo of how to calculate the A_lambda/E(B-V) coefficients needed to remove Galactic dust from the DC2 catalogs.
In DC2, the CCM model (https://ui.adsabs.harvard.edu/abs/1989ApJ...345..245C/abstract) was assumed when calculating the dust model. We will use the lsst.sims.photUtils package to compute the effective wavelengths for the filters, and then set up the CCM model and calculate Alambda/E(B-V) for each of the LSST filters.<jupyter_code>import os
import GCRCatalogs
import pandas as pd
import numpy as np
import scipy.interpolate
from lsst.sims import photUtils
from lsst.sims.photUtils import BandpassSet<jupyter_output><empty_output><jupyter_text>This function grabs the "LSST_THROUGHPUTS_BASELINE" filters as currently specified in the photUtils package, then computes both lambda_eff and A_lambda/E(B-V) for each of the six LSST filters:<jupyter_code>def compute_alambda_over_ebv(filterset='ugrizy',basepath=None):
"""
compute the effective wavelengths and alambda/E(B-V) values for a set of filters
We will grab the flat SED from the SIMS library to calculate the CCM dust model
that was assumed for DC2, and then grab the baseline ugrizy filters and calculate
their effective wavelengths, and evaluate the CCM alam_over_ebv value at those
wavelengths
inputs: filterset:
vector of filters (limited to ugrizy present for the baseline LSST filterset)
returns:
lam_eff_list:
np 1d array of filter effective wavelengths for the filters
alam_over_ebv_list:
np 1d array of alam_over_ebv values for the filters
"""
lam_eff_list = []
alam_over_ebv_list = []
sed_file = os.path.join(os.environ['SIMS_SED_LIBRARY_DIR'],'flatSED','sed_flat.txt.gz')
sed = photUtils.Sed()
sed.readSED_flambda(sed_file)
ax,bx = sed.setupCCM_ab()
ccm_model = 3.1*ax+bx
wl = sed.wavelen
ccm_spline = scipy.interpolate.interp1d(wl,ccm_model,bounds_error=True)
alam_over_ebv = 3.1*ax+bx
for filt in filterset:
if basepath is None:
bp_file = os.path.join(os.environ['LSST_THROUGHPUTS_BASELINE'],'',f'total_{filt}.dat')
else:
bp_file = os.path.join(basepath,f'total_{filt}.dat')
bandpass = photUtils.Bandpass()
bandpass.readThroughput(bp_file)
_,leff = bandpass.calcEffWavelen()
lam_eff_list.append(leff)
alam = ccm_spline(leff)
alam_over_ebv_list.append(alam)
return np.array(lam_eff_list),np.array(alam_over_ebv_list)<jupyter_output><empty_output><jupyter_text>Let's do a quick check that we are getting the results that we expect. For DC2 we should get the following for the effective wavelengths and A_lam/E(B-V) values:
u 367.07 nm A_lambda/EBV = 4.812
g 482.69 nm A_lambda/EBV = 3.643
r 622.32 nm A_lambda/EBV = 2.699
i 754.60 nm A_lambda/EBV = 2.063
z 869.01 nm A_lambda/EBV = 1.578
y 971.03 nm A_lambda/EBV = 1.313
If these values do not match those in the next cell, check that the Baseline filter definitions have not changed!<jupyter_code>filterlist = ['u','g','r','i','z','y']
leff_list,alamebv_list = compute_alambda_over_ebv(filterlist)
for filt,leff, alamebv in zip(filterlist,leff_list,alamebv_list):
print(f"filter {filt} lam_eff: {leff:.2f}nm alam/E(B-V): {alamebv:.3f}")<jupyter_output><empty_output><jupyter_text>These numbers do *not* match, there are very slight differences, as the filters were updated in mid 2019. The filter curves used for DC2 are released as v1.4 cosmoDC2:
https://github.com/lsst/throughputs/releases/tag/1.4. For convenience, we have saved a copy of these filters in the data/dc2_throughputs subdirectory. So, we can calculate the lambda_eff and A_lambda/E(B-V) for these filters:<jupyter_code>leff_list,alamebv_list = compute_alambda_over_ebv(filterlist,basepath="data/dc2_throughputs")
for filt,leff, alamebv in zip(filterlist,leff_list,alamebv_list):
print(f"filter {filt} lam_eff: {leff:.2f}nm alam/E(B-V): {alamebv:.3f}")<jupyter_output><empty_output>
<jupyter_start><jupyter_text><jupyter_code>from google.colab import drive
drive.mount('/content/gdrive')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from datetime import datetime, timedelta
ithaca = pd.read_csv('/content/gdrive/MyDrive/Weather ML Project/ITH_temp_Apr2011_Mar2021.csv')<jupyter_output><empty_output><jupyter_text>station: three or four character site identifier
valid: timestamp of the observation
tmpf: Air Temperature in Fahrenheit, typically @ 2 meters<jupyter_code>print(ithaca.shape)
is_METAR = ithaca['valid'].str.contains("56")
is_leap_year = ithaca['valid'].str.contains("02-29")
temp_missing = ithaca['tmpf'].str.contains("M")
ithaca = ithaca[is_METAR & (is_leap_year==False) & (temp_missing==False)]
print(ithaca.shape)
time_stamp = pd.to_datetime(ithaca['valid'])
temp_series = ithaca['tmpf'].astype(float)
time_stamp.head()
temp_series.head()
plt.figure(figsize=(20, 6))
plt.plot(time_stamp, temp_series)
plt.xlabel("Time")
plt.ylabel("Temperature (℉)")
plt.grid(True)
#Average Temperature per day
hour_diff = 15
plt.figure(figsize=(20, 6))
plt.plot(time_stamp[hour_diff:], temp_series[hour_diff:])
plt.plot(time_stamp[hour_diff:], temp_series[:-hour_diff])
plt.xlabel("Time")
plt.ylabel("Temperature (℉)")
plt.grid(True)
print(keras.metrics.mean_squared_error(temp_series[hour_diff:], temp_series[:-hour_diff]).numpy())
print(keras.metrics.mean_absolute_error(temp_series[hour_diff:], temp_series[:-hour_diff]).numpy())
x = []
y = []
for i in range(48):
x.append(i+15)
diff = hour_diff
diff += i
val = keras.metrics.mean_absolute_error(temp_series[diff:], temp_series[:-diff]).numpy()
y.append(val)
plt.figure(figsize=(20, 6))
plt.plot(x, y)
plt.xticks(np.arange(15, 63))
plt.grid(True)
<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Linear Regression## Agenda
1. Introducing the bikeshare dataset
- Reading in the data
- Visualizing the data
2. Linear regression basics
- Form of linear regression
- Building a linear regression model
- Using the model for prediction
- Does the scale of the features matter?
3. Working with multiple features
- Visualizing the data (part 2)
- Adding more features to the model
4. Choosing between models
- Feature selection
- Evaluation metrics for regression problems
- Comparing models with train/test split and RMSE
- Comparing testing RMSE with null RMSE
5. Creating features
- Handling categorical features
- Feature engineering
6. Comparing linear regression with other models## Reading in the data
We'll be working with a dataset from Capital Bikeshare that was used in a Kaggle competition ([data dictionary](https://www.kaggle.com/c/bike-sharing-demand/data)).<jupyter_code># read the data and set the datetime as the index
%matplotlib inline
import pandas as pd
path= '../data/'
url = path + 'bikeshare.csv' #path + 'bikeshare.csv'
bikes = pd.read_csv(url, index_col='datetime', parse_dates=True)
bikes.head()
bikes.dtypes
bikes.index
bikes.head()<jupyter_output><empty_output><jupyter_text>**Questions:**
- What does each observation represent?
- What is the response variable (as defined by Kaggle)?
- How many features are there?<jupyter_code># "count" is a method, so it's best to name that column something else
bikes.rename(columns={'count':'total'}, inplace=True)
bikes.columns<jupyter_output><empty_output><jupyter_text>## Visualizing the data<jupyter_code>%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (8, 6)
plt.rcParams['font.size'] = 14
len(bikes)
# Pandas scatter plot
bikes.plot(kind='scatter', x='temp', y='total', alpha=.3)
# Seaborn scatter plot with regression line
sns.lmplot(x='temp', y='total', data=bikes, aspect=1.5, scatter_kws={'alpha':0.2})<jupyter_output><empty_output><jupyter_text>## Form of linear regression
$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$
- $y$ is the response
- $\beta_0$ is the intercept
- $\beta_1$ is the coefficient for $x_1$ (the first feature)
- $\beta_n$ is the coefficient for $x_n$ (the nth feature)
The $\beta$ values are called the **model coefficients**:
- These values are estimated (or "learned") during the model fitting process using the **least squares criterion**.
- Specifically, we find the line (mathematically) which minimizes the **sum of squared residuals** (or "sum of squared errors").
- And once we've learned these coefficients, we can use the model to predict the response.

In the diagram above:
- The black dots are the **observed values** of x and y.
- The blue line is our **least squares line**.
- The red lines are the **residuals**, which are the vertical distances between the observed values and the least squares line.## Building a simple linear regression model (one feature)For our first task we decide to create a model which will predict the number of rentals based on the temperature.<jupyter_code># create X and y
feature_cols = ['temp']
X = bikes[feature_cols]
y = bikes.total
X.shape
type(X)
y.shape
type(y)
# We have created our X (feature matrix) and y (response vector), so now let's:
# import our chosen estimator, instantiate it into a variable, fit the model with the X and y
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X, y)
type(linreg)
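# (Added sketch) for simple linear regression, the least squares coefficients described above
# have a closed form; this check is not part of the original lesson, so it is left commented out:
# import numpy as np
# xv = bikes.temp.values
# yv = bikes.total.values
# slope = ((xv - xv.mean()) * (yv - yv.mean())).sum() / ((xv - xv.mean()) ** 2).sum()
# intercept = yv.mean() - slope * xv.mean()
# print intercept, slope   # should match linreg.intercept_ and linreg.coef_[0]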
# print the coefficients
print linreg.intercept_
print linreg.coef_<jupyter_output>6.04621295962
[ 9.17054048]
<jupyter_text>Interpreting the **intercept** ($\beta_0$):
- It is the value of $y$ when $x$=0.
- Thus, it is the estimated number of rentals when the temperature is 0 degrees Celsius.
- **Note:** It does not always make sense to interpret the intercept. (Why?)
Interpreting the **"temp" coefficient** ($\beta_1$):
- It is the change in $y$ divided by change in $x$, or the "slope".
- Thus, a temperature increase of 1 degree Celsius is **associated with** a rental increase of 9.17 bikes.
- This is not a statement of causation.
- $\beta_1$ would be **negative** if an increase in temperature was associated with a **decrease** in rentals.## Using the model for prediction
How many bike rentals would we predict if the temperature was 25 degrees Celsius?<jupyter_code># manually calculate the prediction
linreg.intercept_ + linreg.coef_*25
# use the predict method
linreg.predict(25)<jupyter_output><empty_output><jupyter_text>## Does the scale of the features matter?
Let's say that temperature was measured in Fahrenheit, rather than Celsius. How would that affect the model?<jupyter_code># create a new column for Fahrenheit temperature
bikes['temp_F'] = bikes.temp * 1.8 + 32
bikes.head()
# Seaborn scatter plot with regression line
sns.lmplot(x='temp_F', y='total', data=bikes, aspect=1.5, scatter_kws={'alpha':0.2})
# create X and y
feature_cols = ['temp_F']
X = bikes[feature_cols]
y = bikes.total
# instantiate and fit
linreg = LinearRegression()
linreg.fit(X, y)
# print the coefficients
print linreg.intercept_
print linreg.coef_
# convert 25 degrees Celsius to Fahrenheit
25 * 1.8 + 32
# predict rentals for 77 degrees Fahrenheit
linreg.predict(77)<jupyter_output><empty_output><jupyter_text>**Conclusion:** The scale of the features is **irrelevant** for linear regression models. When changing the scale, we simply change our **interpretation** of the coefficients.<jupyter_code># remove the temp_F column
bikes.drop('temp_F', axis=1, inplace=True)
bikes.head(10)<jupyter_output><empty_output><jupyter_text>## Visualizing the data (part 2)<jupyter_code># explore more features
feature_cols = ['temp', 'season', 'weather', 'humidity']
# multiple scatter plots in Seaborn
sns.pairplot(bikes, x_vars=feature_cols, y_vars='total', kind='reg')
# multiple scatter plots in matplotlib
fig, axs = plt.subplots(1, len(feature_cols), sharey=True)
for index, feature in enumerate(feature_cols):
bikes.plot(kind='scatter', x=feature, y='total', ax=axs[index], figsize=(16, 3))<jupyter_output><empty_output><jupyter_text>Are you seeing anything that you did not expect?<jupyter_code># cross-tabulation of season and month
pd.crosstab(bikes.season, bikes.index.month)
# box plot of rentals, grouped by season
bikes.boxplot(column='total', by='season')<jupyter_output><empty_output><jupyter_text>Notably:
- A line can't capture a non-linear relationship.
- There are more rentals in winter than in spring (?)<jupyter_code># line plot of rentals
bikes.total.plot()<jupyter_output><empty_output><jupyter_text>What does this tell us?
There are more rentals in the winter than the spring, but only because the system is experiencing **overall growth** and the winter months happen to come after the spring months.<jupyter_code># correlation matrix (ranges from 1 to -1)
bikes.corr()
# visualize correlation matrix in Seaborn using a heatmap
sns.heatmap(bikes.corr())<jupyter_output><empty_output><jupyter_text>What relationships do you notice?## Adding more features to the model<jupyter_code># create a list of features
feature_cols = ['temp', 'season', 'weather', 'humidity']
# create X and y
X = bikes[feature_cols]
y = bikes.total
# instantiate and fit
linreg = LinearRegression()
linreg.fit(X, y)
# print the coefficients
print linreg.intercept_
print linreg.coef_
# pair the feature names with the coefficients
zip(feature_cols, linreg.coef_)<jupyter_output><empty_output><jupyter_text>Interpreting the coefficients:
- Holding all other features fixed, a 1 unit increase in **temperature** is associated with a **rental increase of 7.86 bikes**.
- Holding all other features fixed, a 1 unit increase in **season** is associated with a **rental increase of 22.5 bikes**.
- Holding all other features fixed, a 1 unit increase in **weather** is associated with a **rental increase of 6.67 bikes**.
- Holding all other features fixed, a 1 unit increase in **humidity** is associated with a **rental decrease of 3.12 bikes**.
Does anything look incorrect?## Feature selection
How do we choose which features to include in the model? We're going to use **train/test split** (and eventually **cross-validation**).
Why not use **p-values** or **R-squared** for feature selection?
- Linear models rely upon **a lot of assumptions** (such as the features being independent), and if those assumptions are violated, p-values and R-squared are less reliable. Train/test split relies on fewer assumptions.
- Features that are unrelated to the response can still have **significant p-values**.
- Adding features to your model that are unrelated to the response will always **increase the R-squared value**, and adjusted R-squared does not sufficiently account for this.
- p-values and R-squared are **proxies** for our goal of generalization, whereas train/test split and cross-validation attempt to **directly estimate** how well the model will generalize to out-of-sample data.
More generally:
- There are different methodologies that can be used for solving any given data science problem, and this course follows a **machine learning methodology**.
- This course focuses on **general purpose approaches** that can be applied to any model, rather than model-specific approaches.## Evaluation metrics for regression problems
Evaluation metrics for classification problems, such as **accuracy**, are not useful for regression problems. We need evaluation metrics designed for comparing **continuous values**.
Here are three common evaluation metrics for regression problems:
**Mean Absolute Error** (MAE) is the mean of the absolute value of the errors:
$$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$
**Mean Squared Error** (MSE) is the mean of the squared errors:
$$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$
**Root Mean Squared Error** (RMSE) is the square root of the mean of the squared errors:
$$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$<jupyter_code># example true and predicted response values
true = [10, 7, 5, 5]
pred = [8, 6, 5, 10]
# calculate these metrics by hand!
from sklearn import metrics
import numpy as np
print('MAE:', metrics.mean_absolute_error(true, pred))
print('MSE:', metrics.mean_squared_error(true, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(true, pred)))<jupyter_output>MAE: 2.0
MSE: 7.5
RMSE: 2.73861278753
<jupyter_text>Comparing these metrics:
- **MAE** is the easiest to understand, because it's the average error.
- **MSE** is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world.
- **RMSE** is even more popular than MSE, because RMSE is interpretable in the "y" units.
All of these are **loss functions**, because we want to minimize them.
Here's an additional example, to demonstrate how MSE/RMSE punish larger errors:<jupyter_code># same true values as above
true = [10, 7, 5, 5]
# new set of predicted values
pred = [10, 7, 5, 13]
# MAE is the same as before
print('MAE:', metrics.mean_absolute_error(true, pred))
# MSE and RMSE are larger than before
print('MSE:', metrics.mean_squared_error(true, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(true, pred)))<jupyter_output>MAE: 2.0
MSE: 16.0
RMSE: 4.0
<jupyter_text>## Comparing models with train/test split and RMSE<jupyter_code># from sklearn.cross_validation import train_test_split # deprecated syntax
from sklearn.model_selection import train_test_split
# define a function that accepts a list of features and returns testing RMSE
def train_test_rmse(feature_cols):
X = bikes[feature_cols]
y = bikes.total
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)
linreg = LinearRegression()
linreg.fit(X_train, y_train)
y_pred = linreg.predict(X_test)
return np.sqrt(metrics.mean_squared_error(y_test, y_pred))
# compare different sets of features
print(train_test_rmse(['temp', 'season', 'weather', 'humidity']))
print(train_test_rmse(['temp', 'season', 'weather']))
print(train_test_rmse(['temp', 'season', 'humidity']))
print(train_test_rmse(['temp', 'humidity']))
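# (Sketch, not part of the original lesson) Illustrating the feature-selection point above:
# an unrelated random feature leaves testing RMSE essentially unchanged, even though adding
# any feature can never lower the training R-squared.
np.random.seed(123)
bikes['random_noise'] = np.random.randn(len(bikes))
print(train_test_rmse(['temp', 'humidity', 'random_noise']))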
# using these as features is not allowed!
print(train_test_rmse(['casual', 'registered']))<jupyter_output>6.46507997608e-14
<jupyter_text>## Comparing testing RMSE with null RMSE
Null RMSE is the RMSE that could be achieved by **always predicting the mean response value**. It is a benchmark against which you may want to measure your regression model.<jupyter_code># split X and y into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)
# create a NumPy array with the same shape as y_test
y_null = np.zeros_like(y_test, dtype=float)
# fill the array with the mean value of y_test
y_null.fill(y_test.mean())
y_null
# compute null RMSE
np.sqrt(metrics.mean_squared_error(y_test, y_null))<jupyter_output><empty_output><jupyter_text>## Handling categorical features
scikit-learn expects all features to be numeric. So how do we include a categorical feature in our model?
- **Ordered categories:** transform them to sensible numeric values (example: small=1, medium=2, large=3)
- **Unordered categories:** use dummy encoding (0/1)
What are the categorical features in our dataset?
- **Ordered categories:** weather (already encoded with sensible numeric values)
- **Unordered categories:** season (needs dummy encoding), holiday (already dummy encoded), workingday (already dummy encoded)
For season, we can't simply leave the encoding as 1 = spring, 2 = summer, 3 = fall, and 4 = winter, because that would imply an **ordered relationship**. Instead, we create **multiple dummy variables:**<jupyter_code># create dummy variables
season_dummies = pd.get_dummies(bikes.season, prefix='season')
# print 5 random rows
season_dummies.sample(n=5, random_state=1)<jupyter_output><empty_output><jupyter_text>However, we actually only need **three dummy variables (not four)**, and thus we'll drop the first dummy variable.
Why? Because three dummies captures all of the "information" about the season feature, and implicitly defines spring (season 1) as the **baseline level:**<jupyter_code># drop the first column
season_dummies.drop(season_dummies.columns[0], axis=1, inplace=True)
# print 5 random rows
season_dummies.sample(n=5, random_state=1)<jupyter_output><empty_output><jupyter_text>In general, if you have a categorical feature with **k possible values**, you create **k-1 dummy variables**.
If that's confusing, think about why we only need one dummy variable for holiday, not two dummy variables (holiday_yes and holiday_no).<jupyter_code># concatenate the original DataFrame and the dummy DataFrame (axis=0 means rows, axis=1 means columns)
bikes = pd.concat([bikes, season_dummies], axis=1)
# print 5 random rows
bikes.sample(n=5, random_state=1)
# include dummy variables for season in the model
feature_cols = ['temp', 'season_2', 'season_3', 'season_4', 'humidity']
X = bikes[feature_cols]
y = bikes.total
linreg = LinearRegression()
linreg.fit(X, y)
list(zip(feature_cols, linreg.coef_))
<jupyter_output><empty_output><jupyter_text>How do we interpret the season coefficients? They are **measured against the baseline (spring)**:
- Holding all other features fixed, **summer** is associated with a **rental decrease of 3.39 bikes** compared to the spring.
- Holding all other features fixed, **fall** is associated with a **rental decrease of 41.7 bikes** compared to the spring.
- Holding all other features fixed, **winter** is associated with a **rental increase of 64.4 bikes** compared to the spring.
Would it matter if we changed which season was defined as the baseline?
- No, it would simply change our **interpretation** of the coefficients.
**Important:** Dummy encoding is relevant for all machine learning models, not just linear regression models.<jupyter_code># compare original season variable with dummy variables
print(train_test_rmse(['temp', 'season', 'humidity']))
print(train_test_rmse(['temp', 'season_2', 'season_3', 'season_4', 'humidity']))<jupyter_output>155.598189367
154.333945936
<jupyter_text>## Feature engineering
See if you can create the following features:
- **hour:** as a single numeric feature (0 through 23)
- **hour:** as a categorical feature (use 23 dummy variables)
- **daytime:** as a single categorical feature (daytime=1 from 7am to 8pm, and daytime=0 otherwise)
Then, try using each of the three features (on its own) with `train_test_rmse` to see which one performs the best!<jupyter_code># hour as a numeric feature
# hour as a categorical feature
# daytime as a categorical feature
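# A minimal sketch of one possible solution (an assumption, not the official answers),
# relying on the DataFrame's datetime index used earlier in this notebook:
bikes['hour'] = bikes.index.hour
hour_dummies = pd.get_dummies(bikes.hour, prefix='hour').iloc[:, 1:]   # 23 dummies, hour_0 is the baseline
bikes = pd.concat([bikes, hour_dummies], axis=1)
bikes['daytime'] = ((bikes.hour >= 7) & (bikes.hour <= 20)).astype(int)
print(train_test_rmse(['hour']))
print(train_test_rmse(hour_dummies.columns.tolist()))
print(train_test_rmse(['daytime']))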
<jupyter_output><empty_output>
| no_license | /notebooks/07_linear_regression.ipynb | stubeef/DS_GA_SEA | 21 |
<jupyter_start><jupyter_text>Retrieve available months by parsing website<jupyter_code>def retrieve_month_excels(murl, path):
print(murl)
month_table = []
# mcols = ['url', 'excel_num', 'excel_description', 'excel_url']
# mdf = pd.DataFrame(columns=mcols)
mr = re.get(murl)
mdata = mr.text
msoup = BeautifulSoup(mdata, "lxml")
mtable = msoup.find('div', {'class': 'stat-dataset_list-body'})
for row in mtable.find_all('article', {'class': 'stat-dataset_list-item'}):
excel_num = row.find('li', {'class': 'stat-dataset_list-detail-item stat-dataset_list-border-top'}).contents[0].replace('\n','')
excel_description = row.find('a').contents[0]
excel_url = ''
excel_a = row.find_all('a')[1]
if(excel_a['data-file_type'] == 'EXCEL'):
excel_url = path + excel_a['href']
month_table.append({
'url': murl,
'excel_num': excel_num,
'excel_description': excel_description,
'excel_url': excel_url
})
# mdfrow = pd.DataFrame([[murl, excel_num, excel_description, excel_url]], columns=mcols)
# if(len(mdf)==0):
# mdf = mdfrow
# else:
# mdf = mdf.append(mdfrow, ignore_index=True) #why the hell doesn't df.append work inplace?? Didn't it always use to?
mdf = pd.DataFrame(month_table)
return(mdf)<jupyter_output><empty_output><jupyter_text>Define function to enter link for each month, and store the excel notebook links and descriptions within to a dataframe.<jupyter_code>%%time
excels_df = pd.concat([retrieve_month_excels(murl, path) for murl in top_df['url']])
excelref_df = top_df.merge(excels_df, how='left', on='url')
excelref_df.head()<jupyter_output><empty_output><jupyter_text>Utilize the function retrieve_month_excels to retrieve all available excel sheets for every month in the top-level dataframe<jupyter_code>exceldesccount_df = excelref_df.groupby(['excel_description']).agg(['count']).sort_values([('year', 'count')], ascending=False).iloc[:,:1]
exceldesccount_df.columns=['count']
# exceldesccount_df.insert(0,'excel_description', exceldesccount_df.index)
exceldesccount_df.reset_index(inplace=True)
print(exceldesccount_df)<jupyter_output> excel_description \
0
Number of In-migrants from Other Prefectures by Age (5-Year Age Group) and Sex for Japan and Prefectures
1
Number of Out-migrants to Other Prefectures by Age (5-Year Age Group) and Sex for Japan and Prefectures
2
Number of Net-migration by Age (5-Year Age Group) and Sex for Japan and Prefectures (All nationalities and Japanese)
3
Number of Inter-prefectural Mig[...]<jupyter_text>Count of sheets of each description in all available data<jupyter_code>%%time
# print(excelref_df.head())
mdir = os.path.join('extracts', 'monthly_reports')
if not os.path.exists(mdir):
os.makedirs(mdir)
months = excelref_df.month.unique() #do not sort, retain the order
monthnum = list(range(1,13,1))
monthdirs = [str(num).zfill(2) + month for num,month in zip(monthnum, months)]
monthnamedict = dict(zip(months, monthdirs))
print(monthnamedict)
for index,row in excelref_df.iterrows():
filedir = os.path.join(mdir, row['year'], monthnamedict.get(row['month']))
if not os.path.exists(filedir):
os.makedirs(filedir)
filename = str(row['excel_num']) + '.xls'
savepath = os.path.join(filedir, filename)
if not os.path.isfile(savepath):
fileresponse = re.get(row['excel_url'], stream=True)
with open(savepath, 'wb') as out_file:
out_file.write(fileresponse.content)
del fileresponse
print("saved " + savepath)<jupyter_output>{'Jan': '01Jan', 'Feb': '02Feb', 'Mar': '03Mar', 'Apr': '04Apr', 'May': '05May', 'Jun': '06Jun', 'Jul': '07Jul', 'Aug': '08Aug', 'Sep': '09Sep', 'Oct': '10Oct', 'Nov': '11Nov', 'Dec': '12Dec'}
Wall time: 165 ms
<jupyter_text>Save all files available to disk if they don't already exist<jupyter_code>samplecols = ['sampleyear', 'samplemonth', 'sampleexcel_num', 'excel_description', 'count']
samplebydesc_df = pd.DataFrame(columns=samplecols)
samplebydesc_df['excel_description'] = exceldesccount_df['excel_description']
samplebydesc_df['count'] = exceldesccount_df['count']<jupyter_output><empty_output><jupyter_text>I want to build a dataframe of example excel workbooks of each description, so that i can formulate a way to parse each workbook according to their description.<jupyter_code># sampledesc = 'Number of In-migrants from Other Prefectures by Age (5-Year Age Group) and Sex for Japan and Prefectures'
# dfdf1 = excelref_df[excelref_df['excel_description'] == sampledesc][:1]['month'].iloc[0]
samplebydesc_df['sampleyear'] = [excelref_df[excelref_df['excel_description'] == x][:1]['year'].iloc[0]\
for x in samplebydesc_df['excel_description']]
samplebydesc_df['samplemonth'] = [excelref_df[excelref_df['excel_description'] == x][:1]['month'].iloc[0]\
for x in samplebydesc_df['excel_description']]
samplebydesc_df['sampleexcel_num'] = [excelref_df[excelref_df['excel_description'] == x][:1]['excel_num'].iloc[0]\
for x in samplebydesc_df['excel_description']]
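# (Sketch, not in the original notebook) A quick way to peek at one downloaded sample workbook
# per description, reusing the directory layout built above; sheet layouts differ, so reading
# with header=None is only an assumption (pandas needs the xlrd engine for .xls files).
for _, srow in samplebydesc_df.head(3).iterrows():
    sample_path = os.path.join(mdir, str(srow['sampleyear']),
                               monthnamedict[srow['samplemonth']],
                               str(srow['sampleexcel_num']) + '.xls')
    if os.path.isfile(sample_path):
        sample_sheet = pd.read_excel(sample_path, header=None)
        print(sample_path, sample_sheet.shape)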
print(samplebydesc_df[:20])<jupyter_output> sampleyear samplemonth sampleexcel_num \
0 2014 Jan 3-1
1 2014 Jan 3-2
2 2017 Jan 3-3
3 2018 Jan 2
4 2013 Jan 3-3
5 2017 Jan 1
6 2017 Jan 3-2
7 2017 Jan 3-1
8 2014 Jan 2
9 2009 Jan 2
10 2009 Jan 1
11 2012 Jan 2
12 2012 Jan 1
13 2013 Jan 1
14 2014 Jan 1
15 2010 Jan 1
16 2006 Jan 1
17 2006 Jan 2
18 2009 Apr 2
19 2018 Jan 1
[...]
| no_license | /stat-miso.ipynb | freedaemons/jpstat | 6 |
<jupyter_start><jupyter_text># Reflect Tables into SQLAlchemy ORM<jupyter_code># Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# inspect
inspector = inspect(engine)
# print columns and data type for Measurement
columns = inspector.get_columns('measurement')
for column in columns:
print(column['name'], column['type'])
# print columns and data type for Station
columns_station = inspector.get_columns('station')
for column in columns_station:
print(column['name'], column['type'])
# Create our session (link) from Python to the DB
session = Session(engine)
# Display first row of table in each class
first_row = session.query(Measurement).first()
first_row.__dict__
first_row_station = session.query(Station).first()
first_row_station.__dict__<jupyter_output><empty_output><jupyter_text># Exploratory Climate Analysis<jupyter_code>connection = engine.connect()
connection.execute('SELECT * FROM measurement').fetchall()
connection.execute('SELECT * FROM station').fetchall()
# Design a query to retrieve the last 12 months of precipitation data and plot the results
year_now = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
print(f'Last date of dataset is: {year_now}')
# Calculate the date 1 year ago from the last data point in the database
year_ago = dt.date(2017,8,23) - dt.timedelta(days = 365)
print(f'Date one year ago was: {year_ago}')
# Perform a query to retrieve the data and precipitation scores
year_prcp = session.query(Measurement.date, Measurement.prcp).\
filter(Measurement.date >= year_ago).\
order_by(Measurement.date).all()
year_prcp
# Save the query results as a Pandas DataFrame and set the index to the date column
year_prcp_df = pd.DataFrame(year_prcp).set_index('date')
year_prcp_df
# Sort the dataframe by date
# Use Pandas Plotting with Matplotlib to plot the data
year_prcp_df.plot(figsize = (20,8))
plt.title("Precipitation levels over the year since 8/23/2016 in Hawaii")
plt.ylabel('Precipitation')
plt.tight_layout()
plt.savefig('images/01_preciptation_year.png')
plt.show()<jupyter_output>Last date of dataset is: ('2017-08-23',)
Date one year ago was: 2016-08-23
<jupyter_text><jupyter_code># Use Pandas to calcualte the summary statistics for the precipitation data
year_prcp_df.describe()<jupyter_output><empty_output><jupyter_text><jupyter_code># Design a query to show how many stations are available in this dataset?
stations = session.query(func.count(Measurement.station.distinct())).all()
print(f'There are {stations} stations.')
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
sel = [Measurement.station,
func.count(Measurement.station)]
active_stations = session.query(*sel).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
active_stations
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
sel2 = [func.min(Measurement.tobs),
func.max(Measurement.tobs),
func.avg(Measurement.tobs)]
session.query(*sel2).\
filter(Measurement.station == 'USC00519281').all()
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
last_date = session.query(Measurement.date).\
filter(Measurement.station == 'USC00519281').\
order_by(Measurement.date.desc()).first()
print(f'Last date of temp observation for USC00519281 was: {last_date}')
year_last = dt.date(2017, 8, 18) - dt.timedelta(days = 365)
print(f'Date one year ago was for USC00519281: {year_last}')
sel3 = [Measurement.date,
Measurement.tobs]
highest_no_temp = session.query(*sel3).\
filter(Measurement.station == 'USC00519281').\
filter(Measurement.date >= year_last).all()
highest_no_temp
# convert to df
highest_no_temp_df = pd.DataFrame(highest_no_temp)
highest_no_temp_df
# histogram
x = highest_no_temp_df['tobs']
num_bins = 12
plt.hist(x, num_bins)
plt.title("Histogram of Temperature Observed at USC00519281")
plt.ylabel('Frequency')
plt.xlabel('Temperature Observed (F)')
plt.savefig('images/02_histogram.png')
plt.show()<jupyter_output>Last date of temp observation for USC00519281 was: ('2017-08-18',)
Date one year ago was for USC00519281: 2016-08-18
<jupyter_text><jupyter_code># This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
t_results = calc_temps('2017-08-10','2017-08-23')
print(t_results)
t_df = pd.DataFrame(t_results, columns =['tmin','tavg','tmax'])
t_df
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
fig, ax = plt.subplots()
yerr = t_df['tmax']-t_df['tmin']
ax.bar(1, t_df['tavg'], yerr=yerr)
plt.title('Trip Avg Temp')
plt.ylabel('Temperature (F)')
plt.xlim(0, 2)
plt.ylim(0,100)
#remove x labels
labels = [item.get_text() for item in ax.get_xticklabels()]
empty_string_labels = ['']*len(labels)
ax.set_xticklabels(empty_string_labels)
plt.savefig('images/03_tripavgtemp.png')
plt.show()
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
for m, s in session.query(Measurement, Station).filter(Measurement.station == Station.station).\
filter(Measurement.date >= '2017-08-10').filter(Measurement.date <= '2017-08-23').\
group_by(Measurement.station).\
order_by(func.sum(Measurement.prcp).desc()).all():
print(m.station, s.name, s.latitude, s.longitude, s.elevation, m.prcp)
<jupyter_output>USC00516128 MANOA LYON ARBO 785.2, HI US 21.3331 -157.8025 152.4 0.07
USC00519281 WAIHEE 837.5, HI US 21.45167 -157.84888999999998 32.9 0.0
USC00519523 WAIMANALO EXPERIMENTAL FARM, HI US 21.33556 -157.71139 19.5 0.0
USC00514830 KUALOA RANCH HEADQUARTERS 886.9, HI US 21.5213 -157.8374 7.0 0.0
USC00519397 WAIKIKI 717.2, HI US 21.2716 -157.8168 3.0 0.0
<jupyter_text>## Optional Challenge Assignment<jupyter_code># Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
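# A minimal sketch of one way to complete the challenge, reusing the trip dates from the
# earlier calc_temps example ('2017-08-10' to '2017-08-23'):
trip_dates = pd.date_range('2017-08-10', '2017-08-23')
normals = [daily_normals(d)[0] for d in trip_dates.strftime('%m-%d')]
normals_df = pd.DataFrame(normals, columns=['tmin', 'tavg', 'tmax'])
normals_df['date'] = trip_dates
normals_df = normals_df.set_index('date')
normals_df.plot(kind='area', stacked=False, alpha=0.3)
plt.tight_layout()
plt.show()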
<jupyter_output><empty_output>
| no_license | /climate.ipynb | loryuen/sqlalchemy-challenge | 6 |
<jupyter_start><jupyter_text>## DevType separation### 2017<jupyter_code>mask = data_2017['DevType'].str.contains(r'data|machine|business', case = False,na=False)
df = data_2017[mask].copy()
# remove all space at beginning of the text
df['DevType'].replace('^\s+', '', regex=True, inplace=True) #front
# Split the text by semi colon
split_2017 = df['DevType'].str.get_dummies(sep='; ')
# as 2017 have different choice from 18,19 and 20, we will merge the choices to align it with the rest
split_2017['Data scientist or machine learning specialist'] = split_2017['Data scientist'] | split_2017['Machine learning specialist']
# Get the desired columns
select_type_2017 = ['Data scientist or machine learning specialist',
'Database administrator']
candidate_2017 = split_2017.index[np.sum(split_2017.loc[:,select_type_2017], axis=1) != 0]
split_2017.head()<jupyter_output><empty_output><jupyter_text>### 2018<jupyter_code>mask = data_2018['DevType'].str.contains(r'data|machine|business', case = False,na=False)
df = data_2018[mask].copy()
# Replace all space at beginning of the text
df['DevType'].replace('^\s+', '', regex=True, inplace=True) #front
# Split the text by semi colon
split_2018 = df['DevType'].str.get_dummies(sep=';')
# Get the desired columns
select_type_2018 = ['Data or business analyst',
'Data scientist or machine learning specialist',
'Database administrator']
candidate_2018 = split_2018.index[np.sum(split_2018.loc[:,select_type_2018], axis=1) != 0]<jupyter_output><empty_output><jupyter_text>### 2019<jupyter_code>mask = data_2019['DevType'].str.contains(r'data|machine|business', case = False,na=False)
df = data_2019[mask].copy()
# Replace all space at beginning of the text
df['DevType'].replace('^\s+', '', regex=True, inplace=True) #front
# Split the text by semi colon
split_2019 = df['DevType'].str.get_dummies(sep=';')
# Get the desired columns
select_type_2019 = ['Data or business analyst',
'Data scientist or machine learning specialist',
'Database administrator',
'Engineer, data']
candidate_2019 = split_2019.index[np.sum(split_2019.loc[:,select_type_2019], axis=1) != 0]<jupyter_output><empty_output><jupyter_text>### 2020<jupyter_code>mask = data_2020['DevType'].str.contains(r'data|machine|business', case = False,na=False)
df = data_2020[mask].copy()
# Replace all space at beginning of the text
df['DevType'].replace('^\s+', '', regex=True, inplace=True) #begin of the txt
# Split the text by semi colon
split_2020 = df['DevType'].str.get_dummies(sep=';')
# Get the desired columns
select_type_2020 = ['Data or business analyst',
'Data scientist or machine learning specialist',
'Database administrator',
'Engineer, data']
candidate_2020 = split_2020.index[np.sum(split_2020.loc[:,select_type_2020], axis=1) != 0]<jupyter_output><empty_output><jupyter_text>- Now, we merge all the split columns of `DevType` together. From now on, we will only use the data of the developers who have a data-related job
- Then we will also convert the features that contain lists of values into dummy variables: `DatabaseDesireNextYear`, `DatabaseWorkedWith`, `LanguageDesireNextYear` and `LanguageWorkedWith`.
dm_dev_type = pd.concat([split_2017[select_type_2017],
split_2018[select_type_2018],
split_2019[select_type_2019],
split_2020[select_type_2020]],axis = 0)
# we only consider these job type in the data
data = data.loc[dm_dev_type.index,:]
# Function to convert
def dummies_converter(df, col):
    # remove space at the beginning of the text
df[col].replace('^\s+', '', regex=True, inplace=True) #begin of the txt
# Split the text by semi colon
dm1 = df[df['Year'] == 2017][col].str.get_dummies(sep='; ')
dm2 = df[df['Year'] >= 2018][col].str.get_dummies(sep=';')
return pd.concat([dm1,dm2],axis = 0)
# Feature to get dummies:
feat_for_dm = ['DatabaseDesireNextYear', 'DatabaseWorkedWith', 'LanguageDesireNextYear', 'LanguageWorkedWith', 'DevType']
# Convert to dummies
dm_db_nextyear = dummies_converter(data,'DatabaseDesireNextYear')
dm_db_work = dummies_converter(data,'DatabaseWorkedWith')
dm_language_nextyear = dummies_converter(data,'LanguageDesireNextYear')
dm_language_work = dummies_converter(data,'LanguageWorkedWith')
# Drop converted features
data = data.drop(feat_for_dm,axis = 1,errors='ignore')<jupyter_output><empty_output><jupyter_text>**Now we have 5 data frames of dummies features**
- dm_dev_type
- dm_db_nextyear
- dm_db_work
- dm_language_nextyear
- dm_language_work<jupyter_code>#data = pd.merge(data, dev_type, left_index=True, right_index=True)
dm_dev_type.head()
df = data.copy()
#df = pd.merge(df, dm_dev_type, left_index= True, right_index= True)
#df['LogEarnings'] = np.log10(df['ConvertedComp']+1)
df.describe()<jupyter_output><empty_output><jupyter_text>**The target variable of this analysis is the annual salary - `ConvertedComp` - so let's take a look at that feature first**<jupyter_code>df.isna().sum()
fig, ax = plt.subplots(figsize=(18, 5))
sns.boxplot(x="Year", y="ConvertedComp",
hue = 'Employment',
data=df,
ax = ax)
sns.despine(offset=10, trim=True)
plt.show()<jupyter_output><empty_output><jupyter_text>To make this analysis as practical as possible, we will only consider people who have a data-related job, so we will exclude people who do not have salary information, as well as those who are not employed, prefer not to say, or are retired.<jupyter_code># Drop irrelevant job title
df = df.loc[~df['Employment'].isin(['Not employed, and not looking for work',
'Not employed, but looking for work',
'I prefer not to say', 'Retired']), :]
# Drop people do not have information about salary
df = df.loc[~df["ConvertedComp"].isna(),:]
fig, ax = plt.subplots(figsize=(18, 5))
sns.boxplot(x="Year", y="ConvertedComp",
hue = 'Employment',
data=df,
ax = ax)
sns.despine(offset=10, trim=True)
plt.show()<jupyter_output><empty_output><jupyter_text>From the box plot, we can observe that there are a lot of outliers in the annual salary. For the relevance of this analysis, we will exclude people having an annual salary of more than $300,000.<jupyter_code># Remove outlier
df = df.loc[(df['ConvertedComp'] < 300000) & (df['ConvertedComp'] > 0),:]
# plot
fig, ax = plt.subplots(figsize=(18, 5))
sns.boxplot(x="Year", y="ConvertedComp",
hue = 'Employment',
data=df,
ax = ax)
sns.despine(offset=10, trim=True)
plt.show()
#percentage of data kept after removing outliers and considering only instances that have our target variable
len(df)/len(data)*100<jupyter_output><empty_output><jupyter_text>## Missing values treatment<jupyter_code>#This is the number of missing values still existing (without dummies)
df.isna().sum().sum()
#will give the index of the rows with any missing value
nans_index = df.isna().any(axis=1)
df[nans_index]
#if we would remove all the rowa with still missing values that would mean loosing 14,5% of data
len(df[nans_index])/len(df)*100
#Let's verify the type of the variables with numeric values
df['YearsCodePro'].dtype
df['JobSat'].dtype
#let's transform to float
df['JobSat']=df['JobSat'].astype(float)
df['JobSat'].dtype
# Creating new df copy to explore missing values imputation
df_na = df.copy()
# KNNImputer
imputer = KNNImputer(n_neighbors=5, weights="uniform")
# to fill the missing values
related_variables_yearscodepro = (['YearsCodePro'])
related_variables_jobsat = (['JobSat'])
df_na[related_variables_yearscodepro] = imputer.fit_transform(df_na[related_variables_yearscodepro])
df_na[related_variables_jobsat] = imputer.fit_transform(df_na[related_variables_jobsat])
df_na["YearsCodePro"].unique()
df_na["YearsCodePro"]=df_na["YearsCodePro"].round(decimals=0)
df_na["YearsCodePro"].unique()
df_na["JobSat"].unique()
df_na["JobSat"]=df_na["JobSat"].round(decimals=0)
df_na["JobSat"].unique()
#apply the imputation to the database
df=df_na.copy()
#filling with mode
df['EdLevel'].fillna(df['EdLevel'].mode()[0], inplace=True)
df['Employment'].value_counts().head(10)
# since there is clearly a most frequent value, we will use the mode to substitute the missing values
df['Employment'].fillna(df['Employment'].mode()[0], inplace=True)
df['Employment'].isna().sum()
df['OrgSize'].value_counts().head(10)
#since there is an option for 'i don't know' we will use it to fill the missing values
df['OrgSize'].fillna(df['OrgSize'].mode()[0], inplace=True)
df['UndergradMajor'].value_counts().head(10)
#there is clearly a most frequent value in this variable so we will use the mode to substitute the missing values
df['UndergradMajor'].fillna(df['UndergradMajor'].mode()[0], inplace=True)
# to verify the final number of missing values
df.isna().sum()
# the NaN values in the dummies will be substituted by zero; we just have to keep in mind that, for example, the database options in 2017 don't include all the options that exist in the other years
# in the same way, in DevType the options of data/business analyst and data engineer do not exist in 2017
dm_db_nextyear.fillna(value=0, inplace=True)
dm_dev_type.fillna(value=0, inplace=True)
dm_db_work.fillna(value=0, inplace=True)
dm_language_nextyear.fillna(value=0, inplace=True)
dm_language_work.fillna(value=0, inplace=True)
df_all = pd.concat([df, dm_dev_type, dm_db_work, dm_language_work],
axis=1, join='inner')
#final number of missing values
df_all.isna().sum().sum()<jupyter_output><empty_output><jupyter_text>## Correlation study<jupyter_code>metric_features=['ConvertedComp','JobSat','YearsCodePro',"OrgSize", "EdLevel"]
# check the variables that are highly correlated (correlation of 0.8 or more) so that one of each pair can be dropped to avoid redundancy
corr_matrix = df_all.corr().abs()
high_corr_var=np.where(corr_matrix>=0.8)
high_corr_var=[(corr_matrix.columns[x],corr_matrix.columns[y]) for x,y in zip(*high_corr_var) if x!=y and x<y]
high_corr_var
# Prepare figure
fig = plt.figure(figsize=(8, 8))
# Obtain correlation matrix. Round the values to 2 decimal cases. Use the DataFrame corr() and round() method.
corr = np.round(df[metric_features].corr(method="pearson"), decimals=2)
# Build annotation matrix (values above |0.5| will appear annotated in the plot)
mask_annot = np.absolute(corr.values) >= 0.5
annot = np.where(mask_annot, corr.values, np.full(corr.shape,"")) # Try to understand what this np.where() does
# Plot heatmap of the correlation matrix
sns.set(font_scale = 2)
sns.heatmap(data=corr, annot=annot, cmap=sns.diverging_palette(220, 10, as_cmap=True),
fmt='s', vmin=-1, vmax=1, center=0, square=True, linewidths=.5)
# Layout
fig.subplots_adjust(top=0.95)
fig.suptitle("Correlation Matrix", fontsize=20)
plt.show()
corr_target = df.corrwith(df["ConvertedComp"])
corr_target
metric_features=['ConvertedComp','YearsCodePro',"OrgSize", "EdLevel"]<jupyter_output><empty_output><jupyter_text>## One hot encoding<jupyter_code>non_metric_features=['Hobbyist','Country','Employment','UndergradMajor']
ohc = OneHotEncoder(sparse=False, drop="if_binary")
ohc_feat = ohc.fit_transform(df_all[non_metric_features])
ohc_feat_names = ohc.get_feature_names(non_metric_features)
encoded = pd.DataFrame(ohc_feat, index = df_all.index, columns = ohc_feat_names)
df_all=df_all.drop(non_metric_features, axis=1)
df_encoded=df.drop(non_metric_features, axis=1)
df_all=pd.concat([df_all, encoded], axis=1,copy=False)
df_encoded=pd.concat([df_encoded, encoded], axis=1,copy=False)
df_all.astype(float)
df_all.dtypes
df_encoded_devtype=pd.concat([df_encoded,dm_dev_type ], axis=1,copy=False)
df_encoded_db_ny=pd.concat([df_encoded,dm_db_nextyear ], axis=1,copy=False)
df_encoded_db_w=pd.concat([df_encoded,dm_db_work], axis=1,copy=False)
df_encoded_l_ny=pd.concat([df_encoded,dm_language_nextyear], axis=1,copy=False)
df_encoded_l_w=pd.concat([df_encoded,dm_language_work ], axis=1,copy=False)
dm_major=df_all.filter(regex="^UndergradMajor")
dm_major
"UndergradMajor_Another.engineering.discipline"
"UndergradMajor_Business"
"UndergradMajor_Computer.science"
"UndergradMajor_Mathematics.or.statistics"
#df_encoded_major=pd.concat([df_encoded,dm_major], axis=1,copy=False)<jupyter_output><empty_output><jupyter_text>## Obtain the csv file<jupyter_code># export the data
filename = 'stack.csv'
df_all.to_csv(filename, index=False)
# df
df_all.head()<jupyter_output><empty_output><jupyter_text>## Feature importance<jupyter_code>from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.model_selection import train_test_split
import graphviz
# Import the necessary modules and libraries
import numpy as np
from sklearn.tree import DecisionTreeRegressor
import matplotlib.pyplot as plt
X = df_all.drop(columns=['ConvertedComp'])
y = df_all["ConvertedComp"]
# Splitting the data
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Fitting the decision tree
dt = DecisionTreeRegressor(random_state=42, max_depth=5)
dt.fit(X_train, y_train)
print("It is estimated that in average, we are able to predict {0:.2f}% of the salary correctly".format(dt.score(X_test, y_test)*100))
# Assessing feature importance
pd.Series(dt.feature_importances_, index=X_train.columns).sort_values(ascending=False)
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.model_selection import train_test_split
import graphviz
# Visualizing the decision tree
dot_data = export_graphviz(dt, out_file=None,
feature_names=X.columns.to_list(),
filled=True,
rounded=True,
special_characters=True)
graphviz.Source(dot_data)
dm_dev_type.columns
dm_db_nextyear.columns
dm_db_work.columns
dm_language_nextyear.columns
dm_language_work.columns
<jupyter_output><empty_output><jupyter_text>## Normalization<jupyter_code>df_all_withoutscaler=df_all.copy()
from sklearn.preprocessing import MinMaxScaler, StandardScaler, RobustScaler
df_all_normalized=df_all.copy()
trans = RobustScaler()
#we don't year right?
metric_features=['ConvertedComp','YearsCodePro',"OrgSize", "EdLevel"]
df_all_normalized[metric_features] = trans.fit_transform(df_all_normalized[metric_features])
df_all_normalized<jupyter_output><empty_output><jupyter_text>## Kmeans<jupyter_code>from sklearn.cluster import KMeans
# To decide how many clusters to use in the clustering we will use the elbow method
range_clusters = range(1, 15)
inertia = []
for n_clus in range_clusters: # iterate over desired ncluster range
kmclust = KMeans(n_clusters=n_clus, init='k-means++', n_init=15, random_state=1)
kmclust.fit(df_all_normalized[metric_features])
inertia.append(kmclust.inertia_) # save the inertia of the given cluster solution
sns.set(font_scale = 1)
fig, ax = plt.subplots()
ax.tick_params(axis="x", labelsize=8)
ax.tick_params(axis="y", labelsize=8)
ax.plot(range_clusters, inertia)
plt.ylabel("Inertia: SSw", size=15)
plt.xlabel("Number of clusters", size=15)
plt.title("Inertia plot over clusters", size=15)
plt.show()
# final cluster solution
number_clusters = 5
kmclust = KMeans(n_clusters=number_clusters, init='k-means++', n_init=15, random_state=1)
km_labels = kmclust.fit_predict(df_all_normalized[metric_features])
km_labels
# Characterizing the final clusters
df_all_withoutscaler['labels'] =km_labels
df_all_normalized['labels']=km_labels
df['labels']=km_labels
df_all['labels']=km_labels
df_all_withoutscaler
df_all_withoutscaler.groupby('labels').mean()[metric_features]
df_counts = df_all_withoutscaler.groupby('labels')\
.size()\
.to_frame()
df_counts.rename(columns={0: "Size"},inplace=True)
df_counts
# using R^2
def get_ss(df):
ss = np.sum(df.var() * (df.count() - 1))
return ss
sst = get_ss(df_all_withoutscaler[metric_features])
ssw_labels = df_all_withoutscaler[metric_features + ["labels"]].groupby(by='labels').apply(get_ss)
ssb = sst - np.sum(ssw_labels)
r2_kp = ssb / sst
r2_kp<jupyter_output><empty_output><jupyter_text># K-prototypes<jupyter_code>nonmetricfeatures_ols= ["Year","Data.scientist.or.machine.learning.specialist", "Database.administrator", "Data.or.business.analyst", "Engineer..data", "Cassandra", "PostgreSQL",
"Redis", "SQLite", "Amazon.DynamoDB", "Amazon.RDS.Aurora", "Amazon.Redshift", "Apache.HBase", "DynamoDB", "Elasticsearch","Google.BigQuery","MariaDB", "Memcached", "Microsoft.Azure..Tables..CosmosDB..SQL..etc.", "Microsoft.SQL.Server", "Other.s..", "Dart", "Go", "Groovy", "Matlab", "Objective.C", "PHP", "R", "Ruby", "SQL", "Scala", "TypeScript", "VBA", "Bash.Shell.PowerShell", "CSS", "HTML.CSS", "Kotlin", "Other.s...1","Country_Algeria",
"Country_Australia",
"Country_Austria"
,"Country_Belgium"
,"Country_Bermuda"
,"Country_Botswana"
,"Country_Canada"
,"Country_Denmark"
,"Country_Germany"
,"Country_Hong.Kong..S.A.R.."
,"Country_I.prefer.not.to.say"
,"Country_Iceland"
,"Country_Ireland"
,"Country_Israel"
,"Country_Liechtenstein"
,"Country_Luxembourg"
,"Country_Myanmar"
,"Country_Netherlands"
,"Country_New.Zealand"
,"Country_Norway"
,"Country_Puerto.Rico"
,"Country_Saint.Lucia"
,"Country_Sri.Lanka"
,"Country_Switzerland"
,"Country_United.Kingdom"
,"Country_United.States"
,"Country_Virgin.Islands..USA."
,"Employment_Employed.full.time"
,"Employment_Employed.part.time"
,"UndergradMajor_Another.engineering.discipline"
,"UndergradMajor_Business"
,"UndergradMajor_Computer.science"
,"UndergradMajor_Mathematics.or.statistics"]
for i in range(len(nonmetricfeatures_ols)):
nonmetricfeatures_ols[i] = nonmetricfeatures_ols[i].replace(".", " ")
print(nonmetricfeatures_ols)
#non metric features to analyze
nonmetricfeatures=["Year",'Data scientist or machine learning specialist', 'Database administrator', 'Data or business analyst', 'Engineer, data', 'Cassandra', 'PostgreSQL', 'Redis', 'SQLite', 'Amazon DynamoDB', 'Amazon RDS/Aurora', 'Amazon Redshift', 'Apache HBase', 'DynamoDB', 'Elasticsearch', 'Google BigQuery', 'MariaDB', 'Memcached', 'Microsoft Azure (Tables, CosmosDB, SQL, etc)', 'Microsoft SQL Server', 'Other(s):', 'Dart', 'Go', 'Groovy', 'Matlab', 'Objective-C', 'PHP', 'R', 'Ruby', 'SQL', 'Scala', 'TypeScript', 'VBA', 'Bash/Shell/PowerShell', 'CSS', 'HTML/CSS', 'Kotlin', 'Other(s):', 'Country_Algeria', 'Country_Australia', 'Country_Austria', 'Country_Belgium', 'Country_Bermuda', 'Country_Botswana', 'Country_Canada', 'Country_Denmark', 'Country_Germany', 'Country_Hong Kong (S.A.R.)', 'Country_I prefer not to say', 'Country_Iceland', 'Country_Ireland', 'Country_Israel', 'Country_Liechtenstein', 'Country_Luxembourg', 'Country_Myanmar', 'Country_Netherlands', 'Country_New Zealand', 'Country_Norway', 'Country_Puerto Rico', 'Country_Saint Lucia', 'Country_Sri Lanka', 'Country_Switzerland', 'Country_United Kingdom', 'Country_United States', 'Country_Virgin Islands (USA)', 'Employment_Employed full-time', 'Employment_Employed part-time', 'UndergradMajor_Another engineering discipline', 'UndergradMajor_Business', 'UndergradMajor_Computer science', 'UndergradMajor_Mathematics or statistics']
# a new copy of the database
kprot_data = df_all_normalized.copy()
kprot_data.info(verbose=True)
# obtaining the indices of the categorical features (this hard-codes the first 73 columns of kprot_data as the non-metric ones)
kprot_nonmetric=list(range(0,73))
from sklearn.metrics import davies_bouldin_score
from sklearn import metrics
from kmodes.kprototypes import KPrototypes
#K-prototypes clustering on social perspective
kproto = KPrototypes(n_clusters= 5 , init='Cao', n_jobs = 4)
kpro_cluster = kproto.fit_predict(kprot_data, categorical=kprot_nonmetric)
#number of donors per cluster
pd.Series(kpro_cluster).value_counts()
#labels
kpro_cluster
#dataset with labels
kprot_data_label=pd.concat((kprot_data, pd.Series(kpro_cluster, name='labels', index=kprot_data.index)), axis=1)
#to transform to the not scalled data
#iqr_1=iqr(donors_withoutscaler[metric_features].values, axis=0)
#median_1=np.median(donors_withoutscaler[metric_features].values, axis=0)
#kprot_data_notransf=(kprot_data[metric_features]*iqr_1)+median_1
kprot_data_label_notransf=pd.concat((df_all_withoutscaler, pd.Series(kpro_cluster, name='K_proto_labels', index=df_all_withoutscaler.index)), axis=1)
kprot_data_label_notransf.groupby(by='K_proto_labels').mean()[metric_features]
kprot_data_label_notransf.groupby(by='K_proto_labels').mean()[nonmetricfeatures]
# using R^2
def get_ss(df):
ss = np.sum(df.var() * (df.count() - 1))
return ss
sst = get_ss(kprot_data_label_notransf[metric_features])
ssw_labels = kprot_data_label_notransf[metric_features + ["K_proto_labels"]].groupby(by='K_proto_labels').apply(get_ss)
ssb = sst - np.sum(ssw_labels)
r2_kp = ssb / sst
r2_kp
#nonmetricfeatures= df_all_normalized.drop(columns=["ConvertedComp","JobSat", "YearsCodePro","EdLvel","OrgSize"]).columns
#nonmetricfeatures
df_all_withoutscaler.info(verbose=True)
#graphs for the non metric features
for var in nonmetricfeatures:
metric_feature = sns.countplot(data=df_all_withoutscaler, x= 'labels', hue=var)
plt.show()
figure = metric_feature.get_figure()
name = 'barplot_'+var+'.png'
#figure.savefig(name, dpi=400)
# First, we will divided the salary by 1000 for easy interpretation
df['ConvertedComp'] = df['ConvertedComp']/1000
df_all['ConvertedComp']=df_all['ConvertedComp']/1000<jupyter_output><empty_output><jupyter_text>## 1. What is the impact of education major on salary<jupyter_code># Use normal dataframe
df.head()
fig, axes = plt.subplots(nrows = 1, ncols = 4, sharex="all", figsize=(16,4))
sns.set_style("white")
names = ['Engineering', 'Business', 'Computer science', 'Maths&Stats']
major_types = ["UndergradMajor_Another engineering discipline", "UndergradMajor_Business", "UndergradMajor_Computer science", "UndergradMajor_Mathematics or statistics"]
for major_type, name, ax in zip(major_types ,names , axes.flatten()):
ax = sns.pointplot(x="Year", y="ConvertedComp", hue=major_type,
capsize=.2, palette="rocket", legend_out=True,
data=df_all, ax= ax)
ax.set_title(name, fontsize=14)
ax.set_ylim(bottom=50, top=90)
plt.show()
from numpy import median
plt.figure(figsize=(300, 300))
sns.set(font_scale = 1)
plt.tight_layout()
sns.catplot(x="UndergradMajor", y="ConvertedComp", kind="point", data=df, ci=0, palette="rocket")
plt.xlabel('')
plt.ylabel("Compensation", size=10)
plt.xticks(fontsize=10, rotation=90)
plt.title("Average salary per major", size=14)
plt.grid()
from numpy import median
plt.figure(figsize=(300, 300))
sns.set(font_scale = 1)
plt.tight_layout()
sns.catplot(x="UndergradMajor", y="ConvertedComp", hue="Year", kind="point", data=df, estimator=median, ci=0, palette="rocket")
plt.xlabel('')
plt.ylabel("Compensation", size=10)
plt.xticks(fontsize=10, rotation=90)
plt.title("Median salary per major", size=14)
plt.grid()
fig, ax = plt.subplots(figsize=(20, 15))
sns.boxplot(x="Year", y="ConvertedComp",
hue = 'UndergradMajor',
data=df,
ax = ax, palette="rocket")
sns.despine(offset=10, trim=True)
plt.show()
plt.figure(figsize=(100, 50))
sns.set(font_scale = 6)
sns.despine(left=True)
sns.barplot(x="Year",
y="ConvertedComp",
hue="UndergradMajor",
data=df)
plt.ylabel("Compensation", size=70)
plt.legend(loc=3, prop={'size': 50})
plt.xlabel("Year", size=70)
plt.title("Average salary per major", size=90)
plt.grid()
#plt.savefig("grouped_barplot_Seaborn_barplot_Python.png")
from math import ceil
# All Numeric Variables' Histograms in one figure
sns.set()
# Prepare figure. Create individual axes where each histogram will be placed
fig, axes = plt.subplots(2, ceil(len(metric_features) / 2), figsize=(20, 11))
# Plot data
# Iterate across axes objects and associate each histogram (hint: use the ax.hist() instead of plt.hist()):
for ax, feat in zip(axes.flatten(), metric_features): # Notice the zip() function and flatten() method
ax.hist(df[feat])
ax.set_title(feat, y=-0.13)
# Layout
# Add a centered title to the figure:
title = "Numeric Variables' Histograms"
plt.suptitle(title)
#plt.savefig(os.path.join('..', 'figures', 'exp_analysis', 'numeric_variables_histograms.png'), dpi=200)
plt.show()
plt.figure(figsize=(100, 70))
sns.set(font_scale = 6)
sns.boxenplot(x="Year",
y="ConvertedComp",
hue="UndergradMajor",
data=df)
sns.set(rc={'axes.facecolor':'cornflowerblue', 'figure.facecolor':'cornflowerblue'})
plt.ylabel("Compensation", size=70)
plt.legend(loc=1, prop={'size': 50})
plt.xlabel("Year", size=70)
plt.title("Grouped Boxenplot: Impact of major in Salary", size=90)
#plt.savefig("grouped_barplot_Seaborn_barplot_Python.png")
plt.figure(figsize=(200, 300))
sns.set(font_scale = 20)
ax = sns.countplot(y="UndergradMajor", hue="Year", data=df, order = df["UndergradMajor"].value_counts().index, palette="rocket")
ax.set_ylabel('')
plt.legend(loc=4, prop={'size': 200})
plt.title("Undergraduate major presence", size=190)
plt.grid()
plt.figure(figsize=(16,4))
sns.set(font_scale = 1)
sns.set_style("white")
#names = ["UndergradMajor_Another engineering discipline", "UndergradMajor_Business", "UndergradMajor_Computer science", "UndergradMajor_Mathematics or statistics"]
#major_types = ['Engineering', 'Business', 'Computer science', 'Maths&Stats']
sns.catplot(x="Year", y="ConvertedComp", hue="UndergradMajor_Another engineering discipline", kind="point",capsize=.2, palette="rocket", data=df_all, ax= ax, legend_out=True)
plt.title("Another engineering", size=14)
plt.ylabel("Compensation", size=9)
plt.xlabel("Year", size=9)
plt.legend(loc=4, prop={'size': 9})
#ax.set_ylim(bottom=50, top=75)
plt.figure(figsize=(16,4))
sns.set(font_scale = 1)
sns.set_style("white")
#names = ["UndergradMajor_Another engineering discipline", "UndergradMajor_Business", "UndergradMajor_Computer science", "UndergradMajor_Mathematics or statistics"]
#major_types = ['Engineering', 'Business', 'Computer science', 'Maths&Stats']
sns.catplot(x="Year", y="ConvertedComp", hue="UndergradMajor_Business", kind="point",capsize=.2, palette="rocket", data=df_all, ax= ax, legend_out=True, ci="sd")
plt.title("Business", size=14)
plt.ylabel("Compensation", size=9)
plt.xlabel("Year", size=9)
plt.legend(loc=4, prop={'size': 9})
#ax.set_ylim(bottom=50, top=75)
plt.figure(figsize=(16,4))
sns.set(font_scale = 1)
sns.set_style("white")
#names = ["UndergradMajor_Another engineering discipline", "UndergradMajor_Business", "UndergradMajor_Computer science", "UndergradMajor_Mathematics or statistics"]
#major_types = ['Engineering', 'Business', 'Computer science', 'Maths&Stats']
sns.catplot(x="Year", y="ConvertedComp", hue="UndergradMajor_Computer science", kind="point",capsize=.2, palette="rocket", data=df_all, ax= ax, legend_out=True)
plt.title("Computer science", size=14)
plt.ylabel("Compensation", size=9)
plt.xlabel("Year", size=9)
plt.legend(loc=4, prop={'size': 9})
#ax.set_ylim(bottom=50, top=75)
plt.figure(figsize=(16,4))
sns.set(font_scale = 1)
sns.set_style("white")
#names = ["UndergradMajor_Another engineering discipline", "UndergradMajor_Business", "UndergradMajor_Computer science", "UndergradMajor_Mathematics or statistics"]
#major_types = ['Engineering', 'Business', 'Computer science', 'Maths&Stats']
sns.catplot(x="Year", y="ConvertedComp", hue="UndergradMajor_Mathematics or statistics", kind="point",capsize=.2, palette="rocket", data=df_all, ax= ax, legend_out=True)
plt.title("Mathematics or statistics", size=14)
plt.ylabel("Compensation", size=9)
plt.xlabel("Year", size=9)
plt.legend(loc=4, prop={'size': 9})
#ax.set_ylim(bottom=50, top=75)<jupyter_output><empty_output><jupyter_text>## 2. What is the impact of Job type on salary<jupyter_code>df_job = pd.concat([df, dm_dev_type],
axis=1, join='inner')
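# A minimal sketch (not part of the original notebook): average salary (in thousands) for each
# data-related job type encoded in dm_dev_type.
job_cols = ['Data or business analyst', 'Data scientist or machine learning specialist',
            'Database administrator', 'Engineer, data']
job_salary = {job: df_job.loc[df_job[job] == 1, 'ConvertedComp'].mean() for job in job_cols}
print(pd.Series(job_salary).sort_values(ascending=False))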
df_job.head()<jupyter_output><empty_output><jupyter_text>## 3. What is the impact of Year of experience on Salary<jupyter_code># Use normal dataframe
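# A minimal sketch (not part of the original notebook): salary against professional coding
# experience, with bin edges chosen arbitrarily for readability.
exp_bins = pd.cut(df['YearsCodePro'], bins=[0, 2, 5, 10, 20, 50], include_lowest=True)
print(df.groupby(exp_bins)['ConvertedComp'].mean())
sns.scatterplot(x='YearsCodePro', y='ConvertedComp', data=df, alpha=0.2)
plt.show()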
df.head()<jupyter_output><empty_output><jupyter_text>## 4. What is the impact of Programming language on Salary<jupyter_code># concat programming language features together
df_p_language = pd.concat([df, dm_db_nextyear, dm_db_work, dm_language_nextyear, dm_language_work],
axis=1, join='inner')
df_p_language.head()
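# A minimal sketch (not part of the original notebook): mean salary of respondents who worked
# with a few selected languages, using the language dummies built earlier (the language list
# here is just an illustrative assumption).
langs = ['Python', 'R', 'SQL', 'Scala']
lang_salary = {lang: df.loc[dm_language_work.loc[df.index, lang] == 1, 'ConvertedComp'].mean()
               for lang in langs}
print(pd.Series(lang_salary).sort_values(ascending=False))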
from sklearn.manifold import TSNE
# This is step can be quite time consuming
two_dim = TSNE(random_state=42).fit_transform(df_all_normalized[metric_features])
# t-SNE visualization
pd.DataFrame(two_dim).plot.scatter(x=0, y=1, c=df_all_withoutscaler['labels'], colormap='tab10', figsize=(15,10))
plt.show()
# First, we will divided the salary by 1000 for easy interpretation
df_all_withoutscaler['ConvertedComp'] = df_all_withoutscaler['ConvertedComp']/1000
df_all_normalized['ConvertedComp']=df_all_normalized['ConvertedComp']/1000
def cluster_profiles(df, label_columns, figsize, compar_titles=None):
"""
Pass df with labels columns of one or multiple clustering labels.
Then specify this label columns to perform the cluster profile according to them.
"""
if compar_titles == None:
compar_titles = [""]*len(label_columns)
sns.set()
fig, axes = plt.subplots(nrows=len(label_columns), ncols=2, figsize=figsize, squeeze=False)
for ax, label, titl in zip(axes, label_columns, compar_titles):
# Filtering df
drop_cols = [i for i in label_columns if i!=label]
dfax = df.drop(drop_cols, axis=1)
# Getting the cluster centroids and counts
centroids = dfax.groupby(by=label, as_index=False).mean()
counts = dfax.groupby(by=label, as_index=False).count().iloc[:,[0,1]]
counts.columns = [label, "counts"]
# Setting Data
pd.plotting.parallel_coordinates(centroids, label, color=sns.color_palette(), ax=ax[0])
sns.barplot(x=label, y="counts", data=counts, ax=ax[1])
#Setting Layout
handles, _ = ax[0].get_legend_handles_labels()
cluster_labels = ["Cluster {}".format(i) for i in range(len(handles))]
# ax[0].annotate(text=titl, xy=(0.95,1.1), xycoords='axes fraction', fontsize=13, fontweight = 'heavy')
ax[0].legend(handles, cluster_labels) # Adaptable to number of clusters
ax[0].axhline(color="black", linestyle="--")
ax[0].set_title("Cluster Means - {} Clusters".format(len(handles)), fontsize=13)
ax[0].set_xticklabels(ax[0].get_xticklabels(), rotation=-20)
ax[1].set_xticklabels(cluster_labels)
ax[1].set_xlabel("")
ax[1].set_ylabel("Absolute Frequency")
ax[1].set_title("Cluster Sizes - {} Clusters".format(len(handles)), fontsize=13)
plt.subplots_adjust(hspace=0.4, top=0.90)
plt.suptitle("Cluster Simple Profilling", fontsize=23)
plt.show()
# Profilling each cluster (product, behavior, merged)
cluster_profiles(
df = df_all_normalized[metric_features + ['labels']],
label_columns = ['labels'],
figsize = (50, 25),
compar_titles = ["Clustering"]
)
df_all
df_all.shape[0]
#! pip install pywaffle
# Reference: https://stackoverflow.com/questions/41400136/how-to-do-waffle-charts-in-python-square-piechart
from pywaffle import Waffle
# Prepare Data
df_grouby = df_all.groupby('labels').size().reset_index(name='counts')
n_categories = df_grouby.shape[0]
colors = [plt.cm.inferno_r(i/float(n_categories)) for i in range(n_categories)]
# Draw Plot and Decorate
fig = plt.figure(
    FigureClass=Waffle,
    plots={
        '111': {
            'values': df_grouby['counts'],
            'labels': ["Cluster {0} ({1})".format(n.labels, n.counts) for n in df_grouby.itertuples()],
            'legend': {'loc': 'upper left', 'bbox_to_anchor': (1.05, 1), 'fontsize': 12},
            'title': {'label': '# Respondents by Cluster', 'loc': 'center', 'fontsize':18}
},
},
rows=7,
colors=colors,
figsize=(16, 9)
)
#! pip install "notebook>=5.3" "ipywidgets>=7.5"
#! pip install plotly==4.14.3
import plotly
import plotly.graph_objs as go
plotly.offline.init_notebook_mode()
# split df into cluster groups
grouped = df_all_normalized.groupby(['labels'], sort=True)
# compute sums for every column in every group
sums = grouped.mean()
sums
data = [go.Heatmap( z=sums.values.tolist(),
y=['Cluster 1', 'Cluster 2', 'Cluster 3', 'Cluster 4', 'Cluster 5'],
x=['ConvertedComp','YearsCodePro',"OrgSize", "EdLevel"
],
colorscale='Viridis')]
plotly.offline.iplot(data, filename='pandas-heatmap')
# Prepare figure
fig = plt.figure(figsize=(8, 8))
# Obtain correlation matrix. Round the values to 2 decimal cases. Use the DataFrame corr() and round() method.
corr = np.round(df_all_normalized[metric_features].corr(method="pearson"), decimals=2)
# Plot heatmap of the correlation matrix
sns.set(font_scale = 2)
sns.heatmap(data=corr, annot=True, cmap=sns.diverging_palette(220, 10, as_cmap=True),
square=True, linewidths=.5)
# Layout
fig.subplots_adjust(top=0.95)
fig.suptitle("Correlation Matrix", fontsize=20)
plt.show()<jupyter_output><empty_output><jupyter_text>## OLS variables <jupyter_code>significant=["Year", "EdLevel", "OrgSize", "YearsCodePro", 'Data scientist or machine learning specialist', 'Database administrator', 'Data or business analyst', 'Engineer, data', 'Cassandra', 'PostgreSQL', 'Redis', 'SQLite', 'Amazon DynamoDB', 'Amazon RDS/Aurora', 'Amazon Redshift', 'Apache HBase', 'DynamoDB', 'Elasticsearch', 'Google BigQuery', 'MariaDB', 'Memcached', 'Microsoft Azure (Tables, CosmosDB, SQL, etc)', 'Microsoft SQL Server', 'Other(s):', 'Dart', 'Go', 'Groovy', 'Matlab', 'Objective-C', 'PHP', 'R', 'Ruby', 'SQL', 'Scala', 'TypeScript', 'VBA', 'Bash/Shell/PowerShell', 'CSS', 'HTML/CSS', 'Kotlin', 'Other(s):', 'Country_Algeria', 'Country_Australia', 'Country_Austria', 'Country_Belgium', 'Country_Bermuda', 'Country_Botswana', 'Country_Canada', 'Country_Denmark', 'Country_Germany', 'Country_Hong Kong (S.A.R.)', 'Country_I prefer not to say', 'Country_Iceland', 'Country_Ireland', 'Country_Israel', 'Country_Liechtenstein', 'Country_Luxembourg', 'Country_Myanmar', 'Country_Netherlands', 'Country_New Zealand', 'Country_Norway', 'Country_Puerto Rico', 'Country_Saint Lucia', 'Country_Sri Lanka', 'Country_Switzerland', 'Country_United Kingdom', 'Country_United States', 'Country_Virgin Islands (USA)', 'Employment_Employed full-time', 'Employment_Employed part-time', 'UndergradMajor_Another engineering discipline', 'UndergradMajor_Business', 'UndergradMajor_Computer science', 'UndergradMajor_Mathematics or statistics']
#so we will remove job satisfaction
# Some studies are also in agreement that those with higher degrees and more education tend to
# have higher salaries (Barkley, Stock, & Sylvius, 1999). One study even went as far as to look
# at salaries across academic disciplines and found that those in the social sciences tend to have
# lower salaries than those in the law and engineering disciplines (Hansen, 1985).
# https://search.proquest.com/openview/a3670ed2c2a37493cc3fffe666118891/1?pq-origsite=gscholar&cbl=41798
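# The 'significant' list above presumably reflects an OLS significance screen that is not shown
# in this notebook. Below is a minimal sketch (an assumption, not the original model) of how such
# a screen could be run with statsmodels on the numeric columns of df_all:
import statsmodels.api as sm
ols_data = df_all.select_dtypes(include=[np.number]).dropna()
ols_y = ols_data['ConvertedComp']
ols_X = sm.add_constant(ols_data.drop(columns=['ConvertedComp', 'labels'], errors='ignore'))
ols_fit = sm.OLS(ols_y, ols_X).fit()
print(ols_fit.pvalues[ols_fit.pvalues < 0.05].index.tolist())  # candidate significant predictors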
df_all["Python"].sum()
df_all.shape<jupyter_output><empty_output>
| no_license | /stats_project 28.01.ipynb | phucnh22/stat_project | 22 |
<jupyter_start><jupyter_text># **Prediction using Unsupervised ML**
**K means clustering**
Author: Isha Shende
<jupyter_code># Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import datasets
# Load the iris dataset
iris = datasets.load_iris()
iris_df = pd.DataFrame(iris.data, columns = iris.feature_names)
iris_df.head() # See the first 5 rows<jupyter_output><empty_output><jupyter_text>**Finding the optimum number of clusters for K Means.**<jupyter_code># Finding the optimum number of clusters for k-means classification
x = iris_df.iloc[:, [0, 1, 2, 3]].values
from sklearn.cluster import KMeans
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
# Plotting the results onto a line graph,
# `allowing us to observe 'The elbow'
plt.plot(range(1, 11), wcss)
plt.title('The elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS') # Within cluster sum of squares
plt.show()
# Applying kmeans to the dataset / Creating the kmeans classifier
kmeans = KMeans(n_clusters = 3, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
y_kmeans = kmeans.fit_predict(x)<jupyter_output><empty_output><jupyter_text>**Visualising the clusters**<jupyter_code>plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1],
s = 100, c = 'red', label = 'Iris-setosa')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1],
s = 100, c = 'blue', label = 'Iris-versicolour')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1],
s = 100, c = 'green', label = 'Iris-virginica')
# Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1],
s = 100, c = 'yellow', label = 'Centroids')
plt.legend()<jupyter_output><empty_output>
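<jupyter_text>As an optional check (a sketch, not part of the original task), the cluster assignments can be cross-tabulated against the known species labels to see how well the three clusters line up with the actual classes:<jupyter_code># Rows: true species (0 = setosa, 1 = versicolour, 2 = virginica); columns: cluster label
print(pd.crosstab(iris.target, y_kmeans))<jupyter_output><empty_output>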
|
no_license
|
/Prediction using unsupervised ML(K means clustering).ipynb
|
ishashende/ML
| 3 |
<jupyter_start><jupyter_text>PassengerId: A unique index for passenger rows. It starts from 1 for the first row and increments by 1 for every new row.
Survived: Shows if the passenger survived or not. 1 stands for survived and 0 stands for not survived.
Pclass: Ticket class. 1 stands for First class ticket. 2 stands for Second class ticket. 3 stands for Third class ticket.
Name: Passenger's name. The name also contains a title: "Mr" for a man, "Mrs" for a woman, "Miss" for a girl, "Master" for a boy.
Sex: Passenger's sex. It's either Male or Female.
Age: Passenger's age. "NaN" values in this column indicates that the age of that particular passenger has not been recorded.
SibSp: Number of siblings or spouses travelling with each passenger.
Parch: Number of parents or children travelling with each passenger.
Ticket: Ticket number.
Fare: How much money the passenger has paid for the travel journey.
Cabin: Cabin number of the passenger. "NaN" values in this column indicates that the cabin
number of that particular passenger has not been recorded.
Embarked: Port from where the particular passenger was embarked/boarded.
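The exploration below assumes the usual imports and that the training file has already been read into `train`; a minimal setup cell (with a hypothetical file name) would look like this:<jupyter_code># Hypothetical setup cell; adjust the CSV path to your local copy of the Kaggle data
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

train = pd.read_csv('train.csv')<jupyter_output><empty_output>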
<jupyter_code>train.shape
train.describe()
train.describe(include=['O'])
train.info()
train.isnull().sum()
survived = train[train['Survived'] == 1]
not_survived = train[train['Survived'] == 0]
print ("Survived: %i (%.1f%%)"%(len(survived), float(len(survived))/len(train)*100.0))
print ("Not Survived: %i (%.1f%%)"%(len(not_survived), float(len(not_survived))/len(train)*100.0))
print ("Total: %i"%len(train))<jupyter_output>Survived: 342 (38.4%)
Not Survived: 549 (61.6%)
Total: 891
<jupyter_text># Pclass vs. Survival<jupyter_code>train.Pclass.value_counts()
train.groupby('Pclass').Survived.value_counts()
sns.barplot(x='Pclass', y='Survived', data=train)<jupyter_output><empty_output><jupyter_text># Sex vs Survival<jupyter_code>train.Sex.value_counts()
train[['Sex', 'Survived']].groupby(['Sex'], as_index=False).mean()
sns.barplot(x='Sex', y='Survived', data=train)<jupyter_output><empty_output><jupyter_text># Pclass & Sex vs. Survival<jupyter_code>tab = pd.crosstab(train['Pclass'], train['Sex'])
print (tab)
tab.div(tab.sum(1).astype(float), axis=0).plot(kind="bar", stacked=True)
plt.xlabel('Pclass')
plt.ylabel('Percentage')
sns.factorplot('Sex', 'Survived', hue='Pclass', size=4, aspect=2, data=train)<jupyter_output>/opt/anaconda3/envs/ml/lib/python3.8/site-packages/seaborn/categorical.py:3714: UserWarning: The `factorplot` function has been renamed to `catplot`. The original name will be removed in a future release. Please update your code. Note that the default `kind` in `factorplot` (`'point'`) has changed `'strip'` in `catplot`.
warnings.warn(msg)
/opt/anaconda3/envs/ml/lib/python3.8/site-packages/seaborn/categorical.py:3720: UserWarning: The `size` parameter has been renamed to `height`; please update your code.
warnings.warn(msg, UserWarning)
/opt/anaconda3/envs/ml/lib/python3.8/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
<jupyter_text># Pclass, Sex & Embarked vs. Survival<jupyter_code>sns.factorplot(x='Pclass', y='Survived', hue='Sex', col='Embarked', data=train)<jupyter_output>/opt/anaconda3/envs/ml/lib/python3.8/site-packages/seaborn/categorical.py:3714: UserWarning: The `factorplot` function has been renamed to `catplot`. The original name will be removed in a future release. Please update your code. Note that the default `kind` in `factorplot` (`'point'`) has changed `'strip'` in `catplot`.
warnings.warn(msg)
<jupyter_text># Embarked vs. Survival<jupyter_code>train.Embarked.value_counts()
train.groupby('Embarked').Survived.value_counts()
train[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean()
sns.barplot(x='Embarked', y='Survived', data=train)<jupyter_output><empty_output><jupyter_text># Parch vs.Survival <jupyter_code>train.Parch.value_counts()
train.groupby('Parch').Survived.value_counts()
train[['Parch', 'Survived']].groupby(['Parch'], as_index=False).mean()
sns.barplot(x='Parch', y='Survived', ci=None, data=train) # ci=None will hide the error bar<jupyter_output><empty_output><jupyter_text># SibSp vs. Survival<jupyter_code>train.SibSp.value_counts()
train.groupby('SibSp').Survived.value_counts()
train[['SibSp', 'Survived']].groupby(['SibSp'], as_index=False).mean()
sns.barplot(x='SibSp', y='Survived', ci=None, data=train) # ci=None will hide the error bar<jupyter_output><empty_output><jupyter_text># Age vs. Survival<jupyter_code>fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
sns.violinplot(x="Embarked", y="Age", hue="Survived", data=train, split=True, ax=ax1)
sns.violinplot(x="Pclass", y="Age", hue="Survived", data=train, split=True, ax=ax2)
sns.violinplot(x="Sex", y="Age", hue="Survived", data=train, split=True, ax=ax3)
total_survived = train[train['Survived']==1]
total_not_survived = train[train['Survived']==0]
male_survived = train[(train['Survived']==1) & (train['Sex']=="male")]
female_survived = train[(train['Survived']==1) & (train['Sex']=="female")]
male_not_survived = train[(train['Survived']==0) & (train['Sex']=="male")]
female_not_survived = train[(train['Survived']==0) & (train['Sex']=="female")]
plt.figure(figsize=[15,5])
plt.subplot(111)
sns.distplot(total_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='blue')
sns.distplot(total_not_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='red', axlabel='Age')
plt.figure(figsize=[15,5])
plt.subplot(121)
sns.distplot(female_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='blue')
sns.distplot(female_not_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='red', axlabel='Female Age')
plt.subplot(122)
sns.distplot(male_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='blue')
sns.distplot(male_not_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='red', axlabel='Male Age')<jupyter_output>/opt/anaconda3/envs/ml/lib/python3.8/site-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
<jupyter_text>
From the above figures, we can see that:
Combining both male and female, we can see that children with age between 0 to 5 have better chance of survival.
Females with age between "18 to 40" and "50 and above" have higher chance of survival.
Males with age between 0 to 14 have better chance of survival.
# Correlating Features<jupyter_code>plt.figure(figsize=(15,6))
sns.heatmap(train.drop('PassengerId',axis=1).corr(), vmax=0.6, square=True, annot=True)<jupyter_output><empty_output>
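<jupyter_text>To back the age-related observations above with numbers (a sketch, not part of the original notebook), survival rates can also be tabulated over coarse age bands:<jupyter_code># Survival rate per age band; rows with missing Age are simply excluded from the groupby
age_bands = pd.cut(train['Age'], bins=[0, 5, 14, 40, 80])
print(train.groupby(age_bands)['Survived'].mean())<jupyter_output><empty_output>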
|
no_license
|
/Titanic_project/titanic_data_exploration.ipynb
|
Deniz-shelby/Machine_learning
| 10 |
<jupyter_start><jupyter_text># Gene Expression analysis and clustering using Birch<jupyter_code>"""
Read the dataset
1. Perform the PCA
2. Do a visualization on PCA and understand the distribution of the data
3. Clustering the PCA using Birch
4. Visualize the formed cluster
5. Cluster using the Birch
6. Visualize the Cluster
7. Do the Data normalization
8. Apply PCA on normalized data
9. Visualize the PCA
10. Cluster the normalized PCA data
11. Visualize the cluster centers
12. Cluster the Normalized data using Birch
13. Visualize the Cluster center
14. Prepare the conclusion on the performance of Birch on PCA, Actual Data, Normalized PCA, and Normalized Data
"""<jupyter_output><empty_output><jupyter_text>## Import the Libraries<jupyter_code>import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.cluster import Birch
from sklearn.decomposition import PCA
from sklearn import preprocessing
import numpy as np<jupyter_output><empty_output><jupyter_text>## Read the dataset<jupyter_code>original_df = pd.read_excel("./dataset/data_set_ALL_AML_train.xlsx")
feature_length = len(original_df.columns)
print("Feature size is", feature_length)<jupyter_output>Feature size is 38
<jupyter_text>## Perform the PCA & Visualize<jupyter_code>pca = PCA(2)
pca_data = pca.fit_transform(original_df)
clr1 = '#2026B2'
fig = plt.figure(figsize=(15, 15), dpi=80)
ax1 = fig.add_subplot(111)
ax1.set_title("2 Component PCA Analysis on Geneexpression Dataset")
ax1.plot(pca_data[:, 0], pca_data[:, 1], '.', mfc=clr1, mec=clr1)
plt.show()<jupyter_output><empty_output><jupyter_text>## Cluster the PCA data using Birch<jupyter_code>birch = Birch(branching_factor=50, compute_labels=True, copy=True, n_clusters=None,threshold=2000)
birch.fit_transform(pca_data)
print(f"Cluster Size with threshold {birch.threshold} is ", len(birch.subcluster_centers_))<jupyter_output>Cluster Size with threshold 2000 is 136
<jupyter_text>## Visualize the cluster centers<jupyter_code>cluster_centers = birch.subcluster_centers_
labels = birch.labels_
fig = plt.figure(figsize=(15, 15), dpi=200)
ax = fig.add_subplot(111)
ax.set_title("Cluster centers on 2 Component PCA in Geneexpression Dataset on Birch")
for x,y,lab in zip(cluster_centers[:, 0],cluster_centers[:, 1],labels):
ax.scatter(x,y,label=lab, s= 250)
colormap = plt.cm.gist_ncar #nipy_spectral, Set1,Paired
colorst = [colormap(i) for i in np.linspace(0, 0.9,len(ax.collections))]
for t,j1 in enumerate(ax.collections):
j1.set_color(colorst[t])
<jupyter_output><empty_output><jupyter_text>## Apply the data normalization on the expression dataset<jupyter_code>x = original_df.values #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
df_normalized = pd.DataFrame(x_scaled)<jupyter_output><empty_output><jupyter_text>## Apply PCA on Normalized data & Visualize<jupyter_code>pca = PCA(2)
pca_norm_data = pca.fit_transform(df_normalized)
clr1 = '#2026B2'
fig = plt.figure(figsize=(15, 15), dpi=80)
ax1 = fig.add_subplot(111)
ax1.set_title("2 Component PCA Analysis on Geneexpression Dataset")
ax1.plot(pca_norm_data[:, 0], pca_norm_data[:, 1], '.', mfc=clr1, mec=clr1)
plt.show()<jupyter_output><empty_output><jupyter_text>## Cluster the normalized PCA Data using Birch<jupyter_code>birch = Birch(branching_factor=50, compute_labels=True, copy=True, n_clusters=None,threshold=0.03)
birch.fit_transform(pca_norm_data)<jupyter_output><empty_output><jupyter_text>## Visualize the cluster centers<jupyter_code>cluster_centers = birch.subcluster_centers_
labels = birch.labels_
fig = plt.figure(figsize=(15, 15), dpi=200)
ax = fig.add_subplot(111)
ax.set_title("Cluster centers on 2 Component PCA in Normalized Gene expression Dataset on Birch")
for x,y,lab in zip(cluster_centers[:, 0],cluster_centers[:, 1],labels):
ax.scatter(x,y,label=lab, s= 250)
colormap = plt.cm.gist_ncar #nipy_spectral, Set1,Paired
colorst = [colormap(i) for i in np.linspace(0, 0.9,len(ax.collections))]
for t,j1 in enumerate(ax.collections):
j1.set_color(colorst[t])
pca_norm_data<jupyter_output><empty_output><jupyter_text>## Cluster the original data using Birch<jupyter_code>birch = Birch(branching_factor=50, compute_labels=True, copy=True, n_clusters=None,threshold=2000)
birch.fit_transform(original_df)
<jupyter_output><empty_output><jupyter_text>## Visualize the cluster centers<jupyter_code>cluster_centers = birch.subcluster_centers_
labels = birch.labels_
fig = plt.figure(figsize=(15, 15), dpi=200)
ax = fig.add_subplot(111)
ax.set_title("Cluster centers of original Gene expression Dataset on Birch")
for x,y,lab in zip(cluster_centers[:, 0],cluster_centers[:, 1],labels):
ax.scatter(x,y,label=lab, s= 250)
colormap = plt.cm.gist_ncar #nipy_spectral, Set1,Paired
colorst = [colormap(i) for i in np.linspace(0, 0.9,len(ax.collections))]
for t,j1 in enumerate(ax.collections):
j1.set_color(colorst[t])<jupyter_output><empty_output><jupyter_text>## Clustering Normalized data without PCA using Birch<jupyter_code>birch = Birch(branching_factor=50, compute_labels=True, copy=True, n_clusters=None,threshold=0.2)
birch.fit_transform(df_normalized)<jupyter_output><empty_output><jupyter_text>## Visualise the cluster centers<jupyter_code>cluster_centers = birch.subcluster_centers_
labels = birch.labels_
fig = plt.figure(figsize=(15, 15), dpi=200)
ax = fig.add_subplot(111)
ax.set_title("Cluster centers of normalized Gene expression Dataset(without PCA) on Birch")
for x,y,lab in zip(cluster_centers[:, 0],cluster_centers[:, 1],labels):
ax.scatter(x,y,label=lab, s= 250)
colormap = plt.cm.gist_ncar #nipy_spectral, Set1,Paired
colorst = [colormap(i) for i in np.linspace(0, 0.9,len(ax.collections))]
for t,j1 in enumerate(ax.collections):
j1.set_color(colorst[t])<jupyter_output><empty_output>
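<jupyter_text>As a small closing step (a sketch towards step 14 of the plan above), the subcluster count of the most recent Birch run can be reported explicitly; noting this number for each of the four setups (PCA, raw data, normalized PCA, normalized data) makes the comparison concrete:<jupyter_code># Report the number of subclusters found in the most recent Birch run
print("Subclusters in the last run:", len(birch.subcluster_centers_))<jupyter_output><empty_output>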
|
no_license
|
/Gene data Expression Analysis.ipynb
|
anjanajiji/BIRCH_MINI_PROJECT
| 14 |
<jupyter_start><jupyter_text># Statistics<jupyter_code>def print_data(sum_,mean_):
sum_string=["Aggregated xi:", "Aggregated meanlogr:","Aggregated npairs:"]
for i,data in enumerate(sum_):
print(sum_string[i])
print(sum_[i])
print(sep)
mean_string=["Mean xi:", "Mean meanlogr:","Mean npairs:"]
for i,data in enumerate(mean_):
print(mean_string[i])
print(mean_[i])
print(sep)<jupyter_output><empty_output><jupyter_text># Abs Plots<jupyter_code>plt_w_error(mean_abs[1],-mean_abs[0],sig_abs,r"$\gamma_+$ of Clusters On a Absolute Scale ")
print_data(sum_abs,mean_abs)
plt_w_error(r_meanlogr,-r_xi,r_sigma,r"$\gamma_+$ with Random Catalog")
print(-r_xi)
print(r_sigma)
plt_w_error(mean_abs[1],-(mean_abs[0]+r_xi),np.hypot(r_sigma,sig_abs),r"$\gamma_+$ of Clusters - $\gamma_T$ with Random Catalog")<jupyter_output><empty_output><jupyter_text># Rel plots<jupyter_code>plt_w_error(mean_rel[1],-mean_rel[0],sig_rel,r"$\gamma_+$ of Clusters, Normalized by R_LAMBDA")
print_data(sum_rel,mean_rel)<jupyter_output><empty_output>
|
no_license
|
/lib/output/im3/all/all_z/lambda_2_treecorr_im3_all_all_z.ipynb
|
zchvsre/sa
| 3 |
<jupyter_start><jupyter_text>**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/categorical-variables).**
---
By encoding **categorical variables**, you'll obtain your best results thus far!
# Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.<jupyter_code># Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex3 import *
print("Setup Complete")<jupyter_output><empty_output><jupyter_text>In this exercise, you will work with data from the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course).

Run the next code cell without changes to load the training and validation sets in `X_train`, `X_valid`, `y_train`, and `y_valid`. The test set is loaded in `X_test`.<jupyter_code>import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
X = pd.read_csv('../input/train.csv', index_col='Id')
X_test = pd.read_csv('../input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
X.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = X.SalePrice
X.drop(['SalePrice'], axis=1, inplace=True)
# To keep things simple, we'll drop columns with missing values
cols_with_missing = [col for col in X.columns if X[col].isnull().any()]
X.drop(cols_with_missing, axis=1, inplace=True)
X_test.drop(cols_with_missing, axis=1, inplace=True)
# Break off validation set from training data
X_train, X_valid, y_train, y_valid = train_test_split(X, y,
train_size=0.8, test_size=0.2,
random_state=0)<jupyter_output><empty_output><jupyter_text>Use the next code cell to print the first five rows of the data.<jupyter_code>X_train.head()<jupyter_output><empty_output><jupyter_text>Notice that the dataset contains both numerical and categorical variables. You'll need to encode the categorical data before training a model.
To compare different models, you'll use the same `score_dataset()` function from the tutorial. This function reports the [mean absolute error](https://en.wikipedia.org/wiki/Mean_absolute_error) (MAE) from a random forest model.<jupyter_code>from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
# function for comparing different approaches
def score_dataset(X_train, X_valid, y_train, y_valid):
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_valid)
return mean_absolute_error(y_valid, preds)<jupyter_output><empty_output><jupyter_text># Step 1: Drop columns with categorical data
You'll get started with the most straightforward approach. Use the code cell below to preprocess the data in `X_train` and `X_valid` to remove columns with categorical data. Set the preprocessed DataFrames to `drop_X_train` and `drop_X_valid`, respectively. <jupyter_code># Fill in the lines below: drop columns in training and validation data
drop_X_train = X_train.select_dtypes(exclude=['object'])
drop_X_valid = X_valid.select_dtypes(exclude=['object'])
# Check your answers
step_1.check()
# Lines below will give you a hint or solution code
#step_1.hint()
#step_1.solution()<jupyter_output><empty_output><jupyter_text>Run the next code cell to get the MAE for this approach.<jupyter_code>print("MAE from Approach 1 (Drop categorical variables):")
print(score_dataset(drop_X_train, drop_X_valid, y_train, y_valid))<jupyter_output><empty_output><jupyter_text>Before jumping into label encoding, we'll investigate the dataset. Specifically, we'll look at the `'Condition2'` column. The code cell below prints the unique entries in both the training and validation sets.<jupyter_code>print("Unique values in 'Condition2' column in training data:", X_train['Condition2'].unique())
print("\nUnique values in 'Condition2' column in validation data:", X_valid['Condition2'].unique())<jupyter_output><empty_output><jupyter_text># Step 2: Label encoding
### Part A
If you now write code to:
- fit a label encoder to the training data, and then
- use it to transform both the training and validation data,
you'll get an error. Can you see why this is the case? (_You'll need to use the above output to answer this question._)<jupyter_code># Check your answer (Run this code cell to receive credit!)
step_2.a.check()
#step_2.a.hint()<jupyter_output><empty_output><jupyter_text>This is a common problem that you'll encounter with real-world data, and there are many approaches to fixing this issue. For instance, you can write a custom label encoder to deal with new categories. The simplest approach, however, is to drop the problematic categorical columns.
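As a quick illustration (a hypothetical snippet, not part of the graded exercise), the mismatch can be inspected directly by comparing the category sets of the two splits:<jupyter_code># Hypothetical check: categories that appear in the validation split but not in the training split
unseen = set(X_valid['Condition2']) - set(X_train['Condition2'])
print("Entries in validation data but not in training data:", unseen)<jupyter_output><empty_output><jupyter_text>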
Run the code cell below to save the problematic columns to a Python list `bad_label_cols`. Likewise, columns that can be safely label encoded are stored in `good_label_cols`.<jupyter_code># All categorical columns
object_cols = [col for col in X_train.columns if X_train[col].dtype == "object"]
# Columns that can be safely label encoded
good_label_cols = [col for col in object_cols if
set(X_train[col]) == set(X_valid[col])]
# Problematic columns that will be dropped from the dataset
bad_label_cols = list(set(object_cols)-set(good_label_cols))
print('Categorical columns that will be label encoded:', good_label_cols)
print('\nCategorical columns that will be dropped from the dataset:', bad_label_cols)<jupyter_output><empty_output><jupyter_text>### Part B
Use the next code cell to label encode the data in `X_train` and `X_valid`. Set the preprocessed DataFrames to `label_X_train` and `label_X_valid`, respectively.
- We have provided code below to drop the categorical columns in `bad_label_cols` from the dataset.
- You should label encode the categorical columns in `good_label_cols`. <jupyter_code># Drop categorical columns that will not be encoded
label_X_train = X_train.drop(bad_label_cols, axis=1)
label_X_valid = X_valid.drop(bad_label_cols, axis=1)
# Apply label encoder
label_encoder = LabelEncoder()
for col in set(good_label_cols):
label_X_train[col] = label_encoder.fit_transform(X_train[col])
label_X_valid[col] = label_encoder.transform(X_valid[col])
# Check your answer
step_2.b.check()
# Lines below will give you a hint or solution code
#step_2.b.hint()
#step_2.b.solution()<jupyter_output><empty_output><jupyter_text>Run the next code cell to get the MAE for this approach.<jupyter_code>print("MAE from Approach 2 (Label Encoding):")
print(score_dataset(label_X_train, label_X_valid, y_train, y_valid))<jupyter_output><empty_output><jupyter_text>So far, you've tried two different approaches to dealing with categorical variables. And, you've seen that encoding categorical data yields better results than removing columns from the dataset.
Soon, you'll try one-hot encoding. Before then, there's one additional topic we need to cover. Begin by running the next code cell without changes. <jupyter_code># Get number of unique entries in each column with categorical data
object_nunique = list(map(lambda col: X_train[col].nunique(), object_cols))
d = dict(zip(object_cols, object_nunique))
# Print number of unique entries by column, in ascending order
sorted(d.items(), key=lambda x: x[1])<jupyter_output><empty_output><jupyter_text># Step 3: Investigating cardinality
### Part A
The output above shows, for each column with categorical data, the number of unique values in the column. For instance, the `'Street'` column in the training data has two unique values: `'Grvl'` and `'Pave'`, corresponding to a gravel road and a paved road, respectively.
We refer to the number of unique entries of a categorical variable as the **cardinality** of that categorical variable. For instance, the `'Street'` variable has cardinality 2.
Use the output above to answer the questions below.<jupyter_code># Fill in the line below: How many categorical variables in the training data
# have cardinality greater than 10?
high_cardinality_numcols = 3
# Fill in the line below: How many columns are needed to one-hot encode the
# 'Neighborhood' variable in the training data?
num_cols_neighborhood = 25
# Check your answers
step_3.a.check()
# Lines below will give you a hint or solution code
#step_3.a.hint()
#step_3.a.solution()<jupyter_output><empty_output><jupyter_text>### Part B
For large datasets with many rows, one-hot encoding can greatly expand the size of the dataset. For this reason, we typically will only one-hot encode columns with relatively low cardinality. Then, high cardinality columns can either be dropped from the dataset, or we can use label encoding.
As an example, consider a dataset with 10,000 rows, and containing one categorical column with 100 unique entries.
- If this column is replaced with the corresponding one-hot encoding, how many entries are added to the dataset?
- If we instead replace the column with the label encoding, how many entries are added?
Use your answers to fill in the lines below.<jupyter_code># Fill in the line below: How many entries are added to the dataset by
# replacing the column with a one-hot encoding?
OH_entries_added = 1e4*100 - 1e4
# Fill in the line below: How many entries are added to the dataset by
# replacing the column with a label encoding?
label_entries_added = 0
# Check your answers
step_3.b.check()
# Lines below will give you a hint or solution code
#step_3.b.hint()
#step_3.b.solution()<jupyter_output><empty_output><jupyter_text>Next, you'll experiment with one-hot encoding. But, instead of encoding all of the categorical variables in the dataset, you'll only create a one-hot encoding for columns with cardinality less than 10.
Run the code cell below without changes to set `low_cardinality_cols` to a Python list containing the columns that will be one-hot encoded. Likewise, `high_cardinality_cols` contains a list of categorical columns that will be dropped from the dataset.<jupyter_code># Columns that will be one-hot encoded
low_cardinality_cols = [col for col in object_cols if X_train[col].nunique() < 10]
# Columns that will be dropped from the dataset
high_cardinality_cols = list(set(object_cols)-set(low_cardinality_cols))
print('Categorical columns that will be one-hot encoded:', low_cardinality_cols)
print('\nCategorical columns that will be dropped from the dataset:', high_cardinality_cols)<jupyter_output><empty_output><jupyter_text># Step 4: One-hot encoding
Use the next code cell to one-hot encode the data in `X_train` and `X_valid`. Set the preprocessed DataFrames to `OH_X_train` and `OH_X_valid`, respectively.
- The full list of categorical columns in the dataset can be found in the Python list `object_cols`.
- You should only one-hot encode the categorical columns in `low_cardinality_cols`. All other categorical columns should be dropped from the dataset. <jupyter_code>from sklearn.preprocessing import OneHotEncoder
# Use as many lines of code as you need!
# Your code here
OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)
OH_cols_train = pd.DataFrame(OH_encoder.fit_transform(X_train[low_cardinality_cols]))
OH_cols_valid = pd.DataFrame(OH_encoder.transform(X_valid[low_cardinality_cols]))
# One-hot encoding removed index; put it back
OH_cols_train.index = X_train.index
OH_cols_valid.index = X_valid.index
# Remove categorical columns (will replace with one-hot encoding)
num_X_train = X_train.drop(object_cols, axis=1)
num_X_valid = X_valid.drop(object_cols, axis=1)
# Add one-hot encoded columns to numerical features
OH_X_train = pd.concat([num_X_train, OH_cols_train], axis=1)
OH_X_valid = pd.concat([num_X_valid, OH_cols_valid], axis=1)
# Check your answer
step_4.check()
# Lines below will give you a hint or solution code
#step_4.hint()
#step_4.solution()<jupyter_output><empty_output><jupyter_text>Run the next code cell to get the MAE for this approach.<jupyter_code>print("MAE from Approach 3 (One-Hot Encoding):")
print(score_dataset(OH_X_train, OH_X_valid, y_train, y_valid))<jupyter_output><empty_output><jupyter_text># Generate test predictions and submit your results
After you complete Step 4, if you'd like to use what you've learned to submit your results to the leaderboard, you'll need to preprocess the test data before generating predictions.
**This step is completely optional, and you do not need to submit results to the leaderboard to successfully complete the exercise.**
Check out the previous exercise if you need help with remembering how to [join the competition](https://www.kaggle.com/c/home-data-for-ml-course) or save your results to CSV. Once you have generated a file with your results, follow the instructions below:
1. Begin by clicking on the blue **Save Version** button in the top right corner of the window. This will generate a pop-up window.
2. Ensure that the **Save and Run All** option is selected, and then click on the blue **Save** button.
3. This generates a window in the bottom left corner of the notebook. After it has finished running, click on the number to the right of the **Save Version** button. This pulls up a list of versions on the right of the screen. Click on the ellipsis **(...)** to the right of the most recent version, and select **Open in Viewer**. This brings you into view mode of the same page. You will need to scroll down to get back to these instructions.
4. Click on the **Output** tab on the right of the screen. Then, click on the file you would like to submit, and click on the blue **Submit** button to submit your results to the leaderboard.
You have now successfully submitted to the competition!
If you want to keep working to improve your performance, select the blue **Edit** button in the top right of the screen. Then you can change your code and repeat the process. There's a lot of room to improve, and you will climb up the leaderboard as you work.
<jupyter_code># (Optional) Your code here<jupyter_output><empty_output>
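<jupyter_text>One possible way to fill in the optional cell above (a sketch which assumes that remaining missing values in the test data are imputed with training-set modes/medians; this is not the official solution):<jupyter_code># Sketch: preprocess the test data the same way as the training data and write a submission file
X_test_filled = X_test.copy()
for col in low_cardinality_cols:
    # Assumption: fill missing categorical entries with the most frequent training value
    X_test_filled[col] = X_test_filled[col].fillna(X_train[col].mode()[0])

OH_cols_test = pd.DataFrame(OH_encoder.transform(X_test_filled[low_cardinality_cols]))
OH_cols_test.index = X_test_filled.index
num_X_test = X_test_filled.drop(object_cols, axis=1)
OH_X_test = pd.concat([num_X_test, OH_cols_test], axis=1)
OH_X_test = OH_X_test.fillna(OH_X_test.median())  # Assumption: median-fill remaining numeric gaps

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(OH_X_train, y_train)
preds_test = model.predict(OH_X_test)
pd.DataFrame({'Id': OH_X_test.index, 'SalePrice': preds_test}).to_csv('submission.csv', index=False)<jupyter_output><empty_output>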
|
no_license
|
/intro to ml/exercise-categorical-variables.ipynb
|
Aastha-31/Kaggle_tasks
| 18 |
<jupyter_start><jupyter_text># Computational Physics
---
## Week 2: Numerical Integration<jupyter_code>import numpy
import matplotlib.pyplot as plt
%matplotlib inline<jupyter_output><empty_output><jupyter_text>Define the function `f`, such that $\textrm{f}(x) \equiv x^{2}\sin(x)$. This is the function that we will be integrating.<jupyter_code>def f(x):
'''Function equivalent to x^2 sin(x).'''
return ((x**2)*(numpy.sin(x)))<jupyter_output><empty_output><jupyter_text>Ensure your function works with numpy arrays:<jupyter_code>xs=numpy.arange(0, 1, step=0.1)
assert numpy.isclose(f(xs),
[0., 0.00099833, 0.00794677, 0.02659682, 0.06230693,
    0.11985638, 0.20327129, 0.31566667, 0.4591079 , 0.6344948 ]).all()<jupyter_output><empty_output><jupyter_text>Derive the indefinite integral of $f(x)$ analytically. Call this function $g(x)$ and implement it below. Set the constant of integration such that $g(0)=0$.<jupyter_code>def g(x):
'''Analytical integral of f(x).'''
a = (2*x*(numpy.sin(x)))
b = (((x**2)-2)*(numpy.cos(x)))
c = 2
return (a-b-c)<jupyter_output><empty_output><jupyter_text>Check your solution with the same numpy array:<jupyter_code>assert g(0) == 0.
assert numpy.isclose(g(xs),
[0., 0.00002497, 0.00039822, 0.00200482, 0.0062869,
0.01519502, 0.03112138, 0.05681646, 0.09529087, 0.1497043 ]).all()<jupyter_output><empty_output><jupyter_text>Now, using the analytically derived indefinite integral, $g(x)$, define a function which calculates the definite integral of $f(x)$ over the interval $(x_{min},~x_{max})$.<jupyter_code>def integrate_analytic(xmin, xmax):
r = g(xmax)
t = g(xmin)
    return (r-t)<jupyter_output><empty_output><jupyter_text>Check your analytic function:<jupyter_code>assert numpy.isclose(integrate_analytic(xmin=0, xmax=4), 1.096591)<jupyter_output><empty_output><jupyter_text>## Numerical implementation
Create a function which calculates the definite integral of the function $f(x)$ over the interval $(x_{min},~x_{max})$ using Simpson's rule with $N$ panels.<jupyter_code>def integrate_numeric(xmin, xmax, N):
'''
Numerical integral of f from xmin to xmax using Simpson's rule with
N panels.
'''
xs = numpy.linspace(xmin, xmax, N+1)
deltax = xs[1]-xs[0]
almost_total = 0
for i in range(0, (len(xs))):
if i == 0:
fxi = f(xs[i])
#fmi = f(((xs[i])+(xs[i-1]))/2)
#fxi_1 = f(xs[i+1])
almost_total += (fxi)
elif i == (len(xs)-1):
fxi = f(xs[i])
fmi = f(((xs[i])+(xs[i-1]))/2)
#fxi_1 = f(xs[i+1])
almost_total += ((4*fmi)+((fxi)))
else:
fxi = f(xs[i])
fmi = f(((xs[i])+(xs[i-1]))/2)
#fxi_1 = f(xs[i+1])
almost_total += ((4*fmi)+(2*(fxi)))
return ((deltax/6)*almost_total)<jupyter_output><empty_output><jupyter_text>Make sure you have implemented Simpson's rule correctly:<jupyter_code>assert numpy.isclose(integrate_numeric(xmin=0, xmax=4, N=1), 1.6266126)
assert numpy.isclose(integrate_numeric(xmin=0, xmax=4, N=50), 1.096591)<jupyter_output><empty_output><jupyter_text>## Plotting task
**Task 1**
There will always be some discrepancy between a numerically calculated result and an analytically derived result. Produce a log-log plot showing the fractional error between these two results as the number of panels is varied. The plot should have labels and a title.
<jupyter_code>x0, x1 = 0, 2 # Bounds to integrate f(x) over
panel_counts = [4, 8, 16, 32, 64, 128, 256, 512, 1024] # Panel numbers to use
result_analytic = integrate_analytic(x0, x1) # Define reference value from analytical solution
new_lst = [integrate_numeric(x0, x1, i) for i in panel_counts]
errors_lst = [abs((new_lst[i]-result_analytic)/(result_analytic)) for i in range(len(new_lst))]
plt.plot(panel_counts, errors_lst)
plt.xlabel("Number of panels used")
plt.ylabel("integration error")
plt.title("Fractional error between numerical and analytical integration")
plt.yscale('log')
plt.xscale('log')<jupyter_output><empty_output>
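<jupyter_text>As an optional cross-check (not part of the assignment), the result can also be compared against a library integrator; `scipy.integrate.quad` is assumed to be available in the environment:<jupyter_code># Optional sanity check against scipy's adaptive quadrature (assumed installed)
from scipy.integrate import quad

reference, _ = quad(f, x0, x1)
print(reference, integrate_analytic(x0, x1), integrate_numeric(x0, x1, 64))<jupyter_output><empty_output>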
|
no_license
|
/data/Assignment 2/Assignment 2/last/626d73.ipynb
|
zblsvvm/Plagiarism-detection-in-Jupyter-notebook
| 10 |
<jupyter_start><jupyter_text># Linear Optimization in Python
Here we take a brief look at how linear optimization problems can be solved in Python.
First we need to load the Python modules required for this. The `numpy` module supports numerical computations in Python, e.g. working with matrices and vectors and solving systems of linear equations.<jupyter_code>import numpy as np
from scipy.optimize import linprog<jupyter_output><empty_output><jupyter_text>We now want to solve the problem
\begin{align*}
\text{Minimize} &\quad c^\top x \\
\text{s.t.} &\quad A x = b \\
&\quad x \ge 0
\end{align*}
Here we will use the following data:
$$
A =
\begin{pmatrix} 2 & 1 & 2 \\ 5 & 3 & 4 \end{pmatrix},
\quad
b =
\begin{pmatrix} 3 \\ 8 \end{pmatrix},
\quad
c =
\begin{pmatrix} 16 \\ 9 \\ 18 \end{pmatrix}
$$
Matrices can be created with `np.array`. Note that the matrix is passed *row by row*.<jupyter_code>A = np.array([[2,1,2],[5,3,4]])<jupyter_output><empty_output><jupyter_text>Now we can check whether this worked.<jupyter_code>print(A)<jupyter_output>[[2 1 2]
[5 3 4]]
<jupyter_text>Similarly, we can create the vectors $b$ and $c$.<jupyter_code>c = np.array([16,9,18])
b = np.array([3,8])<jupyter_output><empty_output><jupyter_text>Now we can solve the optimization problem with `linprog`. This routine can also handle inequality constraints $A x \le b$, so we have to pass `A_eq` and `b_eq` to specify that these are equality constraints. The documentation of this routine can be found [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html).<jupyter_code>solution = linprog(c, A_eq = A, b_eq = b, options={'disp':True})<jupyter_output>Optimization terminated successfully.
Current function value: 25.000000
Iterations: 2
<jupyter_text>The return value `solution` contains:<jupyter_code>print(solution)<jupyter_output> fun: 25.0
message: 'Optimization terminated successfully.'
nit: 2
slack: array([], dtype=float64)
status: 0
success: True
x: array([1., 1., 0.])
<jupyter_text>Since the dual solution is not included, we have to construct it ourselves. To do so, we first build the basis $J$. The basic variables are the first two components of $x$. **Note, however, that indices in Python start at $0$!** Hence $J = (0, 1)$.<jupyter_code>J = [0,1]<jupyter_output><empty_output><jupyter_text>To verify this, we recompute the basic variables $x^*_J = A_J^{-1} b$. The matrix $A_J$ is obtained via `A[:,J]`. The `:` means that we select all rows, and `J` means that we select only the columns in `J`. The linear system is solved with `np.linalg.solve`.<jupyter_code>np.linalg.solve(A[:,J], b)<jupyter_output><empty_output><jupyter_text>The dual solution is now obtained via $y = A_J^{-\top} c_J$.<jupyter_code>y = np.linalg.solve(A[:,J].T, c[J])
y<jupyter_output><empty_output>
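<jupyter_text>As a brief sanity check (a sketch that was not part of the original notebook), we can verify dual feasibility $A^\top y \le c$ and strong duality $b^\top y = c^\top x^*$ numerically:<jupyter_code># Check dual feasibility componentwise and compare primal and dual objective values
print(A.T @ y <= c)
print(c @ solution.x, b @ y)<jupyter_output><empty_output>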
|
no_license
|
/Einfaches_Beispiel.ipynb
|
gerw/python_linprog
| 9 |
<jupyter_start><jupyter_text># 图像增广
在[“深度卷积神经网络(AlexNet)”](../chapter_convolutional-neural-networks/alexnet.ipynb)小节里我们提到过,大规模数据集是成功应用深度神经网络的前提。图像增广(image augmentation)技术通过对训练图像做一系列随机改变,来产生相似但又不同的训练样本,从而扩大训练数据集的规模。图像增广的另一种解释是,随机改变训练样本可以降低模型对某些属性的依赖,从而提高模型的泛化能力。例如,我们可以对图像进行不同方式的裁剪,使感兴趣的物体出现在不同位置,从而减轻模型对物体出现位置的依赖性。我们也可以调整亮度、色彩等因素来降低模型对色彩的敏感度。可以说,在当年AlexNet的成功中,图像增广技术功不可没。本节我们将讨论这个在计算机视觉里被广泛使用的技术。
首先,导入实验所需的包或模块。<jupyter_code>%matplotlib inline
import d2lzh as d2l
import mxnet as mx
from mxnet import autograd, gluon, image, init, nd
from mxnet.gluon import data as gdata, loss as gloss, utils as gutils
import sys
import time<jupyter_output>/var/lib/jenkins/miniconda3/envs/d2l-zh-build/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.11) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
<jupyter_text>## 常用的图像增广方法
我们来读取一张形状为$400\times 500$(高和宽分别为400像素和500像素)的图像作为实验的样例。<jupyter_code>d2l.set_figsize()
img = image.imread('../img/cat1.jpg')
d2l.plt.imshow(img.asnumpy())<jupyter_output><empty_output><jupyter_text>下面定义绘图函数`show_images`。<jupyter_code># 本函数已保存在d2lzh包中方便以后使用
def show_images(imgs, num_rows, num_cols, scale=2):
figsize = (num_cols * scale, num_rows * scale)
_, axes = d2l.plt.subplots(num_rows, num_cols, figsize=figsize)
for i in range(num_rows):
for j in range(num_cols):
axes[i][j].imshow(imgs[i * num_cols + j].asnumpy())
axes[i][j].axes.get_xaxis().set_visible(False)
axes[i][j].axes.get_yaxis().set_visible(False)
return axes<jupyter_output><empty_output><jupyter_text>大部分图像增广方法都有一定的随机性。为了方便观察图像增广的效果,接下来我们定义一个辅助函数`apply`。这个函数对输入图像`img`多次运行图像增广方法`aug`并展示所有的结果。<jupyter_code>def apply(img, aug, num_rows=2, num_cols=4, scale=1.5):
Y = [aug(img) for _ in range(num_rows * num_cols)]
show_images(Y, num_rows, num_cols, scale)<jupyter_output><empty_output><jupyter_text>### 翻转和裁剪
左右翻转图像通常不改变物体的类别。它是最早也是最广泛使用的一种图像增广方法。下面我们通过`transforms`模块创建`RandomFlipLeftRight`实例来实现一半概率的图像左右翻转。<jupyter_code>apply(img, gdata.vision.transforms.RandomFlipLeftRight())<jupyter_output><empty_output><jupyter_text>上下翻转不如左右翻转通用。但是至少对于样例图像,上下翻转不会造成识别障碍。下面我们创建`RandomFlipTopBottom`实例来实现一半概率的图像上下翻转。<jupyter_code>apply(img, gdata.vision.transforms.RandomFlipTopBottom())<jupyter_output><empty_output><jupyter_text>在我们使用的样例图像里,猫在图像正中间,但一般情况下可能不是这样。在[“池化层”](../chapter_convolutional-neural-networks/pooling.ipynb)一节里我们解释了池化层能降低卷积层对目标位置的敏感度。除此之外,我们还可以通过对图像随机裁剪来让物体以不同的比例出现在图像的不同位置,这同样能够降低模型对目标位置的敏感性。
在下面的代码里,我们每次随机裁剪出一块面积为原面积$10\% \sim 100\%$的区域,且该区域的宽和高之比随机取自$0.5 \sim 2$,然后再将该区域的宽和高分别缩放到200像素。若无特殊说明,本节中$a$和$b$之间的随机数指的是从区间$[a,b]$中随机均匀采样所得到的连续值。<jupyter_code>shape_aug = gdata.vision.transforms.RandomResizedCrop(
(200, 200), scale=(0.1, 1), ratio=(0.5, 2))
apply(img, shape_aug)<jupyter_output><empty_output><jupyter_text>### 变化颜色
另一类增广方法是变化颜色。我们可以从4个方面改变图像的颜色:亮度、对比度、饱和度和色调。在下面的例子里,我们将图像的亮度随机变化为原图亮度的$50\%$(即$1-0.5$)$\sim 150\%$(即$1+0.5$)。<jupyter_code>apply(img, gdata.vision.transforms.RandomBrightness(0.5))<jupyter_output><empty_output><jupyter_text>类似地,我们也可以随机变化图像的色调。<jupyter_code>apply(img, gdata.vision.transforms.RandomHue(0.5))<jupyter_output><empty_output><jupyter_text>我们也可以创建`RandomColorJitter`实例并同时设置如何随机变化图像的亮度(`brightness`)、对比度(`contrast`)、饱和度(`saturation`)和色调(`hue`)。<jupyter_code>color_aug = gdata.vision.transforms.RandomColorJitter(
brightness=0.5, contrast=0.5, saturation=0.5, hue=0.5)
apply(img, color_aug)<jupyter_output><empty_output><jupyter_text>### 叠加多个图像增广方法
实际应用中我们会将多个图像增广方法叠加使用。我们可以通过`Compose`实例将上面定义的多个图像增广方法叠加起来,再应用到每张图像之上。<jupyter_code>augs = gdata.vision.transforms.Compose([
gdata.vision.transforms.RandomFlipLeftRight(), color_aug, shape_aug])
apply(img, augs)<jupyter_output><empty_output><jupyter_text>## 使用图像增广训练模型
下面我们来看一个将图像增广应用在实际训练中的例子。这里我们使用CIFAR-10数据集,而不是之前我们一直使用的Fashion-MNIST数据集。这是因为Fashion-MNIST数据集中物体的位置和尺寸都已经经过归一化处理,而CIFAR-10数据集中物体的颜色和大小区别更加显著。下面展示了CIFAR-10数据集中前32张训练图像。<jupyter_code>show_images(gdata.vision.CIFAR10(train=True)[0:32][0], 4, 8, scale=0.8);<jupyter_output><empty_output><jupyter_text>为了在预测时得到确定的结果,我们通常只将图像增广应用在训练样本上,而不在预测时使用含随机操作的图像增广。在这里我们只使用最简单的随机左右翻转。此外,我们使用`ToTensor`实例将小批量图像转成MXNet需要的格式,即形状为(批量大小, 通道数, 高, 宽)、值域在0到1之间且类型为32位浮点数。<jupyter_code>flip_aug = gdata.vision.transforms.Compose([
gdata.vision.transforms.RandomFlipLeftRight(),
gdata.vision.transforms.ToTensor()])
no_aug = gdata.vision.transforms.Compose([
gdata.vision.transforms.ToTensor()])<jupyter_output><empty_output><jupyter_text>接下来我们定义一个辅助函数来方便读取图像并应用图像增广。Gluon的数据集提供的`transform_first`函数将图像增广应用在每个训练样本(图像和标签)的第一个元素,即图像之上。有关`DataLoader`的详细介绍,可参考更早的[“图像分类数据集(Fashion-MNIST)”](../chapter_deep-learning-basics/fashion-mnist.ipynb)一节。<jupyter_code>num_workers = 0 if sys.platform.startswith('win32') else 4
def load_cifar10(is_train, augs, batch_size):
return gdata.DataLoader(
gdata.vision.CIFAR10(train=is_train).transform_first(augs),
batch_size=batch_size, shuffle=is_train, num_workers=num_workers)<jupyter_output><empty_output><jupyter_text>### 使用多GPU训练模型
我们在CIFAR-10数据集上训练[“残差网络(ResNet)”](../chapter_convolutional-neural-networks/resnet.ipynb)一节介绍的ResNet-18模型。我们还将应用[“多GPU计算的简洁实现”](../chapter_computational-performance/multiple-gpus-gluon.ipynb)一节中介绍的方法,使用多GPU训练模型。
首先,我们定义`try_all_gpus`函数,从而能够获取所有可用的GPU。<jupyter_code>def try_all_gpus(): # 本函数已保存在d2lzh包中方便以后使用
ctxes = []
try:
for i in range(16): # 假设一台机器上GPU的数量不超过16
ctx = mx.gpu(i)
_ = nd.array([0], ctx=ctx)
ctxes.append(ctx)
except mx.base.MXNetError:
pass
if not ctxes:
ctxes = [mx.cpu()]
return ctxes<jupyter_output><empty_output><jupyter_text>下面定义的辅助函数`_get_batch`将小批量数据样本`batch`划分并复制到`ctx`变量所指定的各个显存上。<jupyter_code>def _get_batch(batch, ctx):
features, labels = batch
if labels.dtype != features.dtype:
labels = labels.astype(features.dtype)
return (gutils.split_and_load(features, ctx),
gutils.split_and_load(labels, ctx), features.shape[0])<jupyter_output><empty_output><jupyter_text>然后,我们定义`evaluate_accuracy`函数评价模型的分类准确率。与[“softmax回归的从零开始实现”](../chapter_deep-learning-basics/softmax-regression-scratch.ipynb)和[“卷积神经网络(LeNet)”](../chapter_convolutional-neural-networks/lenet.ipynb)两节中描述的`evaluate_accuracy`函数不同,这里定义的函数更加通用:它通过辅助函数`_get_batch`使用`ctx`变量所包含的所有GPU来评价模型。<jupyter_code># 本函数已保存在d2lzh包中方便以后使用
def evaluate_accuracy(data_iter, net, ctx=[mx.cpu()]):
if isinstance(ctx, mx.Context):
ctx = [ctx]
acc_sum, n = nd.array([0]), 0
for batch in data_iter:
features, labels, _ = _get_batch(batch, ctx)
for X, y in zip(features, labels):
y = y.astype('float32')
acc_sum += (net(X).argmax(axis=1) == y).sum().copyto(mx.cpu())
n += y.size
acc_sum.wait_to_read()
return acc_sum.asscalar() / n<jupyter_output><empty_output><jupyter_text>接下来,我们定义`train`函数使用多GPU训练并评价模型。<jupyter_code># 本函数已保存在d2lzh包中方便以后使用
def train(train_iter, test_iter, net, loss, trainer, ctx, num_epochs):
print('training on', ctx)
if isinstance(ctx, mx.Context):
ctx = [ctx]
for epoch in range(num_epochs):
train_l_sum, train_acc_sum, n, m, start = 0.0, 0.0, 0, 0, time.time()
for i, batch in enumerate(train_iter):
Xs, ys, batch_size = _get_batch(batch, ctx)
with autograd.record():
y_hats = [net(X) for X in Xs]
ls = [loss(y_hat, y) for y_hat, y in zip(y_hats, ys)]
for l in ls:
l.backward()
trainer.step(batch_size)
train_l_sum += sum([l.sum().asscalar() for l in ls])
n += sum([l.size for l in ls])
train_acc_sum += sum([(y_hat.argmax(axis=1) == y).sum().asscalar()
for y_hat, y in zip(y_hats, ys)])
m += sum([y.size for y in ys])
test_acc = evaluate_accuracy(test_iter, net, ctx)
print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, '
'time %.1f sec'
% (epoch + 1, train_l_sum / n, train_acc_sum / m, test_acc,
time.time() - start))<jupyter_output><empty_output><jupyter_text>现在就可以定义`train_with_data_aug`函数使用图像增广来训练模型了。该函数获取了所有可用的GPU,并将Adam算法作为训练使用的优化算法,然后将图像增广应用于训练数据集之上,最后调用刚才定义的`train`函数训练并评价模型。<jupyter_code>def train_with_data_aug(train_augs, test_augs, lr=0.001):
batch_size, ctx, net = 256, try_all_gpus(), d2l.resnet18(10)
net.initialize(ctx=ctx, init=init.Xavier())
trainer = gluon.Trainer(net.collect_params(), 'adam',
{'learning_rate': lr})
loss = gloss.SoftmaxCrossEntropyLoss()
train_iter = load_cifar10(True, train_augs, batch_size)
test_iter = load_cifar10(False, test_augs, batch_size)
train(train_iter, test_iter, net, loss, trainer, ctx, num_epochs=10)<jupyter_output><empty_output><jupyter_text>下面使用随机左右翻转的图像增广来训练模型。<jupyter_code>train_with_data_aug(flip_aug, no_aug)<jupyter_output>training on [gpu(0), gpu(1)]
epoch 1, loss 1.4039, train acc 0.500, test acc 0.558, time 12.6 sec
epoch 2, loss 0.8465, train acc 0.699, test acc 0.660, time 11.2 sec
epoch 3, loss 0.6185, train acc 0.782, test acc 0.738, time 12.2 sec
epoch 4, loss 0.4844, train acc 0.831, test acc 0.777, time 12.1 sec
epoch 5, loss 0.4027, train acc 0.861, test acc 0.810, time 11.2 sec
epoch 6, loss 0.3326, train acc 0.885, test acc 0.828, time 18.7 sec
epoch 7, loss 0.2829, train acc 0.902, test acc 0.819, time 18.5 sec
epoch 8, loss 0.2359, train acc 0.920, test acc 0.831, time 16.0 sec
epoch 9, loss 0.2024, train acc 0.931, test acc 0.813, time 12.0 sec
epoch 10, loss 0.1651, train acc 0.943, test acc 0.830, time 10.9 sec
|
no_license
|
/MachineLearning/MXNet/d2l-zh/chapter_computer-vision/image-augmentation.ipynb
|
wanghao15536870732/StudyNotes
| 20 |
<jupyter_start><jupyter_text>## The Titanic Disaster
(a) Join the Titanic: Machine Learning From Disaster competition on Kaggle. Download the training and test data.
<jupyter_code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
train = pd.read_csv('Titanic_train.csv')
test = pd.read_csv('Titanic_test.csv')
tt_train = train
tt_train.head(8)<jupyter_output><empty_output><jupyter_text>## Data Visualization & Analysis<jupyter_code>#Data Visualization
sns.barplot(x = "Pclass" , y ='Survived', hue = 'Sex', data = tt_train)
#Age
plt.figure(figsize = (12,5))
survivals = tt_train[tt_train['Survived'] == 1]
dead = tt_train[tt_train['Survived'] == 0]
sns.distplot(tt_train['Fare'].dropna())
#sns.distplot(survivals['Age'].dropna())
sns.plt.show()
def survive_rate(df, col, fillna = True):
sub = df
if fillna:
sub[col].fillna('N/a', inplace = True)
ctgry = sub[col].unique()
for c in ctgry:
df_sub = df[df[col] == c]
print('The survival rate of ', c, ' is ',df_sub['Survived'].sum()/len(df_sub),
' ( data size:', len(df_sub), ')')
return
survive_rate(tt_train, 'Embarked')
survive_rate(tt_train, 'Pclass')
survive_rate(tt_train, 'Sex')
survive_rate(tt_train, 'SibSp')
survive_rate(tt_train, 'Parch')
#transforming features
def Age(df):
df.Age.fillna(-0.5, inplace = True)
bins = [-1, 0, 4, 12, 18, 25, 35, 55, 120]
names = ['Na', 'Baby', 'Child', 'Teenager', 'Young Adult', 'Adult', 'Elder Adult', 'Senior']
ctgry = pd.cut(df.Age, bins, labels = names)
df.Age = ctgry
return df
def Fare(df):
df.Fare.fillna(-0.5, inplace = True)
bins = [-1, 0, 10, 50, 100, 180, 300, 600]
names = ['na','6th Class', '5th Class', '4th Class', '3rd Class', '2nd Class', '1st Class']
ctgry = pd.cut(df.Fare, bins, labels = names)
df.Fare = ctgry
return df
def Cabin(df):
df.Cabin.fillna('N/a', inplace = True)
df.Cabin = df.Cabin.apply(lambda x: x[0])
return df
def Name(df):
df['Last Name'] = df.Name.apply(lambda x: x.split(' ')[0][:-1])
df['Prefix'] = df.Name.apply(lambda x: x.split(' ')[1][:-1])
return
def Dropfeature(df):
features = ['Name','Ticket']
df.drop(features, axis = 1, inplace = True)
return df
def feature_trsfm(df):
Cabin(df)
Fare(df)
Age(df)
Name(df)
Dropfeature(df)
df.Embarked.fillna('N/a', inplace = True)
return df
train1 = feature_trsfm(tt_train)
train1.head(5)
survive_rate(train1, 'Cabin', fillna = False)
survive_rate(train1, 'Fare', fillna = False)
survive_rate(train1, 'Age', fillna = False)<jupyter_output>The survival rate of Young Adult is 0.3333333333333333 ( data size: 162 )
The survival rate of Elder Adult is 0.4011299435028249 ( data size: 177 )
The survival rate of Adult is 0.42346938775510207 ( data size: 196 )
The survival rate of Na is 0.2937853107344633 ( data size: 177 )
The survival rate of Baby is 0.675 ( data size: 40 )
The survival rate of Teenager is 0.42857142857142855 ( data size: 70 )
The survival rate of Senior is 0.3 ( data size: 40 )
The survival rate of Child is 0.4482758620689655 ( data size: 29 )
<jupyter_text>From the data visualization and analysis, we can see Cabin has a lot of missing data so we will not use it for logistic regression## Data Transfromation(b) Using logistic regression, try to predict whether a passenger survived the disaster. You can choose the features (or combinations of features) you would like to use or ignore, provided you justify your reasoning.<jupyter_code>train.head(10)
def Name(df):
df['Last Name'] = df.Name.apply(lambda x: x.split(' ')[0][:-1])
df['Prefix'] = df.Name.apply(lambda x: x.split(' ')[1][:-1])
return
def Dropfeature(df):
features = ['Name','Ticket','Cabin']
df.drop(features, axis = 1, inplace = True)
return
def Sex(df):
df.Sex.replace({'male': 0,
'female': 1},
inplace = True)
return
def Embarked(df):
df.Embarked.replace({'S': 1,
'C': 2,
'Q': 3,},
inplace = True)
df.Embarked.fillna('0', inplace = True)
return
def Age(df):
df.Age.fillna(train.Age.mean(), inplace = True)
return
def Fare(df):
df.Fare.fillna(train.Fare.mean(), inplace = True)
def transform(df):
Age(df)
Name(df)
Dropfeature(df)
Sex(df)
Embarked(df)
Fare(df)
return
transform(train)
train.head(10)
train_x = train[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']]
train_y = train[['Survived']]
from sklearn.linear_model import LogisticRegression
import numpy as np
lr = LogisticRegression()
lr.fit(train_x, train_y.values.reshape(len(train_y),))<jupyter_output><empty_output><jupyter_text>(c) Train your classifier using all of the training data, and test it using the testing data. Submit your results to Kaggle.<jupyter_code>transform(test)
test_x = test[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']]
prediction = lr.predict(test_x)
result = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': prediction})
result.head(5)
result.to_csv('Titanic_prediction.csv', index = False)<jupyter_output><empty_output>
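<jupyter_text>As an optional sanity check before submitting (a sketch, not part of the original solution), a cross-validated accuracy estimate on the training data gives a rough idea of how the model may perform on the leaderboard:<jupyter_code># Sketch: 5-fold cross-validated accuracy of the same model on the training data
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(LogisticRegression(), train_x, train_y.values.ravel(), cv=5)
print("Mean CV accuracy:", cv_scores.mean())<jupyter_output><empty_output>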
|
no_license
|
/HW1/Titanic.ipynb
|
OiBoii/AML-1
| 4 |
<jupyter_start><jupyter_text># Exceptions
An exception is an event that occurs during the execution of a program and disrupts the normal flow of the program's instructions.
You've already seen some exceptions in the **Debugging** lesson.
* Many programs want to know about exceptions when they occur. For example, if the input to a program is a file path. If the user inputs an invalid or non-existent path, the program generates an exception. It may be desired to provide a response to the user in this case.It may also be that programs will *generate* exceptions. This is a way of indicating that there is an error in the inputs provided. In general, this is the preferred style for dealing with invalid inputs or states inside a python function rather than having an error return.## Catching ExceptionsPython provides a way to detect when an exception occurs. This is done by the use of a block of code surrounded by a "try" and "except" statement.<jupyter_code>def divide(numerator, denominator):
result = numerator/denominator
print("result = %f" % result)
divide(1.0, 0)
def divide1(numerator, denominator):
try:
result = numerator/denominator
print("result = %f" % result)
except:
print("You can't divide by 0!")
divide(1.0, 'a')
divide1(1.0, 2)
divide1("x", 2)
def divide2(numerator, denominator):
try:
result = numerator / denominator
print("result = %f" % result)
except (ZeroDivisionError, TypeError) as err:
print("Got an exception: %s" % err)
divide2(1, "X")
divide2("x, 2)<jupyter_output><empty_output><jupyter_text>#### Why didn't we catch this `SyntaxError`?<jupyter_code># Handle division by 0 by using a small number
SMALL_NUMBER = 1e-3
def divide3(numerator, denominator):
    if denominator == 0:
        denominator = SMALL_NUMBER  # replace 0 with a small number instead of raising
    result = numerator/denominator
return result
divide3(1,0)
divide3("1",0)<jupyter_output><empty_output><jupyter_text>#### What do you do when you get an exception?
First, you can feel relieved that you caught a problematic element of your software! Yes, relieved. Silent fails are much worse. (Again, another plug for testing.)## Generating Exceptions#### Why *generate* exceptions? (Don't I have enough unintentional errors?)<jupyter_code>import pandas as pd
def validateDF(df):
if not "hours" in df.columns:
raise ValueError("DataFrame should have a column named 'hours'.")
df = pd.DataFrame({'hours': range(10) })
validateDF(df)
df = pd.DataFrame({'years': range(10) })
validateDF(df)<jupyter_output><empty_output>
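<jupyter_text>A caller can then respond to this validation error with the same try/except pattern used earlier (a short illustrative sketch):<jupyter_code># Catch the ValueError raised by validateDF and report it instead of crashing
try:
    validateDF(pd.DataFrame({'years': range(10)}))
except ValueError as err:
    print("Validation failed: %s" % err)<jupyter_output><empty_output>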
|
permissive
|
/07_debug_exceptions_testing/Exceptions.ipynb
|
Jinghpp/lecture-materials
| 3 |
<jupyter_start><jupyter_text>### Load the required modules and the Titanic data
Just as in [part-1](part-1.ipynb), we load the data and create the training and test datasets.<jupyter_code>import pandas
from sklearn.cross_validation import train_test_split
from sklearn.metrics import accuracy_score
from tensorflow.contrib import skflow
train = pandas.read_csv('data/titanic_train.csv')
y, X = train['Survived'], train[['Age', 'SibSp', 'Fare']].fillna(0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)<jupyter_output><empty_output><jupyter_text>### Titanic test using a deep neural network<jupyter_code>classifier = skflow.TensorFlowDNNClassifier(
hidden_units=[10, 20, 10],
n_classes=2,
batch_size=128,
steps=500,
learning_rate=0.05)
classifier.fit(X_train, y_train)
print(accuracy_score(classifier.predict(X_test), y_test))<jupyter_output>0.664804469274
<jupyter_text>### Test using the tanh activation function<jupyter_code>import tensorflow as tf
def dnn_tanh(X, y):
layers = skflow.ops.dnn(X, [10, 20, 10], tf.tanh)
return skflow.models.logistic_regression(layers, y)
classifier = skflow.TensorFlowEstimator(
model_fn=dnn_tanh,
n_classes=2,
batch_size=128,
steps=500,
learning_rate=0.05)
classifier.fit(X_train, y_train)
print(accuracy_score(classifier.predict(X_test), y_test))<jupyter_output>0.698324022346
<jupyter_text>### Digit recognition test<jupyter_code>import random
from sklearn import datasets
random.seed(42)
digits = datasets.load_digits()
X = digits.images
y = digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
def conv_model(X, y):
X = tf.expand_dims(X, 3)
features = tf.reduce_max(skflow.ops.conv2d(X, 12, [3, 3]), [1, 2])
features = tf.reshape(features, [-1, 12])
return skflow.models.logistic_regression(features, y)
classifier = skflow.TensorFlowEstimator(
model_fn=conv_model,
n_classes=10,
batch_size=128,
steps=500,
learning_rate=0.05)
classifier.fit(X_train, y_train)
print(accuracy_score(classifier.predict(X_test), y_test))<jupyter_output>0.747222222222
|
permissive
|
/tfk-notebooks-master/tensorflow-tutorial(Illia Polosukhin)/part-2.ipynb
|
jassminn/Notebook
| 4 |
<jupyter_start><jupyter_text><jupyter_code>import cv2
import os
import urllib.request as req
img_last = None
no = 0
save_dir = "./exfish"
os.makedirs(save_dir, exist_ok=True)
url = "https://github.com/masatokg/sample_photo/raw/master/fish.mp4"
save_file = "fish.mp4"
req.urlretrieve(url, save_file)
cap = cv2.VideoCapture(save_file)
while(True):
is_ok, frame = cap.read()
if not is_ok:
print("end")
break
frame = cv2.resize(frame, (640, 360))
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (15, 15), 0)
img_b = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)[1]
if not img_last is None:
frame_diff = cv2.absdiff(img_last, img_b)
cnts = cv2.findContours(frame_diff, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
        for pt in cnts:
            x, y, w, h = cv2.boundingRect(pt)
            if w < 100 or w > 500: continue
imgex = frame[ y:y+h, x:x+w ]
outfile = save_dir + "/" + str(no) + ".jpg"
cv2.imwrite(outfile, imgex)
no += 1
img_last = img_b
cap.release()
print("ok")
! zip -r /content/download.zip /content/exfish
from google.colab import files
files.download("/content/download.zip")<jupyter_output> adding: content/exfish/ (stored 0%)
|
no_license
|
/2020AI0305_fishvideo.ipynb
|
aso2001054/AI_teach2020
| 1 |
<jupyter_start><jupyter_text># Explore eBay API
## Imports<jupyter_code>import json
import datetime
import pandas as pd
import numpy as np
from ebaysdk.exception import ConnectionError
from ebaysdk.finding import Connection
from ebaysdk.trading import Connection as Trading
<jupyter_output><empty_output><jupyter_text>## Helper functions<jupyter_code>def format_time(intime):
"""
    Return datetime object formatted for eBay API
"""
out = intime.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3]+'Z'
return out<jupyter_output><empty_output><jupyter_text>## Credentials<jupyter_code># Grab private data
import os  # needed for os.environ below

if __name__ == "__main__":
    # Variable names below match the APPID/DEVID/CERTID/TOKEN used in the API calls further down
    APPID = os.environ['EBAY_API_APPID']
    DEVID = os.environ['EBAY_API_DEVID']
    CERTID = os.environ['EBAY_API_CERTID']
    TOKEN = os.environ['EBAY_API_TOKEN']<jupyter_output><empty_output><jupyter_text>## Keyword item search<jupyter_code># https://github.com/timotheus/ebaysdk-python
try:
api = Connection(appid=APPID, config_file=None)
response = api.execute('findItemsAdvanced', {'keywords': 'legos'})
assert(response.reply.ack == 'Success')
assert(type(response.reply.timestamp) == datetime.datetime)
assert(type(response.reply.searchResult.item) == list)
item = response.reply.searchResult.item[0]
assert(type(item.listingInfo.endTime) == datetime.datetime)
assert(type(response.dict()) == dict)
except ConnectionError as e:
print(e)
print(e.response.dict())<jupyter_output><empty_output><jupyter_text>## Find a list of categories<jupyter_code>try:
api = Trading(appid=APPID, devid=DEVID, certid=CERTID, token=TOKEN, config_file=None)
response = api.execute('GetCategories', {'DetailLevel': 'ReturnAll','LevelLimit': '1'})
except ConnectionError as e:
print(e)
print(e.response.dict())
cat_array = response.dict()['CategoryArray']['Category']
category_id_dict = {}
for category in cat_array:
if category['CategoryParentID'] != category['CategoryID']:
print(category)
else:
category_id_dict[category['CategoryID']] = category['CategoryName']
category_id_dict<jupyter_output><empty_output><jupyter_text>## Find all items in list of categories<jupyter_code>try:
api = Connection(appid=APPID, config_file=None)
response = api.execute('findItemsByCategory', {'categoryId' : ['11232']},)
assert(response.reply.ack == 'Success')
assert(type(response.reply.timestamp) == datetime.datetime)
assert(type(response.reply.searchResult.item) == list)
item = response.reply.searchResult.item[0]
assert(type(item.listingInfo.endTime) == datetime.datetime)
assert(type(response.dict()) == dict)
except ConnectionError as e:
print(e)
print(e.response.dict())<jupyter_output><empty_output><jupyter_text>## Find completed items in list of categories<jupyter_code>try:
api = Connection(appid=APPID, config_file=None)
response = api.execute('findCompletedItems', {'categoryId' : ['11232']},)
assert(response.reply.ack == 'Success')
assert(type(response.reply.timestamp) == datetime.datetime)
assert(type(response.reply.searchResult.item) == list)
item = response.reply.searchResult.item[0]
assert(type(item.listingInfo.endTime) == datetime.datetime)
assert(type(response.dict()) == dict)
except ConnectionError as e:
print(e)
print(e.response.dict())
print(response.reply.searchResult.item[0])<jupyter_output>{'itemId': '152274579098', 'isMultiVariationListing': 'true', 'topRatedListing': 'false', 'globalId': 'EBAY-US', 'title': 'Horror / Thriller Dvd Lot #1 ~ Over 90 to Choose $1.50 Each ~ $3.99 Unl Shipping', 'country': 'US', 'shippingInfo': {'expeditedShipping': 'false', 'shipToLocations': ['US', 'CA', 'GB', 'AU', 'AT', 'BE', 'FR', 'DE', 'IT', 'JP', 'ES', 'TW', 'NL', 'CN', 'HK', 'MX', 'DK', 'RO', 'SK', 'BG', 'CZ', 'FI', 'HU', 'LV', 'LT', 'MT', 'EE', 'GR', 'PT', 'CY', 'SI', 'SE', 'KR', 'ID', 'TH', 'IE', 'PL', 'RU', 'IL', 'NZ'], 'shippingServiceCost': {'_currencyId': 'USD', 'value': '3.99'}, 'oneDayShippingAvailable': 'false', 'handlingTime': '1', 'shippingType': 'Flat'}, 'galleryURL': 'http://thumbs3.ebaystatic.com/pict/1522745790984040_4.jpg', 'autoPay': 'false', 'location': 'San Manuel,AZ,USA', 'postalCode': '85631', 'returnsAccepted': 'true', 'viewItemURL': 'http://www.ebay.com/itm/Horror-Thriller-Dvd-Lot-1-Over-90-Choose-1-50-Each-3-99-Unl-Shipping-/152274579098?var=451470437158', 'se[...]<jupyter_text>## Create empty data frame to hold results<jupyter_code># Create empty data frame to hold results
df = pd.DataFrame(columns=('EndTimeFrom', 'EndTimeTo', 'CategoryID','items_sold','items_not_sold','items_listed'))
df.info()
# Set column types
column_names = df.columns.tolist()
dtype = ['datetime64','datetime64','int64','int64','int64','int64']
column_dtypes = zip(column_names,dtype)
for k, v in column_dtypes:
df[k] = df[k].astype(v)
df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
Index: 0 entries
Data columns (total 6 columns):
EndTimeFrom 0 non-null datetime64[ns]
EndTimeTo 0 non-null datetime64[ns]
CategoryID 0 non-null int64
items_sold 0 non-null int64
items_not_sold 0 non-null int64
items_listed 0 non-null int64
dtypes: datetime64[ns](2), int64(4)
memory usage: 0.0+ bytes
<jupyter_text>## Find completed items in list of categories for timeframe<jupyter_code>endtimefrom = datetime.datetime(2017, 1, 24, 0, 0, 0)
endtimeto = datetime.datetime(2017, 1, 24, 1, 0, 0)
# Set API connection
api = Connection(appid=APPID, config_file=None)
# categoryId = '11232'
while endtimeto < datetime.datetime(2017, 1, 25, 0, 0, 1):
for categoryId in category_id_dict.keys():
# Format time string
endtimefrom_str = format_time(endtimefrom)
endtimeto_str = format_time(endtimeto)
# Define query criteria - newly listed items
para_dict = {'categoryId' : [categoryId],
'itemFilter': [
{'name': 'StartTimeFrom', 'value': endtimefrom_str},
{'name': 'StartTimeTo', 'value': endtimeto_str},
{'name': 'LocatedIn', 'value': 'US'}],
'paginationInput': {
'pageNumber': '1'},
}
# Run API query - is this th right api here??
response = api.execute('findCompletedItems', para_dict)
# Extract numbers
n_listed = int(response.dict()['paginationOutput']['totalEntries'])
# Define query criteria - items sold
para_dict = {'categoryId' : [categoryId],
'itemFilter': [
{'name': 'EndTimeFrom', 'value': endtimefrom_str},
{'name': 'EndTimeTo', 'value': endtimeto_str},
{'name': 'LocatedIn', 'value': 'US'},
{'name': 'SoldItemsOnly', 'value': 'True'}],
'paginationInput': {
'pageNumber': '1'},
}
# Run API query
response = api.execute('findCompletedItems', para_dict)
# Extract numbers
n_sold = int(response.dict()['paginationOutput']['totalEntries'])
# Define query criteria - all items
para_dict = {'categoryId' : [categoryId],
'itemFilter': [
{'name': 'EndTimeFrom', 'value': endtimefrom_str},
{'name': 'EndTimeTo', 'value': endtimeto_str},
{'name': 'LocatedIn', 'value': 'US'}],
'paginationInput': {
'pageNumber': '1'},
}
# Run API query
response = api.execute('findCompletedItems', para_dict)
# Extract numbers
n_all = int(response.dict()['paginationOutput']['totalEntries'])
# Calculate items not sold
n_not_sold = n_all - n_sold
print(categoryId,' : ',endtimefrom, '--', endtimeto,' : ', n_all)
# Add result to data frame
df.loc[df.shape[0]]= [endtimefrom,endtimeto,int(categoryId),n_sold,n_not_sold,n_listed]
# Indent time
endtimefrom = endtimefrom + datetime.timedelta(hours=1.0)
endtimeto = endtimeto + datetime.timedelta(hours=1.0)
df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
Int64Index: 816 entries, 0 to 815
Data columns (total 6 columns):
EndTimeFrom 816 non-null datetime64[ns]
EndTimeTo 816 non-null datetime64[ns]
CategoryID 816 non-null int64
items_sold 816 non-null int64
items_not_sold 816 non-null int64
items_listed 816 non-null int64
dtypes: datetime64[ns](2), int64(4)
memory usage: 44.6 KB
<jupyter_text>## Save data frame to disk<jupyter_code># Remember to rename the temporary file!
df.to_csv('tmp.csv', sep=',')
df
# Remove some entries
# idx = df[df['EndTimeTo'] == endtimeto].index.values.tolist()
# print(idx)
# df.drop(df.index[648])<jupyter_output><empty_output>
|
no_license
|
/explore_ebay_api.ipynb
|
cleipski/Decision-Scientist-Project
| 10 |
<jupyter_start><jupyter_text># Advanced Notebook<jupyter_code>%matplotlib inline
import numpy as np
import pandas as pd
from pandas.tools.plotting import scatter_matrix
from sklearn.datasets import load_boston
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster')
sns.set_style('whitegrid')
plt.rcParams['figure.figsize'] = 12, 8 # plotsize
import warnings
warnings.filterwarnings('ignore')
df_dict = load_boston()
features = pd.DataFrame(data=df_dict.data, columns = df_dict.feature_names)
target = pd.DataFrame(data=df_dict.target, columns = ['MEDV'])
df = pd.concat([features, target], axis=1)
df['Zone'] = df['ZN'].astype('category')
df.head()<jupyter_output><empty_output><jupyter_text>## QGrid
Interactive pandas dataframes: https://github.com/quantopian/qgrid<jupyter_code>import qgrid
qgrid_widget = qgrid.show_grid(df[['CRIM',
'Zone',
'INDUS',
# 'CHAS',
'NOX',
# 'RM',
'AGE',
# 'DIS',
'RAD',
'TAX',
# 'PTRATIO',
# 'B',
'LSTAT',
'MEDV',
]], show_toolbar=True)
qgrid_widget
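# (added note) edits, filters and sorts made interactively in the rendered grid are kept by
# the widget; get_changed_df() below returns a DataFrame reflecting those changes.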
df2 = qgrid_widget.get_changed_df()
df2.head()<jupyter_output><empty_output><jupyter_text># BQPlot
Examples here are shamelessly stolen from the amazing: https://github.com/maartenbreddels/jupytercon-2017/blob/master/jupytercon2017-widgets.ipynb<jupyter_code># mixed feelings about this import
import bqplot.pyplot as plt
import numpy as np
x = np.linspace(0, 2, 50)
y = x**2
fig = plt.figure()
scatter = plt.scatter(x, y)
plt.show()
fig.animation_duration = 500
scatter.y = 2 * x**.5
scatter.selected_style = {'stroke':'red', 'fill': 'orange'}
plt.brush_selector();
scatter.selected
scatter.selected = [1,2,10,40]<jupyter_output><empty_output><jupyter_text>## ipyvolume<jupyter_code>import ipyvolume as ipv
N = 1000
x, y, z = np.random.random((3, N))
fig = ipv.figure()
scatter = ipv.scatter(x, y, z, marker='box')
ipv.show()
scatter.x = scatter.x - 0.5
scatter.x = x
scatter.color = "green"
scatter.size = 5
scatter.color = np.random.random((N,3))
scatter.size = 2
ex = ipv.datasets.animated_stream.fetch().data
ex.shape
ex[:, ::, ::4].shape
ipv.figure()
ipv.style.use('dark')
quiver = ipv.quiver(*ipv.datasets.animated_stream.fetch().data[:,::,::4], size=5)
ipv.animation_control(quiver, interval=200)
ipv.show()
ipv.style.use('light')
ipv.style.use('light')
quiver.geo = "cat"
N = 1000*1000
x, y, z = np.random.random((3, N)).astype('f4')
ipv.figure()
s = ipv.scatter(x, y, z, size=0.2)
ipv.show()
ipv.save("3d-example-plot.html")
!open 3d-example-plot.html<jupyter_output><empty_output>
|
permissive
|
/notebooks/Advanced-Notebook-Tricks.ipynb
|
kzhoulatte/jupyter-tips-and-tricks
| 4 |
<jupyter_start><jupyter_text># Download Datasets from 4TU.ResearchData
**fairly** can download public datasets from 4TU.ResearchData.
The *4TU.ResearchData* repository uses Figshare as a platform for managing research datasets. For this example, we will use the dataset [EDoM measurement campaign](https://data.4tu.nl/articles/dataset/EDoM_measurement_campaign_full_data_from_the_lower_Ems_River/20308263). This dataset contains 28 files of different types (`.txt`, `.pdf`), and it is about `278 MBs` in size.
The dataset has ID: `20308263`, in 4TU.ResearchData the dataset ID is the last part of the URL that appears in the web browser. We can fetch a dataset using either its ID or its URL.
## 1. Connect to 4TU.ResearchData
To connect to data repositories we use clients. A client manages the connection to a specific data repository. We can create a client to connect to 4TU.ResearchData as follows:<jupyter_code>import fairly
fourtu = fairly.client("4tu") <jupyter_output><empty_output><jupyter_text>## 2. Connect to a dataset
Now, we can connect to a *public* dataset by calling the `get_dataset()` method and using either the dataset ID or its URL, or the DOI.<jupyter_code># USING ID
# dataset = fourtu.get_dataset("20308263")
# USING URL
dataset = fourtu.get_dataset("https://data.4tu.nl/articles/dataset/EDoM_measurement_campaign_full_data_from_the_lower_Ems_River/20308263")
# CONVENIENCE FUNCTION, USING DOI
# dataset = fairly.dataset("https://doi.org/10.4121/19519618.v1") # the client is inferred from the DOI<jupyter_output><empty_output><jupyter_text>## 3. Explore dataset's metadata
Once we have made a connection to a dataset, we can access its metadata (as stored in the data repository) by using the `metadata` property. <jupyter_code># Retrieves metadata from data repository
dataset.metadata<jupyter_output><empty_output><jupyter_text>## 4. List dataset's files
We can list the files of a dataset using the `files` property. The result is a Python dictionary in which each file name is a key.<jupyter_code># List the files (data) associated with the dataset
files = dataset.files
print("There are", len(files), "files in this dataset")
print(files)<jupyter_output>There are 28 files in this dataset
{'CsEmspier_01052017-01052019_from_NLWKN.txt': 'CsEmspier_01052017-01052019_from_NLWKN.txt', 'CsGandesum_01052017-01052019_from_NLWKN.txt': 'CsGandesum_01052017-01052019_from_NLWKN.txt', 'CsKnock_01052017-01052019_from_NLWKN.txt': 'CsKnock_01052017-01052019_from_NLWKN.txt', 'CsMP1_01052017-01052019_from_WSV.txt': 'CsMP1_01052017-01052019_from_WSV.txt', 'CsPogum_01052017-01052019_from_NLWKN.txt': 'CsPogum_01052017-01052019_from_NLWKN.txt', 'CsTerborg_01052017-01052019_from_NLWKN.txt': 'CsTerborg_01052017-01052019_from_NLWKN.txt', 'Messung_Gewaesserguete_EMS_NLWKN.pdf': 'Messung_Gewaesserguete_EMS_NLWKN.pdf', 'O2Emspier_01052017-01052019_from_NLWKN.txt': 'O2Emspier_01052017-01052019_from_NLWKN.txt', 'O2Gandersum_01052017-01052019_from_NLWKN.txt': 'O2Gandersum_01052017-01052019_from_NLWKN.txt', 'O2Knock_01052017-01052019_from_NLWKN.txt': 'O2Knock_01052017-01052019_from_NLWKN.txt', 'O2MP1_01052017-01052019_from_WSV.txt': 'O2MP1_01052017-01052019_from_WSV.[...]<jupyter_text>## 5. Download a file
We can download a single file from a dataset by using its name. For example, this dataset contains a file with the name `'CsEmspier_01052017-01052019_from_NLWKN.txt'`.
> The `path` parameter can be used to define where to store the file; otherwise the file will be stored in the working directory.
<jupyter_code># Select a file from the dataset
single_file = dataset.files['CsEmspier_01052017-01052019_from_NLWKN.txt']
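# (sketch) per the note above, a target folder can reportedly be chosen via the 'path' argument,
# e.g. fourtu.download_file(single_file, path="./downloads")  # 'path' usage assumed from the text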
# download the file
fourtu.download_file(single_file)<jupyter_output><empty_output><jupyter_text>## 6. Download a dataset
We can download all files and metadata of a dataset using the `store()` function. We need to provide a `path` to a directory to store the dataset. If the directory does not exist, it would be created.<jupyter_code># This will download about 278 MBs
dataset.store("./edom")<jupyter_output><empty_output>
|
permissive
|
/docs/package/demo-4tu.ipynb
|
ITC-CRIB/jupyter-fairly
| 6 |
<jupyter_start><jupyter_text>## Ex 3.8
This question involves the use of simple linear regression on the Auto data set.
* (a) Use the lm() function to perform a simple linear regression with mpg as the response and horsepower as the predictor. Use the summary() function to print the results. Comment on the output. For example:
* i. Is there a relationship between the predictor and the response?
* ii. How strong is the relationship between the predictor and the response?
* iii. Is the relationship between the predictor and the response positive or negative?
* iv. What is the predicted mpg associated with a horsepower of 98? What are the associated 95 % confidence and prediction intervals?
* (b) Plot the response and the predictor. Use the abline() function to display the least squares regression line.
* (c) Use the plot() function to produce diagnostic plots of the least squares regression fit. Comment on any problems you see with the fit.<jupyter_code>Auto<-read.csv("../data/Auto.csv", header=T, na.strings="?")
#or require(ISLR) ; data(Auto)
str(Auto)<jupyter_output>'data.frame': 397 obs. of 9 variables:
$ mpg : num 18 15 18 16 17 15 14 14 14 15 ...
$ cylinders : int 8 8 8 8 8 8 8 8 8 8 ...
$ displacement: num 307 350 318 304 302 429 454 440 455 390 ...
$ horsepower : int 130 165 150 150 140 198 220 215 225 190 ...
$ weight : int 3504 3693 3436 3433 3449 4341 4354 4312 4425 3850 ...
$ acceleration: num 12 11.5 11 12 10.5 10 9 8.5 10 8.5 ...
$ year : int 70 70 70 70 70 70 70 70 70 70 ...
$ origin : int 1 1 1 1 1 1 1 1 1 1 ...
$ name : Factor w/ 304 levels "amc ambassador brougham",..: 49 36 231 14 161 141 54 223 241 2 ...
<jupyter_text>#### 3.8.a<jupyter_code>lmfit1<-lm(mpg~horsepower,data=Auto)
summary(lmfit1)<jupyter_output><empty_output><jupyter_text>**i. is there a relation?** We see a low p-value for predictor's coefficient so there is a relationship**ii. how strong is relationship?**<jupyter_code>#find percentage error RSE / response's mean
paste("percentage error: ", summary(lmfit1)$sigma / mean(Auto$mpg) * 100 )<jupyter_output><empty_output><jupyter_text> Percentage error is about 20% and $R^2=60%$ of a variance of mpg is explained by the horsepower.**iii. positive or negative relationship between response and predictor?** Coefficient is negative. So the relationshipo is negative**iv. predicted mpg for horsepower =98**<jupyter_code>predict( lmfit1, data.frame(horsepower=98), interval="confidence" )
predict( lmfit1, data.frame(horsepower=98), interval="prediction" )<jupyter_output><empty_output><jupyter_text>#### 3.8.b<jupyter_code>plot(mpg~horsepower,data=Auto, cex=0.3)
abline(lmfit1, col=2)<jupyter_output><empty_output><jupyter_text>#### 3.8.c<jupyter_code>par(mfrow=c(2,2))
plot(lmfit1, cex=0.3)<jupyter_output><empty_output><jupyter_text>**residuals plots** reveal some non-linearity between the horsepower predictor and the response## Ex 3.9
This question involves the use of multiple linear regression on the Auto data set.
* (a) Produce a scatterplot matrix which includes all of the variables in the data set.
* (b) Compute the matrix of correlations between the variables using the function cor(). You will need to exclude the name variable, which is qualitative.
* (c) Use the lm() function to perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Use the summary() function to print the results. Comment on the output. For instance:
* i. Is there a relationship between the predictors and the response?
* ii. Which predictors appear to have a statistically significant relationship to the response?
* iii. What does the coefficient for the year variable suggest?
* (d) Use the plot() function to produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?
* (e) Use the * and : symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?
* (f) Try a few different transformations of the variables, such as log(X), √X, X2. Comment on your findings.
<jupyter_code>Auto<-read.csv("../data/Auto.csv", header=T, na.strings="?")
str(Auto)
#9.(a)
pairs(Auto[,1:8], cex=0.2, col='red')
#9.(b)
cor(Auto[,1:8], use="complete.obs")
#9.(c)
lmfit2<- lm(mpg~.-name, data=Auto)
summary(lmfit2)
#percentage error
summary(lmfit2)$sigma / mean(Auto$mpg)<jupyter_output><empty_output><jupyter_text>* **9.c.i** F-Statistics is high and p-value is low, so there is a relation between mpg and all quantitative predictors. $R^2 = 0.82$ i.e. 82% of mpg variance is explained by predictors
* **9.c.ii** based on p-values acceleration,cylinders,horsepower do not have significant relationship. displacement, weight, year, origin do
* **9.c.iii** Coefficient for year is positive i.e. newer cars have a higher mileage per galon.<jupyter_code>#9.d()
par(mfrow=c(2,2))
plot(lmfit2, cex=0.3)<jupyter_output><empty_output><jupyter_text>**9.(d)** Residuals plot shows some non-linearity in the relationship, as well as a non-constant variance of error terms.
On the residual plot 4 outliers are observed with values > 10 (** missed to plot studentized residuals**)
on Residuals vs Leverage plot we observe one observation with hight leverage<jupyter_code>#9.e
## correction. from correlation matrix pick the most correlated pairs
lmfit3<-lm(mpg~cylinders*displacement+displacement*weight,data = Auto)
summary(lmfit3)
#9.f look pairs chart, observe that displacement + horsepower + weight + acceleration
# are variables most suited for log, sqrt .. transformations
lmfit4<- lm(mpg~horsepower+weight+acceleration, data=Auto)
summary(lmfit4)
#took off displacement, high p-value
##took off displacement
lmfit4b<- lm(mpg~log(weight) + sqrt(horsepower)+acceleration +I(acceleration^2), data=Auto)
summary(lmfit4b)
par(mfrow=c(3,2))
plot(lmfit4b, cex=0.3)
plot(predict(lmfit4b), rstudent(lmfit4b))<jupyter_output><empty_output><jupyter_text>for lmfit4b the residuals plot shows less of a pattern, but qqplot shows some unnormality
If we look at pairs plt in 9.a we observe that displacement, horsepoewr and wegith show a similar nonlinear pattern for mpg. So the next attempt is to try to transpform the response log(mpg)<jupyter_code>lmfit5<-lm(log(mpg)~cylinders+displacement+horsepower+weight+acceleration+year+origin,data=Auto)
summary(lmfit5)
## we have a higher R^2 the charts below shows a better model fit and no outliers
par(mfrow=c(3,2))
plot(lmfit5, cex=0.3)
plot(predict(lmfit5), rstudent(lmfit5))<jupyter_output><empty_output><jupyter_text>### Ex 10.
This question should be answered using the Carseats data set.
* (a) Fit a multiple regression model to predict Sales using Price,Urban, and US.
* (b) Provide an interpretation of each coefficient in the model. Be careful—some of the variables in the model are qualitative!
* (c) Write out the model in equation form, being careful to handle the qualitative variables properly.
* (d) For which of the predictors can you reject the null hypothesis H0 :βj =0?
* (e) On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.
* (f) How well do the models in (a) and (e) fit the data?
* (g) Using the model from (e), obtain 95% confidence intervals for the coefficient(s).
* (h) Is there evidence of outliers or high leverage observations in the model from (e)?<jupyter_code>library(ISLR)
data(Carseats)
str(Carseats)
#10.a
# pairs(Carseats[,c(1,6,10,11)], cex=0.1)
lmfit101<-lm(Sales~Price+Urban+US, data=Carseats)
summary(lmfit101) <jupyter_output><empty_output><jupyter_text>### 10.b
We observe statistically significant linear regression coefficients for the quantitative predictor Price and the qualitative predictor US.
Price contributes negatively to Sales, whereas a customer being from the US is associated with higher Sales.
The qualitative predictor Urban does not have a statistically significant coefficient. ### 10.c
$ Sales = \beta_0 + \beta_1 Price + \beta_2 + \epsilon$ if a customer is from US.
$ Sales = \beta_0 + \beta_1 Price + \epsilon$ if a customer is not from US. ### 10.d
For the qualitative predictor Urban we cannot reject the null hypothesis $H_0: \beta_j = 0$ because of its high p-value; for Price and US the p-values are low, so we can reject it.<jupyter_code>### 10.e
lmfit102<-lm(Sales~Price+US, data=Carseats)
summary(lmfit102) <jupyter_output><empty_output><jupyter_text>### 10.f
Both models have $R^2=0.239$ which is a relative low percentage of the variance of Sales explained by predictors.
The second model has a higher F-statistic and a slightly lower RSE. Thus the second model is slightly better.
<jupyter_code>##10.g 95% confidence intervals for coefficiens
confint(lmfit102)
##10.h
par(mfrow=c(3,2))
plot(lmfit102, cex=0.3)
plot(predict(lmfit102), rstudent(lmfit102))
##leverage statistic (p+1)/n
p<-2
n<-400
(p+1)/n<jupyter_output><empty_output><jupyter_text>### 10.h
(a) the plot of studentized residuals shows that they all lie within [-3,3], so there are no clear outliers.
(b) the Residuals vs Leverage plot shows a number of points with leverage greater than the average leverage (p+1)/n.
Points that combine high leverage with large residuals are the most influential (indexes 26, 50, 368).## Ex 11
In this problem we will investigate the t-statistic for the null hypothesis H0 in a simple linear regression without an intercept.<jupyter_code>set.seed(1)
x=rnorm(100)
y<-2*x+rnorm(100)
plot(y~x)
#11.a
lmfit111<-lm(y~x+0) #without an intercept
summary(lmfit111)<jupyter_output><empty_output><jupyter_text>#### 11(a)
The coefficient estimate is 1.99 with std. error 0.106, t-statistic 18.73 and p-value < 2e-16.
The linear model has a high F-statistic and a high R squared of 0.77, suggesting that the model fits the data well<jupyter_code>#11.b
lmfit112<-lm(x~y+0) #without an intercept
summary(lmfit112)<jupyter_output><empty_output><jupyter_text>#### 11(b)
Fitting x onto y produces a fit of essentially the same quality as fitting y onto x.
### 11(c)
An inverse relationship is suggested
$y = 2 x + \epsilon$
$x = \frac{1}{2}(y - \epsilon)$### 11(d)
The derivation is done by hand in ISLEX1.
We can prove that t-statistic is
$$tstat = \frac{(\sqrt{n-1})\sum_{i=1}^n x_i y_i }
{\sqrt{\sum_{i=1}^n x_i^2 \sum_{i'=1}^n y_{i'}^2
- \left(\sum_{i'=1}^n x_{i'} y_{i'}\right)^2 }}$$### 11.(e)
lm(y~x) and lm(x~y) have equal t-statistics because, in the formula above, swapping x and y leaves the expression unchanged.### 11(f)<jupyter_code>lmfit113<-lm(x~y)
lmfit114<-lm(y~x)
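## (added sketch) numerically check the closed-form t-statistic from 11(d);
## assumes x and y generated in 11(a) are still in the workspace
n <- length(x)
t_formula <- sqrt(n - 1) * sum(x * y) / sqrt(sum(x^2) * sum(y^2) - sum(x * y)^2)
print(t_formula) # should match the t value reported for both fits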
summary(lmfit113)$coefficients
summary(lmfit114)$coefficients<jupyter_output><empty_output><jupyter_text>As we can see the t-value for coefficients for y and x are the same## Ex 12#### 12.a
If
$$ \hat{\beta} = \left( \sum_{i=1}^n x_i y_i \right) /
\left( \sum_{i'=1}^n x_{i'}^2 \right) $$
The coefficient estimate $\beta$ will be the same for the regression of X onto Y and of Y onto X if
$ \sum_{i=1}^n x_{i}^2 = \sum_{i=1}^n y_{i}^2 $<jupyter_code>##12.b
set.seed(1)
x=rnorm(100)
y<-2*x+runif(100)
print(summary(lm(y~x-1))$coefficients)
print(summary(lm(x~y-1))$coefficients)
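## (added sketch) a case where the two no-intercept fits give the *same* coefficient:
## take y as a permutation of x, so that sum(x^2) == sum(y^2) exactly
x_perm <- rnorm(100)
y_perm <- sample(x_perm, 100)
print(coef(lm(y_perm ~ x_perm - 1)))
print(coef(lm(x_perm ~ y_perm - 1)))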
##12.c
set.seed(1)
x=rnorm(100)
y=rnorm(100) # y is drawn from the same distribution as x
print(summary(lm(y~x-1))$coefficients)
print(summary(lm(x~y-1))$coefficients)<jupyter_output> Estimate Std. Error t value Pr(>|t|)
x -0.006123917 0.1064767 -0.05751413 0.9542516
Estimate Std. Error t value Pr(>|t|)
y -0.005455947 0.09486272 -0.05751413 0.9542516
<jupyter_text>## Ex 13<jupyter_code>## 13.a
set.seed(1)
x<-rnorm(100, mean=0, sd=1) #default values
##13.b
eps<-rnorm(100, mean=0, sd=0.25) #residuals with smaller variance
##13.c
y<- -1+0.5*x + eps
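## (added note) the population model is y = -1 + 0.5*x + eps, i.e. beta0 = -1 and beta1 = 0.5;
## these are the values the estimates in 13.e should be compared against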
#13(d)
plot(y~x, main="eps=0.25")
##13.e
#e)
lmfit131<-lm(y~x)
print(summary(lmfit131))<jupyter_output>
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-0.46921 -0.15344 -0.03487 0.13485 0.58654
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.00942 0.02425 -41.63 <2e-16 ***
x 0.49973 0.02693 18.56 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.2407 on 98 degrees of freedom
Multiple R-squared: 0.7784, Adjusted R-squared: 0.7762
F-statistic: 344.3 on 1 and 98 DF, p-value: < 2.2e-16
<jupyter_text>#### 13.e
The model fits the data well: high F-statistic, low p-value<jupyter_code>#13.f
plot(y~x, main="eps=0.25")
abline(lmfit131, col=2)
abline(-1, 0.5, col=3)
legend(-1, legend=c("model fit", "pop. regression"),col=2:3,lwd=3 )<jupyter_output><empty_output><jupyter_text>#### 13.f
We observe that the linear model fit and the population regression lines are very close<jupyter_code>### 13.g
lmfit132<-lm(y~x+I(x^2))
print(summary(lmfit132))
plot(y~x, main="eps = 0.25")
abline(lmfit131, col=2)
points(x,predict(lmfit132, data.frame(x)) ,col=4, cex=0.2)
legend(-1, legend=c("linear model fit", "quadratic model fit"),col=c(2,4),lwd=3 )<jupyter_output><empty_output><jupyter_text>#### 13.g
We observe a lower F-statistic in the quadratic regression compared to the linear one, a slight decrease in RSE and a slight increase in R squared. The quadratic model fits the data a bit better than the linear model.<jupyter_code>##13.h
epsb<-rnorm(100, mean=0, sd=0.10) #residuals with smaller variance
yb<- -1+0.5*x + epsb
lmfit133<-lm(yb~x)
lmfit134<-lm(yb~x+I(x^2))
summary(lmfit133)
summary(lmfit134)
plot(yb~x, main="lower eps = 0.10")
abline(lmfit133, col=2)
points(x,predict(lmfit134, data.frame(x)) ,col=4, cex=0.2)
legend(-1, legend=c("linear model fit", "quadratic model fit"),col=c(2,4),lwd=3 )
<jupyter_output><empty_output><jupyter_text>### 13.h
With lower variance in the residuals we observe linear and quadratic fits with higher F-statistics. As before, the quadratic fit describes the data slightly better than the linear model<jupyter_code>## 13.i
epsbb<-rnorm(100, mean=0, sd=0.40) #residuals with larger variance
ybb<- -1+0.5*x + epsbb
lmfit135<-lm(ybb~x)
lmfit136<-lm(ybb~x+I(x^2))
summary(lmfit135)
summary(lmfit136)
plot(ybb~x, main="larger eps = 0.40")
abline(lmfit135, col=2)
points(x,predict(lmfit136, data.frame(x)) ,col=4, cex=0.2)
legend(-1, legend=c("linear model fit", "quadratic model fit"),col=c(2,4),lwd=3 )<jupyter_output><empty_output><jupyter_text>### 13.i
With higher variance in the residuals we observe linear and quadratic fits with lower F-statistics. As before, the quadratic fit describes the data about as well as the linear model.<jupyter_code>#13.j
confint(lmfit131) #ref eps
confint(lmfit133) #smaller eps
confint(lmfit135) #larger eps
paste("x interval length eps ref:", confint(lmfit131)[2,2] - confint(lmfit131)[2,1] )
paste("x interval length eps smaller:", confint(lmfit133)[2,2] - confint(lmfit133)[2,1] )
paste("x interval length eps larger:", confint(lmfit135)[2,2] - confint(lmfit135)[2,1] )
<jupyter_output><empty_output><jupyter_text>#### 13.j
As expected, a smaller residual variance makes the confidence intervals narrower, i.e. the coefficient estimates are more precise, and vice versa.## Ex 14
#### 14.a<jupyter_code>set.seed(1)
x1<-runif(100)
x2 <- 0.5*x1 + rnorm(100)/10
y <- 2+2*x1 +0.3*x2+rnorm(100)
<jupyter_output><empty_output><jupyter_text>The true linear model is $y = 2.0 + 2*x1 + 0.3*x2 + \epsilon$
The regression coefficients are 2 for x1 and 0.3 for x2.#### 14.b<jupyter_code>cor(x1,x2) # 0.835 the correlation is high
plot(x1,x2, main="x1 vs x2") # we observe correlated values<jupyter_output><empty_output><jupyter_text>#### 14.c<jupyter_code>lmfit141<-lm(y~x1+x2)
summary(lmfit141)<jupyter_output><empty_output><jupyter_text>A linear regression model has an F-statistic 12.8 and a low p-value.
Estimates of coefficients $\hat\beta_0, \hat\beta_1, \hat\beta_2$ : 2.13, 1.43, 1.0
The estimate of $\beta_0$ is close to the true value; there is a bigger difference for $\beta_1$, and for $\beta_2$ the estimate is even further off.
For x1 p-value = 0.04 It is small enough so we can reject the null hypothesis $H_0: \beta_1 = 0$.
For x2 p-value = 0.37 It is large so we cannot reject the null hypothesis $H_0: \beta_2 = 0$.
#### 14.d<jupyter_code>lmfit142<-lm(y~x1)
summary(lmfit142)<jupyter_output><empty_output><jupyter_text>Now the coefficient estimate of x1 is much closer to its real value 2.0
Given its small p-value we can reject the null hypo $H_0: \beta_1 = 0$.#### 14.e<jupyter_code>lmfit143<-lm(y~x2)
summary(lmfit143)<jupyter_output><empty_output><jupyter_text>Given the small p-value we can reject the null hypo $H_0: \beta_2 = 0$.#### 14.f
The results in (c) and (e) do not contradict each other, as they come from two distinct linear regression models. They point to a correlation between x1 and x2 and to potential collinearity between them.#### 14.g<jupyter_code>## add additional data point
x1 <- c(x1, 0.1)
x2 <- c(x2, 0.8) #this is an outlier
y <- c(y, 6) <jupyter_output><empty_output><jupyter_text>## Ex 3.15
This problem involves the Boston data set, which we saw in the lab for this chapter. We will now try to predict per capita crime rate using the other variables in this data set. In other words, per capita crime rate is the response, and the other variables are the predictors.
(a) For each predictor, fit a simple linear regression model to predict the response. Describe your results. In which of the models is there a statistically significant association between the predictor and the response? Create some plots to back up your assertions.
(b) Fit a multiple regression model to predict the response using all of the predictors. Describe your results. For which predictors can we reject the null hypothesis H0 : βj = 0?
(c) How do your results from (a) compare to your results from (b)? Create a plot displaying the univariate regression coefficients from (a) on the x-axis, and the multiple regression coefficients from (b) on the y-axis. That is, each predictor is displayed as a single point in the plot. Its coefficient in a simple linear regression model is shown on the x-axis, and its coefficient estimate in the multiple linear regression model is shown on the y-axis.
(d) Is there evidence of non-linear association between any of the predictors and the response? To answer this question, for each predictor X, fit a model of the form
$$Y = \beta_0 +\beta_1X +\beta_2X^2 +\beta_3X^3 +\epsilon$$
<jupyter_code>library(MASS)
str(Boston)<jupyter_output>'data.frame': 506 obs. of 14 variables:
$ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
$ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
$ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
$ chas : int 0 0 0 0 0 0 0 0 0 0 ...
$ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
$ rm : num 6.58 6.42 7.18 7 7.15 ...
$ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
$ dis : num 4.09 4.97 4.97 6.06 6.06 ...
$ rad : int 1 2 2 3 3 3 5 5 5 5 ...
$ tax : num 296 242 242 222 222 222 311 311 311 311 ...
$ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
$ black : num 397 397 393 395 397 ...
$ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
$ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
<jupyter_text>### 15.a<jupyter_code>##convert chas variable to a factor
Boston$chas <- factor(Boston$chas, labels = c("N","Y"))
# running single predictor linear regression model on all variables except response itself
print(paste(names(Boston)[-1], collapse=" "))
##########################
# 'zn' proportion of residential land zoned for lots over 25,000 sq.ft.
lm_zn<-lm(crim~zn,data=Boston)
print(summary(lm_zn)) #### p-value low. linear model statistically significant but rsq very low 3.8%
# 'indus' proportion of non-retail business acres per town.
lm_indus<-lm(crim~indus,data=Boston)
print(summary(lm_indus)) ### p-value low, lm significant, rsq = 0.163
#'chas' Charles River dummy variable
lm_chas<-lm(crim~chas,data=Boston)
print(summary(lm_chas)) ##### p-value high, lm non sig
#'nox' nitrogen oxides concentration
lm_nox<-lm(crim~nox,data=Boston)
print(summary(lm_nox)) ##### p-value low, lm sig, rsq = 0.17
#'rm' average number of rooms per dwelling.
lm_rm<-lm(crim~rm,data=Boston)
print(summary(lm_rm)) # p-val ok lm sig, rsq=0.046
#'age' proportion of owner-occupied units built prior to 1940.
lm_age<-lm(crim~age,data=Boston)
print(summary(lm_age)) ##### p-value ok, lm sig, rsq=0.12
# 'dis' weighted mean of distances to five Boston employmet centres.
lm_dis<-lm(crim~dis,data=Boston)
print(summary(lm_dis)) ### p-val ok, lm sig, rsq=0.14
#'rad' index of accessibility to radial highways.
lm_rad<-lm(crim~rad,data=Boston)
print(summary(lm_rad)) #### p-val ok, lm sig rsq = 0.39 !!!!
#'tax' full-value property-tax rate per \$10,000.
lm_tax<-lm(crim~tax,data=Boston)
print(summary(lm_tax)) ### p-val ok, lm sig , rsq 0.338 !!!
#'ptratio' pupil-teacher ratio by town.
lm_ptratio<-lm(crim~ptratio,data=Boston)
print(summary(lm_ptratio)) ### p-val ok, lm sig rsq = 0.08
#'black' 1000(Bk - 0.63)^2 where Bk is the proportion of blacks bytown.
lm_black<-lm(crim~black,data=Boston)
print(summary(lm_black)) ### p-val ok, lm sig, rsq = 0.14 !!
#'lstat' lower status of the population (percent).
lm_lstat<-lm(crim~lstat,data=Boston)
print(summary(lm_lstat)) ### p-val ok, lm sig, rsq = 0.2 !!!
#'medv' median value of owner-occupied homes in \$1000s.
lm_medv<-lm(crim~medv,data=Boston)
print(summary(lm_medv)) #### p-val ok, lm sig, rsq=0.149 !!!<jupyter_output>
Call:
lm(formula = crim ~ medv, data = Boston)
Residuals:
Min 1Q Median 3Q Max
-9.071 -4.022 -2.343 1.298 80.957
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 11.79654 0.93419 12.63 <2e-16 ***
medv -0.36316 0.03839 -9.46 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 7.934 on 504 degrees of freedom
Multiple R-squared: 0.1508, Adjusted R-squared: 0.1491
F-statistic: 89.49 on 1 and 504 DF, p-value: < 2.2e-16
<jupyter_text>**15.(a)** In summary liner models are statistically significant with all but one predictor
(chas)
Predictors producing the highest percengages of the response variance explained are:
(rad, tax,black,lstat,medv)
<jupyter_code>### plot of predictors with highest relation
par(mfrow=c(3,2))
plot(crim~rad, data=Boston, cex=0.2, col=1)
abline(lm_rad, col=1)
plot(crim~tax, data=Boston, cex=0.2, col=2)
abline(lm_tax, col=2)
plot(crim~black, data=Boston, cex=0.2, col=3)
abline(lm_black, col=3)
plot(crim~lstat, data=Boston, cex=0.2, col=4)
abline(lm_lstat, col=4)
plot(crim~medv, data=Boston, cex=0.2, col=5)
abline(lm_medv, col=5)<jupyter_output><empty_output><jupyter_text>### 15.b<jupyter_code>## linear model using all predictors
lm_15b<-lm(crim~., data=Boston)
print(summary(lm_15b))<jupyter_output>
Call:
lm(formula = crim ~ ., data = Boston)
Residuals:
Min 1Q Median 3Q Max
-9.924 -2.120 -0.353 1.019 75.051
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 17.033228 7.234903 2.354 0.018949 *
zn 0.044855 0.018734 2.394 0.017025 *
indus -0.063855 0.083407 -0.766 0.444294
chasY -0.749134 1.180147 -0.635 0.525867
nox -10.313535 5.275536 -1.955 0.051152 .
rm 0.430131 0.612830 0.702 0.483089
age 0.001452 0.017925 0.081 0.935488
dis -0.987176 0.281817 -3.503 0.000502 ***
rad 0.588209 0.088049 6.680 6.46e-11 ***
tax -0.003780 0.005156 -0.733 0.463793
ptratio -0.271081 0.186450 -1.454 0.146611
black -0.007538 0.003673 -2.052 0.040702 *
lstat 0.126211 0.075725 1.667 0.096208 .
medv -0.198887 0.060516 -3.287 0.001087 **
---
Signif. codes: 0 ‘***’ 0.0[...]<jupyter_text>Running ** a multiple linear regression ** produces R-squared higher than for any single predictor linear regression.
We can **reject null hypothesis** for predictors: {zn,dis,rad,black,medv}
<jupyter_code>lm_15bsig<-lm(crim~zn+dis+rad+black+medv,data=Boston)
print(summary(lm_15bsig))<jupyter_output>
Call:
lm(formula = crim ~ zn + dis + rad + black + medv, data = Boston)
Residuals:
Min 1Q Median 3Q Max
-10.553 -1.869 -0.358 0.839 75.744
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.919933 1.778986 4.452 1.05e-05 ***
zn 0.051799 0.017329 2.989 0.002935 **
dis -0.672189 0.202939 -3.312 0.000992 ***
rad 0.472306 0.042102 11.218 < 2e-16 ***
black -0.008211 0.003615 -2.271 0.023562 *
medv -0.174219 0.036295 -4.800 2.10e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 6.473 on 500 degrees of freedom
Multiple R-squared: 0.4393, Adjusted R-squared: 0.4337
F-statistic: 78.34 on 5 and 500 DF, p-value: < 2.2e-16
<jupyter_text>### 15.c<jupyter_code>#construct y vec
coef_mult_reg <- lm_15b$coef[-1]
#coef_mult_reg
#construct x vec
coef_singl_reg <- c(lm_zn$coef[-1], lm_indus$coef[-1], lm_chas$coef[-1], lm_nox$coef[-1], lm_rm$coef[-1],
lm_age$coef[-1], lm_dis$coef[-1], lm_rad$coef[-1], lm_tax$coef[-1], lm_ptratio$coef[-1],
lm_black$coef[-1], lm_lstat$coef[-1], lm_medv$coef[-1])
#coef_singl_reg
par(mfrow=c(1,1))
plot(coef_singl_reg, coef_mult_reg, main = "coefficients")<jupyter_output><empty_output><jupyter_text>coefficient for nox is -10 in multiple linear regression and 31 in single linear regression### 15.(d)<jupyter_code>predictors<-c('zn','indus','nox','rm','age','dis','rad','tax','ptratio','black','lstat','medv')
print( paste(predictors, collapse=" ") )
for (p in predictors) {
print(p)
lm_p_q <- lm(crim~poly(eval(as.name(p)),3) , data=Boston )
print(summary(lm_p_q))
}
<jupyter_output>[1] "zn indus nox rm age dis rad tax ptratio black lstat medv"
[1] "zn"
Call:
lm(formula = crim ~ poly(eval(as.name(p)), 3), data = Boston)
Residuals:
Min 1Q Median 3Q Max
-4.821 -4.614 -1.294 0.473 84.130
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.6135 0.3722 9.709 < 2e-16 ***
poly(eval(as.name(p)), 3)1 -38.7498 8.3722 -4.628 4.7e-06 ***
poly(eval(as.name(p)), 3)2 23.9398 8.3722 2.859 0.00442 **
poly(eval(as.name(p)), 3)3 -10.0719 8.3722 -1.203 0.22954
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 8.372 on 502 degrees of freedom
Multiple R-squared: 0.05824, Adjusted R-squared: 0.05261
F-statistic: 10.35 on 3 and 502 DF, p-value: 1.281e-06
[1] "indus"
Call:
lm(formula = crim ~ poly(eval(as.name(p)), 3), data = Boston)
Residuals:
Min 1Q Median 3Q Max
-8.278 -2.514 0.054 0.764 79.713
Coefficients[...]
|
no_license
|
/ai/lagunita/tibshirani/ch3-LinearRegression/ch3-ex-applied.ipynb
|
slzdevsnp/learn
| 36 |
<jupyter_start><jupyter_text>Won't include handwashing facilities, and male/female smokers.<jupyter_code>df
continent_df
#groupby location and then resample by week - get new cases
test = df.set_index('date').groupby(['location']).resample("W").sum().reset_index()[['location','date','new_cases']]
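# (added note) set_index('date') + groupby('location') + resample("W") aggregates the daily
# rows into one row per country per week, so 'new_cases' becomes weekly totals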
for date in test['date'].unique():
print (date)
#groupby location and then resample by month - get new cases
df.set_index('date').groupby(['location']).resample("M").sum().reset_index()[['date','new_cases']]
fig, ax = plt.subplots(figsize = (10,10))
#color of dots
color = {'Africa':'red',
'Asia':'blue',
'Europe':'green',
'North America':'black',
'Australia':'yellow',
'South America':'purple'}
# plot scatter plot
grouped = continent_df.reset_index().groupby('population')
for key, group in grouped:
group.plot(ax=ax,
kind='scatter',
x='gdp_per_capita',
y='total_cases',
#label=key,
#color=color[key],
alpha = 0.5,
s = key/1000000)
plt.yscale("log")
plt.title("Log plot: GDP per capita vs cases of COVID-19")
plt.show()
grouped = continent_df.reset_index().groupby(['continent','location','population'])
for key, group in grouped:
print (key)
#color of dots
fig, ax = plt.subplots(figsize = (10,10))
color = {'Africa':'red',
'Asia':'blue',
'Europe':'green',
'North America':'black',
'Australia':'yellow',
'South America':'purple'}
grouped = continent_df.dropna().reset_index().groupby(['continent','population'])
continent_used = []
for key, group in grouped:
if key[0] not in continent_used:
plt.scatter(x=group['gdp_per_capita'],
y=group['total_cases'],
label=key[0],
color=color[key[0]],
alpha = 0.5,
s = key[1]/1000000)
continent_used.append(key[0])
else:
plt.scatter(x=group['gdp_per_capita'],
y=group['total_cases'],
#label=key[0],
color=color[key[0]],
alpha = 0.5,
s = key[1]/1000000)
#plt.plot(group['gdp_per_capita'],group['total_cases'],color=color[key[0]])
plt.yscale("log")
plt.legend()
plt.show()
continent_df.info()
plt.figure(figsize=(10,10))
sns.scatterplot(data=continent_df,
x='gdp_per_capita',
y='total_cases',
size='population',
sizes=(10, 2000),
hue='continent',
alpha=0.7
)
plt.legend(loc='lower right',markerscale=0.4,labelspacing=1)
plt.yscale("log")
plt.show()<jupyter_output><empty_output><jupyter_text>Now lets animate the plot by time<jupyter_code>#import matplotlib animate module
plt.rcParams["animation.html"] = "jshtml"
%matplotlib inline
from matplotlib.animation import FuncAnimation
fig,ax = plt.subplots(figsize=(10,10))
#FuncAnimation(fig, func, frames=None, init_func=None,
#fargs=None, save_count=None, *, cache_frame_data=True, **kwargs)
#first recorded date
day_counter = df['date'][df['date'].notnull()].unique()[0]
setx, sety = [],[]
#plot
scatter = sns.scatterplot(
x=setx,
y=sety,
size=30,
#sizes=(10, 2000),
#hue='continent',
alpha=0.7,
ax=ax)
def animate(i):
global day_counter
#select wanted data, given the date we have
gdp_per_capita = df.groupby(['date','location','continent']).mean().loc[day_counter]['gdp_per_capita']
total_cases = df.groupby(['date','location','continent']).sum().loc[day_counter]['total_cases']
population = df.groupby(['date','location','continent']).mean().loc[day_counter]['population']
#add continents data
#df.groupby(['date','location','continent']).mean().loc[day_counter]['gdp_per_capita']
#append new data to
setx.append(gdp_per_capita)
sety.append(total_cases)
#continue to next day
day_counter = day_counter + np.timedelta64(1,'D')
return scatter
#create dataframe
animation = FuncAnimation(fig,
func=animate,
interval=100)
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.animation
import numpy as np
fig, ax = plt.subplots()
x, y = [],[]
sc = ax.scatter(x,y)
plt.xlim(0,10)
plt.ylim(0,10)
def animate(i):
x.append(np.random.rand(1)*10)
y.append(np.random.rand(1)*10)
sc.set_offsets(np.c_[x,y])
ani = matplotlib.animation.FuncAnimation(fig, animate,
frames=2, interval=100, repeat=True)
from IPython.display import HTML
HTML(ani.to_html5_video())
#add one day to initial date
df['date'][df['date'].notnull()].unique()[0]+np.timedelta64(1,'D')
df.groupby(['date','location','continent']).mean().loc[day_counter].index[2][1]
df.groupby(['date','location','continent']).mean().loc['2020-01-01']['gdp_per_capita']
import time
for i in df['date'][df['date'].notnull()].unique():
yaxis = df.groupby(['date','location']).sum().loc[i]['total_cases']
#print (df.groupby(['date','location']).sum().loc[i])
xaxis = df.groupby(['date','location']).mean().loc[i]['gdp_per_capita']
#print (yaxis)
df.groupby(['date','location']).mean().loc['2020-05-18']['gdp_per_capita']
df.groupby('date').loc['2019-12-31 00:00:00']
#Create median age bins
to_sort = df.groupby('location').mean()['median_age']
bins = [0,20,30,40,50]
age_bins = pd.cut(to_sort,bins,labels = ['<20','20-30','30-40','40-50'])
#New age bin column
to_bin = lambda x: age_bins[x]
df['age_bins'] = df.location[df.location.notna()].apply(to_bin).astype(str)
df['age_bins'] = df['age_bins'].astype(str)
df['age_bins'].unique()
df['age_bins'].value_counts()
#make df that is first grouped by country and age group
country_age = df.groupby(['location','age_bins'])[['total_cases','total_deaths']].max().reset_index()
#set the order in which the bars should appear
cases_age = country_age.groupby('age_bins').sum().reindex(['<20', '20-30', '30-40','40-50']).reset_index()
#bars for the whole world
plt.bar(cases_age['age_bins'],
cases_age['total_cases'],
label='Rest of the World')
#bars for the US
plt.bar(country_age['age_bins'][(country_age['age_bins'] != 'nan') & (country_age['location'] == 'United States')],
country_age['total_cases'][(country_age['age_bins'] != 'nan') & (country_age['location'] == 'United States')],
label='United States')
plt.title("Cases in countries by median age group")
plt.xlabel('Median age group')
plt.ylabel('Cases')
plt.legend()
plt.show()<jupyter_output><empty_output><jupyter_text>Now create whole new map<jupyter_code>#function to return cartesian coordinates
#https://stackoverflow.com/questions/1185408/converting-from-longitude-latitude-to-cartesian-coordinates
def spherical_to_cartesian(lat,long):
R = 1 #radius of the earth = 6367 km
x = R * np.cos((lat)/180 *np.pi) * np.cos((long)/180 *np.pi)
y = R * np.cos((lat)/180 *np.pi) * np.sin((long)/180 *np.pi)
z = R *np.sin((lat)/180 *np.pi)
return x,y,z
#create new x, y and z cartesian columns
df['x'] = spherical_to_cartesian(df['latitude'],df['longitude'])[0]
df['y'] = spherical_to_cartesian(df['latitude'],df['longitude'])[1]
df['z'] = spherical_to_cartesian(df['latitude'],df['longitude'])[2]
#groupby location and then resample by week - get new cases
country_cases_week = df.set_index('date').groupby(['location']).resample("M").sum().reset_index()[['date','new_cases']]
#groupby location and then resample by week - get cardinal
country_cardinal_week = df.set_index('date').groupby(['location']).resample("M").mean().reset_index()[['x','y','z']]
location_weighted_df = pd.concat([country_cases_week,country_cardinal_week],axis=1)
#Create weighted mean
def wm(x):
try:
return np.average(x, weights=location_weighted_df.loc[x.index, "new_cases"])
except ZeroDivisionError:
return 0
weighted_df = location_weighted_df.groupby('date').agg(x_weighted_mean=("x", wm),
y_weighted_mean=("y", wm),
z_weighted_mean=("z", wm))
#function to return spherical coordinates
#https://stackoverflow.com/questions/1185408/converting-from-longitude-latitude-to-cartesian-coordinates
def cartesian_to_spherical(x,y,z):
R = 1 #radius of the earth = 6367 km
latitude = np.arcsin(z / R)
longitude = np.arctan2(y, x)
#convert from radians to degrees
latitude = (latitude/np.pi)*180
longitude = (longitude/np.pi)*180
return latitude, longitude
weighted_df['latitude'] = cartesian_to_spherical(weighted_df['x_weighted_mean'],
weighted_df['y_weighted_mean'],
weighted_df['z_weighted_mean'])[0]
weighted_df['longitude'] = cartesian_to_spherical(weighted_df['x_weighted_mean'],
weighted_df['y_weighted_mean'],
weighted_df['z_weighted_mean'])[1]
#make data more versatile by adding new_cases
weighted_df = pd.concat([weighted_df,location_weighted_df.groupby('date').sum()],axis=1)
weighted_df
weighted_df.loc['2020-03-31'].longitude-10
#plot movement of COVID-19 across the world
#https://geopandas.readthedocs.io/en/latest/gallery/create_geopandas_from_pandas.html
#plot world
ax = world.plot(color='white', edgecolor='black', figsize = (20,20))
#plot data
plt.scatter(weighted_df.longitude,
weighted_df.latitude,
color='red',
s=weighted_df.new_cases/5000,
alpha =0.9)
plt.plot(weighted_df.longitude,
weighted_df.latitude,
color ='red',
linewidth=3)
#plt.xlim(10,150)
#plt.ylim(-20,60)
plt.annotate("March",
(weighted_df.loc['2020-03-31'].longitude, weighted_df.loc['2020-03-31'].latitude),
(weighted_df.loc['2020-03-31'].longitude - 20, weighted_df.loc['2020-03-31'].latitude + 5),
fontsize=20,
color='red')
plt.annotate("April",
(weighted_df.loc['2020-04-30'].longitude, weighted_df.loc['2020-04-30'].latitude),
(weighted_df.loc['2020-04-30'].longitude - 20, weighted_df.loc['2020-04-30'].latitude + 2),
fontsize=20,
color='red')
plt.annotate("May",
(weighted_df.loc['2020-05-31'].longitude,weighted_df.loc['2020-05-31'].latitude),
(weighted_df.loc['2020-05-31'].longitude - 20,weighted_df.loc['2020-05-31'].latitude - 1),
fontsize=20,
color='red')
plt.title("The geographic center of COVID-19 over time", fontsize = 30)
plt.axis('off')
plt.show()
# set the range for the choropleth
vmin, vmax = 1000, 500000
# create figure and axes for Matplotlib
fig, ax = plt.subplots(1, figsize=(30, 12))
total_cases_df.plot(column='total_cases',
cmap='Reds',
linewidth=0.8,
ax=ax,
edgecolor='0.8',
vmin=vmin,
vmax=vmax,figsize=(30,20))
ax.set_title("Total Coronavirus cases")
# Create colorbar as a legend
sm = plt.cm.ScalarMappable(cmap='Reds', norm=plt.Normalize(vmin=vmin, vmax=vmax))
# empty array for the data range
sm._A = []
# add the colorbar to the figure
cbar = fig.colorbar(sm)
#plot data
plt.scatter(weighted_df.longitude,
weighted_df.latitude,
color='red',
s=weighted_df.new_cases/5000,
alpha =0.9)
plt.plot(weighted_df.longitude,
weighted_df.latitude,
color ='red',
linewidth=3)
#annote the points
plt.annotate("March",
(weighted_df.loc['2020-03-31'].longitude, weighted_df.loc['2020-03-31'].latitude),
(weighted_df.loc['2020-03-31'].longitude - 20, weighted_df.loc['2020-03-31'].latitude + 5),
fontsize=20,
color='red')
plt.annotate("April",
(weighted_df.loc['2020-04-30'].longitude, weighted_df.loc['2020-04-30'].latitude),
(weighted_df.loc['2020-04-30'].longitude - 20, weighted_df.loc['2020-04-30'].latitude + 2),
fontsize=20,
color='red')
plt.annotate("May",
(weighted_df.loc['2020-05-31'].longitude,weighted_df.loc['2020-05-31'].latitude),
(weighted_df.loc['2020-05-31'].longitude - 20,weighted_df.loc['2020-05-31'].latitude - 1),
fontsize=20,
color='red')
plt.title("The geographic center of COVID-19 over time", fontsize = 30)
plt.axis('off')
plt.show();
#Global cases
fig = plt.figure()
ax = plt.axes()
#ax.plot(x, np.sin(x));
#data
groupby_global = df.groupby('date').sum()
x = groupby_global.index
y = groupby_global.total_cases
#plot
ax.plot(x,y,c='r')
ax.bar(x,y)
ax.tick_params(axis='x',which='both',labelbottom=False)
ax.set_title('Global cases over time')
ax.set_xlabel("Date")
ax.set_ylabel("Cases")
#save figure
plt.savefig('/Users/sebastiangraff/Desktop/Data_Analysis/COVID19-Exploration/plot_images/global_cases.png')
plt.show();
#Global cases vs deaths
fig, ax = plt.subplots(figsize = (15,5),nrows=1,ncols=2)
#data
groupby_global = df.groupby('date').sum()
x = groupby_global.index
y1 = groupby_global.total_cases
y2 = groupby_global.total_deaths
#plot
ax[0].plot(x,y1,c='purple',label='Cases')
ax[0].plot(x,y2,c='red',label='Deaths')
ax[0].tick_params(axis='x',which='both',labelbottom=False)
ax[0].set_title('Global cases over time')
ax[0].set_xlabel("Date")
ax[0].set_ylabel("Cases")
ax[1].plot(x,y1,c='purple',label='Cases')
ax[1].plot(x,y2,c='red',label='Deaths')
ax[1].tick_params(axis='x',which='both',labelbottom=False)
ax[1].set_title('Global logarithimic cases over time,')
ax[1].set_xlabel("Date")
ax[1].set_ylabel("Cases")
ax[1].set_yscale('log')
#save figure
ax[0].legend()
ax[1].legend()
plt.savefig('/Users/sebastiangraff/Desktop/Data_Analysis/COVID19-Exploration/plot_images/global_cases_vs_deaths.png')
plt.show();
groupby_global = df.groupby('date').sum()
groupby_global
#Global cases vs tests
#data
groupby_global = df.groupby('date').sum()
x = groupby_global.new_tests
y = groupby_global.new_cases
plt.scatter(x,y)
plt.title("COVID-19 Cases vs number of Tests")
plt.xlabel("Tests")
plt.ylabel("Cases")
plt.savefig('/Users/sebastiangraff/Desktop/Data_Analysis/COVID19-Exploration/plot_images/global_tests_vs_cases.png')
plt.show();
#Stacked plot by continents
#data
groupby_continent = df.groupby('continent').sum()
# pivot to one column of total cases per continent, indexed by date
cases_by_continent = df.groupby(['date','continent'])['total_cases'].sum().unstack().fillna(0)
x = cases_by_continent.index
y = [cases_by_continent[c] for c in cases_by_continent.columns]
# Basic stacked area chart.
plt.stackplot(x, y, labels=cases_by_continent.columns)
plt.legend(loc='upper left')
from collections import defaultdict
#Data
continent_dict=defaultdict(list)
for index, row in df.iterrows():
continent_dict[row.continent].append(row.total_cases)
x=df.date
y1=continent_dict['North America']
y2=continent_dict['Asia']
y3=continent_dict['Africa']
y4=continent_dict['Europe']
y5=continent_dict['South America']
y6=continent_dict['Australia']
# Plot
plt.stackplot(continent_dict.keys(), y1,y2, labels=continent_dict.keys())
plt.legend(loc='upper left')
plt.show();
#np.array(d.keys(),dtype=float)<jupyter_output><empty_output>
|
no_license
|
/.ipynb_checkpoints/ECDC-Exploration-checkpoint 177.ipynb
|
sebastian-graeff/COVID19-Exploration
| 3 |
<jupyter_start><jupyter_text># Lead Scoring Case Study### Importing important libraries required for performing analysis and model building<jupyter_code># Suppress Warnings
import warnings
warnings.filterwarnings('ignore')
#importing the necessary packages required
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.metrics import precision_recall_curve<jupyter_output><empty_output><jupyter_text>## 1. Reading and understanding the Data<jupyter_code># Reading Data
df=pd.read_csv("Leads.csv")
df.head()<jupyter_output><empty_output><jupyter_text>### Looking into the physical structure of the Dataframe<jupyter_code>df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 9240 entries, 0 to 9239
Data columns (total 37 columns):
Prospect ID 9240 non-null object
Lead Number 9240 non-null int64
Lead Origin 9240 non-null object
Lead Source 9204 non-null object
Do Not Email 9240 non-null object
Do Not Call 9240 non-null object
Converted 9240 non-null int64
TotalVisits 9103 non-null float64
Total Time Spent on Website 9240 non-null int64
Page Views Per Visit 9103 non-null float64
Last Activity 9137 non-null object
Country 6779 non-null object
Specialization 7802 [...]<jupyter_text>### Looking into the statistical description of the numerical data elements<jupyter_code>df.describe()<jupyter_output><empty_output><jupyter_text>### Looking into the shape of the Dataframe<jupyter_code>df.shape<jupyter_output><empty_output><jupyter_text>### Dropping unnecessary columnsThe columns "Prospect ID",'Asymmetrique Activity Index','Asymmetrique Profile Index','Asymmetrique Profile Score','Asymmetrique Activity Score','Tags' are not logically much significant. Hence these are dropped from the Dataframe.<jupyter_code># Drop column that is not required for this analysis
df=df.drop(["Prospect ID",'Asymmetrique Activity Index','Asymmetrique Profile Index','Asymmetrique Profile Score','Asymmetrique Activity Score','Tags'], axis=1)<jupyter_output><empty_output><jupyter_text>Inspecting the Dataframe after dropping the unnecessary columns.<jupyter_code>df.head()<jupyter_output><empty_output><jupyter_text>### Checking the percentage of null values in each column.<jupyter_code>round(100*(df.isnull().sum())/len(df), 2)<jupyter_output><empty_output><jupyter_text>### Replacing 'Select' from the categorical columns with NaN (null)<jupyter_code># Replace 'select' value with np.nan
df=df.replace('Select', np.nan)<jupyter_output><empty_output><jupyter_text>Checking if the replacement of 'Select' to NaN worked propoerly or not.<jupyter_code>df.head()<jupyter_output><empty_output><jupyter_text>### Again, checking the null percentage in each column after replacing 'Select' with null values.<jupyter_code>#Checking for null values in the dataset
round(100*(df.isnull().sum())/len(df), 2)<jupyter_output><empty_output><jupyter_text>### Dropping the 3 listed columns since the null percentage in them is very high ( > 50%) and with these many null values in them, they will not impact the target variable much.
- How did you hear about X Education
- Lead Quality
- Lead Profile<jupyter_code># Drop columns with high null values(greater than 50 %)
df=df.drop(["How did you hear about X Education","Lead Quality","Lead Profile"], axis=1)
<jupyter_output><empty_output><jupyter_text>### Checking the info of the Dataframe after dropping these columns mentioned above.<jupyter_code>df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 9240 entries, 0 to 9239
Data columns (total 28 columns):
Lead Number 9240 non-null int64
Lead Origin 9240 non-null object
Lead Source 9204 non-null object
Do Not Email 9240 non-null object
Do Not Call 9240 non-null object
Converted 9240 non-null int64
TotalVisits 9103 non-null float64
Total Time Spent on Website 9240 non-null int64
Page Views Per Visit 9103 non-null float64
Last Activity 9137 non-null object
Country 6779 non-null object
Specialization 5860 non-null object
What is your current occupation 6550 [...]<jupyter_text>### Checking the nullpercentage after dropping columns with high null percentage<jupyter_code>#Checking for null values in the dataset
round(100*(df.isnull().sum())/len(df), 2)<jupyter_output><empty_output><jupyter_text>### Checking the shape of the Dataframe again<jupyter_code>df.shape<jupyter_output><empty_output><jupyter_text>### Dropping the rows where there are more than 5 empty variables.<jupyter_code># Selecting rows having less than 5 null values
df=df.loc[~((df.isnull().sum(axis=1))>5)]
<jupyter_output><empty_output><jupyter_text>### Checking the head of the Dataframe to confirm if the deletion of rows based on the null variable criteria worked properly or not.<jupyter_code>df.head()
df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
Int64Index: 9191 entries, 0 to 9239
Data columns (total 28 columns):
Lead Number 9191 non-null int64
Lead Origin 9191 non-null object
Lead Source 9161 non-null object
Do Not Email 9191 non-null object
Do Not Call 9191 non-null object
Converted 9191 non-null int64
TotalVisits 9103 non-null float64
Total Time Spent on Website 9191 non-null int64
Page Views Per Visit 9103 non-null float64
Last Activity 9126 non-null object
Country 6779 non-null object
Specialization 5860 non-null object
What is your current occupation 6508 [...]<jupyter_text>Checing the shape of the Dataframe again for the number of rows in the dataframe.<jupyter_code>df.shape<jupyter_output><empty_output><jupyter_text>We can see that the current number of rows in the Dataframe is 9191. We calculate the percentage of retained rows.<jupyter_code># Percantage of rows retained
round(len(df.index)/9240*100,2)<jupyter_output><empty_output><jupyter_text>### The percentage of retained rows is 99.47%### Checking the percentage of null values in each column.<jupyter_code>round(100*(df.isnull().sum())/len(df), 2)<jupyter_output><empty_output><jupyter_text>We see from the above percentage representation that the below columns have null values:
- Lead Source
- TotalVisits
- Page Views Per Visit
- Last Activity
- Country
- Specialization
- What is your current occupation
- What matters most to you in choosing a course
- City<jupyter_code>df.describe()<jupyter_output><empty_output><jupyter_text>### Imputing null values with mode/median/mean as applicableFor the columns TotalVisits, we impute the null values with median (3.0)<jupyter_code># Imputing TotalVisits with median
df['TotalVisits'].fillna(3.0, inplace=True)<jupyter_output><empty_output><jupyter_text>For the columns Page Views Per Visit, we impute the null values with mode<jupyter_code># Imputing Page Views Per Visit with mean
df['Page Views Per Visit'].fillna(df['Page Views Per Visit'].mean(), inplace=True)
df['Lead Source'].mode()<jupyter_output><empty_output><jupyter_text>For the column Lead Source, we impute the null values with mode - Google<jupyter_code>df['Lead Source'].fillna('Google', inplace=True)
df['Last Activity'].mode()<jupyter_output><empty_output><jupyter_text>For the column Last Activity, we impute the null values with mode - Email Opened<jupyter_code>df['Last Activity'].fillna('Email Opened', inplace=True)
df['Country'].mode()<jupyter_output><empty_output><jupyter_text>For the column Country, we impute the null values with mode - India<jupyter_code>df['Country'].fillna('India', inplace=True)
df['Specialization'].mode()<jupyter_output><empty_output><jupyter_text>For the column Specialization, we impute the null values with mode - Financial Management<jupyter_code>df['Specialization'].fillna('Finance Management', inplace=True)
df['What is your current occupation'].mode()<jupyter_output><empty_output><jupyter_text>For the column What is your current occupation, we impute the null values with mode - Unemployed<jupyter_code>df['What is your current occupation'].fillna('Unemployed', inplace=True)
df['What matters most to you in choosing a course'].mode()<jupyter_output><empty_output><jupyter_text>For the column What matters most to you in choosing a course, we impute the null values with mode - Better Career Prospects<jupyter_code>df['What matters most to you in choosing a course'].fillna('Better Career Prospects', inplace=True)
df['City'].mode()<jupyter_output><empty_output><jupyter_text>For the column City, we impute the null values with mode - Mumbai<jupyter_code>df['City'].fillna('Mumbai', inplace=True)<jupyter_output><empty_output><jupyter_text>### After all the imputations, we check the null percentage of the columns to check if we missed anthing<jupyter_code>round(100*(df.isnull().sum())/len(df), 2)<jupyter_output><empty_output><jupyter_text>### We check the percentage of rows retained<jupyter_code># Percantage of rows retained
round(len(df.index)/9240*100,2)<jupyter_output><empty_output><jupyter_text>We see that the percentage of rows retained remains same as 99.47%## 2. Visualising the Data### Using pair plot to establish the correlation if any between the numeric variables<jupyter_code># Pairplot for the numerical variables
plt.figure(figsize=[16,10])
sns.pairplot(df, vars=['TotalVisits','Total Time Spent on Website','Page Views Per Visit'])
plt.show()<jupyter_output><empty_output><jupyter_text>There is a very little correlation between TotalVisits and Page Views Per Visit### Using box plots to check the outliers in the numeric fields<jupyter_code># creating boxplots to identify outliers
plt.figure(figsize=[16,10])
plt.subplot(311)
sns.boxplot(df['TotalVisits'])
plt.subplot(312)
sns.boxplot(df['Total Time Spent on Website'])
plt.subplot(313)
sns.boxplot(df['Page Views Per Visit'])
plt.show()<jupyter_output><empty_output><jupyter_text>#### Treating outliers by capping the upper limit of the fields to 99 percentile<jupyter_code>TV_Q4 = df['TotalVisits'].quantile(0.99)
PVV_Q4 = df['Page Views Per Visit'].quantile(0.99)
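# (Added check) the 99th percentile caps can be inspected before applying them, e.g.:
# print(TV_Q4, PVV_Q4)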
df['TotalVisits'] = np.where((df['TotalVisits'] >= TV_Q4), TV_Q4, df['TotalVisits'])
df['Page Views Per Visit'] = np.where((df['Page Views Per Visit'] >= PVV_Q4), PVV_Q4, df['Page Views Per Visit'])<jupyter_output><empty_output><jupyter_text>### Using box plots again to check if the outliers were treated in the numeric fields or not<jupyter_code># creating boxplots again to validate if the outliers were removed or not
plt.figure(figsize=[16,10])
plt.subplot(311)
sns.boxplot(df['TotalVisits'])
plt.subplot(312)
sns.boxplot(df['Total Time Spent on Website'])
plt.subplot(313)
sns.boxplot(df['Page Views Per Visit'])
plt.show()<jupyter_output><empty_output><jupyter_text>## 3. Preparing the Data for Model Building### Checking the skewness of the categorical variables<jupyter_code>df['Lead Origin'].value_counts(normalize=True)
df['Lead Source'].value_counts(normalize=True)
df['Do Not Email'].value_counts(normalize=True)
df['Do Not Call'].value_counts(normalize=True)
df['Last Activity'].value_counts(normalize=True)
df['Country'].value_counts(normalize=True)
df['Specialization'].value_counts(normalize=True)
df['What is your current occupation'].value_counts(normalize=True)
df['What matters most to you in choosing a course'].value_counts(normalize=True)
df['Search'].value_counts(normalize=True)
df['Magazine'].value_counts(normalize=True)
df['Newspaper Article'].value_counts(normalize=True)
df['X Education Forums'].value_counts(normalize=True)
df['Newspaper'].value_counts(normalize=True)
df['Digital Advertisement'].value_counts(normalize=True)
df['Through Recommendations'].value_counts(normalize=True)
df['Receive More Updates About Our Courses'].value_counts(normalize=True)
df['Update me on Supply Chain Content'].value_counts(normalize=True)
df['Get updates on DM Content'].value_counts(normalize=True)
df['City'].value_counts(normalize=True)
df['I agree to pay the amount through cheque'].value_counts(normalize=True)
df['A free copy of Mastering The Interview'].value_counts(normalize=True)
df['Last Notable Activity'].value_counts(normalize=True)<jupyter_output><empty_output><jupyter_text>### Removing the below mentioned columns which have highly skewed data
* Do Not Email
* Do Not Call
* Country
* What is your current occupation
* What matters most to you in choosing a course
* Search
* Magazine
* Newspaper Article
* X Education Forums
* Newspaper
* Digital Advertisement
* Through Recommendations
* Receive More Updates About Our Courses
* Update me on Supply Chain Content
* Get updates on DM Content
* I agree to pay the amount through cheque<jupyter_code># Removing highly skewed columns
to_remove=['Do Not Email','Do Not Call','Country','What is your current occupation','What matters most to you in choosing a course','Search','Magazine','Newspaper Article','X Education Forums','Newspaper','Digital Advertisement','Through Recommendations','Receive More Updates About Our Courses','Update me on Supply Chain Content','Get updates on DM Content','I agree to pay the amount through cheque']
df1=df.drop(to_remove, axis=1)<jupyter_output><empty_output><jupyter_text>### Inspecting the Dataframe after removing the highly skewed variables<jupyter_code>df1.shape
df1.head()<jupyter_output><empty_output><jupyter_text>### Mapping the Yes/No to 1/0 respectively in the categorical variables<jupyter_code># Mapping 'Yes' with 1 and 'No' with 0
# Defining the map function
def binary_map(x):
return x.map({'Yes': 1, 'No': 0})
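# (Added note) DataFrame.apply passes each selected column to binary_map as a Series,
# and Series.map then translates every 'Yes'/'No' entry to 1/0.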
# Applying the function to the 'A free copy of Mastering The Interview' column
df1[['A free copy of Mastering The Interview']] = df1[['A free copy of Mastering The Interview']].apply(binary_map)<jupyter_output><empty_output><jupyter_text>### Inspecting if the mapping worked correctly or not<jupyter_code>df1.head()<jupyter_output><empty_output><jupyter_text>### Bucketing the categorical variables where there are more number of categories<jupyter_code>df1['Lead Origin'] = np.where(~(df1['Lead Origin'].isin(['Landing Page Submission','API'])), "Others", df1['Lead Origin'])
df1['Lead Origin'].value_counts(normalize=True)
df1['Lead Source'] = np.where(~(df1['Lead Source'].isin(['Google','Direct Traffic','Olark Chat','Organic Search'])), "Others", df1['Lead Source'])
df1['Lead Source'].value_counts(normalize=True)
df1['Last Activity'] = np.where(~(df1['Last Activity'].isin(['Email Opened','SMS Sent','Olark Chat Conversation'])), "Others", df1['Last Activity'])
df1['Last Activity'].value_counts(normalize=True)
df1['Specialization'] = np.where(~(df1['Specialization'].isin(['Finance Management','Human Resource Management','Marketing Management','Operations Management'])), "Others", df1['Specialization'])
df1['Specialization'].value_counts(normalize=True)
df1['City'] = np.where(~(df1['City'].isin(['Mumbai','Thane & Outskirts'])), "Others", df1['City'])
df1['City'].value_counts(normalize=True)
df1['Last Notable Activity'] = np.where(~(df1['Last Notable Activity'].isin(['Modified','Email Opened','SMS Sent'])), "Others", df1['Last Notable Activity'])
df1['Last Notable Activity'].value_counts(normalize=True)<jupyter_output><empty_output><jupyter_text>### Creating dummy variables for the categorical columns after bucketing them and also removing one of the dummyfied variablesDummyfying the column Lead Origin and removing the column LO_Others<jupyter_code># Creating dummy variables for the variable 'Lead Origin'
lo1 = pd.get_dummies(df1['Lead Origin'], prefix='LO')
# Dropping LO_Others column
lo = lo1.drop(['LO_Others'], 1)
#Adding the results to the master dataframe
df1 = pd.concat([df1,lo], axis=1)
df1.head()<jupyter_output><empty_output><jupyter_text>Dummyfying the column Lead Source and removing the column LS_Others<jupyter_code># Creating dummy variables for the variable 'Lead Source'
ls1 = pd.get_dummies(df1['Lead Source'], prefix='LS')
# Dropping LS_Others column
ls = ls1.drop(['LS_Others'], 1)
#Adding the results to the master dataframe
df1 = pd.concat([df1,ls], axis=1)
df1.head()<jupyter_output><empty_output><jupyter_text>Dummyfying the column Last Activity and removing the column LA_Others<jupyter_code># Creating dummy variables for the variable 'Last Activity'
la1 = pd.get_dummies(df1['Last Activity'], prefix='LA')
# Dropping LA_Others column
la = la1.drop(['LA_Others'], 1)
#Adding the results to the master dataframe
df1 = pd.concat([df1,la], axis=1)
df1.head()<jupyter_output><empty_output><jupyter_text>Dummyfying the column Specialization and removing the column SPL_Others<jupyter_code># Creating dummy variables for the variable 'Specialization'
spl1 = pd.get_dummies(df1['Specialization'], prefix='SPL')
# Dropping SPL_Others column
spl = spl1.drop(['SPL_Others'], 1)
#Adding the results to the master dataframe
df1 = pd.concat([df1,spl], axis=1)
df1.head()<jupyter_output><empty_output><jupyter_text>Dummyfying the column City and removing the column CITY_Others<jupyter_code># Creating dummy variables for the variable 'City'
CT1 = pd.get_dummies(df1['City'], prefix='CITY')
# Dropping CITY_Others column
City = CT1.drop(['CITY_Others'], 1)
#Adding the results to the master dataframe
df1 = pd.concat([df1,City], axis=1)
df1.head()<jupyter_output><empty_output><jupyter_text>Dummyfying the column Last Notable Activity and removing the column LNA_Others<jupyter_code># Creating dummy variables for the variable 'Last Notable Activity'
lna1 = pd.get_dummies(df1['Last Notable Activity'], prefix='LNA')
# Dropping LNA_Others column
lna = lna1.drop(['LNA_Others'], 1)
#Adding the results to the master dataframe
df1 = pd.concat([df1,lna], axis=1)
df1.head()<jupyter_output><empty_output><jupyter_text>### Dropping the original columns for which dummy variables have been created<jupyter_code># Dropping the original columns for which dummies have been created
drop_columns = ['Lead Origin','Lead Source','Last Activity','Specialization','City','Last Notable Activity']
lead = df1.drop(drop_columns, axis=1)<jupyter_output><empty_output><jupyter_text>### Inspecting the Dataframe after dropping the categorical variables<jupyter_code>lead.head()
lead.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
Int64Index: 9191 entries, 0 to 9239
Data columns (total 24 columns):
Lead Number 9191 non-null int64
Converted 9191 non-null int64
TotalVisits 9191 non-null float64
Total Time Spent on Website 9191 non-null int64
Page Views Per Visit 9191 non-null float64
A free copy of Mastering The Interview 9191 non-null int64
LO_API 9191 non-null uint8
LO_Landing Page Submission 9191 non-null uint8
LS_Direct Traffic 9191 non-null uint8
LS_Google 9191 non-null uint8
LS_Olark Chat 9191 non-null uint8
LS_Organic Search 9191 non-null uint8
LA_Email Opened 9191 non-null uint8
LA_Olark Chat Conversation 9191 non-null uint8
LA_SMS Sent [...]<jupyter_text>## 4. Creating the Model### Creating X and y variables<jupyter_code># Putting feature variable to X
X = lead.drop(['Lead Number','Converted'], axis=1)
X.head()
# Putting response variable to y
y = lead['Converted']
y.head()<jupyter_output><empty_output><jupyter_text>### Train-Test split<jupyter_code># Splitting the data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, test_size=0.3, random_state=100)<jupyter_output><empty_output><jupyter_text>#### Checking the shape of Train and Test dataframe to check the splitting worked correctly<jupyter_code># Checking if the splitting of data worked correctly
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)<jupyter_output>(6433, 22)
(2758, 22)
(6433,)
(2758,)
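<jupyter_text>As an optional sanity check (an addition, not part of the original flow), we can confirm that the share of converted leads is similar in the train and test splits, since the split above was not stratified.<jupyter_code># Proportion of converted leads in each split (added check)
print(round(y_train.mean()*100, 2))
print(round(y_test.mean()*100, 2))<jupyter_output><empty_output>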
<jupyter_text>### Variable Scaling#### Creating a list of numerical variables so that the same can be scaled<jupyter_code>#creating a list of numerical variables
vars = ['TotalVisits','Total Time Spent on Website','Page Views Per Visit']<jupyter_output><empty_output><jupyter_text>#### Using standardization to scale the numerical variables<jupyter_code>#Creating the scaler object
scaler = StandardScaler()
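# (Added note) the scaler is fit on the training data only; the test set is later
# transformed with this same fitted scaler to avoid data leakage.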
#Applying the scaler object on the numeric variables of the train dataframe
X_train[vars] = scaler.fit_transform(X_train[vars])<jupyter_output><empty_output><jupyter_text>#### Checking the head of the scalled Dataframe<jupyter_code>X_train.head()<jupyter_output><empty_output><jupyter_text>### Calculating the conversion rate<jupyter_code>### Checking the Conversion Rate
Lead_Converted = (sum(lead['Converted'])/len(lead.index))*100
Lead_Converted<jupyter_output><empty_output><jupyter_text>The conversion rate is 38.34%## Correlation: creating heatmap to understand the correlation between variables<jupyter_code>#Checking the correlation between the variables
plt.figure(figsize=(20,16))
sns.heatmap(lead.corr(), annot=True, cmap='YlGnBu')
plt.show()<jupyter_output><empty_output><jupyter_text>### Dropping the highly correlated variables from both X_train and X_test datasets<jupyter_code>X_train = X_train.drop(['TotalVisits', 'LA_SMS Sent', 'LA_Email Opened','LO_Landing Page Submission'], 1)
#X_test = X_test.drop(['TotalVisits', 'LA_SMS Sent', 'LA_Email Opened','LO_Landing Page Submission'], 1)
X_test = X_test.drop(['LA_SMS Sent', 'LA_Email Opened','LO_Landing Page Submission'], 1)<jupyter_output><empty_output><jupyter_text>#### After dropping highly correlated variables. checking the heatmap again to find any other variables that may have high correlation with others<jupyter_code>plt.figure(figsize = (20,10))
sns.heatmap(X_train.corr(),annot = True)
plt.show()<jupyter_output><empty_output><jupyter_text>#### Instecting the X_train dataframe<jupyter_code>X_train.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
Int64Index: 6433 entries, 8331 to 5676
Data columns (total 18 columns):
Total Time Spent on Website 6433 non-null float64
Page Views Per Visit 6433 non-null float64
A free copy of Mastering The Interview 6433 non-null int64
LO_API 6433 non-null uint8
LS_Direct Traffic 6433 non-null uint8
LS_Google 6433 non-null uint8
LS_Olark Chat 6433 non-null uint8
LS_Organic Search 6433 non-null uint8
LA_Olark Chat Conversation 6433 non-null uint8
SPL_Finance Management 6433 non-null uint8
SPL_Human Resource Management 6433 non-null uint8
SPL_Marketing Management 6433 non-null uint8
SPL_Operations Management 6433 non-null uint8
CITY_Mumbai 6433 non-null uint8
CITY_Thane & Outsk[...]<jupyter_text>### Logistic Regression Model Building<jupyter_code># Logistic regression model
lead_lr1 = sm.GLM(y_train,(sm.add_constant(X_train)), family = sm.families.Binomial())
print(lead_lr1.fit().summary())<jupyter_output> Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: Converted No. Observations: 6433
Model: GLM Df Residuals: 6414
Model Family: Binomial Df Model: 18
Link Function: logit Scale: 1.0000
Method: IRLS Log-Likelihood: -3040.8
Date: Mon, 01 Jun 2020 Deviance: 6081.6
Time: 21:12:57 Pearson chi2: 6.66e+03
No. Iterations: 5
Covariance Type: nonrobust
==========================================================================================================
[...]<jupyter_text>We created the model with all the variables initially and in the statistical representation of the model, we found that few of the variables are statistically very insignificant. Hence, we choose to select the most important features by using Automatic Feature Selection technique(RFE).## Feature Selection using RFE#### Importing the required library and creating the Logistic Regression object<jupyter_code># Importing required library for using Logistic Regression
from sklearn.linear_model import LogisticRegression
# Creating LogisticRegression object
logreg = LogisticRegression()<jupyter_output><empty_output><jupyter_text>#### We have decided to choose 13 most significant variables through the automatic feature selection technique.<jupyter_code># Importing required libraries for using RFE for feature selection
from sklearn.feature_selection import RFE
rfe = RFE(logreg, 13) # running RFE with 13 variables as output
rfe = rfe.fit(X_train, y_train)
rfe.support_<jupyter_output><empty_output><jupyter_text>#### We are looking into the different variables and their ranks as per the statistical sigficance.<jupyter_code>list(zip(X_train.columns, rfe.support_, rfe.ranking_))<jupyter_output><empty_output><jupyter_text>#### Creating a list of 13 most significant columns<jupyter_code>col = X_train.columns[rfe.support_]<jupyter_output><empty_output><jupyter_text>#### Creating the model again with the 13 most significant variables<jupyter_code>X_train_sm = sm.add_constant(X_train[col])
lead_lr2 = sm.GLM(y_train,X_train_sm, family = sm.families.Binomial())
res = lead_lr2.fit()
res.summary()<jupyter_output><empty_output><jupyter_text>After creating the model with 13 most significantvariables, we can still see some statistically insignificant variables in them. However, we still want to evaluate the model we have just built.Using the model to predict the Conversion probability for the first 10 training data<jupyter_code># Getting the predicted values on the train set
y_train_pred = res.predict(X_train_sm)
y_train_pred[:10]<jupyter_output><empty_output><jupyter_text>Converting the Conversions into a NumPy array<jupyter_code>y_train_pred = y_train_pred.values.reshape(-1)
y_train_pred[:10]<jupyter_output><empty_output><jupyter_text>### Creating a dataframe with the actual Converted flag and the predicted probabilities of ConversionCreating a new dataframe containing the actual converted values with the conversion probabilities for the first 10 rows from the training set<jupyter_code>y_train_pred_final = pd.DataFrame({'Converted':y_train.values, 'Conversion_Prob':y_train_pred, 'Lead_Score':y_train_pred*100})
#y_train_pred_final = pd.DataFrame({'Converted':y_train.values, 'Lead_Score':y_train_pred*100})
y_train_pred_final['Lead Number'] = y_train.index
y_train_pred_final.head()<jupyter_output><empty_output><jupyter_text>#### Creating new column 'predicted' with 1 if Conversion Probability > 0.5 else 0<jupyter_code>y_train_pred_final['predicted'] = y_train_pred_final.Lead_Score.map(lambda x: 1 if x > 50.0 else 0)
# Let's see the head
y_train_pred_final.head()<jupyter_output><empty_output><jupyter_text>### Creating the Confusion Matrix#### Importing the required library<jupyter_code># Importing Library to create the confusion matrix
from sklearn import metrics<jupyter_output><empty_output><jupyter_text>Creating the actual confusion matrix<jupyter_code># Confusion matrix
confusion = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final.predicted )
print(confusion)<jupyter_output>[[3399 583]
[ 824 1627]]
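<jupyter_text>For readability (an added illustration, not part of the original analysis), the same matrix can be shown with labelled rows and columns: rows are the actual classes and columns the predicted classes.<jupyter_code># Labelled view of the confusion matrix (rows = actual, columns = predicted)
pd.DataFrame(confusion, index=['Actual: Not Converted', 'Actual: Converted'],
             columns=['Predicted: Not Converted', 'Predicted: Converted'])<jupyter_output><empty_output>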
<jupyter_text>Since, the number of Conversions that are the top most priority, we will calculate the Sensitivity of the model<jupyter_code>TP = confusion[1,1] # true positive
TN = confusion[0,0] # true negatives
FP = confusion[0,1] # false positives
FN = confusion[1,0] # false negatives
# Checking the Sensitivity
TP / float(TP+FN)<jupyter_output><empty_output><jupyter_text>We get the Sensitivity of the model is as low as 66%Also we calculate the accuracy of the model<jupyter_code># Let's check the overall accuracy.
print(metrics.accuracy_score(y_train_pred_final.Converted, y_train_pred_final.predicted))<jupyter_output>0.7812840043525572
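<jupyter_text>To make the link to the confusion matrix explicit (an added check), the same accuracy can be recomputed by hand as (TP + TN) / total, i.e. (1627 + 3399) / 6433.<jupyter_code># Accuracy derived directly from the confusion matrix entries
(TP + TN) / float(TP + TN + FP + FN)<jupyter_output><empty_output>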
<jupyter_text>The accuracy of the model we get as 78% which is not very impressive.We use VIF to find the correlation between the variables<jupyter_code># Create a dataframe that will contain the names of all the feature variables and their respective VIFs
vif = pd.DataFrame()
vif['Features'] = X_train[col].columns
vif['VIF'] = [variance_inflation_factor(X_train[col].values, i) for i in range(X_train[col].shape[1])]
vif['VIF'] = round(vif['VIF'], 2)
vif = vif.sort_values(by = "VIF", ascending = False)
vif<jupyter_output><empty_output><jupyter_text>We can see that the variables are not correlated with each other very significantly. Hence, based on the statistical significance, we decide to remove the column SPL_Operations Management from the dataframe since this is the most insignificant column.<jupyter_code># Based on the p-value or the statistical significance, we remove the column 'SPL_Operations Management'
col = col.drop('SPL_Operations Management', 1)
col<jupyter_output><empty_output><jupyter_text>Creating the model again with the reduced number of variables.<jupyter_code># Let's re-run the model using the selected variables
X_train_sm = sm.add_constant(X_train[col])
logm3 = sm.GLM(y_train,X_train_sm, family = sm.families.Binomial())
res = logm3.fit()
res.summary()<jupyter_output><empty_output><jupyter_text>Converting the Conversions into a NumPy array<jupyter_code>y_train_pred = res.predict(X_train_sm).values.reshape(-1)
y_train_pred[:10]<jupyter_output><empty_output><jupyter_text>Adding a new column as Conversion_Prob so that the same converted as the predicted value of conversion based on a certain cut off<jupyter_code>y_train_pred_final['Conversion_Prob'] = y_train_pred
y_train_pred_final['Lead_Score'] = y_train_pred*100<jupyter_output><empty_output><jupyter_text>Setting a cutoff of 0.5 and converting the probability into the predicted conversion<jupyter_code># Creating new column 'predicted' with 1 if Conversion_Prob > 0.5 else 0
y_train_pred_final['predicted'] = y_train_pred_final.Lead_Score.map(lambda x: 1 if x > 50.0 else 0)
y_train_pred_final.head()<jupyter_output><empty_output><jupyter_text>Checking the Accuracy of the model again to identify any change in it<jupyter_code># Let's check the overall accuracy.
print(metrics.accuracy_score(y_train_pred_final.Converted, y_train_pred_final.predicted))<jupyter_output>0.7817503497590549
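<jupyter_text>As an optional comparison (an addition to the original flow), the fitted statsmodels GLM also exposes an information criterion, which gives another way to compare this reduced model with the previous one; a lower AIC is better.<jupyter_code># AIC of the refitted model (added check)
res.aic<jupyter_output><empty_output>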
<jupyter_text>By looking into the accuracy, we see that there is no significant change in the accuracy of the model. This means that the column we removed from the model was not really affecting the target variable.Creating the confusion matrix again<jupyter_code># Confusion matrix
confusion = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final.predicted )
print(confusion)<jupyter_output>[[3404 578]
[ 826 1625]]
<jupyter_text>Checking the Sensitivity of the model again<jupyter_code>TP = confusion[1,1] # true positive
TN = confusion[0,0] # true negatives
FP = confusion[0,1] # false positives
FN = confusion[1,0] # false negatives
# Checking the Sensitivity
TP / float(TP+FN)<jupyter_output><empty_output><jupyter_text>We found that there is no such change in the sensitivity as well. Thus, the new model is not able to predict the conversion probabilities better than the previous model.Checking the VIF again to find the correlation of the model.<jupyter_code># Create a dataframe that will contain the names of all the feature variables and their respective VIFs
vif = pd.DataFrame()
vif['Features'] = X_train[col].columns
vif['VIF'] = [variance_inflation_factor(X_train[col].values, i) for i in range(X_train[col].shape[1])]
vif['VIF'] = round(vif['VIF'], 2)
vif = vif.sort_values(by = "VIF", ascending = False)
vif<jupyter_output><empty_output><jupyter_text>We can see that the variables are not correlated with each other very significantly. Hence, based on the statistical significance, we decide to remove the column LNA_Email Opened from the dataframe since this is the most insignificant column.<jupyter_code># Based on the p-value or the statistical significance, we remove the column 'LNA_Email Openednt'
col = col.drop('LNA_Email Opened', 1)
col<jupyter_output><empty_output><jupyter_text>Again we are creating the model with the reduced number of columns<jupyter_code># Let's re-run the model using the selected variables
X_train_sm = sm.add_constant(X_train[col])
logm3 = sm.GLM(y_train,X_train_sm, family = sm.families.Binomial())
res = logm3.fit()
res.summary()<jupyter_output><empty_output><jupyter_text>Now we see from the statistical representation that all the variables are now statistically significant. Also to be more sure, we check the correlation by using VIF.<jupyter_code># Create a dataframe that will contain the names of all the feature variables and their respective VIFs
vif = pd.DataFrame()
vif['Features'] = X_train[col].columns
vif['VIF'] = [variance_inflation_factor(X_train[col].values, i) for i in range(X_train[col].shape[1])]
vif['VIF'] = round(vif['VIF'], 2)
vif = vif.sort_values(by = "VIF", ascending = False)
vif<jupyter_output><empty_output><jupyter_text>From the VIF, we can see that the VIF is values for every column is now below 3 and this means that the model is a stable one and variables are independent to each other. Again, converting the conversion probabilities into NumPy array.<jupyter_code>y_train_pred = res.predict(X_train_sm).values.reshape(-1)
y_train_pred[:10]<jupyter_output><empty_output><jupyter_text>Adding a new column as Conversion_Prob so that the same converted as the predicted value of conversion based on a certain cut off<jupyter_code>y_train_pred_final['Conversion_Prob'] = y_train_pred
y_train_pred_final['Lead_Score'] = y_train_pred*100<jupyter_output><empty_output><jupyter_text>Setting a cutoff of 0.5 and converting the probability into the predicted conversion<jupyter_code># Creating new column 'predicted' with 1 if Churn_Prob > 0.5 else 0
y_train_pred_final['predicted'] = y_train_pred_final.Lead_Score.map(lambda x: 1 if x > 50.0 else 0)
y_train_pred_final.head()<jupyter_output><empty_output><jupyter_text>Checking the overall accuracy of the model.<jupyter_code># Let's check the overall accuracy.
print(metrics.accuracy_score(y_train_pred_final.Converted, y_train_pred_final.predicted))<jupyter_output>0.7797295196642313
<jupyter_text>Creating the confusion matrix again for the newly built model.<jupyter_code># Confusion matrix
confusion = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final.predicted )
print(confusion)<jupyter_output>[[3396 586]
[ 831 1620]]
<jupyter_text>Checking the Sensitivity again<jupyter_code>TP = confusion[1,1] # true positive
TN = confusion[0,0] # true negatives
FP = confusion[0,1] # false positives
FN = confusion[1,0] # false negatives
# Checking the Sensitivity
TP / float(TP+FN)<jupyter_output><empty_output><jupyter_text>By looking into the sensitivity, we see that there is no such improvement in the sensitivity. Hence, we decided to vary the probability cutoff to see if the sensitivity improves.
Applying ROC curve technique to find out the optimum cutoff for the conversion probability.# Plotting the ROC Curve<jupyter_code>def draw_roc( actual, probs ):
fpr, tpr, thresholds = metrics.roc_curve( actual, probs,
drop_intermediate = False )
auc_score = metrics.roc_auc_score( actual, probs )
plt.figure(figsize=(5, 5))
plt.plot( fpr, tpr, label='ROC curve (area = %0.2f)' % auc_score )
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate or [1 - True Negative Rate]')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
return None
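# (Added note) metrics.roc_curve returns the false positive rates, true positive rates
# and the probability thresholds at which they were evaluated; metrics.roc_auc_score
# gives the area under that curve, which draw_roc prints in the legend.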
fpr, tpr, thresholds = metrics.roc_curve( y_train_pred_final.Converted, y_train_pred_final.Conversion_Prob, drop_intermediate = False )
draw_roc(y_train_pred_final.Converted, y_train_pred_final.Conversion_Prob)<jupyter_output><empty_output><jupyter_text>After plotting the ROC curve, we see that the area under the curve is 0.85 which is very high and this is indicates that the model that we have built is a good model.# Finding Optimal Cutoff PointOptimal cutoff probability is that prob where we get balanced sensitivity and specificity<jupyter_code># Let's create columns with different probability cutoffs
numbers = [float(x)/10 for x in range(10)]
for i in numbers:
y_train_pred_final[i]= y_train_pred_final.Conversion_Prob.map(lambda x: 1 if x > i else 0)
y_train_pred_final.head()<jupyter_output><empty_output><jupyter_text>For different probability, we calculate the Accuracy, Sensitivity and Specificity for the model.<jupyter_code># Now let's calculate accuracy sensitivity and specificity for various probability cutoffs.
cutoff_df = pd.DataFrame( columns = ['prob','accuracy','sensi','speci'])
from sklearn.metrics import confusion_matrix
# TP = confusion[1,1] # true positive
# TN = confusion[0,0] # true negatives
# FP = confusion[0,1] # false positives
# FN = confusion[1,0] # false negatives
num = [0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
for i in num:
cm1 = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final[i] )
total1=sum(sum(cm1))
accuracy = (cm1[0,0]+cm1[1,1])/total1
speci = cm1[0,0]/(cm1[0,0]+cm1[0,1])
sensi = cm1[1,1]/(cm1[1,0]+cm1[1,1])
cutoff_df.loc[i] =[ i ,accuracy,sensi,speci]
print(cutoff_df)<jupyter_output> prob accuracy sensi speci
0.0 0.0 0.381004 1.000000 0.000000
0.1 0.1 0.544381 0.972256 0.281015
0.2 0.2 0.702627 0.903305 0.579106
0.3 0.3 0.770247 0.830273 0.733300
0.4 0.4 0.779108 0.753162 0.795078
0.5 0.5 0.779730 0.660955 0.852838
0.6 0.6 0.762008 0.524684 0.908086
0.7 0.7 0.733406 0.392901 0.942993
0.8 0.8 0.705581 0.275806 0.970116
0.9 0.9 0.658635 0.121175 0.989453
<jupyter_text>We plot three curves for three different characteristics against the probability to find the optimum cutoff. We know that the intersection point is the optimum cutoff point.<jupyter_code># Let's plot accuracy sensitivity and specificity for various probabilities.
cutoff_df.plot.line(x='prob', y=['accuracy','sensi','speci'])
plt.show()<jupyter_output><empty_output><jupyter_text>We see that the point of intersection is somewhere between 0.3 and 0.4. Hence, we consider the cutoff as 0.3 and try to predict the final probability of the train set. Predicting the final probability of Lead Conversion with probability cutoff as 0.3<jupyter_code>y_train_pred_final['final_predicted'] = y_train_pred_final.Lead_Score.map( lambda x: 1 if x > 30.0 else 0)
y_train_pred_final.head()<jupyter_output><empty_output><jupyter_text>Checking the overall Accuracy of the model<jupyter_code># Let's check the overall accuracy.
metrics.accuracy_score(y_train_pred_final.Converted, y_train_pred_final.final_predicted)<jupyter_output><empty_output><jupyter_text>The overall Accuracy is now becomes 77%.Creating the confusion matrix.<jupyter_code>confusion = metrics.confusion_matrix(y_train_pred_final.Converted, y_train_pred_final.final_predicted )
confusion<jupyter_output><empty_output><jupyter_text>Calculating the Sensitivity of the model built.<jupyter_code>TP = confusion[1,1] # true positive
TN = confusion[0,0] # true negatives
FP = confusion[0,1] # false positives
FN = confusion[1,0] # false negatives
# Checking the Sensityvity
TP / float(TP+FN)<jupyter_output><empty_output><jupyter_text>The final Sensitivity of the model becomes 83% with a probability of 0.3<jupyter_code># Let us calculate specificity
TN / float(TN+FP)<jupyter_output><empty_output><jupyter_text>The Specificity of the model is 73%## Precision and Recall<jupyter_code>confusion<jupyter_output><empty_output><jupyter_text>Calculating Precision of the model from the confusion matrix<jupyter_code>#precision
prec = confusion[1,1]/(confusion[0,1]+confusion[1,1])
prec<jupyter_output><empty_output><jupyter_text>Calculating the Recall of the model<jupyter_code>#Recall
reca = confusion[1,1]/(confusion[1,0]+confusion[1,1])
reca<jupyter_output><empty_output><jupyter_text>#### F1 Score<jupyter_code>F1 = 2*((prec * reca)/(prec + reca))
F1<jupyter_output><empty_output><jupyter_text>We get the F1 Score for the model as 73% which is another form of representing model wellness.## Precision and recall tradeoffThere is a trade off between Precision and Recall for the model we biult. There is an optimum point at which both Precision and Recall are same. Other than this point, these two metrices behave like invesely proportional to each other.<jupyter_code>y_train_pred_final.Converted, y_train_pred_final.predicted<jupyter_output><empty_output><jupyter_text>Plotting Precision vs Recall for different probablity threshold<jupyter_code>p, r, thresholds = precision_recall_curve(y_train_pred_final.Converted, y_train_pred_final.Conversion_Prob)
plt.plot(thresholds, p[:-1], "g-")
plt.plot(thresholds, r[:-1], "r-")
plt.show()<jupyter_output><empty_output><jupyter_text>We see that the optimum point lies after 0.3 and this means considering 0.3 as threshold is okay.# Making predictions on the test set<jupyter_code>X_test.head()<jupyter_output><empty_output><jupyter_text>We need to apply scaling on the numeric variables of the test set.<jupyter_code>X_test[vars]=scaler.transform(X_test[vars])
X_test.head()<jupyter_output><empty_output><jupyter_text>Selecting onky the columns that were used to create the final model and applied on the train set<jupyter_code>X_test = X_test[col]
X_test.head()<jupyter_output><empty_output><jupyter_text>Adding a constant to the test set<jupyter_code>X_test_sm = sm.add_constant(X_test)<jupyter_output><empty_output><jupyter_text>Predicting the final conversion probability from the test set<jupyter_code>y_test_pred = res.predict(X_test_sm)<jupyter_output><empty_output><jupyter_text>First 10 values from the test set are predicted the probablity of lead conversion.<jupyter_code>y_test_pred[:10]<jupyter_output><empty_output><jupyter_text>Converting y_pred to a dataframe which is an array<jupyter_code># Converting y_pred to a dataframe which is an array
y_pred_1 = pd.DataFrame(y_test_pred)
# Let's see the head
y_pred_1.head()
# Converting y_test to dataframe
y_test_df = pd.DataFrame(y_test)<jupyter_output><empty_output><jupyter_text>Creating field Lead Number along with actual converted field values<jupyter_code># Putting Lead Number to index
y_test_df['Lead Number'] = y_test_df.index
y_test_df.head()<jupyter_output><empty_output><jupyter_text>Removing index for both dataframes to append them side by side<jupyter_code># Removing index for both dataframes to append them side by side
y_pred_1.reset_index(drop=True, inplace=True)
y_test_df.reset_index(drop=True, inplace=True)<jupyter_output><empty_output><jupyter_text>Appending y_test_df and y_pred_1<jupyter_code># Appending y_test_df and y_pred_1
y_pred_final = pd.concat([y_test_df, y_pred_1],axis=1)
y_pred_final.head()<jupyter_output><empty_output><jupyter_text>Renaming the field 0 to Conversion_Prob<jupyter_code># Renaming the column
y_pred_final= y_pred_final.rename(columns={ 0 : 'Conversion_Prob'})<jupyter_output><empty_output><jupyter_text>Renaming the CustID column to Lead Number<jupyter_code># Renaming the column
y_pred_final= y_pred_final.rename(columns={ 'CustID' : 'Lead Number'})
y_pred_final['Lead_Score'] = y_pred_final['Conversion_Prob']*100
y_pred_final.head()<jupyter_output><empty_output><jupyter_text>Rearranging the final predicted dataframe<jupyter_code>y_pred_final = y_pred_final[['Lead Number', 'Converted', 'Conversion_Prob', 'Lead_Score']]
y_pred_final.head()<jupyter_output><empty_output><jupyter_text>Setting the same cutoff 0.3 using which the model on train was created. <jupyter_code>y_pred_final['final_predicted'] = y_pred_final.Lead_Score.map(lambda x: 1 if x > 30.0 else 0)
y_pred_final.head()<jupyter_output><empty_output><jupyter_text>Calculating the overall accuracy of the model<jupyter_code># Let's check the overall accuracy.
metrics.accuracy_score(y_pred_final.Converted, y_pred_final.final_predicted)<jupyter_output><empty_output><jupyter_text>The final overall accuracy of the test dats is 78% which is almost same with the train model accuracy.Creating the confusion matrix with the test data<jupyter_code>confusion = metrics.confusion_matrix(y_pred_final.Converted, y_pred_final.final_predicted )
confusion<jupyter_output><empty_output><jupyter_text>Calculating the Sensitivity of the final model<jupyter_code>TP = confusion[1,1] # true positive
TN = confusion[0,0] # true negatives
FP = confusion[0,1] # false positives
FN = confusion[1,0] # false negatives
# Let's see the sensitivity of our logistic regression model
TP / float(TP+FN)<jupyter_output><empty_output>
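<jupyter_text>As an optional wrap-up (an addition to the original notebook), the same test-set confusion matrix also gives specificity and precision, mirroring the metrics reported on the training data.<jupyter_code># Specificity and precision on the test set (added check)
print('Specificity:', TN / float(TN + FP))
print('Precision  :', TP / float(TP + FP))<jupyter_output><empty_output>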
| no_license | /Lead Score Case Study.ipynb | souravkrmkr/Lead_Score_Case_Study | 126 |
<jupyter_start><jupyter_text># X, Y, What?
In Chapter 2 we learned how to create basic charts by specifying a mark (such as bars or points) and encoding columns of values from your __DataFrame__ to chart elements such as **X and Y**.
`Altair` takes the liberty of making decisions for you when it comes to the details. For example, if you assign the `Periodo` column to **X** in your chart, `altair` assumes the axis has to run from the minimum value found in that column up to the maximum value of `Periodo`. It also assumes that the axis title is the column name: `Periodo`. And if you specify the data type of `Periodo` (quantitative, ordinal, nominal or temporal), it also decides how to encode it on the axis and how to label the ticks. If it were a series of dates, `altair` would display it differently than a series of numbers or a series of labels.
Let's take the line chart from Chapter 2 as an example.First we assign the data to a __DataFrame__:<jupyter_code>import pandas as pd
import altair as alt
datos = pd.read_csv("../../datos/poblacion.csv")
datos.head()<jupyter_output><empty_output><jupyter_text>### Example from Chapter 2:<jupyter_code>alt.Chart(datos).mark_line().encode(
x = 'Periodo',
y = 'Número de personas',
)<jupyter_output><empty_output><jupyter_text>As you can see, `altair` is reading the `Periodo` column as a quantitative value even though we know it represents the years 1910 to 2015. The easy thing would be to specify the data type of each encoding, as we have done before:<jupyter_code>alt.Chart(datos).mark_line().encode(
x = 'Periodo:T',
y = 'Número de personas:Q',
)<jupyter_output><empty_output><jupyter_text>Uh-oh. We told `altair` that `Periodo` is a temporal quantity, but `altair` does not know which unit of time we mean: seconds? days? nanoseconds?
For this it is better to transform our data in `pandas` rather than in `altair`. `Pandas` is a very powerful and efficient `python` library; if you need to manipulate your data, it is recommended to do it in `pandas` and not in `altair`.<jupyter_code>datos['Periodo'] = pd.to_datetime(datos['Periodo'], format = '%Y')<jupyter_output><empty_output><jupyter_text>{{site.data.alerts.tip}} `pd.to_datetime` is a `pandas` method that takes a series of values and transforms it into a series of dates (that is, a series of `datetime` values). `pandas` does its best to infer the format your data is in, but it is better if you know it. In this case it is '%Y' because our values are 4 digits, each representing a year. {{site.data.alerts.end}}<jupyter_code>alt.Chart(datos).mark_line().encode(
x = 'Periodo:T',
y = 'Número de personas:Q',
)<jupyter_output><empty_output><jupyter_text>Now the labels on each tick of the **X** axis do look like years. What if we want to change the title of each axis?
This kind of specification is more complex and requires using the `altair` objects, in this case `alt.X()` and `alt.Y()`.<jupyter_code>alt.Chart(datos).mark_line().encode(
x = alt.X('Periodo:T', title = "Año",),
y = alt.Y('Número de personas:Q', title = 'Población')
)<jupyter_output><empty_output><jupyter_text>Using the `altair` objects is how we can customize our charts to our liking. Let's expand this chart a little more.<jupyter_code>alt.Chart(datos).mark_line().encode(
x = alt.X("Periodo:T", title = "Año"),
y = alt.Y("Número de personas:Q", title = "Población", axis = alt.Axis(format = 's')),
)<jupyter_output><empty_output><jupyter_text>Here we are using the `alt.Axis()` object to customize our **Y** axis, passed as an argument inside the `alt.Y()` object. This is no problem for `altair`. Since what `altair` does is translate your `python` code into `Vega-lite`, which is a `JavaScript` library, it follows the same conventions in several respects. In this case, the `'s'` format tells `altair` that we are talking in thousands. Let's finish customizing this chart with a more appropriate title.<jupyter_code>alt.Chart(datos).mark_line().encode(
x = alt.X("Periodo:T", title = "Año"),
y = alt.Y("Número de personas:Q", title = "Población", axis = alt.Axis(format = 's')),
).properties(
title = "La Población de México 1910-2015"
)<jupyter_output><empty_output><jupyter_text>***
Some of you may be thinking that the axis titles are a bit redundant now that we have added a title to the chart. Removing them is as simple as this:<jupyter_code>alt.Chart(datos).mark_line().encode(
x = alt.X("Periodo:T", title = ""),
y = alt.Y("Número de personas:Q", title = "", axis = alt.Axis(format = 's')),
).properties(
title = "La Población de México 1910-2015"
)<jupyter_output><empty_output>
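<jupyter_text>As an optional extra (an addition to the original guide), appending `.interactive()` to the chart enables mouse-driven panning and zooming of the axes.<jupyter_code>alt.Chart(datos).mark_line().encode(
    x = alt.X("Periodo:T", title = ""),
    y = alt.Y("Número de personas:Q", title = "", axis = alt.Axis(format = 's')),
).properties(
    title = "La Población de México 1910-2015"
).interactive()<jupyter_output><empty_output>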
| permissive | /content/section-03/1/equis-y-que.ipynb | cimarron-io/guia-altair | 9 |
<jupyter_start><jupyter_text># Task 1:
- Extract useful information: minimum, mean and maximum price, and minimum/maximum/mean duration per trip<jupyter_code># look for data
print('rows: {}, columns: {}'.format(ticket_data.shape[0], ticket_data.shape[1]))
ticket_data.head(3)
# describe some statistic in numerical features
ticket_data.describe()
# info about types
ticket_data.info()
# convert dates to datetime format
ticket_data['arrival_ts'] = pd.to_datetime(ticket_data['arrival_ts'])
ticket_data['departure_ts'] = pd.to_datetime(ticket_data['departure_ts'])
ticket_data.dtypes
# create new column 'duration': the difference between arrival_ts and departure_ts in seconds
ticket_data = ticket_data.assign(duration = (ticket_data['arrival_ts'] - ticket_data['departure_ts']).dt.total_seconds())
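# (Added check) durations should all be positive; negative values would point to bad timestamps:
# print((ticket_data['duration'] <= 0).sum())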
# Extract useful information: min, mean and max price and min/max/mean duration per trip
# !!! duration in seconds
task1 = ticket_data.groupby(['o_city', 'd_city'])['price_in_cents', 'duration'].agg(
{"price_in_cents":[min, max, 'mean'],"duration":[min, max, 'mean']})
task1 = task1.rename(columns = {'duration': 'duration_in_seconds'})
task1.head(20)<jupyter_output><empty_output><jupyter_text># TASK 2
- Difference in mean price and duration for train, bus and carpooling, by trip distance (0-200km, 201-800km, 800-2000km, 2000+km)<jupyter_code># drop unnecessary columns and rename some columns
ticket_data.drop(['departure_ts', 'arrival_ts', 'search_ts', 'middle_stations', 'other_companies', 'o_station', 'd_station'],
axis=1, inplace=True)
ticket_data.rename(columns={'duration': 'duration_in_sec'}, inplace=True)
ticket_data.head()
# drop unnecessary columns and rename some columns
cities.drop(['population'], axis=1, inplace=True)
cities.head(1)
print('ticket: ', ticket_data.shape)
print('cities: ', cities.shape)
# join ticket_data and cities by origin city
task2 = ticket_data.merge(cities, left_on = 'o_city', right_on = 'id', suffixes=('_left', '_right'))
# rename some columns
task2.rename(columns = {'unique_name': 'o_city_name', 'latitude': 'o_lat', 'longitude': 'o_lon'}, inplace=True)
task2 = task2.drop(['id_right', 'local_name'], axis=1)
task2.head()
# join ticket_data and cities by destination city
task2 = task2.merge(cities, left_on = 'd_city', right_on = 'id')
task2.rename(columns = {'unique_name': 'd_city_name', 'latitude': 'd_lat', 'longitude': 'd_lon'}, inplace=True)
task2 = task2.drop(['id', 'local_name'], axis=1)
task2.head()
# drop some columns from providers data
providers = providers.loc[:,['id', 'name', 'transport_type']]
providers.head()
# join providers with our data by company id
task2 = task2.merge(providers, left_on='company', right_on='id')
task2 = task2.rename(columns = {'name': 'providers'})
task2.drop(['company', 'id'], axis=1, inplace=True)
task2.head()
# create function for calculating distance. haversine_distance
import numpy as np
def haversine_distance(lat1, lon1, lat2, lon2):
r = 6371 #earth radius
phi1 = np.radians(lat1)
phi2 = np.radians(lat2)
delta_phi = np.radians(lat2 - lat1)
delta_lambda = np.radians(lon2 - lon1)
a = np.sin(delta_phi / 2)**2 + np.cos(phi1) * np.cos(phi2) * np.sin(delta_lambda / 2)**2
res = r * (2 * np.arctan2(np.sqrt(a), np.sqrt(1 - a)))
return np.round(res, 2)
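# (Added sanity check, illustrative coordinates) the great-circle distance between
# Paris (48.8566, 2.3522) and Lyon (45.7640, 4.8357) should come out at roughly 390-400 km:
# print(haversine_distance(48.8566, 2.3522, 45.7640, 4.8357))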
# create new column 'distance': haversine distance between origin (lat, lon) and destination (lat, lon)
task2 = task2.assign(distance = haversine_distance(task2.o_lat, task2.o_lon, task2.d_lat, task2.d_lon))
task2.head()
# function mapping categories in distance
def cat_by_dist(data):
if data <= float(200):
data = '0-200'
elif data > float(200) and data <=float(800):
data = '200-800'
else:
data = '800-2000'
return data
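# (Added, hypothetical alternative) pd.cut can express the full 0-200 / 200-800 / 800-2000 / 2000+
# split mentioned in the task description in a single step:
# task2['distance_cat'] = pd.cut(task2['distance'], bins=[0, 200, 800, 2000, np.inf],
#                                labels=['0-200', '200-800', '800-2000', '2000+'])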
task2['distance_cat'] = task2['distance'].apply(cat_by_dist)
task2 = task2.groupby(['transport_type', 'distance_cat']).price_in_cents.agg(['mean', max, min])
# answer is
task2<jupyter_output><empty_output>
| no_license | /submition.ipynb | zhantileuov/tictactrip | 2 |
<jupyter_start><jupyter_text>## 1. Load Library and Iris Dataset<jupyter_code>import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import neighbors
from matplotlib.colors import ListedColormap
from sklearn.neighbors import KNeighborsClassifier
%matplotlib inline
df_iris = pd.read_csv("Iris.csv")
df_iris.head()<jupyter_output><empty_output><jupyter_text>## 2. Explore Iris Data<jupyter_code>df_iris.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 6 columns):
Id 150 non-null int64
SepalLengthCm 150 non-null float64
SepalWidthCm 150 non-null float64
PetalLengthCm 150 non-null float64
PetalWidthCm 150 non-null float64
Species 150 non-null object
dtypes: float64(4), int64(1), object(1)
memory usage: 7.1+ KB
<jupyter_text>**The data has 150 observations and 5 variables, and no missing value**<jupyter_code>sns.pairplot(df_iris, hue = 'Species')<jupyter_output><empty_output><jupyter_text>**From the pairplot graph, we can see that there are boundaries among species. Therefore, we will explore classification on this dataset**## 3. Prepare training dataset### 3.1 Quantify Species<jupyter_code>df_iris['Species'][df_iris['Species'] =='Iris-setosa'] = 0
df_iris['Species'][df_iris['Species'] =='Iris-versicolor'] = 1
df_iris['Species'][df_iris['Species'] =='Iris-virginica'] = 2
df_iris.head()<jupyter_output><empty_output><jupyter_text>### 3.2 Split Train and Testing Dataset<jupyter_code>X = df_iris.drop('Species',axis = 1)
y = df_iris['Species'].astype('int')
X.head()
y.head()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)
print("There are " + str(X_train.shape[0]) +
" samples in the train the model and " + str(X_test.shape[0]) +
" samples in the test the model.")<jupyter_output>There are 90 samples in the train the model and 60 samples in the test the model.
<jupyter_text>### 3.3 Scaling X Values<jupyter_code>sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)<jupyter_output><empty_output><jupyter_text>### 3.4 Support Vector Machine<jupyter_code>svm = SVC(kernel='rbf', random_state=0, gamma=.10, C=1.0)
svm.fit(X_train_std, y_train)
print("The score of SVC on the training set is {} %.".format(svm.score(X_train_std, y_train)*100))
print("The score of SVC on the test set is {} %.".format(svm.score(X_test_std, y_test)*100))<jupyter_output>The score of SVC on the training set is 100.0 %.
The score of SVC on the test set is 100.0 %.
<jupyter_text>### 3.5 K-Nearest Neighbors<jupyter_code>neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X_train_std, y_train)
print("The score of KNN with k=3 on the training set is {} %.".format(neigh.score(X_train_std, y_train)*100))
print("The score of KNN with k=3 on the test set is {} %.".format(neigh.score(X_test_std, y_test)*100))
X_train_std_petal = X_train_std[:,:2]
X_test_std_petal = X_test_std[:,:2]
neigh.fit(X_train_std_petal, y_train)
# Assign Value
x_min, x_max = X_test_std_petal[:, 0].min() - 1, X_test_std_petal[:, 0].max() + 1
y_min, y_max = X_test_std_petal[:, 1].min() - 1, X_test_std_petal[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.2), np.arange(y_min, y_max, 0.2))
Z = neigh.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
plt.contourf(xx, yy, Z, alpha=0.6, cmap= cmap_light)
plt.scatter(X_test_std_petal[:, 0], X_test_std_petal[:, 1], c=y_test, s=20, edgecolor='Black',
cmap= cmap_bold)
plt.show()<jupyter_output><empty_output>
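<jupyter_text>As an optional extension (not part of the original notebook), we can refit the KNN classifier on all four standardized features and look at per-class precision and recall on the test set.<jupyter_code>from sklearn.metrics import classification_report
# Refit on the full standardized feature set and report per-class metrics (added sketch)
neigh.fit(X_train_std, y_train)
print(classification_report(y_test, neigh.predict(X_test_std)))<jupyter_output><empty_output>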
| no_license | /Iris_Classification_Explore.ipynb | shichuan1007/Iris_Data_Exploration | 8 |
<jupyter_start><jupyter_text> Checkpoint:
**Looking to see completion and effort in completing the checkpoint. It's okay if it's not correct**
Based off this dataset with school financial, enrollment, and achievement data, we are interested in what information is a useful indicator of student performance at the state level.
This question is a bit too big for a checkpoint, however. Instead, we want you to look at smaller questions related to our overall goal. Here's the overview:
1. Choose a specific test to focus on
>Math/Reading for 4/8 grade
* Pick or create features to use
>Will all the features be useful in predicting test score? Are some more important than others? Should you standardize, bin, or scale the data?
* Explore the data as it relates to that test
>Create 2 well-labeled visualizations (graphs), each with a caption describing the graph and what it tells us about the data
* Create training and testing data
>Do you want to train on all the data? Only data from the last 10 years? Only Michigan data?
* Train a ML model to predict outcome
>Pick if you want to do a regression or classification task. For both cases, define _exactly_ what you want to predict, and pick any model in sklearn to use (see sklearn regressors and classifiers).
* Summarize your findings
>Write a 1 paragraph summary of what you did and make a recommendation about if and how student performance can be predicted
** Include comments throughout your code! Every cleanup and preprocessing task should be documented.
Of course, if you're finding this assignment interesting (and we really hope you do!), you are welcome to do more than the requirements! For example, you may want to see if expenditure affects 4th graders more than 8th graders. Maybe you want to look into the extended version of this dataset and see how factors like sex and race are involved. You can include all your work in this notebook when you turn it in -- just always make sure you explain what you did and interpret your results. Good luck! Data Cleanup
Import numpy, pandas, matplotlib, and seaborn
(Feel free to import other libraries!)<jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb<jupyter_output><empty_output><jupyter_text>Load in the "states_edu.csv" dataset and take a look at the head of the data<jupyter_code>df = pd.read_csv('/Users/leoguo/Library/Mobile Documents/com~apple~CloudDocs/大学/大二上学期/|Data science/mdst_tutorials_F21/data/states_edu.csv')<jupyter_output><empty_output><jupyter_text>You should always familiarize yourself with what each column in the dataframe represents. \ Read about the states_edu dataset here: https://www.kaggle.com/noriuk/us-education-datasets-unification-projectUse this space to rename columns, deal with missing data, etc. _(optional)_<jupyter_code>df.head()
df.shape
df.describe()<jupyter_output><empty_output><jupyter_text>Exploratory Data Analysis (EDA) Chosen Predictor for Test: **** (Ex. Math for 8th grade)
**(hit `Enter` to edit)**
Predictor Score in the questions refers to the predictor variable you chose here.How many different years of data are in our dataset? Use a pandas function.<jupyter_code>len(pd.unique(df['YEAR']))<jupyter_output><empty_output><jupyter_text>Let's compare Michigan to Ohio. Which state has the higher average predictor score across all years?<jupyter_code>df[df['YEAR'] == 2019].AVG_MATH_8_SCORE.mean()<jupyter_output><empty_output><jupyter_text>Find the average for your pedictor score across all states in 2019<jupyter_code>df.groupby('STATE').AVG_MATH_8_SCORE.max()<jupyter_output><empty_output><jupyter_text>Find the maximum predictor score for every state. Hint: there's a function that allows you to do this easily Feature Selection
After exploring the data, you now have to choose features that you would use to predict the performance of the students on a chosen test (your chosen predictor). By the way, you can also create your own features. For example, perhaps you figured that maybe a state's expenditure per student may affect their overall academic performance so you create a expenditure_per_student feature.
Use this space to modify or create features<jupyter_code>df['instr_expen_per'] = df['INSTRUCTION_EXPENDITURE'] / df['ENROLL']<jupyter_output><empty_output><jupyter_text>Final feature list: **Instruction expense per enrolled student**Feature selection justification: **to demonstrate a relationship between instruction expense and academic performance**Visualization
Use any graph you wish to see the relationship of your chosen predictor with any features you chose
**Visualization 1**<jupyter_code>df.plot.scatter(y='AVG_READING_8_SCORE',x='instr_expen_per')
plt.xlabel('instruction expenditure per enrolled student')
plt.ylabel('8th grade reading score')<jupyter_output><empty_output><jupyter_text>**Scatter plot of the reading score according to each student's instruction expenditure****Visualization 2**<jupyter_code>df.plot.scatter(y='AVG_MATH_8_SCORE',x='instr_expen_per')
plt.xlabel('instruction expenditure per student')
plt.ylabel('8th grade math score')<jupyter_output><empty_output><jupyter_text>**Scatter plot of the math score according to each student's instruction expenditure** Data Creation
_Use this space to create train/test data_<jupyter_code>from sklearn.model_selection import train_test_split
filtered_features = df[['TOTAL_REVENUE','instr_expen_per','YEAR','AVG_MATH_8_SCORE']].dropna()
# Y = df.loc[X.index]['AVG_MATH_8_SCORE'].dropna()
# X = X.loc[y.index][['TOTAL_REVENUE','instr_expen_per','YEAR']]
X = filtered_features[['TOTAL_REVENUE','instr_expen_per','YEAR']]
y = filtered_features[['AVG_MATH_8_SCORE']]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=5, random_state=0)<jupyter_output><empty_output><jupyter_text> Prediction ML Models Resource: https://medium.com/@vijaya.beeravalli/comparison-of-machine-learning-classification-models-for-credit-card-default-data-c3cf805c9a5aChosen ML task: ****<jupyter_code>from sklearn.linear_model import LinearRegression
# create your model here
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# FOR REGRESSION ONLY: (pick a single column to visualize results)
# Results from this graph _should not_ be used as a part of your results -- it is just here to help with intuition.
# Instead, look at the error values and individual intercepts.
col_name = 'instr_expen_per'
col_index = X_train.columns.get_loc(col_name)
f = plt.figure(figsize=(12,6))
plt.scatter(X_train[col_name], y_train, color = "red")
plt.scatter(X_train[col_name], model.predict(X_train), color = "green")
plt.scatter(X_test[col_name], model.predict(X_test), color = "blue")
new_x = np.linspace(X_train[col_name].min(),X_train[col_name].max(),200)
##intercept = model.predict([X_train.sort_values('instr_expen_per').iloc[0]]) - X_train['instr_expen_per'].min()*model.coef_[col_index]
#plt.plot(new_x, intercept+new_x*model.coef_[col_index])
plt.legend(['controlled model','true training','predicted training','predicted testing'])
plt.xlabel(col_name)
plt.ylabel('Math 8 score')<jupyter_output><empty_output>
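The checkpoint also asks for a summary of how well the model predicts, so it is worth quantifying the fit rather than relying on the scatter plot alone. A minimal sketch of that evaluation (assuming the `model`, `X_test`, and `y_test` objects defined above) might look like:

```python
# Quantify how well the fitted linear regression generalizes to the held-out rows.
from sklearn.metrics import mean_squared_error, r2_score

test_pred = model.predict(X_test)
print("R^2 on test set:", r2_score(y_test, test_pred))
print("RMSE on test set:", mean_squared_error(y_test, test_pred) ** 0.5)
```

With only 5 held-out rows (`test_size=5`), these numbers will be noisy; a larger test fraction would give a more reliable estimate.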
|
no_license
|
/checkpoint2_New.ipynb
|
leoguo145997/leo
| 11 |
<jupyter_start><jupyter_text># Different Clustering Algorithms#### **Importing Libraries**<jupyter_code>%matplotlib inline
import os
import time
import timeit
import numpy as np
import pandas as pd
from ce3 import *
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.cluster import SpectralClustering,KMeans
from IPython.display import Image
from imageio import imread
from skimage.transform import resize
from matplotlib.colors import to_hex<jupyter_output><empty_output><jupyter_text>## Create DataLet's create a collection of points with some "shapes." It will be interesting to see how K-means and Spectral clustering manage it differently.<jupyter_code>moons_data = make_moons(n_samples=1500, noise=0.07, random_state=0)
moons = moons_data[0]
plt.scatter(moons[:,0], moons[:,1], s=5)<jupyter_output><empty_output><jupyter_text>## K-means ClusteringLet's take a look at my K-means clustering results paired with the results of the corresponding built in sklearn function, and evaluate the efficacy of the work via time (in seconds). #### My implementation:<jupyter_code>start_time = time.time()
my_km = my_kmeans(moons, 2, 31)
stop_time = time.time()
print("--- %s seconds ---" % (stop_time-start_time))
# Plot the results
my_km_centers = my_km[0]
my_km_labels = my_km[1]
# Plot and color the points according to their label
plt.scatter(moons[:,0], moons[:,1], c=my_km_labels, s=5, cmap="winter")<jupyter_output>--- 0.15566086769104004 seconds ---
<jupyter_text>#### Sklearn's implementation:<jupyter_code>start_time = time.time()
km_alg = KMeans(n_clusters=2, init="random", random_state = 1, max_iter = 200)
fit = km_alg.fit(moons)
stop_time = time.time()
print("--- %s seconds ---" % (stop_time-start_time))
# Plot and color the points according to their label
sk_km_labels = fit.labels_
plt.scatter(moons[:,0], moons[:,1], c=sk_km_labels, s=5, cmap="winter")<jupyter_output>--- 0.027688980102539062 seconds ---
<jupyter_text>#### Observation:
The results are slightly different. If we imagine that we draw a line on each plot to divide the points into two clusters, the line that sklearn's implementation draws is steeper than that of my implementation. But the idea is the same.
It is not very surprising to see that my implementation (0.156s) is slower than sklearn's (0.028s), because sklearn's algorithm must have been optimized. The time varies each time I run the code, but in general sklearn's won't take more than 0.1s, while mine won't take less than 0.1s. #### More on K-means (and why it seems to fail on this dataset):
K-means clustering does not seem to do what we expected on this dataset, which has a "graphical feature." For me, it seems clear that there are two clusters of this dataset, one is the upper arch, and the second one is the lower arch. But k-means split it as the left half and the right half, which makes me feel a bit disappointed.
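For reference, this is the standard k-means objective (presumably what the `my_kmeans` helper from `ce3` implements as well): minimize the within-cluster sum of squared Euclidean distances,

$$\min_{C_1,\dots,C_k}\ \sum_{j=1}^{k} \sum_{x_i \in C_j} \lVert x_i - \mu_j \rVert_2^2, \qquad \mu_j = \frac{1}{|C_j|}\sum_{x_i \in C_j} x_i.$$

Minimizing this favours compact, roughly ball-shaped clusters around the centroids, which is consistent with the observation below about why the two arches get cut vertically.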
Reviewing how k-means was implemented, the result makes sense to me, though. I think this is because the k-means clustering algorithm utilizes Euclidean distance and is moving around the centroids, so it is looking for clusters like this image below that I grabbed from the textbook (PML). The points are compact in each of the cluster within a certain boundary. In contrast, our data might not be ideal for k-means.<jupyter_code>Image(filename = "pic.png", width = 600, height = 300)<jupyter_output><empty_output><jupyter_text>## Spectral Clustering Let's take a look at my Spectral clustering results paired with the results of the corresponding built in sklearn function, and evaluate the efficacy of the work via time (in seconds). #### My implementation:<jupyter_code>start_time = time.time()
adj = make_adj(moons)
L = my_laplacian(adj)
my_sc = spect_clustering(L,2)
stop_time = time.time()
print("--- %s seconds ---" % (stop_time-start_time))
# Plot the results
my_sc_labels = my_sc[0]
# Plot and color the points according to their label
plt.scatter(moons[:,0], moons[:,1], c=my_sc_labels, s=5, cmap="winter")<jupyter_output>--- 1.3464438915252686 seconds ---
<jupyter_text>#### Sklearn's implementation:<jupyter_code>start_time = time.time()
sc = SpectralClustering(
n_clusters=2,
random_state=42,
affinity='nearest_neighbors',
n_neighbors=10,
assign_labels='kmeans'
)
sc.fit(moons)
stop_time = time.time()
print("--- %s seconds ---" % (stop_time-start_time))
# Plot and color the points according to their label
sk_sc_labels = sc.labels_
plt.scatter(moons[:,0], moons[:,1], c=sk_sc_labels, s=5, cmap="winter")<jupyter_output>--- 0.6460461616516113 seconds ---
<jupyter_text>#### Observation:
Sklearn's spectral clustering gives me the results I was expecting from my first glance at the dataset - each arch is a cluster. My implementation is also going in the right direction, but a small number of points are mistakenly clustered into the other arch. In terms of time, again, my implementation (1.346s) is slower than sklearn's (0.646s). The time varies each time I run the code, but in general sklearn's won't take more than 1s, while mine won't take less than 1s.
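To make the comparison concrete, here is a generic sketch of the spectral clustering pipeline - build a neighbourhood graph, take the Laplacian's leading eigenvectors, then run k-means in that embedded space. This is only an added illustration (it is not the `ce3` implementation used above, and the helper name `spectral_sketch` is made up here):

```python
# Generic spectral clustering sketch: k-NN graph -> graph Laplacian -> eigenvector embedding -> k-means.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import KMeans
from scipy.sparse.csgraph import laplacian

def spectral_sketch(X, n_clusters=2, n_neighbors=10):
    # adjacency matrix from a symmetrized k-nearest-neighbour graph
    A = kneighbors_graph(X, n_neighbors=n_neighbors, include_self=False)
    A = 0.5 * (A + A.T)
    # normalized graph Laplacian
    L = laplacian(A, normed=True)
    # eigenvectors of the smallest eigenvalues give the spectral embedding
    vals, vecs = np.linalg.eigh(L.toarray())
    embedding = vecs[:, :n_clusters]
    # plain k-means separates the arches easily in this embedded space
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embedding)

# e.g. sketch_labels = spectral_sketch(moons)
```

Because the k-NN graph only connects nearby points, each arch becomes an (almost) separate connected component, and the eigenvector embedding makes that structure easy for k-means to split.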
With the above visualizations, it seems that spectral clustering is more sensitive to the "shape" of the data, at least in the two-dimensional space. However, it requires a bit more computation than k-means and takes slightly more time. #### An overall view of the four plots:<jupyter_code>fig, ax = plt.subplots(2, 2)
ax[0, 0].scatter(moons[:,0], moons[:,1], c=my_km_labels, s=5, cmap="winter")
ax[0, 1].scatter(moons[:,0], moons[:,1], c=sk_km_labels, s=5, cmap="winter")
ax[1, 0].scatter(moons[:,0], moons[:,1], c=my_sc_labels, s=5, cmap="winter")
ax[1, 1].scatter(moons[:,0], moons[:,1], c=sk_sc_labels, s=5, cmap="winter")
ax[0, 0].set_title("my k-means implementation")
ax[0, 1].set_title("sklearn's k-means")
ax[1, 0].set_title("my spectral clustering")
ax[1, 1].set_title("sklearn's spectral clustering")
fig.tight_layout()
plt.show()<jupyter_output><empty_output><jupyter_text>We can further verify and compare the runtimes by using the magic built in python command ```timeit```:<jupyter_code>%%timeit
my_km = my_kmeans(moons, 2, 31)
%%timeit
km_alg = KMeans(n_clusters=2, init="random", random_state = 1, max_iter = 200)
fit = km_alg.fit(moons)
%%timeit
adj = make_adj(moons)
L = my_laplacian(adj)
my_sc = spect_clustering(L,2)
%%timeit
sc = SpectralClustering(
n_clusters=2,
random_state=42,
affinity='nearest_neighbors',
n_neighbors=10,
assign_labels='kmeans'
)
sc.fit(moons)<jupyter_output>569 ms ± 47.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
<jupyter_text>In `timeit`, the code cell is run several times to find the average run time and its standard deviation. Let's see the results in a table.<jupyter_code>table = pd.DataFrame({
"Implementation": ["my k-means","sklearn's k-means","my spectral","sklearn's spectral"],
"Average time(ms)": [126, 16.7, 1.39*1000, 569]
})
table.sort_values(by="Average time(ms)", ascending=True)<jupyter_output><empty_output>
|
no_license
|
/Clustering-and-Vintage-Art/Different-Clustering-Algorithms.ipynb
|
comp-machine-learning-spring2021/portfolio-HelenaSG
| 10 |
<jupyter_start><jupyter_text>In the previous days' exercises we always created a Sequential model directly and then added layers to it.
Next, we call TensorFlow through the functional API instead, which gives finer-grained control over the model's structure and flow.<jupyter_code>train_image.shape
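# Note (added): the data-loading cell is not part of this captured notebook. The code below
# assumes `tf` (tensorflow), `plt` (matplotlib.pyplot) and 28x28 grayscale arrays named
# `train_image` / `test_image` were prepared earlier, for example (only an assumption --
# any 28x28, 10-class image dataset fits the code):
#   import tensorflow as tf
#   import matplotlib.pyplot as plt
#   (train_image, train_label), (test_image, test_label) = tf.keras.datasets.fashion_mnist.load_data()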
plt.imshow(train_image[55000])
train_image = train_image / 255.0
test_image = test_image / 255.0
# Build the network layer by layer with the functional API
input = tf.keras.Input(shape=(28,28))
x=tf.keras.layers.Flatten()(input)
x=tf.keras.layers.Dense(32,activation='relu')(x)
x=tf.keras.layers.Dropout(0.5)(x)
x=tf.keras.layers.Dense(64,activation='relu')(x)
output = tf.keras.layers.Dense(10, activation='softmax')(x)
# Build the model, specifying its inputs and outputs
model = tf.keras.Model(inputs=input, outputs = output)
model.summary()<jupyter_output>Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) [(None, 28, 28)] 0
_________________________________________________________________
flatten_2 (Flatten) (None, 784) 0
_________________________________________________________________
dense_2 (Dense) (None, 32) 25120
_________________________________________________________________
dropout_1 (Dropout) (None, 32) 0
_________________________________________________________________
dense_3 (Dense) (None, 64) 2112
_________________________________________________________________
dense_4 (Dense) (None, 10) 650
=============================================================[...]
|
permissive
|
/tensorflowStudy/Day5_tensorflow.ipynb
|
ggwhsd/PythonPractice
| 1 |
<jupyter_start><jupyter_text>## Bessel_Functions
April 21, 2020
**1 Bessel Functions**
In this notebook we want to verify two simple relations involving the Bessel functions $J_n(x)$ of the first kind.
The following plot shows the Bessel function of the first kind $J_n(x)$ for $n=0,1,2$:
The relations in this notebook are the asymptotic form of $J_n(x)$ for $x >> n$ and the known recursion relation to obtain $J_{n+1}(x)$ from $J_n(x)$ and $J_{n-1}(x)$:
- $J_n(x)\approx \sqrt{\frac{2}{\pi x}}\cos( x-\left(n\frac{\pi}{2}+\frac{\pi}{4}\right) )$
for $x>>n$
- $J_{n+1}(x)=\frac{2n}{x}J_n(x)-J_{n-1}(x)$
For more information on the functions, visit the corresponding [Wikipedia article](https://en.wikipedia.org/wiki/Bessel_function).
We basically would like to check how well the [scipy](https://scipy.org/) Bessel function implementation satisfies the above relations.
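Before the full plots, a quick single-point spot check of the recursion relation can already be done in a couple of lines (this small snippet is an added illustration; the systematic verification over a whole range of $x$ follows below):

```python
# Spot check of J_{n+1}(x) = (2n/x) J_n(x) - J_{n-1}(x) at one arbitrary point.
import scipy.special as ss

x, n = 7.3, 5
lhs = ss.jn(n + 1, x)
rhs = (2.0 * n / x) * ss.jn(n, x) - ss.jn(n - 1, x)
print(lhs, rhs, abs(lhs - rhs))  # the difference should be at machine-precision level
```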
<jupyter_code># do not forget to put the following '%matplotlib inline'
# within Jupyter notebooks. If you forget it, external
# windows are opened for the plot but we would like to
# have the plots integrated in the notebooks
# The line only needs to be given ONCE per notebook!
%matplotlib inline
# Verification of scipys Bessel function implementation
# - asymptotic behaviour for large x
import scipy.special as ss
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# for nicer plots, make fonts larger and lines thicker
matplotlib.rcParams['font.size'] = 12
matplotlib.rcParams['axes.linewidth'] = 2.0
def jn_asym(n,x):
"""Asymptotic form of jn(x) for x>>n"""
return np.sqrt(2.0 / np.pi / x) * \
np.cos(x - (n * np.pi / 2.0 + np.pi / 4.0))
# We choose to plot between 0 and 50. We exclude 0 because the
# recursion relation contains a division by it.
x = np.linspace(0., 50, 500)
# plot J_0, J_1 and J_5.
for n in [0, 1, 5]:
plt.plot(x, ss.jn(n, x), label='$J_%d$' % (n))
# and compute its asymptotic form (valid for x>>n, where n is the order).
# must first find the valid range of x where at least x>n.
x_asym = x[x > n]
plt.plot(x_asym, jn_asym(n, x_asym), linewidth = 2.0,
label='$J_%d$ (asymptotic)' % n)
# Finish the plot and show it
plt.title('Bessel Functions')
plt.xlabel('x')
# note that you also can use LaTeX for plot labels!
plt.ylabel('$J_n(x)$')
# horizontal line at 0 to show x-axis, but after the legend
plt.legend()
plt.axhline(0)<jupyter_output><empty_output><jupyter_text>We see that the asymptotic form is an excellent approximation for the Bessel function at large $x$-values.<jupyter_code># Verification of scipys Bessel function implementation
# - recursion relation
import scipy.special as ss
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# for nicer plots, make fonts larger and lines thicker
matplotlib.rcParams['font.size'] = 12
matplotlib.rcParams['axes.linewidth'] = 2.0
# Now, let's verify numerically the recursion relation
# J(n+1,x) = (2n/x)J(n,x)-J(n-1,x), n = 5
# We choose here to consider x-values between 0.1 and 50.
# We exclude 0 because the recursion relation contains a
# formal division by it.
x = np.linspace(0.1, 50, 500)
# construct both sides of the recursion relation, these should be equal
n = 5
# the scipy implementation of jn(5);
j_n = ss.jn(5, x)
# The recursion relation:
j_n_rec = (2.0 * (n - 1) / x) * ss.jn(n - 1, x) - ss.jn(n - 2, x)
# We now plot the difference between the two formulas
# (j_n and j_n_rec above). Note that to
# properly display the errors, we want to use a logarithmic y scale.
plt.semilogy(x, abs(j_n - j_n_rec), 'r+-', linewidth=2.0)
plt.title('Error in recursion for $J_%s$' % n)
plt.xlabel('x')
plt.ylabel('$|J_n(5) - J_{n,rec}(5)|$')
plt.grid()
# Don't forget a show() call at the end of the script.
# Here we save the plot to a file
plt.savefig("bessel_error.png")<jupyter_output><empty_output>
|
no_license
|
/Exercise-1/Bessel_Functions.ipynb
|
rcamposdelgado/Python-codes
| 2 |
<jupyter_start><jupyter_text>Please check if CUDA on your device is available.<jupyter_code>import torch
print(torch.__version__)
print(torch.cuda.is_available())
%load_ext autoreload
%autoreload 2
import os
import torch as t
from utils.config import opt
from model import FasterRCNNVGG16
from trainer import FasterRCNNTrainer
from data.util import read_image
from utils.vis_tool import vis_bbox
from utils import array_tool as at
from train import train
from Test import test
%matplotlib inline<jupyter_output><empty_output><jupyter_text>To use Visdom, please run the command below in a terminal and visit http://localhost:8097 in your browser.
    python -m visdom.server
If you'd like to train a new model, set **load_path** = None in utils/config.py<jupyter_code>train()<jupyter_output><empty_output><jupyter_text>You'll need to download the pretrained model from [BaiduDisk](https://pan.baidu.com/s/12NLSy-7zsNpuYNoeJjj_wA) with password: b02h<jupyter_code>test()<jupyter_output><empty_output><jupyter_text>The following is a sample demo on some images.<jupyter_code>faster_rcnn = FasterRCNNVGG16()
trainer = FasterRCNNTrainer(faster_rcnn).cuda()
trainer.load(opt.load_path)
demo_path = 'demo'
img_list = os.listdir(demo_path)
imgs = list()
for filename in img_list:
img = read_image(demo_path + '/' + filename, color=True)
img = t.from_numpy(img)
imgs.append(img)
_bboxes, _labels, _scores = trainer.faster_rcnn.predict(imgs, visualize=True)
# print(_bboxes, _labels, _scores)
for i in range(len(imgs)):
vis_bbox(at.tonumpy(imgs[i]),
at.tonumpy(_bboxes[i]),
at.tonumpy(_labels[i]).reshape(-1),
at.tonumpy(_scores[i]).reshape(-1))<jupyter_output><empty_output>
|
permissive
|
/Demo_pro.ipynb
|
IT-BillDeng/Garbage-Classification
| 4 |
<jupyter_start><jupyter_text># Create Word Context Embeddings<jupyter_code># import modules
import numpy as np
import pandas as pd
from nltk import word_tokenize, pos_tag
from nltk.corpus import stopwords, wordnet
from nltk.tokenize.treebank import TreebankWordDetokenizer
from sklearn.base import TransformerMixin
from sklearn.decomposition import FastICA, TruncatedSVD, PCA, NMF
from sklearn.preprocessing import StandardScaler
import re
import matplotlib.pyplot as plt
from nltk.stem import WordNetLemmatizer
# to convert contractions picked up by word_tokenize() into full words
contractions = {
"n't": 'not',
"'ve": 'have',
"'s": 'is', # note that this will include possessive nouns
'gonna': 'going to',
'gotta': 'got to',
"'d": 'would',
"'ll": 'will',
"'re": 'are',
"'m": 'am',
'wanna': 'want to'
}
# to convert nltk_pos tags to wordnet-compatible PoS tags
def convert_pos_wordnet(tag):
tag_abbr = tag[0].upper()
tag_dict = {
'J': wordnet.ADJ,
'N': wordnet.NOUN,
'V': wordnet.VERB,
'R': wordnet.ADV
}
if tag_abbr in tag_dict:
return tag_dict[tag_abbr]
class ContextMatrix(TransformerMixin):
# initialize class & private variables
def __init__(self,
window_size = 4,
remove_stopwords = True,
add_start_end_tokens = True,
lowercase = False,
lemmatize = False,
pmi = False,
spmi_k = 1,
laplace_smoothing = 0,
pmi_positive = False,
sppmi_k = 1):
""" Params:
window_size: size of +/- context window (default = 4)
remove_stopwords: boolean, whether or not to remove NLTK English stopwords
add_start_end_tokens: boolean, whether or not to append <START> and <END> to the
beginning/end of each document in the corpus (default = True)
lowercase: boolean, whether or not to convert words to all lowercase
lemmatize: boolean, whether or not to lemmatize input text
pmi: boolean, whether or not to compute pointwise mutual information
pmi_positive: boolean, whether or not to compute positive PMI
"""
self.window_size = window_size
self.remove_stopwords = remove_stopwords
self.add_start_end_tokens = add_start_end_tokens
self.lowercase = lowercase
self.lemmatize = lemmatize
self.pmi = pmi
self.spmi_k = spmi_k
self.laplace_smoothing = laplace_smoothing
self.pmi_positive = pmi_positive
self.sppmi_k = sppmi_k
self.corpus = None
self.clean_corpus = None
self.vocabulary = None
self.X = None
self.doc_terms_lists = None
def fit(self, corpus, y = None):
""" Learn the dictionary of all unique tokens for given corpus.
Params:
corpus: list of strings
Returns: self
"""
self.corpus = corpus
term_dict = dict()
k = 0
corpus_words = []
clean_corpus = []
doc_terms_lists = []
detokenizer = TreebankWordDetokenizer()
lemmatizer = WordNetLemmatizer()
for text in corpus:
words = word_tokenize(text)
if self.remove_stopwords:
clean_words = []
for word in words:
if word.lower() not in set(stopwords.words('english')):
clean_words.append(word)
words = clean_words
if self.lowercase:
clean_words = []
for word in words:
clean_words.append(word.lower())
words = clean_words
if self.lemmatize:
clean_words = []
for word in words:
PoS_tag = pos_tag([word])[0][1]
# to change contractions to full word form
if word in contractions:
word = contractions[word]
if PoS_tag[0].upper() in 'JNVR':
word = lemmatizer.lemmatize(word, convert_pos_wordnet(PoS_tag))
else:
word = lemmatizer.lemmatize(word)
clean_words.append(word)
words = clean_words
# detokenize trick taken from this StackOverflow post:
# https://stackoverflow.com/questions/21948019/python-untokenize-a-sentence
# and NLTK treebank documentation:
# https://www.nltk.org/_modules/nltk/tokenize/treebank.html
text = detokenizer.detokenize(words)
clean_corpus.append(text)
[corpus_words.append(word) for word in words]
if self.add_start_end_tokens:
words = ['<START>'] + words + ['<END>']
doc_terms_lists.append(words)
self.clean_corpus = clean_corpus
self.doc_terms_lists = doc_terms_lists
corpus_words = list(set(corpus_words))
if self.add_start_end_tokens:
corpus_words = ['<START>'] + corpus_words + ['<END>']
corpus_words = sorted(corpus_words)
for el in corpus_words:
term_dict[el] = k
k += 1
self.vocabulary = term_dict
return self
def transform(self, y = None):
""" Compute the co-occurrence matrix for given corpus and window_size, using term dictionary
obtained with fit method.
Returns: term-context co-occurrence matrix (shape: target terms by context terms) with
raw counts
"""
num_terms = len(self.vocabulary)
window = self.window_size
X = np.full((num_terms, num_terms), self.laplace_smoothing)
# iterate over each text in the clean corpus (if stopwords were not removed, this is identical
# to the original corpus)
for i in range(len(self.doc_terms_lists)):
# (ordered) list of words in text
words = self.doc_terms_lists[i]
for i in range(len(words)):
target = words[i]
# check to see if target word is in the dictionary; if not, skip
if target in self.vocabulary:
# grab index from dictionary
target_dict_index = self.vocabulary[target]
# find left-most and right-most window indices for each target word
left_end_index = max(i - window, 0)
right_end_index = min(i + window, len(words) - 1)
# loop over all words within window
# NOTE: this will include the target word; make sure to skip over it
for j in range(left_end_index, right_end_index + 1):
# skip "context word" where the "context word" index is equal to the
# target word index
if j != i:
context_word = words[j]
# check to see if context word is in the fitted dictionary; if
# not, skip
if context_word in self.vocabulary:
X[target_dict_index, self.vocabulary[context_word]] += 1
# if pmi = True, compute pmi matrix from word-context raw frequencies
# more concise code taken from this StackOverflow post:
# https://stackoverflow.com/questions/58701337/how-to-construct-ppmi-matrix-from-a-text-corpus
if self.pmi:
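            # PMI(w, c) = log( p(w, c) / (p(w) p(c)) ); with raw counts this equals
            # log( X[w, c] * total / (row_sum[w] * col_sum[c]) ), so "expected" below is the
            # co-occurrence count one would see if w and c were independent. (Comment added for clarity.)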
denom = X.sum()
col_sums = X.sum(axis = 0)
row_sums = X.sum(axis = 1)
expected = np.outer(row_sums, col_sums)/denom
X = X/expected
for i in range(X.shape[0]):
for j in range(X.shape[1]):
if X[i,j] > 0:
X[i,j] = np.log(X[i,j]) - np.log(self.spmi_k)
if self.pmi_positive:
X[i,j] = max(X[i,j] - np.log(self.sppmi_k), 0)
# note that X is a dense matrix
self.X = X
return X
cm = ContextMatrix(lowercase = True, lemmatize = True)
tweets = [
"Coronavirus is a fake liberal hoax.",
"Trump won't do anything about coronavirus.",
"The liberal fake news media always blame Pres Trump."
]
cm.fit(tweets)
pd.DataFrame(cm.transform(tweets), index = cm.vocabulary, columns = cm.vocabulary)
pd.DataFrame(cm.transform(tweets), index = cm.vocabulary, columns = cm.vocabulary)
cm.clean_corpus
cm.corpus<jupyter_output><empty_output><jupyter_text># Train embeddings using tweets as corpus<jupyter_code>tweets = pd.read_csv('COVID19_Dataset-text_labels_only.csv')
tweets
cm = ContextMatrix(window_size = 1, lowercase = True)
cm.fit(tweets['Tweet'])
word_context_matrix = cm.transform(tweets['Tweet'])
pd.DataFrame(word_context_matrix, index = cm.vocabulary, columns = cm.vocabulary)
word_context_matrix.shape
ica = FastICA(n_components = 2)
std_scaler = StandardScaler()
X_std = std_scaler.fit_transform(word_context_matrix)
matrix = ica.fit_transform(X_std)
matrix
df = pd.DataFrame(matrix,
index = cm.vocabulary,
columns = ['Comp {}'.format(i+1) for i in range(2)])
df
words = [key for key in cm.vocabulary.keys()]
for i, word in enumerate(words):
#if re.match('^co[rv]', word):
x = df['Comp 1'][i]
y = df['Comp 2'][i]
plt.scatter(x, y)
plt.text(x, y, word)
# set word embeddings for text classification
ica = FastICA(n_components = 200)
embeddings = ica.fit_transform(X_std)
embeddings<jupyter_output><empty_output><jupyter_text># Clean text<jupyter_code># clean text - all lowercase & remove stopwords
def clean_text(str_list):
clean_list = []
for text in str_list:
words = word_tokenize(text)
clean_words = []
for word in words:
if word.lower() not in set(stopwords.words('english')):
clean_words.append(word.lower())
words = clean_words
clean_text = ' '.join(clean_words)
clean_list.append(clean_text)
return clean_list
# run on all tweets
tweets['clean_tweet'] = clean_text(tweets['Tweet'])
# pull out target
y = tweets['Is_Unreliable']<jupyter_output><empty_output><jupyter_text># Derive text vectors from word embeddings<jupyter_code>def get_text_vectors(word_embeddings, word_index_dict, text_list):
    for k in range(len(text_list)):
        text = text_list[k]
        text_vec = np.zeros(word_embeddings.shape[1])
        words = word_tokenize(text)
        # stack the embedding of every in-vocabulary word as a row of text_matrix
        text_matrix = None
        for word in words:
            if word in word_index_dict:
                word_embed_vec = word_embeddings[word_index_dict[word], :].reshape(1, -1)
                if text_matrix is None:
                    text_matrix = word_embed_vec
                else:
                    text_matrix = np.vstack((text_matrix, word_embed_vec))
        # the text vector is the element-wise mean of its word vectors
        # (texts with no in-vocabulary words keep the zero vector)
        if text_matrix is not None:
            for j in range(len(text_vec)):
                text_vec[j] = text_matrix[:, j].mean()
        if k == 0:
            full_matrix = text_vec
        else:
            full_matrix = np.vstack((full_matrix, text_vec))
    return full_matrix
X = get_text_vectors(embeddings, cm.vocabulary, tweets['clean_tweet'])
X.shape
X<jupyter_output><empty_output><jupyter_text># Classification - nested CV<jupyter_code>from sklearn.model_selection import cross_validate, KFold, GridSearchCV
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report, roc_auc_score
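# Note on the nested CV below (comment added): the inner 3-fold CV inside GridSearchCV picks
# the SVC kernel and C, while the outer 5-fold CV scores the whole tune-then-fit procedure,
# so the reported metrics are not biased by the hyperparameter search itself.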
# create pipeline
pipe = Pipeline([
('classify', SVC())
])
# SVC hyperparams to optimize
kernel = ['rbf', 'linear', 'poly', 'sigmoid']
C = [0.001, 0.01, 0.1, 1, 10]
# set up parameter grid
params = {
'classify__kernel': kernel,
'classify__C': C
}
# Set CV scheme for inner and outer loops
inner_cv = KFold(n_splits = 3, shuffle = True, random_state = 1)
outer_cv = KFold(n_splits = 5, shuffle = True, random_state = 1)
# Set up GridSearch for inner loop
grid_SVC = GridSearchCV(pipe, params, cv = inner_cv)
#grid_SVC.fit(X, y)
# Nested CV scores
scores = cross_validate(grid_SVC,
X = X,
y = y,
cv = outer_cv,
scoring = ['roc_auc', 'accuracy', 'f1', 'precision', 'recall'],
return_estimator = True)
auc = scores['test_roc_auc']
accuracy = scores['test_accuracy']
f1 = scores['test_f1']
precision = scores['test_precision']
recall = scores['test_recall']
estimators = scores['estimator']
auc.mean()
accuracy.mean()
f1.mean()
precision.mean()
recall.mean()
for i in estimators:
print(i.best_params_)
print('\n')<jupyter_output>{'classify__C': 10, 'classify__kernel': 'rbf'}
{'classify__C': 1, 'classify__kernel': 'rbf'}
{'classify__C': 1, 'classify__kernel': 'rbf'}
{'classify__C': 1, 'classify__kernel': 'rbf'}
{'classify__C': 1, 'classify__kernel': 'rbf'}
|
no_license
|
/embed_preprocess_optimization/16 June 2020/ICA/No Lemmatize/Window Size 1/WordContextEmbeddings-window1-rawCounts-ICA200_plus_classification.ipynb
|
caitmmoroney/covid_misinfo
| 5 |
<jupyter_start><jupyter_text>### The questions for the exercises are given above the cells and the expected output is given below the cell. Please type the code inserting a new cell below the question because if you run the expected output cell the output would vanish! Happy learning! ## Exercise Question 1: Given an input list removes the element at index 4 and add it to the 2nd position and also, at the end of the list<jupyter_code>original_List=[34, 54, 67, 89, 11, 43, 94]
print('Original list',original_List)
element_ToAdd=original_List.pop(4)
print('List After removing element at index 4',original_List)
original_List.insert(2,element_ToAdd)
print('Adding element at index 2',original_List)
original_List.insert(len(original_List),element_ToAdd)
print('List after Adding element at last',original_List)<jupyter_output>Original list [34, 54, 67, 89, 11, 43, 94]
List After removing element at index 4 [34, 54, 67, 89, 43, 94]
Adding element at index 2 [34, 54, 11, 67, 89, 43, 94]
List after Adding element at last [34, 54, 11, 67, 89, 43, 94, 11]
<jupyter_text>## Exercise Question 2: Given a Python list you should be able to display Python list in the following order<jupyter_code>original_List=[100, 200, 300, 400, 500]
print('The Original List is:', original_List)
#Method 1
print('The Expected result is:', original_List[::-1])
#Method 2
original_List.reverse()
print('The Expected result is:',original_List )<jupyter_output>The Original List is: [100, 200, 300, 400, 500]
The Expected result is: [500, 400, 300, 200, 100]
The Expected result is: [500, 400, 300, 200, 100]
<jupyter_text>## Exercise Question 3: Concatenate (join) two lists in the following order
<jupyter_code>#Method 1
original_List1=['Hello ', 'take ']
original_List2=['Dear', 'Sir']
print('The original list1:', original_List1)
print('The original list2:', original_List2)
print('The Expected result is:', original_List1+original_List2)
#Method 2
original_List1=['Hello ', 'take ']
original_List2=['Dear', 'Sir']
print('The original list1:', original_List1)
print('The original list2:', original_List2)
original_List1.extend(original_List2)
print('The Expected result is:',original_List1 )
#Method 3
original_List1=['Hello ', 'take ']
original_List2=['Dear', 'Sir']
print('The original list1:', original_List1)
print('The original list2:', original_List2)
for elem in original_List2:
original_List1.append(elem)
print('The Expected result is:',original_List1)<jupyter_output>The original list1: ['Hello ', 'take ']
The original list2: ['Dear', 'Sir']
The Expected result is: ['Hello ', 'take ', 'Dear', 'Sir']
The original list1: ['Hello ', 'take ']
The original list2: ['Dear', 'Sir']
The Expected result is: ['Hello ', 'take ', 'Dear', 'Sir']
The original list1: ['Hello ', 'take ']
The original list2: ['Dear', 'Sir']
The Expected result is: ['Hello ', 'take ', 'Dear', 'Sir']
<jupyter_text>## Exercise Question 4: Add item 7000 after 6000 in the following Python List<jupyter_code>original_List=[10, 20, [300, 400, [5000, 6000], 500], 30, 40]
print('The original list is:', original_List)
def find_Element(data,valueToSearch,valueToReplace):
if(type(data) is list):
for elem in data:
if(find_Element(elem,valueToSearch,valueToReplace)):
data.insert(data.index(elem)+1,valueToReplace)
elif(type(data) is int and data==valueToSearch):
return True
find_Element(original_List,6000,7000)
print('The expected list is:',original_List)
<jupyter_output>The original list is: [10, 20, [300, 400, [5000, 6000], 500], 30, 40]
[10, 20, [300, 400, [5000, 6000, 7000], 500], 30, 40]
<jupyter_text>## Exercise Question 5: Given a nested list extend it with adding sub list ["h", "i", "j"] in a such a way that it will look like the## Exercise Question 6: Given a Python list, find value 20 in the list, and if it is present, replace it with 200. Only update the first occurrence of a value<jupyter_code>original_List=[5, 10, 15, 20, 25, 50, 20]
print('The original list is:', original_List)
search_Value=20
original_List.insert(original_List.index(search_Value) if search_Value in original_List else -1,200)
print('The expected output is:', original_List)<jupyter_output>The original list is: [5, 10, 15, 20, 25, 50, 20]
The expected output is: [5, 10, 15, 200, 20, 25, 50, 20]
|
no_license
|
/Python/Exercise - Day_1.ipynb
|
abhijithmnair/DataScience
| 5 |
<jupyter_start><jupyter_text># Neural Machine Translation
Welcome to your first programming assignment for this week!
* You will build a Neural Machine Translation (NMT) model to translate human-readable dates ("25th of June, 2009") into machine-readable dates ("2009-06-25").
* You will do this using an attention model, one of the most sophisticated sequence-to-sequence models.
This notebook was produced together with NVIDIA's Deep Learning Institute. ## Updates
#### If you were working on the notebook before this update...
* The current notebook is version "4a".
* You can find your original work saved in the notebook with the previous version name ("v4")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* Clarified names of variables to be consistent with the lectures and consistent within the assignment
- pre-attention bi-directional LSTM: the first LSTM that processes the input data.
- 'a': the hidden state of the pre-attention LSTM.
- post-attention LSTM: the LSTM that outputs the translation.
- 's': the hidden state of the post-attention LSTM.
- energies "e". The output of the dense function that takes "a" and "s" as inputs.
- All references to "output activation" are updated to "hidden state".
- "post-activation" sequence model is updated to "post-attention sequence model".
- 3.1: "Getting the activations from the Network" renamed to "Getting the attention weights from the network."
- Appropriate mentions of "activation" replaced "attention weights."
- Sequence of alphas corrected to be a sequence of "a" hidden states.
* one_step_attention:
- Provides sample code for each Keras layer, to show how to call the functions.
- Reminds students to provide the list of hidden states in a specific order, in order to pass the autograder.
* model
- Provides sample code for each Keras layer, to show how to call the functions.
- Added a troubleshooting note about handling errors.
- Fixed typo: outputs should be of length 10 and not 11.
* define optimizer and compile model
- Provides sample code for each Keras layer, to show how to call the functions.
* Spelling, grammar and wording corrections.Let's load all the packages you will need for this assignment.<jupyter_code>from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np
from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
%matplotlib inline<jupyter_output>Using TensorFlow backend.
<jupyter_text>## 1 - Translating human readable dates into machine readable dates
* The model you will build here could be used to translate from one language to another, such as translating from English to Hindi.
* However, language translation requires massive datasets and usually takes days of training on GPUs.
* To give you a place to experiment with these models without using massive datasets, we will perform a simpler "date translation" task.
* The network will input a date written in a variety of possible formats (*e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987"*)
* The network will translate them into standardized, machine readable dates (*e.g. "1958-08-29", "1968-03-30", "1987-06-24"*).
* We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD.
<!--
Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !--> ### 1.1 - Dataset
We will train the model on a dataset of 10,000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples. <jupyter_code>m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
dataset[:10]<jupyter_output><empty_output><jupyter_text>You've loaded:
- `dataset`: a list of tuples of (human readable date, machine readable date).
- `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index.
- `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index.
- **Note**: These indices are not necessarily consistent with `human_vocab`.
- `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters.
Let's preprocess the data and map the raw text data into the index values.
- We will set Tx=30
- We assume Tx is the maximum length of the human readable date.
- If we get a longer input, we would have to truncate it.
- We will set Ty=10
- "YYYY-MM-DD" is 10 characters long.<jupyter_code>Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)
print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)<jupyter_output>X.shape: (10000, 30)
Y.shape: (10000, 10)
Xoh.shape: (10000, 30, 37)
Yoh.shape: (10000, 10, 11)
<jupyter_text>You now have:
- `X`: a processed version of the human readable dates in the training set.
- Each character in X is replaced by an index (integer) mapped to the character using `human_vocab`.
- Each date is padded to ensure a length of $T_x$ using a special padding character.
- `X.shape = (m, Tx)` where m is the number of training examples in a batch.
- `Y`: a processed version of the machine readable dates in the training set.
- Each character is replaced by the index (integer) it is mapped to in `machine_vocab`.
- `Y.shape = (m, Ty)`.
- `Xoh`: one-hot version of `X`
- Each index in `X` is converted to the one-hot representation (if the index is 2, the one-hot version has the index position 2 set to 1, and the remaining positions are 0).
- `Xoh.shape = (m, Tx, len(human_vocab))`
- `Yoh`: one-hot version of `Y`
- Each index in `Y` is converted to the one-hot representation.
- `Yoh.shape = (m, Ty, len(machine_vocab))`.
- `len(machine_vocab) = 11` since there are 10 numeric digits (0 to 9) and the `-` symbol.* Let's also look at some examples of preprocessed training examples.
* Feel free to play with `index` in the cell below to navigate the dataset and see how source/target dates are preprocessed. <jupyter_code>index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])<jupyter_output>Source date: 9 may 1998
Target date: 1998-05-09
Source after preprocessing (indices): [12 0 24 13 34 0 4 12 12 11 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36
36 36 36 36 36]
Target after preprocessing (indices): [ 2 10 10 9 0 1 6 0 1 10]
Source after preprocessing (one-hot): [[ 0. 0. 0. ..., 0. 0. 0.]
[ 1. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 1.]
[ 0. 0. 0. ..., 0. 0. 1.]
[ 0. 0. 0. ..., 0. 0. 1.]]
Target after preprocessing (one-hot): [[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
<jupyter_text>## 2 - Neural machine translation with attention
* If you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate.
* Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down.
* The attention mechanism tells a Neural Machine Translation model where it should pay attention to at any step.
### 2.1 - Attention mechanism
In this part, you will implement the attention mechanism presented in the lecture videos.
* Here is a figure to remind you how the model works.
* The diagram on the left shows the attention model.
* The diagram on the right shows what one "attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$.
* The attention variables $\alpha^{\langle t, t' \rangle}$ are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$).
**Figure 1**: Neural machine translation with attention
Here are some properties of the model that you may notice:
#### Pre-attention and Post-attention LSTMs on both sides of the attention mechanism
- There are two separate LSTMs in this model (see diagram on the left): pre-attention and post-attention LSTMs.
- The *pre-attention* Bi-LSTM, at the bottom of the picture, is a bi-directional LSTM and comes *before* the attention mechanism.
- The attention mechanism is shown in the middle of the left-hand diagram.
- The pre-attention Bi-LSTM goes through $T_x$ time steps
- *Post-attention* LSTM: at the top of the diagram comes *after* the attention mechanism.
- The post-attention LSTM goes through $T_y$ time steps.
- The post-attention LSTM passes the hidden state $s^{\langle t \rangle}$ and cell state $c^{\langle t \rangle}$ from one time step to the next. #### An LSTM has both a hidden state and cell state
* In the lecture videos, we were using only a basic RNN for the post-attention sequence model
* This means that the state captured by the RNN was outputting only the hidden state $s^{\langle t\rangle}$.
* In this assignment, we are using an LSTM instead of a basic RNN.
* So the LSTM has both the hidden state $s^{\langle t\rangle}$ and the cell state $c^{\langle t\rangle}$. #### Each time step does not use predictions from the previous time step
* Unlike previous text generation examples earlier in the course, in this model, the post-attention LSTM at time $t$ does not take the previous time step's prediction $y^{\langle t-1 \rangle}$ as input.
* The post-attention LSTM at time 't' only takes the hidden state $s^{\langle t\rangle}$ and cell state $c^{\langle t\rangle}$ as input.
* We have designed the model this way because unlike language generation (where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date.#### Concatenation of hidden states from the forward and backward pre-attention LSTMs
- $\overrightarrow{a}^{\langle t \rangle}$: hidden state of the forward-direction, pre-attention LSTM.
- $\overleftarrow{a}^{\langle t \rangle}$: hidden state of the backward-direction, pre-attention LSTM.
- $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}, \overleftarrow{a}^{\langle t \rangle}]$: the concatenation of the activations of both the forward-direction $\overrightarrow{a}^{\langle t \rangle}$ and backward-directions $\overleftarrow{a}^{\langle t \rangle}$ of the pre-attention Bi-LSTM. #### Computing "energies" $e^{\langle t, t' \rangle}$ as a function of $s^{\langle t-1 \rangle}$ and $a^{\langle t' \rangle}$
- Recall in the lesson videos "Attention Model", at time 6:45 to 8:16, the definition of "e" as a function of $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$.
- "e" is called the "energies" variable.
- $s^{\langle t-1 \rangle}$ is the hidden state of the post-attention LSTM
- $a^{\langle t' \rangle}$ is the hidden state of the pre-attention LSTM.
- $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ are fed into a simple neural network, which learns the function to output $e^{\langle t, t' \rangle}$.
- $e^{\langle t, t' \rangle}$ is then used when computing the attention $a^{\langle t, t' \rangle}$ that $y^{\langle t \rangle}$ should pay to $a^{\langle t' \rangle}$.- The diagram on the right of figure 1 uses a `RepeatVector` node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times.
- Then it uses `Concatenation` to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$.
- The concatenation of $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ is fed into a "Dense" layer, which computes $e^{\langle t, t' \rangle}$.
- $e^{\langle t, t' \rangle}$ is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$.
- Note that the diagram doesn't explicitly show variable $e^{\langle t, t' \rangle}$, but $e^{\langle t, t' \rangle}$ is above the Dense layer and below the Softmax layer in the diagram in the right half of figure 1.
- We'll explain how to use `RepeatVector` and `Concatenation` in Keras below. ### Implementation Details
Let's implement this neural translator. You will start by implementing two functions: `one_step_attention()` and `model()`.
#### one_step_attention
* The inputs to the one_step_attention at time step $t$ are:
- $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$: all hidden states of the pre-attention Bi-LSTM.
- $s^{\langle t-1 \rangle}$: the previous hidden state of the post-attention LSTM
* one_step_attention computes:
- $[\alpha^{\langle t,1 \rangle},\alpha^{\langle t,2 \rangle}, ..., \alpha^{\langle t,T_x \rangle}]$: the attention weights
- $context^{ \langle t \rangle }$: the context vector:
$$context^{\langle t \rangle} = \sum_{t' = 1}^{T_x} \alpha^{\langle t,t' \rangle}a^{\langle t' \rangle}\tag{1}$$
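For completeness, the attention weights themselves are obtained by softmax-normalizing the energies over the input time steps (this matches the custom `softmax(axis = 1)` used in the code below):

$$\alpha^{\langle t, t' \rangle} = \frac{\exp\left(e^{\langle t, t' \rangle}\right)}{\sum_{t'' = 1}^{T_x} \exp\left(e^{\langle t, t'' \rangle}\right)}$$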
##### Clarifying 'context' and 'c'
- In the lecture videos, the context was denoted $c^{\langle t \rangle}$
- In the assignment, we are calling the context $context^{\langle t \rangle}$.
- This is to avoid confusion with the post-attention LSTM's internal memory cell variable, which is also denoted $c^{\langle t \rangle}$.#### Implement `one_step_attention`
**Exercise**: Implement `one_step_attention()`.
* The function `model()` will call the layers in `one_step_attention()` $T_y$ times using a for-loop.
* It is important that all $T_y$ copies have the same weights.
* It should not reinitialize the weights every time.
* In other words, all $T_y$ steps should have shared weights.
* Here's how you can implement layers with shareable weights in Keras:
1. Define the layer objects in a variable scope that is outside of the `one_step_attention` function. For example, defining the objects as global variables would work.
- Note that defining these variables inside the scope of the function `model` would technically work, since `model` will then call the `one_step_attention` function. For the purposes of making grading and troubleshooting easier, we are defining these as global variables. Note that the automatic grader will expect these to be global variables as well.
2. Call these objects when propagating the input.
* We have defined the layers you need as global variables.
* Please run the following cells to create them.
* Please note that the automatic grader expects these global variables with the given variable names. For grading purposes, please do not rename the global variables.
* Please check the Keras documentation to learn more about these layers. The layers are functions. Below are examples of how to call these functions.
* [RepeatVector()](https://keras.io/layers/core/#repeatvector)
```Python
var_repeated = repeat_layer(var1)
```
* [Concatenate()](https://keras.io/layers/merge/#concatenate)
```Python
concatenated_vars = concatenate_layer([var1,var2,var3])
```
* [Dense()](https://keras.io/layers/core/#dense)
```Python
var_out = dense_layer(var_in)
```
* [Activation()](https://keras.io/layers/core/#activation)
```Python
activation = activation_layer(var_in)
```
* [Dot()](https://keras.io/layers/merge/#dot)
```Python
dot_product = dot_layer([var1,var2])
```<jupyter_code># Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation = "tanh")
densor2 = Dense(1, activation = "relu")
activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook
dotor = Dot(axes = 1)
# GRADED FUNCTION: one_step_attention
def one_step_attention(a, s_prev):
"""
Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
"alphas" and the hidden states "a" of the Bi-LSTM.
Arguments:
a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)
Returns:
context -- context vector, input of the next (post-attention) LSTM cell
"""
### START CODE HERE ###
# Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
s_prev = repeator(s_prev)
# Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
# For grading purposes, please list 'a' first and 's_prev' second, in this order.
concat = concatenator([a, s_prev])
# Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈1 lines)
e = densor1(concat)
# Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈1 lines)
energies = densor2(e)
# Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line)
alphas = activator(energies)
# Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
context = dotor([alphas, a])
### END CODE HERE ###
return context<jupyter_output><empty_output><jupyter_text>You will be able to check the expected output of `one_step_attention()` after you've coded the `model()` function.#### model
* `model` first runs the input through a Bi-LSTM to get $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$.
* Then, `model` calls `one_step_attention()` $T_y$ times using a `for` loop. At each iteration of this loop:
- It gives the computed context vector $context^{\langle t \rangle}$ to the post-attention LSTM.
- It runs the output of the post-attention LSTM through a dense layer with softmax activation.
- The softmax generates a prediction $\hat{y}^{\langle t \rangle}$. **Exercise**: Implement `model()` as explained in figure 1 and the text above. Again, we have defined global layers that will share weights to be used in `model()`.<jupyter_code>n_a = 32 # number of units for the pre-attention, bi-directional LSTM's hidden state 'a'
n_s = 64 # number of units for the post-attention LSTM's hidden state "s"
# Please note, this is the post attention LSTM cell.
# For the purposes of passing the automatic grader
# please do not modify this global variable. This will be corrected once the automatic grader is also updated.
post_activation_LSTM_cell = LSTM(n_s, return_state = True) # post-attention LSTM
output_layer = Dense(len(machine_vocab), activation=softmax)<jupyter_output><empty_output><jupyter_text>Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps:
1. Propagate the input `X` into a bi-directional LSTM.
* [Bidirectional](https://keras.io/layers/wrappers/#bidirectional)
* [LSTM](https://keras.io/layers/recurrent/#lstm)
* Remember that we want the LSTM to return a full sequence instead of just the last hidden state.
Sample code:
```Python
sequence_of_hidden_states = Bidirectional(LSTM(units=..., return_sequences=...))(the_input_X)
```
2. Iterate for $t = 0, \cdots, T_y-1$:
1. Call `one_step_attention()`, passing in the sequence of hidden states $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$ from the pre-attention bi-directional LSTM, and the previous hidden state $s^{\langle t-1 \rangle}$ from the post-attention LSTM to calculate the context vector $context^{\langle t \rangle}$.
2. Give $context^{\langle t \rangle}$ to the post-attention LSTM cell.
- Remember to pass in the previous hidden-state $s^{\langle t-1\rangle}$ and cell-states $c^{\langle t-1\rangle}$ of this LSTM
* This outputs the new hidden state $s^{\langle t \rangle}$ and the new cell state $c^{\langle t \rangle}$.
Sample code:
```Python
next_hidden_state, _ , next_cell_state =
post_activation_LSTM_cell(inputs=..., initial_state=[prev_hidden_state, prev_cell_state])
```
Please note that the layer is actually the "post attention LSTM cell". For the purposes of passing the automatic grader, please do not modify the naming of this global variable. This will be fixed when we deploy updates to the automatic grader.
3. Apply a dense, softmax layer to $s^{\langle t \rangle}$, get the output.
Sample code:
```Python
output = output_layer(inputs=...)
```
4. Save the output by adding it to the list of outputs.
3. Create your Keras model instance.
* It should have three inputs:
* `X`, the one-hot encoded inputs to the model, of shape ($T_{x}$, humanVocabSize)
* $s^{\langle 0 \rangle}$, the initial hidden state of the post-attention LSTM
* $c^{\langle 0 \rangle}$, the initial cell state of the post-attention LSTM
* The output is the list of outputs.
Sample code
```Python
model = Model(inputs=[...,...,...], outputs=...)
```<jupyter_code># GRADED FUNCTION: model
def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
"""
Arguments:
Tx -- length of the input sequence
Ty -- length of the output sequence
n_a -- hidden state size of the Bi-LSTM
n_s -- hidden state size of the post-attention LSTM
human_vocab_size -- size of the python dictionary "human_vocab"
machine_vocab_size -- size of the python dictionary "machine_vocab"
Returns:
model -- Keras model instance
"""
# Define the inputs of your model with a shape (Tx,)
# Define s0 (initial hidden state) and c0 (initial cell state)
# for the decoder LSTM with shape (n_s,)
X = Input(shape=(Tx, human_vocab_size))
s0 = Input(shape=(n_s,), name='s0')
c0 = Input(shape=(n_s,), name='c0')
s = s0
c = c0
# Initialize empty list of outputs
outputs = []
### START CODE HERE ###
# Step 1: Define your pre-attention Bi-LSTM. (≈ 1 line)
a = Bidirectional(LSTM(n_a, return_sequences=True), input_shape=(m, Tx, n_a*2))(X)
# Step 2: Iterate for Ty steps
for t in range(Ty):
# Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
context = one_step_attention(a, s)
# Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
# Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c])
# Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
out = output_layer(s)
# Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
outputs.append(out)
# Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
model = Model(inputs=[X, s0, c0], outputs=outputs)
### END CODE HERE ###
return model<jupyter_output><empty_output><jupyter_text>Run the following cell to create your model.<jupyter_code>model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))<jupyter_output><empty_output><jupyter_text>#### Troubleshooting Note
* If you are getting repeated errors after an initially incorrect implementation of "model", but believe that you have corrected the error, you may still see error messages when building your model.
* A solution is to save and restart your kernel (or shutdown then restart your notebook), and re-run the cells.Let's get a summary of the model to check if it matches the expected output.<jupyter_code>model.summary()<jupyter_output>____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 30, 37) 0
____________________________________________________________________________________________________
s0 (InputLayer) (None, 64) 0
____________________________________________________________________________________________________
bidirectional_1 (Bidirectional) (None, 30, 64) 17920 input_1[0][0]
____________________________________________________________________________________________________
repeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0] [...]<jupyter_text>**Expected Output**:
Here is the summary you should see
| Item | Expected value |
|---|---|
| **Total params:** | 52,960 |
| **Trainable params:** | 52,960 |
| **Non-trainable params:** | 0 |
| **bidirectional_1's output shape** | (None, 30, 64) |
| **repeat_vector_1's output shape** | (None, 30, 64) |
| **concatenate_1's output shape** | (None, 30, 128) |
| **attention_weights's output shape** | (None, 30, 1) |
| **dot_1's output shape** | (None, 1, 64) |
| **dense_3's output shape** | (None, 11) |
#### Compile the model
* After creating your model in Keras, you need to compile it and define the loss function, optimizer and metrics you want to use.
* Loss function: 'categorical_crossentropy'.
* Optimizer: [Adam](https://keras.io/optimizers/#adam) [optimizer](https://keras.io/optimizers/#usage-of-optimizers)
- learning rate = 0.005
- $\beta_1 = 0.9$
- $\beta_2 = 0.999$
- decay = 0.01
* metric: 'accuracy'
Sample code
```Python
optimizer = Adam(lr=..., beta_1=..., beta_2=..., decay=...)
model.compile(optimizer=..., loss=..., metrics=[...])
```<jupyter_code>### START CODE HERE ### (≈2 lines)
opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
### END CODE HERE ###<jupyter_output><empty_output><jupyter_text>#### Define inputs and outputs, and fit the model
The last step is to define all your inputs and outputs to fit the model:
- You have input X of shape $(m = 10000, T_x = 30)$ containing the training examples.
- You need to create `s0` and `c0` to initialize your `post_activation_LSTM_cell` with zeros.
- Given the `model()` you coded, you need the "outputs" to be a list of $T_y = 10$ elements, each of shape $(m, \text{machine\_vocab\_size}) = (m, 11)$.
- The list `outputs[0][i], ..., outputs[Ty-1][i]` represents the true labels (characters) corresponding to the $i^{th}$ training example (`X[i]`).
- `outputs[j][i]` is the one-hot true label of the $j^{th}$ output character in the $i^{th}$ training example.<jupyter_code>s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
outputs = list(Yoh.swapaxes(0,1))<jupyter_output><empty_output><jupyter_text>Let's now fit the model and run it for one epoch.<jupyter_code>model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)<jupyter_output>Epoch 1/1
10000/10000 [==============================] - 38s - loss: 16.4548 - dense_3_loss_1: 1.1989 - dense_3_loss_2: 1.0088 - dense_3_loss_3: 1.7391 - dense_3_loss_4: 2.6718 - dense_3_loss_5: 0.6979 - dense_3_loss_6: 1.3030 - dense_3_loss_7: 2.6251 - dense_3_loss_8: 0.9003 - dense_3_loss_9: 1.7024 - dense_3_loss_10: 2.6074 - dense_3_acc_1: 0.5036 - dense_3_acc_2: 0.6813 - dense_3_acc_3: 0.3185 - dense_3_acc_4: 0.0964 - dense_3_acc_5: 0.9116 - dense_3_acc_6: 0.3766 - dense_3_acc_7: 0.0664 - dense_3_acc_8: 0.9061 - dense_3_acc_9: 0.2235 - dense_3_acc_10: 0.0840
<jupyter_text>While training you can see the loss as well as the accuracy on each of the 10 positions of the output.
For example, an entry such as `dense_3_acc_8: 0.89` would mean that the model is predicting the output character at position 7 (the 8th of the 10 positions) correctly 89% of the time in the current batch of data.
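If you want to verify these numbers yourself, the per-position accuracy can be recomputed directly from the model's predictions. This is an optional sanity check, not part of the graded assignment; it only assumes the arrays `Xoh`, `Yoh`, `s0`, and `c0` defined above.
```Python
# Optional sanity check: recompute per-position accuracy from the model's predictions.
# model.predict returns a list of Ty arrays, one per output position, each of shape (m, 11).
predictions = model.predict([Xoh, s0, c0])
for t, pred_t in enumerate(predictions):
    acc = np.mean(np.argmax(pred_t, axis=-1) == np.argmax(Yoh[:, t, :], axis=-1))
    print("position {}: accuracy {:.4f}".format(t, acc))
```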
We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.) <jupyter_code>model.load_weights('models/model.h5')<jupyter_output><empty_output><jupyter_text>You can now see the results on new examples.<jupyter_code>EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
source = string_to_int(example, Tx, human_vocab)
source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1)
prediction = model.predict([source, s0, c0])
prediction = np.argmax(prediction, axis = -1)
output = [inv_machine_vocab[int(i)] for i in prediction]
print("source:", example)
print("output:", ''.join(output),"\n")<jupyter_output>source: 3 May 1979
output: 1979-05-03
source: 5 April 09
output: 2009-05-05
source: 21th of August 2016
output: 2016-08-21
source: Tue 10 Jul 2007
output: 2007-07-10
source: Saturday May 9 2018
output: 2018-05-09
source: March 3 2001
output: 2001-03-03
source: March 3rd 2001
output: 2001-03-03
source: 1 March 2001
output: 2001-03-01
<jupyter_text>You can also change these examples to test with your own examples. The next part will give you a better sense of what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character. ## 3 - Visualizing Attention (Optional / Ungraded)
Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (such as the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize which part of the input each part of the output is looking at.
Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this:
**Figure 8**: Full Attention Map
Notice how the output ignores the "Saturday" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We also see that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018." ### 3.1 - Getting the attention weights from the network
Let's now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$.
To figure out where the attention values are located, let's start by printing a summary of the model .<jupyter_code>model.summary()<jupyter_output>____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 30, 37) 0
____________________________________________________________________________________________________
s0 (InputLayer) (None, 64) 0
____________________________________________________________________________________________________
bidirectional_1 (Bidirectional) (None, 30, 64) 17920 input_1[0][0]
____________________________________________________________________________________________________
repeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0] [...]<jupyter_text>Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_2` computes the context vector for every time step $t = 0, \ldots, T_y-1$. Let's get the attention weights from this layer.
The function `plot_attention_map()` pulls out the attention values from your model and plots them.<jupyter_code>attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64);<jupyter_output><empty_output>
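As a rough sketch of one way to get the same information yourself (the provided helper in `nmt_utils` handles this plus the plotting), you could collect the alphas from the layer named `attention_weights` shown in the summary above. Names such as `Tx`, `Ty`, `n_s`, `human_vocab`, `string_to_int`, and `to_categorical` come from earlier cells of this notebook.
```Python
# Sketch only: collect the alphas produced by the 'attention_weights' layer for one example.
# The layer is called once per output step, so it has Ty output nodes.
from keras import backend as K

example = "Tuesday 09 Oct 1993"
encoded = np.array(string_to_int(example, Tx, human_vocab))
encoded = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), encoded)))
encoded = encoded[np.newaxis, :, :]                                     # shape (1, Tx, len(human_vocab))

att_layer = model.get_layer('attention_weights')
get_alphas = K.function(model.inputs, [att_layer.get_output_at(t) for t in range(Ty)])
alphas = get_alphas([encoded, np.zeros((1, n_s)), np.zeros((1, n_s))])  # Ty arrays, each (1, Tx, 1)
attention_matrix = np.concatenate([a.reshape(1, Tx) for a in alphas], axis=0)  # (Ty, Tx)
```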
| no_license | /Week3_Code1_Neural_machine_translation_with_attention_v4a.ipynb | AmelMohamedMahmoud/NNETS---Course-5-Sequence-Models- | 16 |
<jupyter_start><jupyter_text>MACHINE LEARNING FINAL PROJECT, PHASE 2
CLASSIFICATION Libraries used <jupyter_code>import csv
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
import seaborn as sns
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
data_train = pd.read_csv("salju_train.csv")
data_test = pd.read_csv("salju_test.csv")<jupyter_output><empty_output><jupyter_text> The snow test data and snow train data are combined into a single dataframe <jupyter_code>df1 = pd.DataFrame(data_train)
df2 = pd.DataFrame(data_test)
frames = [df1, df2]
dataset = pd.concat(frames)<jupyter_output><empty_output><jupyter_text> Data Preprocessing <jupyter_code>dataset.shape
dataset.info()
dataset.describe()
import matplotlib.pyplot as plt
plt.figure(figsize=(15,10))
data1 = dataset.corr()
korelasi = sns.heatmap(data1, annot=True)<jupyter_output><empty_output><jupyter_text> drop unused columns <jupyter_code>df = dataset.drop(columns=['id', 'Tanggal', 'KodeLokasi'])<jupyter_output><empty_output><jupyter_text> check missing values <jupyter_code>df.isnull().sum()<jupyter_output><empty_output><jupyter_text> fill missing values <jupyter_code>df['SuhuMin'].fillna(df['SuhuMin'].mean(), inplace=True)
df['SuhuMax'].fillna(df['SuhuMax'].mean(), inplace=True)
df['Hujan'].fillna(df['Hujan'].mean(), inplace=True)
df['Penguapan'].fillna(df['Penguapan'].mean(), inplace=True)
df['SinarMatahari'].fillna(df['SinarMatahari'].mean(), inplace=True)
df['ArahAnginTerkencang'].fillna(df['ArahAnginTerkencang'].mode()[0], inplace=True) #categorical
df['KecepatanAnginTerkencang'].fillna(df['KecepatanAnginTerkencang'].mean(), inplace=True)
df['ArahAngin9am'].fillna(df['ArahAngin9am'].mode()[0], inplace=True) #categorical
df['ArahAngin3pm'].fillna(df['ArahAngin3pm'].mode()[0], inplace=True) #categorical
df['KecepatanAngin9am'].fillna(df['KecepatanAngin9am'].mean(), inplace=True)
df['KecepatanAngin3pm'].fillna(df['KecepatanAngin3pm'].mean(), inplace=True)
df['Kelembaban9am'].fillna(df['Kelembaban9am'].mean(), inplace=True)
df['Kelembaban3pm'].fillna(df['Kelembaban3pm'].mean(), inplace=True)
df['Tekanan9am'].fillna(df['Tekanan9am'].mean(), inplace=True)
df['Tekanan3pm'].fillna(df['Tekanan3pm'].mean(), inplace=True)
df['Awan9am'].fillna(df['Awan9am'].mean(), inplace=True)
df['Awan3pm'].fillna(df['Awan3pm'].mean(), inplace=True)
df['Suhu9am'].fillna(df['Suhu9am'].mean(), inplace=True)
df['Suhu3pm'].fillna(df['Suhu3pm'].mean(), inplace=True)
df['BersaljuHariIni'].fillna(df['BersaljuHariIni'].mode()[0], inplace=True) #categorical
df['BersaljuBesok'].fillna(df['BersaljuBesok'].mode()[0], inplace=True) #categorical
df.isnull().sum()<jupyter_output><empty_output><jupyter_text> Convert categorical data to numerical <jupyter_code>df['BersaljuHariIni'].replace(to_replace = ['No', 'Yes'], value =['Tidak','Ya'], inplace=True)
df['BersaljuBesok'].replace(to_replace = ['No', 'Yes'], value =['Tidak','Ya'], inplace=True)
temp = ['ArahAnginTerkencang','ArahAngin9am','ArahAngin3pm','BersaljuHariIni','BersaljuBesok']
for i in temp:
temp2 = LabelEncoder()
df[i] = temp2.fit_transform(df[i].astype('str'))
df<jupyter_output><empty_output><jupyter_text> check data outliers <jupyter_code>fig, axs = plt.subplots(5,5, figsize = (18,7))
sns.boxplot(x=df['SuhuMin'], ax = axs[0][0])
sns.boxplot(x=df['SuhuMax'], ax = axs[0][1])
sns.boxplot(x=df['Hujan'], ax = axs[0][2])
sns.boxplot(x=df['Penguapan'], ax = axs[0][3])
sns.boxplot(x=df['SinarMatahari'], ax = axs[0][4])
sns.boxplot(x=df['ArahAnginTerkencang'], ax = axs[1][0])
sns.boxplot(x=df['KecepatanAnginTerkencang'], ax = axs[1][1])
sns.boxplot(x=df['ArahAngin9am'], ax = axs[1][2])
sns.boxplot(x=df['ArahAngin3pm'], ax = axs[1][3])
sns.boxplot(x=df['Kelembaban9am'], ax = axs[1][4])
sns.boxplot(x=df['Kelembaban3pm'], ax = axs[2][0])
sns.boxplot(x=df['KecepatanAngin9am'], ax = axs[2][1])
sns.boxplot(x=df['KecepatanAngin3pm'], ax = axs[2][2])
sns.boxplot(x=df['Tekanan9am'], ax = axs[2][3])
sns.boxplot(x=df['Tekanan3pm'], ax = axs[2][4])
sns.boxplot(x=df['Awan9am'], ax = axs[3][0])
sns.boxplot(x=df['Awan3pm'], ax = axs[3][1])
sns.boxplot(x=df['Suhu9am'], ax = axs[3][2])
sns.boxplot(x=df['Suhu3pm'], ax = axs[3][3])
sns.boxplot(x=df['BersaljuHariIni'], ax = axs[3][4])
sns.boxplot(x=df['BersaljuBesok'], ax = axs[4][0])
plt.subplots_adjust(wspace=0.5, top=2)<jupyter_output><empty_output><jupyter_text> Data normalization <jupyter_code>minmax = MinMaxScaler()
datascaling = minmax.fit_transform(df)
kolom = ["Suhu9am","Suhu3pm","SuhuMin","SuhuMax","Hujan","Penguapan","SinarMatahari","ArahAnginTerkencang","KecepatanAnginTerkencang","ArahAngin9am","ArahAngin3pm","KecepatanAngin3pm","KecepatanAngin9am","Kelembaban9am","Kelembaban3pm","Tekanan9am","Tekanan3pm","Awan3pm","Awan9am","BersaljuBesok","BersaljuHariIni"]
scaled = pd.DataFrame(datascaling, columns=kolom)
scaled<jupyter_output><empty_output><jupyter_text> Classification
Model 1: Naive Bayes
Bersalju Hari Ini <jupyter_code>x = scaled[["Suhu9am","Suhu3pm","SuhuMin","SuhuMax","Hujan","Penguapan","SinarMatahari","ArahAnginTerkencang","KecepatanAnginTerkencang","ArahAngin9am","ArahAngin3pm","KecepatanAngin3pm","KecepatanAngin9am","Kelembaban9am","Kelembaban3pm","Tekanan9am","Tekanan3pm","Awan3pm","Awan9am"]]
y = scaled[["BersaljuHariIni"]]
X_train, X_test, y_train, y_test = train_test_split(x,y, test_size = 0.20)
model = GaussianNB()
model = model.fit(X_train, y_train)
y_pred_HariIni_GNB = model.predict(X_test)
from sklearn import metrics
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_sebenarnya_HariIni = y_test
cm = confusion_matrix(y_sebenarnya_HariIni, y_pred_HariIni_GNB)
print('Confusion Matrix :')
print(cm)
print('Accuracy Score :',accuracy_score(y_sebenarnya_HariIni, y_pred_HariIni_GNB))
print('Report : ')
print(classification_report(y_sebenarnya_HariIni, y_pred_HariIni_GNB))<jupyter_output>Confusion Matrix :
[[17412 2475]
[ 2386 3183]]
Accuracy Score : 0.8090430546825895
Report :
precision recall f1-score support
0.0 0.88 0.88 0.88 19887
1.0 0.56 0.57 0.57 5569
accuracy 0.81 25456
macro avg 0.72 0.72 0.72 25456
weighted avg 0.81 0.81 0.81 25456
<jupyter_text> BersaljuBesok<jupyter_code>x = scaled[["Suhu9am","Suhu3pm","SuhuMin","SuhuMax","Hujan","Penguapan","SinarMatahari","ArahAnginTerkencang","KecepatanAnginTerkencang","ArahAngin9am","ArahAngin3pm","KecepatanAngin3pm","KecepatanAngin9am","Kelembaban9am","Kelembaban3pm","Tekanan9am","Tekanan3pm","Awan3pm","Awan9am"]]
y = scaled[["BersaljuBesok"]]
X_train, X_test, y_train, y_test = train_test_split(x,y, test_size = 0.20)
model = GaussianNB()
model = model.fit(X_train, y_train)
y_pred_Besok_GNB = model.predict(X_test)
from sklearn import metrics
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_sebenarnya_Besok = y_test
cm = confusion_matrix(y_sebenarnya_Besok, y_pred_Besok_GNB)
print('Confusion Matrix :')
print(cm)
print('Accuracy Score :',accuracy_score(y_sebenarnya_Besok, y_pred_Besok_GNB))
print('Report : ')
print(classification_report(y_sebenarnya_Besok, y_pred_Besok_GNB))<jupyter_output>Confusion Matrix :
[[18544 1304]
[ 329 5279]]
Accuracy Score : 0.9358500942803268
Report :
precision recall f1-score support
0.0 0.98 0.93 0.96 19848
1.0 0.80 0.94 0.87 5608
accuracy 0.94 25456
macro avg 0.89 0.94 0.91 25456
weighted avg 0.94 0.94 0.94 25456
<jupyter_text> Model 2 ID3
Bersalju Hari Ini <jupyter_code>from sklearn.tree import DecisionTreeClassifier
x = scaled[["Suhu9am","Suhu3pm","SuhuMin","SuhuMax","Hujan","Penguapan","SinarMatahari","ArahAnginTerkencang","KecepatanAnginTerkencang","ArahAngin9am","ArahAngin3pm","KecepatanAngin3pm","KecepatanAngin9am","Kelembaban9am","Kelembaban3pm","Tekanan9am","Tekanan3pm","Awan3pm","Awan9am"]]
y = scaled[["BersaljuHariIni"]]
X_train, X_test, y_train, y_test = train_test_split(x,y, test_size = 0.20)
model = DecisionTreeClassifier(criterion='entropy')
model = model.fit(X_train, y_train)
y_pred_HariIni_ID3 = model.predict(X_test)
y_sebenarnya_HariIni = y_test
cm = confusion_matrix(y_sebenarnya_HariIni, y_pred_HariIni_ID3)
print('Confusion Matrix :')
print(cm)
print('Accuracy Score :',accuracy_score(y_sebenarnya_HariIni, y_pred_HariIni_ID3))
print('Report : ')
print(classification_report(y_sebenarnya_HariIni, y_pred_HariIni_ID3))<jupyter_output>Confusion Matrix :
[[17156 2736]
[ 2632 2932]]
Accuracy Score : 0.7891263356379635
Report :
precision recall f1-score support
0.0 0.87 0.86 0.86 19892
1.0 0.52 0.53 0.52 5564
accuracy 0.79 25456
macro avg 0.69 0.69 0.69 25456
weighted avg 0.79 0.79 0.79 25456
<jupyter_text> Bersalju Besok <jupyter_code>x = scaled[["Suhu9am","Suhu3pm","SuhuMin","SuhuMax","Hujan","Penguapan","SinarMatahari","ArahAnginTerkencang","KecepatanAnginTerkencang","ArahAngin9am","ArahAngin3pm","KecepatanAngin3pm","KecepatanAngin9am","Kelembaban9am","Kelembaban3pm","Tekanan9am","Tekanan3pm","Awan3pm","Awan9am"]]
y = scaled[["BersaljuBesok"]]
X_train, X_test, y_train, y_test = train_test_split(x,y, test_size = 0.20)
model = DecisionTreeClassifier(criterion='entropy')
model = model.fit(X_train, y_train)
y_pred_Besok_ID3 = model.predict(X_test)
y_sebenarnya_Besok = y_test
cm = confusion_matrix(y_sebenarnya_Besok, y_pred_Besok_ID3)
print('Confusion Matrix :')
print(cm)
print('Accuracy Score :',accuracy_score(y_sebenarnya_Besok, y_pred_Besok_ID3))
print('Report : ')
print(classification_report(y_sebenarnya_Besok, y_pred_Besok_ID3))
HasilSaljuHariIni = pd.DataFrame({
'Hasil Sebenarnya': y_sebenarnya_HariIni['BersaljuHariIni'],
'Naive Bayes': y_pred_HariIni_GNB,
'ID3' : y_pred_HariIni_ID3
})
HasilSaljuHariIni
HasilSaljuBesok = pd.DataFrame({
'Hasil Sebenarnya': y_sebenarnya_Besok['BersaljuBesok'],
'Naive Bayes': y_pred_HariIni_GNB,
'ID3' : y_pred_HariIni_ID3
})
HasilSaljuBesok
HasilSaljuHariIni.to_csv('HasilSaljuHariIni.csv')
HasilSaljuBesok.to_csv('HasilSaljuBesok.csv')<jupyter_output><empty_output>
| no_license | /Tubes2/MCL-tubes-klasifikasi.ipynb | wanda-w/Classification | 13 |
<jupyter_start><jupyter_text>1. Get funding.
2. Do work.
* Design experiment.
* Collect data.
* Analyze.
3. Write up.
4. Publish.<jupyter_code>7 * 3
2 + 1
x = 6 * 7 + 12
print(x)<jupyter_output>54
| non_permissive | /01-run-quit.ipynb | n2ygk/swc-py-2019-08-26 | 1 |
<jupyter_start><jupyter_text># Sentiment Analysis of Health Tweets -- Exploratory Data Analysis
## Introduction
After the data cleaning step, the next step is to explore the data before doing any deep learning. Here I will check the **Most Common Words** to see which health keywords people are talking about most at the moment.
## Loading the data
<jupyter_code>#load python packages
import os
import pandas as pd
import datetime
import time
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import re
import string
from datetime import datetime
from nltk.corpus import stopwords
import nltk
from collections import Counter
from wordcloud import WordCloud
%matplotlib inline
df = pd.read_csv('data/clean_data.csv',index_col=0)
df.head()
df.info()
df[df.text.isnull()]
df.dropna(inplace=True)
df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
Int64Index: 146043 entries, 0 to 146045
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 user_name 146043 non-null int64
1 date 146043 non-null object
2 text 146043 non-null object
dtypes: int64(1), object(2)
memory usage: 4.5+ MB
<jupyter_text>### Create a big list of words out of all the tweets
Let's put all words in all tweets together to check the most common words.<jupyter_code>words_list = [word for line in df.text for word in line.split()]
words_list[:5]<jupyter_output><empty_output><jupyter_text>### Create the WordCloud<jupyter_code>wc = WordCloud(width=800, height=500, random_state=42)
plt.figure(figsize=(10, 7))
wc.generate(' '.join(words_list))
plt.imshow(wc, interpolation="bilinear")
plt.title("Vocabulary from All Tweets", fontsize = 22)
plt.axis("off")
plt.savefig(r'C:\Users\yuhan\Desktop\Springboard\Capstone project-3 Health_Care\figures\wordcloud_all_tweets.png')<jupyter_output><empty_output><jupyter_text>From the WordCloud figher above we can see that people talked about 'sore', 'sick', 'anxiety', 'pain' a lot. But there're some frequent words have very little meaning and could be added to a stop words list.
**Check the frequecy of each word**<jupyter_code>word_counts = Counter(words_list).most_common()
words_df = pd.DataFrame(word_counts)
words_df.columns = ['word', 'frequency']
words_df = words_df.sort_values('frequency', ascending=False)
words_df.head(10)<jupyter_output><empty_output><jupyter_text>**Check the tweets including Trump**<jupyter_code># The data was scraped during the election period, let's check how many tweets are about Trump
df_trump = df[df.text.str.contains('trump')]
print('The number of tweets about Trump: {}'.format(len(df_trump)))
print('The percentage of tweets about Trump: {}%'.format(round(len(df_trump) / len(df) * 100, 2)))
# Let's check how many tweets talk about election when they talk about Trump
df_trump_election = df_trump[df_trump.text.str.contains('election|vote|voting|debates|ivoted|electoral|voter|biden')]
print('The number of tweets about Trump with election: {}'.format(len(df_trump_election)))
print('The percentage of tweets about Trump with election: {}%'.format(round(len(df_trump_election) / len(df_trump) * 100, 2)))
# Let's drop those tweets talk about election since we only concern the health in this project
cond = df.index.isin(df_trump_election.index)
df.drop(df[cond].index, inplace = True)
df.shape<jupyter_output><empty_output><jupyter_text>**Update the words_list**<jupyter_code># get a new words_list from data without election
words_list = [word for line in df.text for word in line.split()]
word_counts = Counter(words_list).most_common()
words_df = pd.DataFrame(word_counts)
words_df.columns = ['word', 'frequency']
words_df = words_df.sort_values('frequency', ascending=False)
words_df.head(50)
# read the keywords from txt to a list
txt_list = []
with open('keywords.txt', "r") as f:
txt_list = f.read().split()
keywords = [word.strip(',') for word in txt_list]
# get top most common words that the count > 5000 and not including sick
add_stopwords = [word for word, count in word_counts if count > 4000 if word not in keywords]
add_stopwords
from sklearn.feature_extraction import text
# Add new stop words
stop_words = text.ENGLISH_STOP_WORDS.union(add_stopwords)
wc = WordCloud(stopwords=stop_words,width=800, height=500, random_state=42,max_font_size=250)
plt.figure(figsize=(10, 7))
wc.generate(' '.join(words_list))
plt.imshow(wc, interpolation="bilinear")
plt.title("Vocabulary from all tweets without unmeaning frequent words", fontsize = 18)
plt.axis("off")
plt.savefig(r'C:\Users\yuhan\Desktop\Springboard\Capstone project-3 Health_Care\figures\wordcloud_all_tweets_without_unmeaning.png')<jupyter_output><empty_output><jupyter_text>After removing unmeaning words, we can see that our data looks more sence. And we can see that people is talking about 'sick', 'ill', 'body,'pain','anxiety', 'ill', 'hurts' more ofen.
### Let's check how often people talk about for each word<jupyter_code>words_list_keywords = [word for word in words_list if word in keywords]
wc = WordCloud(width=800, height=500, random_state=42,max_font_size=250)
plt.figure(figsize=(10, 7))
wc.generate(' '.join(words_list_keywords))
plt.imshow(wc, interpolation="bilinear")
plt.title("Health words freqency in all tweets", fontsize = 22)
plt.axis("off")
plt.savefig(r'C:\Users\yuhan\Desktop\Springboard\Capstone project-3 Health_Care\figures\wordcloud_keywords_frequecy.png')
word_counts_keywords = Counter(words_list_keywords).most_common()
keywords_df = pd.DataFrame(word_counts_keywords)
keywords_df.columns = ['word', 'frequency']
keywords_df = keywords_df.sort_values('frequency', ascending=False)
keywords_df.head()
plt.figure(figsize=(12, 8))
sns.barplot(x='frequency', y='word', data=keywords_df[:20], palette="Blues_d")
plt.title("Health words frequency in all tweets", fontsize = 22)
plt.savefig(r'C:\Users\yuhan\Desktop\Springboard\Capstone project-3 Health_Care\figures\bar_keywords_frequency.png')<jupyter_output><empty_output><jupyter_text>## Conclusion
From the word cloud figures above we can see that people talk about 'sick', 'body', 'pain', 'ill', 'cold', 'sore', 'anxiety', 'killing', and 'vaccine' a lot.## Export data to a new csv file <jupyter_code># save the clean data for later use
df.to_csv('data/clean_data_2.csv')<jupyter_output><empty_output>
| no_license | /Capstone project-3 Health_Care/3-Exploratory-Data-Analysis.ipynb | yuhan0623/Springboard | 8 |
<jupyter_start><jupyter_text># Titanic Dataset
In this notebook, I am just trying to implement Logistic Regression.<jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
train = pd.read_csv("../input/train.csv")
train.head()<jupyter_output><empty_output><jupyter_text>Finding the null values<jupyter_code>sns.heatmap(train.isnull(),yticklabels=False,cbar=False,
cmap='viridis')
sns.set_style('whitegrid')
sns.countplot(x="Survived",hue="Pclass",data=train)
sns.distplot(train['Age'].dropna(),kde=False,bins=30)
train.info()
sns.countplot(x='SibSp',data=train)
train['Fare'].hist(bins=40,figsize=(12,6))<jupyter_output><empty_output><jupyter_text>## Data cleaning process starts here
Using a boxplot of Age by Pclass, we can read off a typical (median) age for each passenger class, which we will use to impute the missing ages.<jupyter_code>plt.figure(figsize=(12, 7))
sns.boxplot(x='Pclass',y="Age",data=train)<jupyter_output><empty_output><jupyter_text>Implement a function to fill the null value in the age<jupyter_code>def impute_age(cols):
Age = cols[0]
Pclass = cols[1]
if pd.isnull(Age):
if Pclass == 1:
return 37
elif Pclass == 2:
return 29
else:
return 24
else:
return Age
train['Age'] = train[['Age','Pclass']].apply(impute_age,axis=1)<jupyter_output><empty_output><jupyter_text>Age value has been filled.See the below diagram<jupyter_code>sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis')
train.drop('Cabin',axis=1,inplace=True)
train.dropna(inplace=True)<jupyter_output><empty_output><jupyter_text>All the null values has been cleared<jupyter_code>sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis')
sex = pd.get_dummies(train['Sex'],drop_first=True)
embark = pd.get_dummies(train['Embarked'],drop_first=True)
train.drop(['Sex','Embarked','Name','Ticket'],axis=1,inplace=True)
train = pd.concat([train,sex,embark],axis=1)
train.head()<jupyter_output><empty_output><jupyter_text>## Developin Logistic Regression Model starts here <jupyter_code>from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(train.drop('Survived',axis=1),
train['Survived'], test_size=0.30,
random_state=101)
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(X_train,y_train)
predictions = logmodel.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test,predictions))
from sklearn.metrics import accuracy_score
accuracy_score(y_test,predictions)<jupyter_output><empty_output><jupyter_text>Lets make a submission
<jupyter_code>test = pd.read_csv("../input/test.csv")
test.head()
test['Age'] = test[['Age','Pclass']].apply(impute_age,axis=1)
test.drop('Cabin',axis=1,inplace=True)
test.dropna(inplace=True)
sns.heatmap(test.isnull(),yticklabels=False,cbar=False,cmap='viridis')
sex = pd.get_dummies(test['Sex'],drop_first=True)
embark = pd.get_dummies(test['Embarked'],drop_first=True)
test.drop(['Sex','Embarked','Name','Ticket'],axis=1,inplace=True)
test = pd.concat([test,sex,embark],axis=1)
test.head()
predictions = logmodel.predict(test)
output = pd.Series(predictions)
output.to_csv("output.csv")<jupyter_output><empty_output>
| no_license | /Notebooks/py/karthickaravindan/logistic-regression/logistic-regression.ipynb | nischalshrestha/automatic_wat_discovery | 8 |
<jupyter_start><jupyter_text><jupyter_code># imdb SENTIMENT ANALYSIS
from __future__ import absolute_import, print_function, division, unicode_literals
import tensorflow as tf
tf.enable_eager_execution()
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
def plot_graphs(history,string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string],'')
plt.xlabel('Epochs')
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
# Setup input pipeline
dataset,info = tfds.load('imdb_reviews/subwords8k',with_info=True,
as_supervised=True)
train_dataset,test_dataset = dataset['train'],dataset['test']
encoder = info.features['text'].encoder
print('Vocab size',encoder.vocab_size)
# Prepare the data for training
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)
# Create the model
model = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size,64 ),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1,activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy'])
# TRAIN THE MODEL
history = model.fit(train_dataset,epochs=10,
validation_data=test_dataset,
validation_steps=30)
test_loss, test_acc = model.evaluate(test_dataset)
print('Test Loss: {}'.format(test_loss))
print('Test Accuracy: {}'.format(test_acc))
def pad_to_size(vec, size):
zeros = [0] * (size - len(vec))
vec.extend(zeros)
return vec
def sample_predict(sentence, pad):
encoded_sample_pred_text = encoder.encode(sample_pred_text)
if pad:
encoded_sample_pred_text = pad_to_size(encoded_sample_pred_text, 64)
encoded_sample_pred_text = tf.cast(encoded_sample_pred_text, tf.float32)
predictions = model.predict(tf.expand_dims(encoded_sample_pred_text, 0))
return (predictions)
# predict on a sample text without padding.
sample_pred_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = sample_predict(sample_pred_text, pad=False)
print (predictions)
# predict on a sample text with padding
sample_pred_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = sample_predict(sample_pred_text, pad=True)
print (predictions)
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
<jupyter_output><empty_output>
| no_license | /Text_Classification_with_an_RNN_(tf).ipynb | Georgemburu/MACHINE-LEARNING | 1 |
<jupyter_start><jupyter_text># Data analysis#### Importing functions and reading files<jupyter_code>import pandas as pd
results = pd.read_csv("../your-project/data/final_db_percentages.csv", index_col=0)<jupyter_output><empty_output><jupyter_text>### Analysis of the participation regarding family income index<jupyter_code>participation = results.groupby(["district_name","participation", "index_family_income"]).max().sort_values("participation", axis=0, ascending=False, inplace=False).reset_index()
participation.drop(['percentage_CUP', 'percentage_Cs', 'percentage_PP',
'percentage_ECP', 'percentage_ERC', 'percentage_JxCAT',
'percentage_PSC', 'percentage_VOX', 'electors',
'total_votes'], axis=1, inplace= True)
participation
participation.corr()<jupyter_output><empty_output><jupyter_text>### Analysis of the Class voting reality- Conservative block: Cs, PP, VOX, JxCAT
- Progressist block: CUP-PR, PSOE, ECP-GUANYEM, ERC<jupyter_code>#Creating columns for each block
class_voting = results.copy()
class_voting['conservative']=class_voting['percentage_Cs'] + class_voting['percentage_PP'] + class_voting['percentage_VOX'] + class_voting['percentage_JxCAT']
class_voting['progressist']=class_voting['percentage_CUP'] + class_voting['percentage_PSC'] + class_voting['percentage_ECP'] + class_voting['percentage_ERC']  # CUP + PSC + ECP + ERC, matching the progressist block defined above
#Assigning a new value to the class_voting analysis
class_voting.drop(['electors', 'total_votes',
'participation', 'percentage_CUP', 'percentage_Cs', 'percentage_PP',
'percentage_ECP', 'percentage_ERC', 'percentage_JxCAT',
'percentage_PSC', 'percentage_VOX'], axis=1, inplace= True)
class_voting.sort_values("index_family_income", axis=0, ascending=False, inplace=False)
class_voting.corr()<jupyter_output><empty_output><jupyter_text>### Analysis of the identitary voting reality- Independentist block: CUP-PR, JxCAT, ERC
- Unpositioned block: ECP-GUANYEM
- Unionist block: Cs, PP, VOX, PSOE<jupyter_code>identitary_voting = results.copy()
identitary_voting['independentist_block']=identitary_voting['percentage_CUP'] + identitary_voting['percentage_ERC'] + identitary_voting['percentage_JxCAT']
identitary_voting['unionist_block']=identitary_voting['percentage_Cs'] + identitary_voting['percentage_PP']+ identitary_voting['percentage_PSC']+ identitary_voting['percentage_VOX']
identitary_voting["unpositioned"]= identitary_voting["percentage_ECP"]
identitary_voting.drop(['electors', 'total_votes',
'participation', 'percentage_CUP', 'percentage_Cs', 'percentage_PP',
'percentage_ECP', 'percentage_ERC', 'percentage_JxCAT',
'percentage_PSC', 'percentage_VOX'], axis=1, inplace= True)
identitary_voting.sort_values("index_family_income", axis=0, ascending=False, inplace=False)
identitary_voting.corr()<jupyter_output><empty_output>
| no_license | /your-project/Data analysis .ipynb | sergiomonge23/Project-Week-2-Barcelona | 4 |
<jupyter_start><jupyter_text>Link to blog: https://medium.com/@zhiwei_zhang/amazon-employee-access-challenge-cba3fd9bae9a
link to final Kaggle Submission:
https://github.com/data1030/a6-zzhang83/blob/master/FinalSubmission.csvFinal Score : 0.89567<jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn import metrics
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import RFECV
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import StratifiedKFold
from sklearn import (metrics, cross_validation, linear_model, preprocessing)
from sklearn.cross_validation import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score, GridSearchCV<jupyter_output>/Users/zhiweizhang/anaconda/lib/python3.6/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
<jupyter_text># Loading dataset and split into features and responses<jupyter_code>train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
train_x = train.iloc[:,1:9] # drop last column role-code because it is unique to each row
train_y = train.iloc[:,0] # target
test_x = test.iloc[:,1:9] # drop last column <jupyter_output><empty_output><jupyter_text> # Feature Selection:
Tried Hybrid Feature, One-hot Encoding; Eliminating Unimportant Features; RFECV # Eliminating Unimportant Features:<jupyter_code># does not work well, can't simply drop features.
def selectimportance(model, X, k=5):
return X[:,model.feature_importances_.argsort()[::-1][:k]]
#******* RFECV with RandomForest
# same reason as above, dropping original features leads to worse performance
estimator = RandomForestClassifier()
selector = RFECV(estimator=estimator, step=1, cv=5, scoring='accuracy')
selector.fit(train_x, train_y)
selector.ranking_
# new_x = selector.transform(train_x) <jupyter_output><empty_output><jupyter_text> # Hybrid Feature: ROLE_ROLLUP_1 & ROLE_ROLLUP_2<jupyter_code>combined = pd.concat([train_x, test_x])
combined['Hybrid'] = (combined['ROLE_ROLLUP_1'] + combined['ROLE_ROLLUP_2'])
del combined['ROLE_ROLLUP_1']
del combined['ROLE_ROLLUP_2']<jupyter_output><empty_output><jupyter_text># One-hot encoding<jupyter_code># ******* one-hot encoding using pd_dummies (32769, 16592)
X_dummies = pd.get_dummies(combined, columns=combined.columns, drop_first = True)
new_x = X_dummies[:32769]
newtest_x = X_dummies[32769:] # update test dataset
# ******* eliminating unimportant features from one hot encoding
select = SelectKBest(score_func=f_classif, k = 9000) # keep 9000
fit = select.fit(new_x, train_y)
new_x2 = fit.transform(new_x)
newtest_x2 = fit.transform(newtest_x)<jupyter_output>/Users/zhiweizhang/anaconda/lib/python3.6/site-packages/sklearn/feature_selection/univariate_selection.py:113: UserWarning: Features [ 7533 7559 7612 ..., 16460 16462 16465] are constant.
UserWarning)
/Users/zhiweizhang/anaconda/lib/python3.6/site-packages/sklearn/feature_selection/univariate_selection.py:114: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
<jupyter_text># Logistic Regression<jupyter_code>lr = LogisticRegression() # initial model with original dataset
# ROC plot
X_train, X_test, y_train, y_test = train_test_split(train_x, train_y, test_size = 0.3,random_state=0)
lr.fit(X_train, y_train) # initial model
probs = lr.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
print('AUC =',roc_auc) # 0.521294334691
# method I: plt
import matplotlib.pyplot as plt
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
lr = LogisticRegression()
params = {'C':[1.5, 2, 2.5, 3, 3.5, 4, 4.5]}
lrgrid = GridSearchCV(estimator = lr, param_grid = params, cv=5, scoring = 'accuracy',return_train_score=True)
lrgrid.fit(new_x2,train_y.values)
lrgrid.best_params_
# fit model on new dataframe from one-hot encoding
lr = LogisticRegression(C=3.5)
score = cross_val_score(lr,new_x2,train_y, cv = 10, scoring = 'roc_auc')  # cross-validate the C=3.5 estimator defined above
score.mean() # 0.87897722872753992
# predict using trained logistic model
lrmodel = lr.fit(new_x2,train_y)
probs = lrmodel.predict_proba(newtest_x2)
preds_logistic = probs[:,1]
lrmodel = LogisticRegression(C=3.5)
X_train, X_test, y_train, y_test = train_test_split(new_x2, train_y, test_size = 0.3,random_state=0)
lrmodel.fit(X_train,y_train)
probs = lrmodel.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
print('AUC =',roc_auc) # 0.881069428452
# method I: plt
import matplotlib.pyplot as plt
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()<jupyter_output>AUC = 0.881069428452
<jupyter_text># Random Forest<jupyter_code>clf = RandomForestClassifier() # initial model with no parameter
clf = clf.fit(train_x, train_y)
X_train, X_test, y_train, y_test = train_test_split(train_x, train_y, test_size = 0.3,random_state=0)
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
print('AUC =',roc_auc) # 0.798039419179
# method I: plt
import matplotlib.pyplot as plt
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# Tuning and Using the best parameters to make predictions
clf = RandomForestClassifier()
clf = clf.fit(train_x.values, train_y.values)
params = {'max_features': [0.4,0.8], 'n_estimators':[300,400],
'min_samples_leaf':[1,2,4],'min_samples_split':[8,12],'max_depth':[7,15,20]}
RFgrid = RandomizedSearchCV(estimator = clf, param_distributions = params, cv=5, scoring = 'accuracy',return_train_score=True)
RFgrid.fit(train_x.values,train_y.values)
print(RFgrid.best_params_)
RFmodel = RandomForestClassifier(max_depth = 20, max_features=0.8, min_samples_leaf = 1, min_samples_split = 8, n_estimators = 400)
score = cross_val_score(RFmodel,train_x.values,train_y.values, cv = 10, scoring = 'roc_auc')
score.mean()
# predict using random forests
RFmodel.fit(train_x.values, train_y.values)
prods = RFmodel.predict_proba(test_x)
preds_rf = prods[:,1]
# final Random Forest ROC plot
RFmodel = RandomForestClassifier(max_depth = 20, max_features=0.8, min_samples_leaf = 1, min_samples_split = 8, n_estimators = 400)
X_train, X_test, y_train, y_test = train_test_split(train_x, train_y, test_size = 0.3,random_state=0)
RFmodel.fit(X_train, y_train)
probs = RFmodel.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
print('AUC =', roc_auc)
# method I: plt
import matplotlib.pyplot as plt
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()<jupyter_output>AUC = 0.854655014539
<jupyter_text># Gradient Boosting<jupyter_code>clf_gb = GradientBoostingClassifier()
clf_gb.fit(train_x, train_y) # initial model
X_train, X_test, y_train, y_test = train_test_split(train_x, train_y, test_size = 0.3,random_state=0)
clf_gb.fit(X_train, y_train)
probs = clf_gb.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
print('AUC = ',roc_auc )
# method I: plt
import matplotlib.pyplot as plt
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# Hyperparameter Optimization
params = {'max_depth':[5,10,15],'n_estimators':[20,30,40,50]}
GBgrid = RandomizedSearchCV(estimator = clf_gb, param_distributions = params, cv=5, scoring = 'accuracy',return_train_score=True)
GBgrid.fit(train_x,train_y)
GBgrid.best_params_
GBmodel = GradientBoostingClassifier(max_depth = 10,n_estimators = 40)
score = cross_val_score(GBmodel,train_x,train_y, cv = 10, scoring = 'roc_auc')
score.mean() # 0.84802305636358033
# predict using trained gradient boosting classifier
GBmodel.fit(train_x.values, train_y.values)
y_pred_class = GBmodel.predict_proba(test_x)
preds_gb = y_pred_class[:,1]
GBmodel = GradientBoostingClassifier(max_depth = 10,n_estimators = 40)
X_train, X_test, y_train, y_test = train_test_split(train_x, train_y, test_size = 0.3,random_state=0)
GBmodel.fit(X_train, y_train)
probs = GBmodel.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
print('AUC = ',roc_auc )
# method I: plt
import matplotlib.pyplot as plt
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()<jupyter_output>AUC = 0.836208582592
<jupyter_text># Ensemble VotingClassifer<jupyter_code>lr = LogisticRegression(C=3.5)
lrmodel = lr.fit(new_x2,train_y)
RFmodel = RandomForestClassifier(max_depth = 20, max_features=0.8, min_samples_leaf = 1, min_samples_split = 8, n_estimators = 400)
RFmodel.fit(train_x.values, train_y.values)
GBmodel = GradientBoostingClassifier(max_depth = 10,n_estimators = 40)
GBmodel.fit(train_x.values, train_y.values)
eclf = VotingClassifier(estimators=[('RF',RFmodel), ('LR',lrmodel),('gradient',GBmodel)],
voting='soft',weights=[1,1,1])
eclf.fit(train_x,train_y)
score = cross_val_score(eclf, train_x, train_y, cv=5, scoring = 'roc_auc')
score.mean()
X_train, X_test, y_train, y_test = train_test_split(train_x, train_y, test_size = 0.3,random_state=0)
eclf.fit(X_train, y_train)
probs = eclf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
print('AUC = ',roc_auc ) #
# method I: plt
import matplotlib.pyplot as plt
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()<jupyter_output>AUC = 0.859545788888
<jupyter_text># Final Submission<jupyter_code># Logistic
lr = LogisticRegression(C=3.5)
lrmodel = lr.fit(new_x2,train_y)
probs = lrmodel.predict_proba(newtest_x2)
preds_logistic = probs[:,1]
# random forest classifier
RFmodel = RandomForestClassifier(max_depth = 20, max_features=0.8, min_samples_leaf = 1, min_samples_split = 8, n_estimators = 400)
RFmodel.fit(train_x.values, train_y.values)
prods = RFmodel.predict_proba(test_x)
preds_rf = prods[:,1]
# gradient boosting classifier
GBmodel = GradientBoostingClassifier(max_depth = 10,n_estimators = 40)
GBmodel.fit(train_x.values, train_y.values)
y_pred_class = GBmodel.predict_proba(test_x)
preds_gb = y_pred_class[:,1]
Action = (preds_logistic + preds_rf + preds_gb)/3
id_ = list(range(1,58922))
df = pd.DataFrame({'Id':id_,'Action':Action})
df.set_index('Id', inplace=True)
df.to_csv('FinalSubmission.csv') # 0.8956
pd.read_csv('FinalSubmission.csv')<jupyter_output><empty_output>
| no_license | /KaggleAmazon.ipynb | zzhang83/Kaggle-AmazonEmployeeAccessChallenge | 10 |
<jupyter_start><jupyter_text>##### Copyright 2019 The TensorFlow Authors.<jupyter_code>#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.<jupyter_output><empty_output><jupyter_text># Text classification with TensorFlow Lite Model Maker
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial consists of positive and negative movie reviews.## Prerequisites
### Install the required packages
To run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
**If you run this notebook on Colab, you may see an error message about `tensorflowjs` and `tensorflow-hub` version incompatibility. It is safe to ignore this error as we do not use `tensorflowjs` in this workflow.**<jupyter_code>!pip install -q tflite-model-maker<jupyter_output><empty_output><jupyter_text>Import the required packages.<jupyter_code>import numpy as np
import os
from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker import TextClassifierDataLoader
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')<jupyter_output><empty_output><jupyter_text>### Download the sample training data.
In this tutorial, we will use the [SST-2](https://nlp.stanford.edu/sentiment/index.html) (Stanford Sentiment Treebank) which is one of the tasks in the [GLUE](https://gluebenchmark.com/) benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. The dataset has two classes: positive and negative movie reviews.<jupyter_code>data_dir = tf.keras.utils.get_file(
fname='SST-2.zip',
origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip',
extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')<jupyter_output><empty_output><jupyter_text>The SST-2 dataset is stored in TSV format. The only difference between TSV and CSV is that TSV uses a tab `\t` character as its delimiter instead of a comma `,` in the CSV format.
Here are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.
| sentence | label | | | |
|-------------------------------------------------------------------------------------------|-------|---|---|---|
| hide new secretions from the parental units | 0 | | | |
| contains no wit , only labored gags | 0 | | | |
| that loves its characters and communicates something rather beautiful about human nature | 1 | | | |
| remains utterly satisfied to remain the same throughout | 0 | | | |
| on the worst revenge-of-the-nerds clichés the filmmakers could dredge up | 0 | | | |
Next, we will load the dataset into a Pandas dataframe and change the current label names (`0` and `1`) to a more human-readable ones (`negative` and `positive`) and use them for model training.
<jupyter_code>import pandas as pd
def replace_label(original_file, new_file):
# Load the original file to pandas. We need to specify the separator as
# '\t' as the training data is stored in TSV format
df = pd.read_csv(original_file, sep='\t')
# Define how we want to change the label name
label_map = {0: 'negative', 1: 'positive'}
# Excute the label change
df.replace({'label': label_map}, inplace=True)
# Write the updated dataset to a new file
df.to_csv(new_file)
# Replace the label name for both the training and test dataset. Then write the
# updated CSV dataset to the current folder.
replace_label(os.path.join(os.path.join(data_dir, 'train.tsv')), 'train.csv')
replace_label(os.path.join(os.path.join(data_dir, 'dev.tsv')), 'dev.csv')<jupyter_output><empty_output><jupyter_text>## Quickstart
There are five steps to train a text classification model:
**Step 1. Choose a text classification model archiecture.**
Here we use the average word embedding model architecture, which will produce a small and fast model with decent accuracy.<jupyter_code>spec = model_spec.get('average_word_vec')<jupyter_output><empty_output><jupyter_text>Model Maker also supports other model architectures such as [BERT](https://arxiv.org/abs/1810.04805). If you are interested to learn about other architecture, see the [Choose a model architecture for Text Classifier](#scrollTo=kJ_B8fMDOhMR) section below.**Step 2. Load the training and test data, then preprocess them according to a specific `model_spec`.**
Model Maker can take input data in the CSV format. We will load the training and test dataset with the human-readable label name that were created earlier.
Each model architecture requires input data to be processed in a particular way. `TextClassifierDataLoader` reads the requirement from `model_spec` and automatically execute the necessary preprocessing.<jupyter_code>train_data = TextClassifierDataLoader.from_csv(
filename='train.csv',
text_column='sentence',
label_column='label',
model_spec=spec,
is_training=True)
test_data = TextClassifierDataLoader.from_csv(
filename='dev.csv',
text_column='sentence',
label_column='label',
model_spec=spec,
is_training=False)<jupyter_output><empty_output><jupyter_text>**Step 3. Train the TensorFlow model with the training data.**
The average word embedding model use `batch_size = 32` by default. Therefore you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. We will train the model for 10 epochs, which means going through the training dataset 10 times.<jupyter_code>model = text_classifier.create(train_data, model_spec=spec, epochs=10)<jupyter_output><empty_output><jupyter_text>**Step 4. Evaluate the model with the test data.**
After training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model perform against new data it has never seen before.
As the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset.<jupyter_code>loss, acc = model.evaluate(test_data)<jupyter_output><empty_output><jupyter_text>**Step 5. Export as a TensorFlow Lite model.**
Let's export the text classification that we have trained in the TensorFlow Lite format. We will specify which folder to export the model.
You may see an warning about `vocab.txt` file does not exist in the metadata but they can be safely ignore.<jupyter_code>model.export(export_dir='average_word_vec')<jupyter_output><empty_output><jupyter_text>You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the `average_word_vec` folder as we specified in `export_dir` parameter above, right-click on the `model.tflite` file and choose `Download` to download it to your local computer.
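If you want a quick local sanity check before wiring the file into an app, the sketch below (not part of this tutorial's workflow) loads the exported model with the TFLite Python interpreter and prints its input and output tensor details. The path assumes the `export_dir='average_word_vec'` used above.
```Python
# Optional sanity check: load the exported model with the TFLite Python interpreter
# and inspect its input/output tensors. Path assumes export_dir='average_word_vec'.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='average_word_vec/model.tflite')
interpreter.allocate_tensors()

for d in interpreter.get_input_details():
    print('input :', d['name'], d['shape'], d['dtype'])
for d in interpreter.get_output_details():
    print('output:', d['name'], d['shape'], d['dtype'])
```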
This model can be integrated into an Android or an iOS app using the [NLClassifier API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/nl_classifier) of the [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview).
See the [TFLite Text Classification sample app](https://github.com/tensorflow/examples/blob/master/lite/examples/text_classification/android/lib_task_api/src/main/java/org/tensorflow/lite/examples/textclassification/client/TextClassificationClient.java#L54) for more details on how the model is used in an working app.
*Note 1: Android Studio Model Binding does not support text classification yet so please use the TensorFlow Lite Task Library.*
*Note 2: There is a `model.json` file in the same folder with the TFLite model. It contains the JSON representation of the [metadata](https://www.tensorflow.org/lite/convert/metadata) bundled inside the TensorFlow Lite model. Model metadata helps the TFLite Task Library know what the model does and how to pre-process/post-process data for the model. You don't need to download the `model.json` file as it is only for informational purpose and its content is already inside the TFLite file.*
*Note 3: If you train a text classification model using MobileBERT or BERT-Base architecture, you will need to use [BertNLClassifier API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_nl_classifier) instead to integrate the trained model into a mobile app.*The following sections walk through the example step by step to show more details.## Choose a model architecture for Text Classifier
Each `model_spec` object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports [MobileBERT](https://arxiv.org/pdf/2004.02984.pdf), averaging word embeddings and [BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) models.
| Supported Model | Name of model_spec | Model Description | Model size |
|--------------------------|-------------------------|-----------------------------------------------------------------------------------------------------------------------|---------------------------------------------|
| Averaging Word Embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. | <1MB |
| MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. | 25MB w/ quantization 100MB w/o quantization |
| BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. | 300MB |
In the quick start, we have used the average word embedding model. Let's switch to [MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) to train a model with higher accuracy.<jupyter_code>mb_spec = model_spec.get('mobilebert_classifier')<jupyter_output><empty_output><jupyter_text>## Load training data
You can upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.
If you prefer not to upload your dataset to the cloud, you can also locally run the library by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).To keep it simple, we will reuse the SST-2 dataset downloaded earlier. Let's use the `TestClassifierDataLoader.from_csv` method to load the data.
Note that since we have changed the model architecture, we need to reload the training and test datasets to apply the new preprocessing logic.<jupyter_code>train_data = TextClassifierDataLoader.from_csv(
filename='train.csv',
text_column='sentence',
label_column='label',
model_spec=mb_spec,
is_training=True)
test_data = TextClassifierDataLoader.from_csv(
filename='dev.csv',
text_column='sentence',
label_column='label',
model_spec=mb_spec,
    is_training=False)<jupyter_output><empty_output><jupyter_text>The Model Maker library also supports the `from_folder()` method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The `class_labels` parameter is used to specify which of the subfolders to load.## Train a TensorFlow Model
Train a text classification model using the training data.
*Note: As MobileBERT is a complex model, each training epoch will take about 10 minutes on a Colab GPU. Please make sure that you are using a GPU runtime.*<jupyter_code>model = text_classifier.create(train_data, model_spec=mb_spec, epochs=3)
Evaluate the model that we have just trained using the test data and measure the loss and accuracy value.<jupyter_code>loss, acc = model.evaluate(test_data)<jupyter_output><empty_output><jupyter_text>## Quantize the model
In many on-device ML application, the model size is an important factor. Therefore, it is recommended that you apply quantize the model to make it smaller and potentially run faster. Model Maker automatically applies the recommended quantization scheme for each model architecture but you can customize the quantization config as below.<jupyter_code>config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config.experimental_new_quantizer = True<jupyter_output><empty_output><jupyter_text>## Export as a TensorFlow Lite model
Convert the trained model to TensorFlow Lite model format with [metadata](https://www.tensorflow.org/lite/convert/metadata) so that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is `model.tflite`.<jupyter_code>model.export(export_dir='mobilebert/', quantization_config=config)<jupyter_output><empty_output><jupyter_text>The TensorFlow Lite model file can be integrated in a mobile app using the [BertNLClassifier API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_nl_classifier) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview). Please note that this is **different** from the `NLClassifier` API used to integrate the text classification trained with the average word vector model architecture.The export formats can be one or a list of the following:
* `ExportFormat.TFLITE`
* `ExportFormat.LABEL`
* `ExportFormat.VOCAB`
* `ExportFormat.SAVED_MODEL`
By default, it exports only the TensorFlow Lite model file containing the model metadata. You can also choose to export other files related to the model for better examination. For instance, exporting only the label file and vocab file as follows:<jupyter_code>model.export(export_dir='mobilebert/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])<jupyter_output><empty_output><jupyter_text>You can evaluate the TFLite model with `evaluate_tflite` method to measure its accuracy. Converting the trained TensorFlow model to TFLite format and apply quantization can affect its accuracy so it is recommended to evaluate the TFLite model accuracy before deployment.<jupyter_code>accuracy = model.evaluate_tflite('mobilebert/model.tflite', test_data)
print('TFLite model accuracy: ', accuracy)<jupyter_output><empty_output><jupyter_text>## Advanced Usage
The `create` function is the driver function that the Model Maker library uses to create models. The `model_spec` parameter defines the model specification. The `AverageWordVecModelSpec` and `BertClassifierModelSpec` classes are currently supported. The `create` function consists of the following steps:
1. Creates the model for the text classifier according to `model_spec`.
2. Trains the classifier model. The default epochs and the default batch size are set by the `default_training_epochs` and `default_batch_size` variables in the `model_spec` object.
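For reference, here is a minimal sketch of what those two steps amount to in a single call. The `default_training_epochs` and `default_batch_size` attribute names follow the step above, and whether `create` accepts `batch_size` directly depends on your Model Maker version, so treat this as an assumption rather than the tutorial's exact API:

```python
# Hedged sketch, not from the original tutorial.
spec = model_spec.AverageWordVecModelSpec()

# Steps 1 and 2 happen inside this one call; defaults come from the spec unless overridden.
model = text_classifier.create(
    train_data,
    model_spec=spec,
    epochs=spec.default_training_epochs,   # assumed attribute name, per the step above
    batch_size=spec.default_batch_size,    # assumed attribute name, per the step above
)
```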
This section covers advanced usage topics like adjusting the model and the training hyperparameters.### Customize the MobileBERT model hyperparameters
The model parameters you can adjust are:
* `seq_len`: Length of the sequence to feed into the model.
* `initializer_range`: The standard deviation of the `truncated_normal_initializer` for initializing all weight matrices.
* `trainable`: Boolean that specifies whether the pre-trained layer is trainable.
The training pipeline parameters you can adjust are:
* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
* `dropout_rate`: The dropout rate.
* `learning_rate`: The initial learning rate for the Adam optimizer.
* `tpu`: TPU address to connect to.
For instance, you can set the `seq_len=256` (default is 128). This allows the model to classify longer text.<jupyter_code>new_model_spec = model_spec.get('mobilebert_classifier')
new_model_spec.seq_len = 256<jupyter_output><empty_output><jupyter_text>### Customize the average word embedding model hyperparameters
You can adjust the model infrastructure like the `wordvec_dim` and the `seq_len` variables in the `AverageWordVecModelSpec` class.
For example, you can train the model with a larger value of `wordvec_dim`. Note that you must construct a new `model_spec` if you modify the model.<jupyter_code>new_model_spec = model_spec.AverageWordVecModelSpec(wordvec_dim=32)<jupyter_output><empty_output><jupyter_text>Get the preprocessed data.<jupyter_code>new_train_data = TextClassifierDataLoader.from_csv(
filename='train.csv',
text_column='sentence',
label_column='label',
model_spec=new_model_spec,
is_training=True)<jupyter_output><empty_output><jupyter_text>Train the new model.<jupyter_code>model = text_classifier.create(new_train_data, model_spec=new_model_spec)<jupyter_output><empty_output><jupyter_text>### Tune the training hyperparameters
You can also tune the training hyperparameters like `epochs` and `batch_size` that affect the model accuracy. For instance,
* `epochs`: more epochs could achieve better accuracy, but may lead to overfitting.
* `batch_size`: the number of samples to use in one training step.
For example, you can train with more epochs.<jupyter_code>model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20)<jupyter_output><empty_output><jupyter_text>Evaluate the newly retrained model with 20 training epochs.<jupyter_code>new_test_data = TextClassifierDataLoader.from_csv(
filename='dev.csv',
text_column='sentence',
label_column='label',
model_spec=new_model_spec,
is_training=False)
loss, accuracy = model.evaluate(new_test_data)<jupyter_output><empty_output><jupyter_text>### Change the Model Architecture
You can change the model by changing the `model_spec`. The following shows how to change to BERT-Base model.
Change the `model_spec` to BERT-Base model for the text classifier.<jupyter_code>spec = model_spec.get('bert_classifier')<jupyter_output><empty_output>
|
non_permissive
|
/site/en-snapshot/lite/tutorials/model_maker_text_classification.ipynb
|
April53/docs-l10n
| 26 |
<jupyter_start><jupyter_text># Building a Red-Black TreeIn this notebook, we'll walk through how you might build a red-black tree. Remember, we need to follow the red-black tree rules, on top of the binary search tree rules. Our new rules are:
* All nodes have a color
* All nodes have two children (use NULL nodes)
* All NULL nodes are colored black
* If a node is red, its children must be black
* The root node must be black (optional)
* We'll go ahead and implement without this for now
* Every path to its descendant NULL nodes must contain the same number of black nodes### Sketch
Similar to our binary search tree implementation, we will define a class for nodes and a class for the tree itself. The `Node` class will need a couple new attributes. It is no longer enough to only know the children, because we need to ask questions during insertion like, "what color is my parent's sibling?". So we will add a parent link as well as the color.<jupyter_code>class Node(object):
def __init__(self, value, parent, color):
self.value = value
self.left = None
self.right = None
self.parent = parent
self.color = color<jupyter_output><empty_output><jupyter_text>For the tree, we can start with a mostly empty implementation. But we know we want to always insert nodes with color red, so let's fill in the constructor to insert the root node.<jupyter_code>class RedBlackTree(object):
def __init__(self, root):
self.root = Node(root, None, 'red')
def insert(self, new_val):
pass
def search(self, find_val):
return False<jupyter_output><empty_output><jupyter_text>### Insertion
Now how would we design our `insert` implementation? We know from our experience with BSTs how most of it will work. We can re-use that portion and augment it to assign colors and parents.<jupyter_code>class RedBlackTree(object):
def __init__(self, root):
self.root = Node(root, None, 'red')
def insert(self, new_val):
self.insert_helper(self.root, new_val)
def insert_helper(self, current, new_val):
if current.value < new_val:
if current.right:
                self.insert_helper(current.right, new_val)
else:
current.right = Node(new_val, current, 'red')
else:
if current.left:
                self.insert_helper(current.left, new_val)
else:
current.left = Node(new_val, current, 'red')<jupyter_output><empty_output><jupyter_text>### Rotations
At this point we are only making a BST, with extra attributes. To make this a red-black tree, we need to add the extra sauce that makes red-black trees awesome. We will sketch out some more code for rebalancing the tree based on the case, and fill them in one at a time.
First, we need to change our `insert_helper` to return the node that was inserted so we can interrogate it when rebalancing.<jupyter_code>class RedBlackTree(object):
def __init__(self, root):
self.root = Node(root, None, 'red')
def insert(self, new_val):
new_node = self.insert_helper(self.root, new_val)
self.rebalance(new_node)
def insert_helper(self, current, new_val):
if current.value < new_val:
if current.right:
return self.insert_helper(current.right, new_val)
else:
current.right = Node(new_val, current, 'red')
return current.right
else:
if current.left:
return self.insert_helper(current.left, new_val)
else:
current.left = Node(new_val, current, 'red')
return current.left
def rebalance(self, node):
pass<jupyter_output><empty_output><jupyter_text>#### Case 1
_We have just inserted the root node_
If we're enforcing that the root must be black, we change its color. We are not enforcing this, so we are all done! Four to go.<jupyter_code>def rebalance(node):
if node.parent == None:
return<jupyter_output><empty_output><jupyter_text>#### Case 2
_We inserted under a black parent node_
Thinking through this, we can observe the following: We inserted a red node beneath a black node. The new children (the NULL nodes) are black by definition, and our red node _replaced_ a black NULL node. So the number of black nodes for any paths from parents is unchanged. Nothing to do in this case, either.<jupyter_code>def rebalance(node):
# Case 1
if node.parent == None:
return
# Case 2
if node.parent.color == 'black':
return<jupyter_output><empty_output><jupyter_text>#### Case 3
_The parent and its sibling of the newly inserted node are both red_
Okay, we're done with free cases. In this specific case, we can flip the color of the parent and its sibling. We know they're both red in this case, which means the grandparent is black. It will also need to flip. At that point we will have a freshly painted red node at the grandparent. At that point, we need to do the same evaluation! If the grandparent turns red, and its sibling is also red, that's case 3 again. Guess what that means! Time for more recursion.
We will define the `grandparent` and `pibling` (a parent's sibling) methods later, for now let's focus on the core logic.<jupyter_code>def rebalance(self, node):
# Case 1
if node.parent == None:
return
# Case 2
if node.parent.color == 'black':
return
# From here, we know parent's color is red
# Case 3
if pibling(node).color == 'red':
pibling(node).color = 'black'
node.parent.color = 'black'
grandparent(node).color = 'red'
self.rebalance(grandparent(node))<jupyter_output><empty_output><jupyter_text>#### Case 4
_The newly inserted node has a red parent, but that parent has a black sibling_
These last cases get more interesting. The criteria above actually govern case 4 and 5. What separates them is if the newly inserted node is on the _inside_ or the _outside_ of the sub tree. We define _inside_ and _outside_ like this:
* inside
* _EITHER_
* the new node is a left child of its parent, but its parent is a right child, or
* the new node is a right child of its parent, but its parent is a left child
* outside
* the opposite of inside, the new node and its parent are on the same side of the grandparent
Case 4 is to handle the _inside_ scenario. In this case, we need to rotate. As we will see, this will not finish balancing the tree, but will now qualify for Case 5.
We rotate against the inside-ness of the new node. If the new node qualifies for case 4, it needs to move into its parent's spot. If it's on the right of the parent, that's a rotate left. If it's on the left of the parent, that's a rotate right.<jupyter_code>def rebalance(self, node):
# ... omitted cases 1-3 ...
# Case 4
gp = grandparent(node)
if gp.left and node == gp.left.right:
self.rotate_left(parent(node))
elif gp.right and node == gp.right.left:
self.rotate_right(parent(node))
# TODO: Case 5<jupyter_output><empty_output><jupyter_text>To implement `rotate_left` and `rotate_right`, think about what we want to accomplish. We want to take one of the node's children and have it take the place of its parent. The given node will move down to a child of the newly parental node.<jupyter_code>def rotate_left(self, node):
# Save off the parent of the sub-tree we're rotating
p = node.parent
node_moving_up = node.right
# After 'node' moves up, the right child will now be a left child
node.right = node_moving_up.left
# 'node' moves down, to being a left child
node_moving_up.left = node
node.parent = node_moving_up
# Now we need to connect to the sub-tree's parent
# 'node' may have been the root
if p != None:
if node == p.left:
p.left = node_moving_up
else:
p.right = node_moving_up
node_moving_up.parent = p
def rotate_right(self, node):
p = node.parent
node_moving_up = node.left
node.left = node_moving_up.right
node_moving_up.right = node
node.parent = node_moving_up
# Now we need to connect to the sub-tree's parent
if p != None:
if node == p.left:
p.left = node_moving_up
else:
p.right = node_moving_up
node_moving_up.parent = p
<jupyter_output><empty_output><jupyter_text>#### Case 5
Now that case 4 is resolved, or if we didn't qualify for case 4 and have an outside sub-tree already, we need to rotate again. If our new node is a left child of a left child, we rotate right. If our new node is a right of a right, we rotate left. This is done on the grandparent node.
But after this rotation, our colors will be off. Remember that for cases 3, 4, and 5, the parent of the new node is red. But we will have rotated a red node with a red child up, which violates our rule of all red nodes having two black children. So after rotating, we switch the colors of the (original) parent and grandparent nodes.<jupyter_code>def rebalance(self, node):
# ... omitted cases 1-3 ...
# Case 4
gp = grandparent(node)
if node == gp.left.right:
self.rotate_left(node.parent)
elif node == gp.right.left:
self.rotate_right(node.parent)
# Case 5
p = node.parent
gp = p.parent
if node == p.left:
self.rotate_right(gp)
else:
self.rotate_left(gp)
p.color = 'black'
gp.color = 'red'
<jupyter_output><empty_output><jupyter_text>### Result
Combining all of our efforts we have the following.<jupyter_code>class Node(object):
def __init__(self, value, parent, color):
self.value = value
self.left = None
self.right = None
self.parent = parent
self.color = color
def __repr__(self):
print_color = 'R' if self.color == 'red' else 'B'
return '%d%s' % (self.value, print_color)
def grandparent(node):
if node.parent == None:
return None
return node.parent.parent
# Helper for finding the node's parent's sibling
def pibling(node):
p = node.parent
gp = grandparent(node)
if gp == None:
return None
if p == gp.left:
return gp.right
if p == gp.right:
return gp.left
class RedBlackTree(object):
def __init__(self, root):
self.root = Node(root, None, 'red')
def __iter__(self):
yield from self.root.__iter__()
def insert(self, new_val):
new_node = self.insert_helper(self.root, new_val)
self.rebalance(new_node)
def insert_helper(self, current, new_val):
if current.value < new_val:
if current.right:
return self.insert_helper(current.right, new_val)
else:
current.right = Node(new_val, current, 'red')
return current.right
else:
if current.left:
return self.insert_helper(current.left, new_val)
else:
current.left = Node(new_val, current, 'red')
return current.left
def rebalance(self, node):
# Case 1
if node.parent == None:
return
# Case 2
if node.parent.color == 'black':
return
# Case 3
if pibling(node) and pibling(node).color == 'red':
pibling(node).color = 'black'
node.parent.color = 'black'
grandparent(node).color = 'red'
return self.rebalance(grandparent(node))
gp = grandparent(node)
# Small change, if there is no grandparent, cases 4 and 5
# won't apply
if gp == None:
return
# Case 4
if gp.left and node == gp.left.right:
self.rotate_left(node.parent)
node = node.left
elif gp.right and node == gp.right.left:
self.rotate_right(node.parent)
node = node.right
# Case 5
p = node.parent
gp = p.parent
if node == p.left:
self.rotate_right(gp)
else:
self.rotate_left(gp)
p.color = 'black'
gp.color = 'red'
def rotate_left(self, node):
# Save off the parent of the sub-tree we're rotating
p = node.parent
node_moving_up = node.right
# After 'node' moves up, the right child will now be a left child
node.right = node_moving_up.left
# 'node' moves down, to being a left child
node_moving_up.left = node
node.parent = node_moving_up
# Now we need to connect to the sub-tree's parent
# 'node' may have been the root
if p != None:
if node == p.left:
p.left = node_moving_up
else:
p.right = node_moving_up
node_moving_up.parent = p
def rotate_right(self, node):
p = node.parent
node_moving_up = node.left
node.left = node_moving_up.right
node_moving_up.right = node
node.parent = node_moving_up
# Now we need to connect to the sub-tree's parent
if p != None:
if node == p.left:
p.left = node_moving_up
else:
p.right = node_moving_up
node_moving_up.parent = p<jupyter_output><empty_output><jupyter_text>### Testing
We've written a lot of code. Let's see how the tree mutates as we add nodes.
First, we'll need a way to visualize the tree. The below will nest, but remember the first child is always the left child.<jupyter_code>def print_tree(node, level=0):
print(' ' * (level - 1) + '+--' * (level > 0) + '%s' % node)
if node.left:
print_tree(node.left, level + 1)
if node.right:
print_tree(node.right, level + 1)<jupyter_output><empty_output><jupyter_text>For cases 1 and 2, we can insert the first few nodes and see the tree behaves the same as a BST.<jupyter_code>tree = RedBlackTree(9)
tree.insert(6)
tree.insert(19)
print_tree(tree.root)<jupyter_output>9R
+--6R
+--19R
<jupyter_text>Inserting 13 should flip 6 and 19 to black, as it hits our Case 3 logic.<jupyter_code>tree.insert(13)
print_tree(tree.root)<jupyter_output>9R
+--6B
+--19B
+--13R
<jupyter_text>Observe that 13 was inserted as red, and then because of Case 3, 6 and 19 flipped to black. 9 was also assigned red, but that was not a net change. Because we're not enforcing the optional "root is always black rule", this is acceptable.
Now let's cause some rotations. When we insert 16, it goes under 13, but 13 does not have a red sibling. 16 rotates into 13's spot, because it's currently an _inside_ sub-tree. Then 16 rotates into 19's spot. After these rotations, the ordering of the BST has been preserved _and_ our tree is balanced.<jupyter_code>tree.insert(16)
print_tree(tree.root)<jupyter_output>9R
+--6B
+--16R
+--13B
+--16R
+--19B
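One loose end from the walkthrough: the `search` method of `RedBlackTree` was left as a stub that always returns `False`. Colors play no part in lookups, so an ordinary BST descent is enough. A minimal sketch, not part of the original notebook:

```python
def search(self, find_val):
    current = self.root
    while current:
        if current.value == find_val:
            return True
        # Standard BST descent; equality was already handled above.
        current = current.left if find_val < current.value else current.right
    return False
```

Attached to `RedBlackTree` in place of the stub, `tree.search(13)` on the tree above should return `True`.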
|
no_license
|
/basic/RedBlackTreeWalkthrough.ipynb
|
ava6969/DataStructureAlgorithm
| 15 |
<jupyter_start><jupyter_text>+++
title = "Multiple correspondence analysis"
menu = "main"
weight = 3
toc = true
aliases = ["mca"]
+++## Resources- [Computation of Multiple Correspondence Analysis, with code in R](https://core.ac.uk/download/pdf/6591520.pdf)## Data
Multiple correspondence analysis is an extension of correspondence analysis. It should be used when you have more than two categorical variables. The idea is to one-hot encode a dataset, before applying correspondence analysis to it.
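As a tiny illustration of that idea (a toy sketch, not part of the original example; the column values below are made up):

```python
import pandas as pd

# A small categorical table...
toy = pd.DataFrame({'Color': ['YELLOW', 'PURPLE', 'YELLOW'],
                    'Size': ['SMALL', 'LARGE', 'LARGE']})

# ...and its one-hot (indicator) matrix, which is what correspondence analysis is then run on.
print(pd.get_dummies(toy))
```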
As an example, we're going to use the [balloons dataset](https://archive.ics.uci.edu/ml/machine-learning-databases/balloons/) taken from the [UCI datasets website](https://archive.ics.uci.edu/ml/datasets.html).<jupyter_code>import pandas as pd
dataset = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/balloons/adult+stretch.data')
dataset.columns = ['Color', 'Size', 'Action', 'Age', 'Inflated']
dataset.head()<jupyter_output><empty_output><jupyter_text>## Fitting<jupyter_code>import prince
mca = prince.MCA(
n_components=3,
n_iter=3,
copy=True,
check_input=True,
engine='sklearn',
random_state=42
)
mca = mca.fit(dataset)<jupyter_output><empty_output><jupyter_text>The way MCA works is that it one-hot encodes the dataset, and then fits a correspondence analysis. In case your dataset is already one-hot encoded, you can specify `one_hot=False` to skip this step.<jupyter_code>one_hot = pd.get_dummies(dataset)
mca_no_one_hot = prince.MCA(one_hot=False)
mca_no_one_hot = mca_no_one_hot.fit(one_hot)<jupyter_output><empty_output><jupyter_text>## Eigenvalues<jupyter_code>mca.eigenvalues_summary<jupyter_output><empty_output><jupyter_text>## Coordinates<jupyter_code>mca.row_coordinates(dataset).head()
mca.column_coordinates(dataset).head()<jupyter_output><empty_output><jupyter_text>## Visualization<jupyter_code>mca.plot(
dataset,
x_component=0,
y_component=1
)<jupyter_output><empty_output><jupyter_text>## Contributions<jupyter_code>mca.row_contributions_.head().style.format('{:.0%}')
mca.column_contributions_.head().style.format('{:.0%}')<jupyter_output><empty_output><jupyter_text>## Cosine similarities<jupyter_code>mca.row_cosine_similarities(dataset).head()
mca.column_cosine_similarities(dataset).head()<jupyter_output><empty_output>
|
permissive
|
/docs/content/mca.ipynb
|
MaxHalford/prince
| 8 |
<jupyter_start><jupyter_text>This time the data is not just a set of images — it is COVID-19-related data once again.
Unlike the data analyzed before, however, it is a csv file holding a much larger amount of information: string-valued fields plus the file names of the lung x-ray images.
With it, I plan to:
1. Predict survival from sex, age, location, etc. (a simple NN algorithm should be enough for this).
2. Since the data is not limited to a single country, predict survival by nationality.
3. Because the 'finding' column contains several pneumonia-like diseases besides COVID, build a DNN model that predicts it from the other fields.
4. Analyze the correlations between the various fields and practice visualizing them.
5. Since the final goal is to identify the disease ('finding'), train a CNN model on the given lung images.
6. I wanted to combine the image data and the csv data to predict the disease; I assumed an ensemble would do this, but it turned out not to be that simple.
7. For now this notebook records only the CNN training. Next I would like to predict survival from the csv fields alone (sex, age, location), but far fewer samples have the survival field recorded. Even so, I plan to check survival on the samples that do have it (using sex, age, nationality and COVID status) and practice visualizing the data to explore the correlations.<jupyter_code>batch_size = 16
epochs = 100
df
df['survival'].value_counts()
df['location'].value_counts()
df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 372 entries, 0 to 371
Data columns (total 29 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 patientid 372 non-null int64
1 offset 276 non-null float64
2 sex 329 non-null object
3 age 318 non-null float64
4 finding 372 non-null object
5 survival 116 non-null object
6 intubated 72 non-null object
7 intubation_present 77 non-null object
8 went_icu 35 non-null object
9 in_icu 7 non-null object
10 needed_supplemental_O2 12 non-null object
11 extubated 23 non-null object
12 temperature 35 non-null float64
13 pO2_saturation 44 non-null float64
 14  leukocyte_count              11 non-null     fl[...]<jupyter_text>Quite a few columns are stored as object dtype, and there is also a fair amount of missing data. Some of these fields are not needed for training, so they will need to be removed with .drop(), while important columns will need steps such as filling missing values with the median.<jupyter_code>df['folder'].value_counts()
covid_dataset_path = 'C:/CoronaChest'
dataset_path = './dataset'
# construct the path to the metadata CSV file and load it
csvPath = os.path.sep.join([covid_dataset_path, "metadata.csv"])
df = pd.read_csv(csvPath)
df
x_data = []
for (i, row) in df.iterrows():
imagePath = os.path.sep.join([covid_dataset_path, "images", row["filename"]])
if not os.path.exists(imagePath):
continue
x_data.append(imagePath)
import os
new_size = 128
x_image_data = []
y_image_data = []
for k in range(len(x_data)):
i = x_data[k]
if os.path.exists(i):
img_data = cv2.imread(i, cv2.IMREAD_COLOR)
img_data = cv2.resize(img_data, (new_size, new_size))
x_image_data.append(img_data)
y_image_data.append(df['finding'][k])
for i in range(len(y_image_data)):
if y_image_data[i] == 'COVID-19, ARDS':
y_image_data[i] = 'COVID-19'
import random
def visual_data():
choosen = random.sample([int(i) for i in range(351)], 12)
plt_index = 1
plt.figure(figsize = (12,12))
for i in choosen:
plt.subplot(3,4,plt_index)
plt.imshow(x_image_data[i])
plt.title(y_image_data[i])
plt_index += 1
visual_data()
y_image_dict = dict()
y_image_dict['COVID-19'] = 0
y_image_dict['SARS'] = 0
y_image_dict['Pneumocystis'] = 0
y_image_dict['Streptococcus'] = 0
y_image_dict['ARDS'] = 0
y_image_dict['No Finding'] = 0
y_image_dict['Chlamydophila'] = 0
y_image_dict['E.Coli'] = 0
y_image_dict['Klebsiella'] = 0
y_image_dict['Legionella'] = 0
for i in y_image_data:
y_image_dict[i] += 1
y_image_dict
<jupyter_output><empty_output><jupyter_text>Looking at the counts, there are several diseases besides COVID-19 and SARS. Classifying all of them would not be very meaningful, and the class sizes differ far too much for training to work well, so everything except the COVID-19 data will be relabeled as SAFE.
In the end there are only 2 classes, so the loss in the model.compile step needs to be set up for binary classification (e.g. 'binary_crossentropy').<jupyter_code>for i in range(len(y_image_data)):
if y_image_data[i] != 'COVID-19':
        y_image_data[i] = 'SAFE'<jupyter_output><empty_output><jupyter_text>Reloading the images above confirmed that the labels were changed correctly. Since scrolling back up to display images each time felt cumbersome, I defined the visual_data() function. I obtained the directory of each lung image with os; honestly, downloading the files and getting them into an analyzable form is the hardest part of this kind of work. I learned a lot in the process, especially about os, one of Python's ways of reading data in.
After that, I reshaped x_image_data and y_image_data.<jupyter_code>x_image_data = np.array(x_image_data, dtype = 'float32').reshape(-1, new_size, new_size, 3)
y_image_data = np.asarray(y_image_data).reshape(351)
x_image_data.shape[0]
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
index = np.arange(x_image_data.shape[0])
np.random.shuffle(index)
x_image_data = x_image_data[index]
y_image_data = y_image_data[index]
y_image_cat = label_encoder.fit_transform(y_image_data)
(x_image_train, x_image_test, y_image_train, y_image_test) = train_test_split(x_image_data, y_image_cat, test_size=0.2, random_state=None, shuffle= False, stratify=None)
(x_image_train, x_image_val , y_image_train, y_image_val ) = train_test_split(x_image_train, y_image_train, test_size=0.25, random_state=None, shuffle = False, stratify=None)
y_image_cat<jupyter_output><empty_output><jupyter_text>With the image data and labels finished and their counts matching, it is time to move on to the image preprocessing needed for training, plus some visualization.
Since this is again an image analysis and training problem, a CNN network should work well.<jupyter_code>from tensorflow.keras.preprocessing.image import ImageDataGenerator
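# Augment only the training images; the test and validation generators below only rescale.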
train_image_generator = ImageDataGenerator(
rescale = 1./255,
rotation_range = 10,
zoom_range = 0.3,
shear_range = 0.5,
width_shift_range = 0.3,
height_shift_range = 0.3,
horizontal_flip = True,
vertical_flip = False
)
test_image_generator = ImageDataGenerator(
rescale = 1./255
)
val_image_generator = ImageDataGenerator(
rescale = 1./255
)
train_image_generator.fit(x_image_train)
train_data = train_image_generator.flow(
x_image_train, y_image_train, batch_size = batch_size
)
test_data = test_image_generator.flow(
x_image_test, y_image_test, batch_size = batch_size
)
val_data = val_image_generator.flow(
x_image_val, y_image_val, batch_size = batch_size
)
    <jupyter_output><empty_output><jupyter_text>There was a part of ImageDataGenerator I had not fully understood: it turns out there are several ways to call flow.
If the data to transform is already a tensor of image data, as it is here, and the configured ImageDataGenerator is named generator, you call generator.flow(x_data, y_data, batch_size).
If the images to transform live in a directory, you use generator.flow_from_directory(directory, target_size = (,), color_mode = 'rgb', batch_size = BATCH_SIZE, etc.).
Finally, if the data sits in a pandas DataFrame, you use generator.flow_from_dataframe(dataframe, directory = '', target_size = (,), batch_size = BATCH_SIZE, class_mode = '', etc.).<jupyter_code>model = tf.keras.Sequential()
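# CNN sketch: three Conv2D + MaxPool blocks, then Flatten -> BatchNormalization -> Dropout -> two ReLU Dense layers and a 2-unit sigmoid output.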
model.add(tf.keras.layers.Conv2D(filters = 32, input_shape = (128,128,3), kernel_size = 3, activation = 'relu', padding='same'))
model.add(tf.keras.layers.MaxPool2D(pool_size = (2,2), strides = (2,2)))
model.add(tf.keras.layers.Conv2D(filters = 64, kernel_size = 3, activation = 'relu', padding='same'))
model.add(tf.keras.layers.MaxPool2D(pool_size = (2,2), strides = (2,2)))
model.add(tf.keras.layers.Conv2D(filters =128, kernel_size = 3, activation = 'relu', padding='same'))
model.add(tf.keras.layers.MaxPool2D(pool_size = (2,2), strides = (2,2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dropout(rate = 0.5))
model.add(tf.keras.layers.Dense(units = 512, activation = 'relu'))
model.add(tf.keras.layers.Dense(units = 128, activation = 'relu'))
model.add(tf.keras.layers.Dense(units = 2, activation = 'sigmoid'))
model.summary()
early_stop = tf.keras.callbacks.EarlyStopping(monitor = 'val_loss', patience = 100)
reduce_on = tf.keras.callbacks.ReduceLROnPlateau(monitor = 'val_accuracy', patience = 10, verbose = 1, mode = 'max', min_lr = 0.000001)
adam_optimizer = tf.keras.optimizers.Adam(lr = 0.00001)
model.compile(
optimizer = adam_optimizer,
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy']
)
lasthistory = model.fit(
train_data, steps_per_epoch = len(x_image_train)//batch_size, epochs = 30, validation_data = val_data, callbacks = [early_stop,reduce_on]
)
model.evaluate(test_data)
plt.figure(figsize = (12,4))
plt.subplot(1,2,2)
plt.plot(lasthistory.history['loss'], 'g--', label = 'loss')
plt.plot(lasthistory.history['val_loss'], 'y--', label = 'val_loss')
plt.plot(lasthistory.history['accuracy'], 'b-', label = 'accuracy')
plt.plot(lasthistory.history['val_accuracy'], 'r-', label = 'val_accuracy')
plt.xlabel('Epoch')
plt.show()<jupyter_output>5/5 [==============================] - 0s 85ms/step - loss: 0.6309 - accuracy: 0.7746
<jupyter_text>Visualized like this, the results look quite reasonable.
The loss keeps falling and the accuracy keeps rising, though that may partly be because the number of epochs was cut from 100 to 30.
Either way, the accuracy on test_data came out fairly high, at 77.46%.<jupyter_code>predict_data = model.predict(x_image_test/255.0)
predict_data.reshape(142,)
for i in range(71):
for j in range(2):
if predict_data[i][j] < 0.5:predict_data[i][j] = 0
else:predict_data[i][j] = 1
predict_datda = np.ravel(predict_data,order= 'C')
early_stop = tf.keras.callbacks.EarlyStopping(monitor = 'val_loss', patience = 100)
rmsprop_optimizer = tf.keras.optimizers.RMSprop(learning_rate = 0.001)
model.compile(
optimizer = rmsprop_optimizer,
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy']
)
history = model.fit(
train_data, steps_per_epoch = len(x_image_train)//batch_size, epochs = epochs, validation_data = val_data, callbacks = [early_stop]
)
model.evaluate(test_data)<jupyter_output>5/5 [==============================] - 0s 58ms/step - loss: 0.5347 - accuracy: 0.8169
<jupyter_text>An accuracy of around 81.69% is reasonably high.
Still, to push the accuracy a little further, I will add one more callback, ReduceLROnPlateau, which adjusts the learning_rate when the monitored value stops improving.
<jupyter_code>reduce_on = tf.keras.callbacks.ReduceLROnPlateau(monitor = 'val_loss', patience = 10, verbose = 1, mode = 'min', min_lr = 0.000001)
newhistory = model.fit(
train_data, steps_per_epoch = len(x_image_train)//batch_size, epochs = epochs, validation_data = val_data, callbacks = [early_stop,reduce_on]
)
model.evaluate(test_data)<jupyter_output>5/5 [==============================] - 0s 87ms/step - loss: 0.4934 - accuracy: 0.8169
<jupyter_text>In the end the loss went down, but the accuracy did not change much.
Now let's visualize the predictions and the training history.<jupyter_code>predict_label = model.predict(x_image_test/255.0)
y_image_test
plt.figure(figsize = (9,9))
plt_index = 1
choice = np.random.choice([int(i) for i in range(70)], 9, replace = False)
for i in choice:
plt.imshow(x_image_data[i])
plt.subplot(3,3,plt_index)
a, b = y_image_test[i], int(predict_data[i][1])
if a == b:
plt.title('%s : %s'%(y_image_test[i], int(predict_data[i][1])), color = 'blue')
else:
plt.title('%s : %s'%(a,b), color = 'red')
plt_index += 1
plt.figure(figsize = (12,12))
plt.subplot(1,2,1)
plt.plot(history.history['loss'], 'g-', label = 'loss')
plt.plot(history.history['val_loss'], 'r--', label = 'val_loss_1')
plt.plot(newhistory.history['val_loss'], 'b-.', label = 'val_loss_2')
plt.xlabel('Epoch')
plt.ylim(0.4,0.6)
plt.show()
plt.figure(figsize = (12,4))
plt.subplot(1,2,2)
plt.plot(history.history['accuracy'], 'g-', label = 'accuracy')
plt.plot(history.history['val_accuracy'], 'r--', label = 'val_accuracy_1')
plt.plot(newhistory.history['val_accuracy'], 'b-.', label = 'val_accuracy_2')
plt.xlabel('Epoch')
plt.show()
<jupyter_output><empty_output>
|
no_license
|
/Data Analysis with Kaggle DataFrame/Data_Analysis_Proj4(Kaggle_Covid-19_dataset)-CNN (3).ipynb
|
penguin1109/Machine-Learning-via-Tensorflow2.0
| 9 |
<jupyter_start><jupyter_text>
# Lab 3.1.1
# *Data Wrangling and Munging with Pandas*## Part 1: Wrangling DataThe term "data wrangling" is analogous to capturing wild horses and getting them into a fenced area; the horses are data and the fencing is your computer. The more common data wrangling tasks include:
- reading flat files
- reading Excel files
- downloading from web pages
- csv
- html
- json<jupyter_code>import numpy as np
import pandas as pd<jupyter_output><empty_output><jupyter_text>*It is good practice to display the library version numbers for future reference:*<jupyter_code>print('Numpy: ', np.__version__)
print('Pandas: ', pd.__version__)<jupyter_output>Numpy: 1.18.1
Pandas: 0.24.2
<jupyter_text>### CSV FilesBelow are three attempts to load the file "bikeshare.csv" into a DataFrame named `bikes`. Why are they wrong?<jupyter_code># wrong:
bikes = pd.read_table('~/Desktop/bikeshare.csv', header = None)
print(bikes.head())
print()
# wrong:
bikes = pd.read_table('~/Desktop/bikeshare.csv', header = 1)
print(bikes.head())
print()
# wrong:
bikes = pd.read_table('~/Desktop/bikeshare.csv', header = 0)
print(bikes.head())<jupyter_output> 0
0 instant,dteday,season,yr,mnth,hr,holiday,weekd...
1 1,2011-01-01,1,0,1,0,0,6,0,1,0.24,0.2879,0.81,...
2 2,2011-01-01,1,0,1,1,0,6,0,1,0.22,0.2727,0.8,0...
3 3,2011-01-01,1,0,1,2,0,6,0,1,0.22,0.2727,0.8,0...
4 4,2011-01-01,1,0,1,3,0,6,0,1,0.24,0.2879,0.75,...
1,2011-01-01,1,0,1,0,0,6,0,1,0.24,0.2879,0.81,0,3,13,16
0 2,2011-01-01,1,0,1,1,0,6,0,1,0.22,0.2727,0.8,0...
1 3,2011-01-01,1,0,1,2,0,6,0,1,0.22,0.2727,0.8,0...
2 4,2011-01-01,1,0,1,3,0,6,0,1,0.24,0.2879,0.75,...
3 5,2011-01-01,1,0,1,4,0,6,0,1,0.24,0.2879,0.75,...
4 6,2011-01-01,1,0,1,5,0,6,0,2,0.24,0.2576,0.75,...
instant,dteday,season,yr,mnth,hr,holiday,weekday,workingday,weathersit,temp,atemp,hum,windspeed,casual,registered,cnt
0 1,2011-01-01,1,0,1,0,0,6,0,1,0.24,0.2879,0.81,...
1 2,2011-01-01,1,0,1,1,0,6,0,1,0.22,0.2727,0.8,0... [...]<jupyter_text>?:
ANSWER: Case 1 treats headings as just another data row. Case 2 treats the 1st data row as the column header. Case 3 gets the header right (row 0), but reads each row as a single column (Nb. the other two make that same mistake). Load the file "bikeshare.csv" into a DataFrame named `bikes`, and confirm that it was loaded properly:<jupyter_code>#ANSWER:
bikes = pd.read_csv('~/Desktop/bikeshare.csv', sep = ',', index_col = None)
bikes.head()
<jupyter_output><empty_output><jupyter_text>Note that we could have used `read.csv()` above. When is `read_table()` necessary??:
ANSWER: When `sep` is not the comma character, or we need fine control that `read.csv()` does not provide.Flat files can be full of surprises. Here are some issues to watch out for:
- separator character is something other than the comma
- ";", "|", and tab are popular
- newline character is something other than what the O/S expects
- Tip: Don't hard-code the character codes for carriage returns, linefeeds, etc. Use Python's built-in representation instead (e.g. Python translates "\n" to the newline character and "\t" to the tab character on any O/S).
- truncated lines
- if there are empty fields at the end of a line it is possible that their separators will be missing, resulting in a "jagged" file
- embedded commas or quotes
- a free-text field containing embedded commas may split into separate fields on input
- a free-text field containing embedded quotes may not parse correctly
- unescaped characters
- the "\" character indicates a control code to Python, which will break the I/O
- e.g. the substring "\u0123" will be interpreted as Unicode(0123) -- which may not be what the file creator intended
- these may need to be fixed by loading whole strings and then parsing into a new data frame
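For instance, several of the issues above map directly onto `read_csv` parameters — a hedged sketch (the file name and parameter values are made up for illustration):

```python
import pandas as pd

df = pd.read_csv(
    'messy_data.txt',
    sep='|',                     # separator other than the comma
    quotechar='"',               # keep embedded separators inside quoted fields together
    escapechar='\\',             # treat backslash as an escape rather than letting it break parsing
    na_values=['', 'NA', '?'],   # strings to read as missing values
)
```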
Tip: Most issues can be delth with by correctly specifying the parameters of the function you use to load the file. Read the doco before reading the data!### Reading Excel Files<jupyter_code>from pandas import ExcelFile # Nb. Need to install xlrd from conda (it does not automatically install with pandas)
df = pd.read_excel('~/Desktop/Iris.xls', sheet_name = 'Data')
df<jupyter_output><empty_output><jupyter_text>So, this file appears to have an embedded table of aggregates on the same sheet as the raw data (a naughty but common practice amongst analysts).It is usually better to load data correctly than to meddle with the source file or load it 'warts and all' and then try to parse it in code. The Pandas functions for reading files have parameters that provide the control we need. For ecxample, we could make multiple calls to `read_excel()`, using combinations of the `header`, `usecols`, `skiprows`, `nrows`, and `skipfooter` parameters to load one table at a time from a spreadsheet with multiple tables.Load the above file without the unwanted columns:<jupyter_code>#ANSWER
df_setosa = pd.read_excel('~/Desktop/Iris.xls', sheet_name = 'Data', usecols= range(0,5), skipfooter = 100)
df_setosa
df_versicolor = pd.read_excel('~/Desktop/Iris.xls', sheet_name = 'Data', usecols= range(0,5), header = 0, skiprows = range(1,51), skipfooter = 50)
df_versicolor
df_verginica = pd.read_excel('~/Desktop/Iris.xls', sheet_name = 'Data', usecols= range(0,5), header = 0, skiprows = range(1,101))
df_verginica<jupyter_output><empty_output><jupyter_text>### Importing Data Directly from the WebWe usually want to store a local copy of a data file that we download from the Web, but when data retention is not a priority it is convenient to download the data directly into our running Python environment.#### Importing Text Files from the WebThe web is the 'wild west' of data formats. However, we can usually expect good behaviour from files that are automatically generated by a service, such as the earthquake report:<jupyter_code>df = pd.read_csv('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_hour.csv')
df.head()<jupyter_output><empty_output><jupyter_text>#### Importing HTML Files from the Web
Working with unstructured HTML files relies heavily on library functions. This one, however, is well-structured:<jupyter_code>url = 'http://www.fdic.gov/bank/individual/failed/banklist.html'
df = pd.read_html(url)
df<jupyter_output><empty_output><jupyter_text>#### Importing XML Files from the Web
XML files are semi-structured, but you're at the mercy of the file creator. If every record has the same format it will be much easier, but practical applications often require a lot of custom code. Here is an example that includes a nice parser class: http://www.austintaylor.io/lxml/python/pandas/xml/dataframe/2016/07/08/convert-xml-to-pandas-dataframe/#### Importing JSON Files from the Web
Like XML, JSON files are semi-structured and may require work to capture the schema into a dataframe. Here is a simple example: <jupyter_code>url = 'https://raw.githubusercontent.com/chrisalbon/simulated_datasets/master/data.json'
# Load the first sheet of the JSON file into a data frame
df = pd.read_json(url, orient = 'columns')
df.head()<jupyter_output><empty_output><jupyter_text>## Part 2: Data MungingData munging is manipulating data to get it into a form that we can start running analyses on (which usually means getting the data into a DataFrame). Before we get to this stage, we may need to remove headers or footers, transpose columns to rows, split wide data tables into long ones, and so on. (Nb. Excel files can be particularly troublesome, because users can format their data in mixed, complex shapes.) Essentially, we need to follow Hadley Wickham's guidelines for tidy datasets (http://vita.had.co.nz/papers/tidy-data.html):
The end goal of the cleaning data process:
- each variable should be in one column
- each observation should comprise one row
- each type of observational unit should form one table
- include key columns for linking multiple tables
- the top row contains (sensible) variable names
- in general, save data as one file per table
### Dataset MorphologyOnce we have our dataset in a DataFrame (or Series, if our data is only 1-dimensional), we can start examining its size and content.How many rows and columns are in `bikes`?<jupyter_code>#ANSWER
rows, cols = bikes.shape
print(f'There are {rows} rows and {cols} columns')<jupyter_output>There are 17379 rows and 17 columns
<jupyter_text>What are the column names in `bikes`?<jupyter_code>#ANSWER
list(bikes.columns)<jupyter_output><empty_output><jupyter_text>What are the data types of these columns?<jupyter_code>#ANSWER
bikes.dtypes<jupyter_output><empty_output><jupyter_text>What is the (row) index for this DataFrame?<jupyter_code>#ANSWER
bikes.index<jupyter_output><empty_output><jupyter_text>https://www.dataquest.io/blog/python-json-tutorial/## Slicing and DicingIt is often preferable to refer to DataFrame columns by name, but there is more than one way to do this.
Do `bikes['season']` and `bikes[['season']]` give the same object? Demonstrate:<jupyter_code>#ANSWER
bikes['season']
bikes[['season']]<jupyter_output><empty_output><jupyter_text>How would we use object notation to show the first 4 rows of `atemp`?<jupyter_code>#ANSWER
bikes.loc[0:3,'atemp']<jupyter_output><empty_output><jupyter_text>Algorithms that loop over multiple columns often access DataFrame columns by index. However, none of the following work (try them out by uncommenting / removing the "#E: " ): <jupyter_code>bikes[[0]]
#E: bikes[0]
#E: bikes[0,0]
#E: bikes[[0,0]]<jupyter_output><empty_output><jupyter_text>What is the correct way to access the 1st row of the DataFrame by its index?<jupyter_code>#ANSWER
bikes.iloc[0, :]<jupyter_output><empty_output><jupyter_text>What is the correct way to access the 2nd column of the DataFrame by its index?<jupyter_code>#ANSWER
bikes.iloc[:,1]<jupyter_output><empty_output><jupyter_text>## Handling Missing ValuesWhat is the Pandas `isnull` function for? ?
ANSWER: It checks whether the identified cell/row/column has a 'null' valueWe can apply `isnull` to the `bikes` DataFrame to show the result for every element:<jupyter_code>bikes.isnull().head()<jupyter_output><empty_output><jupyter_text>However, we usually start at a higher level. How many nulls are in `bikes` altogether?<jupyter_code>#ANSWER
bikes.isnull().values.sum()<jupyter_output><empty_output><jupyter_text>If this result were nonzero we would next want to find out which columns contained nulls. How can this be done in one line of code?<jupyter_code>#ANSWER
bikes.isnull().values.any()<jupyter_output><empty_output><jupyter_text>What is the Numpy object `nan` used for? (Write a descriptive answer.)?
ANSWER: Marking a data point as invalid.Write (and verify) a function that performs scalar division with built-in handling of the edge case (i.e. return a value instead of just trapping the error):<jupyter_code>#ANSWER
def scal_div(x, y):
    '''Divide x by y; for the y == 0 edge case, return np.nan instead of raising'''
    if y == 0:
        # Returning a value (rather than raising) keeps the calling code running
        return np.nan
    return x / y
print(scal_div(10,2))
print(scal_div(10,0))<jupyter_output><empty_output><jupyter_text>Apply the Pandas `isna` function to the following data objects:<jupyter_code>x = 2.3
y = np.nan
print(x, y)
#ANSWER
pd.isna(x)
pd.isna(y)
array = np.array([[1, np.nan, 3], [4, 5, np.nan]])
print(array)
#ANSWER
pd.isna(array)<jupyter_output><empty_output><jupyter_text>How is the pandas I/O parameter `na_values` used?? ANSWER: To ID the character or value in the uploaded document that corresponds to a NA or NaN value## Data Profiling### Counts
When there are categorical variables in a dataset we will want to know how many possible values there are in each column. (Nb. If the dataset is a sample of a larger one, our sample may not capture all possible values of every categorical.)How many (different) seasons are in `bikes`?<jupyter_code>#ANSWER
bikes.season.unique()<jupyter_output><empty_output><jupyter_text>### RangesPrint the range of the `instant`, `dteday`, and `windspeed` columns: <jupyter_code>#ANSWER
instantrange = bikes.instant.max() - bikes.instant.min()
instantrange
startdate = bikes.dteday.min()
enddate = bikes.dteday.max()
print(bikes.dteday.min(), bikes.dteday.max())
windrange = bikes.windspeed.max() - bikes.windspeed.min()
windrange<jupyter_output><empty_output><jupyter_text>Compute and print the overall minimum and maximum of the numeric data columns:<jupyter_code>bikes_min, bikes_max = (min(bikes.min()), max(bikes.max()))
bikes_min, bikes_max
bikes.max()<jupyter_output><empty_output><jupyter_text>### QuantilesPandas makes computing quantiles easy. This is how to get the median of a Series:<jupyter_code>bikes['atemp'].quantile(0.5)<jupyter_output><empty_output><jupyter_text>Of course, the `quantiles` method can take a tuple as its argument. Compute the 10th, 25th, 50th, 75th, and 90th percentiles in one line of code: <jupyter_code>#ANSWER
bikes['atemp'].quantile((0.1, 0.25, 0.5, 0.75, 0.9))<jupyter_output><empty_output><jupyter_text>### Cuts
Sometimes we want to split the sample not by the quantiles of the distribution but by the range of the data. Let's take a closer look at `atemp`:<jupyter_code>type(bikes['atemp'])
bikes.sample(5)<jupyter_output><empty_output><jupyter_text>Suppose we decide to sort these values into 4 bins of equal width, but we want to apply the resulting groups to the entire DataFrame. Basically, we need to add a row label that indcates which bin each sample belongs in. Let's call this label "atemp_level", and use the `cut` method to populate it:<jupyter_code>atemp_level = pd.cut(bikes['atemp'], bins = 4)
atemp_level<jupyter_output><empty_output><jupyter_text>What is `atemp_level`?#ANSWER
A new row which contains the bins that each 'atemp' value lies within.Here is a random sample of `atemp_level`:<jupyter_code>atemp_level.sample(5) <jupyter_output><empty_output><jupyter_text>So, by default, `cut` produces labels that indicate the bin boundaries for each element in the series it was applied to. Usually, we will specify labels that are appropriate to the discretisation we are applying:<jupyter_code>atemp_level = pd.cut(bikes['atemp'], bins = 4, labels = ["cool", "mild", "warm", "hot"])
atemp_level.sample(5) <jupyter_output><empty_output><jupyter_text>Incorporate the new `atemp_level` column into the `bikes` DataFrame and use it to count the number of "mild" `atemp` entries in `season` 2:<jupyter_code>#ANSWER
bikes["atemp_level"] = atemp_level
bikes.head()
np.sum(bikes["atemp_level"] == 'mild')<jupyter_output><empty_output><jupyter_text>*Nb. The `atemp_level` variable we created is what the R language calls a "factor". Pandas has introduced a new data type called "category" that is similar to R's factors.*# Synthetic Data
Sometimes we may want to generate test data, or we may need to initalise a series, matrix, or data frame for input to an algorithm. Numpy has several methods we can use for this.Execute the following, then check the shape and content of each variable:<jupyter_code># Creating arrays with initial values
a = np.zeros((3))
b = np.ones((1,3))
c = np.random.randint(1,10,(2,3,4)) # randint(low, high, size)
d = np.arange(4)
e = np.array( [[1,2,3,4], [5,6,7,8]] )
# Cleaning Data
print(a.shape)
print(a)
print(b.shape)
print(b)
print(c.shape)
print(c)
print(d.shape)
print(d)
print(e.shape)
print(e)<jupyter_output>(3,)
[0. 0. 0.]
(1, 3)
[[1. 1. 1.]]
(2, 3, 4)
[[[4 2 5 8]
[9 9 2 8]
[1 6 5 3]]
[[2 4 1 7]
[5 2 2 4]
[1 8 6 6]]]
(4,)
[0 1 2 3]
(2, 4)
[[1 2 3 4]
[5 6 7 8]]
<jupyter_text>## Load Data
Load rock.csv and clean the dataset.<jupyter_code>rockdf = pd.read_csv('~/Desktop/rock.csv')
rockdf.head()<jupyter_output><empty_output><jupyter_text>## Check Column Names
Check column names and clean.<jupyter_code>rockdf.columns
rockdf.rename(columns = {"First?": "First", "Year?":"Year"}, inplace = True)
rockdf.head()<jupyter_output><empty_output><jupyter_text># Replace Null Values With 0
Check whether the 'release' column has any null values, and replace them with 0.<jupyter_code>rockdf['Release Year'].fillna(0, inplace = True)
rockdf.head()<jupyter_output><empty_output><jupyter_text># Check Datatypes of Dataset
Check datatypes of the dataset. Is there any column which should be int instead of object? Fix the column. <jupyter_code>#Check the column types
rockdf.dtypes
#Find the value which does not allow int conversion
rockdf.loc[rockdf['Release Year'] == 'SONGFACTS.COM']
#Replace the wrong value with a 0 (which can be converted to int)
rockdf["Release Year"].replace({'SONGFACTS.COM': '0'}, inplace=True)
#Convert column to int
introck = rockdf.astype({'Release Year': int})
introck.dtypes
#Recheck column types
rockdf.dtypes<jupyter_output><empty_output><jupyter_text># Check Min, Max of Each Column
Is there any illogical value in any column? How can we fix that?<jupyter_code>introck.describe()<jupyter_output><empty_output><jupyter_text># Write Some Functions## Write a function that will take a row of a DataFrame and print out the song, artist, and whether or not the release date is < 1970<jupyter_code>def rowrelease(df, rownum):
song = df.loc[rownum,'Song Clean']
artist = df.loc[rownum, 'ARTIST CLEAN']
reldate = df.loc[rownum, 'Release Year'] < 1970
print(f'It is {reldate} that {song}, by {artist} was released before 1970')
rowrelease(introck, 2)
introck.head()
introck.loc[1,'Song Clean']<jupyter_output><empty_output><jupyter_text>## Write a function that converts cells in a DataFrame to float and otherwise replaces them with np.nan<jupyter_code>#Don't know how to do this
def floatornan(df):
    '''Convert each cell to float where possible, otherwise replace it with np.nan'''
    def to_float(value):
        try:
            return float(value)
        except (TypeError, ValueError):
            return np.nan
    # applymap applies the converter element-wise across the whole DataFrame
    return df.applymap(to_float)
nanrock = floatornan(introck)
nanrock.dtypes<jupyter_output><empty_output><jupyter_text>## Apply these functions to your dataset<jupyter_code>#Couldn't work out the function<jupyter_output><empty_output>
|
no_license
|
/Ciaran/module 3/Lab 3.1.1 CMath.ipynb
|
MirunaSuresh/Jun23cohortDataScienceAndAI
| 42 |